PHP: download a file, limit the max speed, and calculate the download speed

I have written a script that lets a user download a file at a maximum speed that I set. However, when I allow an effectively 'unlimited' speed such as 10000 kB/s, ftell() behaves strangely: it reports progress as if the download were running at 10000 kB/s, which is not true, so I cannot maintain calculations in the database such as time remaining, current download speed, and so on.
The browser finishes the download after some time, but in the database the file is already marked as 'downloaded'. How can I make precise calculations even when I set an unlimited speed, so that the user downloads at the speed of their network and the database values reflect that network speed as well, rather than ftell(), which only depends on $download_rate?
Thanks in advance!
<?php
while (!feof($fopen)) {
    //echo fread($fopen, 4096);
    $this->get_allowed_speed_limit($download_rate);
    //$download_rate = 350;
    print fread($fopen, round($download_rate * 1024));
    sleep(1); // needed for the download speed limit

    if (connection_status() != 0 || connection_aborted()) {
        $bytes_transferred = ftell($fopen);
        if ($bytes_transferred < $bytes) {
            // CANCELLED
            $this->download_unsuccessfull($file_name);
        } else {
            // CANCELLED (but gets executed only on strange networks like eduroam in CZE)
            $this->download_unsuccessfull($file_name);
        }
        flush();
        die;
    } else {
        $progress = ftell($fopen) / $bytes * 100;
        if ($progress >= 100) {
            // DONE
            $this->download_successfull($file_name);
            flush();
        } else {
            // DOWNLOADING
            if (ftell($fopen) != 0) {
                $bytes_transferred = ftell($fopen);
                $time_end = microtime(true);
                $time = $time_end - $time_start;
                $dl_speed = floor(($bytes_transferred / $time) / 1000);
                // HERE THE CALCULATIONS ARE TOTALLY WRONG, BECAUSE IT ALL
                // DEPENDS ON THE INPUT OF $download_rate
                // (also: the current_speed column should get $dl_speed, not $bytes_transferred)
                mysqli_query($con, "UPDATE `download_meter` SET `current_speed` = '" . mysqli_real_escape_string($con, $dl_speed) . "'");
                $this->update_active_downloads($file_name, $bytes_transferred, $dl_speed);
            }
            flush();
        }
    }
    // Activate this for a delayed download.
    //flush();
    //sleep(1);
}
?>

Limiting download speed is up to your webserver. PHP is too high level. It knows nothing of the outgoing data.
Apache: https://stackoverflow.com/a/13355834/247372
Nginx: http://www.nginxtips.com/how-to-limit-nginx-download-speed/
The same goes for measuring: the webserver will know and might tell you somehow. Logs, unix socket, after-the-fact, I don't know. Those links will know.

How about (re)adding that sleep(1); to the WHILE loop? From what I can see, the script outputs the file almost all at once (as fast as it can), and there is nothing that pauses it, so it cannot actually limit the download speed.
That way you will know that each second you send just 64 kB (or whatever), and even though you can't be sure that the user can in fact receive this much data per second (whoa, so fast!), it could be a bit more precise than what you have in there now.
Or am I getting this wrong?
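To illustrate the point, here is a sketch of such a loop. $fopen, $download_rate (in kB/s), and the surrounding setup are assumed from the question's code, and keep in mind that flush() only guarantees PHP handed the bytes to the webserver, not that the client received them:
<?php
$time_start = microtime(true);
while (!feof($fopen)) {
    print fread($fopen, (int)($download_rate * 1024)); // one slice per iteration
    flush();                                           // hand the slice to the webserver
    sleep(1);                                          // caps output at ~$download_rate kB/s

    // Derive the speed from wall-clock time instead of raw ftell():
    $bytes_transferred = ftell($fopen);
    $elapsed = microtime(true) - $time_start;
    if ($elapsed > 0) {
        $dl_speed = floor($bytes_transferred / $elapsed / 1024); // kB/s, at most ~$download_rate
    }
}
?>
Because the loop sends a fixed amount per second, the rate you record here is the rate you actually allowed, instead of whatever ftell() jumps to when the whole file fits in the output buffer.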

Related

how to use fopen, fgets, fclose properly? It works fine, but after a while the count just goes down to a random number

It works fine, but after a while the count just goes down to a random
number. My guess is that my code cannot process multiple visits at a time.
Where the increment happens
Where it displays the count
<?php
$args_loveteam = array('child_of' => 474);
$loveteam_children = get_categories($args_loveteam);
if (in_category('loveteams', $post->ID)) {
    foreach ($loveteam_children as $loveteam_child) {
        $post_slug = $loveteam_child->slug;
        echo "<script>console.log('" . $post_slug . "');</script>";
        if (in_category($loveteam_child->name)) {
            /* counter */
            // open the file to read the saved hit count
            if ($loveteam_child->slug == "loveteam-mayward") {
                $datei = fopen($_SERVER['DOCUMENT_ROOT'] . "/wp-content/themes/inside-showbiz-Vfeb13.ph-updated/countlog-" . $post_slug . "-2.txt", "r");
            } else {
                $datei = fopen($_SERVER['DOCUMENT_ROOT'] . "/wp-content/themes/inside-showbiz-Vfeb13.ph-updated/countlog-" . $post_slug . ".txt", "r");
            }
            $count = fgets($datei, 1000);
            fclose($datei);
            $count = $count + 1;
            // open the file again to write the new hit count
            if ($loveteam_child->slug == "loveteam-mayward") {
                $datei = fopen($_SERVER['DOCUMENT_ROOT'] . "/wp-content/themes/inside-showbiz-Vfeb13.ph-updated/countlog-" . $post_slug . "-2.txt", "w");
            } else {
                $datei = fopen($_SERVER['DOCUMENT_ROOT'] . "/wp-content/themes/inside-showbiz-Vfeb13.ph-updated/countlog-" . $post_slug . ".txt", "w");
            }
            fwrite($datei, $count);
            fclose($datei);
        }
    }
}
?>
I would at least change your code to this:
foreach ($loveteam_children as $loveteam_child) {
    $post_slug = $loveteam_child->slug;
    echo "<script>console.log('" . $post_slug . "');</script>";
    if ($loveteam_child->slug == "loveteam-mayward") {
        // the "-2" suffix belongs to the mayward file, as in the original code
        $filename = "{$_SERVER['DOCUMENT_ROOT']}/wp-content/themes/inside-showbiz-Vfeb13.ph-updated/countlog-{$post_slug}-2.txt";
    } else {
        $filename = "{$_SERVER['DOCUMENT_ROOT']}/wp-content/themes/inside-showbiz-Vfeb13.ph-updated/countlog-{$post_slug}.txt";
    }
    $count = (int)file_get_contents($filename);
    // was file_get_contents() by mistake; file_put_contents() with LOCK_EX
    // serializes concurrent writers
    file_put_contents($filename, ++$count, LOCK_EX);
}
You could also try flock on the file to take a lock before modifying it. That way, if another process comes along, it has to wait on the first one. But file_put_contents with LOCK_EX works great for things like logging, where you may have many processes competing for the same file.
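A minimal sketch of the flock() variant, assuming the same $filename as in the snippet above:
<?php
// Read-increment-write under an exclusive lock so that concurrent hits
// cannot clobber each other; $filename is assumed from the code above.
$fp = fopen($filename, 'c+'); // create the file if missing, keep contents
if ($fp) {
    if (flock($fp, LOCK_EX)) { // blocks until the lock is ours
        $count = (int)stream_get_contents($fp);
        ftruncate($fp, 0);
        rewind($fp);
        fwrite($fp, (string)($count + 1));
        fflush($fp);
        flock($fp, LOCK_UN);
    }
    fclose($fp);
}
?>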
A database should be OK, but even that may not be fast enough. It shouldn't mess up your data, though.
Anyway, hope this helps. This is kind of an odd question; concurrency can be a real pain if you have a high chance of process collisions, race conditions, and so on.
However, as I mentioned in the comments, using the filesystem is probably not going to provide the consistency you need. The best fit may be some kind of in-memory storage such as Redis, but that is hard to say without fully knowing what you use it for, for example whether it should persist across a server reboot.
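If Redis is available, the counter becomes a single atomic command, with no file locking at all. A minimal sketch using the phpredis extension; the host, port, and key name here are assumptions for illustration:
<?php
// Atomic page-hit counter in Redis; INCR creates the key at 0 if missing.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);             // assumed local Redis instance
$count = $redis->incr("countlog:{$post_slug}"); // returns the new value
echo $count;
?>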
Hope it helps, good luck.

A portable way of providing an IP-based cooldown period?

I have a PHP API front end running on a webserver. This specific PHP program is meant to be distributed, so it should be as portable as possible.
The feature I want to implement is an IP cooldown period: the same IP may request the API at most twice per second, i.e. with at least a 500 ms delay between requests.
The approach I had in mind is storing the IP in a MySQL database, along with the latest request timestamp. I get the IP with:
if (getenv('REMOTE_ADDR'))
$ipaddress = getenv('REMOTE_ADDR');
But some servers might not have a MySQL database, or the user installing this may have no access to one. Another issue is cleaning up the database.
Is there a more portable way of temporarily storing the IPs (keeping IPv6 in mind)?
and
How can I provide an automatic cleanup of IPs that are older than 500ms, with the least possible performance impact?
Also: I have no interest in looking at the stored IPs; it is just about the delay.
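For reference, the MySQL approach described above might look like the following sketch. The ip_cooldown table, its columns, and the connection credentials are hypothetical, and the final DELETE doubles as the cleanup asked about:
<?php
// Hypothetical table:
//   CREATE TABLE ip_cooldown (ip VARBINARY(16) PRIMARY KEY, last_req DOUBLE);
// VARBINARY(16) holds inet_pton() output, so IPv6 addresses fit too.
$mysqli = new mysqli('localhost', 'user', 'pass', 'db'); // assumed credentials
$ip  = inet_pton($_SERVER['REMOTE_ADDR']);
$now = microtime(true);

$stmt = $mysqli->prepare('SELECT last_req FROM ip_cooldown WHERE ip = ?');
$stmt->bind_param('s', $ip);
$stmt->execute();
$stmt->bind_result($last);
$hit = $stmt->fetch();
$stmt->close();

if ($hit && ($now - $last) < 0.5) {
    http_response_code(429);
    exit('Cooldown: try again shortly.');
}

// Upsert the timestamp for this IP
$stmt = $mysqli->prepare('REPLACE INTO ip_cooldown (ip, last_req) VALUES (?, ?)');
$stmt->bind_param('sd', $ip, $now);
$stmt->execute();
$stmt->close();

// Opportunistic cleanup: drop anything older than the cooldown window
$mysqli->query('DELETE FROM ip_cooldown WHERE last_req < ' . ($now - 0.5));
?>
Note there is still a small window between the SELECT and the REPLACE; under real concurrency you would wrap both in a transaction or use a single conditional UPDATE.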
This is how I solved it for now, using a file.
Procedure
1. Get the client IP and hash it (to prevent reading IPs out of the file).
2. Open the IP file and scan each line.
3. Compare the time of the current record to the current time.
4. If the difference is greater than the set timeout, go to 5; otherwise go to 6.
5. If the record's IP matches the client, write an updated record; otherwise drop the record.
6. If the record's IP matches the client, provide a failure message; otherwise copy the record unchanged.
Example code
<?php
$sIPHash = md5($_SERVER['REMOTE_ADDR']); // the array key was missing its quotes
$iSecDelay = 10;
$sPath = "bucket.cache";
$bReqAllow = false;
$iWait = -1;
$sContent = "";
if ($nFileHandle = fopen($sPath, "c+")) {
    flock($nFileHandle, LOCK_EX);
    while (($sCurLine = fgets($nFileHandle, 4096)) !== FALSE) {
        $bIsIPRec = strpos($sCurLine, $sIPHash);
        $iLastReq = strtok($sCurLine, '|');
        // this record expired anyway:
        if ((time() - $iLastReq) > $iSecDelay) {
            // is it also our IP?
            if ($bIsIPRec !== FALSE) {
                $sContent .= time() . "|" . $sIPHash . PHP_EOL;
                $bReqAllow = true;
            }
        } else {
            if ($bIsIPRec !== FALSE) $iWait = ($iSecDelay - (time() - $iLastReq));
            $sContent .= rtrim($sCurLine) . PHP_EOL; // fgets keeps the newline, so trim it
        }
    }
    if ($iWait == -1 && $bReqAllow == false) {
        // no record yet, create one
        $sContent .= time() . "|" . $sIPHash . PHP_EOL;
        echo "Request from new user successful!";
    } elseif ($bReqAllow == true) {
        echo "Request from old user successful!";
    } else {
        echo "Request failed! Wait " . $iWait . " seconds!";
    }
    // the rewrite now happens inside the if, so a failed fopen() no longer
    // reaches ftruncate()/fwrite() with an invalid handle
    ftruncate($nFileHandle, 0);
    rewind($nFileHandle);
    fwrite($nFileHandle, $sContent);
    flock($nFileHandle, LOCK_UN);
    fclose($nFileHandle);
}
?>
Remarks
New users
If the IP hash doesn't match any record, a new record is created. Attention: this might fail if the script does not have rights to create the file.
Memory
If you expect a lot of traffic, switch to a database solution like this altogether.
Redundant code
"But minxomat", you might say, "now each client loops through the whole file!". Yes, indeed, and that is how I want it for my solution. This way, every client is responsible for the cleanup of the whole file. Even so, the performance impact is held low, because if every client is cleaning, file size will be kept at the absolute minimum. Change this, if this way doesn't work for you.

PHP Prevent simultaneous function execution (throttling limits via PHP)

I have a function that sends an HTTP request via cURL to www.server.com.
My task is to make sure that www.server.com gets no more than one request every 2 seconds.
Possible solution:
Create a function checktime() that stores the current call time in the database, checks against the database on every subsequent call, and makes the system pause for 2 seconds:
$oldTime = $this->getTimeFromDatabase();
if ($oldTime < (time() - 2)) { // if it has been 2 seconds
    $this->setNewTimeInDatabase();
    return true;
} else {
    sleep(2);
    return false;
}
The problem/question:
Let's say the last request to www.server.com was at 1361951000, and 10 other users attempt a request at 1361951001 (1 second later). The checktime() function will be called.
Since it has only been 1 second, the function returns false and all 10 users wait 2 seconds. Does that mean that at 1361951003 all 10 requests will be sent simultaneously? And is it possible that the time of the last request is never updated in the database, because $this->setNewTimeInDatabase() is never called in checktime()?
Thank you!
UPDATE:
I have just been told that using a loop might solve the problem:
for ($i = 0; $i < 300; $i++) {
    $oldTime = $this->getTimeFromDatabase();
    if ($oldTime < (time() - 2)) { // if it has been 2 seconds
        $this->setNewTimeInDatabase();
        return true;
    } else {
        sleep(2);
        return false;
    }
}
But I don't really see the logic in it.
I believe you need some implementation of a semaphore. The database could work, as long as you can guarantee that only one thread gets to write to the db and then make the request.
For example, you might use an UPDATE query and then check the number of affected rows (to see whether the update actually happened). If the update was successful, you can assume you got the mutex lock, and then make the request (assuming the time is right to make it). Something like this:
$oldTime = $this->getTimeFromDatabase();
if ($oldTime < (time() - 2) && $this->getLock()) { // if it has been 2 seconds
    $this->setNewTimeInDatabase();
    $this->releaseLock();
    return true;
} else {
    sleep(2);
    return false;
}

function getLock()
{
    // the lock is only ours if the UPDATE actually changed a row
    $mysqli->query('UPDATE locktable SET locked = 1 WHERE locked = 0');
    return $mysqli->affected_rows > 0;
}

function releaseLock()
{
    $mysqli->query('UPDATE locktable SET locked = 0');
}
I'm not sure about the mysql functions, but I believe it's ok to get the general idea.
Watch out when using a database. MySQL, for example, is not always 100% in sync between its sessions, and for that reason it is not safe to rely on it for locking purposes.
You could use a file lock through flock and save the access time in the file. Then you can be sure the file is locked, so no two or more processes ever access the resource at the same time.
It would probably go something like this:
$filepath = "lockfile_for_resource"; // one name for both touch() and fopen(); the original mixed two
touch($filepath);
$fp = fopen($filepath, "c+") or die("Could not open file."); // "c+" allows the fwrite below; "r" did not
while (true) {
    while (!flock($fp, LOCK_EX)) {
        usleep(250000); // wait 0.25s for the file lock; sleep() only accepts whole seconds
    }
    $time = file_get_contents($filepath);
    $diff = time() - $time;
    if ($diff >= 2) {
        break;
    } else {
        flock($fp, LOCK_UN);
    }
}
// The following code is never executed simultaneously by two scripts.
// Access and use your resource here.
ftruncate($fp, 0); // replace the old timestamp instead of appending to it
rewind($fp);
fwrite($fp, (string)time());
fflush($fp);
flock($fp, LOCK_UN); // release the lock on the file
fclose($fp);
Please be aware that I have not tested the code.

Outputting exec() ping result progressively

I'm trying to write a function that pings a few hundred addresses and returns their round-trip times (in milliseconds). So far I've achieved the initial goal, which is to ping and get the result, but the problem arises when using the same code for hundreds of addresses: the PHP page stalls until it either times out or reaches the last ping command.
I would be glad to get some suggestions on outputting the results progressively. Here is my current code:
<?php
// "for" loop added following a suggestion, for browser compatibility (IE, FF, CHR, OPR, SFR)
for ($i = 0; $i < 5000; $i++) {
    echo ' ';
}

function GetPing($ip = NULL) {
    // Default to the client address if none was passed to the function
    if (empty($ip)) {
        $ip = $_SERVER['REMOTE_ADDR'];
    }
    // Check which OS the server is running
    if (getenv('OS') == 'Windows_NT') {
        //echo '<b>Detected local system:</b> Windows NT/2000/XP/2003/2008/Vista/7<p>';
        $exec = exec("ping -n 1 -l 32 -i 128 " . $ip);
        $words = explode(' ', $exec);
        $result = end($words);
    } else {
        //echo '<b>Detected local system:</b> Linux/Unix<p>';
        $exec = exec("ping -c 1 -s 32 -t 128 " . $ip);
        $stats = explode('=', $exec);
        $array = explode('/', end($stats));
        $result = ceil($array[1]) . 'ms';
    }
    // ob_flush and flush added following a suggestion for unbuffered output;
    // in the original they sat after the return statements and were never reached
    ob_flush();
    flush();
    return $result;
}

// Added to test 20 sequential outputs
for ($count = 0; $count < 20; $count++) {
    echo GetPing('8.8.8.8') . '<div>';
}
?>
After some feedback, I've added a for loop as well as ob_flush() and flush() to my script, and I've also set output_buffering to 0 in php.ini. It works in the browsers I've tested so far (IE8, Firefox 12, Chrome 19, Opera 11, Safari 5). The code now seems to work as intended, but any suggestion to improve on it is immensely appreciated.
Thank you for your feedback.
This is just a guess: years ago I wrote a very wobbly chat script that used output buffering as well (fetching new messages in a while(true) loop).
While doing this I encountered the same problems: sometimes the script stalled (blank screen), sometimes it took a while until the characters appeared, and on top of that the behaviour was browser-specific.
Here are the relevant code snippets I added to the script to make it work with IE6 and FF2 (as I said, years ago ...):
<?php
// End output buffering
ob_end_flush();

// IE and Safari workaround:
// they will only display the webpage once it has completely loaded or
// at least 5000 bytes have been "printed".
for ($i = 0; $i < 5000; $i++) {
    echo ' ';
}

while ( ... ) {
    echo 'Message';
    ob_flush();
    flush();
}
?>
It worked for me, so maybe you could give it a try as well. (Although I have no idea how modern browsers and server infrastructure will react to this.)
I think what you may be looking for is progressively running and outputting the script, rather than asynchronous functions.
See Is there a way to make PHP progressively output as the script executes?
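Putting both suggestions together, here is a sketch for the Linux branch. The address list, the -W 1 timeout, and the plain-text output are assumptions, and note escapeshellarg(), which the original exec() calls should use anyway:
<?php
header('Content-Type: text/plain');
while (ob_get_level() > 0) ob_end_flush(); // disable output buffering

$addresses = ['8.8.8.8', '8.8.4.4', '1.1.1.1']; // replace with your few hundred IPs
foreach ($addresses as $ip) {
    $out = [];
    exec('ping -c 1 -s 32 -W 1 ' . escapeshellarg($ip), $out, $rc);
    // on success the last line is "rtt min/avg/max/mdev = a/b/c/d ms"
    $last = end($out);
    if ($rc === 0 && ($eq = strrchr($last, '=')) !== false) {
        $parts = explode('/', trim(substr($eq, 1)));
        echo $ip . ': ' . ceil((float)$parts[1]) . " ms\n"; // average round-trip
    } else {
        echo $ip . ": timeout\n";
    }
    flush(); // push this result to the browser before the next ping
}
?>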

Php, wait 5 seconds before executing an action

I have a .php script that I use for creating the list of my products.
I am on shared hosting, so I can't do a lot of queries otherwise I get a blank page.
This is how I use my script now:
script.php?start=0&end=500&indexOfFile=0 ->> makes product0.txt with the first 500 products
script.php?start=501&end=1000&indexOfFile=1 ->> makes product1.txt with the next 500 products
script.php?start=1001&end=1500&indexOfFile=2 ->> makes product2.txt with the last 500 products
How can I modify the script so it will make all these files automatically, so that I don't have to change each time the link manually?
I would like to click a button which will do this:
make the product0.txt file with the first 500 products
wait 5 seconds
make the product1.txt file with another 500 products
wait 5 seconds
make the product2.txt file with the last 500 products
use:
sleep(NUMBER_OF_SECONDS);
before starting your actions, for example:
sleep(5);
or, for sub-second precision:
usleep(NUMBER_OF_MICROSECONDS);
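Put together, the flow from the question might look like this sketch; make_products_file() is a hypothetical stand-in for whatever script.php does with start, end, and indexOfFile:
<?php
set_time_limit(0); // three batches plus pauses exceed 30 seconds (if the host allows it)
for ($i = 0; $i < 3; $i++) {
    $start = ($i == 0) ? 0 : $i * 500 + 1;  // 0, 501, 1001 as in the question
    $end   = ($i + 1) * 500;                // 500, 1000, 1500
    make_products_file($start, $end, $i);   // hypothetical: writes product{$i}.txt
    if ($i < 2) {
        sleep(5);                           // wait 5 seconds between batches
    }
}
?>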
As of Jan 2018, this was the only solution that worked for me:
<?php
if (ob_get_level() == 0) ob_start();

for ($i = 0; $i < 10; $i++) {
    echo "<br> Line to show.";
    echo str_pad('', 4096) . "\n";
    ob_flush();
    flush();
    sleep(2);
}

echo "Done.";
ob_end_flush();
?>
I use this:
$i = 1;
$last_time = time();
while ($i > 0) {
    // time() moves forward during the request; $_SERVER['REQUEST_TIME'] is
    // fixed at request start, so the original loop could never finish
    $total = time() - $last_time;
    if ($total >= 2) {
        // Code Here
        $i = -1;
    }
}
you can use:
function WaitForSec($sec) {
    $last_time = time();
    while (true) {
        // compare against $sec (the original hardcoded 2) and use time(),
        // since $_SERVER['REQUEST_TIME'] never changes during a request
        if ((time() - $last_time) >= $sec) {
            return 1;
        }
    }
}
and run the code like this:
WaitForSec(your_sec);
Example:
WaitForSec(5);
Or you can use sleep. Example:
sleep(5);
I am on shared hosting, so I can't do a lot of queries otherwise I get a blank page.
That sounds very peculiar. I got the cheapest PHP hosting package I could find for my last project, and it does not behave like this. I would not pay for a service that did. Indeed, I'm stumped as to how I could even configure a server to replicate this behaviour.
Regardless of why it behaves this way, adding a sleep in the middle of the script cannot resolve the problem.
Since, presumably, you control your product catalog, new products should be relatively infrequent (or are you trying to get stock reports?). If you control when you change the data, why run the scripts automatically? Or do you mean that you already have these URLs and you get the expected files when you run them one at a time?
From https://www.php.net/manual/es/function.usleep.php:
<?php
// Wait 2 seconds
usleep(2000000);
// if you need 5 seconds
usleep(5000000);
?>
