I'm writing PHP code on IIS to serve a file for download with a speed limit, so I need to use the sleep function to throttle the transfer.
Here are a few lines of my code:
set_time_limit(0);
while (!feof($file)) {
    echo fread($file, 1024 * 10); // send a 10 KB chunk
    ob_flush();
    flush();
    sleep(1);                     // throttle to roughly 10 KB per second
    if (connection_status() != 0) {
        #fclose($file);
        exit;
    }
}
But the browser just says 'Waiting for mysite'. If I remove sleep(1), everything works. I also tested on Apache and everything works there too.
So the problem seems to be with the sleep function under IIS.
You need to have your server properly configured for that. To be honest, you should do the throttling on the server itself rather than relying on PHP. sleep(1) makes the script send a chunk, pause, send a chunk, pause, and so on. It does not hold a steady 10 KB/s; it jumps from something like 500 KB/s for a second down to 0 KB/s for a second. That may average out to 10 KB/s, but it is not the same thing, and some programs won't handle it correctly and may terminate the download. You should look into QoS (How to Limit Download Speeds from my Website on my IIS Windows Server?).
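If it has to stay in PHP, one way to soften the bursts is to send much smaller chunks more often instead of 10 KB followed by a full second of silence. A rough sketch (the file name and rates are only illustrative):
<?php
// Sketch: roughly 10 KB/s, but sent as 1 KB every ~100 ms to smooth the bursts.
set_time_limit(0);
$file = fopen('large_download.bin', 'rb'); // hypothetical file
while (!feof($file)) {
    echo fread($file, 1024);   // 1 KB chunk
    ob_flush();
    flush();
    usleep(100000);            // 100 ms pause => ~10 KB per second
    if (connection_status() != CONNECTION_NORMAL) {
        break;                 // client went away
    }
}
fclose($file);
This still leaves PHP doing the pacing, so the QoS / IIS-level limits discussed in the other answers remain the more robust option.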
What exactly is the problem with IIS? Note that waiting for 1 second per chunk means your script may exceed the timeout limit (which can be as low as 30 seconds), so IIS will kill your script.
If you want to serve large files, I recommend serving them directly from IIS and using IIS' built-in rate limiter rather than via PHP.
See here: http://www.iis.net/configreference/system.applicationhost/sites/site/limits
Related
Is it possible for nginx to trigger a php-fpm process, but then close the nginx worker and quickly return an empty page with status 200?
I have some slow php processes that need kicking off a few times a week. They can take between 3 and 4 minutes each. I trigger them with a cron manager site. The php process writes a lock file at the start, and when the process is complete an email is sent and finally the lock file is removed.
Following this guide, in my php-fpm worker pool, I have this: request_terminate_timeout = 300 and in my nginx site config I have fastcgi_read_timeout 300;
It works, but I don't care about the on-screen result. And the cron service I use has a time limit of 5 seconds, and after repeated timeouts, it disables the job.
Yes, I know I could fork a process in php, let it run in the background, and return a 200 to nginx. And yes, I could pay and upgrade my cron service. Nonetheless, it would be an interesting and useful thing to know, anyway.
So, is this possible, or does php-fpm require an open and "live" socket? I ask that because on the "increase your timeout" page referred to above, one answer says
"Lowest of three. It’s line chain. Nginx->PHP-FPM->PHP. Whoever dies
first will break the chain".
In other words, does that mean that I can never "trigger" a process, but then close the nginx part of the trigger?
You can.
exec() a PHP CLI script with a trailing &, redirecting output to a log file or /dev/null; pass any parameters as JSON or serialized data (run them through escapeshellarg()). The exec() call returns immediately with no error (there's a sketch of this below); or
use PHP's ignore_user_abort(), send a Connection: close header, and flush any output buffers as well as doing a normal flush(). Put any slow code after that. You'll need to test this under Nginx.
Either way, return a 1xx code to signify acceptance but no response. And it's up to you to make sure your script doesn't run forever; give it a heartbeat so it touch()es a file every so often. If the file is old and the process is still running, kill it.
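A minimal sketch of the first option, assuming a hypothetical worker script slow_job.php (the 202 here stands in for the "acceptance" status the answer describes, since plain PHP can't easily emit a bare 1xx response):
<?php
// Sketch: hand the slow work to a detached CLI process and answer right away.
$payload = json_encode(array('job' => 'weekly-report'));     // hypothetical payload
$cmd = 'php /path/to/slow_job.php ' . escapeshellarg($payload)
     . ' >> /var/log/slow_job.log 2>&1 &';                   // trailing & detaches it
exec($cmd);

http_response_code(202);   // accepted, nothing worth waiting for
echo 'queued';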
Thanks to @Walf's answer, combined with this example from the PHP site, this SO answer, and a little fiddling, the following appears to be a solution for nginx that requires no messing with any PHP or nginx ini/conf files.
$start = microtime(true);
ob_end_clean();
header("Connection: close\r\n");
header('X-Accel-Buffering: no');
header("Content-Encoding: none\r\n");
ignore_user_abort(true); // optional
ob_start();
echo ('Text user will see');
$size = ob_get_length();
header("Content-Length: $size");
ob_end_flush(); // Strange behaviour, will not work
flush(); // Unless both are called !
ob_end_clean();
sleep(35); // simulate something longer than default 30s timeout
$time_elapsed_secs = microtime(true) - $start;
echo $time_elapsed_secs; // you will never see this!
Or, at least, it works perfectly for what I want it to do. Thanks for the answers.
I need to read a large file to find some labels and create a dynamic form. I cannot use file() or file_get_contents() because of the file size.
If I read the file line by line with the following code
set_time_limit(0);
$handle = fopen($file, 'r');
if ($handle) {
    while (!feof($handle)) {
        $line = fgets($handle);
        if ($line) {
            //do something.
        }
    }
}
echo 'Read complete';
I get the following error in Chrome:
Error 101 (net::ERR_CONNECTION_RESET)
The error occurs after several minutes, so I don't think the max_input_time setting (which is set to 60) is the problem.
What server software do you use? Apache, nginx? You should set the maximum accepted file upload somewhere higher than 500 MB. The max upload size in php.ini should also be bigger than 500 MB, and I think PHP must be allowed to spawn processes larger than 500 MB (check this in your PHP config).
Set the memory limit with ini_set("memory_limit", "600M"); you also need to set the timeout limit:
set_time_limit(0);
Generally, long-running processes should not be executed while the user waits for them to complete.
I'd recommend using a background-job tool that can handle this type of work and can be queried about the status of the job (running/finished/error).
My first guess is that something in the middle breaks the connection because of a timeout. Whether it's a timeout in the web server (which PHP cannot know about) or some firewall doesn't really matter: PHP gets a signal to close the connection and the script stops running. You can circumvent this behaviour with ignore_user_abort(true); together with set_time_limit(0) it should do the trick.
The caveat is that whatever caused the connection abort will still do it, even though the script will still finish its job. One very annoying side effect is that the script could then be executed multiple times in parallel, with none of them ever completing.
Again, I recommend using some background task to do it and providing an interface for the end user (browser) to check the status of that task. You could also implement a basic one yourself via cron jobs and database/text files that hold the status.
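A very small status-file version of that idea could look like this (paths and messages are invented for the example); the page the user polls then only has to read and print that file:
<?php
// worker.php (sketch): started by cron or exec(); records its own status.
$statusFile = '/tmp/long_job.status';               // hypothetical path
file_put_contents($statusFile, 'running since ' . date('c'));

sleep(60);                                          // stand-in for the real slow work

file_put_contents($statusFile, 'finished at ' . date('c'));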
I've written a simple PHP script to download a hidden file if the user has proper authentication. The whole set up works fine: it sends the proper headers, and the file transfer begins just fine (and ends just fine - for small files).
However, when I try to serve a 150 MB file, the connection gets mysteriously interrupted somewhere close to the middle of the file. Here's the relevant code fragment (taken from somewhere on the Internet and adapted by me):
function readfile_chunked($filename, $retbytes = TRUE) {
    $handle = fopen($filename, 'rb');
    if ($handle === false) return false;
    while (!feof($handle) and (connection_status() == 0)) {
        print(fread($handle, 1024*1024));
        set_time_limit(0);
        ob_flush();
        flush();
    }
    return fclose($handle);
}
I also do some other code BEFORE calling that function above, to try to solve the issue, but as far as I can tell, it does nothing:
session_write_close();
ob_end_clean();
ignore_user_abort();
set_time_limit(0);
As you can see, it doesn't attempt to load the whole file in memory at once or anything insane like that. To make it even more puzzling, the actual point in the transfer where it kills it seems to float between 50 and 110 MB, and it seems to kill ALL connections to the same file within a few seconds of each other (tried this by trying to download simultaneously with a friend). Nothing is appended to the interrupted file, and I see no errors on the logs.
I'm using Dreamhost, so I suspect that their watchdog might be killing my process because it's been running for too long. Does anyone have any experience to share on the matter? Could something else be the issue? Is there any workaround?
For the record, my Dreamhost is setup to use PHP 5.2.1 FastCGI.
I have little experience with Dreamhost, but you could use mod_xsendfile instead (if Dreamhost allows it).
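With mod_xsendfile enabled, the PHP side shrinks to an authentication check plus a few headers; a sketch (the path and filename are made up):
<?php
// Sketch: let Apache's mod_xsendfile stream the file instead of PHP.
// ... authentication check goes here ...
$path = '/home/user/private/big_file.zip';  // hypothetical absolute path
header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="big_file.zip"');
header('X-Sendfile: ' . $path);
exit; // Apache takes over the actual transfer from here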
I want to have a simple PHP script that loops to do something every ten minutes. It would be hosted offsite, and I would activate it via my browser. I don't have access to the server other than my web space, so 'cron' as such isn't an option.
(I'm happy to have this stop after a certain time or number of job cycles. I just need it to continue running after I point the browser away from the page script.)
Is such a thing possible? Thanks.
It's possible, see ignore_user_abort():
set_time_limit(0);
ignore_user_abort(true);
while (true) // forever
{
    // your code
}
You can use these two functions in combination with sleep(), usleep(), time_nanosleep() or, even better, time_sleep_until() to achieve a CRON-like effect.
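For example, a ten-minute loop built on time_sleep_until() (which drifts less than a plain sleep(600)) might look like this; the cycle count is arbitrary:
<?php
// Sketch: run a task every ten minutes, stopping after a fixed number of cycles.
set_time_limit(0);
ignore_user_abort(true);

$interval = 600;                  // ten minutes in seconds
$next = microtime(true);
for ($i = 0; $i < 144; $i++) {    // roughly 24 hours' worth of cycles
    // ... your code ...
    $next += $interval;
    time_sleep_until($next);      // sleep until the next scheduled tick
}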
PHP scripts time out after a certain amount of time - they're not designed to be long-running programs. You'll have to find some way to prod the script every ten minutes.
Have a look at set_time_limit.
This is from the above page:
You can do set_time_limit(0); so that the script will run forever - however this is not recommended and your web server might catch you out with an imposed HTTP timeout (usually around 5 minutes).
Maybe you can write another script on a computer to which you have access, and make that script request the other one periodically.
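For instance, a tiny trigger loop run from a machine you do control (the URL is hypothetical):
<?php
// Sketch: poke the remote script every ten minutes from a box you control.
set_time_limit(0);
while (true) {
    @file_get_contents('https://example.com/run-job.php'); // hypothetical URL
    sleep(600);
}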
You can look at pcntl_fork.
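pcntl_fork() is only available with the pcntl extension (typically CLI only, not under Apache modules), but the idea is roughly this:
<?php
// Sketch: fork a child to do the slow work while the parent returns immediately.
$pid = pcntl_fork();
if ($pid === -1) {
    die('could not fork');
} elseif ($pid === 0) {
    // child: the long-running work goes here
    sleep(600);                 // stand-in for the real job
    exit(0);
} else {
    // parent: respond to the caller right away
    echo "started worker with pid $pid\n";
}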
Here's a hack for your problem:
// Anything before disconnecting, but nothing to be output to the client!
ob_end_clean();
header('Connection: close');
ob_start();
// Here you can output anything before disconnecting
echo "Bla bla bla";
$outsize = ob_get_length();
header('Content-Length: '.$outsize);
ob_end_flush();
flush();
// Do your background processing here
// and feel free to quit anytime you want.
A way to do this might be to launch a new php process from the web page, e.g.
<?php
exec("php script_that_runs_for_a_while.php > /dev/null 2>&1 &");
?>
Redirecting the output to /dev/null and adding the trailing & means (on a Linux system) that your page will return straight away rather than waiting for the execution to finish.
The launched script can then do whatever it likes, since it is basically just a new process running on the server.
Note that at the start of your long running script, you will want to use the set_time_limit function to set the max execution time to some large value.
I'm trying to make a PHP script. I have the script finished, but it takes about 10 minutes to finish the process it is designed to do. This is not a problem in itself; however, I presume I have to keep the page loaded the whole time, which is annoying. Can I have it so that I start the process and then come back 10 minutes later and just view the log file it has generated?
Well, you can use "ignore_user_abort(true)"
So the script will continue to work (keep an eye on script duration, perhaps add "set_time_limit(0)")
But a warning here: You will not be able to stop a script with these two lines:
ignore_user_abort(true);
set_time_limit(0);
Except by directly accessing the server and killing the process there! (Been there, done an endless loop calling itself over and over again, brought the server to a screeching halt, got shouted at...)
Sounds like you should have a queue and an external script for processing the queue.
For example, your PHP script should put an entry into a database table and return right away. Then, a cron running every minute checks the queue and forks a process for each job.
The advantage here is that you don't lock an apache thread up for 10 minutes.
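A bare-bones version of that queue might be no more than an insert; the table, columns and DSN below are invented for the example, and the cron worker would simply select the oldest pending row, mark it running, do the work, and mark it finished:
<?php
// enqueue.php (sketch): the web request only records the job and returns.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass'); // hypothetical DSN
$stmt = $pdo->prepare('INSERT INTO jobs (payload, status, created_at) VALUES (?, ?, NOW())');
$stmt->execute(array(json_encode(array('task' => 'import')), 'pending'));
echo 'Job queued';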
I had lots of issues with this sort of process under Windows; my situation was a little different in that I didn't care about the response of the script - I wanted it to start and let other page requests go through while it was busy working away.
For some reason I had issues with it either hanging other requests or timing out after about 60 seconds (both Apache and PHP were set to time out after about 20 minutes). It also turns out that Firefox times out after 5 minutes (by default) anyway, so past that point you can't tell what's going on through the browser without changing Firefox's settings.
I ended up using the process open and close functions to launch PHP in CLI mode, like so:
pclose(popen("start php myscript.php", "r"));
This uses start to launch the PHP process and then closes the start process, leaving PHP running for however long it needs - again, you'd need to kill the process manually to shut it down. It doesn't require setting any timeouts, and the page that called it can carry on and output some more details.
The only issue with this is that if you need to send the script any data, you'd either do it via another source or pass it along the command line as parameters, which isn't so secure.
It worked nicely for what we needed, though, and ensures the script always starts and is allowed to run without interruption.
I think the shell_exec command is what you are looking for.
However, it is disabled in safe mode.
The PHP manual article about it is here: http://php.net/shell_exec
There is an article about it here: http://nsaunders.wordpress.com/2007/01/12/running-a-background-process-in-php/
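The linked article boils down to something like this (the script and log paths are illustrative):
<?php
// Sketch: start a detached background process via shell_exec + nohup.
shell_exec('nohup php /path/to/long_task.php > /tmp/long_task.out 2>&1 &');
echo 'Background task started';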
There is another option you can use: run the script from the CLI. It will run in the background, and you can even run it as a cron job if you want.
e.g.
#!/usr/bin/php -q
<?php
//process logs
?>
This can be set up as a cron job and will execute with no time limitation... this example is for Unix-based operating systems, though.
FYI:
I have a PHP script with an infinite loop that does some processing; it has been running non-stop for the past 3 months.
You could use ignore_user_abort() - that way the script will continue to run even if you close your browser or go to a different page.
Think about Gearman
Gearman is a generic application framework for farming out work to multiple machines or processes. It allows applications to complete tasks in parallel, to load balance processing, and to call functions between languages. The framework can be used in a variety of applications, from high-availability web sites to the transport of database replication events.
This extension provides classes for writing Gearman clients and workers.
- Source: PHP manual
Official website of Gearman
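With the pecl gearman extension and a gearmand server running, a fire-and-forget job looks roughly like this (the function name and payload are illustrative); the worker half is shown as comments because it runs as a separate CLI process:
<?php
// client.php (sketch): submit a background job and return immediately.
$client = new GearmanClient();
$client->addServer('127.0.0.1', 4730);     // default gearmand port
$client->doBackground('resize_image', json_encode(array('id' => 42)));
echo 'Job submitted';

// worker.php (sketch), run separately under CLI/cron/supervisor:
// $worker = new GearmanWorker();
// $worker->addServer('127.0.0.1', 4730);
// $worker->addFunction('resize_image', function (GearmanJob $job) {
//     $data = json_decode($job->workload(), true);
//     // ... do the slow work ...
// });
// while ($worker->work());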
In addition to bastiandoeen's answer, you can combine ignore_user_abort(true); with a cURL request.
Fake a request abort by setting a low CURLOPT_TIMEOUT_MS and keep processing after the connection has closed:
function async_curl($background_process = '') {
    //-------------get curl contents----------------
    $ch = curl_init($background_process);
    curl_setopt_array($ch, array(
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_NOSIGNAL       => 1,  // required for timeouts below 1000 ms on Unix-like systems (see note below)
        CURLOPT_TIMEOUT_MS     => 50, // maximum number of milliseconds cURL functions are allowed to run
        CURLOPT_VERBOSE        => 1,
        CURLOPT_HEADER         => 1
    ));
    $out = curl_exec($ch);
    //-------------parse curl contents----------------
    //$header_size = curl_getinfo($ch, CURLINFO_HEADER_SIZE);
    //$header = substr($out, 0, $header_size);
    //$body = substr($out, $header_size);
    curl_close($ch);
    return true;
}
async_curl('http://example.com/background_process_1.php');
NB:
If you want cURL to timeout in less than one second, you can use CURLOPT_TIMEOUT_MS, although there is a bug/"feature" on "Unix-like systems" that causes libcurl to timeout immediately if the value is < 1000 ms, with the error "cURL Error (28): Timeout was reached". The explanation for this behavior is:
[...]
The solution is to disable signals using CURLOPT_NOSIGNAL.
Pros
No need to switch methods (compatible with both Windows and Linux)
No need to implement connection handling via headers and output buffers (independent of browser and PHP version)
Cons
Requires the cURL extension
Resources
curl timeout less than 1000ms always fails?
http://www.php.net/manual/en/function.curl-setopt.php#104597
http://php.net/manual/en/features.connection-handling.php
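Note that the script being called (background_process_1.php in the example above) has to survive the caller disconnecting after ~50 ms, so it would typically start with something like this sketch:
<?php
// background_process_1.php (sketch): keep running after the cURL caller bails out.
ignore_user_abort(true);
set_time_limit(0);

sleep(120);                                     // stand-in for the real long-running work
file_put_contents('/tmp/bg.log', 'done at ' . date('c') . "\n", FILE_APPEND); // hypothetical log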
I'm pretty sure this will work:
<?php
pclose(popen('php /path/to/file/server.php &', 'r'));
echo "Server started. [OK]";
?>
The '&' is important. It tells the shell not to wait for the process to exit.
You can also use this code in your PHP (as "bastiandoeen" said):
ignore_user_abort(true);
set_time_limit(0);
in your server stop command:
<?php
$output = array();
exec('ps aux | grep -ie /path/to/file/server.php | awk \'{print $2}\' | xargs kill -9 ', $output);
echo "Server stopped. [OK]";
?>
Just call StartBuffer() before any output, and EndBuffer() when you want the client to close the connection. The code after the EndBuffer() call will be executed on the server without a client connection.
private function StartBuffer(){
    @ini_set('zlib.output_compression', 0);
    @ini_set('implicit_flush', 1);
    @ob_end_clean();
    @set_time_limit(0);
    @ob_implicit_flush(1);
    @ob_start();
}
private function EndBuffer(){
    $size = ob_get_length();
    header("Content-Length: $size");
    header('Connection: close');
    ob_flush();
    ob_implicit_flush(1);
}