Suppose I make an AJAX HTTP request from jQuery to a backend PHP script. The request is made, and the PHP script starts running and doing its magic. Suppose I then change to another website, away from the site where the original AJAX request was made. Also, I do this before the PHP script finishes and has time to send an HTTP response back. Does the PHP script finish running and doing its thing even though I've switched to another website before I got the HTTP response?
So the order is this.
I'm on website www.xyz.com
I have a jQuery handler that kicks off an AJAX request to blah.php
blah.php starts running
I go to website www.abc.com soon after without waiting for a response from blah.php
What's going on with blah.php? Is execution still going on? Did it stop? I mean it didn't get a chance to respond so...
This may depend on your server configuration, but in general the script will continue to execute despite a closed HTTP connection.
I have tested this with Apache 2 + PHP 5 as mod_php. I would expect similar behaviour with PHP as CGI and with other webservers but do not know for certain.
The best way to determine for certain on your configuration is, as @tdammers suggests, to set up a test script something like the following and monitor the log.
<?php
// Log when the script starts, then every 10 seconds for two minutes, then when it ends.
error_log('Test script started.');
for ($i = 1; $i < 13; $i++) {
    sleep(10);
    error_log('Test script got to ' . (10 * $i) . ' seconds.');
}
error_log('Test script got to the end.');
?>
Access this script (at /test.php or whatever) then before you get any results, hit stop on your browser. This is equivalent to navigating away before your XHR returns. You could even have it as the target of an XHR and navigate away.
Then check your error log: you should have a start and then messages every 10 seconds for two minutes and an end. You can modify how high $i gets to ensure your script will reach its anticipated maximum execution time if you'd like to test that too.
You don't have to use error_log() - you could write to a file, or make some other persistent change on the server that can be checked without needing to keep the client connection open.
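For example, a minimal variation of the script above that writes to a plain file instead of the error log (the path /tmp/abort-test.log is just an assumption; use any location your server can write to):
<?php
// Same test as above, but appending to a file you can inspect afterwards.
$log = '/tmp/abort-test.log'; // assumed path

file_put_contents($log, "Test script started.\n", FILE_APPEND);
for ($i = 1; $i < 13; $i++) {
    sleep(10);
    file_put_contents($log, 'Test script got to ' . (10 * $i) . " seconds.\n", FILE_APPEND);
}
file_put_contents($log, "Test script got to the end.\n", FILE_APPEND);
?>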
The script may stop before reaching the end because of the max_execution_time php.ini directive - but in any case that limit is distinct from the webserver's own timeout.
Try ignore_user_abort(true);
ignore_user_abort(true);
It should stop PHP from aborting the processing of your code when the client disconnects.
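A minimal sketch of how it might be placed at the top of the long-running script (the error_log() call is just there so you can verify completion):
<?php
// Keep processing even if the client navigates away or closes the tab.
ignore_user_abort(true);
// Optionally also lift the execution time limit for genuinely long jobs.
set_time_limit(0);

// ... long-running work goes here ...

error_log('Finished, even though the client may be long gone.');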
You might want to check out the answers to This Question.
Basically, when you make your AJAX call to a PHP script which calls the exec() function as shown in the answers to that question, you'll get an AJAX response almost immediately, since your PHP script doesn't actually need to process anything itself. This way, it shouldn't matter if the user leaves the page.
Here's a small example:
ajax call in html file: $.ajax({url: 'blah.php'});
blah.php file: exec('bash -c "exec nohup setsid php really_slow_script.php > /dev/null 2>&1 &"');
And then finally in really_slow_script.php, just include the actual code you want to run.
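For illustration only, really_slow_script.php could be as simple as the following sketch (the sleep() is a stand-in for your actual slow work):
<?php
// really_slow_script.php - hypothetical contents, purely illustrative.
// This runs detached from the web request, so the user can leave the page.
sleep(300);                               // stand-in for the slow work (e.g. an upload)
error_log('Background job finished.');    // leave a trace you can check afterwards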
I successfully used this kind of logic to allow users to post an already uploaded video from their account on my website to youtube. (The video had to be sent to youtube, and since videos are generally large files, I didn't want the user to have to wait while the video was being uploaded to youtube)
Navigating away will trigger a disconnect message on the server. The implications of that depend entirely on what your server has been configured to do.
By default, the server will be set up so that a disconnect will not interrupt the way that the program functions. It is possible, however, to make it so that a user disconnect triggers any function registered with register_shutdown_function, garbage collection occurs, and the script terminates.
Because it is something which can be configured in several different places, it might be easiest to just run a test, but the relevant setting is the ignore_user_abort php.ini directive. If you want to configure this at a global level, you can set ignore_user_abort = Off in php.ini. If you want it at a site-specific level, you can use php_value ignore_user_abort off in the .htaccess in the parent directory of the current site. Otherwise you can call ignore_user_abort(false); in the script.
Of course, there is no guarantee on a shared server that you have control of .htaccess or php.ini, so you might just need to use ignore_user_abort(false);.
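A quick sketch of the runtime call plus a check of what is actually in effect (useful on shared hosting where you can't be sure which configuration level wins):
<?php
// false = let a client disconnect abort the script (the behaviour described above);
// true  = keep running after the client disconnects.
ignore_user_abort(false);

// See which value is actually in effect after php.ini / .htaccess / runtime overrides.
var_dump(ini_get('ignore_user_abort'));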
Related
I'm currently running an Apache server (2.2) on my local machine (Windows) which I'm using to run some PHP scripts to take care of some tedious work. One of the scripts involves a ton of moving, resizing, and download / uploading files to another server. I would very much like the script to run constantly so that I don't have to baby the script by starting it up again every time it times out.
set_time_limit(0);
ignore_user_abort(1);
Both are set in my script, but after about 30 minutes to an hour the script stops and I get the 504 Gateway Time-out message in my browser. Is there something I'm missing in Apache or PHP to prevent the timeout? Or should I be running the script a different way?
Or should I be running the script a different way?
Definitely. You should run your script from the command line (CLI).
If I had to implement something like this, I would use 2 different scripts:
A. process_controller.php
B. process.php
The workflow should be:
the user calls script A using a browser
script A starts script B using system() or exec() and passes it a "process token" via the command line
script B writes its execution status into a shared space: a file named after the token, a database table - in general, something that can also be read by script A using the token as a reference
the page served by script A contains an AJAX call, in polling, that asks script A for the status of the process for a given token
Ajax polling:
<script>
var $myToken;

function ajaxPolling()
{
    $.get('process_controller.php?action=getStatus&token=' + $myToken, function (data) {
        $('.result').html(data);
    });
}

setInterval(ajaxPolling, 60 * 1000); // every minute
</script>
There are some considerations about the communication between the 2 processes, depending on how many instances of script B you would be able to run in parallel:
Just one: you don't need a random/unique token
One per user: session_start(); $token = session_id();
More than one per user: session_start(); $token = session_id().microtime();
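To make the workflow above concrete, here is a minimal sketch of script A (process_controller.php); the file-based status store and the exact names are assumptions, not the original setup:
<?php
// process_controller.php - minimal sketch of "script A".
session_start();
$token      = session_id();                                // the "one per user" case
$statusFile = sys_get_temp_dir() . '/status_' . $token;    // shared space: a plain file

if (isset($_GET['action']) && $_GET['action'] === 'getStatus') {
    // Polled by the AJAX call: report whatever script B last wrote.
    echo file_exists($statusFile) ? file_get_contents($statusFile) : 'not started';
    exit;
}

// Otherwise: start script B in the background, passing the token on the command line.
exec('php process.php ' . escapeshellarg($token) . ' > /dev/null 2>&1 &');
echo 'process started with token ' . $token;
In this sketch, script B (process.php) would periodically file_put_contents() its progress into the same status file named after the token.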
If you need to run it from your browser, you should make sure that there is no PHP execution limit in the php.ini file, and also that there is no limit set in mod_php (or whatever you are using) under Apache.
Use PHP's system() to call a shell script which starts a service/background task.
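As a rough sketch (start-job.sh is a hypothetical script name; the redirection and trailing & detach it from the request):
<?php
// Hand the long-running job to a shell script that detaches itself from the web request.
system('nohup /path/to/start-job.sh > /dev/null 2>&1 &');
echo 'Job started; it keeps running even if you leave the page.';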
I made a script that shouldn't return anything to the browser (no echo, print, or interruption of the code with blank space like ?> <?), and that uses ignore_user_abort(true); so that the process doesn't stop once the browser window is closed.
Thus, once the script is launched, it should run through to the end.
The script is designed for a newsletter, and it sends one email every 5 seconds through mail(), to respect my provider's spam policies.
That said, what's happening is that after about 20 minutes of work (the total number of emails is 1002), the script "collapses", with no error returned.
Hence my question: is there a lifetime limit for scripts running with ignore_user_abort(true);?
EDIT
Following the suggestion from Hanky (below) I added the line:
set_time_limit(0);
But the issue persists.
So whilst ignore_user_abort(true); will prevent the script from stopping after a visitor browses away from a page, it is set_time_limit(0); that will remove the time limit. You can also change the PHP memory_limit in your php.ini or by setting something like php_value memory_limit 2048M in your .htaccess file.
In order to see the default max_execution_time you can run echo ini_get('max_execution_time'); (in seconds), and echo ini_get('memory_limit'); for the memory limit.
This being said, it sounds like your PHP scripts are better suited to being run from the CLI. From what you have described, the script doesn't really need to serve anything to the web browser, and the command line is the better method for PHP scripts that operate as a background process rather than returning a front-end to the user.
You can run a file from the command line simply by running php script.php or php -f script.php.
Initially there was no way to solve the issue, and the provider is still investigating.
Meanwhile, following your suggestions, I was able to make it run. I created a TEST file and ran it to verify:
exec("/php5.5/bin/php -f /web/htdocs/www.mydomain.tld/home/test.php > /dev/null 2>&1 &");
It worked. I set up a sleep(600); and sent 6 emails, plus one that informs me when the process has really finished.
It runs transparently all the way to the end.
Thank you so much for your support
I have a daemon program that prints to the terminal when a new device is plugged in or removed. Now I want that output to be printed by PHP the same way it is printed in Linux - like real-time output: when a new device is plugged in, it should alert PHP without you clicking any button; it just prints to the screen. Whatever my daemon program prints in Linux, PHP should print too.
I also have another program which scans devices but is not a daemon; I can get its output without a problem and print it in PHP.
How am I supposed to get real-time output from my daemon program in PHP?
Thanks,
The comments were getting long, so I'm adding a post here.
First off, redirect stderr and stdout to a file with ~$ my-daemon >> my_logfile 2>&1 - unless your daemon has a log-file option.
Then you could perhaps use inotifywait with the -m flag on modify events (if you want to parse/do something on the system outside PHP, e.g. with bash).
Inotify can give you notifications on various changes - this is, for example, a short few lines of a bash script I use to check for new files in a specific directory:
notify()
{
    ...
    inotifywait -m -e moved_to --format "%f" "$path_mon" 2>&- |
    awk ' { print $0; fflush() }' |
    while read buf; do
        printf "NEW:[file]: %s\n" "$buf" >> "$file_noti_log"
        blah blah blah
        ...
    done
}
What this does is: each time a file gets moved to $path_mon, the script enters the while loop and performs the various actions defined in the script.
Haven't used inotify on PHP but this looks perhaps like what you want:
inotify_init (separate module in PHP).
inotify checks for various events in one or several directories, or you can target a specific file. Check man inotifywait or man inotify. You would most likely want to use the "modify" flag, IN_MODIFY under PHP: Inotify Constants.
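An untested sketch with the PECL inotify extension, assuming the daemon's output has been redirected to /var/log/mydaemon.log as suggested earlier (the path is an assumption):
<?php
// Requires the PECL inotify extension mentioned above.
$inotify = inotify_init();
$watch   = inotify_add_watch($inotify, '/var/log/mydaemon.log', IN_MODIFY);

while (true) {
    // Blocks until the watched file changes, then returns the pending events.
    $events = inotify_read($inotify);
    foreach ($events as $event) {
        // React here: e.g. read the newly appended lines and pass them on to the client.
        echo "log file was modified\n";
    }
}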
You could also write your own in C. I haven't read this page, but IBM's pages used to be quite OK: Monitor file system activity with inotify
Another option could be to use PCNTL or similar under PHP.
it will alert php without you clicking any button
So you're talking about client side PHP.
The big problem is alerting the client browser.
For short lengths of time you could ignore the problem and just disable all buffering and send the daemon output to the browser. It's neither elegant nor really workable in the long run, and it has... aesthetic issues. Moreover, you can't really manipulate the output client side at all, not easily or cleanly at least.
So you need to have a program running on the client, which means Javascript. The JS and the PHP programs must communicate, and PHP must also talk to the daemon, or at least monitor what it's doing.
There are ways of doing the first using Web Sockets, or maybe multipart-x-mixed-replace, but they're not very portable yet.
You could refresh the Web page but that's wasteful, and slow.
The problem of getting the notification to the client browser is then, in my opinion, best solved with an AJAX poll. You don't get an immediate alert, but you do get alerted within seconds.
You would send a query to PHP from AJAX every, say, 10 seconds (10000 ms)
function poll_devices() {
    $.ajax({
        url: '/json/poll-devices.php',
        dataType: 'json',
        error: function(data) {
            // ...
        },
        success: function(packet) {
            setTimeout(function() { poll_devices(); }, 10000);
            // Display packet information
        },
        contentType: 'application/json'
    });
}
and the PHP would check the accumulating log and send the situation.
Another possibility is to have the PHP script block up to 20 seconds, not enough to make AJAX time out and give up, and immediately return in case of changes. You would then employ an asynchronous AJAX function to drive the poll back-to-back.
This way, the asynchronous function starts and immediately goes to sleep while the PHP script is sleeping too. After 20 seconds, the call returns and is immediately re-issued, sleeping again.
The net effect is to keep one connection constantly open, and changes being echoed back to client side Javascript immediately. You have to manage connection interruptions, though. But this way, every 20 seconds you only issue one call, and still manage to be alerted almost instantly.
Server side PHP can check the log file's size at the start (last read position being saved in the session), and keep it open read only in shared mode and block reads with fgets(), if the daemon allows it.
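A minimal sketch of what /json/poll-devices.php could look like under that scheme; the log path is an assumption and error handling is omitted:
<?php
// Long-poll endpoint: block up to ~20 seconds, return as soon as the log grows.
session_start();
$logFile = '/var/log/mydaemon.log';                        // assumed daemon log location
$offset  = isset($_SESSION['log_offset']) ? $_SESSION['log_offset'] : filesize($logFile);
session_write_close();                                     // don't hold the session lock while waiting

$deadline = time() + 20;
$lines    = array();
while (time() < $deadline) {
    clearstatcache(false, $logFile);
    if (filesize($logFile) > $offset) {
        $fh = fopen($logFile, 'r');
        fseek($fh, $offset);
        while (($line = fgets($fh)) !== false) {
            $lines[] = rtrim($line);
        }
        $offset = ftell($fh);
        fclose($fh);
        break;                                             // something changed: answer immediately
    }
    usleep(500000);                                        // otherwise wait half a second and re-check
}

session_start();
$_SESSION['log_offset'] = $offset;                         // remember how far we have read
session_write_close();

header('Content-Type: application/json');
echo json_encode($lines);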
Or you could pipe the daemon to logger, and get messages to syslog. Configure syslog to send those messages to a specific unbuffered file readable by PHP. Now PHP should be able to do everything with fopen(), ftell() and fgets(), without requiring additional notification systems.
If I call a PHP file via jQuery AJAX that contains a script to do some stuff that takes a while - for instance uploading a big video - and then I close the page: does the PHP script keep processing the video or not?
See here:
http://php.net/manual/en/function.ignore-user-abort.php
int ignore_user_abort ([ bool $value ] )
Sets whether a client disconnect should cause a script to be aborted.
When running PHP as a command line script, and the script's tty goes away without the script being terminated then the script will die the next time it tries to write anything, unless value is set to TRUE
There also is a PHP configuration option of the same name:
http://php.net/manual/en/misc.configuration.php
If you do nothing, then according to the PHP manual the default behaviour is to abort the script.
http://php.net/manual/en/features.connection-handling.php
NECESSARY UPDATE
It seems I (unknowingly) tricked my way to "reputation points", because I did NOT supply the (correct) answer, but here it is now thanks to testing and continued nudging from "mellamokb":
Quote:
"Ok, I took a look at the PHP source code and, if I didn't miss anything, I now have the answer. The "ignore_user_abort" flag is only checked when PHP receive an error trying to output something to the user. So, in my understanding, there is no way to interrupt code which doesn't produce any output."
Okay, I wasn't totally off, but it is important to know that it all depends on whether or not your script produced any output!
If you read THIS, also DO check out the comments below.
A PHP script running through a web server will not stop until:
someone kills the server
the server kills the PHP script
When the user aborts the request, PHP will continue until it tries to send something back to the browser.
For example, this script will continue forever even if the user aborts:
while (true) {
    echo 'go'.PHP_EOL;
}
It will go on forever because the echo writes into the buffer, and the buffer will not be sent to the browser until the script finishes, which will never happen.
The following script will stop as soon as the user aborts:
while (true) {
    echo 'go'.PHP_EOL;
    flush();
    ob_flush();
}
This script will stop, because flush() and ob_flush() will force PHP to send its buffer to the browser, which will stop the PHP script if the user has aborted.
The function ignore_user_abort() will force PHP to ignore the abort in this case.
Moreover, if you are using PHP sessions, there is another tricky situation.
For example, suppose you are doing AJAX and you actually send two AJAX requests to a PHP script, and that PHP script needs the session via session_start().
The first AJAX query will work normally; however, the second one will have to wait until the first call finishes, because the first script has a lock on the session.
The first script could release the session early with session_write_close();
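A minimal sketch of releasing the lock early (the $_SESSION key used here is just an example):
<?php
// Long-running AJAX endpoint: grab what we need from the session, then release the lock
// so a second AJAX request from the same user isn't blocked while we keep working.
session_start();
$userId = isset($_SESSION['user_id']) ? $_SESSION['user_id'] : null;  // example value
session_write_close();

ignore_user_abort(true);   // keep going even if the client disconnects
// ... long-running work using $userId ...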
By default no. See Connection Handling documentation, especially:
You can decide whether or not you want a client disconnect to cause your script to be aborted. Sometimes it is handy to always have your scripts run to completion even if there is no remote browser receiving the output. The default behaviour is however for your script to be aborted when the remote client disconnects.
The script will run for the time set by max_execution_time (the default is 30 seconds).
Warning: This function has no effect when PHP is running in safe mode. There is no workaround other than turning off safe mode or changing the time limit in the php.ini.
Note: The set_time_limit() function and the configuration directive max_execution_time only affect the execution time of the script itself. Any time spent on activity that happens outside the execution of the script such as system calls using system(), stream operations, database queries, etc. is not included when determining the maximum time that the script has been running. This is not true on Windows where the measured time is real.
quote from http://php.net/manual/en/function.set-time-limit.php
You can test this by running:
<?php
// Keep appending a timestamp as fast as possible until PHP stops the script.
if (file_exists('cocorico.txt')) {
    unlink('cocorico.txt');
}
while (true) {
    file_put_contents('cocorico.txt', microtime(true).PHP_EOL, FILE_APPEND);
}
and it will stop after 30 seconds (whether you close your browser or not).
You can get your default execution time with echo ini_get('max_execution_time'); and set it with, for example, set_time_limit(3);.
The answer marked as accepted is only correct about ignore_user_abort; but don't panic that your "failed" scripts will run forever - they won't unless you set the max execution time to 0 (unlimited).
From my limited understanding of how this stuff works, from the point of view of the HTTP protocol I would say yes, the script would keep running: the browser just sends a request to the server asking for the page, then the server starts executing the script and does not send or receive information from the browser until the script is done and has produced the HTML output, and only then does the server send the resulting output to the browser and finish the job.
See, there is no way for a browser to "tell" the server through the HTTP protocol that the user is not viewing the page anymore. However, HTTP runs on top of a TCP connection through stream sockets, and the TCP connection is kept alive until one of the ends chooses to abort it (or a certain timeout is reached); now, I really don't know how the browser handles this. The browser could just open a connection, send a request and close the connection, and the server would wait for the script and send the response on another connection. Or the browser could open a connection and KEEP it alive until the server responds on that same connection. If it works that way, then the server would really have a way to know whether the user is still viewing the page, simply by checking if the connection is still alive or has been shut down by the client. So that would be a no.
Dunno much about that tho.