I have some basic website tracking software that sends a JSON object via jQuery AJAX, cross-domain, from a webpage to a server where the data is processed by a PHP script. This is triggered on window.onbeforeunload.
When benchmarking my PHP script I realised that the client website, on a different domain, still waits for the PHP script to finish running before loading the next page. For example, a visitor to a client site navigates to another page; we send the JSON object cross-domain to the server to process it. If I add sleep(30); to my PHP script, the client website will not load the next page until the script finishes (30+ seconds).
I do not need to return any values from this script, so how can I ensure it runs without having any impact on the client site?
I hope I've explained myself well enough. Ask any questions if I haven't, thanks.
SOLUTION:
This is what worked for me (http://php.net/manual/en/features.connection-handling.php#93441):
ob_end_clean();                    // discard any existing output buffer
header("Connection: close");       // tell the client the connection will be closed
header("Content-Encoding: none");  // prevent compression so Content-Length stays accurate
ignore_user_abort(true);           // optional: keep running if the client disconnects
ob_start();
echo 'Text user will see';
$size = ob_get_length();
header("Content-Length: $size");   // the client knows exactly how much to read
ob_end_flush();                    // flush PHP's output buffer...
flush();                           // ...and the web server's buffer; both calls are needed
ob_end_clean();
// do processing here
sleep(5);
echo 'Text user will never see';   // the response is already complete, so this never reaches the client
// do some processing
For PHP-FPM:
To close the connection with the client (i.e. send the response) but keep the script running to process data, the fastcgi_finish_request function can help you.
For Apache:
See this link - close a connection early
Related
When running PHP, you sometimes want it to immediately return HTML to the browser, close the connection (ish), and then continue processing...
The following works when the connection is HTTP/1.1, but does not when using Apache 2.4.25, with mod_http2 enabled, and you have a browser that supports HTTP/2 (e.g. Firefox 52 or Chrome 57).
What happens is the Connection: close header is not sent.
<?php
function http_connection_close($output_html = '') {
    apache_setenv('no-gzip', 1); // disable mod_gzip or mod_deflate so Content-Length stays accurate
    ignore_user_abort(true);

    // Collect and close any open output buffers, keeping their content
    while (ob_get_level() > 0) {
        $output_html = ob_get_clean() . $output_html;
    }

    $output_html = str_pad($output_html, 1023); // prompt the server to send a full packet
    $output_html .= "\n"; // for when the client is using fgets()

    header('Connection: close');
    header('Content-Length: ' . strlen($output_html));

    echo $output_html;
    flush();
}

http_connection_close('<html>...</html>');

// Do stuff...
?>
For similar approaches to this problem, see:
close a connection early
Continue processing after closing connection
Continue php script after connection close
And as to why the connection header is removed, the documentation for the nghttp2 library (as used by Apache) states:
https://github.com/nghttp2/nghttp2/blob/master/doc/programmers-guide.rst
HTTP/2 prohibits connection-specific header fields. The
following header fields must not appear: "Connection"...
So if we cannot tell the browser to close the connection via this header, how do we get this to work?
Or is there another way of telling the browser that it already has the complete HTML response, and that it shouldn't keep waiting for more data to arrive?
How to return HTTP response to the user and resume PHP processing
This answer works only when the web server communicates with PHP over the FastCGI protocol.
To send the reply to the user (via the web server) and resume processing in the background, without involving OS calls, invoke the fastcgi_finish_request() function.
Example:
<?php
echo '<h1>This is a heading</h1>'; // output sent to the user

fastcgi_finish_request(); // "hang up" with the web server; the user receives what was echoed

while (true)
{
    // Do a long task here
    // while (true) is used to indicate this might be a long-running piece of code
}
What to look out for
Even though the user receives the output, the php-fpm child process remains busy and unable to accept new requests until it is done processing this long-running task.
If all available php-fpm child processes are busy, your users will experience a hanging page. Use with caution.
Both nginx and Apache know how to deal with the FastCGI protocol, so there should be no need to swap out Apache for nginx.
You can serve your slow PHP scripts via HTTP/1.1 using a special subdomain.
All you need to do is set up a second VirtualHost that responds with HTTP/1.1 using Apache's Protocols directive: https://httpd.apache.org/docs/2.4/en/mod/core.html#protocols
The big advantage of this technique is that your slow scripts can send data to the browser long after everything else has been sent through the HTTP/2 stream.
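A minimal sketch of such a VirtualHost, assuming a hypothetical slow.example.com subdomain and document root; the Protocols directive restricts it to HTTP/1.1 so the Connection: close approach keeps working there:
<VirtualHost *:443>
    ServerName slow.example.com
    DocumentRoot /var/www/slow
    Protocols http/1.1
    # ... the usual TLS and PHP handler configuration ...
</VirtualHost>
Requests to the main domain continue to use HTTP/2; only the slow scripts are pointed at this subdomain.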
Is it possible for nginx to trigger a php-fpm process, but then close the nginx worker and quickly return an empty page with status 200?
I have some slow php processes that need kicking off a few times a week. They can take between 3 and 4 minutes each. I trigger them with a cron manager site. The php process writes a lock file at the start, and when the process is complete an email is sent and finally the lock file is removed.
Following this guide, in my php-fpm worker pool, I have this: request_terminate_timeout = 300 and in my nginx site config I have fastcgi_read_timeout 300;
It works, but I don't care about the on-screen result. And the cron service I use has a time limit of 5 seconds, and after repeated timeouts, it disables the job.
Yes, I know I could fork a process in php, let it run in the background, and return a 200 to nginx. And yes, I could pay and upgrade my cron service. Nonetheless, it would be an interesting and useful thing to know, anyway.
So, is this possible, or does php-fpm require an open and "live" socket? I ask that because on the "increase your timeout" page referred to above, one answer says
"Lowest of three. It’s line chain. Nginx->PHP-FPM->PHP. Whoever dies
first will break the chain".
In other words, does that mean that I can never "trigger" a process, but then close the nginx part of the trigger?
You can.
exec() a PHP CLI script, adding a trailing & and redirecting output to a log file or /dev/null; pass any parameters as JSON or serialized data (use escapeshellarg()). The exec() call will return immediately (a sketch follows below); or
use PHP's ignore_user_abort(), send a Connection: close header, and flush any output buffers as well as calling a normal flush(). Put any slow code after that. You'll need to test this under nginx.
Either way, return a 202 (Accepted) status code to signify that the request was taken but there is no response body. And it's up to you to make sure your script doesn't run forever; give it a heartbeat so it touch()es a file every so often. If the file is old and the script is still running, kill it.
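A minimal sketch of the first option, assuming a hypothetical worker script at /var/www/jobs/slow_job.php and a hypothetical $payload array:
<?php
$payload = ['user' => 123, 'action' => 'rebuild'];
$arg = escapeshellarg(json_encode($payload));

// The trailing "&" backgrounds the process and the redirect detaches its output,
// so exec() returns immediately instead of waiting for the job to finish.
exec("php /var/www/jobs/slow_job.php $arg > /dev/null 2>&1 &");

// Respond straight away; the CLI process keeps running on its own.
http_response_code(202); // Accepted: request taken, processing continues in the background
echo 'Job queued';
The worker script itself would json_decode($argv[1], true) to recover the parameters.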
Thanks to @Walf's answer combined with this example from the php site, this SO answer and a little fiddling, this appears to be a solution for nginx that requires no messing with any php or nginx ini or conf files.
$start = microtime(true);

ob_end_clean();                    // discard any existing output buffer
header("Connection: close");       // tell the client the connection will be closed
header('X-Accel-Buffering: no');   // stop nginx from buffering the response
header("Content-Encoding: none");  // prevent compression so Content-Length stays accurate
ignore_user_abort(true);           // optional: keep running if the client disconnects

ob_start();
echo 'Text user will see';
$size = ob_get_length();
header("Content-Length: $size");   // the client knows exactly how much to read
ob_end_flush();                    // flush PHP's output buffer...
flush();                           // ...and the web server's buffer; both calls are needed
ob_end_clean();

sleep(35); // simulate something longer than the default 30s timeout

$time_elapsed_secs = microtime(true) - $start;
echo $time_elapsed_secs; // you will never see this!
Or, at least, it works perfectly for what I want it to do. Thanks for the answers.
I want to ask if anyone knows a way I can delay a PHP script without occupying a connection slot the whole time. I'm not completely sure about this, but I was told that Apache has a limit on concurrent connections, or on the number of scripts running at the same time (I can't exactly recall which). The script in question needs to run for about 1 to 3 hours, and it doesn't really do anything heavy; it actually sleeps about 90% of the time.
If you're running a script and not expecting any response, one option is to run it from a terminal on the server with php "dir/to/php/script.php".
If the script has to be kicked off remotely, you can instead tell the client the response is finished, so the script keeps running but does not keep the connection alive: header('Connection: Close');
Example:
<?php
header('Connection: Close'); // must be sent before any output
echo "The server is now doing some complex actions in the background..."; // or even a redirect instead
file_put_contents("/tmp/test.txt", file_get_contents("largest_file_in_the_world.txt")); // the slow work
?>
In addition, just sending the Connection: close header wasn't enough; here's how the connection actually gets closed:
ignore_user_abort(true);           // keep running after the client disconnects
header("Connection: close", true);
header("Content-Length: 0", true); // nothing left for the client to read
ob_end_flush();
flush();
fastcgi_finish_request();          // under PHP-FPM this is what actually finishes the request
Source
I have at the moment an AJAX request going to sendMail.php; it closes the connection immediately (using the Connection: Close header) and continues processing for approx 30 seconds.
But the problem I'm experiencing now is that when that same client tries to load any PHP page from that server, it has to wait until sendMail.php has finished processing.
Is there any way around this?
I was reading in some other SO questions that it may be session related, but I'm not using any sessions. I even tried calling session_write_close() at the start of the sendMail.php script.
Example code (this is hacky and overdone, but it works):
//File: SendMail.php
//error_reporting(E_ALL);
error_reporting(0);

session_write_close();             // release any session lock so other requests aren't blocked
set_time_limit(0);

// deliberately set several ways (this is the "overdone" part)
ignore_user_abort(1);
ignore_user_abort(true);
ini_set('ignore_user_abort', '1');

apache_setenv('no-gzip', 1);       // disable compression so Content-Length stays accurate
apache_setenv('KeepAlive', 0);
ini_set('zlib.output_compression', 0);
ini_set('output_buffering', 0);

$size = ob_get_length();

// send headers to tell the browser to close the connection
header("Content-Length: $size");
header('Connection: Close');

// flush all output
ignore_user_abort(true);
ob_end_flush();
ob_flush();
flush();

sleep(30); // the real code does more, but for this example it just sleeps for 30 seconds
The rest of the referenced material is a normal navigation via GET.
It almost sounds like your AJAX call is calling a script that is chewing up your available resources on the server (whether that be memory/CPU/Disk I/O/etc) and that is preventing any new scripts from being able to run. Double check to make sure that you have enough resources allocated to get the job done.
On a side note, typically you don't allow your everyday user to spawn processes that run that long on the server simply because you're opening up the opportunity for them to overload your system and give them an easy opportunity for a DDOS attack. I'd consider Ibere's suggestion of moving this to a cron if that's a possibility.
I want to have a simple PHP script that loops to do something every ten minutes. It would be hosted offsite, and I would activate it via my browser. I don't have access to the server other than my web space, so 'cron' as such isn't an option.
(I'm happy to have this stop after a certain time or number of job cycles. I just need it to continue running after I point the browser away from the page script.)
Is such a thing possible? Thanks.
It's possible, see ignore_user_abort():
set_time_limit(0);
ignore_user_abort(true);
while (true) // forever
{
// your code
}
You can use these two functions in combination with sleep(), usleep(), time_nanosleep() or, even better, time_sleep_until() to achieve a cron-like effect.
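For example, a minimal sketch of such a loop, assuming a hypothetical do_the_job() function for the actual work and a cap on the number of cycles so it eventually stops:
<?php
set_time_limit(0);         // no PHP execution time limit
ignore_user_abort(true);   // keep running after the browser navigates away

function do_the_job() {
    // placeholder for whatever needs doing every ten minutes
}

$next = time();
for ($i = 0; $i < 100; $i++) {
    do_the_job();
    $next += 600;                // schedule the next run ten minutes later
    time_sleep_until($next);     // sleep until that exact timestamp, regardless of how long the job took
}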
PHP scripts time out after a certain amount of time - they're not designed for long-running programs. You'll have to find some way to prod the script every ten minutes.
Have a look at set_time_limit.
This is from the above page:
You can do set_time_limit(0); so that the script will run forever - however this is not recommended and your web server might catch you out with an imposed HTTP timeout (usually around 5 minutes).
Maybe you can write another script on a computer you do have access to, and have that script request the other one periodically.
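A small sketch of such a poller, assuming a hypothetical https://example.com/job.php URL for the script being triggered:
<?php
// Run this on a machine you control; it simply hits the remote script every ten minutes.
while (true) {
    file_get_contents('https://example.com/job.php'); // trigger the remote script
    sleep(600);                                       // wait ten minutes before the next request
}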
You can look at pcntl_fork().
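A minimal sketch of that approach (the long-running work here is hypothetical). Note that the pcntl extension is normally only available in the CLI SAPI, not under mod_php or PHP-FPM:
<?php
$pid = pcntl_fork();

if ($pid === -1) {
    die('Could not fork');
} elseif ($pid > 0) {
    // Parent process: report success and return immediately.
    echo "Job started in background (child PID $pid)\n";
    exit(0);
} else {
    // Child process: carry on with the long-running work.
    sleep(600); // placeholder for the real task
}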
Here's a hack for your problem:
// Anything before disconnecting, but nothing to be output to the client!
ob_end_clean();
header('Connection: close');
ob_start();
// Here you can output anything before disconnecting
echo "Bla bla bla";
$outsize = ob_get_length();
header('Content-Length: '.$outsize);
ob_end_flush();
flush();
// Do your background processing here
// and feel free to quit anytime you want.
A way to do this might be to launch a new php process from the web page, e.g.
<?php
exec("php script_that_runs_for_a_while.php > /dev/null");
?>
Redirecting output to /dev/null and appending & means (on a Linux system) that your page will finish immediately, rather than waiting for the launched script to complete.
So then that script that launches can do whatever it likes, since it is basically just a new process running on the server.
Note that at the start of your long-running script, you will want to use the set_time_limit function to set the maximum execution time to some large value (or 0 for no limit).