I want to use the fastcgi_finish_request() function.
I have cPanel installed on my server, and PHP and Apache are both configured through it. Since I cannot edit the Apache or PHP configuration manually (because of cPanel), I used EasyApache in WHM to rebuild them in order to get FastCGI.
I saw an option called Mod FCGID, so I enabled it.
After rebuilding PHP and Apache with that option enabled, I still get "Call to undefined function" when calling fastcgi_finish_request().
A little late, but here is some useful info for others, from my experience working with PHP 5.5.7.
PHP using mod_php (standard Apache):
ob_start();
// Note: header() adds its own line terminator; no "\r\n" is needed
header('Connection: close');
header('Content-Encoding: none'); // keep mod_gzip/mod_deflate from buffering
// your code here
$size = ob_get_length();
header('Content-Length: ' . $size);
// send the buffered output immediately and close the connection
ob_end_flush();
flush();
// run other processing without the client attached.
For PHP using FastCGI and PHP-FPM:
// your code here
fastcgi_finish_request();
// run other process without the client attached.
Note that for us, after fastcgi_finish_request() was executed, error_log() no longer worked. I assume this is because the connection to Apache is also severed, so PHP can no longer communicate with it to log the error.
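If you still need logging after the hang-up, one workaround sketch is to write to a file yourself with error_log()'s message type 3, which appends the message directly to a file you name (the log path here is hypothetical):
// hypothetical log file; message type 3 appends $message to it verbatim
fastcgi_finish_request();
error_log(date('c') . " background task started\n", 3, '/var/log/app/background.log');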
fastcgi_finish_request() is a PHP-FPM SAPI-specific function, unavailable in the standard php-cgi (FastCGI) binary used by Apache (mod_fcgid, mod_fastcgi), nginx, lighttpd, etc.
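A defensive sketch that runs under both kinds of SAPI (assuming ob_start() was called earlier, so ob_get_length() has a buffer to measure; the fallback mirrors the mod_php snippet above):
if (function_exists('fastcgi_finish_request')) {
    fastcgi_finish_request(); // PHP-FPM: hand the response back to the web server
} else {
    // other SAPIs: emulate it by flushing everything we have
    header('Connection: close');
    header('Content-Length: ' . ob_get_length());
    ob_end_flush();
    flush();
}
// long-running work continues here, client already served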
Related
When using mod_php or FastCGI the code below executes perfectly and I get output every second, but after switching to PHP-FPM the code lags a few seconds before outputting, depending on the output size.
I tried the following, and combinations of them:
setting output_buffering = 0 in php.ini
ob_implicit_flush()
ob_start()
ob_end_flush()
header('Content-Encoding: none')
implicit_flush = 1
ob_end_clean()
<?php
header('Content-Type: text/event-stream');
header('Cache-Control: no-cache');

while (true) {
    $time = date('r');
    echo "retry: 1000\r\n";
    echo 'data: ' . $time;
    echo "\r\n\r\n";
    if (ob_get_level() > 0) {
        ob_flush(); // flush PHP's own buffer, if one is active
    }
    flush();
    sleep(1);
}
?>
This is for a production server, so mod_php is not an option. I also got it to work under FastCGI with:
FcgidOutputBufferSize 0
Is there a way to make the code work on PHP-FPM so the output is sent immediately, as it is with mod_php and FastCGI?
P.S. Running: Ubuntu 18.04, Apache 2.4.29, PHP 7.2.
After a few days I have discovered that the only way to get this to work on PHP-FPM is to fill the output buffer. This is really inefficient! Let me explain:
Say you are using Server-Sent Events and your output buffer is 4096 bytes, processing every second: even if you do not return anything, you still send about 4 KB of output to the client, whereas mod_php and FastCGI send data only when there actually is output.
If anyone else has this problem, this is my best solution: run the main site on PHP-FPM (e.g. example.com), then make a sub-domain (e.g. push.example.com) and set up FastCGI / mod_php [NOT RECOMMENDED IN PRODUCTION] on the sub-domain. Now you can keep the connection open and process data without sending output to the client.
P.S. I saved session variables in a database so both the domain and sub-domain can access them (see https://github.com/dominicklee/PHP-MySQL-Sessions). The other thing is to have the sub-domain send CORS headers; in PHP, add header('Access-Control-Allow-Origin: https://example.com');
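A minimal sketch of what the sub-domain endpoint could look like (hostnames are placeholders; the loop body reuses the SSE pattern from the question above):
<?php
// served from push.example.com; let the main site read the stream
header('Access-Control-Allow-Origin: https://example.com');
header('Content-Type: text/event-stream');
header('Cache-Control: no-cache');

while (!connection_aborted()) {
    echo 'data: ' . date('r') . "\n\n";
    flush();
    sleep(1);
}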
I use Ubuntu 17.04, Apache 2.4, proxy_fcgi, and php-fpm. Everything works and connects nicely, except for flushing for Server-Sent Events.
Flushing used to work nicely with mod_fastcgi and FastCgiExternalServer with "-flush". Ubuntu 17.04 no longer includes mod_fastcgi, and proxy_fcgi is recommended instead.
With proxy_fcgi I've disabled gzip and output buffering and used "Content-Encoding: none", but the only real way to get connection_aborted() and flush() to work is to send around 32K (I'm guessing this is because of proxy buffering?).
The Apache docs say that you cannot set ProxyReceiveBufferSize or ProxyIOBufferSize lower than 512.
There really should be an easier way to do this with proxy_fcgi.
Example code of sending data for Server Sent Events:
while (!connection_aborted()) {
    echo 'data: {}' . PHP_EOL . PHP_EOL;
    flush();
}
Edit: I've tried ob_flush() too, but I had already disabled output buffering (ob_*) with ob_end_clean(), so ob_flush() returns an error.
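Given that ~32K observation, one workaround sketch is to pad each event with an SSE comment line (lines starting with ":" are ignored by EventSource) so every message crosses the proxy buffer; the 32768-byte figure is an assumption based on the behaviour described above:
while (!connection_aborted()) {
    echo 'data: {}' . PHP_EOL;
    // SSE comment used purely as padding to push the response past the proxy buffer
    echo ':' . str_repeat(' ', 32768) . PHP_EOL . PHP_EOL;
    flush();
    sleep(1);
}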
Although this question was asked some years ago, I just ran into a similar problem with Apache 2.4 and mod_fcgid. The PHP application returned data directly without buffering (tested with the internal server: php -S 0.0.0.0:8080 index.php), but the output was buffered when running under Apache.
The following configuration disables output buffering for mod_fcgid (the default size is 65536 bytes):
FcgidOutputBufferSize 0
https://httpd.apache.org/mod_fcgid/mod/mod_fcgid.html#FcgidOutputBufferSize
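For reference, a sketch of where the directive can live in the Apache configuration (the IfModule wrapper just guards against the module being absent):
<IfModule mod_fcgid.c>
    # disable mod_fcgid's 64 KB output buffer so data is forwarded as PHP emits it
    FcgidOutputBufferSize 0
</IfModule>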
When running PHP, and you want it to immediately return HTML to the browser, close the connection (ish), and then continue processing...
The following works when the connection is HTTP/1.1, but does not when using Apache 2.4.25 with mod_http2 enabled and a browser that supports HTTP/2 (e.g. Firefox 52 or Chrome 57).
What happens is that the Connection: close header is not sent.
<?php
function http_connection_close($output_html = '') {
    apache_setenv('no-gzip', 1); // disable mod_gzip or mod_deflate
    ignore_user_abort(true);
    session_write_close(); // close the session (if open) so it is not kept locked
    while (ob_get_level() > 0) {
        $output_html = ob_get_clean() . $output_html;
    }
    $output_html = str_pad($output_html, 1023); // prompt the server to send a packet
    $output_html .= "\n"; // for when the client is using fgets()
    header('Connection: close');
    header('Content-Length: ' . strlen($output_html));
    echo $output_html;
    flush();
}
http_connection_close('<html>...</html>');
// Do stuff...
?>
For similar approaches to this problem, see:
close a connection early
Continue processing after closing connection
Continue php script after connection close
And as to why the Connection header is removed, the documentation for the nghttp2 library (as used by Apache) states:
https://github.com/nghttp2/nghttp2/blob/master/doc/programmers-guide.rst
HTTP/2 prohibits connection-specific header fields. The
following header fields must not appear: "Connection"...
So if we cannot tell the browser to close the connection via this header, how do we get this to work?
Or is there another way of telling the browser that it has everything for the HTML response, and that it shouldn't keep waiting for more data to arrive?
How to return HTTP response to the user and resume PHP processing
This answer works only when the web server communicates with PHP over the FastCGI protocol.
To send the reply to the user (via the web server) and resume processing in the background, without involving OS calls, invoke fastcgi_finish_request().
Example:
<?php
echo '<h1>This is a heading</h1>'; // output sent to the client

fastcgi_finish_request(); // "hang up" with the web server; the user receives what was echoed

while (true) {
    // do a long task here;
    // while (true) just stands in for a long-running piece of code
}
What to look out for
Even if the user does receive the output, the php-fpm child process stays busy and cannot accept new requests until it finishes this long-running task.
If all available php-fpm child processes are busy, your users will experience hanging pages. Use with caution.
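Since each hung-up-but-still-working request pins a worker, it is worth sizing the pool with that in mind. A sketch of the relevant php-fpm pool directives (the values are illustrative, not recommendations):
; www.conf -- illustrative values only
pm = dynamic
pm.max_children = 20     ; hard cap on simultaneous workers, incl. background tasks
pm.start_servers = 4
pm.min_spare_servers = 2
pm.max_spare_servers = 6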
Both nginx and Apache know how to speak the FastCGI protocol, so there should be no need to swap Apache out for nginx.
You can serve your slow PHP scripts via HTTP/1.1 using a special subdomain.
All you need to do is set up a second VirtualHost that responds with HTTP/1.1, using Apache's Protocols directive: https://httpd.apache.org/docs/2.4/en/mod/core.html#protocols
The big advantage of this technique is that your slow scripts can send some data to the browser long after everything else has been sent through the HTTP/2 stream.
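A sketch of such a pair of VirtualHosts (hostnames and paths are placeholders):
<VirtualHost *:443>
    ServerName example.com
    Protocols h2 http/1.1        # main site negotiates HTTP/2
    DocumentRoot /var/www/main
</VirtualHost>

<VirtualHost *:443>
    ServerName slow.example.com
    Protocols http/1.1           # force HTTP/1.1 so "Connection: close" survives
    DocumentRoot /var/www/slow
</VirtualHost>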
Is it possible for nginx to trigger a php-fpm process, but then close the nginx worker and quickly return an empty page with status 200?
I have some slow php processes that need kicking off a few times a week. They can take between 3 and 4 minutes each. I trigger them with a cron manager site. The php process writes a lock file at the start, and when the process is complete an email is sent and finally the lock file is removed.
Following this guide, my php-fpm worker pool has request_terminate_timeout = 300, and my nginx site config has fastcgi_read_timeout 300;
It works, but I don't care about the on-screen result, and the cron service I use has a time limit of 5 seconds; after repeated timeouts it disables the job.
Yes, I know I could fork a process in php, let it run in the background, and return a 200 to nginx. And yes, I could pay and upgrade my cron service. Nonetheless, it would be an interesting and useful thing to know, anyway.
So, is this possible, or does php-fpm require an open and "live" socket? I ask because, on the "increase your timeout" page referred to above, one answer says:
"Lowest of three. It’s line chain. Nginx->PHP-FPM->PHP. Whoever dies
first will break the chain".
In other words, does that mean that I can never "trigger" a process, but then close the nginx part of the trigger?
You can.
exec a PHP CLI script, adding a trailing & and redirecting output to a log file or /dev/null; pass any parameters as JSON or serialized values (use escapeshellarg()). The exec call will return 0 immediately (no error; a sketch of this option follows below); or
use PHP's ignore_user_abort(), send a Connection: close header, and flush any output buffers as well as doing a normal flush(). Put any slow code after that. You'll need to test this under nginx.
Either way, return a 2xx code (202 Accepted fits) to signify acceptance without a full response. And it's up to you to make sure your script doesn't run forever; give it a heartbeat so it touch()es a file every so often. If the file is old and the script is still running, kill it.
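A sketch of the first option (the worker path, payload, and log file are hypothetical):
<?php
// hand the job to a detached CLI process; the trailing '&' returns control immediately
$payload = escapeshellarg(json_encode(['job' => 'weekly-report']));
exec('php /var/www/worker.php ' . $payload . ' >> /var/log/worker.log 2>&1 &');
http_response_code(202); // accepted; the client has nothing more to wait for
echo 'accepted';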
Thanks to @Walf's answer, combined with this example from the php site, this SO answer, and a little fiddling, the following appears to be a solution for nginx that requires no messing with any PHP or nginx ini or conf files.
$start = microtime(true);

if (ob_get_level() > 0) {
    ob_end_clean(); // drop any buffer that php.ini's output_buffering started
}
header('Connection: close');
header('X-Accel-Buffering: no'); // tell nginx not to buffer this response
header('Content-Encoding: none');
ignore_user_abort(true); // optional

ob_start();
echo 'Text user will see';
$size = ob_get_length();
header('Content-Length: ' . $size);
ob_end_flush(); // strange behaviour: this will not work...
flush();        // ...unless both are called!
if (ob_get_level() > 0) {
    ob_end_clean();
}

sleep(35); // simulate something longer than the default 30 s timeout

$time_elapsed_secs = microtime(true) - $start;
echo $time_elapsed_secs; // you will never see this!
Or, at least, it works perfectly for what I want it to do. Thanks for the answers.
I've got Apache as a back-end server, which runs PHP scripts, and nginx as a reverse-proxy server which deals with static content.
A PHP script gives me the ID of some process and then performs that process (which is pretty long). I need to pass only the ID of that process to the browser.
// ...
ob_start();
echo json_encode($arResult); // only this data should be passed to the browser
$contentLength = ob_get_length();
header('Connection: close');
header('Content-Length: ' . $contentLength);
ob_end_flush();
flush();
// then the long process is performed
(I check the status of the process with another AJAX script.)
This works fine under Apache alone, but I have problems when Apache is behind nginx: in that case I get the response only when the process has completely finished.
nginx settings:
server {
    # ...
    proxy_set_header Connection close;
    proxy_pass_header Content-Length;
    # ...
}
But I still get Connection: keep-alive in Firebug.
How can I get nginx to pass the response from Apache on immediately?
Hope the question is clear.
Thanks.
Have you tried proxy_buffering off; in nginx? I'm not sure it will close the connection, but at least the response will be transmitted as-is to the client. :-)
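A sketch of where that directive would sit (the upstream address is a placeholder for the Apache back-end):
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:8080;   # Apache back-end
        proxy_buffering off;                # forward Apache's bytes as they arrive
    }
}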
nginx does not support any of PHP's flush() mechanisms when used in a FastCGI or reverse-proxy setup.
I have tested all of the many proxy_buffering_* and buffer_size_* configurations without success, under nginx/0.8.49. It will always wait until the PHP process exits.
If your content is big, you have to tune the proxy buffers.
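A sketch of the buffer directives in question (the sizes are illustrative, not recommendations):
location / {
    proxy_pass http://127.0.0.1:8080;
    proxy_buffer_size 8k;          # headers and the start of the response
    proxy_buffers 16 32k;          # per-connection response buffers
    proxy_busy_buffers_size 64k;   # how much may be busy sending to the client
}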