When running PHP, you may want it to immediately return HTML to the browser, close the connection (more or less), and then continue processing...
The following works when the connection is HTTP/1.1, but does not when using Apache 2.4.25 with mod_http2 enabled and a browser that supports HTTP/2 (e.g. Firefox 52 or Chrome 57).
What happens is that the Connection: close header is not sent.
<?php
function http_connection_close($output_html = '') {
    apache_setenv('no-gzip', 1); // Disable mod_gzip or mod_deflate
    ignore_user_abort(true);
    session_write_close(); // Close the session (if open), so it doesn't block later requests
    while (ob_get_level() > 0) {
        $output_html = ob_get_clean() . $output_html;
    }
    $output_html = str_pad($output_html, 1023); // Prompt the server to send a packet
    $output_html .= "\n"; // For when the client is using fgets()
    header('Connection: close');
    header('Content-Length: ' . strlen($output_html));
    echo $output_html;
    flush();
}
http_connection_close('<html>...</html>');
// Do stuff...
?>
For similar approaches to this problem, see:
close a connection early
Continue processing after closing connection
Continue php script after connection close
As to why the Connection header is removed, the documentation for the nghttp2 library (as used by Apache) states:
https://github.com/nghttp2/nghttp2/blob/master/doc/programmers-guide.rst
HTTP/2 prohibits connection-specific header fields. The
following header fields must not appear: "Connection"...
So if we cannot tell the browser to close the connection via this header, how do we get this to work?
Or is there another way of telling the browser that it has everything for the HTML response, and that it shouldn't keep waiting for more data to arrive?
How to return HTTP response to the user and resume PHP processing
This answer works only when the web server communicates with PHP over the FastCGI protocol.
To send the reply to the user (via the web server) and resume processing in the background, without involving OS calls, call the fastcgi_finish_request() function.
Example:
<?php
echo '<h1>This is a heading</h1>'; // Output sent
fastcgi_finish_request(); // "Hang up" with the web server; the user receives what was echoed
while (true) {
    // Do a long task here.
    // while (true) is only used to indicate that this might be a long-running piece of code.
}
What to look out for
Even if the user does receive the output, the php-fpm child process will remain busy and unable to accept new requests until it is done with this long-running task.
If all available php-fpm child processes are busy, your users will experience a hanging page. Use with caution.
Both nginx and Apache know how to speak the FastCGI protocol, so there should be no need to swap out Apache for nginx.
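Since fastcgi_finish_request() only exists under PHP-FPM, it can help to guard the call so the same script still runs (without the early hang-up) under other SAPIs. A minimal sketch; do_background_work() is a hypothetical placeholder for the slow task:

```php
<?php
echo 'Response sent to the user';

if (function_exists('fastcgi_finish_request')) {
    // PHP-FPM: flush the response and close the connection now.
    fastcgi_finish_request();
} else {
    // Other SAPIs: the response is only guaranteed to be complete when the
    // script ends, so the user may end up waiting for the work below.
    flush();
}

do_background_work(); // hypothetical long-running task
```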
You can serve your slow PHP scripts over HTTP/1.1 using a special subdomain.
All you need to do is set up a second VirtualHost that responds with HTTP/1.1, using Apache's Protocols directive: https://httpd.apache.org/docs/2.4/en/mod/core.html#protocols
The big advantage of this technique is that your slow scripts can send some data to the browser long after everything else has been sent through the HTTP/2 stream.
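A minimal sketch of such a VirtualHost, assuming a hypothetical slow.example.com subdomain (the main site keeps h2; only this vhost is restricted to HTTP/1.1):

```apache
<VirtualHost *:443>
    ServerName slow.example.com
    # Only speak HTTP/1.1 on this vhost, so Connection: close behaves as expected.
    Protocols http/1.1
    # ... SSL and PHP handler configuration as on the main vhost ...
</VirtualHost>
```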
Related
How can I detect a client disconnection in PHP? I have a web service that uses the nusoap library and I want to detect when a web service client disconnects.
I tried with this code:
ignore_user_abort(true); // Continue running the script even when the client aborts/disconnects
ob_flush();
flush(); // Push PHP's output buffer to the client
usleep(500000);
echo "0\r\n\r\n"; // Send a few bytes so the next flush can detect a broken connection
ob_flush();
flush();
if (connection_aborted() != 0) {
    // do something
}
It works, but it has two problems:
The flush() function causes the headers to be sent, producing this warning: Warning: Cannot modify header information - headers already sent in .\lib\nusoap-0_9_5\lib\nusoap.php on line...
The response of my web service is not well formed due to the additional characters I send (echo "0\r\n\r\n") to check the client connection.
How can I resolve the problems listed above? Are there other ways to detect a web service client disconnection?
Thanks
I wanted to log to Keen.IO (an external logging service) after a PHP request was over, but I didn't want the user to have to wait for that process to finish. Is there a way to end the connection with the client before this happens? Additionally, I didn't want the client to hang if for some reason the Keen.IO service went down.
You can forcefully close the connection with fastcgi_finish_request() if you're using PHP-FPM. See this answer for more details.
<?php
echo 'Page content';
fastcgi_finish_request(); // cut connection
do_logging();
I have tried other methods before (eg. setting HTTP headers to disable keep-alive and defining the response length), but none of them worked. So if you're not using FPM, you're out of luck.
I'm converting PHP code to HHVM. One page in particular sometimes needs to flush() a status message to the browser before sending some emails and running a few other slow tasks, and then update the status message.
Before hhvm (using php-fpm and nginx) I used:
header('Content-Encoding: none;');
echo "About to send emails...";
if (ob_get_level() > 0) { ob_end_flush(); }
flush();
// Emails sent here
echo "Emails sent.";
So the Content-Encoding header stops gzip being used, then the flush sends the first message, and the second message is sent when the page ends.
Using HHVM (and nginx), setting the Content-Encoding header works (it shows up in the browser), but either HHVM or nginx ignores it and sends the page gzipped anyway, so the browser interprets the Content-Encoding: none response as binary data.
How can I disable gzip inside php code on HHVM?
(I know I could turn it off in the config files, but I want it kept on for nearly every page load except a few that will run slower.)
While my suggestion would be to use different nginx location paths with different gzip configurations, here is a better alternative for achieving what you want.
Better Solution:
It is often considered bad practice to keep a connection open (and the browser's loading bar spinning) while you're doing work in the background.
Since PHP 5.3.3 there is a function, fastcgi_finish_request(), which flushes the data and closes the connection while the script continues to work in the background.
Unfortunately, this is not yet supported on HHVM. However, there is an alternative way of doing it.
HHVM alternative:
You can use register_postsend_function('function_name'); instead. This closes the connection, and the given function will be executed in the background.
Here is an example:
<?php
echo "and ...";
register_postsend_function(function () {
    echo "... you should not be seeing this";
    sleep(10); // do a lot of work
});
die();
I've got Apache as a back-end server, which runs PHP scripts, and nginx as a reverse-proxy server, which deals with static content.
A PHP script gives me the ID of some process and then performs that process (which is pretty long). I need to pass only the ID of that process to the browser.
// ...
ob_start();
echo json_encode($arResult); // only this data should be passed to the browser
$contentLength = ob_get_length();
header('Connection: close');
header('Content-Length: ' . $contentLength);
ob_end_flush(); // flush and close the output buffer
flush();        // push the output to the client
// then the long process is performed
// then performed a long process
(I check the status of the process with another AJAX script.)
This works fine under Apache alone. But I have problems when Apache is behind nginx: I get the response only when the process has completely finished.
nginx settings:
server {
    # ...
    proxy_set_header Connection close;
    proxy_pass_header Content-Length;
    # ...
}
But I still see Connection: keep-alive in Firebug.
How can I get nginx to pass on the response from Apache immediately?
Hope the question is clear.
Thanks.
Have you tried proxy_buffering off in nginx? I'm not sure it will close the connection, but at least the response will be transmitted as-is to the client. :-)
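A sketch of the relevant nginx configuration, assuming a hypothetical Apache backend on 127.0.0.1:8080 (adjust the location and upstream address to your setup); with buffering off, nginx passes each chunk from Apache to the client as soon as it arrives:

```nginx
location / {
    proxy_pass http://127.0.0.1:8080; # hypothetical Apache backend address
    proxy_buffering off;              # relay the upstream response immediately
}
```

Alternatively, a backend can disable buffering for a single response by sending the X-Accel-Buffering: no header, leaving buffering on for everything else.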
Nginx does not honor any of the flush() methods in PHP when used in a fastcgi or reverse-proxy scheme.
I have tested the many proxy_buffering_* and buffer_size_* configurations without success under nginx/0.8.49. It always waits until the PHP process exits.
If your content is big, you have to tune the proxy buffers:
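For example, a hedged sketch of the buffer directives involved (the sizes are illustrative, not recommendations; they need to be tuned to your typical response size):

```nginx
location / {
    proxy_pass http://127.0.0.1:8080;   # hypothetical Apache backend address
    proxy_buffer_size 8k;               # buffer for the response headers
    proxy_buffers 16 32k;               # number and size of body buffers
    proxy_busy_buffers_size 64k;        # how much may be busy sending to the client
}
```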
client closed prematurely connection while sending to client, in nginx
I'm running a PHP application which responds to the client in about 1 minute (the page keeps loading all this time). However, the response is displayed all at once, so I would like to know if there is any config in the Apache server/PHP to display the response as it is produced. For example I have:
echo "test";
$rez = file_get_contents($URL);
do something ...
But the result of the echo is displayed only after the application has completed all the tasks (file_get_contents and everything else). So I need to configure the server/PHP to display it at execution time.
1) http://php.net/manual/en/function.flush.php
2) output_buffering = off for PHP
3) Disable gzip for PHP
4) Disable gzip in apache
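The list above can be sketched directly in PHP (a hedged example: output_buffering itself cannot be changed at runtime and must already be off in php.ini, and apache_setenv() only exists under mod_php; $URL stands in for the URL from the question):

```php
<?php
// 1)/2) output_buffering must already be off in php.ini; drain any buffers
// that were opened before this script ran.
while (ob_get_level() > 0) {
    ob_end_flush();
}
ini_set('zlib.output_compression', 'off'); // 3) disable PHP-level gzip
if (function_exists('apache_setenv')) {
    apache_setenv('no-gzip', '1'); // 4) disable mod_deflate/mod_gzip for this request
}

echo "test";
flush(); // push "test" to the client before the slow work starts

$URL = 'http://example.com/slow-data'; // placeholder for the URL from the question
$rez = file_get_contents($URL);        // the slow task
// do something ...
```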
Use PHP's flush() function, calling it right after the output you want the client to see immediately:
echo "test";
flush();
$rez = file_get_contents($URL);
http://php.net/flush
If $URL is sending data in real time, and it isn't the source of the stall anyway, you can try connecting via sockets (manually sending the HTTP request); then, while reading the incoming data from the socket, you can display output continuously by writing out the buffer used to receive the socket data and flush()ing it to the user's browser.
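A hedged sketch of that socket approach, assuming a hypothetical http://example.com/stream endpoint on plain port 80 (no TLS and no chunked-encoding handling, for illustration only):

```php
<?php
$fp = fsockopen('example.com', 80, $errno, $errstr, 30);
if ($fp === false) {
    die("Connection failed: $errstr ($errno)");
}

// Manually send the HTTP request.
fwrite($fp, "GET /stream HTTP/1.0\r\nHost: example.com\r\nConnection: close\r\n\r\n");

// Relay the response to the browser as it arrives.
while (!feof($fp)) {
    echo fread($fp, 1024); // write out each chunk as soon as it is read...
    flush();               // ...and push it to the user's browser immediately
}
fclose($fp);
```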