We have code similar to this:
<?php
ob_implicit_flush(true);
ob_end_flush();
foreach ($arrayOfStrings as $string) {
echo time_expensive_function($string);
}
?>
In Apache, this would send each echo to the browser as it was output. In nginx/FastCGI, however, this doesn't work, because nginx buffers FastCGI responses by default.
Is it possible to make this work on nginx/FastCGI, and if so, how?
First, PHP has to flush everything correctly:
ob_end_flush();
flush();
Then, I found two working solutions:
1) Via Nginx configuration:
fastcgi_buffering off;
2) Via HTTP header in the PHP code:
header('X-Accel-Buffering: no');
Easy solution:
fastcgi_keep_conn on; # < solution
proxy_buffering off;
gzip off;
I didn't want to have to turn off gzip for the whole server or a whole directory, just for a few scripts, in a few specific cases.
All you need is this before anything is echoed:
header('Content-Encoding: none;');
Then do the flush as normal:
ob_end_flush();
flush();
Nginx seems to pick up on the encoding having been turned off and doesn't gzip.
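Putting those pieces together, a minimal sketch (time_expensive_function and $arrayOfStrings are the placeholders from the question above):

```php
<?php
// Sketch: disable gzip for this response, then flush each chunk.
header('Content-Encoding: none;'); // nginx sees this and skips gzipping
ob_end_flush();                    // close PHP's output buffer
foreach ($arrayOfStrings as $string) {
    echo time_expensive_function($string);
    flush();                       // push this chunk to nginx immediately
}
```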
Add the flush() function in your loop:
foreach ($arrayOfStrings as $string) {
echo time_expensive_function($string);
flush();
}
It might work, but not necessarily on each iteration (there's some magic involved!)
Add the -flush option to the FastCGI config; refer to the manual:
http://www.fastcgi.com/mod_fastcgi/docs/mod_fastcgi.html#FastCgiServer
http://www.fastcgi.com/mod_fastcgi/docs/mod_fastcgi.html#FastCgiConfig
http://www.fastcgi.com/mod_fastcgi/docs/mod_fastcgi.html#FastCgiExternalServer
From http://mailman.fastcgi.com/pipermail/fastcgi-developers/2009-July/000286.html
I needed both of these lines at the beginning of my script:
header('X-Accel-Buffering: no');
ob_implicit_flush(true);
Each line alone would also work; combining them made my browser get the result from the server even faster. I can't explain it, I just experienced it.
My configuration is nginx with php-fpm.
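For reference, a sketch of how a script might start with both lines (the rest of the script is whatever does the slow work):

```php
<?php
// Sketch: per-response nginx buffering opt-out plus implicit flushing.
header('X-Accel-Buffering: no'); // tell nginx not to buffer this response
ob_implicit_flush(true);         // flush automatically after each echo
// ... long-running work that echoes partial results ...
```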
Related
I need a longish-running (7-8 second) PHP script to output partial results to the user as they are found. I was previously able to accomplish this with an older version of php-fpm and nginx by doing the following:
Using these config settings in php:
ini_set('output_buffering', 0);
ini_set('implicit_flush', 1);
ini_set('zlib.output_compression', 0);
ob_end_clean();
set_time_limit(0);
header('X-Accel-Buffering: no');
and running ob_implicit_flush(1); flush(); every time I needed to output partial results.
Using these directives for nginx:
fastcgi_keep_conn on;
proxy_buffering off;
gzip off;
However, with an upgrade to PHP 7 and nginx 1.10.3, these settings no longer work.
I have tried adding these directives to nginx:
fastcgi_max_temp_file_size 0;
fastcgi_store off;
fastcgi_buffering off;
But these don't seem to do anything either. The results are still buffered until the PHP script finishes running, and are then sent all at once.
Is what I am asking for still possible?
(I appreciate suggestions that there are other ways to send partial results that don't involve disabling buffers, but that's not part of my question).
I think the only way you can do that is to split the initial script into multiple scripts.
You can then call each script from the frontend using Ajax and append the content to the DOM.
PHP scripts are synchronous for the most part, but by making Ajax calls (which run asynchronously) you can execute multiple PHP scripts in parallel.
Outputting content as soon as PHP generates it is fine when using Apache with PHP as a module as you can simply disable output_buffering in PHP and use flush() or implicit_flush(1). This is what I previously used and it worked fine.
I'm running into an issue since switching to PHP-FPM wherein I cannot get Apache (2.4) to output PHP's content until the entire script has completed. I still have output_buffering off and flushing in place, but that's not enough. Apache isn't using mod_gzip (and that would have affected the PHP module as well anyway).
Nginx has an option to disable proxy_buffering which, from reading other people's comments fixes this, but I cannot find any way of doing this in Apache.
Here's how PHP is currently being called within Apache:
<FilesMatch \.php$>
SetHandler "proxy:unix:/run/php-fpm/php-fpm.sock|fcgi://localhost/"
</FilesMatch>
<Proxy fcgi://localhost/ enablereuse=on retry=0 timeout=7200 max=500 flushpackets=on>
</Proxy>
The Apache documentation mentions flushpackets (used above), which appears to be what is needed, but then it also goes on to say that it currently only applies to AJP, not all proxied content, so it won't do anything in this case.
Echoing enough whitespace to fill the buffer may work, but it's a messy workaround which is far from ideal.
In short: does anyone know the correct way of having Apache send PHP content as soon as it's echoed, rather than waiting until script completion?
I successfully disabled output buffering by rewriting your Proxy section (based on this answer):
<FilesMatch \.php$>
SetHandler "proxy:unix:/run/php-fpm/php-fpm.sock|fcgi://localhost"
</FilesMatch>
<Proxy fcgi://localhost>
ProxySet enablereuse=on flushpackets=on
</Proxy>
Reposting the answer I just posted to a very similar question here: How to disable buffering with apache2 and mod_proxy_fcgi?
A few notes, since I just spent the past few hours experimenting to find the answer to this question:
It's not possible to entirely disable output buffering when using mod_proxy/mod_proxy_fcgi, however, you can still have responses streamed in chunks.
It seems, based on my experimentation, that chunks have to be at least 4096 bytes before the output will be flushed to the browser.
You can disable output buffering with the mod_fastcgi or mod_fcgi module, but those mods aren't as popular/widely used with Apache 2.4.
If you have mod_deflate enabled and don't set SetEnv no-gzip 1 for the virtualhost/directory/etc. that's streaming data, then gzip will not allow the buffer to flush until the request is complete.
I was testing things out to see how to best use Drupal 8's new BigPipe functionality for streaming requests to the client, and I posted some more notes in this GitHub issue.
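As an illustration of the mod_deflate note above, a hypothetical vhost fragment (the /stream path is made up) might look like:

```apache
# Exempt a streaming endpoint from mod_deflate so the deflate filter
# cannot hold flushed chunks back until the request completes.
<Location "/stream">
    SetEnv no-gzip 1
</Location>
```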
In my environment (Apache 2.4, php-fpm) it worked after turning off compression and padding the output to the output_buffering size; see the script:
header('Content-Encoding: none;');
$padSize = ini_get('output_buffering');
for ($i = 0; $i < 10; $i++) {
    echo str_pad("$i<br>", $padSize);
    flush();
    sleep(1);
}
https://www.php.net/manual/en/function.fastcgi-finish-request.php was what saved my sanity. I tried all kinds of hacks and techniques to get Apache and php-fpm (7.4) to display progress in a browser for a long-running process, including Server-Sent Events, writing progress to a text file and polling it with XHR, flush()ing like crazy, etc. Nothing worked until I did something like this (in my MVC action controller):
public function longRunningProcessAction()
{
    $path = \realpath('./data/progress.sqlite');
    $db = new \PDO("sqlite:$path");
    $db->setAttribute(\PDO::ATTR_ERRMODE, \PDO::ERRMODE_EXCEPTION);
    $stmt = $db->prepare("UPDATE progress SET status = :status");
    $stmt->execute([':status' => "starting"]);
    header("content-type: application/json");
    echo json_encode(['status' => 'started']);
    // this here is critical ...
    session_write_close();
    fastcgi_finish_request();
    // otherwise it will NOT work
    for ($i = 0; $i <= 150; $i++) {
        usleep(250 * 1000);
        $stmt->execute([':status' => "$i of 150"]);
        // this also works
        file_put_contents('./data/progress.txt', "$i of 150");
    }
    $stmt->execute([':status' => "done"]);
}
// and...
public function progressAction()
{
    $path = \realpath('./data/progress.sqlite');
    $db = new \PDO("sqlite:$path");
    $query = 'SELECT status FROM progress';
    $stmt = $db->query($query);
    // and this is working as well..
    $text = file_get_contents('./data/progress.txt');
    return new JsonModel(['status' => $stmt->fetchColumn(), 'text' => $text]);
}
and then some Javascript (jQuery)
var check_progress = function() {
    $.get("/my/job/progress").then(r => {
        $("#progress").text(r.status);
        if (r.status === "done") { return; }
        window.setTimeout(check_progress, 300);
    });
};
$.post("/long/running/process", data).then(check_progress);
Voilà!
A hack to make PHP FPM with Apache 2.4 mod_proxy work:
call ob_end_clean() at the beginning of your PHP script
call flush() at least 21 times to flush your output instead of calling it once; always send at least one character between calling flush()
using ob_end_clean() without ob_start() doesn't make sense to me, but it seems to help - and it returns true (=success!)
Is it possible to echo each time the loop is executed? For example:
foreach (range(1, 9) as $n) {
    echo $n . "\n";
    sleep(1);
}
Instead of printing everything when the loop is finished, I'd like to see it print each result as it happens.
The easiest way to eliminate nginx's buffering is by emitting a header:
header('X-Accel-Buffering: no');
This eliminates both proxy_buffering and (if you have nginx >= 1.5.6), fastcgi_buffering. The fastcgi bit is crucial if you're using php-fpm. The header is also far more convenient to do on an as-needed basis.
Docs on X-Accel-Buffering
Docs on fastcgi_buffering
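If you'd rather do it in the nginx config than per response, a per-location sketch (the location and socket path are assumptions; adjust to your setup):

```nginx
# Disable FastCGI response buffering for one endpoint instead of
# emitting X-Accel-Buffering from PHP.
location = /stream.php {
    include fastcgi_params;
    fastcgi_pass unix:/run/php-fpm/php-fpm.sock;  # your PHP-FPM socket
    fastcgi_buffering off;                        # nginx >= 1.5.6
}
```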
FINAL SOLUTION
So that's what I found out:
Flush would not work under Apache's mod_gzip or nginx's gzip because, logically, it is gzipping the content, and to do that it must buffer the content. Any sort of web-server gzipping would affect this. In short, on the server side, we need to disable gzip and decrease the FastCGI buffer size. So:
In php.ini:
output_buffering = Off
zlib.output_compression = Off
In nginx.conf:
gzip off;
proxy_buffering off;
Also have these lines at hand, especially if you don't have access to php.ini:
ini_set('zlib.output_compression', 0);
ini_set('implicit_flush', 1);
ob_end_clean();
set_time_limit(0);
Lastly, if you have it, comment out the code below:
ob_start('ob_gzhandler');
ob_flush();
PHP test code:
ob_implicit_flush(1);
for ($i = 0; $i < 10; $i++) {
    echo $i;
    // pad so the buffer reaches the minimum size needed to flush data
    echo str_repeat(' ', 1024 * 64);
    sleep(1);
}
Related:
php flush not working
How to flush output after each `echo` call?
PHP flushing output as soon as you call echo
Easy solution on nginx server:
fastcgi_keep_conn on; # < solution
proxy_buffering off;
gzip off;
You need to flush PHP's buffer to the browser:
foreach (range(1, 9) as $n) {
    echo $n . "\n";
    flush();
    sleep(1);
}
See: http://php.net/manual/en/function.flush.php
I found that you can set:
header("Content-Encoding:identity");
in your php script to disable nginx gzipping without having to modify the nginx.conf
You can accomplish this by flushing the output buffer in the middle of the loop.
Example:
ob_start();
foreach (range(1, 9) as $n) {
    echo $n . "\n";
    ob_flush();
    flush();
    sleep(1);
}
Note that your php.ini settings can affect whether this works or not if you have zlib compression turned on.
I had a gzip problem coming from my PHP-FPM engine. This code is the only one that worked for me:
function myEchoFlush_init() {
    ini_set('zlib.output_compression', 0);
    ini_set('output_buffering', 'Off');
    ini_set('output_handler', '');
    ini_set('implicit_flush', 1);
    ob_implicit_flush(1);
    ob_end_clean();
    header('Content-Encoding: none;');
}

function myEchoFlush($str) {
    echo $str . str_repeat(' ', ini_get('output_buffering') * 4) . "<br>\n";
}
This is my test function; it checks max_execution_time:
public function timeOut($time = 1, $max = 0) {
    myEchoFlush_init();
    if ($max) ini_set('max_execution_time', $max);
    myEchoFlush("Starting infinite loop for $time seconds. It shouldn't exceed: " . ini_get('max_execution_time'));
    $start = microtime(true);
    $lastTick = 1;
    while (true) {
        $tick = ceil(microtime(true) - $start);
        if ($tick > $lastTick) {
            myEchoFlush(microtime(true) - $start);
            $lastTick = $tick;
        }
        if ($tick > $time) break;
    }
    echo "OK";
}
Combining PHP Flush/Streaming with gzip (AWS ALB, nginx only)
My interest in PHP streaming support was to enable browsers to fetch early/important assets sooner, to minimize the critical render path. Having to choose between PHP streaming and gzip wasn't really an option. This used to be possible with Apache 2.2.x in the past; however, it doesn't look like this is something that's being worked on for current versions.
I was able to get it to work with nginx behind an AWS Application Load Balancer using PHP 7.4 and Amazon Linux 2 (v3.3.x). Not saying it only works with the AWS stack, however the ALB sitting in front of nginx changes things and I didn't have a chance to test it with a directly exposed instance. So keep this in mind.
nginx
location ~ \.(php|phar)(/.*)?$ {
    # php-fpm config
    # [...]
    gzip on;
    gzip_comp_level 4;
    gzip_proxied any;
    gzip_vary on;
    tcp_nodelay on;
    tcp_nopush off;
}
gzip_proxied and gzip_vary are the key parameters for gzipped streaming; the tcp_* parameters are there to stop nginx from buffering/waiting for 200 ms (not sure if that's still a default nginx setting).
In my case I only needed to enable it for the PHP files; you should be able to move it higher up in your server config.
php.ini
output_buffering = Off
zlib.output_compression = Off
implicit_flush = Off
If you want to manually control when the buffer is sent to the server/browser, set implicit_flush = Off and use ob_flush(); flush() and finally ob_end_flush().
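A sketch of that manual-control pattern (the loop body is illustrative):

```php
<?php
// Sketch: with implicit_flush = Off, flush explicitly at chosen points.
ob_start();
for ($i = 0; $i < 5; $i++) {
    echo "chunk $i\n";
    ob_flush(); // hand PHP's buffer to the web server
    flush();    // ask the web server to send it now
    sleep(1);
}
ob_end_flush(); // flush the remainder and stop buffering
```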
I thought flush(); would work, at least from what Google/Stackoverflow tell me, but on my Windows WAMP (Windows, Apache, MySQL, PHP) system it doesn't work.
Is there some PHP setting I have to set to make flush() work?
Here's my code:
<?php
echo "Fun";
flush();
sleep(5);
echo "<br>Mo";
?>
The code just outputs everything together when the script is done executing (after 5 seconds). I don't want this; I want 'Fun' to show up right away, and then 'Mo' after 5 seconds.
I've tried other combinations of flush like ob_end_flush(); or ob_implicit_flush(true); but nothing is working. Any ideas?
The script works fine from the CLI, displaying "Fun" and waiting 5 seconds before displaying "<br>Mo".
For a browser the results might be a bit different because:
The browser won't start rendering right away. Getting 3 bytes of data for an HTML document isn't enough to do anything, so it'll most likely wait for a few more.
Implicit IO buffering on the lib level will most likely be active until a newline is received.
To work around (1), use the text/plain content type for your test; (2) needs newlines, so do echo "Fun\n"; and echo "<br>Mo\n";. Of course you won't be using text/plain for real HTML data.
If you're using CGI/FastCGI, forget it! These don't implement flush. The web server might have its own buffer.
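A sketch of the test with both workarounds applied (the timing is the same as the original script):

```php
<?php
// Sketch: text/plain sidesteps the browser's HTML pre-render buffering,
// and trailing newlines defeat line-based library buffering.
header('Content-Type: text/plain');
echo "Fun\n";
flush();
sleep(5);
echo "Mo\n";
```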
You can disable all output buffering in PHP with the following call:
ob_implicit_flush();
If the problem persists, although you explicitly set
implicit_flush = yes
in your php.ini, you might also want to set
output_buffering = off
which did the trick in my case (after pulling my hair out for 4+ hours).
Check your php.ini for output_buffering.
Maybe the problem is Apache here, which also may have buffers...
I can't seem to get the function connection_aborted to work with nginx. The code I used to test is as follows:
<?php
ignore_user_abort(true);
ob_implicit_flush();
$i = 0;
while (!connection_aborted()) {
    echo $i;
    $i++;
    sleep(1);
}
file_put_contents('test', $i);
In Apache, it works correctly, although with a small delay: when I press the stop button in the browser at "3", the 'test' file shows "8". That is an acceptable margin, but on nginx, it doesn't seem to output anything to the 'test' file.
Check your nginx configuration; it should have:
fastcgi_ignore_client_abort off;
This configuration key defaults to off, so even if you have no fastcgi_ignore_client_abort at all, your script should work as expected.
My guess is that connection_aborted() is unable to detect the aborted connection (and the script is still running).