Can't get nginx to flush buffer with php7-fpm

I have a need for a longish-running (7-8 seconds) php script to output partial results to the user as they are found. I have previously been able to accomplish this with an older version of php-fpm and nginx by doing the following:
Using these config settings in php:
ini_set('output_buffering', 0);
ini_set('implicit_flush', 1);
ini_set('zlib.output_compression', 0);
ob_end_clean();
set_time_limit(0);
header('X-Accel-Buffering: no');
and running ob_implicit_flush(1); flush(); every time I needed to output partial results.
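Putting those pieces together, the pattern looked roughly like this minimal sketch (find_partial_results() is a made-up stand-in for the real 7-8 second job):
<?php
ini_set('zlib.output_compression', 0);
set_time_limit(0);
header('X-Accel-Buffering: no');       // ask nginx not to buffer this response
while (ob_get_level() > 0) {
    ob_end_clean();                    // drop any output buffers PHP opened
}
ob_implicit_flush(1);                  // flush automatically after every echo
foreach (find_partial_results() as $result) {  // hypothetical work function
    echo $result . "\n";
    flush();                           // push this chunk out immediately
}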
Using these directives for nginx:
fastcgi_keep_conn on;
proxy_buffering off;
gzip off;
However, with an upgrade to PHP 7 and nginx 1.10.3, these settings no longer work.
I have tried adding these directives to nginx:
fastcgi_max_temp_file_size 0;
fastcgi_store off;
fastcgi_buffering off;
But these don't seem to do anything either. The results are still buffered until the PHP script finishes running, and then sent all at once.
Is what I am asking for still possible?
(I appreciate suggestions that there are other ways to send partial results that don't involve disabling buffers, but that's not part of my question).

I think the only way you can do that is to split the initial script into multiple scripts.
You can then call each script from the frontend using AJAX and append the content to the DOM, as sketched below.
PHP scripts are synchronous for the most part, but AJAX calls run asynchronously, so you can execute multiple PHP scripts in parallel.
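A minimal sketch of the PHP side of that idea (chunk.php and compute_chunk() are hypothetical names; the frontend would fire one AJAX request per chunk and append each response to the DOM as it arrives):
<?php
// chunk.php - called as chunk.php?n=0, chunk.php?n=1, ... via AJAX.
// compute_chunk() stands in for one slice of the original long-running job.
$n = isset($_GET['n']) ? (int)$_GET['n'] : 0;
header('Content-Type: application/json');
echo json_encode(compute_chunk($n));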

Related

HHVM and nginx not sending headers immediately

I have a PHP download script that dynamically bundles files into zips. So that the user feels like something is happening immediately, we send a header("Content-disposition: attachment; filename=your_file.zip"); as soon as possible and then start trickling out the files. In our traditional Apache/PHP setup that works great, but we're trying to get our codebase to run on an nginx/HHVM server and it doesn't send the headers the same way.
Rather, nginx/HHVM waits to send the headers until it's done a lot of the processing (I'm not sure how much) and, from HHVM's perspective, has sent out several files. This means the user ends up waiting a long time before getting a Save As dialog, and it's creating a bad experience.
In my nginx site config, I set
fastcgi_buffering off;
fastcgi_keep_conn on;
proxy_buffering off;
I also tried adding flush(); and header('X-Accel-Buffering: no'); in PHP but nothing seems to help.
Is there something else I need to change?

Setting ini max_execution_time doesn't work

Before I use nginx and php-fpm, I used Apache, so when I wanted only one of my cron jobs to run without time execution limitation, I used these lines in my PHP code:
set_time_limit(0);
ini_set('max_execution_time', 0);
but after I migrated from Apache to nginx, this code doesn't work. I know ways to change nginx.conf to increase maximum execution time.
But I want to handle this with php code. Is there a way?
I want to specify only one file that can run PHP code without time limitation.
Try This:
Increase PHP script execution time with Nginx
You can follow the steps given below to increase the timeout value (the PHP default is 30s):
Changes in php.ini
Suppose you want to change the max execution time limit for PHP scripts from 30 seconds (the default) to 300 seconds.
vim /etc/php5/fpm/php.ini
Set…
max_execution_time = 300
In Apache, for applications running PHP as a module, the change above would have sufficed. But in our case we need to make this change in two more places.
Changes in PHP-FPM
This is only needed if you have previously un-commented the request_terminate_timeout parameter. It is commented out by default and takes the value of max_execution_time from php.ini.
Edit…
vim /etc/php5/fpm/pool.d/www.conf
Set…
request_terminate_timeout = 300
Changes in Nginx Config
To increase the time limit for example.com:
vim /etc/nginx/sites-available/example.com
location ~ \.php$ {
    include /etc/nginx/fastcgi_params;
    fastcgi_pass unix:/var/run/php5-fpm.sock;
    fastcgi_read_timeout 300;
}
If you want to increase time-limit for all-sites on your server, you can edit main nginx.conf file:
vim /etc/nginx/nginx.conf
Add the following in the http {...} section:
http {
    #...
    fastcgi_read_timeout 300;
    #...
}
Reload PHP-FPM & Nginx
Don't forget to do this so that the changes you have made take effect:
service php5-fpm reload
service nginx reload
or try this
fastcgi_send_timeout 50;
fastcgi_read_timeout 50;
FastCGI has its own set of timeouts and checks to prevent it from stalling on a locked-up process. They would kick in if you, for instance, set PHP's execution time limit to 0 (unlimited) and then accidentally created an infinite loop. Or if you were running some other application besides PHP which didn't have any of its own timeout protections and it failed.
I think that with php-fpm and nginx you can't set this timeout from PHP alone.
What you could do is a redirect with parameters indicating where to continue, but you must keep track of how long your script has been running to avoid the timeout.
If your process runs in a browser window, do the redirect with JavaScript, because the browser could limit the number of redirects... or do it with AJAX.
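A rough sketch of that redirect idea (total_items(), process_batch() and the offset parameter are invented for illustration):
<?php
// Work through the job in slices, redirecting to ourselves with the next
// offset before the server-side time limit can kick in.
$offset = isset($_GET['offset']) ? (int)$_GET['offset'] : 0;
$start  = time();
while ($offset < total_items()) {
    process_batch($offset);            // one slice of the real work
    $offset++;
    if (time() - $start > 50) {        // stay safely under a 60s limit
        header('Location: ' . $_SERVER['PHP_SELF'] . '?offset=' . $offset);
        exit;
    }
}
echo 'done';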
Hope that helps.
You can add request_terminate_timeout = 300 to your server's php-fpm pool configuration if you have tried all of the other solutions here.
ini_set('max_execution_time', 0);
or, if "Safe Mode" is off:
set_time_limit(0);
Place one of these at the top of your PHP script and let your script loose!
Note: if your PHP setup is running in safe mode, you can only change it from the php.ini file.

NginX issues HTTP 499 error after 60 seconds despite config. (PHP and AWS)

At the end of last week I noticed a problem on one of my medium AWS instances where Nginx always returns an HTTP 499 response if a request takes more than 60 seconds. The page being requested is a PHP script.
I've spent several days trying to find answers and have tried everything that I can find on the internet including several entries here on Stack Overflow, nothing works.
I've tried modifying the PHP settings, PHP-FPM settings and Nginx settings. You can see a question I raised on the NginX forums on Friday (http://forum.nginx.org/read.php?9,237692), though that has received no response, so I am hoping that I might be able to find an answer here before I am forced to move back to Apache, which I know just works.
This is not the same problem as the HTTP 500 errors reported in other entries.
I've been able to replicate the problem with a fresh micro AWS instance of NginX using PHP 5.4.11.
To help anyone who wishes to see the problem in action I'm going to take you through the set-up I ran for the latest Micro test server.
You'll need to launch a new AWS Micro instance (so it's free) using the AMI ami-c1aaabb5
This PasteBin entry has the complete set-up to run to mirror my test environment. You'll just need to change example.com within the NginX config at the end.
http://pastebin.com/WQX4AqEU
Once that's set up, you just need to create the sample PHP file which I am testing with:
<?php
sleep(70);
die( 'Hello World' );
?>
Save that into the webroot and then test. If you run the script from the command line using php or php-cgi, it will work. If you access the script via a webpage and tail the access log /var/log/nginx/example.access.log, you will notice that you receive the HTTP 1.1 499 response after 60 seconds.
Now that you can see the timeout, I'll go through some of the config changes I've made to both PHP and NginX to try to get around this. For PHP, I'll create several config files so that they can be easily disabled.
Update the PHP FPM Config to include external config files
sudo echo '
include=/usr/local/php/php-fpm.d/*.conf
' >> /usr/local/php/etc/php-fpm.conf
Create a new PHP-FPM config to override the request timeout
sudo echo '[www]
request_terminate_timeout = 120s
request_slowlog_timeout = 60s
slowlog = /var/log/php-fpm-slow.log ' >
/usr/local/php/php-fpm.d/timeouts.conf
Change some of the global settings to ensure the emergency restart interval is 2 minutes
# Create a global tweaks
sudo echo '[global]
error_log = /var/log/php-fpm.log
emergency_restart_threshold = 10
emergency_restart_interval = 2m
process_control_timeout = 10s
' > /usr/local/php/php-fpm.d/global-tweaks.conf
Next, we will change some of the PHP.INI settings, again using separate files
# Log PHP Errors
sudo echo '[PHP]
log_errors = on
error_log = /var/log/php.log
' > /usr/local/php/conf.d/errors.ini
sudo echo '[PHP]
post_max_size=32M
upload_max_filesize=32M
max_execution_time = 360
default_socket_timeout = 360
mysql.connect_timeout = 360
max_input_time = 360
' > /usr/local/php/conf.d/filesize.ini
As you can see, this increases the socket timeout to 6 minutes and will help log errors.
Finally, I'll edit some of the NginX settings to increase the timeouts on that side.
First I edit the file /etc/nginx/nginx.conf and add this to the http directive
fastcgi_read_timeout 300;
Next, I edit the file /etc/nginx/sites-enabled/example which we created earlier (See the pastebin entry) and add the following settings into the server directive
client_max_body_size 200;
client_header_timeout 360;
client_body_timeout 360;
fastcgi_read_timeout 360;
keepalive_timeout 360;
proxy_ignore_client_abort on;
send_timeout 360;
lingering_timeout 360;
Finally, I add the following into the location ~ \.php$ section of the server directive:
fastcgi_read_timeout 360;
fastcgi_send_timeout 360;
fastcgi_connect_timeout 1200;
Before retrying the script, restart both nginx and php-fpm to ensure that the new settings have been picked up. I then try accessing the page and still receive the HTTP/1.1 499 entry within the NginX example.error.log.
So, where am I going wrong? This just works on Apache when I set PHP's max execution time to 2 minutes.
I can see that the PHP settings have been picked up by running phpinfo() from a web-accessible page. I just don't get it; I actually think that too much has been increased, as it should only need PHP's max_execution_time and default_socket_timeout changed, as well as NginX's fastcgi_read_timeout within just the server->location directive.
Update 1
Having performed some further tests to show that the problem is not the client dying, I have modified the test file to be:
<?php
file_put_contents('/www/log.log', 'My first data');
sleep(70);
file_put_contents('/www/log.log','The sleep has passed');
die('Hello World after sleep');
?>
If I run the script from a web page, I can see the contents of the file set to the first string. 60 seconds later the error appears in the NginX log. 10 seconds later the contents of the file change to the second string, proving that PHP is completing the process.
Update 2
Setting fastcgi_ignore_client_abort on; does change the response from an HTTP 499 to an HTTP 200, though still nothing is returned to the end client.
Update 3
Having installed Apache and PHP (5.3.10) straight onto the box (using apt) and then increased the execution time, the problem appears on Apache as well. The symptoms are the same as with NginX: an HTTP 200 response, but the actual client connection times out beforehand.
I've also started to notice in the NginX logs that if I test using Firefox, it makes a double request (like in "this PHP script executes twice when longer than 60 seconds"). Though that does appear to be the client re-requesting when the script fails.
The cause of the problem is the Elastic Load Balancers on AWS. They, by default, timeout after 60 seconds of inactivity which is what was causing the problem.
So it wasn't NginX, PHP-FPM or PHP but the load balancer.
To fix this, simply go into the ELB "Description" tab, scroll to the bottom, and click the "(Edit)" link beside the value that says "Idle Timeout: 60 seconds"
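If you would rather script that change, it should also be possible with the AWS CLI; for a classic ELB, something like this (my-load-balancer is a placeholder name):
aws elb modify-load-balancer-attributes \
    --load-balancer-name my-load-balancer \
    --load-balancer-attributes '{"ConnectionSettings":{"IdleTimeout":300}}'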
Actually, I faced the same issue on one server and figured out that after the nginx configuration changes I hadn't restarted the nginx server, so every hit of the nginx URL returned a 499 response. After an nginx restart it started working properly, with HTTP 200 responses.
I thought I would leave my two cents. First, the problem is not related to PHP (though it still could be PHP-related; PHP always surprises me :P). That's for sure. It's mainly caused by a server being proxied to itself, more specifically by a hostname/alias issue. In your case, it could be that the load balancer is requesting nginx and nginx is calling back to the load balancer, and it keeps going that way.
I have experienced a similar issue with nginx as the load balancer and Apache as the webserver/proxy.
In my case, nginx was sending a request to an AWS ALB and getting a timeout with a 499 status code.
The solution was to add this line:
proxy_next_upstream off;
The default value for this in current versions of nginx is proxy_next_upstream error timeout; - which means that on a timeout it tries the next 'server' - which in the case of an ALB is the next IP in the list of resolved IPs.
You need to find out where the problem lives. I don't know the exact answer, but let's try to find it.
We have three elements here: nginx, php-fpm, and PHP. As you said, the same PHP settings under Apache are OK. But is it really the same setup? Did you try Apache instead of nginx on the same OS/host/etc.?
If we see that PHP is not the suspect, then we have two suspects: nginx & php-fpm.
To exclude nginx: try to set up the same "system" in Ruby. See https://github.com/garex/puppet-module-nginx to get an idea of the simplest Ruby setup. Or use Google (it may be even better).
My main suspect here is php-fpm.
Try to play with these settings:
php-fpm's request_terminate_timeout
nginx's fastcgi_ignore_client_abort
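For reference, roughly where those two settings live (300 and "on" are only example values):
request_terminate_timeout = 300    ; php-fpm pool config, e.g. pool.d/www.conf
fastcgi_ignore_client_abort on;    # nginx, inside the PHP location block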

How to disable output buffering in nginx for PHP application

We have code similar to this:
<?php
ob_implicit_flush(true);
ob_end_flush();
foreach ($arrayOfStrings as $string) {
    echo time_expensive_function($string);
}
?>
In Apache, this would send each echo to the browser as it was output. In nginx/FastCGI however, this doesn't work due to the way nginx works (by default).
Is it possible to make this work on nginx/FastCGI, and if so, how?
First, PHP has to correctly flush everything:
ob_end_flush();
flush();
Then, I found two working solutions:
1) Via Nginx configuration:
fastcgi_buffering off;
2) Via HTTP header in the php code
header('X-Accel-Buffering: no');
Easy solution:
fastcgi_keep_conn on; # < solution
proxy_buffering off;
gzip off;
I didn't want to have to turn off gzip for the whole server or a whole directory, just for a few scripts, in a few specific cases.
All you need is this before anything is echo'ed:
header('Content-Encoding: none;');
Then do the flush as normal:
ob_end_flush();
flush();
Nginx seems to pick up on the encoding having been turned off and doesn't gzip.
Add the flush() function in your loop:
foreach ($arrayOfStrings as $string) {
    echo time_expensive_function($string);
    flush();
}
It might work, but not necessarily on each iteration (there's some magic involved!)
Add the -flush option to the FastCGI config; refer to the manual:
http://www.fastcgi.com/mod_fastcgi/docs/mod_fastcgi.html#FastCgiServer
http://www.fastcgi.com/mod_fastcgi/docs/mod_fastcgi.html#FastCgiConfig
http://www.fastcgi.com/mod_fastcgi/docs/mod_fastcgi.html#FastCgiExternalServer
From http://mailman.fastcgi.com/pipermail/fastcgi-developers/2009-July/000286.html
I needed both of those two lines at the beginning of my script:
header('X-Accel-Buffering: no');
ob_implicit_flush(true);
Each line alone would also work; combining them makes my browser get the result from the server even faster. I can't explain it, I just experienced it.
My configuration is nginx with php-fpm.

PHP Flush that works... even in Nginx

Is it possible to echo each time the loop is executed? For example:
foreach (range(1,9) as $n) {
    echo $n."\n";
    sleep(1);
}
Instead of printing everything when the loop is finished, I'd like to see each result printed as it is produced.
The easiest way to eliminate nginx's buffering is by emitting a header:
header('X-Accel-Buffering: no');
This eliminates both proxy_buffering and (if you have nginx >= 1.5.6), fastcgi_buffering. The fastcgi bit is crucial if you're using php-fpm. The header is also far more convenient to do on an as-needed basis.
Docs on X-Accel-Buffering
Docs on fastcgi_buffering
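Put together, a minimal sketch of that approach (assuming output_buffering is already off in php.ini):
<?php
header('X-Accel-Buffering: no');    // disables nginx buffering for this response
header('Content-Type: text/plain');
foreach (range(1, 9) as $n) {
    echo $n . "\n";
    flush();                        // each line reaches the client right away
    sleep(1);
}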
FINAL SOLUTION
So that's what I found out:
Flush would not work under Apache's mod_gzip or Nginx's gzip because, logically, the content is being gzipped, and to do that the server must buffer it. Any sort of web-server gzipping would affect this. In short, on the server side we need to disable gzip and decrease the fastcgi buffer size. So:
In php.ini:
output_buffering = Off
zlib.output_compression = Off
In nginx.conf:
gzip off;
proxy_buffering off;
Also have these lines at hand, especially if you don't have access to php.ini:
ini_set('zlib.output_compression', 0);
ini_set('implicit_flush', 1);
ob_end_clean();
set_time_limit(0);
Last, if you have it, comment out the code below:
ob_start('ob_gzhandler');
ob_flush();
PHP test code:
ob_implicit_flush(1);
for ($i = 0; $i < 10; $i++) {
    echo $i;
    // padding so the buffer reaches the minimum size needed to flush data
    echo str_repeat(' ', 1024 * 64);
    sleep(1);
}
Related:
php flush not working
How to flush output after each `echo` call?
PHP flushing output as soon as you call echo
You need to flush PHP's buffer to the browser:
foreach (range(1,9) as $n) {
    echo $n."\n";
    flush();
    sleep(1);
}
See: http://php.net/manual/en/function.flush.php
I found that you can set:
header("Content-Encoding:identity");
in your php script to disable nginx gzipping without having to modify the nginx.conf
You can accomplish this by flushing the output buffer in the middle of the loop.
Example:
ob_start();
foreach (range(1,9) as $n) {
    echo $n."\n";
    ob_flush();
    flush();
    sleep(1);
}
Note that your php.ini settings can affect whether this will work or not, e.g. if you have zlib compression turned on.
I had a gzip problem coming from my php-fpm engine.
This code is the only one that works for me:
function myEchoFlush_init() {
    ini_set('zlib.output_compression', 0);
    ini_set('output_buffering', 'Off');
    ini_set('output_handler', '');
    ini_set('implicit_flush', 1);
    ob_implicit_flush(1);
    ob_end_clean();
    header('Content-Encoding: none;');
}
function myEchoFlush($str) {
    echo $str . str_repeat(' ', ini_get('output_buffering') * 4) . "<br>\n";
}
This is my test function; it checks max_execution_time:
public function timeOut($time = 1, $max = 0) {
    myEchoFlush_init();
    if ($max) ini_set('max_execution_time', $max);
    myEchoFlush("Starting infinite loop for $time seconds. It shouldn't exceed: " . ini_get('max_execution_time'));
    $start = microtime(true);
    $lastTick = 1;
    while (true) {
        $tick = ceil(microtime(true) - $start);
        if ($tick > $lastTick) {
            myEchoFlush(microtime(true) - $start);
            $lastTick = $tick;
        }
        if ($tick > $time) break;
    }
    echo "OK";
}
Combining PHP Flush/Streaming with gzip (AWS ALB, nginx only)
My interest in PHP streaming support was to let browsers fetch early/important assets early, so as to minimize the critical render path. Having to choose between either PHP streaming or gzip wasn't really an option. This used to be possible with Apache 2.2.x in the past; however, it doesn't look like this is something that's being worked on for current versions.
I was able to get it to work with nginx behind an AWS Application Load Balancer, using PHP 7.4 and Amazon Linux 2 (v3.3.x). I'm not saying it only works with the AWS stack; however, the ALB sitting in front of nginx changes things, and I didn't have a chance to test it with a directly exposed instance. So keep this in mind.
nginx
location ~ \.(php|phar)(/.*)?$ {
    # php-fpm config
    # [...]
    gzip on;
    gzip_comp_level 4;
    gzip_proxied any;
    gzip_vary on;
    tcp_nodelay on;
    tcp_nopush off;
}
gzip_proxied & gzip_vary are the key parameters for gzipped streaming; the tcp_ parameters are there to disable nginx buffering/waiting for 200ms (not sure if that's still a default nginx setting).
In my case I only needed to enable it for the PHP files; you should be able to move it higher up into your server config.
php.ini
output_buffering = Off
zlib.output_compression = Off
implicit_flush = Off
If you want to manually control when the buffer is sent to the server/browser, set implicit_flush = Off and use ob_flush(); flush() and finally ob_end_flush().
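For example, a small sketch of that manual control, under the php.ini settings above (the markup is just filler):
<?php
ob_start();                  // take explicit control of the output buffer
echo '<head>...early, render-critical assets...</head>';
ob_flush();                  // hand PHP's buffer to the SAPI layer...
flush();                     // ...and push it through nginx/the ALB
echo '<body>...the rest of the page...</body>';
ob_end_flush();              // flush what's left and close the buffer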
