I can't seem to get the function connection_aborted() to work with nginx. The code I used to test is as follows:
<?php
ignore_user_abort(true); // keep the script running after the client disconnects
ob_implicit_flush();     // flush output automatically after every echo
$i = 0;
while (!connection_aborted()) {
    echo $i;
    $i++;
    sleep(1);
}
file_put_contents('test', $i); // record how far the loop got
In Apache it works correctly, though with a slight delay: e.g. when I press the stop button in the browser at "3", the 'test' file shows "8". That is an acceptable margin, but on nginx it doesn't seem to write anything to the 'test' file at all.
Check your nginx configuration; it should have
fastcgi_ignore_client_abort off;
This directive defaults to off, so even if you have no fastcgi_ignore_client_abort line at all, your script should work as expected.
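For reference, a sketch of a location block where that directive would live (the socket path is an assumption; adjust to your setup):

location ~ \.php$ {
    include fastcgi_params;
    fastcgi_pass unix:/run/php/php-fpm.sock; # assumed socket path
    fastcgi_ignore_client_abort off;         # the default, shown for clarity
}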
My guess is that connection_aborted() is unable to detect the aborted connection, so the script keeps running.
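One way to test that guess: PHP only detects an aborted connection while it is actually writing output to the client, so force each iteration's output through nginx. A sketch, borrowing the X-Accel-Buffering header from a later answer (an untested assumption for this setup):

<?php
ignore_user_abort(true);
header('X-Accel-Buffering: no'); // ask nginx not to buffer this response
ob_implicit_flush();
$i = 0;
while (!connection_aborted()) {
    echo $i, "\n";
    flush();                     // pushing output lets PHP notice the abort
    $i++;
    sleep(1);
}
file_put_contents('test', $i);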
Related
When using mod_php and FastCGI, the code executes perfectly and I get output every second, but after switching to PHP-FPM the code lags a few seconds before outputting, depending on the output size.
I have tried the following, and combinations thereof:
setting output_buffering = 0 in php.ini
ob_implicit_flush()
ob_start()
ob_end_flush()
header('Content-Encoding: none')
implicit_flush = 1
ob_end_clean()
<?php
header('Content-Type: text/event-stream');
header('Cache-Control: no-cache');
while (true) {
    $time = date('r');
    echo "retry:1000\r\n";
    echo "data: " . $time;
    echo "\r\n\r\n";
    ob_flush();
    flush();
    sleep(1);
}
?>
This is for a production server and mod_php is not an option. I also got it to work in FastCGI with
FcgidOutputBufferSize 0
Is there a way to make the code work on PHP-FPM so the output is sent immediately, as with mod_php and FastCGI?
P.S. Running: Ubuntu 18.04, Apache 2.4.29, PHP 7.2
After a few days I have discovered that the only way to get this to work on PHP-FPM is to fill the output buffer. This is really inefficient! Let me explain:
Say you are using Server-Sent Events and your output buffer is 4096 bytes. You process every second, and even when you have nothing to return you still send about 4 KB of output to the client, whereas mod_php and FastCGI send data only when there actually is output.
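For illustration, a sketch of that padding trick inside an SSE loop (the 4096 figure is an assumption; match it to your actual FastCGI buffer size). SSE lines starting with a colon are comments the client ignores, so they make harmless padding:

echo "data: " . date('r') . "\r\n\r\n";
echo ':' . str_repeat(' ', 4096) . "\r\n"; // assumed buffer size; wasted bandwidth
ob_flush();
flush();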
If anyone else has this problem, this is my best solution: run the main site on PHP-FPM, e.g. example.com, and make a sub-domain, e.g. push.example.com, set up with FastCGI / mod_php [NOT RECOMMENDED FOR PRODUCTION]. Now you can keep the connection open and process data without sending output to the client.
P.S. I saved session variables in a database so both the domain and sub-domain can access them; see https://github.com/dominicklee/PHP-MySQL-Sessions. The other thing is to have the sub-domain send CORS headers; in PHP, add header('Access-Control-Allow-Origin: https://example.com');
I have a PHP application that has multiple "nested" include() calls. For some reason the application stops after 60 seconds. I'm using set_time_limit(0). I have also tested this without the include in the file, and it runs forever. I'm not sure what the issue is.
Working:
set_time_limit(0);
while (1 < 2) {
    echo 'hello';
}
Not Working:
// MASTER FILE
set_time_limit(0);
while (1 < 2) {
    include('file.php');
}

// INCLUDED FILE 'file.php'
echo 'hello';
First, it's bad practice to write infinite loops, especially in response to a web request. In general you also want your web requests to respond as quickly as possible and have long-running processes run separately.
That said, assuming you're running PHP behind Apache, you'll want to adjust your Apache TimeOut config. It defaults to 60 seconds.
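For example (a sketch; 300 seconds is just an illustrative value):

# In apache2.conf / httpd.conf: how long Apache waits for the script
TimeOut 300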
We have code similar to this:
<?php
ob_implicit_flush(true);
ob_end_flush();
foreach ($arrayOfStrings as $string) {
    echo time_expensive_function($string);
}
?>
In Apache, this sends each echo to the browser as it is produced. On nginx/FastCGI, however, it doesn't work, because nginx buffers the FastCGI response by default.
Is it possible to make this work on nginx/FastCGI, and if so, how?
First, PHP has to flush everything correctly:
ob_end_flush();
flush();
Then, I found two working solutions:
1) Via Nginx configuration:
fastcgi_buffering off;
2) Via HTTP header in the php code
header('X-Accel-Buffering: no');
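Putting it together, a minimal test sketch using the header approach:

<?php
header('X-Accel-Buffering: no');    // tell nginx not to buffer this response
header('Content-Type: text/plain');
ob_end_flush();                     // drop PHP's own output buffer
for ($i = 0; $i < 10; $i++) {
    echo "chunk $i\n";
    flush();
    sleep(1);
}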
Easy solution:
fastcgi_keep_conn on; # < solution
proxy_buffering off;
gzip off;
I didn't want to have to turn off gzip for the whole server or a whole directory, just for a few scripts, in a few specific cases.
All you need is this before anything is echoed:
header('Content-Encoding: none;');
Then do the flush as normal:
ob_end_flush();
flush();
Nginx seems to pick up on the encoding having been turned off and doesn't gzip.
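Combined into a per-script sketch (the loop is just for demonstration):

<?php
header('Content-Encoding: none;');  // stops nginx from gzipping this response
header('Content-Type: text/plain');
ob_end_flush();
for ($i = 0; $i < 5; $i++) {
    echo "line $i\n";
    flush();
    sleep(1);
}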
Add the flush() function in your loop:
foreach ($arrayOfStrings as $string) {
    echo time_expensive_function($string);
    flush();
}
It might work, but not necessarily on each iteration (there's some magic involved!)
Add the -flush option to the FastCGI config; refer to the manual:
http://www.fastcgi.com/mod_fastcgi/docs/mod_fastcgi.html#FastCgiServer
http://www.fastcgi.com/mod_fastcgi/docs/mod_fastcgi.html#FastCgiConfig
http://www.fastcgi.com/mod_fastcgi/docs/mod_fastcgi.html#FastCgiExternalServer
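For instance, a FastCgiServer directive with the option might look like this (the wrapper path is a placeholder):

# -flush forces a write to the client as output arrives
FastCgiServer /usr/local/bin/php-wrapper -flush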
From http://mailman.fastcgi.com/pipermail/fastcgi-developers/2009-July/000286.html
I needed both of these lines at the beginning of my script:
header('X-Accel-Buffering: no');
ob_implicit_flush(true);
Each line alone would also work; combining them made my browser get the result from the server even faster. I can't explain it, I just observed it.
My configuration is nginx with php-fpm.
I wasn't sure how to title this thread, sorry.
I have a script that processes some logs, and I echo a lot of debug information as the process goes. Since moving to the new server, the script hangs for 30-odd seconds, then spits out all the logging, then hangs again for 30-odd seconds, and the process continues.
This is really odd behavior and I don't know where to start. It's like it isn't processing the file line by line, but in blocks...
PHP version is 5.1.6 on a CentOS running plesk. (My old CP was CPanel)
Any ideas?
EDIT: Simple example of my issue - Running this code:
for ($i = 0; $i < 100; $i++) {
    echo "test $i";
    sleep(1);
}
the script will hang for 100 seconds, then print out all the "test 1" etc. lines. The sleep is required in my main script; on the other server the values were echoed one at a time.
EDIT2: I have tried setting output_buffering = 0 and implicit_flush = On, and it didn't help.
You may have output_buffering On. Try to disable it first.
You can do it either in the php.ini file, in a .htaccess file if your server allows it, or use the following code at the beginning of your PHP script:
while (ob_get_level()) ob_end_clean();
Also, use flush() after each echo or print, and it should be all right!
Update:
You might also encounter other buffers that you cannot control from within PHP (web server, browser, ...), which is why you're still not seeing anything. A workaround is to send some blank bytes after each print:
while (ob_get_level()) ob_end_clean();
for ($i = 0; $i < 100; $i++) {
    echo "test $i";
    echo str_repeat(' ', 256); // padding to push data through intermediate buffers
    flush();
    sleep(1);
}
However, while this example works for me on IE & Firefox, it does not work on Chrome!
I thought flush(); would work, at least from what Google/Stack Overflow tell me, but on my Windows WAMP (Windows, Apache, MySQL, PHP) system it doesn't work.
Is there some PHP setting I have to set to make flush() work?
Here's my code:
<?php
echo "Fun";
flush();
sleep(5);
echo "<br>Mo";
?>
The code just outputs everything together when the script is done executing (after 5 seconds). I don't want this; I want "Fun" to show up right away, and then "Mo" after 5 seconds.
I've tried other flush combinations like ob_end_flush(); or ob_implicit_flush(true); but nothing is working. Any ideas?
So here's what I found out:
flush() will not work under Apache's mod_gzip or nginx's gzip because, logically, the server must buffer the content in order to gzip it. Any sort of web-server gzipping will affect this. In short, on the server side we need to disable gzip and decrease the FastCGI buffer size. So:
In php.ini:
output_buffering = Off
zlib.output_compression = Off
In nginx.conf:
gzip off;
proxy_buffering off;
Also keep these lines at hand, especially if you don't have access to php.ini:
@ini_set('zlib.output_compression', 0);
@ini_set('implicit_flush', 1);
@ob_end_clean();
set_time_limit(0);
Lastly, if you have it, comment out the code below:
ob_start('ob_gzhandler');
ob_flush();
PHP test code:
ob_implicit_flush(1);
for ($i = 0; $i < 10; $i++) {
    echo $i;
    // padding so the buffer reaches the minimum size needed to flush data
    echo str_repeat(' ', 1024 * 64);
    sleep(1);
}
The script works fine from the CLI, displaying "Fun" and waiting 5 seconds before displaying "<br>Mo".
For a browser the results might be a bit different because:
The browser won't start rendering right away. Getting 3 bytes of data for an HTML document isn't enough to do anything, so it will most likely wait for a few more.
Implicit IO buffering at the library level will most likely stay active until a newline is received.
To work around 1), use the text/plain content type for your test; 2) needs newlines, so do echo "Fun\n"; and echo "<br>Mo\n";. Of course you won't be using text/plain for real HTML data.
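Putting both workarounds together, a test sketch:

<?php
header('Content-Type: text/plain'); // workaround for 1): no HTML rendering delay
echo "Fun\n";                       // workaround for 2): newline defeats line buffering
flush();
sleep(5);
echo "Mo\n";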
If you're using CGI/FastCGI, forget it! These don't implement flush. The web server may have its own buffer.
You can make PHP flush automatically after every output call with the following command:
ob_implicit_flush();
If the problem persists although you have explicitly set
implicit_flush = yes
in your php.ini, you might also want to set
output_buffering = off
which did the trick in my case (after pulling my hair out for 4+ hours).
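If you want to confirm what your script actually runs with, a quick sketch:

<?php
// Inspect the effective settings and buffer nesting at runtime
var_dump(ini_get('output_buffering'), ini_get('implicit_flush'), ob_get_level());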
Check your php.ini for output_buffering.
Maybe the problem is Apache here, which may also have buffers...