Outputting content as soon as PHP generates it is straightforward when running PHP as an Apache module: you simply disable output_buffering in PHP and use flush() or implicit_flush. This is what I previously used, and it worked fine.
Since switching to PHP-FPM, I'm running into an issue where I cannot get Apache (2.4) to output PHP's content until the entire script has completed. I still have output_buffering off and flushing in place, but that's not enough. Apache isn't using mod_gzip (and that would have affected the mod_php setup as well anyway).
Nginx has an option to disable proxy_buffering which, judging by other people's comments, fixes this, but I cannot find any equivalent in Apache.
Here's how PHP is currently being called within Apache:
<FilesMatch \.php$>
SetHandler "proxy:unix:/run/php-fpm/php-fpm.sock|fcgi://localhost/"
</FilesMatch>
<Proxy fcgi://localhost/ enablereuse=on retry=0 timeout=7200 max=500 flushpackets=on>
</Proxy>
The Apache documentation mentions flushpackets (used above), which appears to be what is needed, but it also goes on to say that it currently applies only to AJP, not to all proxied content, so it won't do anything in this case.
Echoing enough whitespace to fill the buffer may work, but that's a messy workaround which is far from ideal.
In short: does anyone know the correct way to have Apache send PHP content as soon as it's echoed, rather than waiting until the script completes?
I successfully disabled output buffering by rewriting your Proxy section (based on this answer):
<FilesMatch \.php$>
SetHandler "proxy:unix:/run/php-fpm/php-fpm.sock|fcgi://localhost"
</FilesMatch>
<Proxy fcgi://localhost>
ProxySet enablereuse=on flushpackets=on
</Proxy>
Reposting the answer I just posted to a very similar question here: How to disable buffering with apache2 and mod_proxy_fcgi?
A few notes, since I just spent the past few hours experimenting to find the answer to this question:
It's not possible to entirely disable output buffering when using mod_proxy/mod_proxy_fcgi; however, you can still have responses streamed in chunks.
It seems, based on my experimentation, that chunks have to be at least 4096 bytes before the output will be flushed to the browser.
You can disable output buffering with the mod_fastcgi or mod_fcgid modules, but those aren't as popular/widely used with Apache 2.4.
If you have mod_deflate enabled and don't set SetEnv no-gzip 1 for the virtual host/directory/etc. that's streaming data, gzip will not allow the buffer to flush until the request is complete.
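As a sketch of that note, assuming mod_deflate is active server-wide, the exemption can be scoped to just the streaming endpoint (the path here is a placeholder):

```apacheconf
# Hypothetical vhost fragment: exempt one streaming script from mod_deflate,
# so flushed chunks aren't held back by the gzip filter
<Location "/stream.php">
    SetEnv no-gzip 1
</Location>
```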
I was testing things out to see how to best use Drupal 8's new BigPipe functionality for streaming requests to the client, and I posted some more notes in this GitHub issue.
In my environment (Apache 2.4, php-fpm) it worked after turning off compression and padding the output to the output_buffering size; see this script:
header('Content-Encoding: none');
$padSize = ini_get('output_buffering');
for ($i = 0; $i < 10; $i++) {
    echo str_pad("$i<br>", $padSize);
    flush();
    sleep(1);
}
https://www.php.net/manual/en/function.fastcgi-finish-request.php was what saved my sanity. I tried all kinds of hacks and techniques to get Apache and php-fpm (7.4) to display progress in a browser for a long-running process, including Server-Sent Events, writing progress to a text file and polling it with XHR, flush()ing like crazy, etc. Nothing worked until I did something like this (in my MVC action controller):
public function longRunningProcessAction()
{
    $path = \realpath('./data/progress.sqlite');
    $db = new \PDO("sqlite:$path");
    $db->setAttribute(\PDO::ATTR_ERRMODE, \PDO::ERRMODE_EXCEPTION);
    $stmt = $db->prepare("UPDATE progress SET status = :status");
    $stmt->execute([':status' => "starting"]);
    header("content-type: application/json");
    echo json_encode(['status' => 'started']);
    // this here is critical ...
    session_write_close();
    fastcgi_finish_request();
    // otherwise it will NOT work
    for ($i = 0; $i <= 150; $i++) {
        usleep(250 * 1000);
        $stmt->execute([':status' => "$i of 150"]);
        // this also works
        file_put_contents('./data/progress.txt', "$i of 150");
    }
    $stmt->execute([':status' => "done"]);
}
// and...
public function progressAction()
{
    $path = \realpath('./data/progress.sqlite');
    $db = new \PDO("sqlite:$path");
    $query = 'SELECT status FROM progress';
    $stmt = $db->query($query);
    // and this is working as well..
    $text = file_get_contents('./data/progress.txt');
    return new JsonModel(['status' => $stmt->fetchColumn(), 'text' => $text]);
}
and then some JavaScript (jQuery):
var check_progress = function() {
    $.get("/my/job/progress").then(r => {
        $("#progress").text(r.status);
        if (r.status === "done") { return; }
        window.setTimeout(check_progress, 300);
    });
};
$.post("/long/running/process", data).then(check_progress);
Voilà!
A hack to make PHP-FPM with Apache 2.4 mod_proxy work:
call ob_end_clean() at the beginning of your PHP script
call flush() at least 21 times to flush your output instead of calling it once; always send at least one character between calls to flush()
using ob_end_clean() without ob_start() doesn't make sense to me, but it seems to help, and it returns true (= success!)
Related
When using mod_php or FastCGI the code executes perfectly and I get output every second, but after switching to php-fpm the code lags a few seconds before outputting, depending on output size.
I tried the following, and combinations of them:
setting output_buffering = 0 in php.ini
ob_implicit_flush
ob_start
ob_end_flush
header('Content-Encoding: none')
implicit_flush = 1
ob_end_clean
<?php
header('Content-Type: text/event-stream');
header('Cache-Control: no-cache');
while (true) {
    $time = date('r');
    echo "retry:1000\r\n";
    echo "data: " . $time;
    echo "\r\n\r\n";
    ob_flush();
    flush();
    sleep(1);
}
?>
This is for a production server, and mod_php is not an option. I also got it to work with FastCGI using
FcgidOutputBufferSize 0
Is there a way to make the code work on php-fpm so the output is sent immediately, as with mod_php and FastCGI?
P.S. Running: Ubuntu 18.04, Apache 2.4.29, PHP 7.2
After a few days I have discovered that the only way to get this to work in php-fpm is to fill the output buffer. This is really inefficient! Let me explain:
Say you are using Server-Sent Events and your output buffer is 4096 bytes, and you process every second: even if you have nothing to return, you still send about 4 KB of output to the client, whereas mod_php and FastCGI send data only when there is output.
If anyone else has this problem, this is my best solution: run the main site on php-fpm, e.g. example.com, and set up a sub-domain, e.g. push.example.com, with FastCGI / mod_php [NOT RECOMMENDED IN PRODUCTION] on the sub-domain. Now you can keep the connection open and process data without sending output to the client.
PS. I saved session variables in a database so both the domain and sub-domain can access them; see https://github.com/dominicklee/PHP-MySQL-Sessions. The other thing is to let the sub-domain send CORS headers; in PHP, add header('Access-Control-Allow-Origin: https://example.com');
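If you'd rather not set the CORS header in PHP on every script, the same header can be sent from the sub-domain's Apache config; this is a hypothetical fragment assuming mod_headers is enabled (domain names are placeholders):

```apacheconf
<VirtualHost *:443>
    ServerName push.example.com
    # Allow the main site to call this sub-domain cross-origin
    Header set Access-Control-Allow-Origin "https://example.com"
</VirtualHost>
```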
I use Ubuntu 17.04, Apache 2.4, proxy_fcgi, and php-fpm. Everything works and connects nicely, except flushing for Server-Sent Events.
Flushing used to work nicely with mod_fastcgi and FastCgiExternalServer with "-flush". Ubuntu 17.04 doesn't include mod_fastcgi, and proxy_fcgi is recommended instead.
With proxy_fcgi I've disabled gzip and output buffering and used "Content-Encoding: none"; even so, the only real way for connection_aborted and flush to work is to send around 32 K (I'm guessing this is because of proxy buffering?).
It says in the Apache Docs that you cannot set ProxyReceiveBufferSize or ProxyIOBufferSize less than 512.
There really should be an easier way to do this with proxy_fcgi.
Example code of sending data for Server Sent Events:
while (!connection_aborted()) {
    echo 'data: {}' . PHP_EOL . PHP_EOL;
    flush();
}
Edit: I've tried ob_flush() too, but I had previously disabled output buffering (ob_*) with ob_end_clean(), so ob_flush() returns an error.
Although this question was asked some years ago, I just ran into a similar problem with Apache 2.4 and mod_fcgid. The PHP application returned data directly without buffering (tested with the internal server: php -S 0.0.0.0:8080 index.php), but it was buffered when used with Apache.
The following configuration disables output buffering for mod_fcgid (the default size is 65536 bytes):
FcgidOutputBufferSize 0
https://httpd.apache.org/mod_fcgid/mod/mod_fcgid.html#FcgidOutputBufferSize
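For context, a minimal sketch of where that directive might live; the server name is a placeholder and the rest of the mod_fcgid setup is assumed to exist already:

```apacheconf
<VirtualHost *:80>
    ServerName example.com
    # Default is 65536; 0 disables mod_fcgid's output buffer entirely
    FcgidOutputBufferSize 0
</VirtualHost>
```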
I'm converting PHP code to HHVM. One page in particular sometimes needs to flush() a status message to the browser before sending some emails and a few other slow tasks, and then updating the status message.
Before hhvm (using php-fpm and nginx) I used:
header('Content-Encoding: none');
echo "About to send emails...";
if (ob_get_level() > 0) { ob_end_flush(); }
flush();
// Emails sent here
echo "Emails sent.";
So the Content-Encoding header stops gzip being used; then the flush sends the first message, and the second message is sent when the page ends.
Using HHVM (and nginx), setting the Content-Encoding header works (it shows up in the browser), but either HHVM or nginx is ignoring it and sending the page gzipped anyway, so the browser sees Content-Encoding: none together with binary data it can't interpret.
How can I disable gzip inside php code on HHVM?
(I know I could turn it off in the config files, but I want it kept on for nearly every page load except a few that will run slower.)
While my suggestion would be to have different nginx location paths with different gzip configuration, here's a better alternative solution to achieve what you want.
Better Solution:
It is often referred to as bad practice to keep a connection open (and the browser loading bar spinning) while you're doing work in the background.
Since PHP 5.3.3 there is a function fastcgi_finish_request() which flushes the data and closes the connection, while the script continues to run in the background.
Now, this is unfortunately not supported yet on HHVM. However, there is an alternative way of doing this.
HHVM alternative:
You can use register_postsend_function('function_name'); instead. This closes the connection, and the given function will be executed in the background.
Here is an example:
<?php
echo "and ...";
register_postsend_function(function() {
    echo "... you should not be seeing this";
    sleep(10); // do a lot of work
});
die();
My flush mechanism stopped working, and I'm not sure why.
I'm trying to run a simple flush example now, with no luck:
echo "before sleep";
flush();
sleep(5);
echo "after sleep";
After doing some reading, and learning that nginx had recently been installed on my server, I asked for it to be disabled for my domain (the server admin said he disabled it for this specific domain).
Also, I tried disabling gzip by adding these lines to .htaccess:
SetOutputFilter DEFLATE
SetEnv no-gzip dont-vary
I also tried adding these to my PHP file:
ini_set('output_buffering','on');
ini_set('zlib.output_compression', 0);
Nothing helps. It sleeps 5 seconds and then displays all the content together.
I've used flush before, including through the output buffer (ob_start, ob_flush, etc.); now I'm just trying to make the simplest example work.
"Stopped working" is a pretty high-level description. You should actually take a look at what works or not to find out more.
This can be done by monitoring the network traffic. You will see how much of the response is already done and in which encoding it's sent.
If the response is getting compressed, note that most compression functions need a certain number of bytes before they can compress anything. So even if you call flush() to make PHP flush the output buffer, there can still be a stage, either within PHP's output filtering or in the server, waiting for more data before compressing. So besides compression done by Apache, check whether your PHP configuration does compression as well, and disable it.
If you don't want to monitor your network traffic, the curl command-line utility does a pretty good job of displaying what's going on, and it might be easier to use than network monitoring.
curl -Ni --raw URL
Make sure you use the -N switch, which disables buffering by curl, so you see your script's/server's output directly.
Please see the section Inspecting HTTP Compression Problems with Curl in a previous answer of mine that shows some curl commands to look into the output of a request while it's done with compression as well.
curl is also able to show you compressed data uncompressed, and you can disable compression per request, so regardless of the server or PHP output-compression settings you can test in a more fine-grained way.
<?php
ini_set('zlib.output_handler', '');
ini_set('zlib.output_compression', 0);
ini_set('output_handler', '');
ini_set('output_buffering', false);
ini_set('implicit_flush', true);
apache_setenv('no-gzip', '1');
for ($i = 0; $i < 5; $i++) {
    echo str_repeat(chr(0), 4096); // flood Apache with null bytes so it considers the packet big enough to send
    echo "$i<br/>";
    flush();
    sleep(1);
}
?>
I thought flush(); would work, at least from what Google/Stack Overflow tell me, but on my Windows WAMP (Windows, Apache, MySQL, PHP) setup it doesn't.
Is there some PHP setting I have to set to make flush() work?
Here's my code:
<?php
echo "Fun";
flush();
sleep(5);
echo "<br>Mo";
?>
The code just outputs everything together when the script finishes executing (after 5 seconds). I don't want this; I want 'Fun' to show up right away, and then 'Mo' after 5 seconds.
I've tried other combinations of flush like ob_end_flush(); or ob_implicit_flush(true); but nothing is working. Any ideas?
So that's what I found out:
Flush does not work under Apache's mod_gzip or Nginx's gzip because, logically, the content is being gzipped, and to do that the server must buffer content before compressing it. Any sort of web-server gzipping will affect this. In short, on the server side we need to disable gzip and decrease the FastCGI buffer size. So:
In php.ini:
output_buffering = Off
zlib.output_compression = Off
In nginx.conf:
gzip off;
proxy_buffering off;
Also have these lines at hand, especially if you don't have access to php.ini:
ini_set('zlib.output_compression', 0);
ini_set('implicit_flush', 1);
ob_end_clean();
set_time_limit(0);
Last, if you have them, comment out the lines below:
ob_start('ob_gzhandler');
ob_flush();
PHP test code:
ob_implicit_flush(1);
for ($i = 0; $i < 10; $i++) {
    echo $i;
    // pad the output so the buffer reaches the minimum size needed to flush data
    echo str_repeat(' ', 1024 * 64);
    sleep(1);
}
The script works fine from the CLI, displaying "Fun" and waiting 5 seconds before displaying "<br>Mo".
For a browser the results might be a bit different because:
The browser won't start rendering right away. Three bytes of data for an HTML document aren't enough to do anything, so it'll most likely wait for a few more.
Implicit IO buffering on the lib level will most likely be active until a newline is received.
To work around 1), use the text/plain content type for your test; 2) needs newlines, so do an echo "Fun\n"; and an echo "<br>Mo\n";. Of course you won't be using text/plain for real HTML data.
If you're using CGI/FastCGI, forget it! These don't implement flush. The web server might have its own buffer.
You can disable all output buffering in PHP with the following command:
ob_implicit_flush();
If the problem persists, although you explicitly set
implicit_flush = yes
in your php.ini, you might also want to set
output_buffering = off
which did the trick in my case (after pulling my hair out for 4+ hours).
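For quick reference, the two php.ini settings mentioned in this answer, as a sketch:

```ini
; disable PHP-level buffering so flush() reaches the SAPI immediately
implicit_flush = On
output_buffering = Off
```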
Check your php.ini for output_buffering.
Maybe the problem here is Apache, which may also have buffers of its own...