I have a web application on a site that takes a while (~10 seconds) to complete a portion of the page near the bottom - it has been as optimized as it can be, and caching is not an option.
We have compression enabled on the server via the .htaccess directive SetOutputFilter DEFLATE. The problem is that this causes the whole page to be held until completion before it starts outputting to the user, which is not optimal, as the user sees nothing until the page completes.
I have also tried it via PHP's ob_start("ob_gzhandler") method.
Currently I have a <FilesMatch > block in my .htaccess excluding this specific script from compression.
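For reference, such an exclusion might look something like this in .htaccess (the script name is a placeholder):

<FilesMatch "slow-page\.php$">
    SetEnv no-gzip 1
</FilesMatch>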
Basically my question is this: is there a way to, say, chunk the gzip or deflate output so that the user gets it in pieces and can see that the page has begun loading?
I would say: no. I don't think there is a way provided by HTTP.
If you are using the ob_start("ob_gzhandler") method, you can do this - you need to look at the flush and ob_flush functions.
Some sample code - try loading it with curl, or use Fiddler to inspect the actual HTTP responses:
<?php
ob_start('ob_gzhandler');

print "chunk 1";
ob_flush(); // flush PHP's output buffer (runs it through the gzip handler)
flush();    // flush the web server's buffer out to the client
sleep(2);
print "chunk 2";
ob_end_flush();
Unfortunately, browsers don't seem to display this in chunks - I think this is because the data of each chunk is too small. You can verify this effect by calling wget -O - -q http://chunktest/chunktest.php on your test file.
If the page takes that long to load, a creative way to handle it is to serve a very quick-loading page with an AJAX call that pulls in the long-loading content. We do this for the pages that pull detailed member usage statistics... Other sites, AdSense for example, do this on their reports page. A rough sketch of the idea follows.
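A minimal sketch of that pattern, assuming a hypothetical slow-stats.php endpoint that renders only the expensive fragment:

<?php /* page.php: sends instantly, then fetches the slow part */ ?>
<html>
<body>
<p>The fast parts of the page render immediately.</p>
<div id="stats">Loading statistics...</div>
<script>
// after the page has rendered, fetch the slow fragment and drop it in
fetch('slow-stats.php')
    .then(function (response) { return response.text(); })
    .then(function (html) {
        document.getElementById('stats').innerHTML = html;
    });
</script>
</body>
</html>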
I'm converting php code to hhvm. One page in particular sometimes needs to flush() a status message to the browser before sending some emails and running a few other slow tasks, and then update the status message.
Before hhvm (using php-fpm and nginx) I used:
header('Content-Encoding: none;');
echo "About to send emails...";
if (ob_get_level() > 0) { ob_end_flush(); }
flush();
// Emails sent here
echo "Emails sent.";
So the Content-Encoding header stops gzip being used, the flush sends the first message, and the second message is sent when the page ends.
Using HHVM (and nginx), setting the Content-Encoding header works (it shows up in the browser), but either hhvm or nginx is ignoring it and sending the page gzipped anyway, so the browser, seeing Content-Encoding: none, treats the gzipped body as binary data.
How can I disable gzip inside php code on HHVM?
(I know I could turn it off in the config files, but I want it kept on for nearly every page load except a few that will run slower.)
While my suggestion would be to have different nginx location paths with different gzip configurations, here's a better alternative to achieve what you want.
Better Solution:
It is often considered bad practice to keep a connection open (and the browser's loading indicator spinning) while you're doing work in the background.
Since PHP 5.3.3 there is a function fastcgi_finish_request() (available under PHP-FPM) which flushes the data and closes the connection while the script continues to work in the background.
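With PHP-FPM, a minimal sketch of it looks like this (the slow work is simulated with sleep()):

<?php
echo "Status: emails are being sent...";
fastcgi_finish_request(); // response is flushed and the connection closed here
sleep(10);                // slow work continues server-side; the client is already done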
Unfortunately, this is not yet supported on HHVM. However, there is an alternative way of doing it.
HHVM alternative:
You can use register_postsend_function('function_name'); instead. This closes the connection, and the given function will be executed in the background.
Here is an example:
<?php
echo "and ...";
register_postsend_function(function () {
    // runs after the response has been sent; the client never sees this
    echo "... you should not be seeing this";
    sleep(10); // do a lot of work
});
die();
I work on a PHP project and I use flush().
I did a lot of searching and found that PHP sends long script output to the browser in chunks rather than sending all the data at once when the script terminates.
I want to know the size of these chunks - I mean, how many bytes must the output reach before PHP sends it to the browser?
It's not only PHP that chunks the data; it's actually the job of Apache (or Tomcat, etc.) to do this. That's why the default is to turn off the "chunking" in PHP and leave it to Apache. Even if you force a flush from PHP, it can still get trapped by Apache. From the manual:
flush() may not be able to override the buffering scheme of your web server and it has no effect on any client-side buffering in the browser. It also doesn't affect PHP's userspace output buffering mechanism. This means you will have to call both ob_flush() and flush() to flush the ob output buffers if you are using those.
There's a Wikipedia article on transfer encoding / chunking: http://en.wikipedia.org/wiki/Chunked_transfer_encoding
Apache gets more complicated with gzip or deflate encoding; you'll need to consult the Apache documentation for how you can configure it.
I think you are wrong; see this code:

echo str_repeat(' ', 1024); // the str_repeat only pads the browser's buffer so it starts showing data
for ($i = 0; $i < 10; $i++) {
    echo $i;
    flush();
    sleep(1);
}

If you run it, you will see each single byte being sent to the browser and printed as it arrives.
I think my question seems pretty casual but bear with me as it gets interesting (at least for me :)).
Consider a PHP page whose purpose is to read a requested file from the filesystem and echo it as the response. Now the question is how to enable caching for this page. The point is that the files can be pretty huge, and enabling caching should save the client from downloading the same content again and again.
The ideal strategy would be to use the If-None-Match request header and the ETag response header to implement a reverse proxy cache system. Even though I know this much, I'm not sure whether this is possible or what I should return as the response to implement this technique!
Serving huge or many auxiliary files with PHP is not exactly what it's made for.
Instead, look at X-accel for nginx, X-Sendfile for Lighttpd or mod_xsendfile for Apache.
The initial request gets handled by PHP, but once the file to download has been determined, it sets a few headers to indicate that the server should handle the file sending, after which the PHP process is freed up to serve something else.
You can then use the web server to configure the caching for you.
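A minimal sketch of the handoff, assuming mod_xsendfile on Apache (the path and file name are placeholders):

<?php
$file = '/var/files/huge-download.zip'; // hypothetical path; resolve and validate it yourself
header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="huge-download.zip"');
header('X-Sendfile: ' . $file); // Apache takes over from here (nginx uses X-Accel-Redirect instead)
exit;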
Statically generated content
If your content is generated from PHP and particularly expensive to create, you could write the output to a local file and apply the above method again.
If you can't write to a local file or don't want to, you can use HTTP response headers to control caching:
Expires: <absolute date in the future>
Cache-Control: public, max-age=<relative time in seconds since request>
This will cause clients to cache the page contents until they expire or until the user forces a page reload (e.g. presses F5).
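In PHP that might look like this, assuming a one-hour lifetime:

<?php
// both headers express the same one-hour lifetime, in absolute and relative form
header('Expires: ' . gmdate('D, d M Y H:i:s', time() + 3600) . ' GMT');
header('Cache-Control: public, max-age=3600');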
Dynamically generated content
For dynamic content you want the browser to ping you every time, but only send the page contents if there's something new. You can accomplish this by setting a few other response headers:
ETag: <hash of the contents>
Last-Modified: <absolute date of last contents change>
When the browser pings your script again, they will add the following request headers respectively:
If-None-Match: <hash of the contents that you sent last time>
If-Modified-Since: <absolute date of last contents change>
The ETag is mostly useful for reducing network traffic, since in some cases you have to generate the contents anyway just to calculate their hash.
The Last-Modified is the easiest to apply if you have local file caches (files have a modification date). A simple condition makes it work:
if (!file_exists('cache.txt') ||
    !isset($_SERVER['HTTP_IF_MODIFIED_SINCE']) || // the first request carries no such header
    filemtime('cache.txt') > strtotime($_SERVER['HTTP_IF_MODIFIED_SINCE'])) {
    // update cache file and send back contents as usual (+ cache headers)
} else {
    header('HTTP/1.0 304 Not Modified');
}
If you can't do file caches, you can still use ETag to determine whether the contents have changed meanwhile.
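A rough sketch of the ETag variant, assuming the generated contents are already in $contents:

<?php
$etag = md5($contents); // hash of what we are about to send
header('ETag: "' . $etag . '"');
if (isset($_SERVER['HTTP_IF_NONE_MATCH']) &&
    trim($_SERVER['HTTP_IF_NONE_MATCH'], '"') === $etag) {
    header('HTTP/1.0 304 Not Modified'); // client already has this version
    exit;
}
echo $contents;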
My flush mechanism stopped working; I'm not sure why.
I'm trying to run a simple flush example now, with no luck:
echo "before sleep";
flush();
sleep(5);
echo "after sleep";
After doing some reading and learning that nginx was installed on my server lately, I asked for it to be disabled for my domain (the server admin said he disabled it for this specific domain).
I also tried disabling gzip by adding these lines to .htaccess:
SetOutputFilter DEFLATE
SetEnv no-gzip dont-vary
I also tried adding these to my PHP file:
ini_set('output_buffering','on');
ini_set('zlib.output_compression', 0);
Nothing helps; it sleeps 5 seconds and then displays all the content together.
I've used flush() before, including through the output buffer (ob_start(), ob_flush(), etc.); now I'm just trying to make the simplest example work.
"Stopped working" is a pretty high level. You should actually take a look what works or not to find out more.
This can be done by monitoring the network traffic. You will see how much of the response is already done and in which encoding it's send.
If the response is getting compressed, most compression functions need a certain number of bytes before they can compress them. So even you do a flush() to signal PHP to flush the output buffer, there still can be a place either within PHP output filtering or the server waiting for more to do the compression. So next to compression done by apache, check if your PHP configuration does compression as well and disable it.
If you don't want to monitor your network traffic, the curl command-line utility does a pretty good job of displaying what's going on, and it might be easier to use than network monitoring:
curl -Ni --raw URL
Make sure you use the -N switch, which disables buffering in curl so you see your script's/server's output directly.
Please see the section Inspecting HTTP Compression Problems with Curl in a previous answer of mine that shows some curl commands to look into the output of a request while it's done with compression as well.
curl can show you possibly compressed data in uncompressed form, and you can disable compression per request, so regardless of the server's or PHP's output compression settings, you can test in a more fine-grained way.
<?php
ini_set('zlib.output_handler', '');
ini_set('zlib.output_compression', 0);
ini_set('output_handler', '');
ini_set('output_buffering', false);
ini_set('implicit_flush', true);
if (function_exists('apache_setenv')) {
    apache_setenv('no-gzip', '1'); // only available when running under mod_php
}
for ($i = 0; $i < 5; $i++) {
    // flood Apache with null bytes so it considers the packet big enough to be sent
    echo str_repeat(chr(0), 4096);
    echo "$i<br/>";
    flush();
    sleep(1);
}
?>
I'm working on a PHP script which generates large (multi-MB) output on the fly without knowing the length in advance. I am writing directly to php://output via fwrite() and have tried both standard output and Transfer-Encoding: chunked (encoding the chunks as required), but no matter what I try, the browser waits until all the data is written before displaying a download dialog. I have also tried flush()ing after the headers and after each chunk, but this makes no difference either.
I'm guessing that Apache is buffering the output, as the browser would normally display the dialog after receiving a few kB from the server.
Does anyone have any ideas on how to stop this buffering and flush the data to the browser as it is generated?
First of all, as BlaM mentioned in his comment, if output_buffering is enabled in the PHP configuration, it won't work, so it would be useful to know your phpinfo().
Next, try whether it works with a big file that is stored on your web server, outputting it using readfile(). And together with this, check that you send the correct headers. Hints on how to use readfile() and send the correct headers are provided here: StackOverflow: How to force a file download in PHP
And while you are at it, call ob_end_flush() or ob_end_clean() at the top of your script.
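Putting those hints together, a minimal sketch (the file path and name are placeholders):

<?php
// drop any userspace output buffers so nothing holds the data back
while (ob_get_level() > 0) {
    ob_end_clean();
}
$file = '/path/to/big-file.zip'; // hypothetical file
header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="big-file.zip"');
header('Content-Length: ' . filesize($file));
readfile($file);
exit;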