HHVM and nginx not sending headers immediately - php

I have a PHP download script that dynamically bundles files into zips. So that the user feels like something is happening immediately, we send header("Content-disposition: attachment; filename=your_file.zip"); as soon as possible and then start trickling out the files. In our traditional Apache/PHP setup that works great, but we're trying to get our codebase to run on an nginx/HHVM server and it doesn't send the headers the same way.
Instead, nginx/HHVM waits to send the headers until it has done a lot of the processing (I'm not sure how much) and, from HHVM's perspective, sent out several files. This means the user ends up waiting a long time before getting a Save As dialog, which creates a bad experience.
In my nginx site config, I set
fastcgi_buffering off;
fastcgi_keep_conn on;
proxy_buffering off;
I also tried adding flush(); and header('X-Accel-Buffering: no'); in PHP but nothing seems to help.
Is there something else I need to change?
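For reference, the ordering the question describes (attachment headers first, then data trickled out with explicit flushes) might look roughly like the sketch below; the zip assembly itself is elided, and $filesToBundle and the 8 KB chunk size are placeholders:
<?php
// Sketch only: send the attachment headers up front, then push data out in
// small pieces, flushing after each piece so the client sees bytes early.
header('Content-Disposition: attachment; filename=your_file.zip');
header('Content-Type: application/zip');
header('X-Accel-Buffering: no');      // ask nginx not to buffer this response

// Make sure no PHP-level output buffer swallows the early output.
while (ob_get_level() > 0) {
    ob_end_flush();
}
flush();                              // headers go out here, before any file data

foreach ($filesToBundle as $path) {   // $filesToBundle is a placeholder list
    $fh = fopen($path, 'rb');
    while (!feof($fh)) {
        echo fread($fh, 8192);        // real code would emit zip entries here
        flush();
    }
    fclose($fh);
}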

Related

Can't get nginx to flush buffer with php7-fpm

I have a need for a longish-running (7-8 seconds) php script to output partial results to the user as they are found. I have previously been able to accomplish this with an older version of php-fpm and nginx by doing the following:
Using these config settings in php:
#ini_set('output_buffering', 0);
#ini_set('implicit_flush', 1);
#ini_set('zlib.output_compression', 0);
#ob_end_clean();
set_time_limit(0);
header('X-Accel-Buffering: no');
and running ob_implicit_flush(1); flush(); every time I needed to output partial results.
Using these directives for nginx:
fastcgi_keep_conn on;
proxy_buffering off;
gzip off;
However, with an upgrade to PHP 7 and nginx 1.10.3, these settings no longer work.
I have tried adding these directives to nginx:
fastcgi_max_temp_file_size 0;
fastcgi_store off;
fastcgi_buffering off;
But these don't seem to do anything, either. The results are still buffered until the PHP script finishes running and then sent all at once.
Is what I am asking for still possible?
(I appreciate suggestions that there are other ways to send partial results that don't involve disabling buffers, but that's not part of my question).
I think the only way you can do that is to split the initial script into multiple scripts.
You can then call each script from the frontend using ajax and append its output to the DOM.
PHP scripts are synchronous for the most part, but ajax calls run asynchronously, so you can execute multiple PHP scripts in parallel.
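As a minimal sketch of that idea (partial_results.php, compute_chunk(), and the chunk parameter are all hypothetical names), each ajax call asks for one chunk and gets JSON back that the frontend appends to the DOM:
<?php
// partial_results.php (hypothetical): each ajax request does one slice of the
// slow work, so no single request has to stay open for 7-8 seconds.
$chunk = isset($_GET['chunk']) ? (int) $_GET['chunk'] : 0;

$result = compute_chunk($chunk);

header('Content-Type: application/json');
echo json_encode(array('chunk' => $chunk, 'result' => $result));

function compute_chunk($n)
{
    sleep(1);                          // stand-in for ~1 second of real work
    return "partial result $n";
}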

nginx: trigger slow php-fpm process, but quickly return status 200

Is it possible for nginx to trigger a php-fpm process, but then close the nginx worker and quickly return an empty page with status 200?
I have some slow php processes that need kicking off a few times a week. They can take between 3 and 4 minutes each. I trigger them with a cron manager site. The php process writes a lock file at the start, and when the process is complete an email is sent and finally the lock file is removed.
Following this guide, in my php-fpm worker pool, I have this: request_terminate_timeout = 300 and in my nginx site config I have fastcgi_read_timeout 300;
It works, but I don't care about the on-screen result. And the cron service I use has a time limit of 5 seconds, and after repeated timeouts, it disables the job.
Yes, I know I could fork a process in php, let it run in the background, and return a 200 to nginx. And yes, I could pay and upgrade my cron service. Nonetheless, it would be an interesting and useful thing to know, anyway.
So, is this possible, or does php-fpm require an open and "live" socket? I ask that because on the "increase your timeout" page referred to above, one answer says
"Lowest of three. It’s line chain. Nginx->PHP-FPM->PHP. Whoever dies
first will break the chain".
In other words, does that mean that I can never "trigger" a process, but then close the nginx part of the trigger?
You can.
exec() a PHP CLI script with a trailing &, redirecting output to a log file or /dev/null, and pass any parameters as JSON or serialized data (use escapeshellarg()); the exec() will return 0 immediately (no error); or
use PHP's ignore_user_abort(), send a Connection: close header, and flush any output buffers as well as calling a normal flush(). Put any slow code after that. You'll need to test this under nginx.
Either way, return a status code that signifies acceptance without a body, such as 202 Accepted. And it's up to you to make sure your script doesn't run forever; give it a heartbeat so it touch()es a file every so often. If the file is old and the script is still running, kill it.
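A minimal sketch of the first option, assuming a hypothetical worker script at /var/www/jobs/slow_job.php and an example parameter payload:
<?php
// Kick the slow work off as a separate CLI process and return right away.
$payload = escapeshellarg(json_encode(array('task' => 'weekly-report')));
exec('php /var/www/jobs/slow_job.php ' . $payload . ' > /dev/null 2>&1 &');

// Respond immediately so the cron service sees a fast answer.
http_response_code(202);
echo 'accepted';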
Thanks to @Walf's answer combined with this example from the php site, this SO answer, and a little fiddling, the following appears to be a solution for nginx that requires no messing with any php or nginx ini or conf files.
$start = microtime(true);
ob_end_clean();                    // drop any buffer PHP opened automatically
header("Connection: close");
header('X-Accel-Buffering: no');   // tell nginx not to buffer this response
header("Content-Encoding: none");
ignore_user_abort(true);           // optional
ob_start();
echo 'Text user will see';
$size = ob_get_length();
header("Content-Length: $size");   // lets the browser know the response is complete
ob_end_flush();                    // strange behaviour: it will not work...
flush();                           // ...unless both of these are called!
sleep(35);                         // simulate something longer than the default 30s timeout
$time_elapsed_secs = microtime(true) - $start;
echo $time_elapsed_secs;           // you will never see this!
Or, at least, it works perfectly for what I want it to do. Thanks for the answers.

How can I disable gzip inside php code on HHVM? (eg setting content-encoding header)

I'm converting php code to hhvm. One page in particular sometimes needs to flush() a status message to the browser before sending some emails and running a few other slow tasks, and then update the status message.
Before hhvm (using php-fpm and nginx) I used:
header('Content-Encoding: none;');
echo "About to send emails...";
if (ob_get_level() > 0) { ob_end_flush(); }
flush();
// Emails sent here
echo "Emails sent.";
So the content-encoding stops gzip being used, then the flush sends the first message, then the second message is sent when the page ends.
Using HHVM (and nginx), setting the Content-Encoding header works (it shows up in the browser), but either hhvm or nginx ignores it and sends the page gzipped anyway, so the browser sees Content-Encoding: none together with gzipped data and displays it as binary garbage.
How can I disable gzip inside php code on HHVM?
(I know I could turn it off in the config files, but I want it kept on for nearly every page load except a few that will run slower.)
While my first suggestion would be to use different nginx location blocks with different gzip configuration, here's a better alternative to achieve what you want.
Better Solution:
It is often considered bad practice to keep the connection open (and the browser's loading bar spinning) while you're doing work in the background.
Since PHP 5.3.3 there is a function, fastcgi_finish_request(), which flushes the data and closes the connection while the script continues to work in the background.
Now, this is unfortunately not yet supported on HHVM. However, there is an alternative way of doing this.
HHVM alternative:
You can use register_postsend_function('function_name'); instead. This closes the connection, and the given function will be executed afterwards in the background.
Here is an example:
<?php
echo "and ...";
register_postsend_function(function () {
    echo "... you should not be seeing this";
    sleep(10); // do a lot of work after the response has been sent
});
die();
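For comparison, on php-fpm the fastcgi_finish_request() pattern mentioned above would look roughly like this sketch (not HHVM code; send_emails() is a placeholder):
<?php
echo "About to send emails...";
// Flush the output and close the connection; php-fpm keeps the script running.
fastcgi_finish_request();

// Everything below runs after the response has already been sent.
send_emails();

function send_emails()
{
    sleep(10); // stand-in for the slow email-sending work
}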

Nginx as a reverse-proxy while long-polling

I've got Apache as a back-end server, which runs PHP scripts, and nginx as a reverse-proxy server which deals with static content.
A PHP script gives me the ID of some process and then performs that (pretty long) process. I need to pass only the ID of that process to the browser.
// ...
ob_start();
echo json_encode($arResult); // only this data should be passed to browser
$contentLength = ob_get_length();
header('Connection: close');
header('Content-Length: ' . $contentLength);
ob_end_flush();
ob_flush();
flush();
// then performed a long process
(I check the status of the process with another ajax script.)
This works fine under Apache alone, but I have problems when Apache is behind nginx: in that case I get the response only when the process has completely finished.
nginx settings:
server {
#...
proxy_set_header Connection close;
proxy_pass_header Content-Length;
#...
}
But I still get Connection keep-alive in FireBug.
How can I get nginx to immediately give a response from apache?
Hope the question is clear.
Thanks.
Have you tried proxy_buffering off in nginx? I'm not sure it will close the connection, but at least the response will be transmitted as-is to the client. :-)
Nginx does not support any of the flush() methods in PHP when used in a FastCGI or reverse-proxy setup.
I have tested all of the many proxy_buffering_*, buffer_size_* configurations without success under nginx/0.8.49. It will always wait until the PHP process exits.
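One per-response workaround that may be worth testing here: nginx also honours an X-Accel-Buffering: no response header for proxied responses, so the question's script could send it alongside the other headers. A sketch, to be verified against your nginx version:
<?php
ob_start();
echo json_encode($arResult);          // $arResult as in the question above
$contentLength = ob_get_length();
header('X-Accel-Buffering: no');      // ask nginx not to buffer this response
header('Connection: close');
header('Content-Length: ' . $contentLength);
ob_end_flush();
flush();
// ... the long-running work continues here ...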
If your content is big, you have to tune the proxy buffers.

php simultaneous file downloads from the same browser and same php script

<?php
$filename= './get/me/me_'.rand(1,100).'.zip';
header("Content-Length: " . filesize($filename));
header('Content-Type: application/zip');
header('Content-Disposition: attachment; filename=foo.zip');
readfile($filename);
?>
Hi,
I have this simple code that forces a random file download. My problem is that if I call the script two or more times from the same browser, the second download won't start until the first has completed or been interrupted, so I can only download one file at a time.
Do you have any clue?
This may be related to PHP's session handling.
Using the default session handler, when a PHP script opens a session it locks it. Subsequent scripts that need to access it have to wait until the first script is finished with it and unlocks it (which happens automatically at shutdown, or explicitly via session_write_close()). This manifests as the script not doing anything until the previous one finishes, in exactly the way you describe.
Clearly you aren't starting the session explicitly, but there's a config flag that causes the session to start automatically: session.auto_start - http://www.php.net/manual/en/session.configuration.php
Either use phpinfo() to determine whether this is set to true, or look in your config. You could also try adding session_write_close() to the top of the script and see if it makes the issue go away.
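A quick way to check that flag from code, as a one-line sketch:
<?php
// "1" means sessions are started automatically for every request
var_dump(ini_get('session.auto_start'));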
Just guesses; there could be different reasons.
First, your server could restrict the number of parallel connections or child processes, but I guess this isn't the problem.
Second, it is more likely that the client restricts the number of connections. The "normal" browser opens only two connections at a time to a given server; modern browsers allow up to 8 (?) connections. This is a simple restriction to avoid problems that could occur with slow servers.
One workaround could be to place every download on a "virtual" subdomain.
Give it a try!
Just to say that session_write_close(); solved the problem for me.
I was using session_destroy(); (which worked), but that was not much good when I needed to keep the session data :)
All you need to do is place session_write_close(); just before you start streaming the file data.
Example:
<?php
$filename= './get/me/me_'.rand(1,100).'.zip';
session_write_close();
header("Content-Length: " . filesize($filename));
header('Content-Type: application/zip');
header('Content-Disposition: attachment; filename=foo.zip');
readfile($filename);
?>
I'd further investigate Ralf's suggestion about the server restrictions and start by checking the log files to make sure the second request is received by the server at all. With that knowledge you can eliminate one of the possibilities and at least see which side the problem is on.
On the client side (you didn't mention which browser it is): if it's Firefox, try installing the Live HTTP Headers extension to see what happens to the requests you send and whether the browser receives any response from the server.
As far as I can find, there is no PHP configuration setting that restricts the maximum number of downloads or anything like that; besides, such a configuration would be outside the scope of PHP.
Therefore, I can come to only two conclusions:
The first is that this is browser behaviour; see if the problem is repeated across multiple browsers (let me know if it is). The HTTP spec does say that only two connections to the same domain should be active at any one time, but I wasn't aware that this affected file downloads as well as page downloads. A way of getting around such a limitation is to allocate a number of sub-domains to the same site (or use a catch-all subdomain DNS entry) and, when generating a link to the download, select a random sub-domain to download from (see the sketch after this answer). This should work around the multiple-request issue if it is a browser problem.
A second and much more unlikely option is that (and this only applies if you are using Apache) your MaxKeepAliveRequests configuration option is set to something ridiculously low and KeepAlives are enabled. However, I highly doubt that is the issue, so I suggest investigating the browser possibility first.
Are you getting an error message from the browser when the second download is initiated, or does it just hang? If it just hangs, that suggests it is a browser issue.
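A sketch of the sub-domain idea from the first conclusion; dl1/dl2/dl3.example.com and download.php are hypothetical names, and all sub-domains are assumed to point at the same site:
<?php
// Pick a random sub-domain for each download link so the browser's
// per-host connection limit doesn't serialise the downloads.
$hosts = array('dl1.example.com', 'dl2.example.com', 'dl3.example.com');
$host  = $hosts[array_rand($hosts)];
$file  = 'me_' . rand(1, 100) . '.zip';
echo '<a href="http://' . $host . '/download.php?file=' . urlencode($file) . '">Download</a>';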
