Cloud Run fails to serve complete file - client_disconnected_after_partial_response - PHP

We have a problem in our PHP application that runs on GCP Cloud Run.
Environment: PHP 8 (nginx + php-fpm) in Docker, based on the official image "php:8.1-fpm".
Certain files fail to be sent to the client when the client opens a page in our system. I am not able to reproduce the same behavior locally in Docker (with exactly the same image), so I guess this is Cloud Run specific.
The file is served by this PHP code:
...
header('Last-Modified: ' . $lmstring);
header('Content-Length: ' . filesize($filename));
readfile($filename);
die;
}
Getting this warning from Cloud Run:
"Truncated response body. Usually implies that the request timed out or the application exited before the response was finished."
Info log from Google HTTPS load balancer (error code explained here):
statusDetails: "client_disconnected_after_partial_response"
A real example and some facts I have already found out:
Opening page X in the internal system downloads Y other files - most of these Y files (CSS + JS) are sent successfully, but some are truncated to ~3641 characters. All files that are sent incompletely are usually truncated to the same length (rarely a bit less). It's always the same files that get truncated, and I didn't spot anything they have in common. Sometimes it is a JS file, sometimes a CSS file. The response code is HTTP 200.
I didn't find any errors in the PHP error output, even with error reporting turned on at all levels. We get errors from PHP normally in other cases, so I don't expect anything is being silenced.
Does anybody have any idea why this might be happening and how to deal with it?

It seems I found the solution. The answer in the following thread helped:
Google Cloud Run website timeouts when content length is between 4013-8092 characters. What is going on?
An attempt to explain the behavior is in that thread too.
Turning off FastCGI buffering in nginx got rid of the problem:
fastcgi_buffering off; (in nginx.conf)
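For reference, nginx also honors the X-Accel-Buffering response header for FastCGI responses, so if you'd rather not disable buffering globally, a per-response sketch like this (assuming an otherwise default nginx/php-fpm setup) should have the same effect for just this download:
<?php
// Sketch: ask nginx to skip FastCGI buffering for this one response only.
// Assumes $filename and $lmstring are set as in the question.
header('X-Accel-Buffering: no'); // nginx-specific header, disables fastcgi/proxy buffering
header('Last-Modified: ' . $lmstring);
header('Content-Length: ' . filesize($filename));
readfile($filename);
die;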

Related

NGINX/PHP - Cannot serve jpeg via php script

The following script is trivial and works on Apache without issue:
header('Content-Type: image/jpeg');
echo file_get_contents('./photo.jpg');
On NGINX/PHP-FPM I get a blank page. I have tried two different virtual servers: one I created myself, and the homestead_improved box ( https://github.com/Swader/homestead_improved ), which is based on Laravel Homestead.
Error reporting is on, and there are no errors. If I remove the header and just use:
echo file_get_contents('./photo.jpg');
I get the binary converted to ASCII and see the strange characters; the file is being loaded correctly.
I thought the issue might be a missing header, so I tried content length:
header('Content-Type: image/jpeg');
$contents = file_get_contents('./photo.jpg');
header('Content-length: ' . strlen($contents));
echo $contents;
This gives a different result: The page never loads, as if the browser never receives all the bytes it's expecting.
If I print strlen($contents) it displays the file size in bytes. PHP is loading the image correctly, but it's never reaching the browser.
The script works on an Apache server so the issue seems to be NGINX or PHP-FPM.
I have tried different images (one 80kb, one 2.2mb), the result is the same. I've also tried readfile instead of file_get_contents.
Update
In Chrome developer tools, the full image is downloaded and shown in the Network tab. The browser is getting the data but it's not displayed.
Your problem lies in process memory. PHP uses a different configuration file when running under PHP-FPM than when running under Apache, for instance.
The problem with file_get_contents is that it reads the entire file into memory. In the case of an image file, memory can reach its limit and the HTTP response never completes.
To fix the problem, you can either stream the image using fopen or increase the PHP-FPM memory limit.
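A minimal streaming sketch along those lines, reusing the ./photo.jpg path from the question, might look like this:
<?php
// Stream the image in small chunks so the whole file never has to fit in memory.
$path = './photo.jpg';
header('Content-Type: image/jpeg');
header('Content-Length: ' . filesize($path));
$fp = fopen($path, 'rb');
while (!feof($fp)) {
    echo fread($fp, 8192); // 8 KB at a time
    flush();
}
fclose($fp);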
As Scriptonomy said, memory can be an issue. You can also use readfile:
https://secure.php.net/manual/en/function.readfile.php
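For completeness, the readfile variant is just the headers plus one call (again assuming ./photo.jpg from the question):
<?php
header('Content-Type: image/jpeg');
header('Content-Length: ' . filesize('./photo.jpg'));
readfile('./photo.jpg'); // outputs the file directly, without holding it all in memory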

Requesting file via PHP fails, but succeeds in Python

The URL in question : http://www.roblox.com/asset/?id=149996624
When accessed in a browser, it will correctly download a file (which is an XML document). I wanted to get the file in php, and simply display its contents on a page.
$contents = file_get_contents("http://www.roblox.com/asset/?id=149996624");
The above is what I've tried using (as far as I know, the page does not expect any headers). I get a 500 HTTP error. However, in Python, the following code works and I receive the file.
r = requests.get("http://www.roblox.com/asset/?id=147781188")
I'm confused as to what the distinction is between how these two requests are sent. I am almost 100% sure it is not a header problem. I've also tried the cURL library in PHP to no avail. Nothing I've tried in PHP succeeds with this URL (with any valid id parameter), yet Python manages it without any trouble.
Any insight as to why this issue may be happening would be great.
EDIT : I have already tried copying Python's headers into my PHP request.
EDIT2 : It also appears that there are two requests happening upon navigating to the link.
Is this on a Linux/Mac host by chance? If so, you could use ngrep to see the differences in the requests themselves on the wire. Something like the following should work:
ngrep -t '^(GET) ' 'src host 127.0.0.1 and tcp and dst port 80'
EDIT - The problem is that your server is responding with a 302 and the PHP library is not following it automatically. Cheers!
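If that's what is happening, a quick sketch of the cURL side (using the URL from the question) would be to let cURL follow the redirect itself:
<?php
$ch = curl_init('http://www.roblox.com/asset/?id=149996624');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the body instead of printing it
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // follow the 302 to the actual asset
curl_setopt($ch, CURLOPT_MAXREDIRS, 5);         // guard against redirect loops
$contents = curl_exec($ch);
curl_close($ch);
echo $contents;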

PHP script on Amazon EC2 giving response 324 on browser

We have a script which downloads a CSV file. When we run this script from the command line on the EC2 console, it runs fine; it downloads the file and sends a success message to the user.
But if we run it through a browser, then we get:
error 324 (net::ERR_EMPTY_RESPONSE): The server closed the connection without sending any data.
When we check the backend, the downloaded file is there, but the success message sent after the download is never received by the browser.
We are using cURL to download from a remote location with authentication. The user group and ownership of the folder is "ec2-user", and the folder has full rights, i.e. 777.
To summarize: the file is downloaded, but on the browser end we do not receive any data or the success message which we print.
P.S.: The problem occurs when the downloaded file size is 8-9 MB; with a smaller file, say 1 MB, it works. So either the script execution time, the downloaded file size, or some EC2 instance config is preventing the response from reaching the browser. The same script works perfectly fine on our GoDaddy Linux VPS. We have already increased the max execution time for the script.
Sadly, this is a known problem without a good solution. There's a very long thread on the amazon forum here: https://forums.aws.amazon.com/thread.jspa?threadID=33427. The solution offered there is to send a keep-alive message to keep the connection from dying after 60 seconds. Not a great solution, but I don't think there's a better one unless Amazon fixes the problem, which doesn't seem likely given that the thread has been open for 3 years.
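A rough sketch of that keep-alive idea in PHP, assuming the remote file is fetched with cURL (the URL and paths below are placeholders, not from the question), is to drive the transfer with curl_multi and emit a padding byte every few seconds so the connection never sits idle for 60 seconds:
<?php
// Sketch only: keep the HTTP connection alive while a slow remote download runs.
set_time_limit(0);
$ch = curl_init('https://example.com/large.csv');             // placeholder URL
curl_setopt($ch, CURLOPT_FILE, fopen('/tmp/large.csv', 'w')); // write straight to disk
$mh = curl_multi_init();
curl_multi_add_handle($mh, $ch);
$lastPing = time();
do {
    curl_multi_exec($mh, $running);
    curl_multi_select($mh, 1.0);      // wait up to 1 s for socket activity
    if (time() - $lastPing >= 10) {
        echo ' ';                     // padding byte so the browser/proxy sees traffic
        flush();
        $lastPing = time();
    }
} while ($running > 0);
curl_multi_remove_handle($mh, $ch);
curl_multi_close($mh);
echo 'Download complete';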

How to view PHP or Apache error log online in a browser?

Is there a way to view the PHP error logs or Apache error logs in a web browser?
I find it inconvenient to SSH into multiple servers and run a "tail" command to follow the error logs. Is there some tool (preferably open source) that shows me the error logs online (streaming or non-streaming)?
Thanks
A simple PHP snippet to read the log and print it:
<?php
exec('tail /var/log/apache2/error.log', $error_logs);
foreach ($error_logs as $error_log) {
    echo "<br />" . $error_log;
}
?>
You can embed the $error_logs output in your HTML however you need. The nice part is that tail only loads the latest errors, so it won't put much load on your server.
You can change the tail arguments to get the output you want, e.g. tail myfile.txt -n 100 will give you the last 100 lines.
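If the log path or line count ever comes from a request parameter, it is worth escaping them; a slightly safer variant of the snippet above (the lines parameter is hypothetical) might be:
<?php
$log   = '/var/log/apache2/error.log';                      // adjust to your log path
$lines = isset($_GET['lines']) ? (int) $_GET['lines'] : 50; // hypothetical ?lines= parameter
exec('tail -n ' . $lines . ' ' . escapeshellarg($log), $error_logs);
foreach ($error_logs as $error_log) {
    echo htmlspecialchars($error_log) . '<br />';
}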
See What commercial and open source competitors are there to Splunk? and I would recommend https://github.com/tobi/clarity
Simple and easy tool.
Since everyone is suggesting clarity, I would also like to mention tailon. I wrote tailon as a more modern and secure alternative to clarity. It's still in its early stages of development, but the functionality you need is there. You may also use wtee, if you're only interested in following a single log file.
You could make a script that reads the error logs from Apache:
$apache_errorlog = file_get_contents('/var/log/apache2/error.log');
If that's not working, try to get it with the PHP functions exec or shell_exec and the command 'cat /var/log/apache2/error.log'.
EDIT: If you have multiple servers (I guess with web servers on them), you can create such a script on each machine; when you make a request to that script (over a hashed connection) you get the logs from that server.
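The shell_exec fallback mentioned above would look roughly like this:
$apache_errorlog = shell_exec('cat /var/log/apache2/error.log'); // fallback if file_get_contents is not allowed
echo nl2br(htmlspecialchars($apache_errorlog));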
I recommend LogHappens: https://loghappens.com - it allows you to view the error log in a web browser.
LogHappens supports various web server log formats; it comes with parsers for Apache and CakePHP, and you can write your own.
You can find it here: https://github.com/qijianjun/logHappens
It's open source and free. I forked it and did some work to make it behave better both in a dev environment and in a public environment, namely:
Support for a security token, so the site can't be accessed without the token set in config.php
Support for IP whitelists for security and privacy
Support for configuring the interval between AJAX requests
Support for loading static files locally (for a local dev environment)
I've found this solution https://code.google.com/p/php-tail/
It works perfectly. I only needed to change the file size limit, because I was getting an error at first:
56 if($maxLength > $this->maxSizeToLoad) {
57 $maxLength = $this->maxSizeToLoad;
58 // return json_encode(array("size" => $fsize, "data" => array("ERROR: PHPTail attempted to load more (".round(($maxLength / 1048576), 2)."MB) then the maximum size (".round(($this->maxSizeToLoad / 1048576), 2) ."MB) of bytes into memory. You should lower the defaultUpdateTime to prevent this from happening. ")));
59 }
I've also added a default size, but it's not needed:
125 lastSize = <?php echo filesize($this->log) ?: 1000; ?>;
I know this question is a bit old, but (along with the lack of good choices) it gave me the idea to create this tiny open-source web app: https://github.com/ToX82/logHappens. It can be used online, but I'd put an .htpasswd in front of it as a basic login system. I hope it helps.

The server at www.localhost.com is taking too long to respond

A very strange thing is happening. I am running a script on a new server (it works on my current server and my laptop).
The strange thing is that I can only get it to (sort of) work when I increase the memory limit to 1024M (!). The script extracts a large zip file and goes through the files, so I thought that was normal. But instead of the script terminating or ending with errors, I get an error from my browser:
The server at www.localhost.com is
taking too long to respond.
Localhost.com? The web server is just localhost:9090 and I can see Apache is still running. Maybe Apache crashes momentarily and the browser can't find the server? But there is nothing about Apache crashing in the log files.
This isn't a server issue; it's more to do with my PHP script and memory usage, I think, so there's no need to move this to Server Fault.
What could be the problem? How can I narrow down the cause? I am at a loss here!
The server is a Windows server running Apache 2.2 with PHP 5.3.2. My laptop and the other working server are running PHP 5.3.0 and 5.3.1.
Thanks all for any help
Ensure that:
ini_set('display_errors', 'On');
ini_set('error_reporting', E_ALL);
ini_set('max_execution_time', 180);
ini_set('memory_limit', '1024M');
I'd pop this in at the top of the script and see what comes out. It should show you errors and the like.
The other thing: have you checked fopen and the path of the file it's loading?
As Abs said, check that the files being zipped up can be zipped by PHP (permissions, especially on a Windows OS with multiple users).
I kept getting this problem too, and none of these sites really helped until I started looking at the same issue for people using Internet Explorer. The way I fixed it was to open the system hosts file, located at C:\Windows\System32\drivers\etc\hosts, and uncomment the line that mentions ::1, which is needed for IPv6. After that it worked fine.
Somehow your system's munged up and isn't treating localhost as the local 127.0.0.1 address. Is your hosts file properly configured? This is most likely why you're getting the "too long to respond" error:
marc@panic:~$ host www.localhost.com
www.localhost.com has address 64.99.64.32
marc@panic:~$ wget www.localhost.com
--2010-08-03 22:41:05-- http://www.localhost.com/
Resolving www.localhost.com... 64.99.64.32
Connecting to www.localhost.com|64.99.64.32|:80... connected.
HTTP request sent, awaiting response... Read error (Connection reset by peer) in headers.
Retrying.
www.localhost.com is a fully valid hostname as far as the DNS system is concerned.
I am not a PHP guru by any means, but are you writing the extracted files to a temporary local storage location that is within the scope of the application? If you are not, then I think what is happening is that the application is storing the zip file and the extracted files in memory and then attempting to read them. So if it is a large zip and/or the extracted files are large, that introduces a huge amount of overhead on top of the overhead of your read and processing actions.
So if you are not doing so already, I would extract the files and write them to disk in their own folder, dispose of the zip file at that point, and then iterate over the files in the newly created directory and perform whatever actions you need on them.
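A rough sketch of that approach with ZipArchive (the paths and the processing function are placeholders, not from the question):
<?php
// Extract the archive to disk first, then work on one extracted file at a time.
$zipPath    = '/tmp/upload.zip';           // placeholder path to the uploaded archive
$extractDir = '/tmp/extracted_' . uniqid();
mkdir($extractDir);
$zip = new ZipArchive();
if ($zip->open($zipPath) === true) {
    $zip->extractTo($extractDir);
    $zip->close();                         // the archive is no longer needed in memory
}
foreach (new DirectoryIterator($extractDir) as $file) {
    if ($file->isFile()) {
        processFile($file->getPathname()); // placeholder for whatever per-file work you do
    }
}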
