503 Service Unavailable (Long Running MySQL query) - PHP

I have a script that exports a lot of data from a MySQL database hosted in a Docker container. All other requests work fine; the long-running MySQL query is the only request that gets a 503 error.
PHP already has max_execution_time and max_input_time set to unlimited, and I set the Timeout in Apache to 1800 seconds (30 minutes). My first inclination is that Apache is simply not waiting long enough for a response, so it returns a 503 error. Prior to setting the timeout to 1800 seconds, I was getting a 504 error.
The error happens roughly at the 60 to 65 second mark.
Does anyone know of an Apache setting that can be changed, or some other solution?
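One candidate worth checking: if Apache hands requests to PHP-FPM through mod_proxy_fcgi, the proxy has its own timeout, which defaults to the global Timeout (60 seconds in a stock Apache 2.4 install) and would match the 60-65 second mark. A minimal sketch of the directives to raise, assuming that setup:
# httpd.conf / vhost: mod_proxy's timeout defaults to the global Timeout unless raised explicitly
Timeout 1800
ProxyTimeout 1800
If PHP-FPM is in the picture, also check request_terminate_timeout in the pool config (e.g. www.conf); when set, it kills the worker mid-request regardless of the Apache values.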

Related

AWS S3 PHP: Tons of Inconsistent 408 and cURL 28 Errors, Timeouts/Delays/Hangs every few seconds

Background: I have 250 GB of object storage at Dreamhost. I am using the AWS S3 client (PHP) for uploading files to it. It worked fine for months until they migrated their server from the West Coast to the East Coast. The only changes (very small and simple) to my scripts were a new host URL/region. My bucket has around 1 million photos/thumbnails of around 10 kb-100 kb in size on average.
Since then, for 2 months, some photos will upload fine, and then half the time uploading a photo will result in 400/500 errors. We contacted Dreamhost support and they have been absolutely stumped for 2 months; no answer to the problem. Here are the types of errors in our logs:
[05-Dec-2018 12:28:27 UTC] PHP Fatal error: Uncaught exception 'Aws\S3\Exception\S3Exception' with message 'Error executing "PutObject" on "https://objects-us-east-1.dream.io/mybucket/img.jpg"; AWS HTTP error: Client error: `PUT https://objects-us-east-1.dream.io/mybucket/img.jpg` resulted in a `408 Request Time-out` response:
<html><body><h1>408 Request Time-out</h1>
Your browser didn't send a complete request in time.
</body></html>
(client): - <html><body><h1>408 Request Time-out</h1>
Your browser didn't send a complete request in time.
</body></html>
'
GuzzleHttp\Exception\ClientException: Client error: `PUT https://objects-us-east-1.dream.io/mybucket/img.jpg` resulted in a `408 Request Time-out` response:
<html><body><h1>408 Request Time-out</h1>
Your browser didn't send a complete request in time.
</body></html>
in /home/username/mysite.com/includes/cdn/aws/GuzzleHttp/Exception/RequestException.php:113
in /home/username/mysite.com/includes/cdn/aws/Aws/WrappedHttpHandler.php on line 191
[05-Dec-2018 12:44:21 UTC] PHP Fatal error: Uncaught exception 'Aws\S3\Exception\S3Exception' with message 'Error executing "PutObject" on "https://objects-us-east-1.dream.io/mybucket/img.jpg"; AWS HTTP error: cURL error 28: Operation timed out after 0 milliseconds with 0 out of 0 bytes received (see http://curl.haxx.se/libcurl/c/libcurl-errors.html)'
GuzzleHttp\Exception\ConnectException: cURL error 28: Operation timed out after 0 milliseconds with 0 out of 0 bytes received (see http://curl.haxx.se/libcurl/c/libcurl-errors.html) in /home/username/mysite.com/includes/cdn/aws/GuzzleHttp/Handler/CurlFactory.php:186
Stack trace:
#0 /home/username/mysite.com/includes/cdn/aws/GuzzleHttp/Handler/CurlFactory.php(150): GuzzleHttp\Handler\CurlFactory::createRejection(Object(GuzzleHttp\Handler\EasyHandle), Array)
#1 /home/username/mysite.com/includes/cdn/aws/GuzzleHttp/Handler/CurlFactory.php(103): GuzzleHttp\Handler\CurlFactory::finishError(Object(GuzzleHttp\Handler\CurlMultiHandler), Object(GuzzleHttp\H in /home/username/mysite.com/includes/cdn/aws/Aws/WrappedHttpHandler.php on line 191
In an attempt to narrow down the problem, I've also done the simplest of examples, like listing buckets (Dreamhost tutorial examples), and the same behavior happens, even on a new test bucket with 1 image in it. If I refresh the browser once every few seconds it might list the buckets 2-3 times successfully, but on the 4th refresh the page continues to hang for a long time, and it might finally display the bucket after a 150-second delay, or the script might just time out. Dreamhost noticed the same thing when they set up an example on a basic cloud server instance: the bucket list might load immediately, or after 60 seconds, 120 seconds, 180 seconds, etc. A clue: it seems to load just after 30-second increments (the 60, 120, 150, and 180 second delays are all divisible by 30).
I'm hoping someone understands what is happening here. The problem is so bad that we have hundreds of unhappy merchants in our marketplace who are having a hard time listing new products for sale: the image-uploading issue makes it nearly impossible for them to upload images and causes their browsers to hang. To make matters worse, these image-upload timeouts tie up all 40 of my PHP processes, which indirectly causes 500 Internal Server Errors for site visitors as well. Our site doesn't have that much traffic, maybe 10,000 visitors per day. Again, it is surprising that Dreamhost has been stumped for months; they say I'm the only customer they have with this issue.
Other info, my server is running on:
Ubuntu 16.04
Apache 2.4.33
PHP-FPM (7.0)
cURL 7.47.0
AWS S3 SDK for PHP 3.81.0
Have HTTPS and HTTP/2 enabled
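One thing that at least contains the damage is to put explicit timeouts and retries on the S3 client, so a hung connection fails fast instead of pinning a PHP-FPM worker for minutes. A minimal sketch for the AWS SDK for PHP v3; the endpoint and bucket come from the logs above, and the timeout values are assumptions to tune:
<?php
require 'vendor/autoload.php'; // or the includes/cdn/aws autoloader from the stack traces

use Aws\S3\S3Client;

$s3 = new S3Client([
    'version'  => 'latest',
    'region'   => 'us-east-1',
    'endpoint' => 'https://objects-us-east-1.dream.io',
    'retries'  => 3,                 // SDK-level retries for transient failures
    'http'     => [
        'connect_timeout' => 5,      // give up on a dead connection quickly
        'timeout'         => 30,     // cap the whole request instead of hanging
    ],
    // credentials are assumed to come from the environment or ~/.aws/credentials
]);

$s3->putObject([
    'Bucket'     => 'mybucket',
    'Key'        => 'img.jpg',
    'SourceFile' => '/path/to/img.jpg', // hypothetical local path
]);
This won't cure the underlying network problem, but the 30-second pattern above would be consistent with connection attempts timing out and being retried at a fixed interval, and a tight connect_timeout makes that visible in the logs.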

PHP - Azure 500 - The request timed out

I am fetching data from ClearDB MySQL. It takes around 10 minutes to give me the result back.
But after 230 seconds Azure gives the error "500 - The request timed out.
The web server failed to respond within the specified time."
I have tried to set max_execution_time to unlimited and changed more config variables in .user.ini.
I also tried setting set_time_limit(0); and ini_set('max_execution_time', 6000000); manually in the first line of the PHP script file.
But no luck.
I don't want to use WebJobs.
Is there any way to resolve the Azure "500 - The request timed out" issue?
That won't work. You'll hit Azure's in-flight request timeout (the 230 seconds you're seeing) long before the 10-minute wait, and it can't be raised from PHP.
Here's a better approach: call a stored procedure that produces the result, then make a second call 10 minutes later to retrieve the data. In outline (a minimal sketch follows):
call the stored procedure from your code
return a Location: header in the response
follow that URL to grab the results: 200 OK means you have them, 417 Expectation Failed means not yet
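A minimal sketch of the two endpoints, assuming a hypothetical report_jobs table and assuming the stored procedure actually runs out-of-band (a MySQL EVENT or a cron worker), since calling it synchronously would just block for the same 10 minutes:
<?php
// start.php: queue the job and point the client at the results URL
$db = new PDO('mysql:host=myhost;dbname=mydb', 'user', 'pass');
$jobId = bin2hex(random_bytes(8));
$db->prepare('INSERT INTO report_jobs (id, status) VALUES (?, "pending")')
   ->execute([$jobId]);
// third argument forces the 202 status alongside the Location header
header("Location: /result.php?id=$jobId", true, 202);

<?php
// result.php: 200 with the data once the worker marks the job done, 417 until then
$db = new PDO('mysql:host=myhost;dbname=mydb', 'user', 'pass');
$stmt = $db->prepare('SELECT status, payload FROM report_jobs WHERE id = ?');
$stmt->execute([$_GET['id'] ?? '']);
$job = $stmt->fetch(PDO::FETCH_ASSOC);
if ($job && $job['status'] === 'done') {
    header('Content-Type: application/json');
    echo $job['payload'];
} else {
    http_response_code(417); // not ready yet, per the scheme above
}
All table, column, and file names here are placeholders.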

500 Internal Server Error caused by mod_fcgid: read data timeout in 60 seconds

I have a PHP page that runs a number of loops, queries, database updates, etc. It can take some time to run, and after a minute I get a 500 Internal Server Error. I don't have access to the logs, but my hosting service has forwarded a copy and it seems that it is a timeout-related error:
mod_fcgid: read data timeout in 60 seconds
I have included:
ini_set('max_execution_time', 9000);
set_time_limit(0);
in the PHP page, but it still causes the 500 error. I can't access any of the config files. Is there any other way I can increase the timeout for this page?
I have also tried putting
set_time_limit(59);
at the start of each loop iteration. If this is meant to reset the clock, I can't see why I should have a problem, but the error persists.
NOTE: I am 99% sure that it is not an error in the script itself, as sometimes it goes through and other times it doesn't, with exactly the same data.
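The 60-second limit here is mod_fcgid's I/O timeout (FcgidIOTimeout), the time the web server will wait for output from the FastCGI process, not PHP's execution clock, which is why set_time_limit() has no effect. If you genuinely can't touch the server config, one workaround is to emit a little output inside the long loops so every 60-second window sees some data. A sketch, where process() and $result stand in for your own loop body:
<?php
ignore_user_abort(true);
set_time_limit(0);
while ($row = $result->fetch_assoc()) {     // your long-running query result
    process($row);                          // hypothetical per-row work
    echo ' ';                               // one byte keeps the FastCGI pipe alive
    if (ob_get_level() > 0) { ob_flush(); } // drain PHP's output buffer if enabled
    flush();                                // push it past the server's buffer
}
This only works if stray whitespace in the response is acceptable; for a JSON endpoint you would have to pad somewhere harmless or restructure the job instead.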

PHP script dies before ftp_get() finishes downloading a file

I know how to download a file from a server using FTP with PHP.
I have a list of files to download from the FTP server to internal storage, and I use ftp_get() to download them.
The first file (126 MB) downloads successfully to my internal storage, but then the script throws a 500 error and dies without continuing.
The error I get:
Internal Server Error
The server encountered an internal error or misconfiguration and was unable to complete your request.
Please contact the server administrator, webmaster@zzz.com and inform them of the time the error occurred, and anything you might have done that may have caused the error.
More information about this error may be available in the server error log.
Additionally, a 404 Not Found error was encountered while trying to use an ErrorDocument to handle the request.
Any idea what I should do in order for the function to complete its run successfully?
You need to increase the timeout then. 180 is in seconds, which is 3 minutes. Try setting it to 600, i.e. FTP_TIMEOUT_SEC, 600, or higher, depending on how much more time is needed. You could probably even try FTP_TIMEOUT_SEC, 0, which I think means no time limit.
The same suggestion was made on another, similar question. Please try it; it should work.
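A minimal sketch of raising the timeout on the connection before the downloads; the host, credentials, and 600-second value are placeholders:
<?php
$conn = ftp_connect('ftp.example.com');
ftp_login($conn, 'user', 'pass');
ftp_set_option($conn, FTP_TIMEOUT_SEC, 600); // the default is 90 seconds
ftp_pasv($conn, true);                       // passive mode is usually friendlier to firewalls
foreach ($files as $remote => $local) {      // $files: remote path => local path
    set_time_limit(0);                       // also reset PHP's own clock per file
    ftp_get($conn, $local, $remote, FTP_BINARY);
}
ftp_close($conn);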
Maybe you exceeded the maximum execution time.
Try increasing it:
https://www.php.net/manual/en/function.set-time-limit.php

Gateway Time-out:The gateway did not receive a timely response from the upstream server

I am sending 300 newsletters at a time via a URL; after 2 minutes the page refreshes itself to send the next 300, and so on.
But I am getting this error:
Gateway Time-out
The gateway did not receive a timely response from the upstream server or application.
Additionally, a 404 Not Found error was encountered while trying to use an ErrorDocument to handle the request.
I have set the max execution time to 3600:
ini_set('max_execution_time', 3600);
But I keep getting the same error. Please help me find a solution.
I encountered the same problem and I used ini_set('default_socket_timeout', 6000); to fix it.
http://php.net/manual/en/filesystem.configuration.php#ini.default-socket-timeout
I encountered the same problem and fixed it by changing these values in my php.ini file:
default_socket_timeout = 240
max_execution_time = 240
"Additionally, a 404 Not Found error was encountered while trying to use an ErrorDocument to handle the request."
This would indicate something is not configured properly on the server.
I can't follow why you think this is a CloudFlare issue (from the tag). Are you getting a CloudFlare error message at all?
If the problem is coming from the SQL statement, i.e. the server is spending a long time processing the query, try to optimize the SQL statement.
I have 18,600,000 rows in my table. The timeout error went away when I set Timeout to 6000 in httpd.conf, after the ServerRoot directive.
