I am fetching data from a ClearDB MySQL database. It takes around 10 minutes to give me the result back.
But after about 230 seconds Azure returns the error "500 - The request timed out.
The web server failed to respond within the specified time."
I have tried setting max_execution_time to unlimited and changed other config variables in .user.ini.
I have also tried setting it manually in the first line of the PHP script with set_time_limit(0); and ini_set('max_execution_time', 6000000);.
But no luck.
I don't want to use webjobs.
Is there any way to resolve the Azure "500 - The request timed out" issue?
That won't work. You'll hit the platform's in-flight request timeout (the ~230 seconds you're seeing) long before the 10-minute query finishes.
Here's a better approach, sketched below: call a stored procedure that produces the result, then make a second call later (say, 10 minutes on) to retrieve the data. In outline:
Call the stored procedure from your code.
Return a Location: header in the response, pointing at a URL where the result will be available.
Follow that URL to grab the results: 200 OK means you have them, 417 Expectation Failed means not yet.
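A minimal sketch of that flow, assuming the stored procedure writes its output into a report_jobs table keyed by a job id and marks the row done when it finishes; the table, file names, and connection details are all made up for illustration:
<?php
// start.php - record a job and point the client at a polling URL. It's assumed
// something outside this request (a MySQL event, a cron worker, etc.) actually
// runs the stored procedure, so this response returns immediately instead of
// waiting out the 10 minutes.
$db = new PDO('mysql:host=YOUR_CLEARDB_HOST;dbname=YOUR_DB', 'user', 'pass');
$jobId = bin2hex(random_bytes(8));
$db->prepare('INSERT INTO report_jobs (id, status) VALUES (?, ?)')
   ->execute([$jobId, 'running']);
header('Location: /results.php?job=' . $jobId, true, 202);   // 202 Accepted + where to poll
<?php
// results.php - the URL the client polls afterwards.
$db = new PDO('mysql:host=YOUR_CLEARDB_HOST;dbname=YOUR_DB', 'user', 'pass');
$stmt = $db->prepare('SELECT status, payload FROM report_jobs WHERE id = ?');
$stmt->execute([$_GET['job'] ?? '']);
$job = $stmt->fetch(PDO::FETCH_ASSOC);
if ($job && $job['status'] === 'done') {
    header('Content-Type: application/json');
    echo $job['payload'];                      // 200 OK with the data
} else {
    http_response_code(417);                   // not ready yet, as suggested above
}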
Related
I have a script that exports a lot of data from a MySQL database hosted in a Docker container. All other requests work fine without a 503 error; a long-running MySQL query is the only request getting the 503.
PHP already has max_execution_time and max_input_time set to unlimited, and I set the TimeOut in Apache to half an hour (1800 seconds). My first inclination is that Apache is simply not waiting long enough / not receiving a response, so it returns a 503 error. Prior to setting the timeout to 1800 seconds, I was getting a 504 error.
The error happens roughly at the 60 to 65 second mark.
Anyone know of a setting in Apache that can be changed or some other solution?
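For reference, a 503 at almost exactly the 60-second mark usually points at a proxy or FastCGI timeout rather than Apache's core TimeOut. A hedged sketch of the directives that commonly govern it; which one applies depends on how PHP is wired into Apache, and the socket path and values here are examples only:
# Core Apache timeout (already raised per the question)
TimeOut 1800
# If PHP runs through mod_proxy_fcgi / PHP-FPM, the proxy has its own timeout
ProxyTimeout 1800
<FilesMatch "\.php$">
    SetHandler "proxy:unix:/run/php/php-fpm.sock|fcgi://localhost"
</FilesMatch>
# If PHP runs through mod_fcgid instead
FcgidIOTimeout 1800
FcgidBusyTimeout 1800
# PHP-FPM's own request_terminate_timeout (in the pool config) can also cut
# long requests off and is worth checking alongside the Apache side.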
I keep getting a curl timeout using the stream-laravel library from getstream.io. It happens when I request an activity feed for display in a Blade template.
I have the timeout in the stream-laravel config set to 3 seconds. When the page first loads, it displays the activity feed correctly. If I refresh the page after the timeout has passed, I get this curl error:
cURL error 28: Resolving timed out after 4058 milliseconds (see http://curl.haxx.se/libcurl/c/libcurl-errors.html)
What seems to be going on is that the curl session to getstream.io hasn't disconnected; when the page is refreshed, the client thinks it's still connected, and since the timeout has passed, it returns that error.
On a subsequent request (refresh of the page), it works again. Rinse and repeat.
I am not the only one to have had this issue, see: GetStream.io connection timeout on PHP addActivity
That issue was never resolved but is similar.
This happens both in my local dev environment and in production.
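Not part of the original question, but since the failure is specifically a name-resolution timeout ("Resolving timed out"), a raw curl probe can show whether DNS lookups to the Stream endpoint are the slow step, independent of stream-laravel. The host name and timeout values below are assumptions:
<?php
// Hypothetical diagnostic: time the DNS and connect phases against the Stream
// API host to see whether "Resolving timed out" really is a DNS problem.
$ch = curl_init('https://api.getstream.io/');
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_CONNECTTIMEOUT => 10,   // deliberately higher than the 3s the library is set to
    CURLOPT_TIMEOUT        => 15,
]);
curl_exec($ch);
printf(
    "DNS: %.3fs  connect: %.3fs  total: %.3fs  error: %s\n",
    curl_getinfo($ch, CURLINFO_NAMELOOKUP_TIME),
    curl_getinfo($ch, CURLINFO_CONNECT_TIME),
    curl_getinfo($ch, CURLINFO_TOTAL_TIME),
    curl_error($ch) ?: 'none'
);
curl_close($ch);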
I have a PHP page that runs a number of loops, queries, database updates, etc. It can take some time to run, and after about a minute I get a 500 Internal Server Error. I don't have access to the logs, but my hosting service has forwarded a copy, and it appears to be a timeout-related error:
mod_fcgid: read data timeout in 60 seconds
I have included:
ini_set('max_execution_time', 9000);
set_time_limit(0);
in the PHP page, but it still causes the 500 error. I can't access any of the config files. Is there any other way I can increase the timeout for this page?
I have also tried putting
set_time_limit(59);
at the start of each pass through the loop. If this is meant to reset the clock, then I can't see why I should have a problem, but the error persists.
NOTE: I am 99% sure that it is not an error in the script itself, as sometimes it goes through and other times it doesn't with exactly the same data.
I've spent almost two full days trying to resolve this issue, but to no avail. Any help is greatly appreciated.
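One workaround that sometimes helps when the server config is out of reach (a sketch, not from the original post): mod_fcgid's "read data timeout" counts time during which the script sends no output, so emitting and flushing a little output inside the long loops can keep the 60-second timer from firing. The loop data and the work inside it are placeholders here:
<?php
set_time_limit(0);
while (ob_get_level() > 0) {   // drop PHP's output buffers so flush() really sends bytes
    ob_end_flush();
}
ob_implicit_flush(true);
foreach ($items as $item) {    // $items stands in for whatever the real loops iterate over
    process_item($item);       // hypothetical placeholder for the queries and updates
    echo ' ';                  // a byte of output resets mod_fcgid's read-data timer
    flush();
}
Whether this works depends on nothing else (Apache, gzip, a proxy) buffering the output before it reaches mod_fcgid.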
I am trying to make a SOAP request in PHP using PHP's built-in SoapClient. I have verified that sending a request where the size of the envelope is smaller than 4MB works great; the communication between the called server and my client has no issues in that case. As soon as I tip the size of the envelope just over 4MB, my PHP instance takes somewhere between 1-2 minutes to throw a SoapFault with the error message "Error fetching HTTP headers". I have post_max_size and memory_limit set to 150M in my php.ini, and my IIS request limit is set to 500MB.
I have verified that if I am not using PHP to make the SOAP request, I can complete my request and response chain with bodies upwards of 4MB in no time at all, so I feel that I've narrowed this down to a PHP/SoapClient issue.
If anybody has any ideas, I would greatly appreciate the help. I am not sure what else to try at this point.
PHP Warning: SoapClient::__doRequest(): SSL: An existing connection was forcibly closed by the remote host.
in C:\myvu\services\vendor\vu\file_storage_client\FileStorageClient\File\VuStore\VuStoreFileManager.php on line 54
[07-May-2015 08:31:48 America/Chicago] Error Fetching http headers
Thank you!
Phil
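A hedged sketch of the client-side SoapClient options that are often worth trying when "Error fetching HTTP headers" shows up after a long stall; the WSDL URL is a placeholder, and none of this changes whatever limit on the receiving server the 4MB threshold might point at:
<?php
ini_set('default_socket_timeout', 600);          // SoapClient reads the response under this timeout
$client = new SoapClient('https://example.com/service?wsdl', [
    'keep_alive'         => false,               // don't reuse a connection the server may have closed
    'connection_timeout' => 30,
    'trace'              => true,                // keep the last request/response for inspection
    'stream_context'     => stream_context_create([
        'http' => ['timeout' => 600],
    ]),
]);
// After a failed call, $client->__getLastRequestHeaders() and
// $client->__getLastResponseHeaders() show how far the exchange got.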
Okay, so I'm trying to retrieve 130 XML files from a feed and then insert the values into a database. The problem is that it crashes at around 40-60 entries and doesn't give an error. I timed it, and the script runs for around 13 seconds each time.
I checked my php.ini settings and they are...
Memory limit = 128M
Time limit = 30 seconds
So what is causing this error? When I run the script in Firefox, it just displays a white screen.
EDIT - The error I'm getting in Chrome is:
"Error 324 (net::ERR_EMPTY_RESPONSE): The server closed the connection without sending any data"
Have you checked the memory consumption? Also, could you echo or otherwise output something to the screen to confirm that it's actually reading in the files? Add error handling around the read statement to see if it's failing while parsing the XML.
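A minimal sketch of those suggestions; the feed list and the insert function are placeholder names:
<?php
ini_set('display_errors', '1');
error_reporting(E_ALL);
set_time_limit(0);                       // rule out the 30-second limit while debugging
foreach ($feedUrls as $i => $url) {      // $feedUrls stands in for the 130 feed URLs
    echo "[$i] $url - mem " . round(memory_get_usage(true) / 1048576, 1) . " MB\n";
    flush();
    $raw = @file_get_contents($url);
    if ($raw === false) {
        echo "  fetch failed, skipping\n";
        continue;
    }
    libxml_use_internal_errors(true);
    $xml = simplexml_load_string($raw);
    if ($xml === false) {
        echo "  parse failed: " . print_r(libxml_get_errors(), true) . "\n";
        libxml_clear_errors();
        continue;
    }
    insert_into_database($xml);          // hypothetical placeholder for the real insert logic
}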