Google Cloud Messaging PHP cURL timeout - php

Does anyone know what might be causing this type of error that appears sometimes when I try to execute my PHP file?
Curl error: Operation timed out after 0 milliseconds with 0 out of 0
bytes received
I get this error very often when I try to load my PHP file with the GCM code, but sometimes I do not get it.
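One thing worth ruling out first: "timed out after 0 milliseconds with 0 out of 0 bytes received" usually means the connection died before any data moved, and explicit cURL timeout options make the failure mode much easier to see. A minimal sketch, assuming a plain cURL POST to the GCM endpoint (the API key and device token are placeholders):

<?php
// Hypothetical GCM push with explicit connect/transfer timeouts.
$ch = curl_init('https://android.googleapis.com/gcm/send');
curl_setopt_array($ch, [
    CURLOPT_POST           => true,
    CURLOPT_HTTPHEADER     => [
        'Authorization: key=YOUR_API_KEY',       // placeholder
        'Content-Type: application/json',
    ],
    CURLOPT_POSTFIELDS     => json_encode([
        'registration_ids' => ['DEVICE_TOKEN'],  // placeholder
        'data'             => ['message' => 'hello'],
    ]),
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_CONNECTTIMEOUT => 10,  // seconds allowed to establish the connection
    CURLOPT_TIMEOUT        => 30,  // seconds allowed for the whole transfer
]);
$result = curl_exec($ch);
if ($result === false) {
    error_log('Curl error: ' . curl_error($ch));
}
curl_close($ch);

If the error still appears with sane timeouts set, the cause is more likely network-side (DNS, firewall, routing) than the PHP code itself.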

Related

AWS S3 PHP: Tons of Inconsistent 408 and cURL 28 Errors, Timeouts/Delays/Hangs every few seconds

Background: I have 250 GB of object storage at Dreamhost. I am using the AWS S3 client (PHP) for uploading files to it. It worked fine for months until they migrated their servers from the West Coast to the East Coast. The only change (very small and simple) to my scripts was a new host URL/region. My bucket has around 1 million photos/thumbnails of around 10kb-100kb in size on average.
Since then, for 2 months, some photos will upload fine, and then half the time uploading a photo results in 400/500 errors. We contacted Dreamhost Support and they have been absolutely stumped for 2 months - no answer to the problem. Here are the types of errors in our logs:
[05-Dec-2018 12:28:27 UTC] PHP Fatal error: Uncaught exception 'Aws\S3\Exception\S3Exception' with message 'Error executing "PutObject" on "https://objects-us-east-1.dream.io/mybucket/img.jpg"; AWS HTTP error: Client error: `PUT https://objects-us-east-1.dream.io/mybucket/img.jpg` resulted in a `408 Request Time-out` response:
<html><body><h1>408 Request Time-out</h1>
Your browser didn't send a complete request in time.
</body></html>
(client): - <html><body><h1>408 Request Time-out</h1>
Your browser didn't send a complete request in time.
</body></html>
'
GuzzleHttp\Exception\ClientException: Client error: `PUT https://objects-us-east-1.dream.io/mybucket/img.jpg` resulted in a `408 Request Time-out` response:
<html><body><h1>408 Request Time-out</h1>
Your browser didn't send a complete request in time.
</body></html>
in /home/username/mysite.com/includes/cdn/aws/GuzzleHttp/Exception/RequestException.php:113
in /home/username/mysite.com/includes/cdn/aws/Aws/WrappedHttpHandler.php on line 191
[05-Dec-2018 12:44:21 UTC] PHP Fatal error: Uncaught exception 'Aws\S3\Exception\S3Exception' with message 'Error executing "PutObject" on "https://objects-us-east-1.dream.io/mybucket/img.jpg"; AWS HTTP error: cURL error 28: Operation timed out after 0 milliseconds with 0 out of 0 bytes received (see http://curl.haxx.se/libcurl/c/libcurl-errors.html)'
GuzzleHttp\Exception\ConnectException: cURL error 28: Operation timed out after 0 milliseconds with 0 out of 0 bytes received (see http://curl.haxx.se/libcurl/c/libcurl-errors.html) in /home/username/mysite.com/includes/cdn/aws/GuzzleHttp/Handler/CurlFactory.php:186
Stack trace:
#0 /home/username/mysite.com/includes/cdn/aws/GuzzleHttp/Handler/CurlFactory.php(150): GuzzleHttp\Handler\CurlFactory::createRejection(Object(GuzzleHttp\Handler\EasyHandle), Array)
#1 /home/username/mysite.com/includes/cdn/aws/GuzzleHttp/Handler/CurlFactory.php(103): GuzzleHttp\Handler\CurlFactory::finishError(Object(GuzzleHttp\Handler\CurlMultiHandler), Object(GuzzleHttp\H in /home/username/mysite.com/includes/cdn/aws/Aws/WrappedHttpHandler.php on line 191
In an attempt to narrow down the problem, I've also done the simplest of examples like listing buckets (Dreamhost tutorial examples), and the same behavior happens - even on a new test bucket with 1 image in it. If I refresh the browser once every few seconds it might list the buckets 2-3 times successfully, but on the 4th refresh the page continues to "hang" for a long time, and it might finally display the bucket after a 150-second delay, or the script might just time out. Dreamhost noticed the same thing when they set up an example on a basic cloud server instance: the bucket list might load immediately, or after 60 seconds, 120 seconds, 180 seconds, etc. A clue: it seems to load just after 30-second increments (these 180, 150, 120, and 60 second delays are all divisible by 30).
I'm hoping someone understands what is happening here. The problem is so bad that we have hundreds of unhappy merchants in our marketplace who are having a hard time listing new products for sale, because this image uploading issue makes it nearly impossible for them to list images and causes their browsers to "hang". To make matters worse, these image upload timeouts are causing all 40 of my PHP processes to time out, which indirectly causes 500 Internal Server Errors for site visitors as well. Our site doesn't have that much traffic, maybe 10,000 visitors per day. Again, it is surprising that Dreamhost has been stumped for months; they say I'm the only customer they have with this issue.
Other info, my server is running on:
Ubuntu 16.04
Apache 2.4.33
PHP-FPM (7.0)
cURL 7.47.0
AWS S3 SDK for PHP 3.81.0
Have HTTPS and HTTP/2 enabled
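One mitigation worth trying while the root cause is unknown (a sketch, not a confirmed fix; endpoint, credentials, and file names are placeholders): give the S3 client explicit connect/transfer timeouts plus SDK-level retries, so a hung connection fails fast and is retried instead of pinning a PHP-FPM worker for minutes.

<?php
require 'aws/aws-autoloader.php'; // adjust to where the SDK lives in this project

use Aws\S3\S3Client;

// The 'http' options are passed through to Guzzle/cURL.
$s3 = new S3Client([
    'version'     => 'latest',
    'region'      => 'us-east-1',
    'endpoint'    => 'https://objects-us-east-1.dream.io',
    'credentials' => ['key' => 'KEY', 'secret' => 'SECRET'], // placeholders
    'retries'     => 5,            // SDK-level retry count
    'http'        => [
        'connect_timeout' => 5,    // fail fast if the TCP/TLS handshake hangs
        'timeout'         => 30,   // cap the whole request well under 150 s
    ],
]);

$s3->putObject([
    'Bucket'     => 'mybucket',
    'Key'        => 'img.jpg',
    'SourceFile' => '/tmp/img.jpg',
]);

With a 5-second connect timeout, the 30/60/150-second hangs described above would at least surface as fast, retryable errors rather than stalled workers.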

PHP - Azure 500 - The request timed out

I am fetching data from ClearDB MySQL. It takes around 10 minutes to give me the result back.
But after 230 seconds Azure returns the error: "500 - The request timed out.
The web server failed to respond within the specified time."
I have tried setting max_execution_time to infinite and changing other config variables in .user.ini.
I also tried setting set_time_limit(0); and ini_set('max_execution_time', 6000000); manually in the first line of the PHP script.
But no luck.
I don't want to use WebJobs.
Is there any way to resolve the Azure "500 - The request timed out" issue?
That won't work. You'll hit the in-flight request timeout (230 seconds) long before the 10-minute query finishes.
Here's a better approach (sketched below): call a stored procedure that produces the result, and make a second call later to retrieve the data.
Call the stored procedure from your code.
Return a Location: header in the response.
Follow the URL to grab the results: 200 OK means you have them, 417 Expectation Failed means not yet.
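A sketch of that pattern, assuming MySQL's event scheduler is enabled on ClearDB (worth verifying) and using hypothetical names sp_build_report and report_jobs. The first request only schedules the work server-side and returns immediately:

<?php
// start.php - schedule the long-running work, answer at once.
$db = new PDO('mysql:host=HOST;dbname=DB', 'USER', 'PASS');
$jobId = bin2hex(random_bytes(8)); // hex only, safe to interpolate below

// A one-shot MySQL event runs the procedure server-side, so this HTTP
// request does not stay open for the 10 minutes the query needs.
$db->exec("CREATE EVENT job_$jobId
           ON SCHEDULE AT CURRENT_TIMESTAMP
           ON COMPLETION NOT PRESERVE
           DO CALL sp_build_report('$jobId')");

http_response_code(202);                     // Accepted, not finished yet
header("Location: /status.php?job=$jobId");  // where to poll for the result

The client then polls the Location URL until the procedure has written its result:

<?php
// status.php - 200 OK with data when ready, 417 otherwise.
$db = new PDO('mysql:host=HOST;dbname=DB', 'USER', 'PASS');
$stmt = $db->prepare('SELECT result FROM report_jobs WHERE job_id = ?');
$stmt->execute([$_GET['job']]);
$row = $stmt->fetch(PDO::FETCH_ASSOC);

if ($row === false || $row['result'] === null) {
    http_response_code(417);  // not ready yet, per the convention above
} else {
    header('Content-Type: application/json');
    echo $row['result'];
}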

Azure PHP SDK throws Fatal Exception with Socket error

My PHP script reads from my DB and sends messages to the queue so that worker roles (well, other LAMP machines) can pull them and work in parallel.
However, oftentimes my script ends with a fatal error, with the following message in the error_log on my Apache server. This error is on the sending side.
PHP Notice: fwrite(): send of 414 bytes failed with errno=32
Broken pipe in /home/azureuser/pear/share/pear/HTTP/Request2/SocketWrapper.php on line 202
PHP Fatal error: Uncaught HTTP_Request2_MessageException:
Error writing request in /home/azureuser/pear/share/pear/HTTP/Request2/Adapter/Socket.php
on line 130
Exception trace
HTTP_Request2_SocketWrapper->write('POST /proxy/mess…')
/home/azureuser/pear/share/pear/HTTP/Request2/Adapter/Socket.php:130
HTTP_Request2_Adapter_Socket->sendRequest(Object(HTTP_Request2))
/home/azureuser/pear/share/pear/HTTP/Request2.php:93
in /home/azureuser/pear/share/pear/HTTP/Request2/SocketWrapper.php on line 206
It seems to me the socket throws an exception for some reason that is not handled, and thus crashes the script. If you agree, do you suggest it would be a good idea to fix the SDK?
I looked into this quickly as a first pass, but it seems:
fwrite(): send of 414 bytes failed with errno=32
Refers to a dropped socket, which could happen for a few reasons:
The site goes into a cold state (turn on Always On)
The socket is staying open extremely long and terminated by the LB
Something unexpected happened and the socket crashes (think exception writing to the Queue)
Have you been able to look at the FREB logs or run the PHP Process Report in the Support Portal (https://[site-name].scm.azurewebsites.net/Support) to diagnose why the socket is being dropped?
Have you tried increasing the timeout value of the endpoint to which your PHP socket is connecting? The default idle timeout is 4 minutes for a VM endpoint. You can change this to a higher value. Here is the article on how to do that: https://azure.microsoft.com/blog/2014/08/14/new-configurable-idle-timeout-for-azure-load-balancer/
Check the section "Set Idle Timeout when creating an Azure endpoint on a Virtual Machine" in the above link.
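Independently of the idle-timeout setting, a defensive workaround is to catch the exception and retry with backoff so one dropped socket doesn't kill the whole run. A sketch, where sendToQueue() is a hypothetical stand-in for the actual SDK call (recreate the client inside it so a dead connection isn't reused):

<?php
// Retry a flaky queue send a few times before giving up.
function sendWithRetry($message, $maxAttempts = 3)
{
    for ($attempt = 1; $attempt <= $maxAttempts; $attempt++) {
        try {
            sendToQueue($message); // hypothetical: your PEAR/SDK send call
            return;
        } catch (HTTP_Request2_MessageException $e) {
            error_log("Queue send attempt $attempt failed: " . $e->getMessage());
            if ($attempt === $maxAttempts) {
                throw $e;            // out of retries, surface the error
            }
            sleep(pow(2, $attempt)); // simple exponential backoff
        }
    }
}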

SOAP Client Request in PHP failing at 4MB or greater

I've spent almost two full days trying to resolve this issue, but to no avail. Any help is greatly appreciated.
I am trying to make a SOAP request in PHP using PHP's built-in SoapClient. I have verified that sending a request where the size of the envelope is smaller than 4MB works great; the communication between the called server and my client has no issues in that circumstance. As soon as I tip the size of the envelope just over 4MB, my PHP instance takes somewhere between 1-2 minutes to throw a SoapFault whose error message is "Error fetching HTTP headers". I have post_max_size and memory_limit set to 150M in my php.ini, and my IIS request limit is set to 500MB.
I have verified that if I do not use PHP to make the SOAP request, I can complete the request/response chain with bodies upwards of 4MB in no time at all, so I feel I've narrowed this down to a PHP/SoapClient issue.
If anybody has any ideas, I would greatly appreciate the help. I am not sure what else to try at this point.
PHP Warning: SoapClient::__doRequest(): SSL: An existing connection was forcibly closed by the remote host.
in C:\myvu\services\vendor\vu\file_storage_client\FileStorageClient\File\VuStore\VuStoreFileManager.php on line 54
[07-May-2015 08:31:48 America/Chicago] Error Fetching http headers
Thank you!
Phil
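For what it's worth, one workaround that sometimes helps with large envelopes over SSL is to bypass SoapClient's built-in HTTP transport and post the envelope with cURL by overriding __doRequest(). A sketch, not verified against this exact IIS setup:

<?php
// Send the SOAP envelope via cURL instead of SoapClient's own transport.
class CurlSoapClient extends SoapClient
{
    public function __doRequest($request, $location, $action, $version, $oneWay = 0)
    {
        $ch = curl_init($location);
        curl_setopt_array($ch, array(
            CURLOPT_POST           => true,
            CURLOPT_POSTFIELDS     => $request,
            CURLOPT_RETURNTRANSFER => true,
            CURLOPT_HTTPHEADER     => array(
                'Content-Type: text/xml; charset=utf-8',
                'SOAPAction: "' . $action . '"',
                'Expect:',                 // suppress 100-continue on large bodies
            ),
            CURLOPT_TIMEOUT        => 300, // generous cap for multi-MB uploads
        ));
        $response = curl_exec($ch);
        if ($response === false) {
            $error = curl_error($ch);
            curl_close($ch);
            throw new SoapFault('HTTP', 'cURL error: ' . $error);
        }
        curl_close($ch);
        return $response;
    }
}

Constructed with the same WSDL and options as a normal SoapClient, this at least separates SoapClient's XML handling from its HTTP layer when narrowing down where the 4MB limit lives.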

PHP script dies before ftp_get() finishes downloading a file

I know how to download a file from a server using FTP with PHP.
I have a list of files to download from the FTP server to internal storage.
I use ftp_get() to download the list of files;
the first file, 126 MB in size, downloads successfully to my internal storage.
However, the PHP script then returns a 500 error and dies without continuing.
The error I get:
Internal Server Error
The server encountered an internal error or misconfiguration and was
unable to complete your request.
Please contact the server administrator, webmaster#zzz.com and inform
them of the time the error occurred, and anything you might have done
that may have caused the error.
More information about this error may be available in the server error
log.
Additionally, a 404 Not Found error was encountered while trying to
use an ErrorDocument to handle the request.
Any idea what I should do in order for the function to complete its run successfully?
You need to increase the timeout then. 180 is in seconds, which is 3 minutes; try setting it to 600 (i.e. FTP_TIMEOUT_SEC, 600) or higher, depending on how much more time is needed. You could probably even try FTP_TIMEOUT_SEC, 0, which I think means no time limit.
The same suggestion was already made in a comment on another question similar to this one. Please try it; it should work.
Maybe you exceeded the maximum execution time.
Try to increase it:
https://www.php.net/manual/en/function.set-time-limit.php
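Putting both suggestions together, a sketch (host, credentials, and paths are placeholders):

<?php
set_time_limit(0);                           // lift PHP's own execution limit
$conn = ftp_connect('ftp.example.com');      // placeholder host
ftp_login($conn, 'user', 'pass');
ftp_set_option($conn, FTP_TIMEOUT_SEC, 600); // 10-minute network timeout
ftp_pasv($conn, true);                       // passive mode is usually safer

foreach ($files as $remote) {                // $files: your list of remote paths
    $local = '/storage/' . basename($remote);
    if (!ftp_get($conn, $local, $remote, FTP_BINARY)) {
        error_log("Failed to download $remote");
    }
}
ftp_close($conn);

Note that if the 500 comes from the web server's own gateway timeout rather than PHP, the limit has to be raised there too (or the job moved to CLI/cron, where web-server limits don't apply).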
