My PHP script reads from my DB and sends messages to the queue so that worker roles (well, other LAMP machines) can pull them and work in parallel.
However, my script often ends with a fatal error, with the following message in the error_log on my Apache server. The error is on the sending side.
PHP Notice: fwrite(): send of 414 bytes failed with errno=32
Broken pipe in /home/azureuser/pear/share/pear/HTTP/Request2/SocketWrapper.php on line 202
PHP Fatal error: Uncaught HTTP_Request2_MessageException:
Error writing request in /home/azureuser/pear/share/pear/HTTP/Request2/Adapter/Socket.php
on line 130
Exception trace
HTTP_Request2_SocketWrapper->write('POST /proxy/mess…')
/home/azureuser/pear/share/pear/HTTP/Request2/Adapter/Socket.php:130
HTTP_Request2_Adapter_Socket->sendRequest(Object(HTTP_Request2))
/home/azureuser/pear/share/pear/HTTP/Request2.php:93
in /home/azureuser/pear/share/pear/HTTP/Request2/SocketWrapper.php on line 206
It seems to me that the socket throws an exception for some reason, and that the exception is not handled and thus crashes the script. If you agree, would you suggest fixing the SDK?
I looked into this quickly as a first pass, but it seems:
fwrite(): send of 414 bytes failed with errno=32
Refers to a dropped socket, which could happen for a few reasons:
The site goes into a cold state (turn on Always On)
The socket stays open for an extremely long time and is terminated by the load balancer (LB)
Something unexpected happens and the socket crashes (think of an exception while writing to the queue)
Have you been able to look at the FREB logs or run the PHP Process Report in the Support Portal (https://[site-name].scm.azurewebsites.net/Support) to diagnose why the socket is being dropped?
Have you tried increasing the timeout value of the endpoint to which your PHP socket is connecting? The default timeout is 4 minutes for a VM endpoint, and you can change it to a higher value. Here is the article on how to do that: https://azure.microsoft.com/blog/2014/08/14/new-configurable-idle-timeout-for-azure-load-balancer/
Check the section "Set Idle Timeout when creating an Azure endpoint on a Virtual Machine" in the above link.
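In the meantime, rather than patching the SDK itself, one option is to catch the exception shown in the trace (HTTP_Request2_MessageException) and retry the send on a fresh connection. A minimal sketch, assuming the queue client and its createMessage() call are roughly what your script uses (both are placeholders here):

// Hedged sketch: retry a queue send when the socket drops mid-request.
// $queueClient and createMessage() stand in for your actual SDK calls.
function sendWithRetry($queueClient, $queueName, $body, $maxAttempts = 3)
{
    for ($attempt = 1; $attempt <= $maxAttempts; $attempt++) {
        try {
            $queueClient->createMessage($queueName, $body);
            return true;
        } catch (HTTP_Request2_MessageException $e) {
            // errno=32 (EPIPE) means the peer closed the connection;
            // log it, back off briefly, and retry with a fresh request.
            error_log("Send attempt $attempt failed: " . $e->getMessage());
            sleep($attempt);
        }
    }
    return false;
}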
Related
Background: I have 250 GB of object storage at Dreamhost. I am using the AWS S3 client (PHP) to upload files to it. It worked fine for months until they migrated their servers from the West Coast to the East Coast. The only change (very small and simple) to my scripts was a new host URL/region. My bucket has around 1 million photos/thumbnails, around 10 KB to 100 KB in size on average.
In the 2 months since then, some photos upload fine, but about half the time uploading a photo results in 400/500 errors. We contacted Dreamhost support and they have been absolutely stumped for 2 months, with no answer to the problem. Here are the types of errors in our logs:
[05-Dec-2018 12:28:27 UTC] PHP Fatal error: Uncaught exception 'Aws\S3\Exception\S3Exception' with message 'Error executing "PutObject" on "https://objects-us-east-1.dream.io/mybucket/img.jpg"; AWS HTTP error: Client error: `PUT https://objects-us-east-1.dream.io/mybucket/img.jpg` resulted in a `408 Request Time-out` response:
<html><body><h1>408 Request Time-out</h1>
Your browser didn't send a complete request in time.
</body></html>
(client): - <html><body><h1>408 Request Time-out</h1>
Your browser didn't send a complete request in time.
</body></html>
'
GuzzleHttp\Exception\ClientException: Client error: `PUT https://objects-us-east-1.dream.io/mybucket/img.jpg` resulted in a `408 Request Time-out` response:
<html><body><h1>408 Request Time-out</h1>
Your browser didn't send a complete request in time.
</body></html>
in /home/username/mysite.com/includes/cdn/aws/GuzzleHttp/Exception/RequestException.php:113
in /home/username/mysite.com/includes/cdn/aws/Aws/WrappedHttpHandler.php on line 191
[05-Dec-2018 12:44:21 UTC] PHP Fatal error: Uncaught exception 'Aws\S3\Exception\S3Exception' with message 'Error executing "PutObject" on "https://objects-us-east-1.dream.io/mybucket/img.jpg"; AWS HTTP error: cURL error 28: Operation timed out after 0 milliseconds with 0 out of 0 bytes received (see http://curl.haxx.se/libcurl/c/libcurl-errors.html)'
GuzzleHttp\Exception\ConnectException: cURL error 28: Operation timed out after 0 milliseconds with 0 out of 0 bytes received (see http://curl.haxx.se/libcurl/c/libcurl-errors.html) in /home/username/mysite.com/includes/cdn/aws/GuzzleHttp/Handler/CurlFactory.php:186
Stack trace:
#0 /home/username/mysite.com/includes/cdn/aws/GuzzleHttp/Handler/CurlFactory.php(150): GuzzleHttp\Handler\CurlFactory::createRejection(Object(GuzzleHttp\Handler\EasyHandle), Array)
#1 /home/username/mysite.com/includes/cdn/aws/GuzzleHttp/Handler/CurlFactory.php(103): GuzzleHttp\Handler\CurlFactory::finishError(Object(GuzzleHttp\Handler\CurlMultiHandler), Object(GuzzleHttp\H in /home/username/mysite.com/includes/cdn/aws/Aws/WrappedHttpHandler.php on line 191
In an attempt to narrow down the problem, I've also tried the simplest of examples, like listing buckets (the Dreamhost tutorial examples), and the same behavior happens, even on a new test bucket with 1 image in it. If I refresh the browser once every few seconds it might list the buckets 2-3 times successfully, but on the 4th refresh the page "hangs" for a long time; it might finally display the bucket after a 150-second delay, or the script might just time out. Dreamhost noticed the same thing when they set up an example on a basic cloud server instance: the bucket list might load immediately, or after 60 seconds, 120 seconds, 180 seconds, etc. A clue: it seems to load just past 30-second increments (those delays of 180, 150, 120, and 60 seconds are all divisible by 30).
I'm hoping someone understands what is happening here. The problem is so bad that we have hundreds of unhappy merchants in our marketplace who are having a hard time listing new products for sale, because this image-uploading issue makes it nearly impossible for them to upload images and causes their browsers to "hang". To make matters worse, these upload timeouts are causing all 40 of my PHP processes to time out, which indirectly causes 500 Internal Server Errors for site visitors as well. Our site doesn't have that much traffic, maybe 10,000 visitors per day. Again, it is surprising that Dreamhost has been stumped for months; they say I'm the only customer they have with this issue.
Other relevant info about my server:
Ubuntu 16.04
Apache 2.4.33
PHP-FPM (7.0)
cURL 7.47.0
AWS S3 SDK for PHP 3.81.0
Have HTTPS and HTTP/2 enabled
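For reference, here is roughly what the client setup and upload call look like with SDK v3; the endpoint matches the error logs above, but the credentials, bucket, and file paths are placeholders. Setting Guzzle's connect_timeout and timeout explicitly is worth trying, so a stalled request fails quickly and visibly instead of hanging:

require 'aws/aws-autoloader.php'; // hypothetical autoloader path

use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;

// Placeholder credentials; endpoint taken from the error logs above.
$s3 = new S3Client([
    'version'     => 'latest',
    'region'      => 'us-east-1',
    'endpoint'    => 'https://objects-us-east-1.dream.io',
    'credentials' => ['key' => 'KEY', 'secret' => 'SECRET'],
    // Explicit Guzzle timeouts: fail fast instead of hanging for minutes.
    'http'        => ['connect_timeout' => 5, 'timeout' => 30],
]);

try {
    $s3->putObject([
        'Bucket'     => 'mybucket',
        'Key'        => 'img.jpg',
        'SourceFile' => '/path/to/img.jpg',
    ]);
} catch (S3Exception $e) {
    error_log('Upload failed: ' . $e->getMessage());
}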
Does anyone know what might be causing this type of error when I sometimes try to execute my PHP file?
Curl error: Operation timed out after 0 milliseconds with 0 out of 0
bytes received
I get this error very often when I try to load my PHP file with GCM code, but sometimes I do not get it.
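One commonly suggested mitigation for this exact "timed out after 0 milliseconds" wording: if the request is made with plain cURL, libcurl's signal-based timeouts can misreport the elapsed time, and setting CURLOPT_NOSIGNAL along with explicit timeouts often makes the error sane. A hedged sketch of a GCM-style request with those options; the URL, API key, and payload are placeholders:

// Hypothetical GCM-style POST; URL, key, and payload are placeholders.
$ch = curl_init('https://android.googleapis.com/gcm/send');
curl_setopt_array($ch, [
    CURLOPT_POST           => true,
    CURLOPT_HTTPHEADER     => ['Authorization: key=API_KEY', 'Content-Type: application/json'],
    CURLOPT_POSTFIELDS     => json_encode(['registration_ids' => ['DEVICE_TOKEN']]),
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_CONNECTTIMEOUT => 10,  // seconds allowed to establish the connection
    CURLOPT_TIMEOUT        => 30,  // seconds allowed for the whole transfer
    CURLOPT_NOSIGNAL       => 1,   // avoid signal-based timeouts misreporting "0 ms"
]);
$response = curl_exec($ch);
if ($response === false) {
    error_log('cURL error: ' . curl_error($ch));
}
curl_close($ch);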
My company recently added MongoDB to the databases we use, and things have been going mostly smoothly, but every once in a while we get an odd error. It is nearly impossible to reproduce, and it has only happened four times in the last week of testing, but once we go live to production our clients will be using the site much more heavily than we did in testing. We are trying to solve the bug before it gets out of hand.
The error we get is: (line breaks added for readability)
Fatal error: Uncaught exception 'MongoCursorException' with message
'Failed to connect to: 10.0.1.114:27017: send_package: error reading from socket:
Timed out waiting for header data' in
/opt/local/apache2/htdocs/stage2/library/Shanty/Mongo/Connection/Group.php on line 134
We are using ShantyMongo in PHP, and it is a remote connection. The error is really intermittent, and refreshing the page is enough to make it go away. As a temporary solution, we have wrapped all of our Mongo methods in a for loop with a try/catch, so that if a MongoException is thrown we retry the method, up to two more times, the hope being that it will succeed on one of the three attempts since the error is so unpredictable.
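A minimal sketch of that temporary workaround, assuming the legacy driver where MongoCursorException extends MongoException (the find() call is a hypothetical ShantyMongo usage):

// Sketch of the workaround described above: retry a Mongo call up to
// three times when the intermittent timeout exception fires.
function retryMongo(callable $operation, $maxAttempts = 3)
{
    for ($attempt = 1; $attempt <= $maxAttempts; $attempt++) {
        try {
            return $operation();
        } catch (MongoException $e) {
            if ($attempt === $maxAttempts) {
                throw $e; // give up after the final attempt
            }
        }
    }
}

// Usage with a hypothetical ShantyMongo document fetch:
$user = retryMongo(function () {
    return User::find('4c04516f1f5f5e241a000000');
});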
On PHP 5.3.3-7, I'm using DOMDocument->load() to load a local file, and recently I've started encountering situations where I get E_WARNINGs:
DOMDocument::load() [domdocument.load]: I/O warning : failed to load external entity "/path/to/local/file/that/does/exist"
Once this starts happening the only way I've found to make the errors stop is to restart apache, and then it's fine for a while again.
I haven't changed any related code recently, but it occurred to me that this started happening after I installed a Debian patch for CVE-2013-1643, which seems to possibly disable entity loading. If a single triggering event can disable it, could that disable it permanently for all future PHP requests until a restart? That seems aggressive. By contrast, libxml_disable_entity_loader() seems to operate on the current request only.
I have no code that I know of that should ever load remote XML or that would ever trigger the disabling, but if this is what's happening, I would have expected something to show up in my PHP error log, and I don't see anything. What other avenues should I investigate?
EDIT: I've finally found a way to reproduce the problem predictably. If I intentionally exceed the allowed memory limit in a single session...
mod_fcgid: stderr: PHP Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 32 bytes)
...then I start getting the I/O warning on all subsequent calls to DOMDocument->load(). This time, I was able to get it working again without restarting Apache, just by calling libxml_disable_entity_loader(false). This is truly funky behavior; it's starting to smell like a bug in PHP.
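For anyone hitting the same thing, the workaround amounts to forcing the loader back on before each load. A sketch (libxml_disable_entity_loader() returns the previous value, so you can restore it afterwards; the file path is a placeholder):

// Workaround sketch: force the entity loader back on before loading,
// since something (the memory-limit fatal?) left it disabled.
$previous = libxml_disable_entity_loader(false);

$doc = new DOMDocument();
if (!$doc->load('/path/to/local/file.xml')) {
    error_log('DOMDocument::load failed');
}

// Optionally restore whatever state we found.
libxml_disable_entity_loader($previous);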
I have a PHP CLI script, mostly written, that functions as a chat server for chat clients to connect to (don't ask me why I'm doing it in PHP, that's another story, haha).
My script uses the socket_select() function to block execution until something happens on a socket, at which point it wakes up, processes the event, and waits for the next one. Now, there are some routine tasks that I need performed every 30 seconds or so (checking whether temp-banned users should be unbanned, saving user databases, and other assorted things).
From what I can tell, PHP doesn't have great multi-threading support at all. My first thought was to compare timestamps every time the socket generates an event and gets the program flowing again, but this is very inconsistent, since the server could very well sit idle for hours without any of my cleanup routines executing.
I came across the PHP pcntl extension, and it lets me assign a time interval for SIGALRM to be sent and a function to be executed every time it fires. This seems like the ideal solution to my problem; however, pcntl_alarm() and socket_select() clash with each other pretty badly. Every time SIGALRM is triggered, all sorts of crazy things happen to my socket control code.
My program is fairly lengthy, so I can't post it all here, but that shouldn't matter, since I don't believe I'm doing anything wrong code-wise. My question is: is there any way for a SIGALRM to be handled in the same thread as a waiting socket_select()? If so, how? If not, what are my alternatives here?
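For context, the alarm is presumably wired up in the usual pcntl way, something like this sketch (the 10-second interval matches the ticks in the output below; the handler body is a placeholder):

declare(ticks=1); // needed so the signal handler actually runs (pre-PHP 7.1)

// Hypothetical handler: prints the tick and re-arms itself every 10 seconds.
pcntl_signal(SIGALRM, function ($signo) {
    echo '[' . date('m-d-y # H:i:s') . "] Tick!\n";
    // ... run periodic maintenance here ...
    pcntl_alarm(10);
});
pcntl_alarm(10);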
Here's some output from my program. My alarm function simply outputs "Tick!" whenever it's called, to make it easy to tell when things are happening. This is the output (including errors) after allowing it to tick 4 times (there were no actual attempts to connect to the server, despite what it says):
[05-28-10 # 20:01:05] Chat server started on 192.168.1.28 port 4050
[05-28-10 # 20:01:05] Loaded 2 users from file
PHP Notice: Undefined offset: 0 in /home/danny/projects/PHPChatServ/ChatServ.php on line 112
PHP Warning: socket_select(): unable to select [4]: Interrupted system call in /home/danny/projects/PHPChatServ/ChatServ.php on line 116
[05-28-10 # 20:01:15] Tick!
PHP Warning: socket_accept(): unable to accept incoming connection [4]: Interrupted system call in /home/danny/projects/PHPChatServ/ChatServ.php on line 126
[05-28-10 # 20:01:25] Tick!
PHP Warning: socket_getpeername() expects parameter 1 to be resource, boolean given in /home/danny/projects/PHPChatServ/ChatServ.php on line 129
[05-28-10 # 20:01:25] Accepting socket connection from
PHP Notice: Undefined offset: 1 in /home/danny/projects/PHPChatServ/ChatServ.php on line 112
PHP Warning: socket_select(): unable to select [4]: Interrupted system call in /home/danny/projects/PHPChatServ/ChatServ.php on line 116
[05-28-10 # 20:01:35] Tick!
PHP Warning: socket_accept(): unable to accept incoming connection [4]: Interrupted system call in /home/danny/projects/PHPChatServ/ChatServ.php on line 126
[05-28-10 # 20:01:45] Tick!
PHP Warning: socket_getpeername() expects parameter 1 to be resource, boolean given in /home/danny/projects/PHPChatServ/ChatServ.php on line 129
[05-28-10 # 20:01:45] Accepting socket connection from
PHP Notice: Undefined offset: 2 in /home/danny/projects/PHPChatServ/ChatServ.php on line 112
pcntl_alarm() and socket_select() can coexist, but you need to be aware of how to do it right.
In particular, if the alarm goes off while socket_select() is waiting, then after the alarm is handled, socket_select() returns immediately with an error indication. The error is "Interrupted system call" (EINTR, the [4] in your output). You need to specifically check for that error and retry the socket_select().
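A sketch of that retry: socket_select() returns false on the interruption, and socket_last_error() reports EINTR, in which case you simply go back to waiting ($allSockets stands in for your watch list):

// Retry socket_select() when a SIGALRM interrupts the wait (EINTR).
do {
    $read   = $allSockets; // socket_select() modifies its arguments, so copy
    $write  = null;
    $except = null;
    $ready  = @socket_select($read, $write, $except, null);
    if ($ready === false && socket_last_error() !== SOCKET_EINTR) {
        die('socket_select failed: ' . socket_strerror(socket_last_error()));
    }
} while ($ready === false); // interrupted by the alarm: wait again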
Alternatively, you can forget about the alarm entirely and instead use the timeout facility of socket_select(). This is what the tv_sec parameter is for: it gives a timeout in seconds, after which socket_select() returns even if no sockets are ready. You can then do your regular processing.
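In that style the main loop looks something like this sketch, with no signals at all; doPeriodicTasks() is a placeholder for your unban/save routines:

// Alternative sketch: drop SIGALRM entirely and let socket_select()
// time out every 30 seconds, running the housekeeping in between.
while (true) {
    $read   = $allSockets;
    $write  = null;
    $except = null;
    $ready  = socket_select($read, $write, $except, 30); // tv_sec = 30

    if ($ready > 0) {
        // ... handle the sockets that are ready in $read ...
    }

    // This runs on every wake-up; compare timestamps here if the tasks
    // must not run more often than every 30 seconds under heavy traffic.
    doPeriodicTasks();
}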