PHP Guzzle: [curl] 52 Empty reply from server

I am making the following request, which resulted in an empty reply from the server.
Originating server: AWS EC2 / PHP 5.4 / Guzzle
Remote server: AWS EC2 behind an ELB
cURL info: {
"url":"https:\/\/xxx\/xxx",
"content_type":null,
"http_code":0,
"header_size":0,
"request_size":5292,
"filetime":-1,
"ssl_verify_result":0,
"redirect_count":0,
"total_time":120.987057,
"namelookup_time":0.000277,
"connect_time":0.001504,
"pretransfer_time":0.014271,
"size_upload":2430,
"size_download":0,
"speed_download":0,
"speed_upload":20,
"download_content_length":-1,
"upload_content_length":2430,
"starttransfer_time":60.998147,
"redirect_time":59.988895,
"certinfo":[],
"primary_ip":"54.169.126.111",
"primary_port":443,
"local_ip":"192.168.2.111",
"local_port":39522,
"redirect_url":""
}
cURL error: [curl] 52: Empty reply from server [url] https://xxx/xxx
Please note that this does not happen all the time.
It seems like the request has not even reached the destination (the ELB), since there were no logs related to the request.
1. Is the issue with the originating server or the remote server?
2. Could "starttransfer_time":60.998147 be the root cause?
Solutions, workarounds, and suggestions are welcome. Thanks!

As it seems the request never reached the server:
Check for network errors: any TCP retransmissions, timeouts, or other failures. Since you mentioned there was no reply at all, could it be a TCP timeout?
Run a tcpdump and analyse the traces; you can decide based on what they show.
Additionally, you can raise the log level of the applications on both the originating and the remote server.
Check for error patterns, e.g. does it happen under high load?
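If you want each attempt to fail fast and leave evidence behind, a minimal sketch along these lines may help (plain cURL rather than Guzzle, so it applies regardless of Guzzle version; the timeout values and log destination are assumptions to tune):
<?php
// Hypothetical diagnostic sketch: cap the wait instead of sitting
// through the ~120 s seen in the cURL info above, and log cURL's
// own view of the failure for later correlation with tcpdump.
$ch = curl_init('https://xxx/xxx');                 // masked URL from the question
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 5);        // give up on connect after 5 s
curl_setopt($ch, CURLOPT_TIMEOUT, 30);              // overall cap per request
$body = curl_exec($ch);
if ($body === false) {
    // errno 52 is CURLE_GOT_NOTHING, i.e. "Empty reply from server"
    error_log(sprintf('curl errno=%d msg=%s info=%s',
        curl_errno($ch), curl_error($ch), json_encode(curl_getinfo($ch))));
}
curl_close($ch);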

In my case "Empty reply from server" was caused by exhausted memory on the remote server. A fatal error was thrown there and the request was terminated.
Debugging cURL with curl_setopt($h, CURLOPT_VERBOSE, true); did not help, since the output showed only "Connection died, retrying a fresh connect" and then "Empty reply from server". We had to debug it on the remote server's side.
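For the record, the verbose trace can at least be captured to a file so intermittent failures can be compared afterwards; a small sketch (the log path and URL are placeholders):
<?php
// Redirect cURL's verbose output to a log file via CURLOPT_STDERR.
$trace = fopen('/tmp/curl_trace.log', 'a');
$h = curl_init('https://example.com/endpoint');     // placeholder URL
curl_setopt($h, CURLOPT_RETURNTRANSFER, true);
curl_setopt($h, CURLOPT_VERBOSE, true);
curl_setopt($h, CURLOPT_STDERR, $trace);            // verbose lines go to the file
curl_exec($h);
curl_close($h);
fclose($trace);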

Related

Cloudflare throws a 524 error on my server

I wonder why Cloudflare throws an error on my server, which is up? I can verify the server is up by visiting the IP in my browser.
I checked the system log and the Apache log; no errors found. By the way, I just set the domain up on a static site. I can't figure out how to fix it. I Googled and found no solution.
A 524 error means that Cloudflare was able to make a TCP connection to the origin, but the origin did not reply with an HTTP response before the connection timed out. In other words, Cloudflare can reach the origin server over the network, but the origin server took too long to respond to the request.
https://support.cloudflare.com/hc/en-us/articles/200171926-Error-524-A-timeout-occurred
If your process takes more than 100 seconds (about 1.7 minutes), Cloudflare throws this error.
The link above explains how to resolve this in PHP.
In a PHP app this could be the session failing to return data; it hangs because of cookies. Clear the application data to create a new session and try again.
Officially, the default on the free plan is 100 seconds before Cloudflare returns this error.
On the Enterprise plan the 524 timeout can be raised to a maximum of 600 seconds (it is not the default).
Sometimes Cloudflare throws a 524 error because your IP address is blocked by Cloudflare. Before looking into how to increase the timeout, it is best to first check that your IP address is not blocked by Cloudflare.
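If the work genuinely needs more than 100 seconds, one common workaround is to answer immediately and continue in the background. A sketch assuming PHP-FPM; do_slow_work() is a hypothetical stand-in for the long-running job:
<?php
// Send a quick response so Cloudflare sees an HTTP reply well inside
// its 100-second window, then keep working after the client is gone.
ignore_user_abort(true);              // keep running after the client disconnects
set_time_limit(0);                    // lift PHP's own execution time limit
header('Content-Type: application/json');
echo json_encode(array('status' => 'accepted'));
if (function_exists('fastcgi_finish_request')) {
    fastcgi_finish_request();         // flush the response now (PHP-FPM only)
}
do_slow_work();                       // hypothetical long-running job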

file_get_contents not working - connection refused

After reading the following links:
get url content PHP
file_get_contents failed to open stream: Connection refused godaddy server with remote connection server
PHP file_get_contents() returns "failed to open stream: HTTP request failed!"
I'm having a problem with the function file_get_contents (and even cURL) on one of my servers at HostGator: they are not working, returning the PHP error failed to open stream: Connection refused. I've tried cURL with a USERAGENT set, with no result either. It's a simple weather service that I'm creating; it returns the altitude, wind direction, speed, and temperature at a certain coordinate on the globe.
Return sample: 30000;221;2;-32;1;
On the other side (the side receiving the request), I have a web server running IIS 7.5, with all router firewalls, computer firewalls, and antivirus software disabled just for testing, and it is still refusing the connection ONLY for the HostGator servers. I've tried the same code with other web hosting providers, and there it works properly.
This service will handle a lot of requests per minute, and it seems to me that something has blocked my connection between HostGator and my server because of the number of requests. But I don't know where!
The page is perfectly accessible via a browser.
This is my environment on the HostGator side:
allow_url_fopen: On
allow_url_include: On
OpenSSL: Enabled
Here is my PHP code:
$datalink = "http://#####.########.###:8280/weather.php?waypoint_lat=-10.981925&waypoint_lon=-37.077377&altitude=30000";
$weather_layer = file_get_contents($datalink);
echo "Layer ($datalink):" .$weather_layer."<br>";
Isn't HostGator blocking the requests because of DDoS protection? Give them a call; my hosting provider was blocking connections to my other server because they thought a hacker was DDoSing through my hosting.
Also, there might be a problem with the port in the URL: perhaps HostGator cannot process it?
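To separate a network-level block from an HTTP-level one, a raw TCP probe run from the HostGator side can help; this sketch reuses the masked host and port from the question:
<?php
// If even the TCP connect is refused, the problem sits below HTTP
// (firewall, DDoS protection, outbound port filtering).
$errno = 0;
$errstr = '';
$fp = @fsockopen('#####.########.###', 8280, $errno, $errstr, 10);
if ($fp === false) {
    echo "TCP connect failed: [$errno] $errstr\n";
} else {
    echo "TCP connect OK; the block must be higher up the stack\n";
    fclose($fp);
}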

PHP file_get_contents error in connection

I have a PHP file and all it contains is:
<?php
ini_set('display_errors', 1);
ini_set('display_startup_errors', 1);
error_reporting(E_ALL);
echo file_get_contents("http://mywebsite.com/javascript-function.php");
?>
And for some reason it displays the following notice:
Notice: file_get_contents(): send of 24 bytes failed with errno=104 Connection reset by peer in /home/sites/mywebsite.com/public_html/index.php on line 6
Notice: file_get_contents(): send of 2 bytes failed with errno=32 Broken pipe in /home/sites/mywebsite.com/public_html/index.php on line 6
I have never come across this message before, so I have no idea how to solve it.
I have also tried using cURL but it outputs nothing and no error message.
A "connection reset by peer" error occurs on a data-stream connection when either the remote host you are connecting to (i.e. mywebsite.com, which you specified in the call to file_get_contents) terminates the socket connection on its end before the client has finished sending the request, or when the local network stack detects a connection failure.
Some common root causes are a firewall rule blocking the connection on either end, or possibly a misconfigured web server. One way to narrow the problem down is to try accessing the same URL from a web browser on the same client the script ran on when the error occurred. If that works as expected, you at least know it is not a firewall issue on the client, and you can start digging into the web server's config files to troubleshoot further. However, if the same problem occurs in the browser, you should look into the firewall rules on that client, as well as any on the host.
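Since the asker's cURL attempt reportedly produced no output and no error message, note that cURL only reports errors when asked for them; a small sketch that makes the failure visible:
<?php
$ch = curl_init('http://mywebsite.com/javascript-function.php');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);   // return the body instead of printing it
curl_setopt($ch, CURLOPT_FAILONERROR, true);      // treat HTTP >= 400 as a failure too
$result = curl_exec($ch);
if ($result === false) {
    // e.g. "Recv failure: Connection reset by peer" matches errno=104 above
    echo 'cURL error (' . curl_errno($ch) . '): ' . curl_error($ch);
}
curl_close($ch);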

PHP Guzzle 5: Cannot handle URL with PORT number in it

I am using the latest version of Guzzle.
(from composer.json)
"guzzlehttp/guzzle": "~5"
(from composer.lock)
"name": "guzzlehttp/guzzle",
"version": "5.2.0",
When I attempt a request (GET or POST) with a URL that contains a port number:
$response = $client->get('http://www.hostdnshere.com:8888', array());
I get the following error:
string(68) "cURL error 7: Failed to connect to 000.000.000.000: Permission denied"
When I do the same but omit the PORT:
$response = $client->get('http://www.hostdnshere.com', array());
The request succeeds without issue. I have searched the documentation and Googled around, but cannot find out how to set the port for the host, since apparently it cannot be included in the URL.
Additionally, I have tested it all using cURL on the server from which the requests are being sent, with and without the port; it works like a charm no matter what, so I know it is not an issue with the server, DNS, proxies, or ports.
For all those banging their heads against the wall over the
"cURL error 7: Failed to connect to 000.000.000.000: Permission denied"
error: it all boils down to SELinux. That's right, any cURL wrapper written in any programming language can be affected, because when SELinux is set to enforcing it takes issue with cURL being executed against a URL that contains a non-standard port (e.g. my.domain.com:8888).
Recommended for local development only: if you wish to use non-standard ports in your URLs, set SELinux to disabled. The proper solution in production is to use clean URLs without ports in them, so that SELinux can stay enabled.
Open:
nano /etc/selinux/config
Locate:
SELINUX=enforcing
Change:
SELINUX=disabled
Those using CentOS will most likely run into this issue, since SELinux is set to enforcing by default.
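A less drastic alternative, if the PHP code runs under Apache's SELinux context, is to leave SELinux enforcing and grant only what cURL needs. These are standard SELinux tools, though whether they apply depends on your setup:
setsebool -P httpd_can_network_connect 1
semanage port -a -t http_port_t -p tcp 8888
The first command lets httpd-confined scripts open outbound network connections; the second labels the non-standard port (8888 here, as in the example above) as an HTTP port if it is not already.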

cURL/PHP Request Executes 50% of the Time

After searching all over, I can't understand why cURL requests issued to a remote SSL-enabled host succeed only about 50% of the time in my case. Here's the situation: I have a sequence of cURL requests, all issued to an HTTPS remote host, within a single PHP script that I run using the PHP CLI. Occasionally when I run the script the requests execute successfully, but most of the time I get the following error from cURL:
* About to connect() to www.virginia.edu port 443 (#0)
* Trying 128.143.22.36... * connected
* Connected to www.virginia.edu (128.143.22.36) port 443 (#0)
* successfully set certificate verify locations:
* CAfile: none
CApath: /etc/ssl/certs
* error:140943FC:SSL routines:SSL3_READ_BYTES:sslv3 alert bad record mac
* Closing connection #0
If I try again a few times I get the same result, but then after a few tries the requests go through successfully. Running the script again after that results in an error once more, and the pattern continues. Researching the error 'alert bad record mac' didn't give me anything helpful, and I hesitate to blame it on an SSL issue since the script does still run occasionally.
I'm on Ubuntu Server 10.04, with php5 and php5-curl installed, as well as the latest version of OpenSSL. In terms of cURL-specific options, CURLOPT_SSL_VERIFYPEER is set to false, and both CURLOPT_TIMEOUT and CURLOPT_CONNECTTIMEOUT are set to 4 seconds. Further illustrating the problem, exactly the same thing happens on my Mac OS X dev machine: the requests only go through ~50% of the time.
The remote host is maybe not a single real host. Maybe it's some sort of load-balancing setup with several servers taking the incoming requests.
What makes me think so is the 'bad record mac' in the error message. The MAC here is the message authentication code of an SSL record, and the check can fail if, for example, the connection is handed to a different backend while the SSL negotiation is still in progress. That could explain why you sometimes have no problem at all.
But maybe not :-) SSL problems are quite hard to track down.
I do not understand your remark about prefork MPM vs worker MPM; if you run PHP in CLI mode, your Apache MPM is not used, since you're not even using Apache.
You may need this option:
CURLOPT_FORBID_REUSE
Pass a long. Set to 1 to make the next transfer explicitly close the connection when done. Normally, libcurl keeps all connections alive when done with one transfer in case a succeeding one follows that can re-use them. This option should be used with caution and only if you understand what it does. Set to 0 to have libcurl keep the connection open for possible later re-use (default behavior).
Have you tried:
curl_setopt($handle, CURLOPT_SSLVERSION, 3);
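Putting the two suggestions together gives something like the following; CURLOPT_FRESH_CONNECT is an extra assumption here, added so a cached connection is not picked up either:
// Force a brand-new connection per transfer, so a load balancer
// cannot hand an in-flight SSL session to a different backend.
curl_setopt($handle, CURLOPT_FORBID_REUSE, true);   // close the connection when done
curl_setopt($handle, CURLOPT_FRESH_CONNECT, true);  // do not reuse a cached connection
// Pin the SSL version, as suggested above; on a modern stack
// CURL_SSLVERSION_TLSv1_2 would be a safer value than 3 (SSLv3).
curl_setopt($handle, CURLOPT_SSLVERSION, 3);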
