After searching all over, I can't understand why cURL requests issued to a remote SSL-enabled host succeed only about 50% of the time in my case. Here's the situation: I have a sequence of cURL requests, all of them issued to an HTTPS remote host, within a single PHP script that I run using the PHP CLI. Occasionally when I run the script the requests execute successfully, but most of the time I get the following error from cURL:
* About to connect() to www.virginia.edu port 443 (#0)
* Trying 128.143.22.36... * connected
* Connected to www.virginia.edu (128.143.22.36) port 443 (#0)
* successfully set certificate verify locations:
* CAfile: none
CApath: /etc/ssl/certs
* error:140943FC:SSL routines:SSL3_READ_BYTES:sslv3 alert bad record mac
* Closing connection #0
If I try again a few times I get the same result, but then after a few tries the requests will go through successfully. Running the script again after that results in an error, and the pattern continues. Researching the error 'alert bad record mac' didn't turn up anything helpful, and I hesitate to blame it on an SSL issue since the script still runs occasionally.
I'm on Ubuntu Server 10.04, with php5 and php5-curl installed, as well as the latest version of OpenSSL. In terms of cURL-specific options, CURLOPT_SSL_VERIFYPEER is set to false, and both CURLOPT_TIMEOUT and CURLOPT_CONNECTTIMEOUT are set to 4 seconds. Further illustrating the problem, the exact same situation occurs on my Mac OS X dev machine - the requests only go through ~50% of the time.
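For reference, here is roughly how each handle is configured (a reconstruction from the description above, not the actual script; the URL is just the host from the verbose output):
// Reconstruction of the options described above.
$ch = curl_init('https://www.virginia.edu/');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false); // peer verification disabled
curl_setopt($ch, CURLOPT_TIMEOUT, 4);            // total transfer timeout
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 4);     // connect timeout
$result = curl_exec($ch);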
The remote host may not be a single real host. It could be some sort of load-balancing setup, with several servers taking the incoming requests.
What makes me think so is the 'mac error' in the error message. It could mean the remote host's MAC address changed while the SSL negotiation was still running, which would explain why you sometimes have no problem at all.
But maybe not :-) SSL problems are quite hard to track down.
I do not understand your remark about prefork MPM vs. worker MPM: if you run PHP in CLI mode, your Apache MPM is not used; you're not even using Apache.
You may need this option:
CURLOPT_FORBID_REUSE
Pass a long. Set to 1 to make the next transfer explicitly close the connection when done. Normally, libcurl keeps all connections alive when done with one transfer in case a succeeding one follows that can re-use them. This option should be used with caution and only if you understand what it does. Set to 0 to have libcurl keep the connection open for possible later re-use (default behavior).
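For example, a minimal sketch (the URL is just the host from the question; CURLOPT_FRESH_CONNECT is the companion option that prevents re-using a cached connection on the way in):
// Force a brand-new connection for this transfer and close it afterwards.
$ch = curl_init('https://www.virginia.edu/');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_FRESH_CONNECT, 1); // don't re-use a cached connection
curl_setopt($ch, CURLOPT_FORBID_REUSE, 1);  // close the connection when done
$body = curl_exec($ch);
curl_close($ch);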
Have you tried?
curl_setopt($handle, CURLOPT_SSLVERSION, 3);
Related
I am making the following request, which results in an empty reply from the server.
Originating server: AWS EC2 / PHP 5.4 / Guzzle
Remote server: AWS EC2 behind an ELB
cURL info:
{
"url":"https:\/\/xxx\/xxx",
"content_type":null,
"http_code":0,
"header_size":0,
"request_size":5292,
"filetime":-1,
"ssl_verify_result":0,
"redirect_count":0,
"total_time":120.987057,
"namelookup_time":0.000277,
"connect_time":0.001504,
"pretransfer_time":0.014271,
"size_upload":2430,
"size_download":0,
"speed_download":0,
"speed_upload":20,
"download_content_length":-1,
"upload_content_length":2430,
"starttransfer_time":60.998147,
"redirect_time":59.988895,
"certinfo":[],
"primary_ip":"54.169.126.111",
"primary_port":443,
"local_ip":"192.168.2.111",
"local_port":39522,
"redirect_url":""
}
CURL error : [curl] 52: Empty reply from server [url] https:\/\/xxx\/xxx
Please note that this does not happen all the time.
It seems the request never even reached the destination (the ELB), since there were no logs related to it.
1. Is the issue with the originating server or the remote server?
2. Could "starttransfer_time":60.998147 be the root cause?
Solutions, workarounds, and suggestions are welcome. Thanks!
Since it seems the request never reached the server:
Check for network errors: any TCP retransmissions, timeouts, or other errors. You mention there was no reply at all; could it be a TCP timeout?
Run a tcpdump and analyse the traces; based on those you can decide.
Additionally, you can raise the log level in the applications on both the originating and remote servers.
Check for error patterns, e.g. does it only happen under high load?
In my case "Empty reply from server" was caused by exhausted memory on the remote server. A fatal error was thrown there and the request was terminated.
Debugging cURL with curl_setopt($h, CURLOPT_VERBOSE, true); did not help, since it only showed "Connection died, retrying a fresh connect" and then "Empty reply from server". We had to debug it on the remote server's side.
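If you do want to keep cURL's verbose trace for later inspection anyway, you can redirect it to a file; a minimal sketch (the log path and URL are placeholders):
// Write cURL's verbose trace to a file instead of stderr.
$trace = fopen('/tmp/curl_trace.log', 'a');
$ch = curl_init('https://example.com/endpoint');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_VERBOSE, true);
curl_setopt($ch, CURLOPT_STDERR, $trace); // verbose output goes here
$response = curl_exec($ch);
curl_close($ch);
fclose($trace);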
I am using the latest version of guzzle.
(from composer.json)
"guzzlehttp/guzzle": "~5"
(from composer.lock)
"name": "guzzlehttp/guzzle",
"version": "5.2.0",
When I attempt to request (GET or POST) with a URL that contains a PORT number:
$response = $client->get('http://www.hostdnshere.com:8888', array());
I get the following error:
string(68) "cURL error 7: Failed to connect to 000.000.000.000: Permission denied"
When I do the same but omit the PORT:
$response = $client->get('http://www.hostdnshere.com', array());
The request succeeds without issue. I have searched the documentation and Googled the web but cannot find out how to set the port for the host, since it apparently cannot be included in the URL.
Additionally, I have tested it all using cURL on the server from which the requests are being sent, with and without the PORT; it works like a charm no matter what, so I know it's not an issue with the server, DNS, proxies, or PORTs.
For all those banging their heads against the wall due to the
"cURL error 7: Failed to connect to 000.000.000.000: Permission denied"
error, it all boils down to SELinux. That's right: any cURL wrapper written in any programming language can be affected, because when SELinux is set to 'enforcing' it takes issue with cURL being executed against a URL that has a non-standard PORT in it (e.g. my.domain.com:8888).
Recommended for local development only: if you wish to use non-standard PORTs in your URLs, set SELinux to 'disabled'. The proper solution in production is to use clean URLs without PORTs in them, so that SELinux can stay enabled.
Open:
nano /etc/selinux/config
Locate:
SELINUX=enforcing
Change:
SELINUX=disabled
Those using CentOS will most likely run into this issue, since SELinux is set to 'enforcing' there by default.
I have an ecommerce website that has been running for several months with no code changes (and for several years with only minimal changes to the card processing path). I now have a problem where when first opening a connection to the credit card processor secure server, the connection fails. On a second (or third, or fourth, etc.) attempt the connection succeeds. After some length of time--perhaps 5 minutes--the initial connection will fail again and subsequent connections will succeed.
Sample code that comes from the credit card processor's PHP API file:
$url = 'https://esplus.moneris.com:443/gateway_us/servlet/MpgRequestArray';
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS, $dataToSend);
curl_setopt($ch, CURLOPT_TIMEOUT, $gArray[CLIENT_TIMEOUT]);
curl_setopt($ch, CURLOPT_USERAGENT, $gArray[API_VERSION]);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, TRUE);
$response = curl_exec($ch);
if (!$response) {
    print curl_error($ch);
    print "\n";
    print curl_errno($ch);
    print "\n";
} else {
    print "Success\n";
}
Output:
% php tester_curl.php
error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol
35
% php tester_curl.php
Success
% php tester_curl.php
Success
% php tester_curl.php
Success
There are some similar questions, but I haven't been able to resolve the problem, and I don't see any with the same error message plus the symptom of subsequent connection attempts succeeding after an initial failure, e.g.:
Unable to establish SSL connection, how do I fix my SSL cert?
curl errno 35 (Unknown SSL protocol error in connection to [secure site]:443) (same error message)
How to fix cURL SSL connection timeout that only happens the first time the script is called (different error msg, but SSL connections fails first attempt, subsequently succeeds)
The server is somewhat broken. It supports TLS 1.2 and TLS 1.0, but not TLS 1.1 (it replies with TLS 1.0, which is acceptable). This is usually not a problem, unless you have client code which tries to enforce specific protocol versions by excluding others.
The behavior you describe looks like a client which downgrades the protocol version after a failed connection and keeps this downgrade cached for a while, but retries with the originally failed version again after some time. To track the problem down:
check if the problem is also with other servers
check if other clients have the problem with the same server
check the underlying TLS implementation. cURL can be built against GnuTLS, NSS, OpenSSL, and others. From the error message it looks like OpenSSL, but which version?
check for any middlebox (firewall, load balancer...) in the path to the server which might cause problems
do a packet capture and post it here in a form usable with wireshark (e.g. cloudshark)
For more information on how to debug this kind of problem, and on which additional information would be useful, see http://noxxi.de/howto/ssl-debugging.html#aid_external_debugging
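To see which protocol versions your exact client stack can negotiate with this server, you can pin each TLS version in turn; a diagnostic sketch (the CURL_SSLVERSION_TLSv1_* constants require a reasonably recent libcurl and PHP build):
// Probe the server with one pinned TLS version at a time.
$versions = array(
    'TLS 1.0' => CURL_SSLVERSION_TLSv1_0,
    'TLS 1.1' => CURL_SSLVERSION_TLSv1_1,
    'TLS 1.2' => CURL_SSLVERSION_TLSv1_2,
);
foreach ($versions as $label => $version) {
    $ch = curl_init('https://esplus.moneris.com:443/gateway_us/servlet/MpgRequestArray');
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_NOBODY, true); // the handshake is all we care about
    curl_setopt($ch, CURLOPT_SSLVERSION, $version);
    curl_exec($ch);
    printf("%s: %s\n", $label, curl_errno($ch) ? curl_error($ch) : 'handshake OK');
    curl_close($ch);
}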
I had the exact same thing happen with a client using an old VirtualMerchant gateway. It started failing at 5:00 PM on a Monday and magically started working again at 10 AM the next day.
Whether on the command line via openssl or curl, or through cURL in PHP, the connection would fail the first time, and then if you ran the same command a second later it would work.
I tried forcing IPv4 (instead of IPv6), setting timeouts, forcing different protocols, downgrading OpenSSL, etc., and none of it worked.
The assumption is that this was something DNS- and/or server-related on the gateway side, because nothing we did fixed it and it eventually fixed itself.
We were running an older OpenSSL that only supported up to TLS 1.1, but it had been working before and started working again afterwards, so the problem wasn't only our client. That said, the age of our client must have been part of the issue, because other, newer clients didn't experience the "first attempt failure" during the same window of time.
Long story short: if this happens, it's probably not you (aside from running an older OpenSSL), and the gateway/server you're calling will likely need to fix or tweak something for it to start working again.
Keep in mind that OpenSSL is part of the Linux core packages, so you can't simply upgrade OpenSSL without serious risk of breaking your server. You'll have to upgrade to a newer version of the operating system to get a more modern OpenSSL.
We send some files across to a third party with a PHP cron job via FTP.
However sometimes we get the following error:
ErrorException [ 2 ]: ftp_put(): php_connect_nonb() failed: Operation
now in progress (115) ~ MODPATH/fileop/classes/Drivers/Fileop/Ftp.php [ 37 ]
When I say "sometimes" I mean exactly that: most times the files go across fine, but about 1 in 5 times we get that error. It has nothing to do with the files themselves, because they go across happily if we try again.
We've found similar issues online, relating to a bug in PHP with NAT devices or to firewall configuration, but again the implication there is that if that were the cause it would never work.
So, why would this work sometimes and not others?
ftp_set_option($ftpconn, FTP_USEPASVADDRESS, false);
Adding this line before enabling passive mode with ftp_pasv($ftpconn, true); solved my problem.
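In context, a minimal sketch (host, credentials, and file names are placeholders):
// Connect and log in (placeholder host/credentials).
$ftpconn = ftp_connect('ftp.example.com');
ftp_login($ftpconn, 'user', 'password');

// Ignore the IP address the server advertises in its PASV reply and keep
// using the address we originally connected to, then enable passive mode.
ftp_set_option($ftpconn, FTP_USEPASVADDRESS, false);
ftp_pasv($ftpconn, true);

ftp_put($ftpconn, 'remote.txt', 'local.txt', FTP_BINARY);
ftp_close($ftpconn);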
FTP(S) uses random ports to set up data connections; an intermittent success rate indicates that not all ports are allowed by a firewall on the client and/or server machines. The port range for incoming (PASV) data connections can be set in the FTP server.
This page has a nice summary:
The easy way is to simply allow FTP servers and clients unlimited access through your firewall, but if you like to limit their access to "known" ports, you have to understand the 4 different scenarios.
1) The FTP server should be allowed to accept TCP connections to port 21, and to make TCP connections from port 20 to any (remote ephemeral) port.
2) The FTP server should be allowed to accept TCP connections to port 21, AND to accept TCP connections to any ephemeral port as well!
3) The FTP client should be allowed to make TCP connections to port 21, and to accept TCP connections from port 20 to any ephemeral port.
4) The FTP client should be allowed to make TCP connections to port 21, and to make TCP connections to any other (remote ephemeral) port as well!
So, I'm writing this answer after doing some investigation on my FTP server and reading the link you provided, elitehosts.com.
I'm using FileZilla FTP Server, and there is a specific setting I had to change to make it work. In the server settings there is an area titled "Passive mode settings". In that dialog there is an "IPv4 specific" section, which contains a setting labeled "External Server IP Address for passive mode transfers:". It's a radio-button group that was set to "Default"; since the FTP server is NAT'ed, I changed the selection from "Default" to "Use the following IP:" and entered the external-facing IP address of my gateway, provided by my ISP.
After I set this up, it worked! I'm not entirely sure whether your FTP server is NAT'ed, but I thought I would provide the answer on this thread because it seems related.
In addition to Cees' answer: I am running vsftpd on EC2 and had to comment out listen_ipv6=YES, set listen=YES, and then run "service vsftpd restart".
Although the documentation says it will listen on IPv4 as well, it wasn't, and this change resolved the issue.
For me, all I had to do was remove the ftp_pasv($ftpconn, true); line, and everything worked perfectly. I'm not yet sure why, but I am trying to find out and will come back when I know the reason behind it.
This should be a comment under jj_dev2's answer, but I cannot add one due to reputation. Maybe it will be helpful for someone, so I post it here.
We had the same issue as described in the original post. In our case it worked with many customers, except one.
The solution in jj_dev2's answer did work for us, so we investigated what ftp_set_option($conn, FTP_USEPASVADDRESS, false) actually does. Based on that, we found that the customer's FTPS server was in fact configured incorrectly.
In response to the PASV command (ftp_pasv($conn, true)), the FTP server returns an IP address, which the PHP FTP client will then use for data transfers. In our case the FTP server was returning an internal IP address instead of the public IP address we connect to. The customer had to fix their FTP server settings so the server would send its external IP address in the PASV response.
I'm trying to do this in PHP: I need to check whether a specified host is "up".
I thought of pinging the specified host (though I'm not sure how I would, since that would require root; --help here?).
I also thought of using fsockopen() to try to connect on a specified port, but that would fail too if the host weren't listening for connections on that port.
Additionally, some hosts block ping requests, so how might I get around this? This part isn't a necessity, though, so don't worry about this too much. I realize this one might get tricky.
I typically do a simple cURL for a public page and see if it returns a 200. If you get a 500, 404, or anything besides a 200 response you know something fishy is up.
The short answer is that there is no good, universal way to do this. Ping is about as close as you can get (almost all hosts will respond to it), but as you observed, sending pings from PHP usually requires root, since raw ICMP sockets are a privileged operation.
Does your host allow you to execute system calls, so you could run the ping command at the OS level and then parse the results? This is probably your best bet.
exec("ping -c 2 google.com", $output, $status); // $status is 0 if at least one reply was received
If a host is blocking a ping request, you could do a more general portscan to look for other open ports (but this is pretty rude, don't do it to hosts who haven't given you specific permission). Nmap is a good tool for doing this. It uses quite a few tricks to figure out if a host is up and what services may or may not be running. Be careful though, as some shared hosting providers will terminate your account for "hacking activity" if you install and use Nmap, especially against hosts you do not control or have permission to probe.
Beyond that, if you are on the same unswitched ethernet layer as another host (if you happen to be on the same open WiFi network, for example), an ethernet adaptor in promiscuous mode can sniff traffic to and from a host even if it does not respond directly to you.
You could use cURL
$url = 'yoururl';
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_NOBODY, true);          // only the status line/headers are needed
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);  // follow redirects to the final page
curl_setopt($ch, CURLOPT_TIMEOUT, 5);            // don't hang forever on a dead host
curl_exec($ch);
$retcode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
curl_close($ch);
if (200 == $retcode) {
    // All's well
} else {
    // not so much
}
For the host to be monitored at all, at least one port must be open. Is the host a web server? If so, you could just open a connection to port 80; as long as it opens successfully, at least some part of the host is working.
A better solution would be to have a script that is web-accessible only to your monitor; you could then open a connection to that script, and it would return various bits of system info.
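A sketch of the port-80 check with fsockopen() (the host and timeout are arbitrary placeholders):
// Try to open a TCP connection to port 80; success means at least
// something on the host is accepting connections.
$fp = @fsockopen('www.example.com', 80, $errno, $errstr, 4); // 4-second timeout
if ($fp) {
    echo "Port 80 is accepting connections\n";
    fclose($fp);
} else {
    echo "Connection failed: $errstr ($errno)\n";
}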
EDIT--
How thorough do you want this test to be?
[server on] -> [apache running] -> [web application working]
These are all different levels of "working". Just showing that Apache returns something at least shows the server is on, but not that your web application is running.
(I realise that you may not be running anything like this but I hope it's a useful example)
EDIT--
Would it be worth installing a lightweight HTTP server (I mean very lightweight) just for monitoring?
Failing that, could you install something on the hosts that phones home every so often to show they are up?
I used gethostbyname($hostname).
The function returns the host's IP address if it can resolve the name, or the unmodified input hostname if it can't.
if ($hostname !== gethostbyname($hostname)) {
    // Name resolved, so the host exists in DNS
}