Hi, I have the following problem:
I have a PHP script which uploads files via cURL with a POST request. Basically, it does the same thing as hitting the "upload file" button.
When I upload a 100 MB file, it takes roughly 100 seconds (+/- 10). When I start a remote session and upload the same file with a browser, it takes about 40 seconds, with the upload script running in the background. So the browser upload isn't even at full speed.
My question is: why is the cURL upload so much slower? I tried googling it, and all I found were some mailing-list threads concerning an older cURL version and Windows machines.
PS: The server is running Debian, the script is executed as the root user, and I found nothing relevant in the PHP or Apache configs.
Edit:
Here is the return of curl_getinfo():
[url] => http://example.com
[content_type] => text/plain
[http_code] => 200
[header_size] => 344
[request_size] => 464
[filetime] => -1
[ssl_verify_result] => 0
[redirect_count] => 0
[total_time] => 113.560758
[namelookup_time] => 0.000961
[connect_time] => 0.055728
[pretransfer_time] => 0.055896
[size_upload] => 105000463
[size_download] => 11
[speed_download] => 0
[speed_upload] => 924619
[download_content_length] => -1
[upload_content_length] => 105000463
[starttransfer_time] => 1.057226
[redirect_time] => 0
[certinfo] => Array
(
)
[primary_ip] => xx.xx.xx.xx
[primary_port] => 80
[local_ip] => xx.xx.xx.xx
[local_port] => 39679
[redirect_url] =>
As you haven't provided any code, I suggest you look through these cURL options; your answer likely lies here.
CURLOPT_MAX_SEND_SPEED_LARGE
If an upload exceeds this speed (counted in bytes per second) on
cumulative average during the transfer, the transfer will pause to
keep the average rate less than or equal to the parameter value.
Defaults to unlimited speed.
So I suggest you check whether this option is being set anywhere (it defaults to unlimited speed, as mentioned).
Also check CURLOPT_LOW_SPEED_TIME and CURLOPT_LOW_SPEED_LIMIT.
Source
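For example, to rule out throttling you could set these options explicitly on the upload handle. A minimal sketch; the handle $ch and the threshold values are illustrative, not taken from the question:
// Explicitly disable any send-speed cap (0 means unlimited, the default)
curl_setopt($ch, CURLOPT_MAX_SEND_SPEED_LARGE, 0);
// Abort only if the upload drops below 1 KB/s for 60 consecutive seconds
curl_setopt($ch, CURLOPT_LOW_SPEED_LIMIT, 1024);
curl_setopt($ch, CURLOPT_LOW_SPEED_TIME, 60);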
Well it seems this can only be resolved with a fix in curl.
This is what I did:
Changed my upload script to create a lock file when a file is about to be uploaded
Opened multiple shells and executed the script
Or basically: if a single upload can't use the full bandwidth, I upload enough files at the same time to make up for the lost speed.
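For what it's worth, the same effect can be had in a single process with the curl_multi API instead of lock files and multiple shells. A sketch, assuming PHP >= 5.5 for CURLFile; the upload directory and endpoint are placeholders:
// Upload several files in parallel from one script
$mh = curl_multi_init();
$handles = [];
foreach (glob('/path/to/uploads/*') as $file) {      // placeholder directory
    $ch = curl_init('http://example.com/upload');    // placeholder endpoint
    curl_setopt($ch, CURLOPT_POST, true);
    curl_setopt($ch, CURLOPT_POSTFIELDS, ['file' => new CURLFile($file)]);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_multi_add_handle($mh, $ch);
    $handles[] = $ch;
}
do {
    curl_multi_exec($mh, $running);   // drive all transfers forward
    curl_multi_select($mh);           // wait for socket activity
} while ($running > 0);
foreach ($handles as $ch) {
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);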
Related
I have an API which makes a GET request using cURL to fetch a response from an external host.
The lookup time is 5 seconds, which is about 10-20 times slower than it should be. I also tried file_get_contents(<url>), and that also takes about 5 seconds.
Then I tried pinging the same host from my VPS (over SSH) and found that the DNS lookup time for ping was also 5 seconds, so I changed the server's DNS settings to use different DNS IPs (Google and one other), which solved the problem for the server. Now ping from the server is instant and resolves the domain name to an IP with no delay.
However, cURL requests still take the same time. 5 seconds for an API call is too slow. What can be changed here? What can I do to make this faster?
Here is the curl_getinfo() log:
[url] => https://www.hungrybulb.com/pony/relay.php/?object=user&user=4fd582133861b5c74b4dab7ba42934aa1&scene=home-tv+series
[content_type] => text/html; charset=UTF-8
[http_code] => 200
[header_size] => 234
[request_size] => 143
[filetime] => -1
[ssl_verify_result] => 0
[redirect_count] => 0
[total_time] => 5.254223
[namelookup_time] => 5.191469
[connect_time] => 5.192079
[pretransfer_time] => 5.246915
[size_upload] => 0
[size_download] => 1
[speed_download] => 0
[speed_upload] => 0
[download_content_length] => 1
[upload_content_length] => 0
[starttransfer_time] => 5.254214
[redirect_time] => 0
[certinfo] => Array
(
)
[request_header] => GET /pony/relay.php/?object=user&user=4fd582133861b5c74b4dab7ba42934aa1&scene=home-tv+series HTTP/1.1
Host: www.hungrybulb.com
Accept: */*
Update 1: The same code works quite fast when I run it on localhost. I assume that is because my local machine's DNS lookup doesn't take time.
The problem was with the DNS server my server was using as primary. I was using 8.8.8.8 as primary and 8.8.4.4 as secondary. I removed 8.8.8.8, made 8.8.4.4 the primary, and added another free DNS server as secondary.
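If changing the resolver isn't an option, the slow lookup can also be bypassed per handle with CURLOPT_RESOLVE (available with reasonably recent curl/PHP builds), which pre-seeds cURL's DNS cache. A minimal sketch; the IP address is a placeholder for the host's real address:
$ch = curl_init('https://www.hungrybulb.com/pony/relay.php/?object=user&user=...');
// Pre-seed the DNS cache so no resolver query is made for this host:port
curl_setopt($ch, CURLOPT_RESOLVE, ['www.hungrybulb.com:443:203.0.113.10']);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);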
I have run into an odd issue I don't fully understand. My web server, via PHP, calls a PHP API. The REST API routes the request to the appropriate function in the photos.php file based on the HTTP verb. GET, PUT, and DELETE requests all work fine; however, the POST request below returns a 404 Not Found. The problem is that it works just fine from the browser or when I call it from Postman. The file absolutely does exist and has the appropriate access. I don't have this problem with other POST requests either.
After about 5 hours fighting with this and reading 100 sites/articles, I came across an obscure reference to the stream_context_create option "ignore_errors" => true. The moment I added that line, the 404 disappeared and I could reach the API endpoint.
I don't understand how that's possible. The file was either there or it wasn't, so the 404 was a lie. I don't want to ignore errors just to get this to work. Any idea what's going on here?
$apiUrl = 'http://localhost/api/v1/user/john/photos';
// Set up the HTTP request
$options = ["http" => [
    "method"        => "POST",
    "header"        => "Content-Type: application/json",
    "ignore_errors" => true,
    "content"       => $data
]];
// Call the API
$apiResponse = file_get_contents($apiUrl, false, stream_context_create($options));
Here is the error I receive:
Warning: file_get_contents(http://localhost/api/v1/user/john/photos): failed to open stream: HTTP request failed! HTTP/1.1 404 Not Found
curl Info:
Array
(
[url] => http://localhost/api/v1/user/john/photos
[content_type] => text/html; charset=UTF-8
[http_code] => 404
[header_size] => 214
[request_size] => 147
[filetime] => -1
[ssl_verify_result] => 0
[redirect_count] => 0
[total_time] => 0.015
[namelookup_time] => 0
[connect_time] => 0
[pretransfer_time] => 0
[size_upload] => 0
[size_download] => 229
[speed_download] => 15266
[speed_upload] => 0
[download_content_length] => 229
[upload_content_length] => -1
[starttransfer_time] => 0.015
[redirect_time] => 0
[redirect_url] =>
[primary_ip] => 127.0.0.1
[certinfo] => Array
(
)
[primary_port] => 80
[local_ip] => 127.0.0.1
[local_port] => 53224
)
Header Info: HTTP/1.1 404 Not Found
Date: Fri, 28 Oct 2016 15:14:23 GMT
Server: Apache
X-Frame-Options: SAMEORIGIN
X-Powered-By: PHP/5.6.21
Content-Length: 229
Connection: close
Content-Type: text/html; charset=UTF-8
If you are seeking the answer to why file_get_contents was throwing an error without ignore_errors set to true, then perhaps these instructions will get you closer to the answer. Note that they assume the PHP error logs have not yet been reviewed (or have not been useful) and aim to help overcome that.
To locate your PHP error log, put this at the top of your script (just temporarily, for one load of the page):
phpinfo(); die();
Now, after that page loads and you see your PHP info, search for error_log and locate that directory on your host.
If that value is empty, you may have log_errors turned off, or the errors may be going to your Apache error log. If that is the case, try to find the Apache error log.
With the error log available to you, replace your phpinfo(); die(); with these two calls, turn off ignore_errors, and run the file again:
error_reporting(E_ALL); ini_set("display_errors", 1);
Using the error log, can you find notices/warnings/errors related to the file_get_contents call and post them here if they do not clear up the question?
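As a side note, "ignore_errors" => true does not suppress the 404 itself; it only tells the HTTP stream wrapper to return the response body instead of failing on a non-2xx status, and the status line remains available via $http_response_header. A minimal sketch reusing the question's $apiUrl and $data:
$context = stream_context_create(["http" => [
    "method"        => "POST",
    "header"        => "Content-Type: application/json",
    "ignore_errors" => true,   // return the body even on 4xx/5xx responses
    "content"       => $data,
]]);
$body = file_get_contents($apiUrl, false, $context);
echo $http_response_header[0]; // e.g. "HTTP/1.1 404 Not Found"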
** EDIT **
Okay: using cURL, get the complete response and request headers and update your question with the output, please.
$ch = curl_init();
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); // return the response instead of printing it
curl_setopt($ch, CURLOPT_VERBOSE, 1);        // log transfer details
curl_setopt($ch, CURLOPT_HEADER, 1);         // include response headers in the output
// ...
$response = curl_exec($ch);
// Then, after your curl_exec call:
$header_size = curl_getinfo($ch, CURLINFO_HEADER_SIZE);
$header = substr($response, 0, $header_size);
$body = substr($response, $header_size);
I ran into some trouble using the Mandrill PHP API:
SSL certificate problem: unable to get local issuer certificate
I have dug up some information on the net about configuring the cacerts file for cURL.
So I have the current CA extract from http://curl.haxx.se/docs/caextract.html and have configured a (valid) path to that file in php.ini:
curl.cainfo=some\local\path\\certs\cacert.pem
Now I am testing multiple HTTPS websites (source of the test here):
$this->nxs_cURLTest("https://mandrillapp.com/api/1.0/users/ping.json", "TestApiCALL", "Mandrill API CERT");
$this->nxs_cURLTest("https://www.google.com/intl/en/contact/", "HTTPS to Google", "Mountain View, CA");
$this->nxs_cURLTest("http://www.google.com/intl/en/contact/", "HTTP to Google", "Mountain View, CA");
$this->nxs_cURLTest("https://www.facebook.com/", "HTTPS to Facebook", 'id="facebook"');
$this->nxs_cURLTest("https://www.linkedin.com/", "HTTPS to LinkedIn", 'link rel="canonical" href="https://www.linkedin.com/"');
$this->nxs_cURLTest("https://twitter.com/", "HTTPS to Twitter", 'link rel="canonical" href="https://twitter.com/"');
$this->nxs_cURLTest("https://www.pinterest.com/", "HTTPS to Pinterest", 'content="Pinterest"');
$this->nxs_cURLTest("https://www.tumblr.com/", "HTTPS to Tumblr", 'content="Tumblr"');
and got inconsistent results like:
Testing ... https://mandrillapp.com/api/1.0/users/ping.json - https://mandrillapp.com/api/1.0/users/ping.json
....TestApiCALL - Problem
SSL certificate problem: unable to get local issuer certificate
Array
(
[url] => https://mandrillapp.com/api/1.0/users/ping.json
[content_type] =>
[http_code] => 0
[header_size] => 0
[request_size] => 0
[filetime] => -1
[ssl_verify_result] => 0
[redirect_count] => 0
[total_time] => 0.14
[namelookup_time] => 0
[connect_time] => 0.062
[pretransfer_time] => 0
[size_upload] => 0
[size_download] => 0
[speed_download] => 0
[speed_upload] => 0
[download_content_length] => -1
[upload_content_length] => -1
[starttransfer_time] => 0
[redirect_time] => 0
[certinfo] => Array
(
)
[primary_ip] => 54.195.231.78
[primary_port] => 443
[local_ip] => 192.168.2.142
[local_port] => 63719
[redirect_url] =>
)
There is a problem with cURL. You need to contact your server admin or hosting provider.Testing ... https://www.google.com/intl/en/contact/ - https://www.google.com/intl/en/contact/
....HTTPS to Google - OK
Testing ... http://www.google.com/intl/en/contact/ - http://www.google.com/intl/en/contact/
....HTTP to Google - OK
Testing ... https://www.facebook.com/ - https://www.facebook.com/
....HTTPS to Facebook - OK
Testing ... https://www.linkedin.com/ - https://www.linkedin.com/
....HTTPS to LinkedIn - OK
Testing ... https://twitter.com/ - https://twitter.com/
....HTTPS to Twitter - OK
Testing ... https://www.pinterest.com/ - https://www.pinterest.com/
....HTTPS to Pinterest - Problem
SSL certificate problem: unable to get local issuer certificate
Array
(
[url] => https://www.pinterest.com/
[content_type] =>
[http_code] => 0
[header_size] => 0
[request_size] => 0
[filetime] => -1
[ssl_verify_result] => 0
[redirect_count] => 0
[total_time] => 0.078
[namelookup_time] => 0
[connect_time] => 0.016
[pretransfer_time] => 0
[size_upload] => 0
[size_download] => 0
[speed_download] => 0
[speed_upload] => 0
[download_content_length] => -1
[upload_content_length] => -1
[starttransfer_time] => 0
[redirect_time] => 0
[certinfo] => Array
(
)
[primary_ip] => 23.65.117.124
[primary_port] => 443
[local_ip] => 192.168.2.142
[local_port] => 63726
[redirect_url] =>
)
There is a problem with cURL. You need to contact your server admin or hosting provider.Testing ... https://www.tumblr.com/ - https://www.tumblr.com/
....HTTPS to Tumblr - OK
As you can see, overall the SSL configuration is working, but for some reason two calls:
https://www.pinterest.com/
https://mandrillapp.com/api/1.0/users/ping.json
give the same error. The links above open just fine in the browser, and their certificates with CA chains are valid. What could be the reason here?
EDIT:
I spent about 6 hours trying to fix this, and then found a clue about what was going on about 2 minutes after posting the question on SO.
I read the info on http://curl.haxx.se/docs/caextract.html about the extracts provided there one more time. What caught my eye (now, but not the 100 times I had read it before):
RSA-1024 removed
Around early September 2014, Mozilla removed the trust bits from the
certs in their CA bundle that were still using RSA 1024 bit keys. This
may lead to TLS libraries having a hard time to verify some sites if
the library in question doesn't properly support "path discovery" as
per RFC 4158. (That includes OpenSSL and GnuTLS.)
The last CA bundle we converted from before that cleanup: an older
ca-bundle from github.
So I took a shot and tried the bundle from before the cleanup - all tests are passing now!
So another question: is it out-of-date software on my machine (OpenSSL, PHP, cURL, etc.), or do the sites that were failing the tests have an out-of-date certificate format per RFC 4158, and is that what is causing the trouble?
Probably none of these. The removed certificates were old root CAs with only 1024-bit keys. These certificates have been replaced with newer ones, but not in the same place, which means you often have multiple possible trust paths:
host-cert -> intermediate.1 -> 2048bit intermediate.2 -> 1024bit root-CA
host-cert -> intermediate.1 -> 2048bit new root
The public key of the 2048-bit new root is the same as that of the 2048-bit intermediate.2, so the signature on intermediate.1 still matches and chain validation can succeed. But while most TLS stacks try to find the best chain, OpenSSL insists on the longest chain. This means that if the server sends the chain
host-cert -> intermediate.1 -> 2048bit intermediate.2
then OpenSSL will insist on finding a root CA that signed intermediate.2, even if it has a root CA that signed intermediate.1 (i.e. the 2048-bit new root). If the old 1024-bit root CA is no longer in the trust store, validation will fail. If instead the server sends only
host-cert -> intermediate.1
then validation will succeed with the 2048-bit new root. But lots of servers still send the longer chain to maintain compatibility with older clients that don't have the 2048-bit new root.
All very ugly; the bug was reported in 2012 and again in 2015. OpenSSL 1.0.2 (freshly released) at least has an option, X509_V_FLAG_TRUSTED_FIRST, to work around the problem, and there are changes in OpenSSL git which seem to fix the issue, but it is not clear if they will ever be backported to 1.0.2 or lower :(
For now, you had better just keep the old 1024-bit certificates in the trust store.
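If you'd rather not switch the bundle globally in php.ini, the pre-cleanup bundle can also be pinned per handle. A minimal sketch; the path is a placeholder for wherever you saved the older bundle:
// Use the pre-cleanup CA bundle for this handle only, leaving php.ini untouched
curl_setopt($ch, CURLOPT_CAINFO, '/path/to/ca-bundle-before-cleanup.crt');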
I'm using the cURL library (with NSS) in PHP to connect to my other server. Everything was fine until last week, when the destination server stopped supporting SSLv3 due to the POODLE vulnerability (it's behind CloudFlare, by the way). Now I'm trying to make the connection using TLS, but I'm still getting an "SSL connect error".
Here is the sample code I'm using:
$ch = curl_init();
curl_setopt_array( $ch, array(
CURLOPT_URL => 'https://www.lumiart.cz',
CURLOPT_RETURNTRANSFER => true,
CURLOPT_SSLVERSION => 1,
CURLOPT_SSL_VERIFYPEER => false,
CURLOPT_VERBOSE => true
) );
$output = curl_exec( $ch );
echo $output;
print_r( curl_getinfo( $ch ) );
echo 'error:' . curl_error( $ch );
curl_close($ch);
From my understanding, setting CURLOPT_SSLVERSION to 1 (CURL_SSLVERSION_TLSv1) should force the connection to use TLS.
Note: I have CURLOPT_SSL_VERIFYPEER => false just for debugging, and I don't mean to leave it there once I figure this problem out.
This is output:
Array
(
[url] => https://www.lumiart.cz
[content_type] =>
[http_code] => 0
[header_size] => 0
[request_size] => 0
[filetime] => -1
[ssl_verify_result] => 0
[redirect_count] => 0
[total_time] => 0
[namelookup_time] => 2.3E-5
[connect_time] => 0.005777
[pretransfer_time] => 0
[size_upload] => 0
[size_download] => 0
[speed_download] => 0
[speed_upload] => 0
[download_content_length] => -1
[upload_content_length] => -1
[starttransfer_time] => 0
[redirect_time] => 0
[certinfo] => Array
(
)
[primary_ip] => 2400:cb00:2048:1::681c:86f
[redirect_url] =>
)
error:SSL connect error
All of this is at a shared hosting provider, so I can't change the php.ini configuration or update any components. All I have is phpinfo(). I've checked these component versions for TLS support, and it should be fine. Here is an excerpt of phpinfo():
PHP Version 5.4.32
System Linux wl42-f262 2.6.32-431.5.1.el6.x86_64 #1 SMP Wed Feb 12 00:41:43 UTC 2014 x86_64
curl:
cURL support enabled
cURL Information 7.19.7
Age 3
Features
AsynchDNS No
Debug No
GSS-Negotiate Yes
IDN Yes
IPv6 Yes
Largefile Yes
NTLM Yes
SPNEGO No
SSL Yes
SSPI No
krb4 No
libz Yes
CharConv No
Protocols tftp, ftp, telnet, dict, ldap, ldaps, http, file, https, ftps, scp, sftp
Host x86_64-redhat-linux-gnu
SSL Version NSS/3.15.3
ZLib Version 1.2.3
libSSH Version libssh2/1.4.2
I think the problem is the use of SSLv3 instead of TLS, but I'm not 100% sure. All I'm getting is "SSL connect error", and I don't know how to find out which SSL version was used to connect.
Is there a way to check which SSL version is used for the connection? Or am I missing something?
That's an interesting problem.
If you query SSL Labs for this site, you will see that it only supports various ECDHE-ECDSA-* ciphers and no other ciphers. But in the version history of curl you will find a bug with ECC ciphers and the NSS library (which you use), fixed only in curl 7.36: "nss: allow to use ECC ciphers if NSS implements them".
Since you are using curl 7.19.7, your curl is too old to use the necessary ciphers together with the NSS library. This means you need to upgrade your curl library.
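As for the asker's sub-question about checking which SSL/TLS version gets negotiated: cURL's verbose log includes the handshake details and can be captured to a stream. A minimal sketch, assuming an already-configured handle $ch:
// Capture cURL's verbose output, which includes the negotiated protocol
$log = fopen('php://temp', 'w+');
curl_setopt($ch, CURLOPT_VERBOSE, true);
curl_setopt($ch, CURLOPT_STDERR, $log);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_exec($ch);
rewind($log);
echo stream_get_contents($log); // look for the "SSL connection using ..." line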
I have Curl 7.21.7 and PHP 5.4.34, and this seemed to do the trick for me:
curl_setopt($curl_request, CURLOPT_SSLVERSION, CURL_SSLVERSION_TLSv1);
More info here, although it doesn't say when CURL_SSLVERSION_TLSv1 was introduced.
The answer for me was to use an integer value instead of a string (a bare, undefined constant name like CURLOPT_SSLVERSION_TLSv1_2 is silently treated as a string by older PHP), i.e.:
Change:
curl_setopt($ch, CURLOPT_SSLVERSION_TLSv1_2);
To:
curl_setopt($ch, CURLOPT_SSLVERSION, 6);
Or for tlsv1_1:
curl_setopt($ch, CURLOPT_SSLVERSION, 5);
Here's the full list:
CURL_SSLVERSION_DEFAULT (0)
CURL_SSLVERSION_TLSv1 (1)
CURL_SSLVERSION_SSLv2 (2)
CURL_SSLVERSION_SSLv3 (3)
CURL_SSLVERSION_TLSv1_0 (4)
CURL_SSLVERSION_TLSv1_1 (5)
CURL_SSLVERSION_TLSv1_2 (6)
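On builds where the newer CURL_SSLVERSION_* constants are missing, one way to keep the code readable is a guarded define mapping to the raw integers above (a sketch; the guard is an assumption, not part of the original answer):
// Older PHP builds may not define the newer constants; map to the raw value
if (!defined('CURL_SSLVERSION_TLSv1_2')) {
    define('CURL_SSLVERSION_TLSv1_2', 6);
}
curl_setopt($ch, CURLOPT_SSLVERSION, CURL_SSLVERSION_TLSv1_2);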
I'm running the following by the way:
curl-7.19.7-46.el6.x86_64
nss-3.21.0-0.3.el6_7.x86_64
A duplicate answer to "SSL error can not change to TLS" proposed:
Try adding CURLOPT_SSL_CIPHER_LIST => 'TLSv1' to your PPHttpConfig.php.
(This was also discussed in "Update PHP cURL request from SSLv3 to TLS..?".)
As was usefully pointed out in the comments, this applies to the OpenSSL cURL backend, not to NSS.
When executing a basic cURL request in PHP to a plain-text web page (http://whatismyip.org/), it takes more than 10 seconds to respond.
Looking at the info from cURL, it tells me that the namelookup_time is 10 seconds. I see the exact same result when executing curl from the command line (Terminal).
Why does the name lookup take so long? From what I've read, it's most likely something related to the server/computer the PHP file is hosted on.
Here's my code:
$ch = curl_init();
curl_setopt( $ch, CURLOPT_URL, "whatismyip.org");
curl_exec( $ch );
$ci = curl_getinfo($ch);
print_r($ci);
Here's the info:
[url] => HTTP://whatismyip.org
[content_type] => text/plain
[http_code] => 200
[header_size] => 45
[request_size] => 53
[filetime] => -1
[ssl_verify_result] => 0
[redirect_count] => 0
[total_time] => 10.549943
[namelookup_time] => 10.100938
[connect_time] => 10.300077
[pretransfer_time] => 10.300079
[size_upload] => 0
[size_download] => 14
[speed_download] => 1
[speed_upload] => 0
[download_content_length] => -1
[upload_content_length] => 0
[starttransfer_time] => 10.549919
[redirect_time] => 0
[certinfo] => Array ( )
curl_setopt($ch, CURLOPT_IPRESOLVE, CURL_IPRESOLVE_V4);
Solved the problem for me. IPv6 resolution was hitting an obscure bug.
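In context, around the asker's snippet, that might look like this (a minimal sketch):
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, 'http://whatismyip.org/');
// Resolve over IPv4 only, skipping the IPv6 (AAAA) lookup entirely
curl_setopt($ch, CURLOPT_IPRESOLVE, CURL_IPRESOLVE_V4);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
echo curl_exec($ch);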
I can't reproduce this using exactly the code above - namelookup_time on my (Windows) machine comes back as 0, with total_time being ~0.5. namelookup_time is the time the OS takes to resolve the DNS name for whatismyip.org, so you need to examine your server's DNS configuration.
At a guess, your configured primary DNS server doesn't exist or doesn't work, and the timeout is 10 seconds. This means the OS waits 10 seconds trying to contact the primary DNS and, when this times out, falls through to the secondary, which works.
What are your configured DNS server(s)? Try using 8.8.8.8 (Google) as your primary DNS, if needed.
As a side note, it is best to supply a full URL to cURL, so use http://whatismyip.org/ instead of just whatismyip.org - although this does not seem to be the cause of this specific problem.
Probably one of your DNS servers isn't replying in a timely fashion. Try this command for each IP listed in /etc/resolv.conf:
dig @IP.TO.DNS.SERVER google.com
If I am correct, one of your DNS servers is not responding.