I'm running a cURL request against an eXist database through PHP. The dataset is very large, so the database consistently takes a long time to return an XML response. To handle that, we set up a cURL request with what is supposed to be a long timeout.
$ch = curl_init();
// CURLOPT_HTTPHEADER expects an indexed array of "Name: value" strings
$headers = array(
    "Content-Length: " . strlen($postString),
    "User-Agent: Curl/1.0",
);
curl_setopt($ch, CURLOPT_URL, $requestUrl);
curl_setopt($ch, CURLOPT_HEADER, false);
curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
curl_setopt($ch, CURLOPT_POSTFIELDS, $postString); // the body implied by Content-Length above
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_USERPWD, 'admin:');
curl_setopt($ch, CURLOPT_TIMEOUT, 1000);
$response = curl_exec($ch);
curl_close($ch);
However, the cURL request consistently ends before the request is completed (the full request takes under 1000 seconds when made via a browser). Does anyone know if this is the proper way to set timeouts in cURL?
See documentation: http://www.php.net/manual/en/function.curl-setopt.php
CURLOPT_CONNECTTIMEOUT - The number of seconds to wait while trying to connect. Use 0 to wait indefinitely.
CURLOPT_TIMEOUT - The maximum number of seconds to allow cURL functions to execute.
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 0);
curl_setopt($ch, CURLOPT_TIMEOUT, 400); //timeout in seconds
Also, don't forget to enlarge the execution time of the PHP script itself:
set_time_limit(0); // zero means no limit
Hmm, it looks to me like CURLOPT_TIMEOUT defines the amount of time that any cURL function is allowed to take to execute. I think you should actually be looking at CURLOPT_CONNECTTIMEOUT instead, since that tells cURL the maximum amount of time to wait for the connection to complete.
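For illustration, a minimal sketch combining the two options (the URL and values are placeholders, not from the question):
$ch = curl_init('http://example.com/slow-endpoint'); // placeholder URL
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
// Fail fast if the connection itself takes more than 10 seconds...
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 10);
// ...but allow the full transfer (connect + response) up to 15 minutes.
curl_setopt($ch, CURLOPT_TIMEOUT, 900);
$response = curl_exec($ch);
curl_close($ch);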
There is a quirk with this that might be relevant for some people... From the PHP docs comments.
If you want cURL to timeout in less than one second, you can use CURLOPT_TIMEOUT_MS, although there is a bug/"feature" on "Unix-like systems" that causes libcurl to timeout immediately if the value is < 1000 ms with the error "cURL Error (28): Timeout was reached". The explanation for this behavior is:
"If libcurl is built to use the standard system name resolver, that portion of the transfer will still use full-second resolution for timeouts with a minimum timeout allowed of one second."
What this means to PHP developers is "You can't use this function without testing it first, because you can't tell if libcurl is using the standard system name resolver (but you can be pretty sure it is)"
The problem is that on (Li|U)nix, when libcurl uses the standard name resolver, a SIGALRM is raised during name resolution which libcurl thinks is the timeout alarm.
The solution is to disable signals using CURLOPT_NOSIGNAL. Here's an example script that requests itself causing a 10-second delay so you can test timeouts:
if (!isset($_GET['foo'])) {
    // Client
    $ch = curl_init('http://localhost/test/test_timeout.php?foo=bar');
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_NOSIGNAL, 1);
    curl_setopt($ch, CURLOPT_TIMEOUT_MS, 200);
    $data = curl_exec($ch);
    $curl_errno = curl_errno($ch);
    $curl_error = curl_error($ch);
    curl_close($ch);

    if ($curl_errno > 0) {
        echo "cURL Error ($curl_errno): $curl_error\n";
    } else {
        echo "Data received: $data\n";
    }
} else {
    // Server
    sleep(10);
    echo "Done.";
}
From http://www.php.net/manual/en/function.curl-setopt.php#104597
Your code sets the timeout to 1000 seconds. For milliseconds, use CURLOPT_TIMEOUT_MS.
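For example (a sketch; note the CURLOPT_NOSIGNAL caveat discussed above if you go below one second):
curl_setopt($ch, CURLOPT_NOSIGNAL, 1);      // avoid the SIGALRM issue on *nix
curl_setopt($ch, CURLOPT_TIMEOUT_MS, 2500); // 2500 ms = 2.5 seconds overall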
You will need to make sure about every timeout between you and the file; in this case, PHP and cURL.
To tell Curl to never timeout when a transfer is still active, you need to set CURLOPT_TIMEOUT to 0, instead of 1000.
curl_setopt($ch, CURLOPT_TIMEOUT, 0);
In PHP, again, you must remove time limits, or PHP itself (after 30 seconds by default) will kill the script along with cURL's request. This alone should fix your issue.
In addition, if you require data integrity, you could add a layer of security by using ignore_user_abort:
# The maximum execution time, in seconds. If set to zero, no time limit is imposed.
set_time_limit(0);
# Make sure to keep the script alive when a client disconnects.
ignore_user_abort(true);
A client disconnection will interrupt the execution of the script and possibly damage data, e.g. a non-transactional database query, building a config file, etc. In your case it would mean downloading a partial file... and you might, or might not, care about this.
Answering this old question because this thread is at the top of search-engine results for CURL_TIMEOUT.
You can't run the request from a browser: it will time out waiting for the server running the cURL request to respond. The browser is probably timing out after 1-2 minutes, the default network timeout.
You need to run it from the command line/terminal.
If you are using PHP as a fastCGI application then make sure you check the fastCGI timeout settings.
See: PHP curl put 500 error
Here I explain the details of my question. First, check the code below:
$ch = curl_init('http://example123.com');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS, $xml);
$result = @curl_exec($ch); // @ suppresses warnings on failure
Now my question is: if "http://example123.com" is not valid, or there is no such URL, then what is the problem?
I have a page containing the code above. When the code executes, the page takes too much time to load, but if I comment out those five lines, the page loads much faster.
Can anybody tell me the reason the page executes so slowly?
Thanks, Sanjib
Your script waits for a response (which may take 60 seconds, the default_socket_timeout).
You should set curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1); to make the script follow the redirect from http://example123.com/ to http://ww38.example123.com/, the way a browser does.
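If you do follow redirects, it is worth capping them; a small sketch (the limit of 5 is an arbitrary choice):
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1); // follow Location: headers, like a browser
curl_setopt($ch, CURLOPT_MAXREDIRS, 5);      // but give up after 5 redirects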
When a cURL request encounters an invalid URI, it waits until the default connect timeout is reached, and this makes the page load slower.
The default connect timeout is set in lib/connect.h of the libcurl source (on a Linux server); you can change it there:
#define DEFAULT_CONNECT_TIMEOUT 300000 /* milliseconds == five minutes */
Or you can explicitly set this in your code:
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 100); // or curl_setopt($ch, CURLOPT_CONNECTTIMEOUT_MS, 0);
curl_setopt($ch, CURLOPT_TIMEOUT, 400); // timeout in seconds
CURLOPT_CONNECTTIMEOUT : The number of seconds to wait while trying to connect. Use 0 to wait indefinitely.
CURLOPT_TIMEOUT : The maximum number of seconds to allow cURL functions to execute.
(If you are using PHP as a fastCGI application, make sure you check the fastCGI timeout settings.)
So, I have an issue: I need to make many HTTP requests (hundreds, even thousands) using PHP, and these requests need to be run through a proxy to protect the server's info/IP.
The problem is that making these requests with plain PHP cURL works perfectly fine but takes a long time. If I use PHP curl_multi, it takes a lot less time. But if I introduce the proxy, curl_multi takes significantly longer than not using curl_multi at all (one cURL request after another).
$ch = curl_init();
curl_setopt($ch, CURLOPT_HEADER, TRUE);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, FALSE);
curl_setopt($ch, CURLOPT_COOKIESESSION, TRUE);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, TRUE);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
curl_setopt($ch, CURLOPT_URL, $_REQUEST['url']);

// Proxy
$proxyStr = 'user:pass@123.123.123.123:80'; // username:password@ip:port
curl_setopt($ch, CURLOPT_PROXY, $proxyStr);
curl_setopt($ch, CURLOPT_PROXYTYPE, CURLPROXY_HTTP);
curl_setopt($ch, CURLOPT_HTTPPROXYTUNNEL, 1);

$req = "something= somethingelse \r\n"; // request body that needs to be sent
curl_setopt($ch, CURLOPT_POSTFIELDS, $req); // body goes in POSTFIELDS; CURLOPT_CUSTOMREQUEST only changes the HTTP method
$result = curl_exec($ch);
//This is some sample code, obviously the curl_multi is using curl_multi_init() and curl_multi_exec()
//Here is a sample code for curl_multi, just introduce the proxy code in the middle of it: http://www.phpied.com/simultaneuos-http-requests-in-php-with-curl/
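For reference, here is a minimal curl_multi sketch with the proxy options folded in (the URLs and proxy string are placeholders, not the real values):
$urls = array('http://example.com/a', 'http://example.com/b'); // placeholder URLs
$mh = curl_multi_init();
$handles = array();
foreach ($urls as $url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_PROXY, 'user:pass@123.123.123.123:80'); // placeholder proxy
    curl_setopt($ch, CURLOPT_PROXYTYPE, CURLPROXY_HTTP);
    curl_setopt($ch, CURLOPT_HTTPPROXYTUNNEL, 1);
    curl_multi_add_handle($mh, $ch);
    $handles[] = $ch;
}
// Drive all transfers until every handle has finished.
$running = null;
do {
    curl_multi_exec($mh, $running);
    curl_multi_select($mh); // block briefly instead of busy-waiting
} while ($running > 0);
$results = array();
foreach ($handles as $ch) {
    $results[] = curl_multi_getcontent($ch);
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);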
My question is: what am I doing wrong? curl_multi with the proxy takes far longer regardless of whether I run 2 at a time or 5 at a time, while running them 1 at a time through curl_multi is quicker (not quick enough, but a lot quicker than 2 or more).
As a poor man's multithreading, I added a worker script to the server that takes an HTTP POST and does a single cURL call on the data (the POST contains the proxy, so this worker uses the proxy). From the main server, I use curl_multi to POST details to the worker script (about 2-5 requests at a time). This method works better, but not by a huge amount. E.g. if I use curl_multi to send 10 requests 1 at a time to the worker, it finishes in ~10 seconds; if I send 10 requests 5 at a time, it finishes in ~8-9 seconds. So it's definitely faster, but not as fast as I need it to be.
Does anyone have any ideas/suggestions on why this is the case?
Is there something about PHP curl_multi and proxies that I'm missing? I assumed that multiple PHP requests coming into a server would execute in parallel (which is why I was referring to curl_multi'ing into the worker script that does the proxying as a poor man's multithreading). For anyone suggesting pthreads: would that actually solve the problem? It is not practical for me to install pthreads, and from what I can see, I doubt it would help.
I am calling a series of links using file_get_contents() in a loop. Each link may take more than 15 minutes to process. Now, I'm wondering whether PHP's file_get_contents() has a timeout period.
If yes, a call will time out and move on to the next link, and I don't want the next link called before the prior one finishes.
So, please tell me whether file_get_contents() has a timeout period. The file containing the file_get_contents() calls has set_time_limit() set to zero (unlimited).
The default timeout is defined by default_socket_timeout ini-setting, which is 60 seconds. You can also change it on the fly:
ini_set('default_socket_timeout', 900); // 900 Seconds = 15 Minutes
Another way to set a timeout, would be to use stream_context_create to set the timeout as HTTP context options of the HTTP stream wrapper in use:
$ctx = stream_context_create(array(
    'http' => array(
        'timeout' => 1200, // 1200 seconds = 20 minutes
    )
));
echo file_get_contents('http://example.com/', false, $ctx);
As @diyism mentioned, "default_socket_timeout, stream_set_timeout, and stream_context_create timeout are all the timeout of every line read/write, not the whole connection timeout." And the top answer by @stewe has failed me.
As an alternative to using file_get_contents, you can always use curl with a timeout.
So here's working code for calling links:
$url = 'http://example.com/';
$ch = curl_init();
$timeout = 5;
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_TIMEOUT, $timeout);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, $timeout);
$result = curl_exec($ch);
curl_close($ch);
echo $result;
Yes! By passing a stream context in the third parameter:
Here with a timeout of 1s:
file_get_contents("https://abcedef.com", 0, stream_context_create(["http"=>["timeout"=>1]]));
Source in comment section of https://www.php.net/manual/en/function.file-get-contents.php
HTTP context options (combined in the sketch after these lists):
method
header
user_agent
content
request_fulluri
follow_location
max_redirects
protocol_version
timeout
Non HTTP stream contexts
Socket
FTP
SSL
CURL
Phar
Context (notifications callback)
Zip
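Combining a few of the HTTP options above into one context, as a sketch (the header and values are illustrative):
$ctx = stream_context_create(array(
    'http' => array(
        'method'  => 'GET',
        'header'  => "Accept: application/xml\r\n", // illustrative header
        'timeout' => 30, // seconds (a per-read timeout, per @diyism's caveat quoted earlier)
    ),
));
$data = file_get_contents('http://example.com/', false, $ctx);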
It is worth noting that if changing default_socket_timeout on the fly, it might be useful to restore its value after your file_get_contents call:
$default_socket_timeout = ini_get('default_socket_timeout');
....
ini_set('default_socket_timeout', 10);
file_get_contents($url);
...
ini_set('default_socket_timeout', $default_socket_timeout);
For me, it worked when I changed php.ini on my host:
; Default timeout for socket based streams (seconds)
default_socket_timeout = 300
For prototyping, using curl from the shell with the -m (--max-time) parameter lets you pass a maximum time in seconds (fractional values are allowed), and the timeout is always effective in every case: the connection never initiated, an error 404 or 500, a bad URL, or the data not retrieved in full within the allowed range. PHP won't ever hang.
Just don't pass unsanitized user data in the shell call.
system("curl -m 50 -X GET 'https://api.kraken.com/0/public/OHLC?pair=LTCUSDT&interval=60' -H 'accept: application/json' > data.json");
// This data had been refreshed in less than 50 seconds (the -m limit)
var_dump(json_decode(file_get_contents("data.json"),true));
We're having problems with an api we are using.
Here is the code we're using (naming no names on the api front)
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, 'http://apiurl.com/whatever/api/we/call');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$ch_output = curl_exec($ch);
curl_close($ch);
This response times out, but not for ages, which hideously slows down our web app; further code then breaks because of the bad return value. That breakage I can fix, but I don't know how to fix the response timeout itself. Is there any way to quickly see if a URL is responding (e.g. something like ping in a terminal) before trying a cURL request?
Thank you.
Do you mean using curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, NUMERIC_TIMEOUT_VALUE); to set the timeout?
Your best option would be to set the timeout on curl to a more acceptable level. There are several timeout options available for DNS lookup, connect timeout, transfer timeout, etc. More information is available here http://php.net/manual/en/function.curl-setopt.php
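A sketch setting a few of those timeout options (the values are placeholders):
curl_setopt($ch, CURLOPT_DNS_CACHE_TIMEOUT, 120); // keep DNS results for 2 minutes
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 5);      // abort if connecting takes over 5 s
curl_setopt($ch, CURLOPT_TIMEOUT, 30);            // abort if the whole call takes over 30 s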
Using PHP's Curl how do I know that the page is not responding so as to grab another one?
Use the CURLOPT_CONNECTTIMEOUT option:
// Wait two seconds before bailing
curl_setopt($curl, CURLOPT_CONNECTTIMEOUT, 2);
There's also CURLOPT_TIMEOUT, which works for an entire request call (including DNS fetching and reading the data).
To check whether a call timed out, you may be able to check its return value; if that is not conclusive, the cURL handle's curl_errno() is set, which you can compare to CURLE_OPERATION_TIMEDOUT (or just CURLE_OK).
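A sketch of that check (assuming $curl is an initialized handle with CURLOPT_RETURNTRANSFER set; note PHP spells the constant CURLE_OPERATION_TIMEOUTED):
$data = curl_exec($curl);
if ($data === false && curl_errno($curl) === CURLE_OPERATION_TIMEOUTED) {
    // error 28: the request timed out, so move on and grab another page
}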