I have a foreach doing cURL requests, but some requests take more than 10 seconds to respond, and I want to skip those. How is this done? I'm trying to set a maximum of 5 seconds per request; if it doesn't respond in time, continue to the next request.
I tried setting a timeout in cURL:
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 2);
or
curl_setopt($ch, CURLOPT_TIMEOUT, 400);
but I don't think either of those fixes my problem.
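Roughly what I'm trying to do (the $urls array here is just a placeholder for my real list):
foreach ($urls as $url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 2); // max 2s to establish the connection
    curl_setopt($ch, CURLOPT_TIMEOUT, 5);        // max 5s for the whole request
    $response = curl_exec($ch);
    if ($response === false) {
        // errno 28 (CURLE_OPERATION_TIMEDOUT) means the request timed out
        curl_close($ch);
        continue; // skip to the next request
    }
    curl_close($ch);
    // ... process $response ...
}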
I currently have a Laravel application which is doing a cURL request from one route to another route within the same application. My cURL looks like this:
//LOGGING THAT A CURL CALL IS ABOUT TO BE MADE
$url = env('APP_URL') . '/tests/add/results';
$ch = curl_init();
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); // return the response as a string
curl_setopt($ch, CURLOPT_HEADER, FALSE);
curl_setopt($ch, CURLINFO_HEADER_OUT, FALSE);
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_POST, TRUE);
curl_setopt($ch, CURLOPT_POSTFIELDS, $test_post_data);
curl_setopt($ch, CURLOPT_TIMEOUT, 10);
$response = curl_exec($ch);
In the route that the POST is being sent to, the first thing I log is the request data:
//LOGING THAT I RECEIVED THE CURL CALL in receiving function
I'm noticing that the request data gets logged exactly the timeout duration later, meaning the request is actually being sent 10 seconds after the initial call.
In my logs i'll see something like:
10:10:10 - LOGGING CURL CALL
10:10:20 - Receiving CURL call
If I change the timeout to 30, then the log shows the CURL call being received 30 seconds later.
Does anyone have any idea why this may be happening?
The response from curl_exec() always comes back as false.
I did the following to make the POST request work: instead of calling the route via cURL, I did a POST directly to the route's controller method:
$testController = new TestsController;
$test_data_request = new \Illuminate\Http\Request();
$test_data_request->setMethod('POST');
$test_data_request->request->add( $test_post_data );
$testId = $testController->addTestResults($test_data_request);
You've not provided enough information, but I think the problem will be one or more of the following:
The web server at http://127.0.0.1:8000 is not running
The script at http://127.0.0.1:8000/tests/add/results runs too long and the request times out before it completes
The requested path returns redirect headers and creates an infinite loop
The response is too big to finish the data transfer in thirty seconds (very weird if on localhost)
Try some more debugging and provide more information on this, so we may help you.
PS: First I would try to capture the outgoing headers (curl_setopt($ch, CURLINFO_HEADER_OUT, true); and then read them back with curl_getinfo($ch, CURLINFO_HEADER_OUT)) and print out the response (var_dump($response);) - or save it to some file :)
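For example, a minimal debugging sketch (reusing the URL from your question; the rest is plain cURL):
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, env('APP_URL') . '/tests/add/results');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLINFO_HEADER_OUT, true); // record the request headers actually sent
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, $test_post_data);
$response = curl_exec($ch);
if ($response === false) {
    var_dump(curl_errno($ch), curl_error($ch)); // why curl_exec() returned false
}
var_dump(curl_getinfo($ch, CURLINFO_HEADER_OUT)); // the outgoing request headers
var_dump($response);
curl_close($ch);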
I have to call an external script in which I make a first cURL call to get data, which takes about 2-3 minutes. During this time I need to make another external cURL call to get the progress of the first call. The issue is that my next call waits until the reply to the first cURL call comes back. I also checked curl_multi, but that isn't helping me either, as I want to make many calls while the first call is in progress. Can anyone help me solve this, please?
I suppose there is no need to make a second call to track the cURL progress. You can achieve the same thing by using the cURL option CURLOPT_PROGRESSFUNCTION with a callback function.
The callback takes 5 arguments:
cURL resource
Total number of bytes expected to be downloaded
Number of bytes downloaded so far
Total number of bytes expected to be uploaded
Number of bytes uploaded so far
In the callback method you can calculate the percentage downloaded/uploaded. An example is given below:
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "https://stackoverflow.com");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_PROGRESSFUNCTION, 'progress'); // called repeatedly during the transfer
curl_setopt($ch, CURLOPT_NOPROGRESS, false);            // must be false, or the callback is never fired
curl_setopt($ch, CURLOPT_HEADER, 0);
$html = curl_exec($ch);
curl_close($ch);

function progress($resource, $download_size, $downloaded, $upload_size, $uploaded)
{
    if ($download_size > 0) {
        echo $downloaded / $download_size * 100 . "%\n";
    }
    sleep(1); // only to make the output readable; note this also throttles the transfer
}
There is a way to do this - please see the following links; they explain how to do it using curl_multi_init: php.net curl_multi_init and http://arguments.callee.info/2010/02/21/multiple-curl-requests-with-php/
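For completeness, here is a minimal curl_multi sketch in the spirit of those links (the two URLs are placeholders):
$urls = array('http://example.com/long-job', 'http://example.com/progress'); // placeholders
$mh = curl_multi_init();
$handles = array();
foreach ($urls as $url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_multi_add_handle($mh, $ch);
    $handles[] = $ch;
}
// Drive all transfers concurrently until every one has finished.
do {
    $status = curl_multi_exec($mh, $running);
    if ($running) {
        curl_multi_select($mh); // wait for activity instead of busy-looping
    }
} while ($running && $status == CURLM_OK);
foreach ($handles as $ch) {
    echo curl_multi_getcontent($ch); // the response body of each request
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);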
Here I explain the details of my question. First, check the code below:
$ch = curl_init('http://example123.com');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS, $xml);
$result = @curl_exec($ch);
Now my question is: if "http://example123.com" is not valid, or there is no such URL, then what is the problem?
I have a page where the above is written. When the code executes, the page takes too much time, but if I comment out the five lines above, my page executes faster.
Can anybody tell me the reason why the page executes so slowly?
Thanks, Sanjib
Your script waits for a response (which may take up to 60 seconds, per default_socket_timeout).
You should set curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1); to make the script follow the redirect from http://example123.com/ to http://ww38.example123.com/, the way a browser does.
When a cURL request encounters an invalid URI, it waits until the default connection timeout is reached, and this makes the page load slower.
The default connect timeout is set in lib/connect.h (if you build cURL yourself on a Linux server); you can change it there:
#define DEFAULT_CONNECT_TIMEOUT 300000 /* milliseconds == five minutes */
Or you can explicitly set it in your code:
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 100); # connect timeout in seconds (use CURLOPT_CONNECTTIMEOUT_MS for milliseconds)
curl_setopt($ch, CURLOPT_TIMEOUT, 400); # timeout in seconds
CURLOPT_CONNECTTIMEOUT : The number of seconds to wait while trying to connect. Use 0 to wait indefinitely.
CURLOPT_TIMEOUT : The maximum number of seconds to allow cURL functions to execute.
(If you are using PHP as a FastCGI application, make sure you check the FastCGI timeout settings.)
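For example, a sketch that fails fast on a dead host, based on the code in the question ($xml is from the question; the 5 and 10 second limits are arbitrary):
$ch = curl_init('http://example123.com');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS, $xml);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 5); // give up connecting after 5s instead of the default
curl_setopt($ch, CURLOPT_TIMEOUT, 10);       // cap the whole request at 10s
$result = curl_exec($ch);
if ($result === false) {
    // e.g. errno 6 = couldn't resolve host, errno 28 = timeout
    echo 'cURL failed: (' . curl_errno($ch) . ') ' . curl_error($ch);
}
curl_close($ch);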
I'm using this code to test my AWStats with private proxies [4 IPs]:
curl_setopt($ch, CURLOPT_URL, "http://example.com/");
curl_setopt($ch, CURLOPT_HTTPGET, 1);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
curl_setopt($ch, CURLOPT_HEADER, FALSE);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 5);
curl_setopt($ch, CURLOPT_TIMEOUT, 5);
curl_setopt($ch, CURLOPT_PROXY, trim($privateproxylist[$p]));
When I check my stats I can see visits and referrers, but is there any option to make this script stay on CURLOPT_URL for 60s for each proxy?
Thanks
According to a webpage I found:
Based on the time between a visitor's first and last document access
AWStats tries to calculate an average visit duration.
Therefore you need to wait 60s and then make a request to the website again. As I don't know the internals of AWStats, you may need to use a different page URL, but in theory you should be able to just request the same URL. Therefore it's just a case of:
// 1. Make your curl request to URL
// 2. Wait 60s
sleep(60);
// 3. Make the curl request again
// 4. Change proxy and go back to step 1
Of course this is synchronous, so the script will run for a minimum of 4 minutes (based on 4 proxy IPs), so don't forget to set the execution time limit of your PHP script to unlimited (or very high).
You may also need to set the "cookiejar" option on the cURL resource, as AWStats MAY use a session cookie or something like that to identify visitors. A cookiejar text file will need to be set, so the session cookie can be stored and then resent on the second request. Don't forget to clear the cookie file (or just set a new text file in the options) before using a new proxy IP. A rough sketch of the whole loop follows below.
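Putting those steps together (the proxy list and URL are from the question; the cookie file names are placeholders):
set_time_limit(0); // the loop runs for at least 60s per proxy

foreach ($privateproxylist as $i => $proxy) {
    $cookieFile = "cookies_" . $i . ".txt"; // fresh cookie jar for each proxy
    for ($visit = 0; $visit < 2; $visit++) {
        $ch = curl_init();
        curl_setopt($ch, CURLOPT_URL, "http://example.com/");
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
        curl_setopt($ch, CURLOPT_PROXY, trim($proxy));
        curl_setopt($ch, CURLOPT_COOKIEJAR, $cookieFile);  // save any session cookie
        curl_setopt($ch, CURLOPT_COOKIEFILE, $cookieFile); // resend it on the second visit
        curl_exec($ch);
        curl_close($ch);
        if ($visit == 0) {
            sleep(60); // so AWStats sees 60s between the first and last access
        }
    }
}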
I'm running a cURL request on an eXist database through PHP. The dataset is very large, and as a result the database consistently takes a long time to return an XML response. To deal with that, we set up a cURL request with what is supposed to be a long timeout.
$ch = curl_init();
// CURLOPT_HTTPHEADER expects a plain array of "Name: value" strings
$headers = array(
    "Content-Length: " . strlen($postString),
    "User-Agent: Curl/1.0",
);
curl_setopt($ch, CURLOPT_URL, $requestUrl);
curl_setopt($ch, CURLOPT_HEADER, false);
curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_USERPWD, 'admin:');
curl_setopt($ch, CURLOPT_TIMEOUT, 1000);
$response = curl_exec($ch);
curl_close($ch);
However, the cURL request consistently ends before the request is completed (the same request completes in under 1000 seconds via a browser). Does anyone know if this is the proper way to set timeouts in cURL?
See documentation: http://www.php.net/manual/en/function.curl-setopt.php
CURLOPT_CONNECTTIMEOUT - The number of seconds to wait while trying to connect. Use 0 to wait indefinitely.
CURLOPT_TIMEOUT - The maximum number of seconds to allow cURL functions to execute.
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 0);
curl_setopt($ch, CURLOPT_TIMEOUT, 400); //timeout in seconds
Also don't forget to extend the execution time of the PHP script itself:
set_time_limit(0);// to infinity for example
Hmm, it looks to me like CURLOPT_TIMEOUT defines the amount of time that any cURL function is allowed to take to execute. I think you should actually be looking at CURLOPT_CONNECTTIMEOUT instead, since that tells cURL the maximum amount of time to wait for the connection to complete.
There is a quirk with this that might be relevant for some people... From the PHP docs comments.
If you want cURL to timeout in less than one second, you can use CURLOPT_TIMEOUT_MS, although there is a bug/"feature" on "Unix-like systems" that causes libcurl to timeout immediately if the value is < 1000 ms with the error "cURL Error (28): Timeout was reached". The explanation for this behavior is:
"If libcurl is built to use the standard system name resolver, that portion of the transfer will still use full-second resolution for timeouts with a minimum timeout allowed of one second."
What this means to PHP developers is "You can't use this function without testing it first, because you can't tell if libcurl is using the standard system name resolver (but you can be pretty sure it is)"
The problem is that on (Li|U)nix, when libcurl uses the standard name resolver, a SIGALRM is raised during name resolution which libcurl thinks is the timeout alarm.
The solution is to disable signals using CURLOPT_NOSIGNAL. Here's an example script that requests itself causing a 10-second delay so you can test timeouts:
if (!isset($_GET['foo'])) {
    // Client
    $ch = curl_init('http://localhost/test/test_timeout.php?foo=bar');
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_NOSIGNAL, 1);
    curl_setopt($ch, CURLOPT_TIMEOUT_MS, 200);
    $data = curl_exec($ch);
    $curl_errno = curl_errno($ch);
    $curl_error = curl_error($ch);
    curl_close($ch);

    if ($curl_errno > 0) {
        echo "cURL Error ($curl_errno): $curl_error\n";
    } else {
        echo "Data received: $data\n";
    }
} else {
    // Server
    sleep(10);
    echo "Done.";
}
From http://www.php.net/manual/en/function.curl-setopt.php#104597
Your code sets the timeout to 1000 seconds. For milliseconds, use CURLOPT_TIMEOUT_MS.
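For example (the 500 ms value is arbitrary; note the CURLOPT_NOSIGNAL caveat described above for sub-second timeouts):
curl_setopt($ch, CURLOPT_NOSIGNAL, 1);     // needed on *nix for sub-second timeouts
curl_setopt($ch, CURLOPT_TIMEOUT_MS, 500); // time out after 500 milliseconds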
You will need to make sure about the timeouts between you and the file; in this case, PHP and cURL.
To tell cURL to never time out while a transfer is still active, set CURLOPT_TIMEOUT to 0 instead of 1000:
curl_setopt($ch, CURLOPT_TIMEOUT, 0);
In PHP, again, you must remove the time limit, or PHP itself (after 30 seconds by default) will kill the script along with cURL's request. This alone should fix your issue.
In addition, if you require data integrity, you could add a layer of security by using ignore_user_abort:
# The maximum execution time, in seconds. If set to zero, no time limit is imposed.
set_time_limit(0);
# Make sure to keep alive the script when a client disconnect.
ignore_user_abort(true);
A client disconnection would otherwise interrupt the execution of the script and possibly damage data (e.g. a non-transactional database query, building a config file, etc.); in your case it would leave a partially downloaded file, and you may or may not care about that.
Answering this old question because this thread is at the top of search engine results for CURL_TIMEOUT.
You can't run the request from a browser; it will time out waiting for the server running the cURL request to respond. The browser is probably timing out after 1-2 minutes, the default network timeout.
You need to run it from the command line/terminal.
If you are using PHP as a fastCGI application then make sure you check the fastCGI timeout settings.
See: PHP curl put 500 error