Here are the details of my question. First, check the code below:
$ch = curl_init('http://example123.com');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS, $xml);
$result = @curl_exec($ch);
Now my question: if "http://example123.com" is not valid, or there is no such URL, what is the problem?
I have a page containing the code above. When the code executes, the page takes too much time to load, but if I comment out those five lines, the page loads much faster.
Can anybody tell me the reason why the page executes so slowly?
Thanks, Sanjib
Your script waits for a response, which may take up to 60 seconds (default_socket_timeout).
You should set curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1); to make the script follow the redirect from http://example123.com/ to http://ww38.example123.com/, the way it does in a browser.
When a cURL request encounters an invalid URI, it waits until the default connection timeout is reached, and this is what makes the page load slower.
The default connection timeout is set in libcurl's lib/connect.h (on a Linux server). You can change it there:
#define DEFAULT_CONNECT_TIMEOUT 300000 /* milliseconds == five minutes */
Or you can set it explicitly in your code:
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 100); // or curl_setopt($ch, CURLOPT_CONNECTTIMEOUT_MS, 0);
curl_setopt($ch, CURLOPT_TIMEOUT, 400); // timeout in seconds
CURLOPT_CONNECTTIMEOUT: The number of seconds to wait while trying to connect. Use 0 to wait indefinitely.
CURLOPT_TIMEOUT: The maximum number of seconds to allow cURL functions to execute.
(If you are using PHP as a FastCGI application, make sure you check the FastCGI timeout settings.)
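To make the page fail fast instead of hanging, here is a minimal sketch combining those options with the asker's snippet (example123.com and $xml are the placeholders from the question); if the host cannot be resolved or connected to, curl_exec() returns false within seconds and curl_error() explains why:
$xml = '<request/>'; // stand-in for the asker's POST payload
$ch = curl_init('http://example123.com');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS, $xml);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 5); // give up connecting after 5 seconds
curl_setopt($ch, CURLOPT_TIMEOUT, 10);       // cap the whole request at 10 seconds
$result = curl_exec($ch);
if ($result === false) {
    echo 'cURL error: ' . curl_error($ch); // e.g. "Could not resolve host"
}
curl_close($ch);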
Related
I have a foreach loop doing cURL requests, but some requests take more than 10 seconds to answer, and I want to skip those requests. I'm trying to set a maximum of 5 seconds per request; if it doesn't respond, continue to the next request.
I tried setting a timeout in cURL:
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 2);
or
curl_setopt($ch, CURLOPT_TIMEOUT, 400);
but I don't think that fixes my problem.
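For what it's worth, here is a minimal sketch of the skipping logic, assuming a made-up $urls list: with CURLOPT_TIMEOUT set to 5, curl_exec() gives up after five seconds, and checking curl_errno() for error 28 (operation timed out) lets the loop move on.
$urls = ['http://example.com/a', 'http://example.com/b']; // hypothetical list

foreach ($urls as $url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 2); // at most 2 seconds to connect
    curl_setopt($ch, CURLOPT_TIMEOUT, 5);        // at most 5 seconds in total
    $response = curl_exec($ch);
    $errno = curl_errno($ch);
    curl_close($ch);

    if ($errno === CURLE_OPERATION_TIMEOUTED) { // error 28: timeout reached
        continue; // skip this slow request and go to the next URL
    }

    // ... process $response ...
}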
While using PHP I am taking image links from my MySQL database and echoing them out. There are 600 or so, but it keeps stopping after running through 100 or so. It is not a logic error; it seems there is a setting that is stopping PHP from continuing the cURL requests. Please advise which setting I should raise to allow a longer cURL run, thanks!
Here is what I am using now:
function file_get_contents_curl($url)
{
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_HEADER, 0);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
    curl_setopt($ch, CURLOPT_BINARYTRANSFER, true);
    $data = curl_exec($ch);
    curl_close($ch); // free the handle; important when calling this in a loop
    return $data;
}
$htmlaa = file_get_contents_curl($getimagefrom);
$docaa = new DOMDocument();
@$docaa->loadHTML($htmlaa); // @ suppresses libxml warnings from malformed HTML
Again, it is working just fine, but it keeps stopping after running for maybe 3 minutes.
You can set the curl timeout like so:
curl_setopt($ch, CURLOPT_TIMEOUT, 1000); //seconds to live
Since there are multiple factors that influence execution time you should also check out these two as well:
http://php.net/manual/en/function.set-time-limit.php
http://php.net/manual/en/info.configuration.php#ini.max-execution-time
Also please note that CURLOPT_TIMEOUT defines the maximum amount of time that any cURL function is allowed to take to execute. You should also check out the CURLOPT_CONNECTTIMEOUT option.
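Since the loop here dies after roughly 3 minutes, PHP's max_execution_time is the more likely culprit than cURL. A rough sketch of the combination, with a hypothetical $imageLinks array standing in for the rows fetched from MySQL:
set_time_limit(0); // lift PHP's own script time limit (0 = unlimited)

$imageLinks = [/* ... the ~600 URLs from the database ... */];

foreach ($imageLinks as $url) {
    $htmlaa = file_get_contents_curl($url); // the function defined above
    $docaa = new DOMDocument();
    @$docaa->loadHTML($htmlaa);
    // ... extract and echo the image links ...
}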
I'm using this code to test my AWStats with private proxies [4 IPs]:
curl_setopt($ch, CURLOPT_URL, "http://example.com/");
curl_setopt($ch, CURLOPT_HTTPGET, 1);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
curl_setopt($ch, CURLOPT_HEADER, FALSE);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 5);
curl_setopt($ch, CURLOPT_TIMEOUT, 5);
curl_setopt($ch, CURLOPT_PROXY, trim($privateproxylist[$p]));
When I'm checking my stats I can see visits and referrers, but is there any option to make this script stay on the CURLOPT_URL for 60 s for each proxy?
Thanks
According to a webpage I found:
Based on the time between a visitor's first and last document access
AWStats tries to calculate an average visit duration.
Therefore you need to wait 60 s and then make a request to the website again. As I don't know the internals of AWStats, you may need to use a different page URL, but in theory you should be able to just request the same URL. So it's just a case of:
// 1. Make your curl request to URL
// 2. Wait 60s
sleep(60);
// 3. Make the curl request again
// 4. Change proxy and go back to step 1
Of course this is synchronous, so the script will run for a minimum of 4 minutes (based on 4 proxy IPs); don't forget to set the execution time limit of your PHP script to unlimited (or very high).
You may also need to set the "cookiejar" option on the cURL resource, as AWStats MAY use a session cookie or something like that to identify visitors. A cookie-jar text file needs to be set so that the session cookie can be stored and then resent on the second request. Don't forget to clear the cookie file (or just set a new text file in the options) before switching to a new proxy IP.
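Putting those four steps together, a rough sketch might look like this (the target URL, proxy list, and cookie-file names are placeholders; treat it as an outline rather than a tested AWStats recipe):
set_time_limit(0); // the loop runs for at least 4 minutes

$privateproxylist = ['1.2.3.4:8080' /* , ... three more ... */];

foreach ($privateproxylist as $p => $proxy) {
    $cookieJar = "cookies_proxy{$p}.txt"; // fresh cookie file per proxy

    for ($visit = 0; $visit < 2; $visit++) {
        $ch = curl_init('http://example.com/');
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
        curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 5);
        curl_setopt($ch, CURLOPT_TIMEOUT, 5);
        curl_setopt($ch, CURLOPT_PROXY, trim($proxy));
        curl_setopt($ch, CURLOPT_COOKIEJAR, $cookieJar);  // store the session cookie
        curl_setopt($ch, CURLOPT_COOKIEFILE, $cookieJar); // resend it on the second request
        curl_exec($ch);
        curl_close($ch);

        if ($visit === 0) {
            sleep(60); // wait 60 s so AWStats sees a visit duration
        }
    }
}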
In my PHP script I am using the cURL library, and the curl_exec function takes 1-5 seconds to execute (for some URLs it takes 10 seconds as well).
This is my script:
$ch = curl_init();
$timeout = 5;
$url = "http://www.mashable.com/feed";
curl_setopt ($ch, CURLOPT_URL, $url );
curl_setopt ($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt ($ch, CURLOPT_CONNECTTIMEOUT, $timeout);
$file_contents = curl_exec($ch);
curl_close($ch);
Is that normal?
It totally depends on your connection, the target URLs, and the server the script runs on. It's quite possible that it's normal.
If you have command line access to your server, you could try replicating the actions in command line curl and see how long they take there; also try them from your local machine. If there are massive differences, there could be a networking or firewall issue.
But those kinds of loading times are not unheard of.
It doesn't have to be uncommon: you are under the same conditions as if you requested the URL with your own browser. Connecting and exchanging requests takes some time, and if the URL you are requesting is busy or on a slower connection, the time naturally increases.
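If you want to see where those seconds go, curl_getinfo() exposes per-phase timings after curl_exec(); here is a small sketch using the feed URL from the question:
$ch = curl_init('http://www.mashable.com/feed');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 5);
curl_exec($ch);

// Break the total time into phases to spot the bottleneck.
printf("DNS lookup: %.3f s\n", curl_getinfo($ch, CURLINFO_NAMELOOKUP_TIME));
printf("Connect:    %.3f s\n", curl_getinfo($ch, CURLINFO_CONNECT_TIME));
printf("First byte: %.3f s\n", curl_getinfo($ch, CURLINFO_STARTTRANSFER_TIME));
printf("Total:      %.3f s\n", curl_getinfo($ch, CURLINFO_TOTAL_TIME));
curl_close($ch);
A long name-lookup time points at DNS, a long connect time at the network or a firewall, and a long time to first byte at a slow remote server.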
I'm running a cURL request against an eXist database through PHP. The dataset is very large, and as a result the database consistently takes a long time to return an XML response. To fix that, we set up a cURL request with what is supposed to be a long timeout.
$ch = curl_init();
$headers = array(
    "Content-Length: " . strlen($postString), // CURLOPT_HTTPHEADER expects "Name: value" strings
    "User-Agent: Curl/1.0",
);
curl_setopt($ch, CURLOPT_URL, $requestUrl);
curl_setopt($ch, CURLOPT_HEADER, false);
curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_USERPWD, 'admin:');
curl_setopt($ch, CURLOPT_TIMEOUT, 1000);
$response = curl_exec($ch);
curl_close($ch);
However, the cURL request consistently ends before the request is completed (it takes less than 1000 seconds when requested via a browser). Does anyone know if this is the proper way to set timeouts in cURL?
See documentation: http://www.php.net/manual/en/function.curl-setopt.php
CURLOPT_CONNECTTIMEOUT - The number of seconds to wait while trying to connect. Use 0 to wait indefinitely.
CURLOPT_TIMEOUT - The maximum number of seconds to allow cURL functions to execute.
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 0);
curl_setopt($ch, CURLOPT_TIMEOUT, 400); //timeout in seconds
Also don't forget to increase the execution time of the PHP script itself:
set_time_limit(0); // zero means no limit, for example
Hmm, it looks to me like CURLOPT_TIMEOUT defines the amount of time that any cURL function is allowed to take to execute. I think you should actually be looking at CURLOPT_CONNECTTIMEOUT instead, since that tells cURL the maximum amount of time to wait for the connection to complete.
There is a quirk with this that might be relevant for some people... From the PHP docs comments.
If you want cURL to timeout in less than one second, you can use CURLOPT_TIMEOUT_MS, although there is a bug/"feature" on "Unix-like systems" that causes libcurl to timeout immediately if the value is < 1000 ms with the error "cURL Error (28): Timeout was reached". The explanation for this behavior is:
"If libcurl is built to use the standard system name resolver, that portion of the transfer will still use full-second resolution for timeouts with a minimum timeout allowed of one second."
What this means to PHP developers is "You can't use this function without testing it first, because you can't tell if libcurl is using the standard system name resolver (but you can be pretty sure it is)"
The problem is that on (Li|U)nix, when libcurl uses the standard name resolver, a SIGALRM is raised during name resolution which libcurl thinks is the timeout alarm.
The solution is to disable signals using CURLOPT_NOSIGNAL. Here's an example script that requests itself causing a 10-second delay so you can test timeouts:
if (!isset($_GET['foo'])) {
    // Client
    $ch = curl_init('http://localhost/test/test_timeout.php?foo=bar');
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_NOSIGNAL, 1);
    curl_setopt($ch, CURLOPT_TIMEOUT_MS, 200);
    $data = curl_exec($ch);
    $curl_errno = curl_errno($ch);
    $curl_error = curl_error($ch);
    curl_close($ch);

    if ($curl_errno > 0) {
        echo "cURL Error ($curl_errno): $curl_error\n";
    } else {
        echo "Data received: $data\n";
    }
} else {
    // Server
    sleep(10);
    echo "Done.";
}
From http://www.php.net/manual/en/function.curl-setopt.php#104597
Your code sets the timeout to 1000 seconds. For milliseconds, use CURLOPT_TIMEOUT_MS.
You will need to take care of the timeouts at every layer between you and the file; in this case, PHP and cURL.
To tell Curl to never timeout when a transfer is still active, you need to set CURLOPT_TIMEOUT to 0, instead of 1000.
curl_setopt($ch, CURLOPT_TIMEOUT, 0);
In PHP, again, you must remove the time limit, or PHP itself (after 30 seconds by default) will kill the script along with cURL's request. This alone should fix your issue.
In addition, if you require data integrity, you could add a layer of safety by using ignore_user_abort:
# The maximum execution time, in seconds. If set to zero, no time limit is imposed.
set_time_limit(0);
# Keep the script alive when a client disconnects.
ignore_user_abort(true);
A client disconnection would otherwise interrupt the execution of the script and possibly damage data (e.g. a non-transactional database query, building a config file, etc.); in your case it would download a partial file... and you might, or might not, care about this.
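Combining the pieces above, a hedged version of the original request might look like this (the eXist endpoint URL and POST body are hypothetical stand-ins for the asker's $requestUrl and $postString):
set_time_limit(0);       // remove PHP's own script time limit
ignore_user_abort(true); // keep running even if the client disconnects

$requestUrl = 'http://localhost:8080/exist/rest/db/mycollection'; // placeholder
$postString = '<query/>';                                         // placeholder

$ch = curl_init($requestUrl);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_USERPWD, 'admin:');
curl_setopt($ch, CURLOPT_POSTFIELDS, $postString);
curl_setopt($ch, CURLOPT_TIMEOUT, 0); // 0 = never time out while the transfer is active
$response = curl_exec($ch);
curl_close($ch);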
Answering this old question because this thread is at the top of search-engine results for CURL_TIMEOUT.
You can't run the request from a browser: the browser will time out waiting for the server running the cURL request to respond, probably after 1-2 minutes (the default network timeout).
You need to run it from the command line/terminal instead.
If you are using PHP as a fastCGI application then make sure you check the fastCGI timeout settings.
See: PHP curl put 500 error