I am trying to send cURL requests from a source to a destination in a loop. The loop runs twice: the first request lasts about 32 seconds and the second about 50 seconds, and then the script times out. The timeout is not within my control, as this is shared hosting.
The Source section below is run in the browser; the error message below appears after 120 seconds have been used up.
Error Details: Fatal error: Maximum execution time of 120 seconds exceeded
Question
I assumed the script should not time out, since each request is submitted separately through its own cURL call. Still, the time seems to be consolidated, as if it all formed one request in total.
If I run the loop only once, everything works, as it takes about 30 seconds.
Am I missing anything?
Source
for ($i = 0; $i <= 200; $i += 100) {
    $postData = array(
        'start' => $i,
        'end'   => $i + 100
    );

    $ch = curl_init('Server url');
    curl_setopt_array($ch, array(
        CURLOPT_POST           => TRUE,
        CURLOPT_RETURNTRANSFER => TRUE,
        CURLOPT_HTTPHEADER     => array(
            'Content-Type: application/json'
        ),
        CURLOPT_POSTFIELDS     => json_encode($postData)
    ));

    $response = curl_exec($ch);
    $responseData = json_decode($response, TRUE);
    curl_close($ch);

    echo $response;
}
Destination
public function methodname()
{
    $json = json_decode(file_get_contents('php://input'), true);
    // .
    // .
    // Logic that runs for 32 seconds
    // .
    // .
    header('Content-type: application/json');
    echo json_encode("message");
}
Try adding a sleep(1) call inside your loop. It could be that the server you are requesting does not like multiple POST requests in a short time.
Try using cURL's CURLOPT_TIMEOUT or similar options. More information here: https://www.php.net/manual/en/function.curl-setopt.php
Answer: read the documentation.
Later edit:
You could also use set_time_limit(0); (or a value greater than 120) to increase your script's execution time limit.
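For illustration, here is a minimal sketch (untested, and assuming the shared host honours set_time_limit) that combines both suggestions with the loop from the question; 'Server url' is the same placeholder used above, and the 60-second cap is an arbitrary choice:
set_time_limit(0); // 0 = no limit; some shared hosts ignore or forbid this

for ($i = 0; $i <= 200; $i += 100) {
    $ch = curl_init('Server url'); // placeholder URL from the question
    curl_setopt_array($ch, array(
        CURLOPT_POST           => true,
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_HTTPHEADER     => array('Content-Type: application/json'),
        CURLOPT_POSTFIELDS     => json_encode(array('start' => $i, 'end' => $i + 100)),
        CURLOPT_TIMEOUT        => 60 // give up on a single request after 60 seconds
    ));
    $response = curl_exec($ch);
    curl_close($ch);
    echo $response;
}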
Related
I can't allow file_get_contents to run for more than 1 second; if that is not possible, I need to skip to the next loop iteration.
for ($i = 0; $i <= 59; ++$i) {
    $f = file_get_contents('http://example.com');
    if (timeout < 1 sec) - do something and loop next;
    else skip file_get_contents(), do something else, and loop next;
}
Is it possible to make a function like this?
Actually I'm using curl_multi, and I can't figure out how to set a timeout on a WHOLE curl_multi request.
If you are working with HTTP URLs only, you can do the following:
$ctx = stream_context_create(array(
    'http' => array(
        'timeout' => 1
    )
));

for ($i = 0; $i <= 59; $i++) {
    file_get_contents("http://example.com/", 0, $ctx);
}
However, this is just the read timeout, meaning the time between two read operations (or the time before the first read operation). If data keeps arriving at a constant rate, such gaps never occur, so the download could still take an hour.
If you want the whole download to take no more than a second, you can't use file_get_contents() any more. I would encourage you to use cURL in this case, like this:
for ($i = 0; $i < 59; $i++) {
    // create curl resource (re-created each iteration because it is closed below)
    $ch = curl_init();
    // set url
    curl_setopt($ch, CURLOPT_URL, "example.com");
    // set timeout for the whole transfer
    curl_setopt($ch, CURLOPT_TIMEOUT, 1);
    // return the transfer as a string
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    // $output contains the output string
    $output = curl_exec($ch);
    // close curl resource to free up system resources
    curl_close($ch);
}
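The question also mentions curl_multi. There is no single option that caps a whole curl_multi batch, but as a sketch (not part of the original answer), each handle can carry its own CURLOPT_TIMEOUT_MS, and an overall budget can be enforced in the loop that drives the multi handle; the URLs and the 5-second budget below are arbitrary:
// Sketch only: 1-second cap per transfer plus a 5-second budget for the whole batch.
$urls = array('http://example.com/', 'http://example.org/'); // hypothetical URLs
$mh = curl_multi_init();
foreach ($urls as $url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_TIMEOUT_MS, 1000); // per-transfer limit
    curl_multi_add_handle($mh, $ch);
}

$deadline = microtime(true) + 5; // overall budget for the whole batch
$active = null;
do {
    curl_multi_exec($mh, $active);
    curl_multi_select($mh, 0.1); // wait briefly for activity
} while ($active > 0 && microtime(true) < $deadline);

curl_multi_close($mh);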
When I run a check on 10 URLs and I am able to get a connection with the host server, the handle returns a success code (CURLE_OK).
When processing each handle, if a server refuses the connection, the handle carries an error code instead.
The problem
I assumed that when we get a bad handle, cURL would mark that handle but continue to process the remaining unprocessed handles; however, this is not what seems to happen.
When we come across a bad handle, cURL marks it as bad but does not process the remaining unprocessed handles.
This can be hard to detect: if I do get a connection with all handles, which is what happens most of the time, the problem is not visible (cURL only stops on the first bad connection).
For the test, I had to find a suitable site which loads slowly or refuses a certain number of simultaneous connections.
set_time_limit(0);

$l = array(
    'http://smotri.com/video/list/',
    'http://smotri.com/video/list/sports/',
    'http://smotri.com/video/list/animals/',
    'http://smotri.com/video/list/travel/',
    'http://smotri.com/video/list/hobby/',
    'http://smotri.com/video/list/gaming/',
    'http://smotri.com/video/list/mult/',
    'http://smotri.com/video/list/erotic/',
    'http://smotri.com/video/list/auto/',
    'http://smotri.com/video/list/humour/',
    'http://smotri.com/video/list/film/'
);

$mh = curl_multi_init();
$s = 0;
$f = 10;
while ($s <= $f)
{
    $ch = curl_init();
    $curlsettings = array(
        CURLOPT_URL            => $l[$s],
        CURLOPT_TIMEOUT        => 0,
        CURLOPT_CONNECTTIMEOUT => 0,
        CURLOPT_RETURNTRANSFER => 1
    );
    curl_setopt_array($ch, $curlsettings);
    curl_multi_add_handle($mh, $ch);
    $s++;
}

$active = null;
do
{
    curl_multi_exec($mh, $active);
    curl_multi_select($mh);

    $info = curl_multi_info_read($mh);

    echo '<pre>';
    var_dump($info);

    if ($info['result'] === CURLE_OK)
        echo curl_getinfo($info['handle'], CURLINFO_EFFECTIVE_URL) . ' success<br>';

    if ($info['result'] != 0)
        echo curl_getinfo($info['handle'], CURLINFO_EFFECTIVE_URL) . ' failed<br>';
} while ($active > 0);

curl_multi_close($mh);
I have dumped $info in the script, which asks the multi handle whether there is any new information on any handle while it is running.
When the script has ended, we see some bool(false) entries - emitted when no new information was available (handles were still processing) - along with all of the handles if everything succeeded, or only some of them if one handle failed.
I have failed at fixing this; it's probably something I have overlooked, and I have gone too far down the road of attempting to fix things which are not relevant.
Some of my attempts at fixing this were:
- Assigning each $ch handle to an array - $ch[1], $ch[2], etc. (instead of adding the current $ch handle to the multi handle and then overwriting it, as in the test above)
- Removing handles after success/failure with curl_multi_remove_handle
- Setting CURLOPT_CONNECTTIMEOUT and CURLOPT_TIMEOUT to infinity
- Many more (I will update this post, as I have forgotten all of them)
I am testing this with PHP version 5.4.14.
Hopefully I have illustrated the points well enough.
Thanks for reading.
I've been mucking around with your script for a while now, trying to get it to work. It was only when I read "Repeated calls to this function will return a new result each time, until a FALSE is returned as a signal that there is no more to get at this point." on http://se2.php.net/manual/en/function.curl-multi-info-read.php that I realized a while loop might work.
The extra while loop makes it behave exactly how you'd expect. Here is the output I get:
http://smotri.com/video/list/sports/ failed
http://smotri.com/video/list/travel/ failed
http://smotri.com/video/list/gaming/ failed
http://smotri.com/video/list/erotic/ failed
http://smotri.com/video/list/humour/ failed
http://smotri.com/video/list/animals/ success
http://smotri.com/video/list/film/ success
http://smotri.com/video/list/auto/ success
http://smotri.com/video/list/ failed
http://smotri.com/video/list/hobby/ failed
http://smotri.com/video/list/mult/ failed
Here's the code I used for testing:
<?php
set_time_limit(0);

$l = array(
    'http://smotri.com/video/list/',
    'http://smotri.com/video/list/sports/',
    'http://smotri.com/video/list/animals/',
    'http://smotri.com/video/list/travel/',
    'http://smotri.com/video/list/hobby/',
    'http://smotri.com/video/list/gaming/',
    'http://smotri.com/video/list/mult/',
    'http://smotri.com/video/list/erotic/',
    'http://smotri.com/video/list/auto/',
    'http://smotri.com/video/list/humour/',
    'http://smotri.com/video/list/film/'
);

$mh = curl_multi_init();
$s = 0;
$f = 10;
while ($s <= $f)
{
    $ch = curl_init();
    if ($s % 2)
    {
        $curlsettings = array(
            CURLOPT_URL            => $l[$s],
            CURLOPT_TIMEOUT_MS     => 3000,
            CURLOPT_RETURNTRANSFER => 1,
        );
    }
    else
    {
        $curlsettings = array(
            CURLOPT_URL            => $l[$s],
            CURLOPT_TIMEOUT_MS     => 4000,
            CURLOPT_RETURNTRANSFER => 1,
        );
    }
    curl_setopt_array($ch, $curlsettings);
    curl_multi_add_handle($mh, $ch);
    $s++;
}

$active = null;
do
{
    $mrc = curl_multi_exec($mh, $active);
    curl_multi_select($mh);

    while ($info = curl_multi_info_read($mh))
    {
        echo '<pre>';
        //var_dump($info);
        if ($info['result'] === 0)
        {
            echo curl_getinfo($info['handle'], CURLINFO_EFFECTIVE_URL) . ' success<br>';
        }
        else
        {
            echo curl_getinfo($info['handle'], CURLINFO_EFFECTIVE_URL) . ' failed<br>';
        }
    }
} while ($active > 0);

curl_multi_close($mh);
Hope that helps. For testing, just adjust CURLOPT_TIMEOUT_MS to suit your internet connection. I made it alternate between 3000 and 4000 milliseconds, as 3000 will fail and 4000 usually succeeds.
Update
After going through the PHP and libcurl docs, I have found out how curl_multi_exec works (in libcurl it is curl_multi_perform). When first called, it starts handling transfers for all the added handles (added earlier via curl_multi_add_handle).
The number it assigns to $active is the number of transfers still running. So if it is less than the total number of handles you have, you know that one or more transfers are complete. curl_multi_exec therefore acts as a kind of progress indicator as well.
Because all transfers are handled in a non-blocking fashion (transfers can finish simultaneously), the while loop that curl_multi_exec runs in cannot represent each iteration of a completed URL request.
All results are stored in a queue, so as soon as one or more transfers are complete you can call curl_multi_info_read to fetch them.
In my original answer I had curl_multi_info_read in a while loop. That loop keeps iterating until curl_multi_info_read finds no remaining data in the queue. After that, the outer while loop moves on to the next iteration if $active != 0 (meaning curl_multi_exec reported transfers still not complete).
To summarize, the outer loop keeps iterating while there are transfers not yet completed, and the inner loop iterates only when there is data from a completed transfer.
The PHP documentation is pretty bad for the curl multi functions, so I hope this clears a few things up. Below is an alternative way to do the same thing.
do
{
    curl_multi_exec($mh, $active);
} while ($active > 0);

// while($info = curl_multi_info_read($mh)) would also work here
for ($i = 0; $i <= $f; $i++) {
    $info = curl_multi_info_read($mh);
    if ($info['result'] === 0)
    {
        echo curl_getinfo($info['handle'], CURLINFO_EFFECTIVE_URL) . ' success<br>';
    }
    else
    {
        echo curl_getinfo($info['handle'], CURLINFO_EFFECTIVE_URL) . ' failed<br>';
    }
}
From this information you can also see that curl_multi_select is not needed here, as you don't want something that blocks until there is activity.
With the code you provided in your question, it only looked as though cURL wasn't proceeding after a few failed transfers; in fact, data was still queued in the buffer. Your code just wasn't calling curl_multi_info_read enough times. The reason all the successful transfers were picked up by your code is that PHP runs on a single thread, so the script hung waiting for those requests; the timed-out requests didn't make PHP hang/wait that long, so the number of iterations the while loop performed was smaller than the number of queued results.
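As a further illustration (a sketch, not part of the original answer), once curl_multi_info_read reports a completed transfer you can read its body with curl_multi_getcontent and detach the handle with curl_multi_remove_handle, so the multi handle only ever holds unfinished transfers:
do {
    curl_multi_exec($mh, $active);
    curl_multi_select($mh);

    // Drain every completed transfer that is queued right now.
    while ($info = curl_multi_info_read($mh)) {
        $handle = $info['handle'];
        $url    = curl_getinfo($handle, CURLINFO_EFFECTIVE_URL);

        if ($info['result'] === CURLE_OK) {
            // Body is available because CURLOPT_RETURNTRANSFER was set on the handle.
            $body = curl_multi_getcontent($handle);
            echo $url . ' success (' . strlen($body) . ' bytes)<br>';
        } else {
            echo $url . ' failed: ' . curl_error($handle) . '<br>';
        }

        curl_multi_remove_handle($mh, $handle);
        curl_close($handle);
    }
} while ($active > 0);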
My code is pretty simple.
function x($url, $request)
{
    static $curl = null;
    if (is_null($curl)) $curl = curl_init($url);

    $options = array(
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_HTTPHEADER     => array('Content-type: application/json'),
        CURLOPT_POST           => true,
        CURLOPT_POSTFIELDS     => $request
    );
    curl_setopt_array($curl, $options);

    $response = curl_exec($curl);
    echo curl_getinfo($curl)['total_time'] . ' ';
}

for ($i = 0; $i < 10000; $i++) x('http://server/', '<...post vars...>');
The problem is that most of the time I get a response from the server in about 0.0001 sec, but sometimes it takes 1.0001 or 2.0001 sec.
The code above may output something like:
0.000632 0.00034 2.001671 0.000526 0.000501 0.000914 0.007355 0.000769 0.001429 0.001249 0.000554 0.001623 0.000595 0.006834 0.000793 0.000436 0.000408 0.006953 0.000867 0.000593 0.000546 0.007408 0.000837 0.001208 0.000652 0.000947 0.000614 0.000641 0.000647 0.001288 0.000501 0.000582 0.000625 0.000288 0.000351 0.000557 0.000601 0.000259 0.000309 0.000541 0.000565 0.000582 0.000949 0.000403 0.000896 0.000487 0.000569 0.001233 1.002649 .0.001107
The problem is not with the server, because there are no such delays if I use, for example, stream_context_create() + fopen(). It seems like the problem is in cURL itself, but I can't figure out why it sometimes sleeps for a second or two.
If I use curl_close() and reinitialise $curl each time, there is no difference - it still hangs sometimes, with the same frequency.
Thank you in advance for your reply.
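A hedged suggestion (not from the original post): curl_getinfo also exposes per-phase timings (namelookup_time, connect_time, pretransfer_time, starttransfer_time), so logging them for the slow requests shows whether the extra second is spent on DNS, connecting, or waiting for the first byte. The helper name and the 0.5-second threshold below are made up for illustration:
function x_with_timings($url, $request) // hypothetical variant of x() above
{
    $curl = curl_init($url);
    curl_setopt_array($curl, array(
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_HTTPHEADER     => array('Content-type: application/json'),
        CURLOPT_POST           => true,
        CURLOPT_POSTFIELDS     => $request
    ));
    curl_exec($curl);

    $info = curl_getinfo($curl);
    if ($info['total_time'] > 0.5) { // only report the slow requests
        printf("dns=%f connect=%f pretransfer=%f firstbyte=%f total=%f\n",
            $info['namelookup_time'],
            $info['connect_time'],
            $info['pretransfer_time'],
            $info['starttransfer_time'],
            $info['total_time']);
    }
    curl_close($curl);
}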
I have a PHP file called testResponse.php which contains only:
<?php
sleep(5);
echo "go";
?>
Now, I'm calling this file from another page using file_get_contents, like this:
$start = microtime(true);
$opts = array('http' =>
    array(
        'method'  => 'GET',
        'timeout' => 1
    )
);
$context = stream_context_create($opts);
$loc = @file_get_contents("http://www.mywebsite.com/testResponse.php", false, $context);
$end = microtime(true);
echo $end - $start, "\n";
The output is more than 5 sec, which means that my timeout has been ignored...
I followed the advice of this post: stackoverflow.com/questions/3689371
But it seems that the hostname cannot be a path (like www.mywebsite.com/testResponse.php); it has to be just the hostname, like www.mywebsite.com.
So I'm stuck trying to achieve this goal:
Get the content of the page www.test.com/x.php with these constraints:
- if test.com doesn't exist or the page x.php doesn't exist, return nothing quickly;
- if the page exists but takes more than 1 sec to load, abort;
- else get the content of the file.
Edit: By the way, it seems to work when I call this page (testResponse.php) from my local server - except that it multiplies the timeout by 2. For instance, if I set the timeout to 1, something like "2.0054645" is echoed. But only from local...
The solution is to use PHP's cURL functions. The other question you linked to explains things properly, about the read timeouts vs. the connection timeouts and so on, but neither of those is truly what you're looking for here. Even the connection timeout won't work, because the connection to testResponse.php always succeeds; it's the waiting afterwards that matters, so what you need is an execution timeout. This is where cURL comes in handy.
So, testResponse.php doesn't need to be altered. In your main file, though, try the following code (this is tested and it works on my server):
$start = microtime(true);
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "http://www.mywebsite.com/testResponse.php");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_TIMEOUT, 1);

$output = curl_exec($ch);
$errno = curl_errno($ch);

if ($errno > 0) {
    if ($errno === 28) {
        echo "Connection timed out.";
    }
    else {
        echo "Error #" . $errno . ": " . curl_error($ch);
    }
}
else {
    echo $output;
}

$end = microtime(true);
echo "<br><br>" . ($end - $start);

curl_close($ch);
This sets the execution time limit of the cURL session, via the CURLOPT_TIMEOUT option you see on line 5. So, when the request times out, $errno will equal 28, the code for cURL's operation-timeout error. The rest of the error codes are listed in the cURL documentation, so you can extend the script above to act on them accordingly.
Finally, because of the CURLOPT_RETURNTRANSFER option that's set, curl_exec($ch) will return the content of the retrieved page if the session succeeds. Otherwise, it will return false.
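As an aside (not in the original answer), if sub-second limits matter, cURL also accepts a millisecond variant of the same option; the 1500 ms value is just an example:
// Same idea with millisecond granularity, e.g. abort after 1.5 seconds.
curl_setopt($ch, CURLOPT_TIMEOUT_MS, 1500);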
Hope this helps!
Edit: Removed the statement setting CURLOPT_HEADER. I also, for some reason, was under the impression that curl_exec($ch) set the value of $ch to the returned contents, forgetting that the contents are returned by curl_exec().
I am using PHP 5.2 and I am fetching data from a URL using the file_get_contents function. The loop runs over 5000 URLs, which I have divided into slots of 500, and I have set up a script like the one below.
For 500 URLs it takes 3 hours to complete, because some URLs take far too long while others finish in 1 second, which is fine.
What I want is: if a URL takes more than 30 seconds, skip it and go on to the next one.
I want to stop fetch after 30 sec.
<?php
// Create the stream context
$context = stream_context_create(array(
    'http' => array(
        'timeout' => 1 // Timeout in seconds
    )
));

// Fetch the URL's contents
echo date("Y-m-d H:i:s") . "\n";
$contents = file_get_contents('http://example.com', 0, $context);
echo date("Y-m-d H:i:s") . "\n";

// Check for empties
if (!empty($contents))
{
    // Woohoo
    // echo $contents;
    echo "file fetched";
}
else
{
    echo $contents;
    echo "more than 30 sec";
}
?>
I have already tried that, but it is not working for me: file_get_contents does not stop, it just continues. The only difference is that I now get no result after 30 seconds, but the time it takes is the same, as you can see in the output.
Output of php
2012-03-09 11:26:38
2012-03-09 11:26:40
more than 30 sec
You can set the HTTP timeout. (Not tested)
<?php
$ctx = stream_context_create(array(
    'http' => array(
        'timeout' => 30
    )
));

file_get_contents("http://example.com/", 0, $ctx);
Source
Edit: I don't know why it isn't working with this code for you. But if you can't get it to work this way, you may also want to give cURL a try. That could possibly even be faster (though I don't know whether it really is...).
If that works for you, you can then use the curl_setopt function to set the timeout with the CURLOPT_TIMEOUT flag.
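A minimal sketch of that idea (untested here, with example.com standing in for the real URL):
// Abort the whole transfer after 30 seconds and move on to the next URL.
$ch = curl_init('http://example.com/');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 10); // cap the connect phase as well
curl_setopt($ch, CURLOPT_TIMEOUT, 30);        // total time allowed for the request

$contents = curl_exec($ch);
if ($contents === false) {
    echo "skipped: " . curl_error($ch) . "\n";
}
curl_close($ch);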
There is some info in the PHP manual about timeouts:
http://php.net/manual/en/function.file-get-contents.php
It mentions the following, as of PHP 5.2.1:
ini_set('default_socket_timeout', 120);
$a = file_get_contents("http://abcxyz.com");
Or you can add a context, which is more or less the same:
// Create the stream context
$context = stream_context_create(array(
    'http' => array(
        'timeout' => 3 // Timeout in seconds
    )
));

// Fetch the URL's contents
$contents = file_get_contents('http://abcxyz.com', 0, $context);
A third option is using PHP's fsockopen, which has an explicit timeout parameter:
http://www.php.net/manual/en/function.fsockopen.php
$timeout = 2; // seconds
$fp = fsockopen($url, 80, $errNo, $errString, $timeout);
/* stops connecting after 2 seconds,
   stores the error number in $errNo
   and the error string in $errString */
To save writing a lot of code, you could use it as a quick check of whether the host is up before fetching, i.e.:
if (pingLink($domain, $timeout)) {
    file_get_contents($url);
}
function pingLink($domain, $timeout = 30)
{
    $status = 0; // default: site is down
    $file = fsockopen($domain, 80, $errNo, $errString, $timeout);
    if ($file) {
        $status = 1; // site is up
        fclose($file);
    }
    return $status;
}