I'm using some external sites to detect my site visitors' country, like this:
$ip = $_SERVER['REMOTE_ADDR'];
$url1 = 'http://api.hostip.info/get_json.php?ip='.$ip;
$url2 = 'http://ip2country.sourceforge.net/ip2c.php?format=JSON&ip='.$ip;
Sometimes sites like SourceForge take too much time to load. So can anyone tell me how to limit the HTTP response time? If url1 is down or has not responded within x seconds, I want to move on to url2, url3, etc.
$context = stream_context_create(array(
    'http' => array(
        'method'  => 'GET',
        'timeout' => 3 // seconds
    )
));
Then supply the stream context to fopen(), file_get_contents(), and so on:
http://php.net/manual/en/stream.contexts.php
http://php.net/manual/en/context.http.php
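For example, a minimal sketch of the fallback the question asks for, reusing $url1/$url2 from the question and the $context above:
$urls = array($url1, $url2); // add further services as needed
$json = false;
foreach ($urls as $url) {
    $json = @file_get_contents($url, false, $context);
    if ($json !== false) {
        break; // this service answered within the 3-second timeout
    }
}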
The manual calls that a "read timeout". I worry it may not include time for things like DNS resolution and the socket connection; I think the timeout before PHP starts reading from the stream may be governed by the default_socket_timeout setting.
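If so, you could lower that setting for the current script; a one-line sketch:
ini_set('default_socket_timeout', 3); // seconds; affects fopen()/file_get_contents() streams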
You may want to consider cURL instead; it seems a bit more specific, though I'm not sure whether CURLOPT_TIMEOUT is inclusive of CURLOPT_CONNECTTIMEOUT.
$ch = curl_init('http://api.hostip.info/get_json.php?ip=' . $ip);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 2); // seconds allowed for the connect phase
curl_setopt($ch, CURLOPT_TIMEOUT, 2);        // seconds allowed for the whole transfer
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$json = curl_exec($ch); // false on timeout or failure
curl_close($ch);
http://php.net/manual/en/function.curl-setopt.php
If this is done using streams, you could use stream_set_timeout for this. Here is a decent example from the PHP manual; it also describes more advanced ways of achieving this:
$fp = fsockopen("www.example.com", 80);
if (!$fp) {
    echo "Unable to open\n";
} else {
    fwrite($fp, "GET / HTTP/1.0\r\n\r\n");
    stream_set_timeout($fp, 2);
    $res = fread($fp, 2000);
    $info = stream_get_meta_data($fp);
    fclose($fp);
    if ($info['timed_out']) {
        echo 'Connection timed out!';
    } else {
        echo $res;
    }
}
There is another solution: just download the DB and run the service yourself on a faster machine of your own:
IP to Geolocation db
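A hypothetical sketch of a local lookup, assuming a downloaded CSV with columns range_start, range_end, country_code (the file name and format here are made up for illustration):
function countryForIp($ip, $csvPath = 'ip2country.csv')
{
    $long = sprintf('%u', ip2long($ip)); // unsigned, also safe on 32-bit PHP
    foreach (file($csvPath) as $line) {
        list($start, $end, $country) = explode(',', trim($line));
        if ($long >= $start && $long <= $end) {
            return $country; // e.g. "US"
        }
    }
    return null; // not found
}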
Related
I am making a website that will check if a website is working and live. I pass in the URL of the site I would like to check, and the following code checks whether the site is live and returns the HTTP response code as well as true or false.
function urlExists($url = NULL)
{
    if ($url == NULL) return false;
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_TIMEOUT, 5);
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 5);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $data = curl_exec($ch);
    $httpcode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);
    if ($httpcode == 0) {
        return array(false, $httpcode);
    } else if ($httpcode < 400) {
        return array(true, $httpcode);
    } else {
        return array(false, $httpcode);
    }
}
With one of the sites I am testing, though, I am getting an HTTP response code of 0 even though I know that the site is live and working.
The site is very slow, as it is a large site on a not very powerful server, so response times can vary between 7 and 25 seconds.
Any help would be greatly appreciated.
Thanks,
Sam
Based on these two links:
https://curl.haxx.se/libcurl/c/CURLOPT_TIMEOUT.html
and
https://curl.haxx.se/libcurl/c/CURLOPT_CONNECTTIMEOUT.html
The first one sets the maximum time the whole request is allowed to take; the second one is a timeout for the connect phase only.
As you said, the site URL you are hitting takes 7-25 seconds to respond, so in the meantime your cURL request is terminated and closed because of these two timeout settings. Increase both values in your code and it will work for you, for example:
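A sketch with example values (10 and 30 seconds are assumptions chosen to comfortably cover the observed 7-25 second responses):
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 10); // example: allow up to 10 s to connect
curl_setopt($ch, CURLOPT_TIMEOUT, 30);        // example: allow up to 30 s overall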
Thanks.
I will offer 2 alternatives for you to compare - along with your curl() function, you will have 3 options to see which one is better/faster for you.
Option A (all PHP versions); requires the URL fopen wrappers (allow_url_fopen) to be enabled:
if (!$fp = fopen($url, 'r')) {
    trigger_error("Unable to open URL ($url)", E_USER_ERROR);
}
$headers = stream_get_meta_data($fp);
fclose($fp);
$http_header_info = $headers['wrapper_data'][0]; // e.g. "HTTP/1.1 200 OK"
$httpCode = (int)substr($http_header_info, 9, 3);
Option B (PHP 5+):
$headers = get_headers($url, 1);
$http_header_info = $headers[0]; // e.g. "HTTP/1.1 200 OK"
$httpCode = substr($http_header_info, 9, 3);
Also, if anyone has benchmarks on these 3 approaches, I am curious to see which is more appropriate (only for retrieving HTTP response headers, of course).
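For anyone who wants to try, a trivial timing harness might look like this sketch (drop each of the three approaches in place of the comment):
$t0 = microtime(true);
// ... one of the three approaches goes here ...
printf("%.4f seconds\n", microtime(true) - $t0);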
Code 0 is often returned when the URL syntax is invalid or when the host cannot be found.
You can also call the curl_error($ch) function (http://php.net/manual/en/function.curl-error.php) to determine the error details.
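For example, a small sketch of how that could fit into the function above, checked before curl_close($ch) is called:
$data = curl_exec($ch);
if ($data === false) {
    // curl_error() gives a human-readable description of the failure
    error_log('cURL error #' . curl_errno($ch) . ': ' . curl_error($ch));
}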
I have a PHP file called testResponse.php, which contains only:
<?php
sleep(5);
echo"go";
?>
Now, I'm calling this file from another page using file_get_contents, like this:
$start = microtime(true);
$opts = array(
    'http' => array(
        'method'  => 'GET',
        'timeout' => 1
    )
);
$context = stream_context_create($opts);
$loc = @file_get_contents("http://www.mywebsite.com/testResponse.php", false, $context);
$end = microtime(true);
echo $end - $start, "\n";
The output is more than 5 seconds, which means that my timeout has been ignored...
I followed the advice of this post: stackoverflow.com/questions/3689371
But it seems that the hostname argument cannot be a path (like www.mywebsite.com/testResponse.php); it has to be just the hostname, like www.mywebsite.com.
So I'm stuck trying to achieve this goal:
Get the content of the page www.test.com/x.php with these constraints:
if test.com doesn't exist or the page x.php doesn't exist, return nothing quickly;
if the page exists but takes more than 1 second to load, abort;
otherwise, get the content of the file.
Edit: By the way, it seems to work when I call this page (testResponse.php) from my local server. Well, it multiplies the timeout by 2: for instance, if I set the timeout to 1, the echoed time will be something like "2.0054645". But only from local...
The solution is to use PHP's cURL functions. The other question you linked to explains things properly, about read timeouts vs. connection timeouts and so on, but neither of those is truly what you're looking for here. Even the connection timeout won't work, because the connection to testResponse.php always succeeds; after that it's just waiting, so what you need is an execution timeout. This is where cURL comes in handy.
So, testResponse.php doesn't need to be altered. In your main file, though, try the following code (this is tested and it works on my server):
$start = microtime(true);
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "http://www.mywebsite.com/testResponse.php");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_TIMEOUT, 1);
$output = curl_exec($ch);
$errno = curl_errno($ch);
if ($errno > 0) {
    if ($errno === 28) {
        echo "Connection timed out.";
    } else {
        echo "Error #" . $errno . ": " . curl_error($ch);
    }
} else {
    echo $output;
}
$end = microtime(true);
echo "<br><br>" . ($end - $start);
curl_close($ch);
This sets the execution time of the cURL session via the CURLOPT_TIMEOUT option set above. So, when the connection times out, $errno will equal 28, the code for cURL's operation-timeout error. The rest of the error codes are listed in the cURL documentation, so you can expand the script above to act accordingly.
Finally, because of the CURLOPT_RETURNTRANSFER option that's set, curl_exec($ch) will return the content of the retrieved page if the session succeeds. Otherwise, it will return false.
Hope this helps!
Edit: Removed the statement setting CURLOPT_HEADER. I also, for some reason, was under the impression that curl_exec($ch) set the value of $ch to the returned contents, forgetting that the contents are returned by curl_exec().
I'm trying to find a way to quickly access a file and then disconnect immediately.
So I've decided to use cURL, since it's the fastest option for me. But I can't figure out how I should "disconnect" cURL.
With the code below, Apache's access log says that the file I tried accessing was indeed accessed, but I'm feeling a little iffy about this, because when I just run the while loop without breaking out of it, it just keeps looping. Shouldn't the loop stop when cURL has finished fetching the file? Or am I just being silly; is the loop just restarting constantly?
<?php
$Resource = curl_init();
curl_setopt($Resource, CURLOPT_URL, '...');
curl_setopt($Resource, CURLOPT_HEADER, 0);
curl_setopt($Resource, CURLOPT_USERAGENT, '...');
while (curl_exec($Resource)) {
    break;
}
curl_close($Resource);
?>
I tried setting the CURLOPT_CONNECTTIMEOUT_MS / CURLOPT_CONNECTTIMEOUT options to very small values, but it didn't help in this case.
Is there a more "proper" way of doing this?
This statement is superfluous:
while (curl_exec($Resource)) {
    break;
}
Instead just keep the return value for future reference:
$result = curl_exec($Resource);
The while loop does not help at all. Now, to your question: you can tell cURL that it should only take some bytes from the body and then quit. That can be achieved by reducing CURLOPT_BUFFERSIZE to a small value and by using a write-callback function to tell cURL to stop:
$withCallback = array(
    CURLOPT_BUFFERSIZE    => 20, // ~ number of bytes you'd like to get per chunk
    CURLOPT_WRITEFUNCTION => function($handle, $data) {
        echo "WRITE: (", strlen($data), ") $data\n";
        return 0; // returning 0 makes cURL abort the transfer
    },
);
$handle = curl_init("http://stackoverflow.com/");
curl_setopt_array($handle, $withCallback);
curl_exec($handle);
curl_close($handle);
Output:
WRITE: (10) <!DOCTYPE
Another alternative is to make a HEAD request by using CURLOPT_NOBODY, which will never fetch the body. But then it's not a GET request.
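A minimal sketch of that variant (CURLOPT_NOBODY switches the HTTP method to HEAD, so only the headers travel over the wire):
$handle = curl_init("http://stackoverflow.com/");
curl_setopt($handle, CURLOPT_NOBODY, true);         // HEAD instead of GET
curl_setopt($handle, CURLOPT_RETURNTRANSFER, true);
curl_exec($handle);                                 // no body is transferred
echo curl_getinfo($handle, CURLINFO_HTTP_CODE);     // e.g. 200
curl_close($handle);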
The connect timeout settings control how long the connect may take before it times out. The connect is the phase up to the point where the server has accepted the connection; it's not related to the phase in which cURL fetches data from the server. That phase is governed by:
CURLOPT_TIMEOUT The maximum number of seconds to allow cURL functions to execute.
You find a long list of available options in the PHP manual: curl_setopt (Docs).
Perhaps this might be helpful:
$GLOBALS["dataread"] = 0;
define("MAX_DATA", 3000); // how many bytes should be read at most?

$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "http://www.php.net/");
curl_setopt($ch, CURLOPT_WRITEFUNCTION, "handlewrite");
curl_exec($ch);
curl_close($ch);

function handlewrite($ch, $data)
{
    $GLOBALS["dataread"] += strlen($data);
    echo "READ " . strlen($data) . " bytes\n";
    if ($GLOBALS["dataread"] > MAX_DATA) {
        return 0; // returning 0 makes cURL abort the transfer
    }
    return strlen($data); // keep going
}
I am using PHP 5.2 and I am fetching data from a URL using the file_get_contents function. This is a loop over 5000 URLs, which I have divided into slots of 500, with a script like this.
Each batch of 500 takes 3 hours to complete, because some URLs take too much time while others respond in 1 second, which is fine.
What I want: if a URL takes more than 30 seconds, skip it and go to the next one.
I want to stop the fetch after 30 seconds.
<?php
// Create the stream context
$context = stream_context_create(array(
'http' => array(
'timeout' => 1 // Timeout in seconds
)
));
// Fetch the URL's contents
echo date("Y-m-d H:i:s")."\n";
$contents = file_get_contents('http://example.com', 0, $context);
echo date("Y-m-d H:i:s")."\n";
// Check for empties
if (!empty($contents))
{
// Woohoo
// echo $contents;
echo "file fetched";
}
else
{
echo $contents;
echo "more than 30 sec";
}
?>
I have already done that, and it is not working for me: the file_get_contents function does not stop, it just continues. The only difference now is that I get no result after the timeout, but the time it takes is the same, as you can see in the output.
Output of the PHP script:
2012-03-09 11:26:38
2012-03-09 11:26:40
more than 30 sec
You can set the HTTP timeout (not tested):
<?php
$ctx = stream_context_create(array(
'http' => array(
'timeout' => 30
)
));
file_get_contents("http://example.com/", 0, $ctx);
Edit: I don't know why this code isn't working for you. But if you don't manage to get it to work, you may also want to give cURL a try. It could even be faster (though I don't know whether it really is...).
If that works for you, you could then use the curl_setopt function to set the timeout with the CURLOPT_TIMEOUT flag.
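A rough sketch of that, assuming the 30-second budget from the question:
$ch = curl_init('http://example.com/');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the body instead of printing it
curl_setopt($ch, CURLOPT_TIMEOUT, 30);          // abort the whole request after 30 s
$contents = curl_exec($ch);                     // false if the timeout is hit
curl_close($ch);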
There is some info about timeouts in the PHP manual:
http://php.net/manual/en/function.file-get-contents.php
There is mention of the following, as of PHP 5.2.1:
ini_set('default_socket_timeout', 120);
$a = file_get_contents("http://abcxyz.com");
or adding a context, which is more or less the same:
// Create the stream context
$context = stream_context_create(array(
    'http' => array(
        'timeout' => 3 // Timeout in seconds
    )
));
// Fetch the URL's contents
$contents = file_get_contents('http://abcxyz.com', 0, $context);
A third option is using PHP's fsockopen, which has an explicit timeout parameter:
http://www.php.net/manual/en/function.fsockopen.php
$timeout = 2; // seconds
$fp = fsockopen($hostname, 80, $errNo, $errStr, $timeout);
/* stops connecting after 2 seconds,
   stores the error number in $errNo
   and the error string in $errStr;
   note that fsockopen() expects a hostname, not a full URL */
To save writing a lot of code, you could use it as a quick check to see whether the host is up, i.e.:
if (pingLink($domain, $timeout)) {
    file_get_contents()
}

function pingLink($domain, $timeout = 30)
{
    $status = 0; // default: site is down
    $file = @fsockopen($domain, 80, $errNo, $errStr, $timeout); // port 80, honour $timeout
    if ($file) {
        $status = 1; // site is up
        fclose($file);
    }
    return $status;
}
I am working on a PHP script that makes an API call to an external site. However, if this site is not available or the request times out, I would like my function to return false.
I have found the following, but I am not sure how to implement it in my script, since I use "file_get_contents" to retrieve the content of the external file call.
Limit execution time of an function or command PHP
$fp = fsockopen("www.example.com", 80);
if (!$fp) {
    echo "Unable to open\n";
} else {
    fwrite($fp, "GET / HTTP/1.0\r\n\r\n");
    stream_set_timeout($fp, 2);
    $res = fread($fp, 2000);
    $info = stream_get_meta_data($fp);
    fclose($fp);
    if ($info['timed_out']) {
        echo 'Connection timed out!';
    } else {
        echo $res;
    }
}
(From: http://php.net/manual/en/function.stream-set-timeout.php)
How would you address such an issue? Thanks!
I'd recommend using the cURL family of PHP functions. You can then set the timeout using curl_setopt():
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 2); // two-second timeout
This will cause the curl_exec() function to return FALSE after the timeout.
In general, using cURL is better than any of the file reading functions; it's more dependable, has more options and is not regarded as a security threat. Many sysadmins disable remote file reading, so using cURL will make your code more portable and secure.
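For example, a small wrapper along those lines (fetch_or_false is a hypothetical name and the 2-second values are placeholders):
function fetch_or_false($url, $timeout = 2)
{
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, $timeout); // connect-phase limit
    curl_setopt($ch, CURLOPT_TIMEOUT, $timeout);        // whole-transfer limit
    $data = curl_exec($ch); // false on timeout or unreachable host
    curl_close($ch);
    return $data;
}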
<?php
$fp = fsockopen("www.example.com", 80);
if (!$fp) {
    echo "Unable to open\n";
} else {
    stream_set_timeout($fp, 2); // stream resource, number of seconds till timeout
    // get your file contents
    fwrite($fp, "GET / HTTP/1.0\r\n\r\n");
    $res = fread($fp, 2000);
    fclose($fp);
}
?>
From the PHP manual for file_get_contents (comments section):
<?php
$ctx = stream_context_create(array(
    'http' => array(
        'timeout' => 1
    )
));
file_get_contents("http://example.com/", 0, $ctx);
?>
<?php
// 4-second connect timeout, then a 2-second read timeout on the open stream
$fp = fsockopen("www.example.com", 80, $errno, $errstr, 4);
if ($fp) {
    stream_set_timeout($fp, 2);
    // ... read from $fp here ...
    fclose($fp);
}
?>