I am making a website that will check whether another website is working and live. I pass in the URL of the site I would like to check, and the following code checks whether the site is live and returns the HTTP response code along with true or false.
function urlExists($url = NULL)
{
    if ($url == NULL) return false;

    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_TIMEOUT, 5);
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 5);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $data = curl_exec($ch);
    $httpcode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);

    if ($httpcode == 0) {
        return array(false, $httpcode);
    } else if ($httpcode < 400) {
        return array(true, $httpcode);
    } else {
        return array(false, $httpcode);
    }
}
With one of the sites I am testing, though, I am getting an HTTP response code of 0 even though I know that the site is live and working.
The site is very slow, as it's a large site on a not very powerful server, so response times can vary between 7 and 25 seconds.
Any help would be greatly appreciated.
Thanks,
Sam
Based on these two links:
https://curl.haxx.se/libcurl/c/CURLOPT_TIMEOUT.html
and
https://curl.haxx.se/libcurl/c/CURLOPT_CONNECTTIMEOUT.html
The first one sets the maximum time the whole request is allowed to take; the second one sets the timeout for the connect phase only.
As you said, the site URL you are hitting takes 7-25 seconds to respond; meanwhile, your cURL request is terminated and closed because of these two time settings.
Increase these two time settings in your code and it will work for you.
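For example, a minimal sketch (the 30- and 10-second values here are assumptions; tune them to your slowest expected response):
// Allow up to 30 seconds for the whole request, since responses can take 7-25 seconds
curl_setopt($ch, CURLOPT_TIMEOUT, 30);
// Allow up to 10 seconds just for establishing the connection
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 10);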
Thanks.
I will offer two alternatives for you to compare; along with your cURL function, you will have three options to see which one is better/faster for you.
Option A (all PHP versions); requires allow_url_fopen to be enabled:
if (!$fp = fopen($url, 'r')) {
    trigger_error("Unable to open URL ($url)", E_USER_ERROR);
}

$headers = stream_get_meta_data($fp);
fclose($fp);

// The first wrapper_data entry is the status line, e.g. "HTTP/1.1 200 OK"
$http_header_info = $headers['wrapper_data'][0];
$httpCode = (int)substr($http_header_info, 9, 3);
Option B (PHP 5+):
$headers = get_headers($url, 1);
// $headers[0] is the status line, e.g. "HTTP/1.1 200 OK"
$http_header_info = $headers[0];
$httpCode = substr($http_header_info, 9, 3);
Also, if anyone has benchmarks on these three approaches, I am curious to see which is more appropriate (only for retrieving HTTP response headers, of course).
A code of 0 is often returned when the URL syntax is invalid or the host cannot be found.
You can also call the curl_error($ch) function (http://php.net/manual/en/function.curl-error.php) to determine the error details.
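For example, a minimal sketch of reading the error details after a failed request:
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
if (curl_exec($ch) === false) {
    // curl_errno() gives the numeric error code, curl_error() a readable message
    echo 'cURL error (' . curl_errno($ch) . '): ' . curl_error($ch);
}
curl_close($ch);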
I want to show the contents of a remote file (a file on another server) on my website.
I used the following code; the readfile() function works fine on the current server:
<?php
echo readfile("editor.php");
But when I tried to get a remote file
<?php
echo readfile("http://example.com/php_editor.php");
It showed the following error:
301 moved
The document has moved here 224
I am getting this error with remote files only; local files are shown with no problem.
Is there any way to fix this?
Thanks!
Option 1 - cURL
Use cURL and set the CURLOPT_FOLLOWLOCATION option to true:
<?php
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "http://example.com");
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
// Execute once and keep the result, so the request is not sent twice
$data = curl_exec($ch);
if ($data === FALSE) {
    echo "Error: " . curl_error($ch);
} else {
    echo $data;
}
curl_close($ch);
?>
Option 2 - file_get_contents
According to the PHP documentation, file_get_contents() will follow up to 20 redirects by default, so you could use that function. On failure, file_get_contents() will return FALSE; otherwise it will return the entire file.
<?php
$string = file_get_contents("http://www.example.com");
if ($string === FALSE) {
    echo "Could not read the file.";
} else {
    echo $string;
}
?>
I'm trying to download images from a remote server, resize them and then save them to the local machine.
To do this I use WideImage.
<?php
include_once($_SERVER['DOCUMENT_ROOT'].'libraries/wideimage/index.php');
include_once($_SERVER['DOCUMENT_ROOT'].'query.php');
do {
    WideImage::load($row_getImages['remote'])
        ->resize(360, 206, 'outside')
        ->saveToFile($_SERVER['DOCUMENT_ROOT'].$row_getImages['local']);
} while ($row_getImages = mysql_fetch_assoc($getImages));
?>
This works most of the time, but it has a fatal flaw.
If for some reason one of these images is not available or doesn't exist,
WideImage throws a fatal error, preventing any further images from downloading.
I have tried checking whether the file exists like this:
do {
    if (file_exists($row_getImages['remote'])) {
        WideImage::load($row_getImages['remote'])
            ->resize(360, 206, 'outside')
            ->saveToFile($_SERVER['DOCUMENT_ROOT'].$row_getImages['local']);
    }
} while ($row_getImages = mysql_fetch_assoc($getImages));
But this doesn't work.
What am I doing wrong?
Thanks
According to this page, file_exists cannot check for remote files. Someone suggested there, in the comments, that they use fopen as a workaround:
<?php
function fileExists($path) {
    // @ suppresses the warning when the file cannot be opened
    return (@fopen($path, "r") == true);
}
?>
You could check via cURL:
$curl = curl_init('http://example.com/my_image.jpg');
curl_setopt($curl, CURLOPT_RETURNTRANSFER, TRUE);
curl_setopt($curl, CURLOPT_NOBODY, TRUE);
// The request must be executed before the status code is available
curl_exec($curl);
$httpcode = curl_getinfo($curl, CURLINFO_HTTP_CODE);
curl_close($curl);
if ($httpcode < 400) {
    // do stuff
}
After some digging around the net, I decided on requesting the HTTP headers instead of making a cURL request, as apparently it has less overhead.
This is an adaptation of Nick's comment from the PHP forums at:
http://php.net/manual/en/function.get-headers.php
function get_http_response_code($theURL) {
    $headers = get_headers($theURL);
    // The status line looks like "HTTP/1.1 200 OK"; characters 9-11 hold the code
    return substr($headers[0], 9, 3);
}

$URL = htmlspecialchars($postURL);
$statusCode = intval(get_http_response_code($URL));
if ($statusCode == 200) { // 200 = OK
    echo '<img src="'.htmlspecialchars($URL).'" alt="Image: '.htmlspecialchars($URL).'" />';
} else {
    echo '<img src="/Img/noPhoto.jpg" alt="This remote image link is broken" />';
}
Nick calls this function "a quick and nasty solution", although it is working well for me :-)
I need to check whether a URL is an image URL or not. How can I do this?
Examples:
http://www.google.com/ is not an image URL.
http://www.hoax-slayer.com/images/worlds-strongest-dog.jpg is an image URL.
https://stackoverflow.com/search?q=.jpg is not an image URL.
http://www.google.com/profiles/c/photos/private/AIbEiAIAAABECK386sLjh92M4AEiC3ZjYXJkX3Bob3RvKigyOTEzMmFmMDI5ODQ3MzQxNWQxY2VlYjYwYmE2ZTA4YzFhNDhlMjBmMAEFQ7chSa4PMFM0qw02kilNVE1Hpw is an image URL.
If you want to be absolutely sure, and your PHP is enabled for remote connections, you can just use
getimagesize('url');
If it returns an array, it is an image type recognized by PHP, even if the image extension is not in the URL (per your second link). Keep in mind that this method makes a remote connection for each request, so perhaps cache URLs that you have already probed in a database to reduce connections.
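A minimal sketch of such a cache, here just an in-memory array (a database table would work the same way; the function name is made up for illustration):
function isImageUrl($url) {
    static $cache = array();
    if (!array_key_exists($url, $cache)) {
        // getimagesize() makes a remote connection, so remember the result
        $cache[$url] = (@getimagesize($url) !== false);
    }
    return $cache[$url];
}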
You can send a HEAD request to the server and then check the Content-Type. This way you at least know what the server "thinks" the type is.
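A minimal sketch of that idea using get_headers() (assuming allow_url_fopen is enabled; the URL is a placeholder):
// Ask for headers only instead of downloading the body
stream_context_set_default(array('http' => array('method' => 'HEAD')));
$headers = get_headers('http://www.example.com/image.jpg', 1);
$contentType = isset($headers['Content-Type']) ? $headers['Content-Type'] : '';
// After redirects, get_headers() may return an array of Content-Type values
if (is_array($contentType)) {
    $contentType = end($contentType);
}
$isImage = (strpos($contentType, 'image/') === 0);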
You can check whether a URL is an image by using the getimagesize function like below.
function validImage($file) {
    // getimagesize() returns FALSE when the file is not a readable image
    $size = @getimagesize($file);
    return $size !== false && strtolower(substr($size['mime'], 0, 5)) == 'image';
}

$image = validImage('http://www.example.com/image.jpg');
echo 'This image' . ($image ? ' is' : ' is not') . ' an image file.';
I think the idea is to get the headers of the URL via cURL and check them.
After calling curl_exec() to get a web page, call curl_getinfo() to get the content type string from the HTTP header.
See how to do it at this link:
http://nadeausoftware.com/articles/2007/06/php_tip_how_get_web_page_content_type#IfyouareusingCURL
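A minimal sketch of that approach (the URL is a placeholder):
$ch = curl_init('http://www.example.com/some/file');
curl_setopt($ch, CURLOPT_NOBODY, true);          // the headers are enough here
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_exec($ch);
// CURLINFO_CONTENT_TYPE returns e.g. "image/jpeg", or NULL if the server sent none
$contentType = curl_getinfo($ch, CURLINFO_CONTENT_TYPE);
curl_close($ch);
$isImage = is_string($contentType) && strpos($contentType, 'image/') === 0;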
You can use this:
$is = @getimagesize($link);
if (!$is) {
    $link = '';
} elseif (!in_array($is[2], array(1, 2, 3))) {
    // $is[2] is the image type: 1 = GIF, 2 = JPEG, 3 = PNG
    $link = '';
} elseif ($is['bits'] >= 8) {
    $srcs[] = $link;
}
Here is a way that requires cURL but is faster than getimagesize(), as it does not download the whole image.
Disclaimer: it checks the headers, and they are not always correct.
function is_url_image($url) {
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_HEADER, 1);
    curl_setopt($ch, CURLOPT_NOBODY, 1);
    $output = curl_exec($ch);
    curl_close($ch);

    $headers = array();
    foreach (explode("\n", $output) as $line) {
        // Limit to 2 parts so header values that contain ':' stay intact
        $parts = explode(':', $line, 2);
        if (count($parts) == 2) {
            $headers[trim($parts[0])] = trim($parts[1]);
        }
    }
    return isset($headers['Content-Type']) && strpos($headers['Content-Type'], 'image/') === 0;
}
$ext = strtolower(end(explode('.', $filename)));
switch ($ext) {
    case 'jpg':
        // Blah
        break;
}
Hard version (just trying):
// Turn off E_NOTICE reporting first
if (getimagesize($url) !== false) {
    // Image
}
The best I could find, an if (fclose(fopen(...))) type check, makes the page load really slowly.
Basically what I'm trying to do is the following: I have a list of websites, and I want to display their favicons next to them. However, if a site doesn't have one, I'd like to replace it with another image rather than display a broken image.
You can instruct curl to use the HTTP HEAD method via CURLOPT_NOBODY.
More or less
$ch = curl_init("http://www.example.com/favicon.ico");
curl_setopt($ch, CURLOPT_NOBODY, true);
curl_exec($ch);
$retcode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
// $retcode >= 400 -> not found, $retcode = 200, found.
curl_close($ch);
Anyway, you only save the cost of the HTTP transfer, not the TCP connection establishment and closing. And since favicons are small, you might not see much improvement.
Caching the result locally seems a good idea if it turns out to be too slow.
HEAD checks the time of the file and returns it in the headers. You can do as browsers do and get the CURLINFO_FILETIME of the icon.
In your cache you can store the URL => [favicon, timestamp] pair; you can then compare the timestamp and reload the favicon if it has changed.
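A minimal sketch of reading that timestamp with cURL (the caching itself is left out; the URL is a placeholder):
$ch = curl_init('http://www.example.com/favicon.ico');
curl_setopt($ch, CURLOPT_NOBODY, true);
curl_setopt($ch, CURLOPT_FILETIME, true);    // ask curl to retrieve the remote file time
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_exec($ch);
// CURLINFO_FILETIME is a Unix timestamp, or -1 if the server did not report one
$timestamp = curl_getinfo($ch, CURLINFO_FILETIME);
curl_close($ch);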
As Pies says, you can use cURL. You can get cURL to only give you the headers, and not the body, which might make it faster. A bad domain could always take a while because you will be waiting for the request to time out; you could probably change the timeout length using cURL.
Here is example:
function remoteFileExists($url) {
    $curl = curl_init($url);

    // Don't fetch the actual page; you only want to check that the connection is OK
    curl_setopt($curl, CURLOPT_NOBODY, true);

    // Do the request
    $result = curl_exec($curl);

    $ret = false;

    // If the request did not fail
    if ($result !== false) {
        // If the request was OK, check the response code
        $statusCode = curl_getinfo($curl, CURLINFO_HTTP_CODE);

        if ($statusCode == 200) {
            $ret = true;
        }
    }

    curl_close($curl);

    return $ret;
}

$exists = remoteFileExists('http://stackoverflow.com/favicon.ico');
if ($exists) {
    echo 'file exists';
} else {
    echo 'file does not exist';
}
CoolGoose's solution is good, but this is faster for large files (as it only tries to read 1 byte):
if (false === file_get_contents("http://example.com/path/to/image",0,null,0,1)) {
$image = $default_image;
}
This is not an answer to your original question, but a better way of doing what you're trying to do:
Instead of actually trying to get the site's favicon directly (which is a royal pain given it could be /favicon.png, /favicon.ico, /favicon.gif, or even /path/to/favicon.png), use Google:
<img src="http://www.google.com/s2/favicons?domain=[domain]">
Done.
A complete function of the most voted answer:
function remote_file_exists($url)
{
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_NOBODY, 1);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1); // handles 301/302 redirects
    curl_exec($ch);
    $httpCode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);
    return $httpCode == 200;
}
You can use it like this:
if (remote_file_exists($url)) {
    // file exists, do something
}
If you are dealing with images, use getimagesize(). Unlike file_exists(), this built-in function supports remote files. It will return an array that contains the image information (width, height, type, etc.). All you have to do is check the first element in the array (the width). Use print_r to output the contents of the array:
$imageArray = @getimagesize("http://www.example.com/image.jpg");
if ($imageArray !== false && $imageArray[0]) {
    echo "it's an image and here is the image's info<br>";
    print_r($imageArray);
} else {
    echo "invalid image";
}
if (false === file_get_contents("http://example.com/path/to/image")) {
$image = $default_image;
}
Should work ;)
This can be done by obtaining the HTTP status code (404 = Not Found), which is possible with file_get_contents making use of context options. The following code takes redirects into account and will return the status code of the final destination:
$url = 'http://example.com/';
$code = FALSE;

$options['http'] = array(
    'method' => "HEAD",
    'ignore_errors' => 1
);

$body = file_get_contents($url, NULL, stream_context_create($options));

foreach ($http_response_header as $header) {
    sscanf($header, 'HTTP/%*d.%*d %d', $code);
}

echo "Status code: $code";
If you don't want to follow redirects, you can do it similarly:
$url = 'http://example.com/';
$code = FALSE;

$options['http'] = array(
    'method' => "HEAD",
    'ignore_errors' => 1,
    'max_redirects' => 0
);

$body = file_get_contents($url, NULL, stream_context_create($options));

sscanf($http_response_header[0], 'HTTP/%*d.%*d %d', $code);

echo "Status code: $code";
Some of the functions, options and variables in use are explained with more detail on a blog post I've written: HEAD first with PHP Streams.
PHP's built-in functions may not work for checking a URL if the allow_url_fopen setting is set to Off for security reasons. cURL is a better option, as we would not need to change our code at a later stage. Below is the code I used to verify a valid URL:
$url = str_replace(' ', '%20', $url);
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
curl_setopt($ch, CURLOPT_NOBODY, true);
curl_exec($ch);
$httpcode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
curl_close($ch);
return ($httpcode >= 200 && $httpcode < 300);
Note the CURLOPT_SSL_VERIFYPEER option, which is set to false so that URLs starting with HTTPS can also be checked (at the cost of skipping certificate verification).
To check for the existence of images, exif_imagetype should be preferred over getimagesize, as it is much faster.
To suppress the E_NOTICE, just prepend the error control operator (@).
if (@exif_imagetype($filename)) {
    // Image exists
}
As a bonus, with the returned value (IMAGETYPE_XXX) from exif_imagetype we could also get the mime-type or file extension with image_type_to_mime_type / image_type_to_extension.
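For instance, a minimal sketch (the URL is a placeholder):
$type = @exif_imagetype('http://www.example.com/image.jpg');
if ($type !== false) {
    echo image_type_to_mime_type($type);   // e.g. "image/jpeg"
    echo image_type_to_extension($type);   // e.g. ".jpeg"
}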
A radical solution would be to display the favicons as background images in a div above your default icon. That way, all overhead would be placed on the client while still not displaying broken images (missing background images are ignored in all browsers AFAIK).
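A minimal sketch of that markup, generated from PHP here (all URLs and sizes are placeholders):
// The remote favicon is layered above a local default icon; if the remote
// image is missing, the browser just shows the default underneath.
echo '<div style="background:url(/Img/noPhoto.jpg); width:16px; height:16px;">'
   . '<div style="background:url(http://www.example.com/favicon.ico); width:16px; height:16px;"></div>'
   . '</div>';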
You could use the following:
$file = 'http://mysite.co.za/images/favicon.ico';
$file_exists = (#fopen($file, "r")) ? true : false;
This worked for me when trying to check if an image exists at the URL.
function remote_file_exists($url) {
    // Matches a "HTTP/1.x 200 OK" status line in the response headers
    return (bool)preg_match('~HTTP/1\.\d\s+200\s+OK~', @current(get_headers($url)));
}

$ff = "http://www.emeditor.com/pub/emed32_11.0.5.exe";
if (remote_file_exists($ff)) {
    echo "file exists!";
} else {
    echo "file does not exist!";
}
This works for me to check if a remote file exists in PHP:
$url = 'https://cdn.sstatic.net/Sites/stackoverflow/img/favicon.ico';
$header_response = get_headers($url, 1);
if (strpos($header_response[0], "404") !== false) {
    echo 'File does NOT exist';
} else {
    echo 'File exists';
}
You can use:
$url = getimagesize("http://www.flickr.com/photos/27505599#N07/2564389539/");
if (!is_array($url)) {
    $default_image = "…/directoryFolder/junal.jpg";
}
If you're using the Laravel framework or the Guzzle package, there is also a much simpler way using the Guzzle client; it also works when links are redirected:
$client = new \GuzzleHttp\Client(['allow_redirects' => ['track_redirects' => true]]);

try {
    $response = $client->request('GET', 'your/url');
    if ($response->getStatusCode() != 200) {
        // not exists
    }
} catch (\GuzzleHttp\Exception\GuzzleException $e) {
    // not exists
}
More in the documentation: https://docs.guzzlephp.org/en/latest/faq.html#how-can-i-track-redirected-requests
You should issue HEAD requests, not GET ones, because you don't need the URI contents at all. As Pies said above, you should check the status code (in the 200-299 range, and you may optionally follow 3xx redirects).
The answers to this question contain a lot of code examples which may be helpful: PHP / Curl: HEAD Request takes a long time on some sites
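A minimal sketch of that check with cURL:
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_NOBODY, true);          // issue a HEAD request, no body
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);  // optionally follow 3xx redirects
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_exec($ch);
$code = curl_getinfo($ch, CURLINFO_HTTP_CODE);
curl_close($ch);
// Any status in the 200-299 range counts as success
$exists = ($code >= 200 && $code < 300);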
There's an even more sophisticated alternative. You can do the checking all client-side using a jQuery trick.
$('a[href^="http://"]').filter(function(){
return this.hostname && this.hostname !== location.hostname;
}).each(function() {
var link = jQuery(this);
var faviconURL =
link.attr('href').replace(/^(http:\/\/[^\/]+).*$/, '$1')+'/favicon.ico';
var faviconIMG = jQuery('<img src="favicon.png" alt="" />')['appendTo'](link);
var extImg = new Image();
extImg.src = faviconURL;
if (extImg.complete)
faviconIMG.attr('src', faviconURL);
else
extImg.onload = function() { faviconIMG.attr('src', faviconURL); };
});
From http://snipplr.com/view/18782/add-a-favicon-near-external-links-with-jquery/ (the original blog is presently down)
All the answers here that use get_headers() are doing a GET request.
It's much faster/cheaper to just do a HEAD request.
To make sure that get_headers() does a HEAD request instead of a GET, you should add this:
stream_context_set_default(
    array(
        'http' => array(
            'method' => 'HEAD'
        )
    )
);
So to check if a file exists, your code would look something like this:
stream_context_set_default(
    array(
        'http' => array(
            'method' => 'HEAD'
        )
    )
);

$headers = get_headers('http://website.com/dir/file.jpg', 1);
$file_found = stristr($headers[0], '200');
$file_found will be either FALSE or the matched string (which is truthy), so it works as a boolean check.
If the file is not hosted externally, you might translate the remote URL to an absolute path on your webserver. That way you don't have to call cURL or file_get_contents, etc.
function remoteFileExists($url) {
    $root = realpath($_SERVER["DOCUMENT_ROOT"]);
    $urlParts = parse_url($url);

    if (!isset($urlParts['path'])) {
        return false;
    }

    return is_file($root . $urlParts['path']);
}
remoteFileExists( 'https://www.yourdomain.com/path/to/remote/image.png' );
Note: Your webserver must populate DOCUMENT_ROOT to use this function
I don't know if this one is any faster when the file does not exist remotely, but you could give is_file() a shot:
$favIcon = 'default FavIcon';

if (is_file($remotePath)) {
    $favIcon = file_get_contents($remotePath);
}
If you're using the Symfony framework, there is also a much simpler way using the HttpClientInterface:
private function remoteFileExists(string $url, HttpClientInterface $client): bool {
    $response = $client->request(
        'GET',
        $url // e.g. http://example.com/file.txt
    );

    return $response->getStatusCode() == 200;
}
The docs for the HttpClient are also very good and maybe worth looking into if you need a more specific approach: https://symfony.com/doc/current/http_client.html
You can use the Filesystem component:
use Symfony\Component\Filesystem\Filesystem;
use Symfony\Component\Filesystem\Exception\IOExceptionInterface;
and check:
$fileSystem = new Filesystem();
if ($fileSystem->exists('path_to_file')) {
    // ...
}
Please check the URL below; I believe it will help you. They provide two ways to overcome this, with a bit of explanation.
Try this one.
// Remote file URL
$remoteFile = 'https://www.example.com/files/project.zip';

// Initialize cURL
$ch = curl_init($remoteFile);
curl_setopt($ch, CURLOPT_NOBODY, true);
curl_exec($ch);
$responseCode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
curl_close($ch);

// Check the response code
if ($responseCode == 200) {
    echo 'File exists';
} else {
    echo 'File not found';
}
Or visit the URL:
https://www.codexworld.com/how-to/check-if-remote-file-exists-url-php/