I have been searching for an answer to this all day, but with no luck!
I want to download/copy an image from the web to a location on my server. The code below doesn't seem to throw any errors; the image is just not saving to the required directory (or any directory).
As you can see, I am using cURL to get the image, and the variable $contents is returning true (1), so I assume the script runs, but I am clearly missing something.
Many thanks in advance for your help. :-)
$dir = URL::base() . "/img/products/";
$imgSrc = "an image on the web";
$file = fopen($dir, "wb");
$headers[] = 'Accept: image/gif, image/x-bitmap, image/jpeg, image/pjpeg';
$headers[] = 'Connection: Keep-Alive';
$headers[] = 'Content-type: application/x-www-form-urlencoded;charset=UTF-8';
$user_agent = 'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)';
$ch = curl_init($imgSrc);
curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_USERAGENT, $user_agent);
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_FILE, $file); // location to write to
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 60);
$contents = curl_exec($ch);
$curl_errno = curl_errno($ch);
$curl_error = curl_error($ch);
curl_close($ch);
fclose($lfile);
if ($curl_errno > 0)
{
Log::write("CURL", "cURL Error (".$curl_errno."): ".$curl_error);
}
else
{
Log::write("CURL", "Data received: " . $contents);
}
return;
Give PHP write access to the file where cURL stores the contents. This can be done in three ways (see the permission-check sketch after this list):
If you have terminal access, use chmod to grant write access.
If you have cPanel access, use the file manager and grant write access by changing the file's permissions.
Otherwise, connect via FTP, change the file's access attributes and grant write access.
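From PHP itself you can also verify the permissions before attempting the download. A minimal sketch, assuming the document-root style path used in the accepted solution further down:

$dir = $_SERVER['DOCUMENT_ROOT'] . "/img/products/";
if (!is_dir($dir)) {
    mkdir($dir, 0775, true); // create the directory if it does not exist yet
}
if (!is_writable($dir)) {
    chmod($dir, 0775); // only works if the PHP process owns the directory
}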
Don't use curl.
If all you need to do is download an image, go for "file_get_contents" instead.
It's dead easy:
$fileContents = file_get_contents("https://www.google.com/images/srpr/logo4w.png");
File::put('where/to/store/the/image.jpg', $fileContents);
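Note that file_get_contents() returns false on failure, so it is worth checking before writing. A small sketch, keeping the same placeholder paths:

$fileContents = file_get_contents("https://www.google.com/images/srpr/logo4w.png");
if ($fileContents === false) {
    // handle the download failure (log it, retry, skip the image, etc.)
} else {
    File::put('where/to/store/the/image.jpg', $fileContents);
}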
function saveImageToFile($image_url, $output_filename)
{
$ch = curl_init($image_url);
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_BINARYTRANSFER, 1);
$raw = curl_exec($ch);
curl_close($ch);
if(file_exists($output_filename))
{
unlink($output_filename); // saves over existing files
}
$fp = fopen($output_filename, 'x');
fwrite($fp, $raw);
fclose($fp);
}
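For example, assuming the target directory exists and is writable (URL and path are placeholders):

saveImageToFile('http://example.com/some/image.jpg', $_SERVER['DOCUMENT_ROOT'] . '/img/products/image.jpg');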
Your problem is quite simple, and I have no idea how everyone else missed it. It is Laravel-specific: your $dir variable holds an HTTP URL, but what you need is a filesystem path.
For Laravel, change your URL::base() to path("public") to tell Laravel to stop building HTTP URLs and instead return the local path to the public folder (/your/laravel/setup/path/public/).
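Roughly, the difference looks like this (the paths shown are just illustrative):

echo URL::base();    // http://example.com/              (an HTTP URL, which fopen() cannot write to)
echo path("public"); // /your/laravel/setup/path/public/ (a local filesystem path you can write to)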
Code:
$dir = path("public") . "img/products/";
$imgSrc = "an image on the web";
$file = fopen($dir . substr($imgSrc, strrpos($imgSrc, "/") + 1), "wb");
$headers[] = 'Accept: image/gif, image/x-bitmap, image/jpeg, image/pjpeg';
$headers[] = 'Connection: Keep-Alive';
$headers[] = 'Content-type: application/x-www-form-urlencoded;charset=UTF-8';
$user_agent = 'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)';
$ch = curl_init($imgSrc);
curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_USERAGENT, $user_agent);
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_FILE, $file); // location to write to
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 60);
$contents = curl_exec($ch);
$curl_errno = curl_errno($ch);
$curl_error = curl_error($ch);
curl_close($ch);
if ($curl_errno > 0)
{
Log::write("CURL", "cURL Error (".$curl_errno."): ".$curl_error);
}
else
{
Log::write("CURL", "Data received: " . $contents);
fwrite($file,$contents);
fclose($file);
}
return;
OK, finally got it all working and here is the code if anyone else ever tries to do the same sort of thing!
I was missing these parts:
$dir = $_SERVER['DOCUMENT_ROOT'] . "/img/products/";
and
fwrite($file,$contents);
So here is my final code... credit to Sébastien for pointing me in the right direction. Thanks.
if($method == 'save')
{
$productId = Input::get('pId');
$removeProductImages = DB::table('product_ref_images')->where('product_id', '=', $productId)->delete();
$imagesData = Input::get('imageRefs');
$dir = $_SERVER['DOCUMENT_ROOT'] . "/img/products/";
$sortOrder = 0;
for ($i=0; $i < count($imagesData); $i++) {
$imgSrc = trim($imagesData[$i]['imgSrc']);
$imgId = trim($imagesData[$i]['imgId']);
$file = fopen($dir . basename($imgSrc), "wb");
$headers[] = 'Accept: image/gif, image/x-bitmap, image/jpeg, image/pjpeg';
$headers[] = 'Connection: Keep-Alive';
$headers[] = 'Content-type: application/x-www-form-urlencoded;charset=UTF-8';
$user_agent = 'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)';
$ch = curl_init($imgSrc);
curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_USERAGENT, $user_agent);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_FILE, $file);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 60);
$contents = curl_exec($ch);
$curl_errno = curl_errno($ch);
$curl_error = curl_error($ch);
curl_close($ch);
if ($curl_errno > 0)
{
Log::write("CURL", "cURL Error (".$curl_errno."): ".$curl_error);
break;
}
else
{
fwrite($file,$contents);
fclose($file);
$imageIds = DB::table('product_ref_images')->order_by('image_id', 'desc')->first();
if($imageIds == null)
{
$imageIds = 0;
}
else
{
$imageIds = $imageIds->image_id;
}
$updateImages = DB::table('product_ref_images')
->insert(array(
'image_id' => $imageIds + 1,
'product_id' => $productId,
'flickr_image_id' => $imgId,
'sort_order' => $sortOrder++,
'local_storage_url' => $dir . basename($imgSrc),
'created_at' => date("Y-m-d H:i:s"),
'updated_at' => date("Y-m-d H:i:s")
));
}
}
return Response::json('Complete');
}
Remove this line:
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
You're storing the response to a file, not to the return variable. Otherwise, you have to save it yourself (like you did in the other solution).
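A minimal sketch of that approach, letting cURL write straight to the open handle (URL and path are placeholders):

$file = fopen($_SERVER['DOCUMENT_ROOT'] . '/img/products/image.jpg', 'wb');
$ch = curl_init('http://example.com/image.jpg');
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_FILE, $file); // the response body is written directly to $file
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 60);
curl_exec($ch);
curl_close($ch);
fclose($file);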
Related
With the following code, a name with an ID (each name in the GND can be addressed by an ID) is retrieved via the GND interface. This works.
Now I want to get many names at the same time with a cURL loop. The ID in the URL must always be increased by one and the cURL request must run in a loop. How can I do this?
With this code I receive, for example, names from the GND database:
<?php
header('Content-type: text/html; charset=utf-8');
$User_Agent = 'Mozilla/5.0 (Windows NT 6.1; rv:60.0) Gecko/20100101 Firefox/60.0';
$url = "http://hub.culturegraph.org/entityfacts/118600001";
$request_headers = [];
$request_headers[] = 'Accept: application/json';
$request_headers[] = 'charset=utf-8';
$request_headers[] = 'Content-Type: application/json; charset=utf-8';
$request_headers[] = 'Accept-Encoding: gzip, deflate, identity';
$request_headers[] = 'Accept-Language: de,en-US;q=0.7,en;q=0.3';
for ($i = 1; $i <= 10; $i++)
{
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_USERAGENT, $User_Agent);
curl_setopt($ch, CURLOPT_HTTPHEADER, $request_headers);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_ENCODING, "");
$result = curl_exec($ch);
$code = curl_getinfo($ch, CURLINFO_HTTP_CODE);
}
curl_close($ch);
$data = json_decode($result, true);
if ($code == 200) {
$data = json_decode($result, true);
echo 'Erfolg';
} else {
$error = $data['Error'];
echo 'Beim anfordern der GND-Daten ist ein Fehler aufgetreten: Fehlercode ' . $error;
echo ' Zurueck<br />';
}
var_dump($data['preferredName']);
Result for URL with ID 118600001 = Gerhardt von Reutern
But how must the code be adapted so that the names for the IDs 118600002, 118600003 and so on are also output? As often as specified, be it 100 or 1000 times.
Change your $url variable to something like $base_url, which is the URL without the ID:
$base_url = "http://hub.culturegraph.org/entityfacts/";
Then in your for loop you do:
$ch = curl_init($base_url . (118600001 + $i));
This line can be removed; it is not necessary:
curl_setopt($ch, CURLOPT_URL, $url);
You should also handle the response inside the for loop, otherwise you will only see the last person's name after an incredibly long load time. A sketch of the adapted loop follows.
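Putting it together (using $User_Agent and $request_headers as defined in your script, and an example count of 10):

$base_url = "http://hub.culturegraph.org/entityfacts/";
for ($i = 0; $i < 10; $i++) {
    $ch = curl_init($base_url . (118600001 + $i));
    curl_setopt($ch, CURLOPT_USERAGENT, $User_Agent);
    curl_setopt($ch, CURLOPT_HTTPHEADER, $request_headers);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_ENCODING, "");
    $result = curl_exec($ch);
    $code = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);
    if ($code == 200) {
        $data = json_decode($result, true);
        var_dump($data['preferredName']); // handle each response here, inside the loop
    }
}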
I have tried to call an API to send an image file using cURL, but I am getting the error below:
"statusCode":415,
"error":"Unsupported Media Type"
Please help me.
I have attached the code here:
$filename = "screenshot.jpg";
$handle = fopen($filename, "r");
$xml = fread($handle, filesize($filename));
fclose($handle);
$jwt_token = "xxx";
$authorization = "Authorization:".$jwt_token;
$url = "https://prod0-commerce-api.sprinklr.com/media_upload";
$headers = array(
"Content-Type: image/jpg]",
"Cache-Control: no-cache",
"Pragma: no-cache",
$authorization
);
$postdata = array('fileName' => '#'.$filename,
'type' => 'IMAGE'); //<-------------
$soap_do = curl_init();
curl_setopt($soap_do, CURLOPT_URL, $url);
//curl_setopt($soap_do, CURLOPT_CONNECTTIMEOUT, 60);
//curl_setopt($soap_do, CURLOPT_TIMEOUT, 60);
curl_setopt($soap_do, CURLOPT_RETURNTRANSFER, true );
curl_setopt($soap_do, CURLOPT_SSL_VERIFYPEER, false);
//curl_setopt($soap_do, CURLOPT_SSL_VERIFYHOST, false);
curl_setopt($soap_do, CURLOPT_POST, true );
curl_setopt($soap_do, CURLOPT_POSTFIELDS, $postdata); //<-----------
curl_setopt($soap_do, CURLOPT_HTTPHEADER, $headers);
$result = curl_exec($soap_do);
// Check for errors and display the error message
curl_close($soap_do);
$httpcode = curl_getinfo($soap_do, CURLINFO_HTTP_CODE);
echo $httpcode;
echo "<pre>";
var_dump($result);
if($errno = curl_errno($soap_do)) {
$error_message = curl_strerror($errno);
echo "cURL error ({$errno}):\n {$error_message}";
}
echo "string";exit();
print_r($result);
The problem is in this block:
$postdata = array('fileName' => '#'.$filename,
'type' => 'IMAGE'); //<-------------
The value of type should be a valid MIME type (aka "media type" or "content type").
A MIME type is an identifier composed of two parts (the type and the subtype) joined by slash (/).
The type identifies the category of content (text, image, audio, video, application etc.). The subtype identifies the content within that category more precisely.
For images, the type is image and there are several subtypes: gif, jpeg, png etc.
A correct MIME type for an image file looks like image/jpeg or image/png and not just IMAGE. This is why the server rejects your query.
The PHP function getimagesize() can be used to find the MIME type of an image stored in a file.
Your code should be like this:
$imgInfo = getimagesize($filename);
$postdata = array('fileName' => '#'.$filename,
'type' => $imgInfo['mime']);
And no, there is no setting in php.ini that writes correct code for you.
I have solved it using this solution:
<?php
$file = "http://localhost/xxx/screenshot.jpg";
$boundary = md5(time());
$eol = "\r\n";
$params = "----".$boundary.$eol
. "Content-Disposition: form-data;name=\"type\"".$eol
. $eol
. "IMAGE"
. $eol
. "----".$boundary.$eol
. "Content-Disposition: form-data;name=\"file\"; filename=\"screenshot.jpg\"".$eol
. '"Content-Type: image/jpeg\"'.$eol
. $eol
. file_get_contents($file) .$eol
. "----".$boundary."--";
$jwt_token = "xxxx";
$authorization = "Authorization:".$jwt_token;
$first_newline = strpos($params, $eol);
$multipart_boundary = substr($params, 2, $first_newline - 2);
$request_headers = array();
$request_headers[] = $authorization;
$request_headers[] = 'Accept: application/json';
$request_headers[] = 'Content-Length: ' . strlen($params);
$request_headers[] = 'Content-Type: multipart/form-data; boundary='. $multipart_boundary;
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, 'xxx');
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_HTTPHEADER, $request_headers);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
curl_setopt($ch, CURLOPT_POSTFIELDS, $params);
curl_setopt($ch, CURLOPT_CUSTOMREQUEST, "POST");
curl_setopt($ch, CURLINFO_HEADER_OUT, true);
curl_setopt($ch, CURLOPT_HEADER, 1);
$result = curl_exec($ch);
$info = curl_getinfo($ch);
curl_close($ch);
echo "<pre>";
print_r($result);
exit();
Most of the time a 415 error appears when you don't set the Content-Type properly. Maybe you can set the Content-Type like this:
curl_setopt($ch, CURLOPT_HTTPHEADER, array("Content-Type: image/jpg"));
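On PHP 5.5+ you can also let cURL build the multipart body for you with CURLFile, which sets each part's filename and MIME type. A sketch, reusing the $authorization header from the question and treating the endpoint and field names as assumptions:

$file = new CURLFile('screenshot.jpg', 'image/jpeg', 'screenshot.jpg');
$postdata = array('file' => $file, 'type' => 'IMAGE');
$ch = curl_init('https://prod0-commerce-api.sprinklr.com/media_upload');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, $postdata); // an array containing a CURLFile is sent as multipart/form-data
curl_setopt($ch, CURLOPT_HTTPHEADER, array($authorization, 'Accept: application/json'));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$result = curl_exec($ch);
curl_close($ch);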
I am trying to make a PHP script that accepts an image URL and displays the image (trying to keep SSL on my project).
However, I am unable to get anything to output with my script. No errors are being displayed and I am at a loss for words. What am I not seeing?
<?php
//if parameter is empty
if((!isset($_GET['img'])) or ($_GET['img'] == '') or (!isset($per['scheme'])))
{
exit;
}
$per = parse_url($_GET['img']);
print_r ($per);
$imgurl = $_GET['img'];
print_r ($imgurl);
$imgurl = str_replace(' ', "%20", $imgurl);
$aFile = getimagesize($imgurl);
print_r ($aFile);
//check file extension
if($aFile == 'jpg' or $aFile == 'jpeg'){
header('Content-Type: image/jpeg');
imagejpeg(getimg($file));
} elseif($aFile == 'png') {
header('Content-Type: image/png');
imagepng(getimg($file));
} elseif($aFile == 'gif') {
header('Content-Type: image/gif');
imagegif(getimg($file));
} else {
die('not supported');
}
$imgurl = $_GET['img'];
$imgurl = str_replace(' ', "%20", $imgurl);
function getimg($imageurl) {
$cache_expire = 60*60*24*365;
$headers[] = 'Accept: image/gif, image/x-bitmap, image/jpeg, image/pjpeg, image/png';
$headers[] = 'Cache-Control: maxage=. $cache_expire';
$headers[] = 'Pragma: public';
$headers[] = 'Accept-Encoding: None';
$headers[] = 'Content-type: application/x-www-form-urlencoded;charset=UTF-8';
$user_agent = 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-GB; rv:1.8.1.6) Gecko/20070725 Firefox/2.0.0.6';
$process = curl_init($imageurl);
curl_setopt($process, CURLOPT_HTTPHEADER, $headers);
curl_setopt($process, CURLOPT_REFERER, $_GET['img']);
curl_setopt($process, CURLOPT_HEADER, 0);
curl_setopt($process, CURLOPT_HTTPGET, true);
curl_setopt($process, CURLOPT_USERAGENT, $user_agent);
curl_setopt($process, CURLOPT_TIMEOUT, 30);
curl_setopt($process, CURLOPT_RETURNTRANSFER, 1);
$return = curl_exec($process);
curl_close($process);
return $return;
}
exit;
?>
Here is an edited version of the script that outputs garbled text when I use readfile:
<?php
error_reporting(E_ALL);
ini_set('display_errors', '1');
//if parameter is empty
if((!isset($_GET['img'])) or ($_GET['img'] == ''))
{
exit;
}
$per = parse_url($_GET['img']);
$imgurl = $_GET['img'];
$imgurl = str_replace(' ', "%20", $imgurl);
$aFile = getimagesize($imgurl);
//check file extension
$checkmime = getimagesize($imgurl);
if($checkmime['mime'] != 'image/png' && $checkmime['mime'] != 'image/gif' && $checkmime['mime'] != 'image/jpeg') {
die('not supported');
} else {
readfile("$imgurl");
}
function getimg($imageurl) {
$cache_expire = 60*60*24*365;
$headers[] = 'Accept: image/gif, image/x-bitmap, image/jpeg, image/pjpeg, image/png';
$headers[] = 'Cache-Control: maxage=. $cache_expire';
$headers[] = 'Pragma: public';
$headers[] = 'Accept-Encoding: None';
$headers[] = 'Content-type: application/x-www-form-urlencoded;charset=UTF-8';
$user_agent = 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-GB; rv:1.8.1.6) Gecko/20070725 Firefox/2.0.0.6';
$process = curl_init($imageurl);
curl_setopt($process, CURLOPT_HTTPHEADER, $headers);
curl_setopt($process, CURLOPT_REFERER, $_GET['img']);
curl_setopt($process, CURLOPT_HEADER, 0);
curl_setopt($process, CURLOPT_HTTPGET, true);
curl_setopt($process, CURLOPT_USERAGENT, $user_agent);
curl_setopt($process, CURLOPT_TIMEOUT, 30);
curl_setopt($process, CURLOPT_RETURNTRANSFER, 1);
$return = curl_exec($process);
curl_close($process);
return $return;
}
exit;
?>
If this truly is your entire script, your problem first lies with $per['scheme'] not being set yet, so it trips the first if() statement and the script exits.
You check whether it is not set with !isset($per['scheme']), but $per is only assigned afterwards, so the expression is TRUE and the script ends. That is why there is no output, not even from print_r().
Use this instead:
if (empty($_GET['img'])) // true if $_GET['img'] is not set or is '' (empty)
{
exit;
}
For the garbled output in your second version, send the correct Content-Type header before calling readfile():
} else {
header('Content-Type: ' . $checkmime['mime']);
readfile($imgurl);
}
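Putting both fixes together, a minimal sketch of the proxy (just the parts discussed here, no caching or cURL):

if (empty($_GET['img'])) {
    exit;
}
$imgurl = str_replace(' ', '%20', $_GET['img']);
$checkmime = getimagesize($imgurl); // also works on URLs when allow_url_fopen is enabled
$allowed = array('image/png', 'image/gif', 'image/jpeg');
if ($checkmime === false || !in_array($checkmime['mime'], $allowed)) {
    die('not supported');
}
header('Content-Type: ' . $checkmime['mime']);
readfile($imgurl);
exit;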
I need to save an image from a URL directly to my server. I've tried many methods, but none seem to work properly. file_put_contents($file_location, file_get_contents($image_url)); keeps giving me a "no file directory found" error. A simple fopen and fwrite keeps returning a corrupted image. The following worked, but it keeps returning an HTML file instead of a JPG file.
function getimg($url) {
$headers[] = 'Accept: image/gif, image/x-bitmap, image/jpeg, image/pjpeg';
$headers[] = 'Connection: Keep-Alive';
$headers[] = 'Content-type: application/x-www-form-urlencoded;charset=UTF-8';
$user_agent = 'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)';
$process = curl_init($url);
curl_setopt($process, CURLOPT_HTTPHEADER, $headers);
curl_setopt($process, CURLOPT_HEADER, 0);
curl_setopt($process, CURLOPT_USERAGENT, $user_agent);
curl_setopt($process, CURLOPT_TIMEOUT, 30);
curl_setopt($process, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($process, CURLOPT_FOLLOWLOCATION, 1);
$return = curl_exec($process);
curl_close($process);
return $return;
}
$imgurl = 'http://some/url/to/image.jpg';
$imagename= basename($imgurl);
if(file_exists('./image/'.$imagename)){continue;}
$image = getimg($imgurl);
file_put_contents('image/'.$imagename,$image);
Is something missing?
Thanks.
Your code works correctly; it downloads the image from the given URL.
Your issue will be in the path where the image is stored.
if(file_exists('./image/'.$imagename)){continue;}
$image = getimg($imgurl);
file_put_contents('image/'.$imagename,$image);
In the above code, check that the ./image/ directory exists and use the same path in the file_put_contents call.
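If the directory might not exist yet, you can create it before writing. A small sketch using the same relative path and the getimg() function from the question:

$imagename = basename($imgurl);
if (!is_dir('./image/')) {
    mkdir('./image/', 0775, true); // create the target directory if it is missing
}
file_put_contents('./image/' . $imagename, getimg($imgurl));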
This method works:
<?php
file_put_contents("/var/www/test/test.png", file_get_contents("http://www.google.com/intl/en_com/images/srpr/logo3w.png"));
?>
You need to enable allow_url_fopen, and it's the simplest method. See http://php.net/manual/en/features.remote-files.php
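You can check at runtime whether the option is enabled before relying on it. A quick sketch:

if (!ini_get('allow_url_fopen')) {
    die('allow_url_fopen is disabled in php.ini; enable it or use cURL instead.');
}
file_put_contents("/var/www/test/test.png", file_get_contents("http://www.google.com/intl/en_com/images/srpr/logo3w.png"));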
I'm looking to build a PHP script that parses HTML for particular tags. I've been using this code block, adapted from this tutorial:
<?php
$data = file_get_contents('http://www.google.com');
$regex = '/<title>(.+?)</';
preg_match($regex,$data,$match);
var_dump($match);
echo $match[1];
?>
The script works with some websites (like google, above), but when I try it with other websites (like, say, freshdirect), I get this error:
"Warning: file_get_contents(http://www.freshdirect.com) [function.file-get-contents]: failed to open stream: HTTP request failed!"
I've seen a bunch of great suggestions on StackOverflow, for example to enable extension=php_openssl.dll in php.ini. But (1) my version of php.ini didn't have extension=php_openssl.dll in it, and (2) when I added it to the extensions section and restarted the WAMP server, per this thread, still no success.
Would someone mind pointing me in the right direction? Thank you very much!
It just requires a user-agent ("any" really, any string suffices):
file_get_contents("http://www.freshdirect.com",false,stream_context_create(
array("http" => array("user_agent" => "any"))
));
See more options.
Of course, you can set user_agent in your ini:
ini_set("user_agent","any");
echo file_get_contents("http://www.freshdirect.com");
... but I prefer to be explicit for the next programmer working on it.
Using the Simple HTML DOM parser, you can grab the title directly:
$html = file_get_html('http://google.com/');
$title = $html->find('title', 0)->innertext;
Or, if you prefer preg_match (and you should really be using cURL instead of file_get_contents):
function curl($url){
$headers[] = "User-Agent:Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.13) Gecko/20101203 Firefox/3.6.13";
$headers[] = "Accept:text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8";
$headers[] = "Accept-Language:en-us,en;q=0.5";
$headers[] = "Accept-Encoding:gzip,deflate";
$headers[] = "Accept-Charset:ISO-8859-1,utf-8;q=0.7,*;q=0.7";
$headers[] = "Keep-Alive:115";
$headers[] = "Connection:keep-alive";
$headers[] = "Cache-Control:max-age=0";
$curl = curl_init();
curl_setopt($curl, CURLOPT_URL, $url);
curl_setopt($curl, CURLOPT_HTTPHEADER, $headers);
curl_setopt($curl, CURLOPT_ENCODING, "gzip");
curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($curl, CURLOPT_FOLLOWLOCATION, 1);
$data = curl_exec($curl);
curl_close($curl);
return $data;
}
$data = curl('http://www.google.com');
$regex = '#<title>(.*?)</title>#mis';
preg_match($regex,$data,$match);
var_dump($match);
echo $match[1];
Another option: some hosts disable CURLOPT_FOLLOWLOCATION, so a recursive function that follows redirects itself is what you want; it will also log any errors to a text file. Below is also a simple example of how to use DOMDocument() to extract the content. Obviously it's not extensive, but it is something you could build upon.
<?php
function file_get_site($url){
(function_exists('curl_init')) ? '' : die('cURL Must be installed. Ask your host to enable it or uncomment extension=php_curl.dll in php.ini');
$curl = curl_init();
$header[0] = "Accept: text/xml,application/xml,application/xhtml+xml,";
$header[0] .= "text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5";
$header[] = "Cache-Control: max-age=0";
$header[] = "Connection: keep-alive";
$header[] = "Keep-Alive: 300";
$header[] = "Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7";
$header[] = "Accept-Language: en-us,en;q=0.5";
$header[] = "Pragma: ";
curl_setopt($curl, CURLOPT_URL, $url);
curl_setopt($curl, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows NT 5.1; rv:5.0) Gecko/20100101 Firefox/5.0 Firefox/5.0');
curl_setopt($curl, CURLOPT_HTTPHEADER, $header);
curl_setopt($curl, CURLOPT_HEADER, true);
curl_setopt($curl, CURLOPT_REFERER, $url);
curl_setopt($curl, CURLOPT_ENCODING, 'gzip,deflate');
curl_setopt($curl, CURLOPT_AUTOREFERER, true);
curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
curl_setopt($curl, CURLOPT_TIMEOUT, 60);
$html = curl_exec($curl);
$status = curl_getinfo($curl);
curl_close($curl);
if($status['http_code']!=200){
if($status['http_code'] == 301 || $status['http_code'] == 302) {
list($header) = explode("\r\n\r\n", $html, 2);
$matches = array();
preg_match("/(Location:|URI:)[^(\n)]*/", $header, $matches);
$url = trim(str_replace($matches[1],"",$matches[0]));
$url_parsed = parse_url($url);
return (isset($url_parsed))? file_get_site($url):'';
}
$oline='';
foreach($status as $key=>$eline){$oline.='['.$key.']'.$eline.' ';}
$line =$oline." \r\n ".$url."\r\n-----------------\r\n";
$handle = #fopen('./curl.error.log', 'a');
fwrite($handle, $line);
return FALSE;
}
return $html;
}
function get_content_tags($source,$tag,$id=null,$value=null){
$xml = new DOMDocument();
#$xml->loadHTML($source);
foreach($xml->getElementsByTagName($tag) as $tags) {
if($id!=null){
if($tags->getAttribute($id)==$value){
return $tags->getAttribute('content');
}
continue; // keep searching for a tag whose attribute matches
}
return $tags->nodeValue;
}
}
$source = file_get_site('http://www.freshdirect.com/about/index.jsp');
echo get_content_tags($source,'title'); //FreshDirect
echo get_content_tags($source,'meta','name','description'); //Online grocer providing high quality fresh......
?>