I wrote PHP code like this:
$site = "http://www.google.com";
$content = file_get_contents($site);
echo $content;
But when I remove "http://" from $site I get the following warning:
Warning: file_get_contents(www.google.com) [function.file-get-contents]: failed to open stream:
I tried try/catch, but it didn't work.
Step 1: check the return code: if($content === FALSE) { // handle error here... }
Step 2: suppress the warning by putting an error control operator (i.e. @) in front of the call to file_get_contents():
$content = @file_get_contents($site);
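Putting the two steps together, a minimal sketch:
$site = "http://www.google.com";
$content = @file_get_contents($site); // @ silences the warning
if ($content === FALSE) {
    // handle error here...
    echo "Failed to fetch $site";
}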
You can also set your error handler to an anonymous function that throws an ErrorException, and then use a try/catch around the call.
set_error_handler(
    function ($severity, $message, $file, $line) {
        throw new ErrorException($message, 0, $severity, $file, $line);
    }
);
try {
    file_get_contents('www.google.com');
}
catch (Exception $e) {
    echo $e->getMessage();
}
restore_error_handler();
Seems like a lot of code to catch one little error, but if you're using exceptions throughout your app, you would only need to do this once, way at the top (in an included config file, for instance), and it will convert all your errors to Exceptions throughout.
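For instance, a sketch of that "do it once at the top" pattern (the file name is just an example):
// config.php - included at the top of every entry point
set_error_handler(function ($severity, $message, $file, $line) {
    throw new ErrorException($message, 0, $severity, $file, $line);
});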
My favorite way to do this is fairly simple:
if (($data = @file_get_contents("http://www.google.com")) === false) {
    $error = error_get_last();
    echo "HTTP request failed. Error was: " . $error['message'];
} else {
    echo "Everything went better than expected";
}
I found this after experimenting with the try/catch from @enobrev above, but this allows for less lengthy (and, IMO, more readable) code. We simply use error_get_last to get the text of the last error, and file_get_contents returns false on failure, so a simple "if" can catch that.
You can prepend an @:
$content = @file_get_contents($site);
This will suppress any warning - use sparingly! See Error Control Operators.
Edit: When you remove the 'http://' you're no longer looking for a web page, but a file on your disk called "www.google....."
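To illustrate the difference:
$content = @file_get_contents("http://www.google.com"); // fetches the page over HTTP
$content = @file_get_contents("www.google.com");        // tries to open a local file named "www.google.com"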
One alternative is to suppress the error and also throw an exception which you can catch later. This is especially useful if there are multiple calls to file_get_contents() in your code, since you don't need to suppress and handle all of them manually. Instead, several calls can be made to this function in a single try/catch block.
// Returns the contents of a file
function file_contents($path) {
    $str = @file_get_contents($path);
    if ($str === FALSE) {
        throw new Exception("Cannot access '$path' to read contents.");
    } else {
        return $str;
    }
}
// Example
try {
    file_contents("a");
    file_contents("b");
    file_contents("c");
} catch (Exception $e) {
    // Deal with it.
    echo "Error: ", $e->getMessage();
}
function custom_file_get_contents($url) {
    return file_get_contents(
        $url,
        false,
        stream_context_create(
            array(
                'http' => array(
                    'ignore_errors' => true
                )
            )
        )
    );
}
$content = FALSE;
if (($content = custom_file_get_contents($url)) !== FALSE) {
    //play with the result
} else {
    //handle the error
}
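Note that with 'ignore_errors' => true the body is returned even for 4xx/5xx responses, and $http_response_header is only set in the scope where file_get_contents() actually runs, so a variant that passes the status line back out could look like this (a sketch, not part of the original answer):
function custom_file_get_contents_with_status($url, &$status_line) {
    $content = file_get_contents($url, false, stream_context_create(
        array('http' => array('ignore_errors' => true))
    ));
    $status_line = isset($http_response_header[0]) ? $http_response_header[0] : '';
    return $content;
}

$content = custom_file_get_contents_with_status($url, $status);
if ($content !== FALSE && strpos($status, '200') !== false) {
    //play with the result
} else {
    //handle the error
}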
Here's how I did it. No need for a try-catch block. The best solution is always the simplest one. Enjoy!
$content = @file_get_contents("http://www.google.com");
if (isset($http_response_header[0]) && strpos($http_response_header[0], "200") !== false) {
    echo "SUCCESS";
} else {
    echo "FAILED";
}
Here's how I handle that:
$this->response_body = @file_get_contents($this->url, false, $context);
if ($this->response_body === false) {
    $error = error_get_last();
    $error = explode(': ', $error['message']);
    $error = trim($error[2]) . PHP_EOL;
    fprintf(STDERR, 'Error: ' . $error);
    die();
}
The best thing would be to set your own error and exception handlers, which will do something useful like logging it to a file or emailing critical ones.
http://www.php.net/set_error_handler
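A minimal sketch of such a handler (the log path and e-mail address are just examples):
set_error_handler(function ($severity, $message, $file, $line) {
    // Append every error to a log file (type 3 = write to the given destination)
    error_log(date('c') . " [$severity] $message in $file:$line\n", 3, '/var/log/myapp_errors.log');
    if ($severity === E_USER_ERROR) {
        // E-mail critical errors (mail() must be configured on the server)
        mail('admin@example.com', 'Critical PHP error', "$message in $file:$line");
    }
    return true; // don't run PHP's internal handler as well
});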
Since PHP 4, you can use error_reporting():
$site = "http://www.google.com";
$old_error_reporting = error_reporting(E_ALL ^ E_WARNING);
$content = file_get_contents($site);
error_reporting($old_error_reporting);
if ($content === FALSE) {
    echo "Error getting '$site'";
} else {
    echo $content;
}
Something like this:
public function get($curl, $options) {
    $context = stream_context_create($options);
    $file = @file_get_contents($curl, false, $context);
    $str1 = $str2 = $status = null;
    sscanf($http_response_header[0], '%s %d %s', $str1, $status, $str2);
    if ($status == 200) {
        return $file;
    } else {
        throw new \Exception($http_response_header[0]);
    }
}
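Usage might look like this ($client standing in for whatever class holds the method; just an illustration):
$options = array('http' => array('method' => 'GET', 'timeout' => 10));
try {
    $body = $client->get('http://www.google.com', $options);
    //play with $body
} catch (\Exception $e) {
    echo 'Request failed: ' . $e->getMessage();
}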
You could use this script:
$url = @file_get_contents("http://www.itreb.info");
if ($url) {
    // if url is true execute this
    echo $url;
} else {
    // if not execute this
    echo "connection error";
}
You should use the file_exists() function before calling file_get_contents(). That way you'll avoid the PHP warning.
$file = "path/to/file";
if (file_exists($file)) {
    $content = file_get_contents($file);
}
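Note that file_exists() only works for local paths; for remote URLs, a rough equivalent (a sketch; it costs an extra request) is to peek at the response headers first:
$url = "http://www.google.com";
$headers = @get_headers($url);
if ($headers !== false && strpos($headers[0], '200') !== false) {
    $content = file_get_contents($url);
}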
The simplest way to do this is just to prepend an @ before file_get_contents, i.e.:
$content = @file_get_contents($site);
This resolved all my problems; it works for all links:
public function getTitle($url)
{
    try {
        if (strpos($url, 'www.youtube.com/watch') !== false) {
            $apikey = 'AIzaSyCPeA3MlMPeT1CU18NHfJawWAx18VoowOY';
            $videoId = explode('&', explode("=", $url)[1])[0];
            $url = 'https://www.googleapis.com/youtube/v3/videos?id=' . $videoId . '&key=' . $apikey . '&part=snippet';
            $ch = curl_init();
            curl_setopt($ch, CURLOPT_HEADER, 0);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
            curl_setopt($ch, CURLOPT_URL, $url);
            curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
            curl_setopt($ch, CURLOPT_VERBOSE, 0);
            curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
            $response = curl_exec($ch);
            curl_close($ch);
            $data = json_decode($response);
            $value = json_decode(json_encode($data), true);
            $title = $value['items'][0]['snippet']['title'];
        } else {
            set_error_handler(
                function () {
                    return false;
                }
            );
            if (($str = file_get_contents($url)) === false) {
                $title = $url;
            } else {
                preg_match("/\<title\>(.*)\<\/title\>/i", $str, $title);
                $title = $title[1];
                if (preg_replace('/[\x00-\x1F\x7F-\xFF]/', '', $title))
                    $title = utf8_encode($title);
                $title = html_entity_decode($title);
            }
            restore_error_handler();
        }
    } catch (Exception $e) {
        $title = $url;
    }
    return $title;
}
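Usage might look like this ($page standing in for whatever object exposes the method; the URLs are just examples):
echo $page->getTitle('http://www.google.com');
echo $page->getTitle('https://www.youtube.com/watch?v=dQw4w9WgXcQ');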
This will try to get the data; if it does not work, it will catch the error and allow you to do anything you need within the catch. (Note that file_get_contents() on its own emits a warning rather than throwing, so this only works if warnings are being converted to exceptions, as in the set_error_handler answer above.)
try {
    $content = file_get_contents($site);
} catch (\Exception $e) {
    return 'The file was not found';
}
$content = @file_get_contents($data);
if ($content === false) {
    exit('<h1>ERROR MESSAGE</h1>');
} else {
    return $content;
}
I would like to retrieve the broken links of a given website.
I have this code, but it doesn't work.
Can you help me?
// function to check a url
function check_url($url) {
    //echo "Test broken links";
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_HEADER, 1);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    $data = curl_exec($ch);
    $headers = curl_getinfo($ch);
    curl_close($ch);
    return $headers['http_code'];
}

if (check_url("https://www.amazon.com/") == 200) {
    echo "<br> The link is validated <br>";
} else {
    echo "<br>broken links<br>";
}
// this function scans all the code of a website and retrieves the hyperlink tags
function getLinks() {
    $html = file_get_contents('https://www.amazon.com/');
    $dom = new domDocument;
    @$dom->loadHTML($html);
    $dom->preserveWhiteSpace = false;
    $links = $dom->getElementsByTagName('a');
    foreach ($links as $link) {
        $file = $link->getAttribute('href') . "<br>";
        $lien = "https://www.amazon.com/" . $file;
        echo $lien;
        echo linkexistence($lien);
    }
}
echo getLinks();
// The target is to search for broken links in a website and warn about the existence of those links
// check if a link exists and display the result for each
function linkexistence($url) {
    // get the url
    $test = get_headers($url, 1);
    $message = "";
    // use the preg_match function
    if (preg_match("#HTTP/1.1 200i#", $test[0])) {
        $message = "Valid";
    } elseif (preg_match("#HTTP/1.1 404i#", $test[0])) {
        $message = "Non-existent page! (404)";
    } elseif (preg_match("#HTTP/1.1 301i#", $test[0])) {
        $message = "The page has been moved";
    } elseif (preg_match("#HTTP/1.1 403i#", $test[0])) {
        $message = "Access to the page refused! (403)";
    } else {
        $message = "Invalid links";
    }
    return $message;
}
The mask is wrong in your preg_match calls. Currently your mask is
#HTTP/1.1 200i#
but I believe you have to use the following mask:
#HTTP/1.1 200#i
So you have to move the "i" after the closing "#" in all your preg_match calls.
The "i" modifier makes the match case-insensitive.
I would like to create a PHP script that will go to another website (given a URL) and check the page source of that page for a certain string of data.
I actually have a way of doing it right now, but I'm looking for an alternative way.
Right now I'm using the file_get_contents PHP function to read the page source of the URL into a variable:
$link = "www.example.com";
$linkcontents = file_get_contents($link);
Then I use the strpos PHP function to search the page for the string I'm looking for:
$needle = "<div>find me</div>";
if (strpos($linkcontents, $needle) == false) {
echo "String not found";
} else {
echo "String found";
}
I have heard that cURL is good for handling things that have to do with URLs; I'm just not sure how I would use it to do what I'm doing with the file_get_contents and strpos functions combined, as above.
Or if there is another way to do it, I'm all ears :-)
Well, we construct a cURL function like this:
function Visit($irc_server) {
    // Open the connection
    $user_agent = $_SERVER['HTTP_USER_AGENT'];
    $port = '80';
    $ch = curl_init(); // initialize curl handle
    curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, FALSE);
    curl_setopt($ch, CURLOPT_URL, $irc_server);
    curl_setopt($ch, CURLOPT_FAILONERROR, 1);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_TIMEOUT, 50);
    curl_setopt($ch, CURLOPT_USERAGENT, $user_agent);
    curl_setopt($ch, CURLOPT_PORT, $port);
    $data = curl_exec($ch);
    $httpcode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    $curl_errno = curl_errno($ch);
    $curl_error = curl_error($ch);
    if ($curl_errno > 0) {
        $return = ("cURL Error ($curl_errno): $curl_error\n");
    } else {
        $return = $data;
    }
    curl_close($ch);
    /*if ($httpcode >= 200 && $httpcode < 300) {
        $return = 'OK';
    } else {
        $return = 'Nok';
    }*/
    return $return;
}
Another function to process our URL:
function tenta($url) {
    // Now, create an instance of your class, define the behaviour
    // of the crawler (see class-reference for more options and details)
    // and start the crawling-process.
    $crawler = new MyCrawler();
    // URL to crawl
    $crawler->setURL($url);
    // Only receive content of files with content-type "text/html"
    $crawler->addContentTypeReceiveRule("#text/html#");
    // Ignore links to pictures, don't even request pictures
    $crawler->addURLFilterRule("#\.(jpg|jpeg|gif|png)$# i");
    // Store and send cookie-data like a browser does
    $crawler->enableCookieHandling(true);
    // Set the traffic-limit to 1 MB (in bytes;
    // for testing we don't want to "suck in" the whole site)
    $crawler->setTrafficLimit(1000 * 1024);
    // That's enough; now here we go
    $crawler->go();
    // At the end, after the process is finished, we print a short
    // report (see method getProcessReport() for more information)
    $report = $crawler->getProcessReport();
    if (PHP_SAPI == "cli") $lb = "\n";
    else $lb = "<br />";
    /*
    echo "Summary:".$lb;
    echo "Links followed: ".$report->links_followed.$lb;
    echo "Documents received: ".$report->files_received.$lb;
    echo "Bytes received: ".$report->bytes_received." bytes".$lb;
    echo "Process runtime: ".$report->process_runtime." sec".$lb; */
}
We construct our class:
// It may take a while to crawl a site ...
set_time_limit(110000);
// Include the phpcrawl main class
include("libs/PHPCrawler.class.php");
// Extend the class and override the handleDocumentInfo()-method
class MyCrawler extends PHPCrawler
{
    function handleDocumentInfo($DocInfo)
    {
        global $find;
        // Just detect linebreak for output ("\n" in CLI-mode, otherwise "<br>").
        if (PHP_SAPI == "cli") $lb = "\n";
        else $lb = "<br />";
        // Print the URL and the HTTP status code
        echo "Page requested: ".$DocInfo->url." (".$DocInfo->http_status_code.")".$lb;
        //echo $img_url = '<img src="'.$DocInfo->url.'.jpg" width = "150" height = "150" />'.$lb;
        // we're looking for our keywords (e.g. "kenya") on this domain
        foreach ($find as $matche) {
            $matchb = implode(',', $matche);
            //$matchb = $matche['word'];
            if (preg_match("/(".$matchb.")/i", Visit($DocInfo->url))) {
                echo "<a href=".$DocInfo->url." target=_blank>".$DocInfo->url."</a><b style='color:red;'>".$matche['word']."</b>".$lb;
            }
        }
        // Print the referring URL
        echo "Referer-page: ".$DocInfo->referer_url.$lb;
        // Print whether the content of the document was received or not
        if ($DocInfo->received == true)
            echo "Content received: ".$DocInfo->bytes_received." bytes".$lb;
        else
            echo "Content not received".$lb;
        // Now you should do something with the content of the actual
        // received page or file ($DocInfo->source); we skip it in this example
        echo $lb;
        flush();
    }
}
Our variables, in arrays. The URLs we will be crawling:
$url = array(
    array("id" => 7, "name" => "soltechit", "url" => "soltechit.co.uk"),
    array("id" => 5, "name" => "CNN", "url" => "cnn.com", "description" => "A social utility that connects people, to keep up with friends, upload photos, share links")
);
The strings we are looking for:
$find = array(
array("word" => "routers"),
array("word" => "Moose"),
array("word" => "worm"),
array("word" => "kenya"),
array("word" => "alshabaab"),
array("word" => "ISIS"),
array("word" => "security"),
array("word" => "windows 10 release"),
array("word" => "hacked")
);
Which we call like this:
foreach ($url as $urls) {
    echo '<h2>'.$urls['name'].'</h2>';
    echo $urls['description'].'<br>';
    echo tenta($urls['url']).'<br>';
}
If file_get_contents works just fine for the task at hand, why change anything...? I say keep using it.
Note that you'll need to pass it a URL that starts with "http://", otherwise it'll try to open a local file called "www.example.com".
Also, it's good practice to use === false with strpos, since otherwise a match at position 0 will not be recognized (0 == false, but not 0 === false).
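With that fix, the check from the question becomes:
if (strpos($linkcontents, $needle) === false) {
    echo "String not found";
} else {
    echo "String found";
}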
Here is something better which I guess would be of help; it goes like this:
<?php
// It may take a while to crawl a site ...
set_time_limit(10000);
// Include the phpcrawl main class
include("libs/PHPCrawler.class.php");
// Extend the class and override the handleDocumentInfo()-method
class MyCrawler extends PHPCrawler
{
    function handleDocumentInfo($DocInfo)
    {
        // Just detect linebreak for output ("\n" in CLI-mode, otherwise "<br>").
        if (PHP_SAPI == "cli") $lb = "\n";
        else $lb = "<br />";
        // Print the URL and the HTTP status code
        echo "Page requested: ".$DocInfo->url." (".$DocInfo->http_status_code.")".$lb;
        // Print the referring URL
        echo "Referer-page: ".$DocInfo->referer_url.$lb;
        // Print whether the content of the document was received or not
        if ($DocInfo->received == true)
            echo "Content received: ".$DocInfo->bytes_received." bytes".$lb;
        else
            echo "Content not received".$lb;
        // Now you should do something with the content of the actual
        // received page or file ($DocInfo->source); we skip it in this example
        echo $lb;
        flush();
    }
}
// Now, create an instance of your class, define the behaviour
// of the crawler (see class-reference for more options and details)
// and start the crawling-process.
$crawler = new MyCrawler();
// URL to crawl
$crawler->setURL("www.php.net");
// Only receive content of files with content-type "text/html"
$crawler->addContentTypeReceiveRule("#text/html#");
// Ignore links to pictures, don't even request pictures
$crawler->addURLFilterRule("#\.(jpg|jpeg|gif|png)$# i");
// Store and send cookie-data like a browser does
$crawler->enableCookieHandling(true);
// Set the traffic-limit to 1 MB (in bytes;
// for testing we don't want to "suck in" the whole site)
$crawler->setTrafficLimit(1000 * 1024);
// That's enough; now here we go
$crawler->go();
// At the end, after the process is finished, we print a short
// report (see method getProcessReport() for more information)
$report = $crawler->getProcessReport();
if (PHP_SAPI == "cli") $lb = "\n";
else $lb = "<br />";
echo "Summary:".$lb;
echo "Links followed: ".$report->links_followed.$lb;
echo "Documents received: ".$report->files_received.$lb;
echo "Bytes received: ".$report->bytes_received." bytes".$lb;
echo "Process runtime: ".$report->process_runtime." sec".$lb;
?>
I wanted to ask for your help. I have an XML source (http://livefmhits.6te.net/nowplay.xml) which gives me the currently playing song, and I wanted to pull the cover art through Last.fm (artist.getinfo) in an echo. I tried as follows:
<?php
$xml = simplexml_load_file('http://livefmhits.6te.net/nowplay.xml');
$artist = urlencode($xml->TRACK["ARTIST"]);
$url = 'http://ws.audioscrobbler.com/2.0/?method=artist.getinfo&artist='.$artist.'&api_key=b25b959554ed76058ac220b7b2e0a026';
$xml2 = @simplexml_load_file($url);
if ($xml2 === false)
{
    echo("Url failed"); // do whatever you want to do
}
else
{
    if ($xml2->track->album->image[3])
    {
        echo '<img src="';
        echo((string) $xml2->track->album->image[3]);
        echo '">';
    }
    else
    {
        echo "<img src='http://3.bp.blogspot.com/-SEsYAbASI68/VZ7xNuKy-GI/AAAAAAAAA3M/IWcGRDoXXms/s1600/capaindisponivel.png'"; // do whatever you want to do
    }
}
I'm not able to extract the image; the echo must be wrong. I'd like to pull the image that says "mega". Here is the complete link:
http://ws.audioscrobbler.com/2.0/?method=artist.getinfo&lang=ru&artist=COLDPLAY&api_key=ae9dc375e16f12528b329b25a3cca3ee (I also tried to follow a post of yours, but I couldn't get it to work: Get large artist image from last.fm xml (api artist.getinfo))
Thanks in advance for your help and availability.
Here is how I'm doing it in JSON. It's pretty much the same in XML.
First, we define the API key:
define('YOUR_API_KEY', 'b25b959554ed76058ac220b7b2e0a026');
It's better to separate it from the code; that makes things easier if you need to reuse it somewhere else (e.g. in another function).
Then, we create the 2 functions we need to make the magic happen.
1) To query Last.fm's API and get its content, we will use cURL:
function _curl($url)
{
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 15);
    if (strtolower(parse_url($url, PHP_URL_SCHEME)) == 'https')
    {
        curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, 1);
        curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 1);
    }
    curl_setopt($ch, CURLOPT_URL, $url);
    $data = curl_exec($ch);
    curl_close($ch);
    return $data;
}
2) Last.fm offers many options. Personally, I find it easier to separate the main queries into functions. But as you simply target images, here is the function I'd use:
function lfm_img($artist)
{
    $url = "http://ws.audioscrobbler.com/2.0/?method=artist.getinfo&artist=$artist&api_key=".YOUR_API_KEY."&format=json";
    $json = _curl($url);
    $data = str_ireplace("#text", "text", $json);
    $list = json_decode($data);
    //If an error occurs...
    if (isset($list->error))
        return 'ERROR.'. $list->error;
    //That's where we get the photo. We try to get the biggest size first, then fall back to smaller sizes. Returns '0' if nothing is found.
    if ($list->artist->image[4])
        $img = $list->artist->image[4]->text;
    else if ($list->artist->image[3])
        $img = $list->artist->image[3]->text;
    else if ($list->artist->image[2])
        $img = $list->artist->image[2]->text;
    else if ($list->artist->image[1])
        $img = $list->artist->image[1]->text;
    else if ($list->artist->image[0])
        $img = $list->artist->image[0]->text;
    else
        $img = 0;
    return $img;
}
And finally, use them:
$artist_query = 'Nirvana';
$artist_image = lfm_img($artist_query);
//display the image
echo '<img src="'. $artist_image .'" alt="'. $artist_query .'" />';
I think it's self-explanatory here. ;)
Hope it helped!