PHP: link preview / URL preview not working

Using Ubuntu Server 18.04 with PHP 7.2, and php.ini has allow_url_fopen = On. Disabling the firewall didn't help. I've also been searching for about six hours now and applied several suggested fixes, none of which worked.
This is my code so far:
if (preg_match('#(?<=^|(?<=[^a-zA-Z0-9-_\.//]))((https?://)?([-\w]+\.[-\w\.]+)+\w(:\d+)?(/([-\w/_\.\,]*(\?\S+)?)?)*)#', htmlspecialchars_decode(stripslashes($urlData)), $results)) {
    $page = $results[0];
    $page = $this->addScheme($page);
    // $default_socket_timeout = ini_get('default_socket_timeout');
    // ini_set('default_socket_timeout', 3);
    $content = file_get_contents("https://api.urlmeta.org/?url=" . $page);
    // ini_set('default_socket_timeout', $default_socket_timeout);
    if ($content) {
        $data = json_decode($content, true);
        $urlResult = $data['result'];
        $urlResponse = $data['meta'];
        if ($urlResult['status'] === "OK") {
            $urlLink = $page;
            if (isset($urlResponse['image'])) {
                $urlImage = $urlResponse['image'];
            }
            if (isset($urlResponse['description'])) {
                $urlDescription = $urlResponse['description'];
            }
            if (isset($urlResponse['title'])) {
                $urlTitle = $urlResponse['title'];
            }
        }
    }
}
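If file_get_contents() keeps returning false here, a cURL-based fetch will at least report why it failed. A minimal diagnostic sketch, assuming $page holds the URL matched by the preg_match() above:

$ch = curl_init('https://api.urlmeta.org/?url=' . urlencode($page));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 3);
curl_setopt($ch, CURLOPT_TIMEOUT, 5);
$content = curl_exec($ch);
if ($content === false) {
    // Typical causes: DNS failure, blocked outbound HTTPS, missing CA bundle
    error_log('urlmeta fetch failed: ' . curl_error($ch));
}
curl_close($ch);

The curl_error() message usually narrows the problem down to the network or the TLS layer in one run.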


Read JSON with PHP from Instagram __a=1

I want to emphasize first that this is my first script in PHP, so many things can be improved, but for now I just need it to work!
I created this PHP script to get public profile information from the public Instagram JSON file located at https://www.instagram.com/{{username}}/?__a=1
Trying it locally, everything works correctly, but when it's hosted on a website, file_get_contents($url) doesn't work (the call inside getMediaByUsername() below). I tried using cURL to read the file, but it doesn't work either: it doesn't read the JSON file correctly, and echoing what it reads displays the Instagram logo on the page.
How can I solve this?
Update
I just noticed that if I try file_get_contents() on a link to any profile, www.instagram.com/USERNAME, it gives me the exact same result. Could it be that when I try to read www.instagram.com/USERNAME/?__a=1, Instagram notices and redirects me to the profile page?
I've tried htmlentities() on the data I receive through file_get_contents, and in fact the script reads a strange HTML page that is NOT found at the address I gave it!
<?php
// Accumulators filled by getMediaByUsername()
$commentiPost = array();
$likePost = array();
$postData = array();
$image = array();
$urlprofilo = '';
$followers = 0;
$username = '';
$follow = 0;
$like = 0;
$commenti = 0;

function getMediaByUsername($count) {
    global $image, $commentiPost, $likePost, $urlprofilo, $followers,
        $username, $follow, $postData, $like, $commenti;

    $uname = htmlspecialchars($_GET["name"]);
    $username = strtolower(str_replace(' ', '_', $uname));
    $url = "https://www.instagram.com/".$username."/?__a=1";
    $userinfo = file_get_contents($url);
    $userdata = json_decode($userinfo, true);
    $user = $userdata['graphql']['user'];
    $iteration_url = $url;
    if (!empty($user)) {
        $followers = $user['edge_followed_by']['count'];
        $follow = $user['edge_follow']['count'];
        $fullname = $user['full_name'];
        $username = $user['username'];
        $profilepic = explode("/", $user['profile_pic_url']);
        $urlprofilo = "https://scontent-frt3-1.cdninstagram.com/v/t51.2885-19/s150x150/$profilepic[6]";
        $limit = $count;
        $tryNext = true;
        $found = 0;
        while ($tryNext) {
            $tryNext = false;
            $response = file_get_contents($iteration_url);
            if ($response === false) {
                return false;
            }
            $data = json_decode($response, true);
            if ($data === null) {
                return false;
            }
            $media = $data['graphql']['user']['edge_owner_to_timeline_media'];
            foreach ($media['edges'] as $index => $node) {
                if ($found + $index < $limit) {
                    if (isset($node['node']['is_video']) && $node['node']['is_video'] == true) {
                        $type = 'video';
                    } else {
                        $type = 'image';
                    }
                    $like = $like + $node['node']['edge_liked_by']['count'];
                    $commenti = $commenti + $node['node']['edge_media_to_comment']['count'];
                    $image[] = "<a href=\"".$node['node']['display_url']."\">
                        <img src=\"".$node['node']['display_url']."\" alt=\"\" />
                        <h3>Like: <strong>".$node['node']['edge_liked_by']['count']."</strong> Commenti: <strong>".$node['node']['edge_media_to_comment']['count']."</strong></h3>
                        </a>";
                    $postData[] = " '".gmdate("d-m-Y", $node['node']['taken_at_timestamp'])."',";
                    $likePost[] = " ".$node['node']['edge_liked_by']['count'].",";
                    $commentiPost[] = " ".$node['node']['edge_media_to_comment']['count'].",";
                }
            }
            $found += count($media['edges']);
            if ($media['page_info']['has_next_page'] && $found < $limit) {
                $iteration_url = $url.'&max_id='.$media['page_info']['end_cursor'];
                $tryNext = true;
            }
        }
    }
}

getMediaByUsername(12);
$postTot = count($image);
// Engagement rate: average (likes + comments) per post, relative to followers
if ($postTot > 0 and $followers > 0) {
    $ER = round(((($like + $commenti) / $postTot) / $followers) * 100, 1);
} else {
    $ER = 0;
}
?>
I believe this is an SSL certificate problem. If you modify your function to:
function url_get_contents($url) {
    if (!function_exists('curl_init')) {
        die('The cURL library is not installed.');
    }
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    // curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, false);
    // curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
    $output = curl_exec($ch);
    if (curl_errno($ch)) {
        die('Curl error: ' . curl_error($ch));
    }
    curl_close($ch);
    return $output;
}
you will probably see the result: Curl error: SSL certificate problem: unable to get local issuer certificate.
Add that certificate to your system, or uncomment the lines with the options CURLOPT_SSL_VERIFYHOST and CURLOPT_SSL_VERIFYPEER.
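A safer alternative to disabling verification is to point cURL at an up-to-date CA bundle. A one-line sketch (the path is an assumption; on Debian/Ubuntu it is commonly /etc/ssl/certs/ca-certificates.crt, other distributions differ):

// Point cURL at the system CA bundle instead of turning verification off.
// The path below is an assumption; adjust it for your distribution.
curl_setopt($ch, CURLOPT_CAINFO, '/etc/ssl/certs/ca-certificates.crt');

This keeps certificate checking intact, so you are not open to man-in-the-middle attacks the way VERIFYPEER = false leaves you.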

PHP Notice: Undefined offset: 3 in my array [closed]

// File called with:
// e_PLUGIN_ABS."log/log.php?referer=' + ref + '&color=' + colord + '&eself=' + eself + '&res=' + res + '\">' );\n";
// referer= ref
// color= colord
// eself= eself
// res= res
// err_direct - optional error flag
// err_referer - referrer if came via error page
define("log_INIT", TRUE);

$colour = strip_tags((isset($_REQUEST['color']) ? $_REQUEST['color'] : ''));
$res = strip_tags((isset($_REQUEST['res']) ? $_REQUEST['res'] : ''));
$self = strip_tags((isset($_REQUEST['eself']) ? $_REQUEST['eself'] : ''));
$ref = addslashes(strip_tags((isset($_REQUEST['referer']) ? $_REQUEST['referer'] : '')));
$date = date("z.Y", time());
$logPfile = "logs/logp_".$date.".php";

// Vet resolution and colour depth some more - avoid dud values
if ($res && preg_match("#.*?((\d+)\w+?(\d+))#", $res, $match))
{
    $res = $match[2].'x'.$match[3];
}
else
{
    $res = '??'; // Can't decode resolution
}
if ($colour && preg_match("#.*?(\d+)#", $colour, $match))
{
    $colour = intval($match[1]);
}
else
{
    $colour = '??';
}
if ($err_code = strip_tags((isset($_REQUEST['err_direct']) ? $_REQUEST['err_direct'] : '')))
{
    $ref = addslashes(strip_tags(isset($_REQUEST['err_referer']) ? $_REQUEST['err_referer'] : ''));
    $log_string = $err_code.",".$self.",".$ref;
    // Uncomment the next two lines to create a separate CSV format log of invalid accesses - error code, entered URL, referrer
    // $logname = "logs/errpages.csv";
    // $logfp = fopen($logname, 'a+'); fwrite($logfp, $log_string."\n\r"); fclose($logfp);
    $err_code .= ':';
}
if (strstr($ref, "admin"))
{
    $ref = FALSE;
}
$screenstats = $res."#".$colour;
$agent = $_SERVER['HTTP_USER_AGENT'];
$ip = getip();
$oldref = $ref; // Backup for search string being stripped off the referer
if ($ref && !strstr($ref, $_SERVER['HTTP_HOST']))
{
    if (preg_match("#http://(.*?)($|/)#is", $ref, $match))
    {
        $ref = $match[0];
    }
}
$pageDisallow = "cache|file|eself|admin";
$tagRemove = "(\\\)|(\s)|(\')|(\")|(eself)|( )|(\.php)|(\.html)";
$tagRemove2 = "(\\\)|(\s)|(\')|(\")|(eself)|( )";
preg_match("#/(.*?)(\?|$)#si", $self, $match);
$match[1] = isset($match[1]) ? $match[1] : '';
$pageName = substr($match[1], (strrpos($match[1], "/") + 1));
$PN = $pageName;
$pageName = preg_replace("/".$tagRemove."/si", "", $pageName);
if ($pageName == "") $pageName = "index";
$pageName = $err_code.$pageName; // Add the error code at the beginning, so it's treated uniquely
if (preg_match("/".$pageDisallow."/i", $pageName)) return;

$p_handle = fopen($logPfile, 'r+');
if ($p_handle && flock($p_handle, LOCK_EX))
{
    $log_file_contents = '';
    while (!feof($p_handle))
    { // Assemble a string of data
        $log_file_contents .= fgets($p_handle, 1000);
    }
    $log_file_contents = str_replace(array('<'.'?php', '?'.'>'), '', $log_file_contents);
    if (eval($log_file_contents) === FALSE) echo "error in log file contents<br /><br /><br /><br />";
}
else
{
    echo "Couldn't log data<br /><br /><br /><br />";
    exit;
}
$flag = FALSE;
if (array_key_exists($pageName, $pageInfo))
{ // Existing page - just increment stats
    $pageInfo[$pageName]['ttl'] ++;
}
else
{ // First access of page
    $url = preg_replace("/".$tagRemove2."/si", "", $self);
    if (preg_match("/".$pageDisallow."/i", $url)) return;
    $pageInfo[$pageName] = array('url' => $url, 'ttl' => 1, 'unq' => 1);
    $flag = TRUE;
}
if (!strstr($ipAddresses, $ip))
{ /* unique visit */
    if (!$flag)
    {
        $pageInfo[$pageName]['unq'] ++;
    }
    $siteUnique ++;
    $ipAddresses .= $ip."."; // IP address is stored as hex string
    require_once("loginfo.php");
}
$siteTotal ++;
$info_data = var_export($pageInfo, true);
//$date_stamp = date("z:Y", time()); // Same as '$date' variable
$data = "<?php
/* e107 website system: Log file: {$date} */
\$ipAddresses = '{$ipAddresses}';
\$siteTotal = '{$siteTotal}';
\$siteUnique = '{$siteUnique}';
\$pageInfo = {$info_data};
?>";
if ($p_handle)
{
    ftruncate($p_handle, 0);
    fseek($p_handle, 0);
    fwrite($p_handle, $data);
    fclose($p_handle);
}

function getip($mode = TRUE)
{
    if (getenv('HTTP_X_FORWARDED_FOR'))
    {
        $ip = $_SERVER['REMOTE_ADDR'];
        if (preg_match("#^(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})#", getenv('HTTP_X_FORWARDED_FOR'), $ip3))
        {
            // Ignore forwarded addresses in these reserved/private ranges
            $ip2 = array('#^0\..*#',
                '#^127\..*#', // Local loopbacks
                '#^192\.168\..*#', // RFC1918 - Private Network
                '#^172\.(?:1[6789]|2\d|3[01])\..*#', // RFC1918 - Private network
                '#^10\..*#', // RFC1918 - Private Network
                '#^169\.254\..*#', // RFC3330 - Link-local, auto-DHCP
                '#^2(?:2[456789]|[345][0-9])\..*#' // Single check for Class D and Class E
            );
            $ip = preg_replace($ip2, $ip, $ip3[1]);
        }
    }
    else
    {
        $ip = $_SERVER['REMOTE_ADDR'];
    }
    if ($ip == "")
    {
        $ip = "x.x.x.x";
    }
    if ($mode)
    {
        $ipa = explode(".", $ip);
        return sprintf('%02x%02x%02x%02x', $ipa[0], $ipa[1], $ipa[2], $ipa[3]);
    }
    else
    {
        return $ip;
    }
}
?>
ERROR CODE: PHP Notice: Undefined offset: 3 in C:\inetpub\wwwroot\oss_plugins\log\log.php on line 202
Above is the log.php file from the wwwroot folder; line 202 is the third line from the bottom, where it states:
return sprintf('%02x%02x%02x%02x', $ipa[0], $ipa[1], $ipa[2], $ipa[3]);
Your array $ipa only has three elements, so $ipa[3] doesn't exist.
Replace your line 202 with this:
return sprintf('%02x%02x%02x', $ipa[0], $ipa[1], $ipa[2]);
If you want to keep your line 202 as it is:
return sprintf('%02x%02x%02x%02x', $ipa[0], $ipa[1], $ipa[2], $ipa[3]);
then you must make sure that your $ip looks like
x.x.x.x
The error occurs because its format is
x.x.x
Check your code.
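For completeness, here is a defensive sketch of the same idea (my own variant, not the original plugin code): validating the address up front means the sprintf() can never see a short array.

function getip_safe($mode = TRUE)
{
    $ip = isset($_SERVER['REMOTE_ADDR']) ? $_SERVER['REMOTE_ADDR'] : '';
    // Reject anything that is not a well-formed IPv4 address (e.g. IPv6 or garbage)
    if (filter_var($ip, FILTER_VALIDATE_IP, FILTER_FLAG_IPV4) === false)
    {
        $ip = '0.0.0.0'; // fallback always explodes into exactly four parts
    }
    if ($mode)
    {
        $ipa = explode('.', $ip);
        return sprintf('%02x%02x%02x%02x', $ipa[0], $ipa[1], $ipa[2], $ipa[3]);
    }
    return $ip;
}

Note that REMOTE_ADDR on an IPv6-enabled host is exactly the kind of value that produces fewer than four dot-separated parts, so the validation is not just cosmetic.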

PHP Scrape - Create multidimensional arrays from results - current code only returning one result

I'm quite new to PHP and am creating a web scraper for a project. From this website, https://www.bloglovin.com/en/blogs/1/2/all, I am scraping the blog title, blog url, image url and concatenating a follow through link for later use. As you can see on the page, there are several fields with information for each blogger.
Here is my PHP code so far;
<?php
// Function to make a GET request using cURL
function curlGet($url) {
    $ch = curl_init(); // Initialise cURL session
    // Set cURL options
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, TRUE);
    curl_setopt($ch, CURLOPT_URL, $url);
    $results = curl_exec($ch); // Execute cURL session
    curl_close($ch); // Close cURL session
    return $results; // Return the results
}

$blogStats = array();

function returnXPathObject($item) {
    $xmlPageDom = new DomDocument();
    @$xmlPageDom->loadHTML($item); // Suppress warnings from malformed HTML
    $xmlPageXPath = new DOMXPath($xmlPageDom);
    return $xmlPageXPath;
}

$blPage = curlGet('https://www.bloglovin.com/en/blogs/1/2/all');
$blPageXpath = returnXPathObject($blPage);

$title = $blPageXpath->query('//*[@id="content"]//div/a/h2/span[1]');
if ($title->length > 0) {
    $blogStats['title'] = $title->item(0)->nodeValue;
}
$url = $blPageXpath->query('//*[@id="content"]//div/a/h2/span[2]');
if ($url->length > 0) {
    $blogStats['url'] = $url->item(0)->nodeValue;
}
$img = $blPageXpath->query('//*[@id="content"]//div/a/div/@href');
if ($img->length > 0) {
    $blogStats['img'] = $img->item(0)->nodeValue;
}
$followLink = $blPageXpath->query('//*[@id="content"]/div[1]/div/a/@href');
if ($followLink->length > 0) {
    $blogStats['followLink'] = 'http://www.bloglovin.com' . $followLink->item(0)->nodeValue;
}

print_r($blogStats);
/*
$data = $blogStats;
header('Content-Type: application/json');
echo json_encode($data);
*/
?>
Currently, this only returns:
Array ( [title] => Fashion Toast [url] => fashiontoast.com [followLink] => http://www.bloglovin.com/blog/4735/fashion-toast )
My question is: what is the best way to loop through each of the results? I've been looking through Stack Overflow and am struggling to find an answer, and my head's going a bit loopy! If anyone could advise me or point me in the right direction, that would be fantastic.
Thank you.
Update:
I'm quite sure this is wrong; I'm receiving errors!
<?php
// Function to make a GET request using cURL
function curlGet($url) {
    $ch = curl_init(); // Initialise cURL session
    // Set cURL options
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, TRUE);
    curl_setopt($ch, CURLOPT_URL, $url);
    $results = curl_exec($ch); // Execute cURL session
    curl_close($ch); // Close cURL session
    return $results; // Return the results
}

$blogStats = array();

function returnXPathObject($item) {
    $xmlPageDom = new DomDocument();
    @$xmlPageDom->loadHTML($item); // Suppress warnings from malformed HTML
    $xmlPageXPath = new DOMXPath($xmlPageDom);
    return $xmlPageXPath;
}

$blogPage = curlGet('https://www.bloglovin.com/en/blogs/1/2/all');
$blogPageXpath = returnXPathObject($blogPage);

$blogger = $blogPageXpath->query('//*[@id="content"]/div/@data-blog-id');
if ($blogger->length > 0) {
    $blogStats[] = $blogger->item(0)->nodeValue;
}
foreach ($blogger as $id) {
    $blPage = curlGet('https://www.bloglovin.com/en/blogs/1/2/all');
    $blPageXpath = returnXPathObject($blPage);
    $title = $blPageXpath->query('//*[@id="content"]//div/a/h2/span[1]');
    if ($title->length > 0) {
        $blogStats[$id]['title'] = $title->item(0)->nodeValue;
    }
    $url = $blPageXpath->query('//*[@id="content"]//div/a/h2/span[2]');
    if ($url->length > 0) {
        $blogStats[$id]['url'] = $url->item(0)->nodeValue;
    }
    $img = $blPageXpath->query('//*[@id="content"]//div/a/div/@href');
    if ($img->length > 0) {
        $blogStats[$id]['img'] = $img->item(0)->nodeValue;
    }
    $followLink = $blPageXpath->query('//*[@id="content"]/div[1]/div/a/@href');
    if ($followLink->length > 0) {
        $blogStats[$id]['followLink'] = 'http://www.bloglovin.com' . $followLink->item(0)->nodeValue;
    }
}
print_r($blogStats);
/*
$data = $blogStats;
header('Content-Type: application/json');
echo json_encode($data);
*/ ?>
Maybe you want to actually add a dimension to your array; I guess bloggers have a unique id, or some such identifier.
Moreover, your code seems to execute only once. It might need to be inside something like a foreach.
I can't do that part for you, but you need an array containing each blogger, or a way to do a while or a for loop! You have to work out how to iterate over your different bloggers yourself :)
Here is an example of an array of bloggers:
[14]['bloggerOne']
[15]['bloggerTwo']
[16]['bloggerThree']
foreach ($blogger as $id => $name)
{
    $blPage = curlGet('https://www.bloglovin.com/en/blogs/1/2/' . $name);
    // Here you have to do something so that $blPage is actually different
    // on each iteration, like changing the URL
    $blPageXpath = returnXPathObject($blPage);
    $title = $blPageXpath->query('//*[@id="content"]//div/a/h2/span[1]');
    if ($title->length > 0) {
        $blogStats[$id]['title'] = $title->item(0)->nodeValue;
    }
    $url = $blPageXpath->query('//*[@id="content"]//div/a/h2/span[2]');
    if ($url->length > 0) {
        $blogStats[$id]['url'] = $url->item(0)->nodeValue;
    }
    $img = $blPageXpath->query('//*[@id="content"]//div/a/div/@href');
    if ($img->length > 0) {
        $blogStats[$id]['img'] = $img->item(0)->nodeValue;
    }
    $followLink = $blPageXpath->query('//*[@id="content"]/div[1]/div/a/@href');
    if ($followLink->length > 0) {
        $blogStats[$id]['followLink'] = 'http://www.bloglovin.com' . $followLink->item(0)->nodeValue;
    }
}
so after the foreach, your array could look like:
['12345']['title'] = whatever
['url'] = url
['img'] = foo
['followLink'] = bar
['4141']['title'] = other
['url'] = urlss
['img'] = foo
['followLink'] = bar
['7415']['title'] = still
['url'] = url4
['img'] = foo
['followLink'] = bar
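To make that concrete, here is a sketch of the per-result loop using DOMXPath's optional context-node argument (the container expression //*[@id="content"]/div is an assumption; inspect the live markup and adjust it):

$containers = $blPageXpath->query('//*[@id="content"]/div'); // assumed: one div per blogger
$blogStats = array();
foreach ($containers as $container) {
    // Relative queries ('.//...') are evaluated inside each container node,
    // so every iteration reads one blogger's fields instead of the page's first match
    $title = $blPageXpath->query('.//a/h2/span[1]', $container);
    $url   = $blPageXpath->query('.//a/h2/span[2]', $container);
    $entry = array();
    if ($title->length > 0) {
        $entry['title'] = $title->item(0)->nodeValue;
    }
    if ($url->length > 0) {
        $entry['url'] = $url->item(0)->nodeValue;
    }
    if (!empty($entry)) {
        $blogStats[] = $entry; // one sub-array per blogger
    }
}
print_r($blogStats);

The key point is the second argument to DOMXPath::query(): without it, every absolute expression matches against the whole document and item(0) always returns the same first result.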

Tumblr XML not loading

I implemented the following code based on some code I found in another question:
Select specific Tumblr XML values with PHP
function getPhoto($photos, $desiredWidth) {
    $currentPhoto = NULL;
    $currentDelta = PHP_INT_MAX;
    foreach ($photos as $photo) {
        $delta = abs($desiredWidth - $photo['max-width']);
        if ($photo['max-width'] <= $desiredWidth && $delta < $currentDelta) {
            $currentPhoto = $photo;
            $currentDelta = $delta;
        }
    }
    return $currentPhoto;
}

$request_url = "http://ACCOUNT.tumblr.com/api/read?type=photo&start=0&num=30";
//$request_url = "tumblr.xml";
$xml = simplexml_load_file($request_url);
foreach ($xml->posts->post as $post) {
    echo "<div class=\"item\"><a href='".$post['url']."'><img src='".getPhoto($post->{'photo-url'}, 250)."' width=\"250\" /></a></div>";
}
This code worked just fine on my development site, but when I pushed it live on another server, it would not load the external XML from Tumblr... It loaded the local test XML just fine (commented out in the code).
I'm waiting to get the account credentials from the client, so I can contact customer service and work with them...
In the meantime, does anyone have any ideas what might cause this?
Server settings?
Missing PHP code?
So, I ended up using cURL to load the XML... Here's the code that worked:
function getPhoto($photos, $desiredWidth) {
    $currentPhoto = NULL;
    $currentDelta = PHP_INT_MAX;
    foreach ($photos as $photo) {
        $delta = abs($desiredWidth - $photo['max-width']);
        if ($photo['max-width'] <= $desiredWidth && $delta < $currentDelta) {
            $currentPhoto = $photo;
            $currentDelta = $delta;
        }
    }
    return $currentPhoto;
}

$url = "http://ACCOUNT.tumblr.com/api/read?type=photo&start=0&num=30";
$ch = curl_init();
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_URL, $url); // get the url contents
$data = curl_exec($ch); // execute curl request
curl_close($ch); // close curl request

$xml = new SimpleXMLElement($data);
foreach ($xml->posts->post as $post) {
    echo "<div class=\"item\"><a href='".$post['url']."'><img src='".getPhoto($post->{'photo-url'}, 250)."' width=\"250\" /></a></div>";
}
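For what it's worth, a plausible explanation for the original failure (an assumption worth verifying, not something confirmed on that server) is that the live host disables allow_url_fopen, which simplexml_load_file() needs to open remote URLs while cURL does not:

// Quick check: remote simplexml_load_file() requires allow_url_fopen,
// and shared hosts frequently switch it off in production.
if (!ini_get('allow_url_fopen')) {
    echo 'allow_url_fopen is disabled on this server; fetch the XML with cURL instead.';
}

That would explain exactly the observed symptom: the local file loads fine, but any remote URL fails.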

Error Handling with LastFM API and simplexml_load_file

I'm using simplexml_load_file to pull album information from the LastFM API and having no problems when the requested album matches.
However, when the album is not found, LastFM returns an error, which causes the code below to output a "failed to open stream" error.
I can see that LastFM is giving me exactly what I need, but am unsure how to proceed. What is the proper way to update the code so that this error/error code is correctly handled?
Code:
$feed = simplexml_load_file("http://ws.audioscrobbler.com/2.0/?method=album.getinfo&api_key=".$apikey."&artist=".$artist."&album=".$album."&autocorrect=".$autocorrect);
$albums = $feed->album;
foreach ($albums as $album) {
    $name = $album->name;
    $img = $album->children();
    $img_big = $img->image[4];
    $img_small = $img->image[2];
    $releasedate = $album->releasedate;
    $newdate = date("F j, Y", strtotime($releasedate));
    if ($img == "") {
        $img = $emptyart;
    }
}
?>
if ($headers[0] == 'HTTP/1.0 400 Bad Request') {
    $img_big = $emptyart;
    $img_small = $emptyart;
}
That will break with a 403 error ...
Method 1 (not practical):
Basically you can mute the error with @ and check whether your $feed is empty. In that case "some error" has happened: either a problem with your URL or a failed status from Last.fm.
$feed = @simplexml_load_file("http://ws.audioscrobbler.com/2.0/?method=album.getinfo&api_key=".$apikey."&artist=".$artist."&album=".$album."&autocorrect=".$autocorrect);
if ($feed)
{
    // fetch the album info
}
else
{
    echo "No results found.";
}
Either way you get no results, so you can probably satisfy your needs by showing "No results found" instead of Last.fm's error status code.
Method 2 (recommended) - use cURL:
function load_file_from_url($url) {
    $curl = curl_init();
    curl_setopt($curl, CURLOPT_URL, $url);
    curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($curl, CURLOPT_REFERER, 'http://www.test.com/');
    $str = curl_exec($curl);
    if ($str === false)
    {
        echo 'Error loading feed, please try again later.';
    }
    curl_close($curl);
    return $str;
}

function load_xml_from_url($url) {
    return @simplexml_load_string(load_file_from_url($url));
}

function getFeed($feedURL)
{
    $feed = load_xml_from_url($feedURL);
    if ($feed)
    {
        if ($feed['status'] == "failed")
        {
            echo "FAIL";
        }
        else
        {
            echo "WIN";
        }
    }
    else
    {
        echo "Last.fm error";
    }
}

/* Example:
   Album: Heritage
   Artist: Opeth
*/
$url = "http://ws.audioscrobbler.com/2.0/?method=album.getinfo&api_key=273f3dc84c2d55501f3dc25a3d61eb39&artist=opeth&album=hwwXCvuiweitage&autocorrect=0";
$feed = getFeed($url);
// will echo FAIL

$url2 = "http://ws.audioscrobbler.com/2.0/?method=album.getinfo&api_key=273f3dc84c2d55501f3dc25a3d61eb39&artist=opeth&album=heritage&autocorrect=0";
$feed2 = getFeed($url2);
// will echo WIN
Wouldn't it be better to fetch the URL first, check that it doesn't contain the error, and then use simplexml_load_string instead?
Working along the lines of evaluating the state of the URL prior to passing it to simplexml_load_file, I tried get_headers, and that, coupled with an if/else statement, seems to have gotten things working.
$url = "http://ws.audioscrobbler.com/2.0/?method=album.getinfo&api_key=".$apikey."&artist=".$artist."&album=".$album."&autocorrect=".$autocorrect;
$headers = get_headers($url, 1);
if ($headers[0] == 'HTTP/1.0 400 Bad Request') {
$img_big = $emptyart;
$img_small = $emptyart;
}
else {
$feed = simplexml_load_file($url);
$albums = $feed->album;
foreach($albums as $album) {
$name = $album->name;
$img = $album->children();
$img_big = $img->image[4];
$img_small = $img->image[2];
$releasedate = $album->releasedate;
$newdate = date("F j, Y", strtotime($releasedate));
if ($img == "") {
$img_big = $emptyart;
$img_small = $emptyart;
}
}
}
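Another option worth sketching (an alternative I'm suggesting, not the accepted approach above): let libxml collect the errors itself, which avoids both @-suppression and the extra HTTP request that get_headers() makes. Depending on the failure mode the HTTP wrapper may still emit a stream warning, so test both paths:

libxml_use_internal_errors(true); // capture load/parse errors instead of emitting warnings
$feed = simplexml_load_file($url);
if ($feed === false) {
    foreach (libxml_get_errors() as $error) {
        error_log('Last.fm feed error: ' . trim($error->message));
    }
    libxml_clear_errors();
    $img_big = $img_small = $emptyart; // fall back to placeholder art
}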
