I just can't seem to solve this. I want to get the media:thumbnail from an RSS feed (http://feeds.bbci.co.uk/news/rss.xml).
I did some research and tried to incorporate insights from
https://stackoverflow.com/questions/6707315/getting-xml-attribute-from-mediathumbnail-in-bbc-rss-feed
and from other sources.
This is what I got:
$source_link = "http://feeds.bbci.co.uk/news/rss.xml";
$source_xml = simplexml_load_file($source_link);
$namespace = "http://search.yahoo.com/mrss/";
foreach ($source_xml->channel->item as $rss) {
    $title = $rss->title;
    $description = $rss->description;
    $link = $rss->link;
    $date_raw = $rss->pubDate;
    $date = date("Y-m-j G:i:s", strtotime($date_raw));
    $image = $rss->attributes($namespace);
    print_r($image);
}
When I run the script, all I see is a blank page. If I echo or print_r any of the other variables, it works like a charm; it's just the $image one that poses problems. Why isn't this working? Thanks for any help!
OK, it works now. I replaced
$image = $rss->attributes($namespace);
with
$image = $rss->children($namespace)->thumbnail[1]->attributes();
$image_link = $image['url'];
and it works like a charm now.
Based on this blog post, titled "Processing media:thumbnail in RSS feeds with php".
The solution that I found works best simply loads the XML file as a string, then finds and replaces 'media:thumbnail' with a correctly formatted 'thumbnail', and lastly converts it back to XML with simplexml_load_string:
$xSource = 'http://feeds.bbci.co.uk/news/rss.xml';
$xsourcefile = file_get_contents($xSource);
$xsourcefile = str_replace("media:thumbnail", "thumbnail", $xsourcefile);
$xml = simplexml_load_string($xsourcefile);

foreach ($xml->channel->item as $item) {
    echo ':' . $item->title . '<BR>';
    echo ':' . $item->thumbnail['url'] . '<BR>';
}
$image = $rss->attributes($namespace);
This says "Give me all attributes of this <item> element which are in the media namespace". There are no attributes on the item element (much less any in the media namespace), so this returns nothing.
You want this:
$firstimage = $rss->children($namespace)->thumbnail[0];
BTW, when you use SimpleXML you need to be careful to cast your SimpleXMLElements to string when you need the text value of the element. Something like $rss->title is a SimpleXMLElement, not a string.
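Putting both points together, a minimal sketch of the corrected loop (assuming the BBC feed still carries a media:thumbnail element on each item):
$source_xml = simplexml_load_file("http://feeds.bbci.co.uk/news/rss.xml");
$namespace = "http://search.yahoo.com/mrss/";

foreach ($source_xml->channel->item as $rss) {
    // children($namespace) selects the media:* child elements of this <item>
    $thumb = $rss->children($namespace)->thumbnail[0];
    if ($thumb !== null) {
        // cast to string to get plain text out of the SimpleXMLElements
        $image_link = (string) $thumb['url'];
        $title = (string) $rss->title;
        echo $title . ': ' . $image_link . '<br>';
    }
}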
I am adding an RSS feed to my website. I created the RSS.xml index file and next I want to display its contents in a nicely formatted way in a webpage.
Using PHP, I can do this:
$index = file_get_contents ($path . 'RSS.xml');
echo $index;
But all that does is dump the contents as a long stream of text with the tags removed.
I know that treating RSS.xml as a link, like this:
<a href="../blogs/RSS.xml">
<img src="../blogs/feed-icon-16.gif">Blog Index
</a>
causes my browser to parse and display it in a reasonable way when the user clicks on the link. However, I want to embed it directly in the web page and not make the user go through another click.
What is the proper way to do what I want?
Use the following code:
include_once('Simple/autoloader.php');

$feed = new SimplePie();
$feed->set_feed_url($url);
$feed->enable_cache(false);
$feed->set_output_encoding('utf-8');
$feed->init();

$items = $feed->get_items();
foreach ($items as $item) {
    // Get the title, permalink, description, and date of each item
    $title = $item->get_title();
    $url   = $item->get_permalink();
    $desc  = $item->get_description();
    $date  = $item->get_date();
}
Download the Simple folder from: https://github.com/jewelhuq/Online-News-Grabber/tree/master/worldnews/Simple
Hope it works for you. Here, $url means your RSS feed URL. If it works for you, please respond.
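If you also want to render the items on the page, here is a minimal sketch reusing the same SimplePie getters inside the loop (the date format string is just an illustrative choice):
foreach ($feed->get_items() as $item) {
    // link each title to the original article, then print the date and summary
    echo '<h3><a href="' . $item->get_permalink() . '">' . $item->get_title() . '</a></h3>';
    echo '<p>' . $item->get_date('j F Y') . '</p>';
    echo '<p>' . $item->get_description() . '</p>';
}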
Turns out, it's simple using the PHP XML parser function:
$xml = simplexml_load_file($path . 'RSS.xml');

$channel = $xml->channel;
$channel_title = $channel->title;
$channel_description = $channel->description;

echo "<h1>$channel_title</h1>";
echo "<h2>$channel_description</h2>";

foreach ($channel->item as $item) {
    $title = $item->title;
    $link  = $item->link;
    $descr = $item->description;

    echo "<h3><a href='$link'>$title</a></h3>";
    echo "<p>$descr</p>";
}
I am working on a web scraping application using simple_html_dom. I need to extract all the images in a web page. These are the possibilities:
<img> tag images
CSS inside a <style> tag on the same page
an image with an inline style on a <div> or some other tag
I can scrape all the images by using the following code.
function download_images($html, $page_url, $local_url) {
    foreach ($html->find('img') as $element) {
        $img_url = $element->src;
        $img_url = rel2abs($img_url, $page_url);
        $parts = parse_url($img_url);
        $img_path = $parts['path'];
        $url_to_be_change = $GLOBALS['website_server_root'] . $img_path;
        download_file($img_url, $GLOBALS['website_local_root'] . $img_path);
        $element->src = $url_to_be_change;
    }

    $css_inline = $html->find("style");
    $matches = array();
    preg_match_all("/url\((.*?)\)/", $css_inline, $matches, PREG_SET_ORDER);
    foreach ($matches as $match) {
        $img_url = trim($match[1], "\"'");
        $img_url = rel2abs($img_url, $page_url);
        $parts = parse_url($img_url);
        $img_path = $parts['path'];
        $url_to_be_change = $GLOBALS['website_server_root'] . $img_path;
        download_file($img_url, $GLOBALS['website_local_root'] . $img_path);
        $html = str_replace($img_url, $url_to_be_change, $html);
    }
    return $html;
}

$html = download_images($html, $page_url, $dir); // working fine
$html = str_get_html($html);
$html->save($dir . "/" . $ff);
Please note that I am modifying the HTML too, after downloading the images.
The downloading works fine, but when I try to save the HTML, it gives the following error:
PHP Fatal error: Cannot use object of type simple_html_dom as array in /var/www/html/app/framework/cache/includes/simple_html_dom.php on line 1167
Important: it works perfectly fine if I don't use str_replace and the second loop.
Guess №1
I see a possible mistake here:
$html = str_get_html($html);
It looks like you pass an object to the function str_get_html(), while it accepts a string as an argument. Let's fix that this way:
$html = str_get_html($html->plaintext);
We can only guess what the content of the $html variable is by the time it reaches this piece of code.
Guess №2
Or maybe we just need to use another variable in the function download_images to make your code correct in both cases:
function download_images($html, $page_url, $local_url) {
    foreach ($html->find('img') as $element) {
        $img_url = $element->src;
        $img_url = rel2abs($img_url, $page_url);
        $parts = parse_url($img_url);
        $img_path = $parts['path'];
        $url_to_be_change = $GLOBALS['website_server_root'] . $img_path;
        download_file($img_url, $GLOBALS['website_local_root'] . $img_path);
        $element->src = $url_to_be_change;
    }

    // collect the CSS text of all <style> tags as one string for the regex
    $css_inline = '';
    foreach ($html->find("style") as $style) {
        $css_inline .= $style->innertext;
    }

    // work on a string copy of the DOM, so the simple_html_dom object
    // itself is never passed to str_replace
    $result_html = $html->save();
    $matches = array();
    preg_match_all("/url\((.*?)\)/", $css_inline, $matches, PREG_SET_ORDER);
    foreach ($matches as $match) {
        $img_url = trim($match[1], "\"'");
        $img_url = rel2abs($img_url, $page_url);
        $parts = parse_url($img_url);
        $img_path = $parts['path'];
        $url_to_be_change = $GLOBALS['website_server_root'] . $img_path;
        download_file($img_url, $GLOBALS['website_local_root'] . $img_path);
        $result_html = str_replace($img_url, $url_to_be_change, $result_html);
    }
    return $result_html;
}

$html = download_images($html, $page_url, $dir); // working fine
$html = str_get_html($html);
$html->save($dir . "/" . $ff);
Explanation: if there were no matches (the array $matches is empty), we never enter the second loop, which is why the variable $html still had the same value as at the beginning of the function. This is a common mistake when you're trying to use the same variable in a place in the code where you need two different variables.
As the error message states, you are dealing with an Object where you should have an array.
You could try typecasting your object:
$array = (array) $yourObject;
That should solve it.
I had this error, and I solved it by using (in my case) return $html->save(); at the end of the function.
I can't explain why two instances with different variable names, scoped in different functions, caused this error. I guess this is just how the simple_html_dom class works.
So just to be clear: try $html->save() before you do anything else afterwards.
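A minimal sketch of what that looks like (assuming $html is a simple_html_dom object, and $dir and $ff are as in the question):
// save() serializes the DOM back to a string; with a path it also writes a file
$output = $html->save();         // get the document as a string first
$html->save($dir . "/" . $ff);   // or write it straight to disk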
I hope this information helps somebody :)
I am trying to fetch the content inside a <div> via file_get_contents. What I want to do is to fetch the content from the div resultStats on google.com. My problem is (afaik) printing it.
A bit of code:
$data = file_get_contents("https://www.google.com/?gws_rd=cr&#q=" . $_GET['keyword'] . "&gws_rd=ssl");
preg_match("#<div id='resultStats'>(.*?)<\/div>#i", $data, $matches);
Simply using
print_r($matches);
only returns Array(), but I want to preg_match the number. Any help is appreciated!
Edit: Thanks for showing me the right direction! I got rid of the preg_ call and went for DOM instead. Although I am pretty new to PHP and this is giving me a headache; I found this code here on Stack Overflow and I am trying to edit it to get it to work. So far I only receive a blank page, and I don't know what I am doing wrong.
if (isset($_GET['keyword'])) {
    $str = file_get_contents("https://www.google.com/search?source=hp&q=" . $_GET['keyword'] . "&gws_rd=ssl");
    $DOM = new DOMDocument;
    #$dom->loadHTML($str);

    // get
    $items = $DOM->getElementsByTagName('resultStats');

    // print
    for ($i = 0; $i < $items->length; $i++) {
        echo $items->item($i)->nodeValue . "<br/>";
    }
} else {
    exit("No keyword!");
}
Posted on behalf of the OP.
I decided to use the PHP Simple HTML DOM Parser and ended up with something like this:
include_once('/simple_html_dom.php');

$setDomain = "https://www.google.com/search?source=hp&q=" . $_GET['keyword'] . "&gws_rd=ssl";
$html = file_get_html($setDomain); // file_get_html() already returns a parsed DOM object
echo $html->find('div div[id=resultStats]', 0)->innertext . '<br>';
Problem solved!
My scraping code works for just about every site I've come across while testing... except for nytimes.com articles. I use AJAX with the following PHP code (I've left out some details to focus on my specific problem):
$link = "http://www.nytimes.com/2014/02/07/us/huge-leak-of-coal-ash-slows-at-north-carolina-power-plant.html?hp";
$article = new DOMDocument;
$article->loadHTMLFile($link);
//generate image array
$images = $article->getElementsByTagName("img");
foreach ($images as $image) {
$source = $image->getAttribute("src");
echo '<img src="' . $source . '" alt="alt"><br><br>';
}
My problem is that the main images on nytimes pages don't even seem to get picked up by the getElementsByTagName. Pinterest finds a way to scrape the main images from this site for example: http://www.nytimes.com/2014/02/07/us/huge-leak-of-coal-ash-slows-at-north-carolina-power-plant.html?hp whereas I cannot. Any suggestions?
OK, so this is what I tried, as I found your question interesting.
When I run this in the browser console using jQuery, I do get results for the images. My query was:
var a= new Array();
$('img[src]').each(function(){ a.push($(this).attr('src'));});
console.log(a);
Also see the screenshot of the results.
Note that console.log(arrayname) works in the Chrome browser.
So ideally your code should work. Please consider adding an is_null check like I've done.
Below is the code, where I try loading the URL using a different approach (perhaps a better one, too) and get to the root cause of why you get only a single image, the NYT logo.
The resulting HTML screenshot is attached.
<?php
$html = file_get_contents("http://www.nytimes.com/2014/02/07/us/huge-leak-of-coal-ash-slows-at-north-carolina-power-plant.html?hp");
echo $html;

$doc = new DOMDocument();
$doc->strictErrorChecking = false;
$doc->recover = true;
@$doc->loadHTML("<html><body>" . $html . "</body></html>");

$xpath = new DOMXpath($doc);
$images = $xpath->query("//*/img");
if (!is_null($images)) {
    echo $images->length; // number of images found
    foreach ($images as $image) {
        $source = $image->getAttribute('src');
        echo '<img src="' . $source . '" alt="alt"><br><br>';
    }
}
?>
You can't get the content via the feed unless you are authenticated.
You can try:
using the context parameter in the file_get_contents method (see the sketch below)
consuming the RSS/Atom feeds of the article
downloading the page as HTML and then loading it via the file_get_contents method. PS: This works.
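For the first suggestion, a minimal sketch of passing a context to file_get_contents(), reusing $link from the question (the User-Agent value is only an illustrative stand-in for a browser-like header):
// some sites serve reduced markup to clients without a browser-like User-Agent
$context = stream_context_create([
    'http' => [
        'header' => "User-Agent: Mozilla/5.0 (compatible; ExampleScraper/1.0)\r\n",
    ],
]);
$html = file_get_contents($link, false, $context);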
I'm trying to read the XML information that Tumblr provides in order to create a kind of news feed from my Tumblr, but I'm very stuck.
<?php
$request_url = 'http://candybrie.tumblr.com/api/read?type=post&start=0&num=5&type=text';
$xml = simplexml_load_file($request_url);

if (!$xml) {
    exit('Failed to retrieve data.');
} else {
    foreach ($xml->posts[0] AS $post) {
        $title = $post->{'regular-title'};
        $post = $post->{'regular-body'};
        $small_post = substr($post, 0, 320);
        echo .$title.;
        echo '<p>'.$small_post.'</p>';
    }
}
?>
This always breaks as soon as it tries to go through the nodes, so basically "tumblr->posts;....ect" is displayed on my HTML page.
I've tried saving the information as a local XML file. I've tried different ways to create the SimpleXML object, like loading it as a string (probably a silly idea). I double-checked that my web hosting runs PHP 5. So basically, I'm stuck on why this isn't working.
EDIT: OK, I tried changing where I start from (back to the original way it was; starting from tumblr was just another, actually silly, way to try to fix it). It still breaks right after the first ->, so it displays "posts[0] AS $post....ect" on screen.
This is the first thing I've ever done in PHP, so there might be something obvious that I should have set up beforehand. I don't know, and I couldn't find anything like that, though.
This should work:
<?php
$request_url = 'http://candybrie.tumblr.com/api/read?type=post&start=0&num=5&type=text';
$xml = simplexml_load_file($request_url);

if (!$xml) {
    exit('Failed to retrieve data.');
} else {
    foreach ($xml->posts[0] AS $post) {
        $title = $post->{'regular-title'};
        $post = $post->{'regular-body'};
        $small_post = substr($post, 0, 320);
        echo $title;
        echo '<p>'.$small_post.'</p>';
        echo '<hr>';
    }
}
The first problem in your code is that you used the root element, which should not be used.
<?php
$request_url = 'http://candybrie.tumblr.com/api/read?type=post&start=0&num=5&type=text';
$xml = simplexml_load_file($request_url);

if (!$xml) {
    exit('Failed to retrieve data.');
} else {
    foreach ($xml->posts->post as $post) {
        $title = $post->{'regular-title'};
        $post = $post->{'regular-body'};
        $small_post = substr($post, 0, 320);
        echo $title;
        echo '<p>'.$small_post.'</p>';
    }
}
?>
$xml->posts returns you the posts nodes, so if you want to iterate the post nodes you should try $xml->posts->post, which gives you the ability to iterate through the post nodes inside the first posts node.
Also, as Needhi pointed out, you shouldn't pass through the root node (tumblr), because $xml itself represents the root node. (So I fixed my answer.)