Google news feed content - PHP

So let's say I have a Google News feed, like this: https://news.google.com/news/feeds?pz=1&cf=all&ned=no_no&hl=no&q=%22something%22&output=atom&num=1
Grabbing the title, author and link would be easy, but how would I go about getting, say, the first 200 characters of the content? It's full of HTML, and mixed in with the title and author as well.
I could use strip_tags() on it, but it would still be a mess.
Is there any way to make Google return a ['description'], maybe?
Or are there perhaps other good news feeds that give me the content in a way that's easier to manage?
[edit]
Update on how I ended up doing it.
// Load the Atom feed; @ suppresses warnings from occasionally malformed XML
$news = @simplexml_load_string(file_get_contents('https://news.google.com/news/feeds?pz=1&cf=all&ned=no_no&hl=no&q=%22molde+fotballklubb%22+OR+%22tornekrattet%22+OR+%22mfk%22+OR+%22oddmund+bjerkeset%22+-%22moss%22&output=atom&num=1'), 'SimpleXMLElement', LIBXML_NOCDATA);
$data = get_object_vars($news->{'entry'});
// Google wraps each part of the content in <font size="-1">, so split on that tag
$test = explode('<font size="-1">', $data['content']);
$link = get_object_vars($data['link']);
$return['title'] = strip_tags($test[0]);
$return['author'] = strip_tags($test[1]);
$return['description'] = strip_tags($test[2]);
// SimpleXML exposes attributes under the '@attributes' key
$return['link'] = $link['@attributes']['href'];
It is still not working properly, but that's because the feed structures the content differently every time. Sometimes the content of the news article itself will just be metadata like the authors and image descriptions.
And splitting it up by HTML tags, when the HTML changes from time to time, causes some problems. But I can't figure out any other way of doing it with this feed.

You could try loading the HTML into a DOMDocument instance and extracting the parts you need, or use a wrapper for it like Goutte, which makes it a lot easier to extract the portions you need.
http://php.net/manual/en/class.domdocument.php
https://github.com/fabpot/Goutte
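
For instance, here is a minimal sketch of that approach, assuming $data['content'] holds the entry's HTML the way the question's update does; loadHTML() tolerates the broken markup feeds often contain:

$doc = new DOMDocument();
// The <?xml encoding hint is a common trick to make loadHTML treat the input as UTF-8
@$doc->loadHTML('<?xml encoding="UTF-8">' . $data['content']);

// Collapse the markup to plain text and keep the first 200 characters as a description
$text = trim(preg_replace('/\s+/', ' ', $doc->documentElement->textContent));
$description = mb_substr($text, 0, 200);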

Related

Scraping an IFrame inside an HTML page with values loaded using an Ajax request

I need to scrape this HTML page using PHP ...
http://www.cittadellasalute.to.it/index.php?option=com_content&view=article&id=6786:situazione-pazienti-in-pronto-soccorso&catid=165:pronto-soccorso&Itemid=372
... I need to extract the numbers for the rows "Rosso", "Giallo", "Verde" and "Bianco" (note that these numbers are dynamic, so they can change when you refresh the page, but that doesn't matter).
I've seen that these rows are inside some IFrames (for example ... http://listeps.cittadellasalute.to.it/?id=01090201 ), and the values are loaded using an Ajax request (for example http://listeps.cittadellasalute.to.it/gtotal.php?id=01090101).
Are there any solutions for scraping these values directly from the original HTML page using PHP and $xpath->query? I'd like to avoid parsing the individual JSON responses.
Suggestions / examples?
I think the problem is that the values aren't in the original page; they are built once the page is loaded. So you would need to use something which honours all the JavaScript functionality (i.e. Selenium WebDriver), which is a bit overkill for what you want to do (I assume). It is much easier to process the IFrames directly.
You could extract the URLs of the IFrames from the original page ...
$url = "http://www.cittadellasalute.to.it/index.php?option=com_content&view=article&id=6786:situazione-pazienti-in-pronto-soccorso&catid=165:pronto-soccorso&Itemid=372";
$pageContents = file_get_contents($url);
// The page is XHTML, so SimpleXML can load it; LIBXML_NOERROR suppresses parse errors
$page = simplexml_load_string($pageContents, "SimpleXMLElement", LIBXML_NOERROR);
// Register the document's default namespace so XPath can address the elements
$ns = $page->getDocNamespaces();
$page->registerXPathNamespace('def', array_values($ns)[0]);
$iframes = $page->xpath("//def:iframe");
foreach ($iframes as $frame) {
    echo "iframe:" . $frame['src'] . PHP_EOL;
}
Which gives (just now):
iframe:http://listeps.cittadellasalute.to.it/?id=01090101
iframe:http://listeps.cittadellasalute.to.it/?id=01090201
iframe:http://listeps.cittadellasalute.to.it/?id=01090301
iframe:http://listeps.cittadellasalute.to.it/?id=01090302
You can then process these pages.
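
The question notes that the numbers themselves are served by an Ajax endpoint (gtotal.php), so a rough sketch, untested against the live site, would be to fetch that endpoint directly for each IFrame id:

foreach ($iframes as $frame) {
    // Pull the id parameter out of e.g. http://listeps.cittadellasalute.to.it/?id=01090201
    parse_str(parse_url((string)$frame['src'], PHP_URL_QUERY), $params);
    // Fetch the Ajax endpoint the IFrame page itself would call
    $data = file_get_contents('http://listeps.cittadellasalute.to.it/gtotal.php?id=' . urlencode($params['id']));
    echo $params['id'] . ' => ' . $data . PHP_EOL; // inspect the raw response to see its format
}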

Scrape Text With PHP & Display On Website

I am a complete beginner with PHP. I understand the concepts but am struggling to find a tutorial I understand. My goal is this:
Use the XPath add-ons for Firefox to select which piece of text I would like to scrape from a site
Format the scraped text properly
Display the text on a website
Example:
<?php
// Get the HTML source code
$url = 'http://steamcommunity.com/profiles/76561197967713768';
$source = file_get_contents($url);

// DOM document creation; @ suppresses warnings about invalid markup
$doc = new DOMDocument;
@$doc->loadHTML($source);

// DOM XPath creation
$xpath = new DOMXPath($doc);

// query() returns a DOMNodeList, not a string, so echo the first match's text
$username = $xpath->query('//html/body/div[3]/div[1]/div/div/div/div[3]/div[1]');
echo $username->item(0)->nodeValue;
?>
In this example, I would like to scrape the username (which at the time of writing is mopar410).
Thank you for your help - I am so lost :( Right now I managed to use XPath with importXML in Google Docs spreadsheets and that works, but I would like to be able to do this on my own site with PHP, to learn how.
This is code I found online; I edited the URL and the variable, as I am not aware of how to write this myself.
They have a public API.
Simply use http://steamcommunity.com/profiles/STEAM_ID/?xml=1
<?php
// LIBXML_NOCDATA makes the CDATA-wrapped values readable as plain strings
$profile = simplexml_load_file('http://steamcommunity.com/profiles/76561197967713768/?xml=1', 'SimpleXMLElement', LIBXML_NOCDATA);
echo (string)$profile->steamID;
Outputs: mopar410 (at time of writing)
This also provides other information such as mostPlayedGame, hoursPlayed, etc. (look at the XML node names).
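
If you are unsure which nodes exist, here is a small sketch (reusing $profile from above) that simply dumps the top-level node names and values:

// List every top-level node in the profile XML so you can see what is available
foreach ($profile->children() as $node) {
    echo $node->getName() . ': ' . trim((string)$node) . PHP_EOL;
}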

How to get page contents

I'm trying to make a recent-news-like functionality for my site. For this I've made a web crawler, and I have been able to collect links from a page up till now by doing the following:
$dom = new DOMDocument;
$dom->preserveWhiteSpace = false;           // must be set before loading
@$dom->loadHTML(file_get_contents($url));   // @ suppresses invalid-markup warnings
$linksToStore = $dom->getElementsByTagName('a');
foreach ($linksToStore as $tag) {
    // Map each href to the link's text; textContent is safe even for empty anchors
    $links[$tag->getAttribute('href')] = $tag->textContent;
}
How can I get the contents of the pages pointed to by those links that relate to a particular domain, which in my case is 'Medical'?
Use the http://simplehtmldom.sourceforge.net/ library to extract contents from the page. Its selectors work the same way as jQuery's, which makes it very familiar and efficient for extracting the contents.
Also, check http://davidwalsh.name/php-notifications to learn more.
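
For instance, here is a rough, untested sketch that walks the $links array collected in the question and pulls the paragraph text from each linked page (assuming you have downloaded simple_html_dom.php from the link above):

include 'simple_html_dom.php';

foreach ($links as $href => $text) {
    $html = file_get_html($href);
    if (!$html) {
        continue; // skip pages that fail to load
    }
    // jQuery-like selector: grab the text of every paragraph
    foreach ($html->find('p') as $paragraph) {
        echo $paragraph->plaintext . PHP_EOL;
    }
    $html->clear(); // free memory; simple_html_dom holds references otherwise
}

Deciding whether a page belongs to your 'Medical' domain is then a matter of checking that text for your keywords.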

How to get post URL out of the Blogger API in PHP

In short, I am pulling the feed from my Blogger blog using the Zend API in PHP. I need to get the URL that links to that post on Blogger. What is the order of functions I need to call to get that URL?
Right now I am pulling the data using:
$query = new Zend_Gdata_Query('http://www.blogger.com/feeds/MYID/posts/default');
$query->setParam('max-results', "1");
$feed = $gdClient->getFeed($query);
$newestPost = $feed->entry[0];
I cannot for the life of me figure out where I have to go from here to get the URL. I can successfully get the post title using $newestPost->getTitle(), and I can get the body using $newestPost->getContent()->getText(). I have tried a lot of function calls, even ones in the documentation, and most of them error out. I have printed out the entire object to look through it, and I can find the data I want (so I know it is there), but the object is too complex to just look at and see how to get to that data.
If anyone can help me or at least point me to a good explanation of how that Object is organized and how to get to each sub object within it, that would be greatly appreciated.
EDIT: Never mind, I figured it out.
You are almost there; once you have your feed entry, all you need to do is access the link element inside it. I like pretty URLs, so I went with the alternate link rather than the self link in the Atom feed.
$link = $entry->link[4]->href;
where $entry is the entry that you are reading from the feed.
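
Note that hard-coding index 4 is fragile, since the order of the link elements can change. A safer sketch, assuming the Zend_Gdata link objects expose rel and href the way the line above implies, is to search for the rel="alternate" link:

// Pick the rel="alternate" link (the public post URL) instead of relying on position
$postUrl = null;
foreach ($entry->link as $link) {
    if ($link->rel === 'alternate') {
        $postUrl = $link->href;
        break;
    }
}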
The solution is:
$query = new Zend_Gdata_Query('http://www.blogger.com/feeds/MyID/posts/default');
$query->setParam('max-results', "1");
$feed = $gdClient->getFeed($query);
$newestPost = $feed->entry[0];
$body = $newestPost->getContent()->getText();
$body now contains the post contents of the latest post (entry[0]) from the feed. This is just the contents of the body of the post, without the title or any other data or formatting.

Getting info from Wikipedia - how do I get it in HTML form?

I'm using cURL to retrieve information from Wikipedia. So far I've been successful in retrieving basic text information, but I really want to retrieve it as HTML.
Here is my code:
$s = curl_init();

// First, find the article's URL via a Yahoo BOSS site search
$url = 'http://boss.yahooapis.com/ysearch/web/v1/site:en.wikipedia.org+'.$article_name.'?appid=myID';
curl_setopt($s, CURLOPT_URL, $url);
curl_setopt($s, CURLOPT_HEADER, false);
curl_setopt($s, CURLOPT_RETURNTRANSFER, 1);
$rs = curl_exec($s);
$rs = Zend_Json::decode($rs);
$results = $rs['ysearchresponse']['resultset_web'];
$first = array_shift($results);
$article = str_replace('http://en.wikipedia.org/wiki/', '', $first['url']);

// Then fetch the article's raw wikitext from the MediaWiki API
$url = 'http://en.wikipedia.org/w/api.php?';
$url .= 'format=json';
$url .= sprintf('&action=query&titles=%s&rvprop=content&prop=revisions&redirects=1', $article);
curl_setopt($s, CURLOPT_URL, $url);
curl_setopt($s, CURLOPT_HEADER, false);
curl_setopt($s, CURLOPT_RETURNTRANSFER, 1);
$rs = curl_exec($s);
//curl_close($s);

// Dig down to query -> pages -> <pageid> -> revisions -> first revision's '*'
$rs = Zend_Json::decode($rs);
$pages = $rs['query']['pages'];
$page = array_pop($pages);
$revision = array_shift($page['revisions']);
$articleText = $revision['*'];
However, the text retrieved this way isn't good enough to be displayed :( It's all in this kind of format:
'''Aix-les-Bains''' is a [[Communes of France|commune]] in the [[Savoie]] [[Departments of France|department]] in the [[Rhône-Alpes]] [[regions of France|region]] in southeastern [[France]].

It lies near the [[Lac du Bourget]], {{convert|9|km|mi|abbr=on}} by rail north of [[Chambéry]].

==History==
''Aix'' derives from [[Latin]] ''Aquae'' (literally, "waters"; ''cf'' [[Aix-la-Chapelle]] (Aachen) or [[Aix-en-Provence]]), and Aix was a bath during the [[Roman Empire]], even before it was renamed ''Aquae Gratianae'' to commemorate the [[Emperor Gratian]], who was assassinated not far away, in [[Lyon]], in [[383]]. Numerous Roman remains survive. [[Image:IMG 0109 Lake Promenade.jpg|thumb|left|Lac du Bourget Promenade]]
How do I get the HTML of the Wikipedia article?
UPDATE: Thanks, but I'm kinda new to this and right now I'm trying to run an XPath query [albeit for the first time] and can't seem to get any results. I actually need to know a couple of things here:
How do I request just a part of an article?
How do I get the HTML of the article requested?
I went through this URL on data mining from Wikipedia - it gave me the idea of making a second request to the Wikipedia API with the retrieved wikitext as a parameter, which would return the HTML - although that hasn't worked so far :( I don't want to just grab the whole article as a mess of HTML and dump it. Basically, what my application does is this: you have some locations and cities pinpointed on a map - you click on a city marker and it requests, via Ajax, details of the city to be shown in an adjacent div. This information I wish to get from Wikipedia dynamically. I'll worry about dealing with articles that don't exist for a particular city later on; I just need to make sure it's working at this point.
Does anyone know of a nice working example that does what I'm looking for, i.e. reads and parses through selected portions of a Wikipedia article?
According to the URL provided, I should POST the wikitext to the Wikipedia API location for it to return parsed HTML. The issue is that if I POST the information I get no response, and instead an error that I'm denied access - however, if I include the wikitext as GET, it parses with no issue. But of course it fails when I have waaaaay too much text to parse.
Is this a problem with the Wikipedia API? I've been hacking at it for two days now with no luck at all :(
The simplest solution would probably be to grab the rendered page itself (e.g. http://en.wikipedia.org/wiki/Combination ) and then extract the content of <div id="content">, potentially with an XPath query.
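A minimal sketch of that idea, with the caveat that the id of the content div can vary between Wikipedia skins:

$doc = new DOMDocument();
// @ suppresses the warnings that real-world HTML inevitably triggers
@$doc->loadHTML(file_get_contents('http://en.wikipedia.org/wiki/Combination'));

$xpath = new DOMXPath($doc);
$nodes = $xpath->query('//div[@id="content"]');
if ($nodes->length > 0) {
    // saveHTML() with a node argument returns just that subtree as HTML
    echo $doc->saveHTML($nodes->item(0));
}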
There is a PEAR wiki filter that I have used, and it does a very decent job: Text_Wiki.
Phil
Try looking at the printable version of the desired Wikipedia article in question.
In other words, change this line of your source code:
$url.=sprintf('&action=query&titles=%s&rvprop=content&prop=revisions&redirects=1', $article);
to something like:
$url.=sprintf('&action=query&titles=%s&printable=yes&redirects=1', $article);
Disclaimer: Have not tested, and this is just a guess at how your API might work.
As far as I understand it, the Wikipedia software converts the wiki markup into HTML when the page is requested. So using your current method, you'll need to deal with the results yourself.
A good place to start is the MediaWiki API. You can also use http://pear.php.net/package/Text_Wiki to format the results retrieved via cURL.
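As a sketch of the API route (untested): action=parse returns the article already rendered as HTML. POSTing avoids the URL-length problem you ran into, and Wikipedia expects a descriptive User-Agent header; the one below is a placeholder you should replace with your own:

$s = curl_init('http://en.wikipedia.org/w/api.php');
curl_setopt($s, CURLOPT_POST, true);
curl_setopt($s, CURLOPT_POSTFIELDS, http_build_query(array(
    'action' => 'parse',
    'page'   => 'Aix-les-Bains',
    'format' => 'json',
)));
curl_setopt($s, CURLOPT_RETURNTRANSFER, true);
curl_setopt($s, CURLOPT_USERAGENT, 'MyCityInfoApp/1.0 (contact@example.com)'); // placeholder
$rs = Zend_Json::decode(curl_exec($s));
curl_close($s);

$html = $rs['parse']['text']['*']; // the rendered HTML of the whole article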
