Getting an XML feed and outputting it - PHP

Hello, I would like to get the XML feed for my site:
http://buildworx-mc.com/forum/syndication.php?fid=4&limit=5
and display the items in this format:
<ul>
<li> Topic 1 </li>
<li> Topic 2 </li>
<li> Topic 3 </li>
</ul>
I guess the best/easiest method is PHP, so how can I get the XML and display the items as a list? The list will be displayed on http://example.com while the feed is at http://example.com/forum
I tried the answers from other questions, but nothing seems to work for me.

You may need to use the function file_get_contents to get a copy of the remote file so PHP can parse it. I'm surprised this step is necessary: since you want to display items from your own forum on your own site, you might be able to pass the feed URL straight to the parser, assuming everything is under the same domain. If not, this ought to work:
$feed = file_get_contents('http://buildworx-mc.com/forum/syndication.php?fid=4&limit=5');
$xml = simplexml_load_string($feed);
$items = $xml->channel->item;

echo '<ul>';
foreach ($items as $item) {
    // $item->link, $item->pubDate and $item->description are also available
    echo '<li>' . htmlspecialchars((string) $item->title) . '</li>';
}
echo '</ul>';

Since you are using PHP, you may try SimpleXML:
http://php.net/manual/en/book.simplexml.php
Just load that URL with simplexml_load_file(), which converts the XML file into an object:
http://www.php.net/manual/en/function.simplexml-load-file.php
Then iterate over the object with a simple foreach to generate the HTML list you want.
For testing purposes, and to understand how the object is structured, you may use print_r().
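A minimal sketch, assuming the feed is RSS 2.0 with the usual channel/item structure (as the answer above assumes):

<?php
// Load the feed URL directly into a SimpleXMLElement
$xml = simplexml_load_file('http://buildworx-mc.com/forum/syndication.php?fid=4&limit=5');

// Uncomment to inspect the object structure first
// print_r($xml);

echo '<ul>';
foreach ($xml->channel->item as $item) {
    echo '<li>' . htmlspecialchars((string) $item->title) . '</li>';
}
echo '</ul>';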

Related

SimpleXML Parsing Feed with children

I've seen different versions of this question asked but nothing that specifically answered mine.
I'm trying to parse this RSS feed, which pulls the results of a search on a pet adoption site and turns them into an RSS/Atom feed.
<?php
//RSS solution
$feed = simplexml_load_file('http://www.serverstrategies.com/rss.php?sid=WV87&len=20&rand=0&drop_1=');
$children = $feed->children('http://www.w3.org/2005/Atom');
echo $children->entry[1]->item[0]->title;
?>
I've tried a lot of different variations of this but I've yet to get anything to print out.
Hopefully this solution will help you get all the titles and descriptions.
<?php
$feed = simplexml_load_file('http://www.serverstrategies.com/rss.php?sid=WV87&len=20&rand=0&drop_1=');
$result = array();
foreach ($feed->channel->item as $node) {
    $result[] = array(
        'title'       => (string) $node->title,
        'description' => strip_tags((string) $node->description),
    );
}
print_r($result);

PHP-Retrieve specific content from multiple pages of a website

What I want to accomplish might be a little hardcore, but I want to know if it's possible:
The question:
My question is the same as PHP-Retrieve content from page, but I want to use it on multiple pages.
The situation:
I'm using a website about TV shows. All the TV shows have the same URL and then the name of the show:
http://bierdopje.com/shows/NAME_OF_SHOW
On every show page there's a line that tells you whether the show is cancelled or still running. I want to retrieve that line to build an overview of the cancelled shows (the website only offers an overview of running shows, so I want to add that extra functionality).
The real question:
How can I tell DOM to retrieve all the shows and check for the status of the show?
(http://bierdopje.com/shows/*).
The Note:
I understand that this process may take a while because it is reading the whole website (or is it too much data?).
Use this code to fetch all the links from a single page:
include_once('simple_html_dom.php');

$html = file_get_html('http://www.couponrani.com/');

// Find all links
foreach ($html->find('a') as $element) {
    echo $element->href . '<br>';
}
I use phpQuery to fetch data from a web page; it works like jQuery on the DOM.
For example, to get the list of all shows, you can do this:
<?php
require_once 'phpQuery/phpQuery/phpQuery.php';

$doc = phpQuery::newDocumentHTML(
    file_get_contents('http://www.bierdopje.com/shows')
);
foreach (pq('.listing a') as $a) {
    $url  = pq($a)->attr('href'); // will give "/shows/07-ghost"
    $show = pq($a)->text();       // will give "07 Ghost"
}
Now you can process each show individually: make a new phpQuery::newDocumentHTML for each show and extract the information you need with a selector.
Get the status of a show
$html = file_get_contents('http://www.bierdopje.com/shows/alcatraz');
$doc = phpQuery::newDocumentHTML($html);
$status = pq('.content>span:nth-child(6)')->text();
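Putting the two snippets together, a rough, untested sketch; it assumes the selectors above still match the site's markup and that pq() operates on the most recently created document:

<?php
require_once 'phpQuery/phpQuery/phpQuery.php';

// First collect the show URLs from the listing page
phpQuery::newDocumentHTML(file_get_contents('http://www.bierdopje.com/shows'));
$shows = array();
foreach (pq('.listing a') as $a) {
    $shows[pq($a)->text()] = pq($a)->attr('href'); // "07 Ghost" => "/shows/07-ghost"
}

// Then visit each show page and read its status line
$statuses = array();
foreach ($shows as $name => $url) {
    phpQuery::newDocumentHTML(file_get_contents('http://www.bierdopje.com' . $url));
    $statuses[$name] = pq('.content>span:nth-child(6)')->text();
}
print_r($statuses);

Fetching every show page one by one will indeed take a while, so consider caching the results rather than scraping on every request.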

Google news feed content

So let's say I have a Google News feed, like this: https://news.google.com/news/feeds?pz=1&cf=all&ned=no_no&hl=no&q=%22something%22&output=atom&num=1
Grabbing the title, author and link would be easy, but how would I go about getting, say, the first 200 characters of the content? It's full of HTML, and mixed in with the title and author as well.
I could use strip_tags on it, but it would still be a mess.
Any way to make Google return a ['description'] maybe?
Or are there perhaps any other good news feeds that give me the content in a way that's easier to manage?
[edit]
Update on how I ended up doing it:
$news = @simplexml_load_string(file_get_contents('https://news.google.com/news/feeds?pz=1&cf=all&ned=no_no&hl=no&q=%22molde+fotballklubb%22+OR+%22tornekrattet%22+OR+%22mfk%22+OR+%22oddmund+bjerkeset%22+-%22moss%22&output=atom&num=1'), 'SimpleXMLElement', LIBXML_NOCDATA);
$data = get_object_vars($news->{'entry'});
$test = explode('<font size="-1">', $data['content']);
$link = get_object_vars($data['link']);

$return['title'] = strip_tags($test[0]);
$return['author'] = strip_tags($test[1]);
$return['description'] = strip_tags($test[2]);
$return['link'] = $link['@attributes']['href'];
It is still not working properly, but that's because the feed structures the content differently every time. Sometimes the content of the news article itself will just be metadata such as the authors and image descriptions.
And splitting it up by HTML tags causes problems when the HTML changes from time to time. But I can't figure out any other way of doing it with this feed.
You could try loading the HTML into a DOMDocument instance and extracting the parts you need, or use a wrapper like Goutte, which makes it a lot easier to extract the portions you want.
http://php.net/manual/en/class.domdocument.php
https://github.com/fabpot/Goutte
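For instance, a minimal sketch of the DOMDocument route, reusing the $data['content'] HTML blob from the snippet above (untested against the live feed):

$doc = new DOMDocument();
// The encoding hint keeps Norwegian characters intact; @ silences
// warnings caused by the feed's often-malformed HTML
@$doc->loadHTML('<?xml encoding="utf-8"?>' . $data['content']);

// textContent strips all markup, leaving only the text
$text = trim($doc->textContent);
$return['description'] = mb_substr($text, 0, 200);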

XML problem with PHP

I'm trying to use an XML file and retrieve all the links from it. So far I have:
$xmlHeadline = $xml->channel[0]->item[3]->link;
print($xmlHeadline);
This works OK to print the single headline link for item[3] in the XML. But as you can see, it is two levels deep. The next one will be at channel[0]->item[4]->link. There is no channel[1], just channel[0].
All the examples on the internet only deal with one level of depth. They all used a foreach loop, but I am not sure if that can be used here...
How can I cycle through all item's in the xml and echo all links?
Try
foreach ($xml->channel[0]->item as $item) {
    echo $item->link;
}
or (note the double slash: channel is a child of the root element, not the root itself, so an absolute /channel path would not match)
foreach ($xml->xpath('//channel/item/link') as $link) {
    echo $link;
}
I think you want a DOM parser. That will allow you to load the XML as a structured hierarchy and then use a function like getElementById (http://php.net/manual/en/domdocument.getelementbyid.php) to parse the XML and get the specific items you want at any depth.
If you provide the structure of the XML file, I may be able to help with the specific use of a DOM function.
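In the meantime, a rough sketch of the DOMDocument approach, assuming an RSS-like structure with <link> elements at some depth (feed.xml is a hypothetical filename):

$dom = new DOMDocument();
$dom->load('feed.xml'); // hypothetical filename

// getElementsByTagName() searches the whole tree, at any depth
foreach ($dom->getElementsByTagName('link') as $link) {
    echo $link->nodeValue . '<br>';
}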
$str = '<channels><channel>
<item><link>google.com</link></item>
<item><link>google.com.cn</link></item>
<item><link>google.com.sg</link></item>
<item><link>google.com.my</link></item>
</channel></channels>';

$xml = simplexml_load_string($str);
$objs = $xml->xpath('//channel/item/link');
foreach ($objs as $link) {
    echo $link . '<br>';
}
PS: Please include an example of your XML.

Updating an XML file using a PHP script

I'm making an interface-website to update a concert-list on a band-website.
The list is stored as an XML file and has this structure:
I already wrote a script that enables me to add a new gig to the list, this was relatively easy...
Now I want to write a script that enables me to edit a certain gig in the list.
Every gig is unique because of its first attribute, "id".
I want to use this reference to edit the other attributes in that Node.
My PHP is very poor, so I hope someone can put me on the right track here...
My PHP script:
Well, I dunno what your XML structure looks like, but:
<gig id="someid">
    <venue></venue>
    <day></day>
    <month></month>
    <year></year>
</gig>
$xml = new SimpleXMLElement('gig.xml', 0, true);
$gigs = $xml->xpath('//gig[@id="' . $_POST['id'] . '"]');
$gig = $gigs[0]; // xpath() returns an array of matching elements
$gig->venue = $_POST['venue'];
$gig->month = $_POST['month'];
// etc.
$xml->asXML('gig.xml'); // save back to file
Now, if instead all these data points are attributes, you can use $gig->attributes()->venue to access them.
There is really no need for a loop unless you are doing multiple updates with one post; you can get at any specific record via an XPath query. SimpleXML is also a lot lighter and easier to use for this kind of thing than DOMDocument, especially as you aren't using any of DOMDocument's extra features.
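For the attribute case, a short untested sketch; it assumes venue and month are stored as attributes on the gig element, which you can write with array-style access:

$xml = new SimpleXMLElement('gig.xml', 0, true);
$gigs = $xml->xpath('//gig[@id="' . $_POST['id'] . '"]');
if ($gigs) {
    $gig = $gigs[0];
    $gig['venue'] = $_POST['venue']; // writes the attribute directly
    $gig['month'] = $_POST['month'];
    $xml->asXML('gig.xml'); // save back to file
}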
You'll want to load the XML file into a DOMDocument with:
<?php
$xml = new DOMDocument();
$xml->load("xmlfile.xml");

// find the tags that you want to update (XML tag names are case-sensitive)
$tags = $xml->getElementsByTagName("gig");

// find the tag with the id you want to update
foreach ($tags as $tag) {
    if ($tag->getAttribute("id") == $id) {
        // found the tag, now update the attribute
        $tag->setAttribute("[attributeName]", "[attributeValue]");
    }
}

// save the XML back to the file (save() needs a filename)
$xml->save("xmlfile.xml");
?>
The code is untested, but it gives the general idea.
