Reading data from an XML feed using SimpleXML - PHP

I'm using SimpleXML to bring in a data feed, and now I want to put that into working variables to use in my PHP doc.
Following the php.net guides on SimpleXML, I've arrived at:
<?php
$xml = simplexml_load_file('f1_feed.xml');
$xml = new SimpleXMLElement($xmlstr);
echo $xml->response->williamhill->class->type->market[0]->name;
?>
but I keep getting a blank page. Have I completely missed the point of how to parse the XML and put it into a working variable?
(The feed is local for development.)

You don't need both new SimpleXMLElement and simplexml_load_file:
simplexml_load_file returns an object of class SimpleXMLElement.
The SimpleXMLElement constructor returns a SimpleXMLElement object.
Try:
if (file_exists('f1_feed.xml')) {
    $xml = simplexml_load_file('f1_feed.xml');
    print_r($xml);
} else {
    exit('Failed to open f1_feed.xml.');
}
or:
if (file_exists('f1_feed.xml')) {
    $xml = new SimpleXMLElement(file_get_contents('f1_feed.xml'));
    echo $xml->response->williamhill->class->type->market[0]->name;
} else {
    exit('Failed to open f1_feed.xml.');
}
If it still doesn't work, add
error_reporting(E_ALL);
ini_set("display_errors", 1);
for better error reporting.
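Putting the pieces together, a minimal sketch (reusing the element path from the question, which assumes the feed really is structured that way) could be:
<?php
// Surface any parse or path problems while developing
error_reporting(E_ALL);
ini_set('display_errors', 1);

if (file_exists('f1_feed.xml')) {
    $xml = simplexml_load_file('f1_feed.xml');
    if ($xml === false) {
        exit('f1_feed.xml exists but could not be parsed.');
    }
    // Path copied from the question; adjust it to the feed's real structure
    echo $xml->response->williamhill->class->type->market[0]->name;
} else {
    exit('Failed to open f1_feed.xml.');
}
?>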

Related

Saving XML_Query2XML output to file?

I have a short script that utilizes the XML_Query2XML PEAR package. It pulls data from a SQL database and outputs to the browser. The XML that appears in the browser is exactly what I want to be saved to a file, but any attempts to use ob_get_contents or any of the other methods I'm familiar with result in a blank output file. The code is as follows:
<?php
set_include_path('/Library/WebServer/Documents/PEAR/');
include 'XML/Query2XML.php';
include 'MDB2.php';
try {
    // initialize Query2XML object
    $q2x = XML_Query2XML::factory(MDB2::factory('mysql://root:pass@site.com/site'));
    $sql = "SELECT * FROM Products";
    $xml = $q2x->getFlatXML($sql);
    header('Content-Type: text/xml');
    $xml->formatOutput = true;
    echo $xml->saveXML();
} catch (Exception $e) {
    echo $e->getMessage();
}
?>
I'm wondering what the general procedure is for saving files with this plugin and output type (XML). Any help is greatly appreciated.
The $xml variable is a DOMDocument object, which means you can use its methods to save it into a file, e.g. save:
$xml->save('foo.xml');
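For instance, a small variation of the script from the question (keeping its include path and DSN placeholder; the products.xml filename is just an example) could both send the XML to the browser and write it to disk:
<?php
set_include_path('/Library/WebServer/Documents/PEAR/');
include 'XML/Query2XML.php';
include 'MDB2.php';
try {
    $q2x = XML_Query2XML::factory(MDB2::factory('mysql://root:pass@site.com/site'));
    $xml = $q2x->getFlatXML('SELECT * FROM Products');

    // getFlatXML() returns a DOMDocument, so save() writes it straight to a file
    $xml->formatOutput = true;
    $xml->save('products.xml');

    // ...and saveXML() still gives you the string for the browser
    header('Content-Type: text/xml');
    echo $xml->saveXML();
} catch (Exception $e) {
    echo $e->getMessage();
}
?>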

PHP SimpleXML Breaking when trying to traverse nodes

I'm trying to read the XML information that Tumblr provides to create a kind of news feed off the Tumblr blog, but I'm very stuck.
<?php
$request_url = 'http://candybrie.tumblr.com/api/read?type=post&start=0&num=5&type=text';
$xml = simplexml_load_file($request_url);
if (!$xml)
{
exit('Failed to retrieve data.');
}
else
{
foreach ($xml->posts[0] AS $post)
{
$title = $post->{'regular-title'};
$post = $post->{'regular-body'};
$small_post = substr($post,0,320);
echo .$title.;
echo '<p>'.$small_post.'</p>';
}
}
?>
This always breaks as soon as it tries to go through the nodes, so basically "tumblr->posts;....etc" is displayed on my HTML page.
I've tried saving the information as a local XML file. I've tried using different ways to create the SimpleXML object, like loading it as a string (probably a silly idea). I double-checked that my web hosting was running PHP 5. So basically, I'm stuck on why this wouldn't be working.
EDIT: OK, I tried changing where I started from (back to the original way it was; starting from tumblr was just another, actually silly, way to try to fix it). It still breaks right after the first ->, so it displays "posts[0] AS $post....etc" on screen.
This is the first thing I've ever done in PHP so there might be something obvious that I should have set up beforehand or something. I don't know and couldn't find anything like that though.
This should work:
<?php
$request_url = 'http://candybrie.tumblr.com/api/read?type=post&start=0&num=5&type=text';
$xml = simplexml_load_file($request_url);
if (!$xml) {
    exit('Failed to retrieve data.');
} else {
    foreach ($xml->posts[0] as $post) {
        $title = $post->{'regular-title'};
        $post = $post->{'regular-body'};
        $small_post = substr($post, 0, 320);
        echo $title;
        echo '<p>'.$small_post.'</p>';
        echo '<hr>';
    }
}
The first thing in your code: you used the root element, which should not be used.
<?php
$request_url = 'http://candybrie.tumblr.com/api/read?type=post&start=0&num=5&type=text';
$xml = simplexml_load_file($request_url);
if (!$xml)
{
    exit('Failed to retrieve data.');
}
else
{
    foreach ($xml->posts->post as $post)
    {
        $title = $post->{'regular-title'};
        $post = $post->{'regular-body'};
        $small_post = substr($post, 0, 320);
        echo $title;
        echo '<p>'.$small_post.'</p>';
    }
}
?>
$xml->posts returns the posts node, so if you want to iterate over the post nodes you should try $xml->posts->post, which lets you iterate through the post nodes inside the first posts node.
Also, as Needhi pointed out, you shouldn't pass through the root node (tumblr), because $xml itself already represents the root node. (So I fixed my answer.)
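If you prefer, the same traversal can be written as an XPath query; here is a minimal sketch, assuming the same Tumblr feed structure as above:
$request_url = 'http://candybrie.tumblr.com/api/read?type=post&start=0&num=5&type=text';
$xml = simplexml_load_file($request_url);
if ($xml !== false) {
    // xpath() returns an array of matching SimpleXMLElement nodes
    foreach ($xml->xpath('//posts/post') as $post) {
        echo $post->{'regular-title'};
        echo '<p>' . substr($post->{'regular-body'}, 0, 320) . '</p>';
    }
}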

If 1st XML is empty then load a different one

I'm getting the contents of an XML feed and printing the titles on my web page with PHP:
$url = 'http://site.com/feed';
$xml = simplexml_load_file($url);
foreach ($xml->ART as $ART) {
    echo $ART->TITLE;
}
I want to be able to set a backup, so if the first xml isn't found a different one is loaded instead.
I tried the following code but it doesn't work. If the feed isn't found, the page shows 'XML Parsing Error:', which I guess isn't the same as nothing.
if ($url != '') {
    $xml = simplexml_load_file($url);
} else {
    //Here I would load a different xml file.
}
What should I do? Should I write conditional PHP to check if the first URL contains a TITLE, and if not load the second URL?
Thanks
UPDATE
This messed up my whole page:
$first_url = 'http://site.com/feed1';
$second_url = 'http://site.com/feed2';
// if URL wrappers is enabled
if (is_url($first_url))
{
    // parse first url
    $xml = simplexml_load_file($first_url);
}
else
{
    // parse second url
    $xml = simplexml_load_file($second_url);
}
foreach ($xml->ART as $ART) {
    echo $ART->TITLE;
}
See simplexml_load_file
Returns an object of class SimpleXMLElement with properties containing the data held within the XML document. On errors, it will return FALSE.
Example from php.net
<?php
// The file test.xml contains an XML document with a root element
// and at least an element /[root]/title.
if (file_exists('test.xml')) {
    $xml = simplexml_load_file('test.xml');
    print_r($xml);
} else {
    exit('Failed to open test.xml.');
}
?>
EDIT: You can do:
$url = 'http://site.com/feed';
if ($xml = simplexml_load_file($url)) {
    foreach ($xml->ART as $ART) {
        echo $ART->TITLE;
    }
} else {
    //parsing new url
}
function parse_xml($url)
{
    // your code
}

try
{
    parse_xml($first_url);
}
catch (Exception $e)
{
    parse_xml($second_url);
}
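Note that simplexml_load_file() signals failure with a warning and a false return value rather than by throwing, so for the try/catch pattern above to work, parse_xml() has to raise the exception itself. A rough sketch (the feed URLs are placeholders from the question):
function parse_xml($url)
{
    $xml = simplexml_load_file($url);
    if ($xml === false) {
        // Turn the failure into an exception so the catch block can react
        throw new Exception("Could not load or parse $url");
    }
    foreach ($xml->ART as $ART) {
        echo $ART->TITLE;
    }
}

try {
    parse_xml('http://site.com/feed1');
} catch (Exception $e) {
    parse_xml('http://site.com/feed2');
}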
Alternatively, you can check whether the URL returns XML before proceeding to parse it:
// if URL wrappers is enabled
// (note: is_url() is not a built-in PHP function, so you would have to define it yourself)
if (is_url($first_url))
{
    // parse first url
    $xml = simplexml_load_file($first_url);
}
else
{
    // parse second url
    $xml = simplexml_load_file($second_url);
}
I think I've got it working with:
$url = 'site.com/feed1';
$xml = simplexml_load_file($url);
if ($xml == null) {
    $url = 'site.com/feed2';
    $xml = simplexml_load_file($url);
}
foreach ($xml->ART as $ART) {
    echo $ART->TITLE;
}
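A slightly stricter variant of that idea (just a sketch, not taken from the answers above) compares against the literal false that simplexml_load_file() returns on failure and collects parse errors instead of printing them:
// Keep XML parse errors from being printed as warnings
// (a failed HTTP fetch may still raise a normal PHP warning)
libxml_use_internal_errors(true);

$xml = simplexml_load_file('http://site.com/feed1');
if ($xml === false) {
    // Fall back to the second feed only when the first one failed to load or parse
    $xml = simplexml_load_file('http://site.com/feed2');
}

if ($xml !== false) {
    foreach ($xml->ART as $ART) {
        echo $ART->TITLE;
    }
}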

Cannot figure out how to save the new output document with DOM

With the following script I rename the elements of an XML document through XSLT and output the result. However, I want to save the result to a new file, but the only thing I've managed to do is save the input XML.
<?php
// create an XSLT processor and load the stylesheet as a DOM
$xproc = new XsltProcessor();
$xslt = new DomDocument;
$xslt->load('stylesheet.xslt'); // this contains the code from above
$xproc->importStylesheet($xslt);
// your DOM or the source XML (copied from your question)
$xml = '<document><item><title>Hide your heart</title><quantity>1</quantity><price>9.90</price></item></document>';
$dom = new DomDocument;
$dom->loadXML($xml);
// do the transformation
if ($xml_output = $xproc->transformToXML($dom)) {
    echo $xml_output;
} else {
    trigger_error('Oops, XSLT transformation failed!', E_USER_ERROR);
}
?>
I used
echo $dom->saveXML();
$dom->save("write.xml");
and also tried replacing $dom with $xml_output, with no luck.
if ($xml_output = $xproc->transformToDoc($dom)) {
    $xml_output->save('write.xml');
} else {
    trigger_error('Oops, XSLT transformation failed!', E_USER_ERROR);
}
transformToXML() returns a string rather than a DOMDocument, which is why the snippet above uses transformToDoc() and save(). You can also write the string to a file using the common file handling functions:
if ($xml_output = $xproc->transformToXML($dom)) {
    echo $xml_output;
    file_put_contents('write.xml', $xml_output);
} else {
    trigger_error('Oops, XSLT transformation failed!', E_USER_ERROR);
}
Update: Just found the third method :) Depending on what you want to achieve, this is the one you are looking for:
$xproc->transformToURI($doc, 'write.xml');
http://php.net/xsltprocessor.transformtouri
You can find the signature of the whole class at the manual: http://php.net/class.xsltprocessor
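Applied to the script from the question, that would look roughly like this (a sketch; stylesheet.xslt and write.xml are the filenames already used above):
$xproc = new XsltProcessor();
$xslt = new DomDocument;
$xslt->load('stylesheet.xslt');
$xproc->importStylesheet($xslt);

$dom = new DomDocument;
$dom->loadXML('<document><item><title>Hide your heart</title></item></document>');

// transformToURI() writes the transformed document straight to the given file
// and returns the number of bytes written, or FALSE on failure
if ($xproc->transformToURI($dom, 'write.xml') === false) {
    trigger_error('Oops, XSLT transformation failed!', E_USER_ERROR);
}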

PHP returning page error on simplexml print_r

The problem is only happening with one file when I try a DOMDocument/SimpleXML method, so it seems like the issue is with that file. No clue what it could be.
If I do the following:
$file = "test1.html";
$dom = DOMDocument::loadHTMLFile($file);
$xml = simplexml_import_dom($dom);
print_r($xml);
in Chrome, I get a "Page Unavailable" error. In Firefox, I get nothing.
If I do the same thing but to a "test2.html", I get a print out as expected.
If I try the same thing but doing it this way:
$file = "test1.html";
$data = file_get_contents($file);
$dom = DOMDocument::loadHTML($data);
$xml = simplexml_import_dom($dom);
print_r($xml);
I get the same issue.
If I comment out the print_r line, Chrome goes from the "Page Unavailable" to blank.
I changed the permissions to 777, in case that was an issue, no fix.
I tried simply echoing out the contents of the html, no problem at all.
Any clues as to why a) Chrome would do that, and b) why I'm not getting any usable results?
Update:
If I put in:
$file = "test1.html";
$dom = DOMDocument::loadHTMLFile($file);
if(!$dom) {
echo "No Load!";
}
else {
$xml = simplexml_import_dom($dom);
print_r($xml);
}
I get the same issue. If I put in:
$file = "test1.html";
$dom = DOMDocument::loadHTMLFile($file);
if(!$dom) {
echo "No Load!";
}
else {
echo "Load!";
}
I get the "Load!" output, meaning that the dom method shouldn't be the problem (?)
I'll try the same exact test with the simplexml.
Update 2:
If I do this:
$file = "test1.html";
$dom = DOMDocument::loadHTMLFile($file);
$xml = simplexml_import_dom($dom);
if(!$xml) {
echo "No Load!";
}
else {
echo "Load!";
}
I get "Load!" but if I do:
$file = "test1.html";
$dom = DOMDocument::loadHTMLFile($file);
$xml = simplexml_import_dom($dom);
if(!$xml) {
echo "No Load!";
}
else {
echo "Load!";
print_r($xml);
}
I get the error. I did finally notice that I had an option to view the error in Chrome:
Error 324 (net::ERR_EMPTY_RESPONSE): Unknown error.
The troublesome HTML file is 288 KB. Could that be the issue? If so, how would I adjust for that?
Last Update:
Very odd. I can use methods and functions on the object (as SimpleXML or DOMDocument), so I can do things like use XPath to delete or parse the HTML, etc. In some cases (small results) it can echo out results, but for big stuff (showing all spans) it fails in the same way.
So, since the end result should, I think, fit within these limits, I SHOULD be okay (I guess).
But any real solution is very welcome.
Turn on error reporting: put error_reporting(E_ALL); in the first line of your PHP code.
Check the memory limit of your PHP configuration: memory_limit in the respective php.ini
What's the difference between test1.html and test2.html? Perhaps test1.html is not well-formed.
DOMDocument and/or SimpleXML may bail out if the document is malformed. Try something like:
$dom = DOMDocument::loadHTMLFile($file);
if (!$dom) {
    echo 'Loading file failed';
    exit;
}
$xml = simplexml_import_dom($dom);
if (!$xml) {
    ...
}
If creating the $dom worked, conversion to $xml should work as well, but make sure anyway.
Edit: As Gehrig said, make sure error reporting is on, that should make it obvious where the process fails.
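Putting those suggestions together, a rough sketch might look like this (the 256M memory limit is just an illustrative value):
<?php
// Surface PHP errors and raise the memory ceiling for the large HTML file
error_reporting(E_ALL);
ini_set('display_errors', 1);
ini_set('memory_limit', '256M');

// Collect libxml parse errors instead of emitting warnings
libxml_use_internal_errors(true);

$dom = new DOMDocument();
if (!$dom->loadHTMLFile('test1.html')) {
    echo 'Loading file failed';
    exit;
}

// Any markup problems in test1.html show up here with line numbers
foreach (libxml_get_errors() as $error) {
    echo $error->line . ': ' . trim($error->message) . "<br>\n";
}
libxml_clear_errors();

$xml = simplexml_import_dom($dom);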
