I'm using an XML data feed to get information with SimpleXML and then generating a page from that data.
For this I'm fetching the XML feed using
$xml = simplexml_load_file
Am I right in thinking that, to parse the XML data, the server has to download all of it before it can work with it?
Obviously this is no problem with a 2 KB file, but some files are nearing 100 KB, so on every page load that has to be downloaded before PHP can start generating the page.
On some of the pages we're only looking for a single attribute of an XML element, so parsing the whole document seems unnecessary. Normally I would look into caching the feed, but these feeds relate to live markets that change frequently, so that's not ideal, as I want up-to-the-minute data.
Is there a more efficient way to make calls to the XML feed?
One of the first tactics to optimize XML parsing is by parsing on-the-fly - meaning, don't wait until the entire data arrives, and start parsing immediately when you have something to parse.
This is much more efficient, since the bottleneck is often network connection and not CPU, so if we can find our answer without waiting for all network info, we've optimized quite a bit.
You should google the terms XML push parser and XML pull parser.
In the article Pull parsing XML in PHP - Create memory-efficient stream processing, you can find a tutorial that shows some code on how to do it with PHP using the XMLReader library that is bundled with PHP 5.
Here's a quote from that page which says basically what I just did, in nicer words:
PHP 5 introduced XMLReader, a new class for reading Extensible Markup Language (XML). Unlike SimpleXML or the Document Object Model (DOM), XMLReader operates in streaming mode. That is, it reads the document from start to finish. You can begin to work with the content at the beginning before you see the content at the end. This makes it very fast, very efficient, and very parsimonious with memory. The larger the documents you need to process, the more important this is.
Parsing in streaming mode is a bit different from procedural parsing. Keep in mind that all the data isn't already there. What you usually have to do is supply event handlers that implement some sort of state-machine. If you see tag A, do this, if you see tag B, do that.
Regarding the difference between push parsing and pull parsing take a look at this article. Long story short, both are stream-based parsers. You will probably need a push parser since you want to parse whenever data arrives over the network from your XML feed.
Push parsing in PHP can also be done with xml_parse() (libexpat with a libxml compatibility layer). You can see a code example on the xml_parse PHP manual page.
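To make this concrete, here is a minimal pull-parsing sketch with XMLReader. The feed structure (a market element with id and price attributes) is invented for illustration, and an inline string stands in for the live feed; the point is that you can stop reading as soon as you have the one attribute you need.

```php
<?php
// Pull-parse with XMLReader, stopping as soon as the attribute we
// care about is found. Element and attribute names are made up.
$xml = '<markets><market id="a" price="1.10"/><market id="b" price="2.20"/></markets>';

$reader = new XMLReader();
$reader->XML($xml);            // for a live feed: $reader->open($url);

$price = null;
while ($reader->read()) {
    if ($reader->nodeType === XMLReader::ELEMENT
        && $reader->name === 'market'
        && $reader->getAttribute('id') === 'b') {
        $price = $reader->getAttribute('price');
        break;                 // stop reading; no need to parse the rest
    }
}
$reader->close();

echo $price, "\n";             // 2.20
```

With simplexml_load_file the whole document has to be loaded into memory first; here the loop bails out the moment the answer is found.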
Related
I have to modify a bunch of XML files to make them compliant with a given XSD. I know how to read and write an XML file, and I already know how to validate a generic XML document against a given XSD; however, since the XSD is quite complex, I'm looking for a solution to save me the burden of checking every single node.
Alternatively, even a converter that produces an empty XML skeleton to be filled in a second pass would be appreciated.
I've heard about XSL, but it looks like it only works with XSL stylesheets.
Thanks in advance.
I have to modify a bunch of XML files to make them compliant with a given XSD. I know how to read and write an XML file, and I already know how to validate a generic XML document against a given XSD,
All quite typical.
however, since the XSD is quite complex, I'm looking for a solution to save me the burden of checking every single node.
This part appears to reflect a misunderstanding of validation.
The validating parser itself takes on the burden of checking every single node, leaving you with the substantially smaller task of addressing validation issues that it reports to you via diagnostic messages.
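As a sketch of that division of labor, here is DOMDocument's built-in XSD validation; the document and schema are toy inline strings (in practice you would load your own files), and the error loop is the only part you have to write.

```php
<?php
// The validating parser checks every node against the XSD for you;
// your job is only to handle the diagnostics it reports.
$xsd = <<<XSD
<?xml version="1.0"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="root">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="item" type="xs:string" maxOccurs="unbounded"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
XSD;

libxml_use_internal_errors(true);

$doc = new DOMDocument();
$doc->loadXML('<root><item>ok</item></root>');

if ($doc->schemaValidateSource($xsd)) {
    echo "valid\n";
} else {
    // Each problem arrives as a libxml error with a message and line number.
    foreach (libxml_get_errors() as $error) {
        echo trim($error->message), " (line {$error->line})\n";
    }
    libxml_clear_errors();
}
```

With files on disk you would use $doc->schemaValidate('schema.xsd') instead.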
Alternatively, even a converter that produces an empty XML skeleton to be filled in a second pass would be appreciated.
There are tools that can instantiate an XSD with a starter XML document that's valid against the XSD. Such tools can be helpful in creating a new XML document that conforms to an XSD, not in validating existing XML documents.
I've heard about XSL, but it looks like it only works with XSL stylesheets.
XSLT would help if you wanted to transform one XML document into another through a mapping you specify with templates. Starting with XSLT 2.0, there's support for obtaining type information from XSDs. However, none of this is designed to help correct validation errors in an automated manner.
I've got a number of REST feeds I'd like to store in a MySQL database; can anyone suggest a solution for this? Something PHP-related appreciated...
It's not PHP-related, but Perl has both a REST interface and a DBI interface (for interfacing with MySQL).
http://metacpan.org/pod/WWW::REST
There are many other REST interfaces for Google, Twitter, etc. Just search CPAN modules at search.cpan.org
To my knowledge there is no such thing as a REST feed. There are RSS feeds and Atom feeds, so I will assume you are talking about one of those.
Both are based on XML, so I suggest you find an XML parser for PHP, do an HTTP request to get the feed contents, parse the XML into a DOM, and then copy the DOM data into MySQL!
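That flow can be sketched like this; the feed contents and table layout are invented, and an in-memory SQLite database stands in for MySQL so the example is self-contained (swap the PDO DSN for your MySQL credentials).

```php
<?php
// Fetch-parse-store sketch: SimpleXML for the feed, PDO for the database.
$rss = <<<XML
<rss version="2.0"><channel>
  <item><title>First</title><link>http://example.com/1</link></item>
  <item><title>Second</title><link>http://example.com/2</link></item>
</channel></rss>
XML;
// For a live feed: $rss = file_get_contents($feedUrl);

$feed = simplexml_load_string($rss);

$db = new PDO('sqlite::memory:');   // e.g. 'mysql:host=localhost;dbname=feeds'
$db->exec('CREATE TABLE items (title TEXT, link TEXT)');
$stmt = $db->prepare('INSERT INTO items (title, link) VALUES (?, ?)');

// Copy each feed item into a row.
foreach ($feed->channel->item as $item) {
    $stmt->execute([(string) $item->title, (string) $item->link]);
}

echo $db->query('SELECT COUNT(*) FROM items')->fetchColumn(), "\n"; // 2
```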
I'm not sure how to be more precise.
Are you looking for someone to write the code?
OK, I'm assuming you are talking about RSS feeds. Here's a great open-source library that makes it easy: http://simplepie.org/ . Point it at an RSS or Atom feed and it will give you back PHP arrays and objects. From there you can interpret them and save them any way you want.
Depending on what you actually want to do with the database, you could use RSS as an XML CLOB format. Not fast, but easy. Again, it totally depends on what you want to do with the database.
I'm trying to get up to speed on HTML/CSS/PHP development and was wondering how I should validate my code when it contains content I can't control, like an RSS feed?
For example, my home page is a .php doc that contains HTML and PHP code. I use the PHP to create a simple RSS reader (using SimpleXML) to grab some feeds from another blog and display them on my Web page.
Now, as much as possible, I'd like to try to write valid HTML. So I'm assuming the way to do this is to view the page in the browser (I'm using NetBeans, so I click "Preview page"), copy the source (using View Source), and stick that in W3C's validator. When I do that, I get all sorts of validation errors (like "cannot generate system identifier for general entity" and "general entity "blogId" not defined and no default entity") coming from the RSS feed.
Am I following the right process for this? Should I just ignore all the errors that are flagged in the RSS feed?
Thanks.
In this case, where you are dealing with an untrusted or uncontrolled feed, you have limited options for being safe.
Two that come to mind are:
use something like strip_tags() to take all of the formatting out of the RSS feed content.
use a library like HTMLPurifier to validate and sanitize the content before outputting it.
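A quick sketch of the first option (the sample markup is invented): strip_tags() drops every tag by default, or everything outside a whitelist if you pass one.

```php
<?php
// Untrusted feed content with markup we don't control.
$untrusted = '<p>Breaking <script>alert(1)</script><b>news</b></p>';

$plain = strip_tags($untrusted);               // drop every tag
$some  = strip_tags($untrusted, '<p><b>');     // keep only <p> and <b>

echo $plain, "\n";  // Breaking alert(1)news
echo $some, "\n";   // <p>Breaking alert(1)<b>news</b></p>
```

Note that strip_tags() removes the tags but keeps the text between them (including the script body above), which is why HTMLPurifier is the better choice when you want to keep some formatting safely.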
For performance, you should cache the output-ready content, FYI.
--
Regarding Caching
There are many ways to do this... If you are using a framework, chances are it already has a way to do it. Zend_Cache is a class provided by the Zend framework, for example.
If you have access to memcached, then that is super easy. But if you don't then there are a lot of other ways.
The general concept is to prepare the output, and then store it, ready to be outputted many times. That way, you do not incur the overhead of fetching and preparing the output if it is simply going to be the same every time.
Consider this code, which will only fetch and format the RSS feed every 5 minutes... All the other requests are a fast readfile() command.
# When called, will prepare the cache
function GenCache1()
{
    // Get the RSS feed
    // Parse it
    // Purify it
    // Format your output into $output
    file_put_contents('/tmp/cache1', $output);
}

# Check to see if the cache file exists
if (!file_exists('/tmp/cache1'))
{
    GenCache1();
}
else
{
    # If the file is older than 5 minutes (300 seconds), then regenerate it
    $a = stat('/tmp/cache1');
    if ($a['mtime'] + 300 < time())
        GenCache1();
}

# Now, simply output the cached content
readfile('/tmp/cache1');
I generally use HTML Tidy to clean up the data from outside the system.
RSS should always be XML-compliant, so I suggest you use XHTML for your website. Since XHTML is also XML-compliant, you should not have any errors when validating an XHTML page with embedded RSS content.
EDIT:
Of course, this only holds if the content you're getting is actually valid XML...
I need to parse a pretty big XML file in PHP (around 300 MB). How can I do it most efficiently?
In particular, I need to locate specific tags and extract their content into a flat TXT file, nothing more.
You can read and parse XML in chunks with an old-school SAX-based parsing approach using PHP's xml parser functions.
Using this approach, there's no real limit to the size of documents you can parse, as you simply read and parse a buffer-full at a time. The parser will fire events to indicate it has found tags, data etc.
There's a simple example in the manual which shows how to pick up start and end of tags. For your purposes you might also want to use xml_set_character_data_handler so that you pick up on the text between tags also.
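A minimal sketch of that approach (the document structure and tag names are invented; a real run would feed xml_parse() a buffer at a time from fread() instead of a single string):

```php
<?php
// SAX-style parsing with PHP's xml parser functions, using a character
// data handler to collect the text inside <title> tags.
$xml = '<books><book><title>One</title></book><book><title>Two</title></book></books>';

$inTitle = false;
$titles  = [];

$parser = xml_parser_create();
xml_set_element_handler(
    $parser,
    // Start-tag handler: note element names are uppercased by default.
    function ($p, $name, $attrs) use (&$inTitle) {
        if ($name === 'TITLE') $inTitle = true;
    },
    // End-tag handler.
    function ($p, $name) use (&$inTitle) {
        if ($name === 'TITLE') $inTitle = false;
    }
);
// Fires for text between tags; we only keep it while inside a <title>.
xml_set_character_data_handler($parser, function ($p, $data) use (&$inTitle, &$titles) {
    if ($inTitle) $titles[] = $data;
});

xml_parse($parser, $xml, true);
xml_parser_free($parser);

echo implode(',', $titles), "\n";  // One,Two
```

Because only the current buffer is in memory, this works the same on a 300 MB file as on this one-liner; the found text can be written straight to the flat TXT file as it arrives.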
The most efficient way to do that is to create a static XSLT and apply it to your XML using XSLTProcessor. The method names are a bit misleading: even though you want to output plain text, you should use either transformToXML() if you need the result as a string variable, or transformToURI() if you want to write a file.
If it's a one-off (or occasional) job I'd use XMLStarlet. But if you really want to do it on the PHP side, I'd recommend pre-parsing it into smaller chunks and then processing those. If you load it via DOM as one big chunk, it will take a lot of memory. Also, use a CLI PHP script to speed things up.
This is what SAX was designed for. SAX has a low memory footprint reading in a small buffer of data and firing events when it encounter elements, character data etc.
It is not always obvious how to use SAX (it wasn't to me the first time I used it), but in essence you have to maintain your own state and view of where you are within the document structure. Generally you end up with variables describing which section of the document you are in, e.g. inFoo, inBar, etc., which you set when you encounter particular start/end elements.
There is a short description and example of a SAX parser here.
Depending on your memory requirements, you can either load it up and parse it with XSLT (the memory-consuming route), or you can create a forward-only cursor and walk the tree yourself, printing the values you're looking for (the memory-efficient route).
Pull parsing is the way to go. This way it's memory-efficient AND easy to process. I have been processing files that are as large as 50 Mb or more.
The original question is below, but I changed the title because I think it will make it easier for others with the same doubt to find. In the end, an XHTML document is an XML document.
It's a beginner question, but I would like to know which you think is the best library for parsing XHTML documents in PHP 5?
I have generated the XHTML from HTML files (which were created using Word :S) with Tidy, and now I need to replace some elements and attributes in them.
I haven't used XML very much; there seem to be many options for parsing in PHP (SimpleXML, DOM, etc.) and I don't know if all of them can do what I need, or which is the easiest one to use.
Sorry for my English, I'm from Argentina. Thanks!
A bit more information: I have a lot of HTML pages, done in Word 97. I used Tidy to clean them and turn them into XHTML Strict, so now they are all XML-compatible. I want to use an XML parser to find some elements and replace them (the logic by which I do this doesn't matter). For example, I want all of the pages to use the same CSS stylesheet and class attributes, for a unified appearance. They are all static pages containing legal documents, nothing strange there. Which of the extensions should I use? Is SimpleXML enough? Should I learn DOM in spite of it being more difficult?
You could use SimpleXML, which is included in a default PHP install. This extension offers easy object-oriented access to XML structures.
There's also DOM XML. A "downside" to this extension is that it is a bit harder to use and that it is not included by default.
Just to clear up the confusion here: PHP has a number of XML libraries, because PHP 4 didn't have very good options in that direction. From PHP 5, you have the choice between SimpleXML, DOM and the SAX-based Expat parser. The latter also existed in PHP 4. PHP 4 also had a DOM extension, which is not the same as PHP 5's.
DOM and SimpleXML are alternatives for the same problem domain: they load the document into memory and let you access it as a tree structure. DOM is a rather bulky API, but it's also very consistent, and it's implemented in many languages, meaning that you can reuse your knowledge across languages (in JavaScript, for example). SimpleXML may be easier initially.
The SAX parser is a different beast. It treats an xml document as a stream of tags. This is useful if you are dealing with very large documents, since you don't need to hold it all in memory.
For your usage, I would probably use the DOM api.
DOM is a standard, language-independent API for hierarchical data such as XML, standardized by the W3C. It is a rich API with much functionality. It is object-based, in that each node is an object.
DOM is good when you not only want to read or write, but also want to do a lot of manipulation of nodes in an existing document, such as inserting nodes between others, changing the structure, etc.
SimpleXML is a PHP-specific API which is also object-based, but is intended to be a lot less verbose than the DOM: simple tasks such as finding the value of a node or finding its child elements take a lot less code. Its API is not as rich as DOM's, but it still includes features such as XPath lookups and a basic ability to work with multiple-namespace documents. And, importantly, it still preserves all features of your document, such as XML CDATA sections and comments, even though it doesn't include functions to manipulate them.
SimpleXML is very good for read-only use: if all you want to do is read the XML document and convert it to another form, it'll save you a lot of code. It's also fairly good when you want to generate a document, or do basic manipulations such as adding or changing child elements or attributes, but it can become complicated (though not impossible) to do a lot of manipulation of existing documents. It's not easy, for example, to add a child element between two others; addChild only inserts after other elements. SimpleXML also cannot do XSLT transformations. It doesn't have things like getElementsByTagName or getElementById, but if you know XPath you can still do that kind of thing with SimpleXML.
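For example, a small sketch of using XPath from SimpleXML to cover both of those missing DOM methods (the document is invented):

```php
<?php
// XPath from SimpleXML stands in for getElementsByTagName / getElementById.
$xml = simplexml_load_string(
    '<doc><a id="x"><b>hello</b></a><a id="y"><b>world</b></a></doc>'
);

// All <b> elements, anywhere in the document (like getElementsByTagName):
$all = $xml->xpath('//b');

// The element whose id attribute is "y" (like getElementById):
$byId = $xml->xpath('//*[@id="y"]');

echo (string) $all[0], ' ', (string) $byId[0]->b, "\n";  // hello world
```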
The SimpleXMLElement object is somewhat 'magical'. The properties it exposes if you var_dump/print_r/var_export it don't correspond to its complete internal representation. It exposes some of its child elements as if they were properties that can be accessed with the -> operator, but it still preserves the full document internally, and you can do things like access a child element whose name is a reserved word with the [] operator, as if it were an associative array.
You don't have to fully commit to one or the other, because PHP implements the functions:
simplexml_import_dom(DOMNode)
dom_import_simplexml(SimpleXMLElement)
This is helpful if you are using SimpleXML and need to work with code that expects a DOM node or vice versa.
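A small sketch of hopping between the two with those bridge functions (the document is invented):

```php
<?php
// SimpleXML -> DOM: gain access to the full DOM API.
$sx = simplexml_load_string('<root><child>hi</child></root>');
$domElement = dom_import_simplexml($sx);
echo $domElement->nodeName, "\n";           // root

// DOM -> SimpleXML: back to the terser accessors.
$doc = new DOMDocument();
$doc->loadXML('<root><child>hi</child></root>');
$sx2 = simplexml_import_dom($doc);
echo (string) $sx2->child, "\n";            // hi
```

Both functions share the underlying libxml document rather than copying it, so the conversion is cheap.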
PHP also offers a third XML library:
XML Parser (an implementation of SAX, a language-independent interface, though not referred to by that name in the manual) is a much lower-level library, which serves quite a different purpose. It doesn't build objects for you. It basically just makes it easier to write your own XML parser, because it does the job of advancing to the next token and finding out the type of token, such as what the tag name is and whether it's an opening or closing tag, for you. Then you have to write callbacks that are run each time a token is encountered. All tasks such as representing the document as objects/arrays in a tree, manipulating the document, etc. need to be implemented separately, because all you can do with the XML parser is write a low-level parser.
The XML Parser functions are still quite helpful if you have specific memory or speed requirements. With them, it is possible to write a parser that can parse a very long XML document without holding all of its contents in memory at once. Also, if you are not interested in all of the data and don't need or want it put into a tree or a set of PHP objects, it can be quicker. For example, if you want to scan through an XHTML document and find all the links, and you don't care about structure.
I prefer SimpleXMLElement, as it's pretty easy to use to loop through elements.
Edit: the manual says no version info is available, but it's available in PHP 5, at least 5.2.5 and probably earlier.
It's really personal choice though, there's plenty of XML extensions.
Bear in mind many XML parsers will balk if you have invalid markup - XHTML should be XML but not always!
It's been a long time (2 years or more) since I worked with XML parsing in PHP, but I always had good, usable results from the XML_Parser PEAR package. Having said that, I have had minimal exposure to PHP 5, so I don't really know if there are better, inbuilt alternatives these days.
I did a little bit of XML parsing in PHP 5 last year and decided to use a combination of SimpleXML and DOM.
DOM is a bit more useful if you want to create a new XML tree or add to an existing one; it's slightly more flexible.
It really depends on what you're trying to accomplish.
For pulling rather large amounts of data, i.e. many records of, say, product information from a store website, I'd probably use Expat, since it's supposedly a bit faster...
Personally, I've had XML files large enough for the parser choice to make a noticeable performance difference.
At those quantities you might as well be using SQL.
I recommend using SimpleXML.
It's pretty intuitive, easy to use/write.
Also, works great with XPath.
Never really got to use DOM much, but if you're parsing something as large as you're describing, you might want to use the XML Parser, since it's a bit more functional than SimpleXML.
You can read about all three at W3Schools:
http://www.w3schools.com/php/php_xml_parser_expat.asp
http://www.w3schools.com/php/php_xml_simplexml.asp
http://www.w3schools.com/php/php_xml_dom.asp