HTML Parser for PHP like Java - php

I have been developing Java programs that parse the HTML source code of webpages using various HTML parsers like Jericho, NekoHTML, etc.
Now I want to develop parsers in PHP. So before starting, I want to know: are there any HTML parsers available that I can use with PHP to parse HTML code?

Check out DOMDocument.
Example #1 Creating a Document
<?php
$doc = new DOMDocument();
$doc->loadHTML("<html><body>Test<br></body></html>");
echo $doc->saveHTML();

The built-in DOM parser does a very good job. There are many other XML parsers, too.

DOM is pretty good for this. It can also deal with invalid markup; however, it will throw undocumented errors and warnings on imperfect markup, so I suggest you filter the HTML with HTMLPurifier or some other library before loading it with DOM.
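Those parser warnings can also be captured instead of emitted. A minimal sketch using libxml's internal error buffer (the sample markup here is made up to trigger a parse complaint):

```php
<?php
// Collect libxml parse errors instead of letting DOM emit PHP warnings.
libxml_use_internal_errors(true);

$doc = new DOMDocument();
// The stray </span> is invalid and will be reported by the parser.
$doc->loadHTML('<div><p>Unclosed paragraph<div>Stray close</span></div>');

// Inspect (or simply discard) whatever the parser complained about.
$errors = libxml_get_errors();
libxml_clear_errors();

$parsed = $doc->saveHTML() !== false;
```

The document still loads; the errors are simply queued for you to examine instead of surfacing as warnings.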

Related

DOMDocument wrappers?

Are there any HTML parsers written in PHP that use DOMDocument for parsing?
I'm basically looking for a wrapper class that provides a nicer and more natural API than DOMDocument, which is problematic to work with.
There is SmartDOMDocument; it fixes a few things like encoding and outputting as a string.
I don't know of any other wrappers, but you can use an alternative to DOMDocument:
PHPQuery
PHP Simple HTML DOM Parser
Ganon
Also, do you realize DOMXPath exists?
It makes it way easier to retrieve values.
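To illustrate, a small sketch pulling every href out of some markup with a single XPath expression (the URLs here are made up):

```php
<?php
$html = '<html><body>
    <a href="https://example.com/a">First</a>
    <a href="https://example.com/b">Second</a>
</body></html>';

$doc = new DOMDocument();
$doc->loadHTML($html);

$xpath = new DOMXPath($doc);

// Grab every href in one expression instead of walking the tree by hand.
$hrefs = [];
foreach ($xpath->query('//a/@href') as $attr) {
    $hrefs[] = $attr->value;
}
```

One `//a/@href` query replaces a nested loop over elements and their attributes.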
http://www.phpbuilder.com/columns/PHP_HTML_DOM_parser/PHPHTMLDOMParser.cc_09-07-2011.php3 is another possibility.

php regex problem

I want to get the <form> from the site, but there is still a lot of other HTML code around the form part. How do I remove it? I mean, how do I use PHP to extract just the <form>...</form> part from the site?
$str = file_get_contents('http://bingphp.codeplex.com');
preg_match_all('~<form.+</form>~iUs', $str, $match);
var_dump($match);
You should not use regular expressions for extracting HTML content. Use a DOM parser.
E.g.
$doc = new DOMDocument();
$doc->loadHTMLFile("http://bingphp.codeplex.com");
$forms = $doc->getElementsByTagName('form');
Update: If you want to remove the forms (not sure if you meant that):
for ($i = $forms->length; $i--;) {
    $node = $forms->item($i);
    $node->parentNode->removeChild($node);
}
Update 2:
I just noticed that they have one form that wraps the whole body content. So one way or another, you will actually get the whole page.
The regex problem lies in the greediness. For such cases .+? is advisable.
But what @Felix said. While a regular expression is workable for HTML extraction, you often look for something specific and should thus rather parse it. It's also much simpler if you use QueryPath:
$str = file_get_contents('http://bingphp.codeplex.com');
print qp($str)->find("form")->html();
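The greediness point is easy to reproduce in isolation. With two forms in one string, a greedy `.+` spans from the first `<form` to the last `</form>`, while the lazy `.+?` stops at the first one (this sketch omits the `U` modifier from the original pattern, since `U` itself inverts greediness):

```php
<?php
$str = '<form id="a">one</form> between <form id="b">two</form>';

// Greedy: matches from the first <form all the way to the LAST </form>,
// swallowing everything in between.
preg_match('~<form.+</form>~is', $str, $greedy);

// Lazy (.+?): stops at the first </form>, so each form matches separately.
preg_match_all('~<form.+?</form>~is', $str, $lazy);
```

The greedy match returns the entire string; the lazy one returns two separate forms.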
The best way I can think of is to use the Simple HTML DOM library with PHP to get the form(s) from the HTML page using DOM queries.
It is a little more convenient than using built-in XML parsers like SimpleXML or DOMDocument.
You can find the library here.
Normally you should use DOM to parse HTML, but in this case the website is very far from being standard HTML, with some of the code being modified in place by JavaScript. It therefore cannot be loaded into the DOM object. This might be intentional, a way of obfuscating the code.
In any case, it is not so much your RE (although using a non-greedy match would help) as the design of the site itself that prevents you from parsing out what you want.

PHP DOMDocument - get html source of BODY

I'm using PHP's DOMDocument to parse and normalize user-submitted HTML, using the loadHTML method to parse the content and then getting a well-formed result via saveHTML:
$dom = new DOMDocument();
$dom->loadHTML('<div><p>Hello World');
$well_formed = $dom->saveHTML();
echo $well_formed;
This does a beautiful job of parsing the fragment and adding the appropriate closing tags. The problem is that I'm also getting a bunch of tags I don't want such as <!DOCTYPE>, <html>, <head> and <body>. I understand that every well-formed HTML document needs these tags, but the HTML fragment I'm normalizing is going to be inserted into an existing valid document.
The quick solution to your problem is to use an xPath expression to grab the body.
$dom= new DOMDocument();
$dom->loadHTML('<div><p>Hello World');
$xpath = new DOMXPath($dom);
$body = $xpath->query('/html/body');
echo($dom->saveXml($body->item(0)));
A word of warning here: sometimes loadHTML will throw a warning when it encounters certain poorly formed HTML documents. If you're parsing those kinds of HTML documents, you'll need to find a better HTML parser [self link warning].
In your case, you do not want to work with an HTML document but with an HTML fragment, a portion of HTML code, which means DOMDocument is not quite what you need.
Instead, I would rather use something like HTMLPurifier (quoting) :
HTML Purifier is a standards-compliant
HTML filter library written in PHP.
HTML Purifier will not only remove all
malicious code (better known as XSS)
with a thoroughly audited, secure yet
permissive whitelist, it will also
make sure your documents are standards compliant, something only
achievable with a comprehensive
knowledge of W3C's specifications.
And, if you try your portion of code :
<div><p>Hello World
Using the demo page of HTMLPurifier, you get this clean HTML as an output :
<div><p>Hello World</p></div>
Much better, isn't it ? ;-)
(Note that HTMLPurifier supports a wide range of options, and taking a look at its documentation might not hurt.)
Faced with the same problem, I've created a wrapper around DOMDocument called SmartDOMDocument to overcome this and some other shortcomings (such as encoding problems).
You can find it here: http://beerpla.net/projects/smartdomdocument
This was taken from another post and worked perfectly for my use:
$layout = preg_replace('~<(?:!DOCTYPE|/?(?:html|head|body))[^>]*>\s*~i', '', $layout);
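For instance, applied to typical loadHTML/saveHTML output, that pattern strips the doctype and the html/head/body wrapper tags and leaves only the fragment:

```php
<?php
$layout = "<!DOCTYPE html><html><body><div><p>Hello World</p></div></body></html>\n";

// Strip the doctype plus the opening/closing html, head and body tags,
// along with any whitespace that follows each of them. Inner tags such
// as <div> and <p> do not match the alternation and are left untouched.
$fragment = preg_replace('~<(?:!DOCTYPE|/?(?:html|head|body))[^>]*>\s*~i', '', $layout);
```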
TL;DR: $dom->saveHTML($dom->documentElement->lastChild);
Where $dom->documentElement->lastChild is the body-node but could be every other available DOMNode of the document.
Actually, the DOMDocument::saveHTML method itself is capable of doing what you want.
It takes a DOMNode object as its first argument to output a subset of the document.
$dom = new DOMDocument();
$dom->loadHTML('<div><p>Hello World');
$well_formed = $dom->saveHTML($dom->documentElement->lastChild);
echo $well_formed;
There are several ways of retrieving the body-node. Here are 2:
$bodyNode = $dom->documentElement->lastChild;
$bodyNode = $dom->getElementsByTagName('body')->item(0);
From the PHP Manual
public DOMDocument::saveHTML(?DOMNode $node = null): string|false
Parameters
node
Optional parameter to output a subset of the document.
https://www.php.net/manual/en/domdocument.savehtml.php

Parsing of badly formatted HTML in PHP

In my code I convert some styled XLS documents to HTML using OpenOffice.
I then parse the tables using xml_parser_create.
The problem is that OpenOffice creates old-school HTML with unclosed <BR> and <HR> tags; it doesn't create doctypes and doesn't quote attributes (<TABLE WIDTH=4>).
The PHP parsers I know of don't like this and yield XML formatting errors. My current solution is to run some regexes over the file before I parse it, but this is neither nice nor fast.
Do you know a (hopefully included) PHP parser that doesn't care about these kinds of mistakes? Or perhaps a fast way to fix 'broken' HTML?
A solution to "fix" broken HTML could be to use HTMLPurifier (quoting) :
HTML Purifier is a standards-compliant
HTML filter library written in PHP.
HTML Purifier will not only remove
all malicious code (better known as
XSS) with a thoroughly audited,
secure yet permissive whitelist, it
will also make sure your documents are standards compliant
An alternative idea might be to try loading your HTML with DOMDocument::loadHTML (quoting) :
The function parses the HTML contained
in the string source . Unlike loading
XML, HTML does not have to be
well-formed to load.
And if you're trying to load HTML from a file, see DOMDocument::loadHTMLFile.
There is SimpleHTML
For repairing broken HTML, you could use Tidy.
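A sketch of that approach (guarded, since the tidy extension is not compiled into every PHP build; the option keys used are standard Tidy configuration names):

```php
<?php
// Exactly the kind of markup OpenOffice emits: unclosed tags,
// unquoted attributes, no doctype.
$broken = '<TABLE WIDTH=4><TR><TD>cell<BR><HR></TABLE>';

if (extension_loaded('tidy')) {
    // output-xhtml forces closed tags and quoted attributes;
    // show-body-only drops the html/head/body wrapper Tidy would add.
    $clean = tidy_repair_string($broken, [
        'output-xhtml'   => true,
        'show-body-only' => true,
    ]);
} else {
    $clean = null; // tidy not available on this build
}
```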
As an alternative you can use the native XMLReader. Because it acts as a cursor going forward on the document stream, stopping at each node on the way, it does not reject an invalid document up front; it only fails once the cursor actually reaches the invalid part.
See http://www.ibm.com/developerworks/library/x-pullparsingphp.html
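A minimal pull-parsing sketch with XMLReader, collecting the text nodes the cursor passes rather than building a whole tree in memory:

```php
<?php
$reader = new XMLReader();
$reader->XML('<items><item>one</item><item>two</item></items>');

$values = [];
// read() advances the cursor one node at a time through the stream.
while ($reader->read()) {
    if ($reader->nodeType === XMLReader::TEXT) {
        $values[] = $reader->value;
    }
}
$reader->close();
```

Because processing happens as the stream is read, memory stays flat even for very large documents.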
Any particular reason you're still using the PHP 4 XML API?
If you can get away with using PHP 5's XML API, there are two possibilities.
First, try the built-in HTML parser. It's really not very good (it tends to choke on poorly formatted HTML), but it might do the trick. Have a look at DOMDocument::loadHTML.
Second option - you could try the HTML parser based on the HTML5 parser specification:
http://code.google.com/p/html5lib/
This tends to work better than the built-in PHP HTML parser. It loads the HTML into a DomDocument object.
A solution is to use DOMDocument.
Example :
$str = "
<html>
<head>
<title>test</title>
</head>
<body>
</div>error.
<p>another error</i>
</body>
</html>
";
$doc = new DOMDocument();
@$doc->loadHTML($str); // @ suppresses the parser warnings for the malformed markup
echo $doc->saveHTML();
Advantage: natively included in PHP, unlike PHP Tidy.

DOM manipulation in PHP

I am looking for good methods of manipulating HTML in PHP. For example, the problem I currently have is dealing with malformed HTML.
I am getting input that looks something like this:
<div>This is some <b>text
As you noticed, the HTML is missing closing tags. I could use regex or an XML parser to solve this problem. However, it is likely that I will have to do other DOM manipulation in the future. I wonder if there are any good PHP libraries that handle DOM manipulation similar to how JavaScript does.
PHP has a PECL extension that gives you access to the features of HTML Tidy. Tidy is a pretty powerful library that should be able to take code like that and close tags in an intelligent manner.
I use it to clean up malformed XML and HTML sent to me by a classified ad system prior to import.
I've found PHP Simple HTML DOM to be the most useful and straightforward library yet. Better than the PECL extension, I would say.
I've written an article on how to use it to scrape MySpace artist tour dates (just an example). Here's a link to the PHP Simple HTML DOM parser.
The now built-in DOM library can solve this problem easily. The loadHTML method will accept malformed markup, while the load method (for XML) will not.
$d = new DOMDocument;
$d->loadHTML('<div>This is some <b>text');
echo $d->saveHTML();
The output will be:
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html>
<body>
<div>This is some <b>text</b></div>
</body>
</html>
For manipulating the DOM, I think what you're looking for is this. I've used it to parse HTML documents from the web and it worked fine for me.
