I have been attempting to use SimpleXML to parse XHTML which has been captured through output buffering. To get there I used a DOM object as per some excellent suggestions given in another question.
The SimpleXML object is fired off to other classes that make use of it to inspect and/or make changes to the page (but currently ignore it so I can get this working), and then $xml->asXML() is output once the manipulation is done. It's not simple, but it is a reasonably elegant solution to working with some legacy code that I would like to replace sometime.
So far so good but the output now has odd characters that look for all the world like the result of a text encoding issue.
$doc = new DOMDocument();
$doc->loadHTML($this->PAGE); // <-- in goes nicely behaved XHTML
$doc->encoding = 'utf-8'; // This did not seem to help
$xmlObject = simplexml_import_dom($doc,$customClass);
//[...stuff...]
$this->PAGE = $xmlObject->asXML(); // <-- out comes XHTML with cruft
//[...logging and so forth...]
echo $this->PAGE;
According to the meta tag, the HTML is assumed to be iso-8859-1.
I’m seeing plenty of  if that is meaningful to anyone?
I tried to use the DOM doc to convert to utf-8, which I guess was happening anyway because it did not make any difference. Not to mention that the meta data in the XHTML is now wrong.
Is there a way of detecting the encoding in use up to that point, switching, and then switching back, perhaps by tokenizing trouble characters or some such? Failing that, is there another approach that would minimise the resulting mess?
I need to get this right as when finished it will be shared with a bunch of different people (in theory this could be a really big number) with different setups that I am worried about breaking.
UPDATE: Nuts! It seems that the received wisdom is that SimpleXML will only deal with utf-8, so as a slight change to the question: how can I HTML-encode anything that would suffer from the change (for the many different languages used) without also encoding the XHTML structure? Or have I hit a dead end?
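One commonly used escape hatch for exactly this (a general libxml workaround, not something from the question) is mb_convert_encoding with the 'HTML-ENTITIES' target: it entity-encodes exactly the characters that would suffer from the conversion, while leaving tags and attributes untouched. A minimal sketch, assuming the buffered page really is iso-8859-1 as its meta tag claims:

```php
<?php
// Sketch: entity-encode non-ASCII text before DOMDocument/SimpleXML see it.
// Assumes the buffered page is iso-8859-1, as its meta tag claims.
$page = "<p>caf\xE9</p>"; // "café" in iso-8859-1

// Only characters outside ASCII become entities; the markup is untouched.
$safe = mb_convert_encoding($page, 'HTML-ENTITIES', 'ISO-8859-1');

$doc = new DOMDocument();
@$doc->loadHTML($safe);
$xml = simplexml_import_dom($doc);
echo $xml->asXML();
```

(Note that the 'HTML-ENTITIES' target is deprecated as of PHP 8.2; htmlentities with an explicit charset, or mb_encode_numericentity, are the longer-term spellings.)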
I am using the following function to get the inner HTML of an HTML string:
function DOMinnerHTML($element)
{
    $innerHTML = "";
    $children = $element->childNodes;
    foreach ($children as $child) {
        $tmp_dom = new DOMDocument('1.0', 'UTF-8');
        $tmp_dom->appendChild($tmp_dom->importNode($child, true));
        $innerHTML .= trim($tmp_dom->saveHTML());
    }
    return $innerHTML;
}
My HTML string also contains Unicode characters. Here is an example of the HTML string:
$html = '<div>Thats True. Yes it is well defined آپ مجھے تم کہہ کر پکاریں</div>';
When I use the above function
$output = DOMinnerHTML($html);
the output is as below
$output = '<div>Thats True. Yes it is well defined
&#1705;&#1746;&#1748;&#1587;&#1604;&#1591;&#1575;</div>';
The actual Unicode characters are converted to numeric entities.
I have debugged the code and found that in the DOMinnerHTML function, before the following line:
$innerHTML .= trim($tmp_dom->saveHTML());
if I echo
echo $tmp_dom->textContent;
It shows the actual Unicode characters, but after saving to $innerHTML it outputs the numeric entities.
Why is it doing that?
Note: please don't suggest html_entity_decode-like functions to convert numeric entities back to real Unicode characters, because I also have user-formatted data in my HTML string that I don't want converted.
Note: I have also tried by putting the
<meta http-equiv="content-type" content="text/html; charset=utf-8">
before my HTML string, but it made no difference.
I had a similar problem. After reading the above comment, and after further investigation, I found a very simple solution.
All you have to do is use html_entity_decode() to convert the output of saveHTML(), as follows:
// Create a new dom document
$dom = new DOMDocument();
// .... Do some stuff, adding nodes, ...etc.
// the html_entity_decode function will solve the unicode issue you described
$result = html_entity_decode($dom->saveHTML(), ENT_QUOTES, 'UTF-8');
// echo your output
echo $result;
This will ensure that Unicode characters are displayed properly.
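An alternative to decoding after the fact (an assumption about the cause, not part of this answer): libxml falls back to latin-1 when loadHTML sees no encoding declaration, which is what turns UTF-8 text into numeric entities in the first place. Prepending an XML prolog before the markup tells the parser the bytes are UTF-8:

```php
<?php
// Sketch: declare the encoding before any markup so libxml parses the
// bytes as UTF-8 instead of latin-1. The prolog trick avoids needing a
// <meta> tag or a post-hoc html_entity_decode() pass.
$html = '<div>آپ مجھے تم کہہ کر پکاریں</div>';

$dom = new DOMDocument('1.0', 'UTF-8');
@$dom->loadHTML('<?xml encoding="utf-8"?>' . $html);

echo $dom->textContent; // the original characters, not mojibake
```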
Good question, and you did an excellent job narrowing down the problem to a single line of code that caused things to go haywire! This allowed me to figure out what is going wrong.
The problem is with the DOMDocument's saveHTML() function. It is doing exactly what it is supposed to do, but its design is not what you wanted.
saveHTML() converts the document into a string "using HTML formatting", which means that it does HTML entity encoding for you! Sadly, this is not what you wanted. Comments in the PHP docs also indicate that DOMDocument does not handle UTF-8 especially well and does not do very well with fragments (it automatically adds html, body, and DOCTYPE tags).
Check out this comment for a proposed solution by simply using another class: alternative to DOMDocument
After seeing many complaints about certain DOMDocument shortcomings, such as bad handling of encodings and always saving HTML fragments with <html>, <body>, and DOCTYPE, I decided that a better solution is needed.
So here it is: SmartDOMDocument. You can find it at
http://beerpla.net/projects/smartdomdocument/
Currently, the main highlights are:
SmartDOMDocument inherits from DOMDocument, so it's very easy to use - just declare an object of type SmartDOMDocument instead of DOMDocument and enjoy the new behavior on top of all existing
functionality (see example below).
saveHTMLExact() - DOMDocument has an extremely badly designed "feature" where, if the HTML code you are loading does not contain <html> and <body> tags, it adds them automatically (yup, there are no flags to turn this behavior off). Thus, when you call $doc->saveHTML(), your newly saved content now has <html>, <body>, and DOCTYPE in it. Not very handy when trying to work with code fragments (XML has a similar problem). SmartDOMDocument contains a new function called saveHTMLExact() which does exactly what you would want - it saves HTML without adding that extra garbage that DOMDocument does.
encoding fix - DOMDocument notoriously doesn't handle encoding (at least UTF-8) correctly and garbles the output. SmartDOMDocument tries
to work around this problem by enhancing loadHTML() to deal with
encoding correctly. This behavior is transparent to you - just use
loadHTML() as you would normally.
This worked for me:
mb_convert_encoding($html, 'HTML-ENTITIES', 'UTF-8');
I'm using the XHTML Transitional doctype for displaying content in a browser. But before the content is displayed, it is passed through an XML parser (DOMDocument) for giving final touches before outputting to the browser.
I use a custom designed CMS for my website, that allows me to make changes to the site. I have a module that allows me to display HTML scripts on my website in a way similar to WordPress widgets.
The problem I am facing right now is that I need to make sure any code provided through this module is in valid XHTML format, or else the module will need to convert the code to valid XHTML. Currently, if a portion of the input code is not XHTML-compliant, my XML parser breaks and throws warnings.
What I am looking for is a solution that encodes the entities present in the URLs and text portions of the input provided via a TextArea control. For example, the following string will break the parser with an entity reference error:
<script type="text/javascript" src="http://www.abcxyz.com/foo?bar=1&sumthing"></script>
Also the following line would cause same error:
<a href="http://www.somesite.com">Books & Cool stuff<a/>
P.S. If I use htmlentities or htmlspecialchars, they also convert the angle brackets of tags, which is not required. I just need the URLs and text portions of the string to be escaped/encoded.
Any help would be greatly appreciated.
Thanks and regards,
Waqar Mushtaq
What you'd need to do is generate valid XHTML in the first place. All your attributes must be HTML-entitied.
<script type="text/javascript" src="http://www.abcxyz.com/foo?bar=1&sumthing"></script>
should be
<script type="text/javascript" src="http://www.abcxyz.com/foo?bar=1&amp;sumthing"></script>
and
<a href="http://www.somesite.com">Books & Cool stuff</a>
should be
<a href="http://www.somesite.com">Books &amp; Cool stuff</a>
It's not easy to always generate valid XHTML. If at all possible I would recommend you find some other way of doing the post processing.
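That said, the narrow case shown above (bare ampersands in URLs and text) can be handled without touching the tags. A sketch, assuming unescaped "&" is the only problem being fixed (the function name is mine, not from the answer):

```php
<?php
// Sketch: escape bare "&" while leaving markup and existing entities
// (&amp;, &#123;, &#x1F;) alone. Fixes the "entity reference" parser
// error from the question; it is not a full XHTML repair tool.
function escapeBareAmpersands(string $html): string
{
    return preg_replace(
        '/&(?![A-Za-z][A-Za-z0-9]*;|#[0-9]+;|#x[0-9A-Fa-f]+;)/',
        '&amp;',
        $html
    );
}

echo escapeBareAmpersands(
    '<a href="http://www.somesite.com?a=1&b=2">Books & Cool stuff</a>'
);
// -> <a href="http://www.somesite.com?a=1&amp;b=2">Books &amp; Cool stuff</a>
```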
As already suggested in a quick comment, you can solve the problem quite comfortably with the PHP tidy extension.
To convert a HTML fragment - even a good tag soup - into something DomDocument or SimpleXML can deal with, you can use something like the following:
$config = array(
    'output-xhtml' => 1,
    'show-body-only' => 1
);
$fragment = tidy_repair_string($html, $config);
$xhtml = sprintf("<body>%s</body>", $fragment);
Example: format tag soup HTML as valid XHTML with tidy_repair_string.
Tidy has many options; the two used here are needed for fragments and XHTML compatibility.
The only problem left now is that this XHTML fragment can contain entities that DomDocument or SimpleXML do not understand, for example &nbsp;. This and others are undefined in XML.
As far as DomDocument is concerned (you wrote you use it), it supports loading HTML instead of XML as well, which deals with those entities:
$dom = new DomDocument;
$dom->loadHTML($xhtml);
Example: Loading HTML with DomDocument
HTML Tidy is a computer program and a library whose purpose is to fix invalid HTML and to improve the layout and indent style of the resulting markup.
http://tidy.sourceforge.net/
Examples of bad HTML it is able to fix:
Missing or mismatched end tags, mixed up tags
Adding missing items (some tags, quotes, ...)
Reporting proprietary HTML extensions
Change layout of markup to predefined style
Transform characters from some encodings into HTML entities
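Putting the pieces together, the whole tidy-then-parse round trip might look like this (a sketch; it requires the tidy extension, and the sample tag soup is mine):

```php
<?php
// Sketch: old-school tag soup -> valid XHTML fragment -> SimpleXML.
// Requires the tidy extension (often packaged as php-tidy).
if (!extension_loaded('tidy')) {
    exit("tidy extension not available\n");
}

$html = '<TABLE WIDTH=4><tr><td>cell<br></table>';

$config = array(
    'output-xhtml'   => 1, // close <br>, quote attributes, lowercase tags
    'show-body-only' => 1, // no <html>/<head>/DOCTYPE wrapper
);

$fragment = tidy_repair_string($html, $config);
$xhtml = sprintf('<body>%s</body>', $fragment);

$sx = simplexml_load_string($xhtml);
echo trim((string) $sx->table->tr->td), "\n"; // the repaired cell text
```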
If I load an HTML page using DOMDocument::loadHTMLFile() and then pass it to simplexml_import_dom(), everything is fine. However, if I use $dom->saveHTML() to get a string representation of the DOMDocument and then use simplexml_load_string(), I get nothing. Actually, if I use a very simple page it will work, but as soon as there is anything more complex, it fails without any errors in the PHP log file.
Can anyone shed light on this?
Is it something to do with HTML not being parsable XML?
I am trying to strip out CRs and newlines from the formatted HTML text before using the contents, as they have nothing to do with the content but get inserted into the SimpleXMLElement object, which is rather tedious.
Is it something to do with HTML not being parsable XML?
YES! HTML is a far less strict syntax, so simplexml_load_string will not work with it by itself. This is because SimpleXML is simple and HTML is convoluted. On the other hand, DOMDocument is designed to be able to read the convoluted HTML structure, which means that since DOMDocument can make sense of HTML and SimpleXML can make sense of DOMDocument, you can bridge the proverbial gap there.
<!-- Valid HTML but not valid XML -->
<ul>
<li>foo
<li>bar
</ul>
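That bridge can be sketched as follows: let DOMDocument do the lenient HTML parse, then hand the tree to SimpleXML directly with simplexml_import_dom(), rather than round-tripping through saveHTML() and simplexml_load_string():

```php
<?php
// Sketch: DOMDocument reads the tag soup, simplexml_import_dom bridges it.
$html = '<ul><li>foo<li>bar</ul>'; // valid HTML, not well-formed XML

$dom = new DOMDocument();
@$dom->loadHTML($html); // "@" hides the usual tag-soup warnings

$sx = simplexml_import_dom($dom); // root element is <html>
foreach ($sx->body->ul->li as $li) {
    echo $li, "\n"; // DOMDocument closed each <li> for us
}
```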
HTML may or may not be valid XML. When you use loadHTMLFile, the input doesn't necessarily have to be well-formed XML, because the DOM is an HTML one, so different rules apply; but when you pass a string to SimpleXML, it must indeed be well-formed.
If I get your question correctly and you simply want no whitespace in your output, then there is no need to use SimpleXML here.
Use: DOMDocument::preserveWhiteSpace
like:
$dom->preserveWhiteSpace = false;
before saveHTML and you're set.
In my code I convert some styled XLS documents to HTML using OpenOffice.
I then parse the tables using xml_parser_create.
The problem is that OpenOffice creates old-school HTML with unclosed <BR> and <HR> tags; it doesn't create doctypes and doesn't quote attributes (<TABLE WIDTH=4>).
The PHP parsers I know of don't like this and yield XML formatting errors. My current solution is to run some regexes over the file before I parse it, but this is neither nice nor fast.
Do you know of a (hopefully bundled) PHP parser that doesn't care about these kinds of mistakes? Or perhaps a fast way to fix 'broken' HTML?
A solution to "fix" broken HTML could be to use HTMLPurifier (quoting) :
HTML Purifier is a standards-compliant HTML filter library written in PHP. HTML Purifier will not only remove all malicious code (better known as XSS) with a thoroughly audited, secure yet permissive whitelist, it will also make sure your documents are standards compliant.
An alternative idea might be to try loading your HTML with DOMDocument::loadHTML (quoting) :
The function parses the HTML contained in the string source. Unlike loading XML, HTML does not have to be well-formed to load.
And if you're trying to load HTML from a file, see DOMDocument::loadHTMLFile.
There is SimpleHTML
For repairing broken HTML, you could use Tidy.
As an alternative you can use the native XMLReader. Because it acts as a cursor going forward on the document stream, stopping at each node on the way, it will not break on invalid XML documents.
See http://www.ibm.com/developerworks/library/x-pullparsingphp.html
Any particular reason you're still using the PHP 4 XML API?
If you can get away with using PHP 5's XML API, there are two possibilities.
First, try the built-in HTML parser. It's really not very good (it tends to choke on poorly formatted HTML), but it might do the trick. Have a look at DOMDocument::loadHTML.
Second option - you could try the HTML parser based on the HTML5 parser specification:
http://code.google.com/p/html5lib/
This tends to work better than the built-in PHP HTML parser. It loads the HTML into a DomDocument object.
A solution is to use DOMDocument.
Example :
$str = "
<html>
<head>
<title>test</title>
</head>
<body>
</div>error.
<p>another error</i>
</body>
</html>
";
$doc = new DOMDocument();
@$doc->loadHTML($str);
echo $doc->saveHTML();
Advantage: natively included in PHP, unlike PHP Tidy.
I have a bunch of legacy documents that are HTML-like. That is, they look like HTML but have additional made-up tags that aren't part of HTML:
<strong>This is an example of a <pseud-template>fake tag</pseud-template></strong>
I need to parse these files. PHP is the only tool available. The documents don't come close to being well-formed XML.
My original thought was to use the loadHTML method on PHP's DOMDocument. However, this method chokes on the made-up HTML tags and refuses to parse the string/file.
$oDom = new DomDocument();
$oDom->loadHTML("<strong>This is an example of a <pseud-template>fake tag</pseud-template></strong>");
//gives us
DOMDocument::loadHTML() [function.loadHTML]: Tag pseud-template invalid in Entity, line: 1 occurred in ....
The only solution I've been able to come up with is to pre-process the files with string-replacement functions that remove the invalid tags and replace them with a valid HTML tag (maybe a span with an id of the tag name).
Is there a more elegant solution? A way to let DOMDocument know about additional tags to consider as valid? Is there a different, robust HTML parsing class/object out there for PHP?
(if it's not obvious, I don't consider regular expressions a valid solution here)
Update: The information in the fake tags is part of the goal here, so something like Tidy isn't an option. Also, I'm after something that does some level, if not all, of the well-formedness cleanup for me, which is why I was looking at DomDocument's loadHTML method in the first place.
You can suppress warnings with libxml_use_internal_errors while loading the document, e.g.:
libxml_use_internal_errors(true);
$doc = new DomDocument();
$doc->loadHTML("<strong>This is an example of a <pseud-template>fake tag</pseud-template></strong>");
libxml_use_internal_errors(false);
If, for some reason, you need access to the warnings, use libxml_get_errors
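A short sketch combining the two calls: silence the warnings during the parse, then inspect what libxml collected:

```php
<?php
// Sketch: collect parse warnings instead of letting them print.
$previous = libxml_use_internal_errors(true);

$doc = new DomDocument();
$doc->loadHTML("<strong>This is an example of a <pseud-template>fake tag</pseud-template></strong>");

foreach (libxml_get_errors() as $error) {
    // Each LibXMLError carries a message, severity level, line and column.
    echo trim($error->message), " (line {$error->line})\n";
}

libxml_clear_errors();
libxml_use_internal_errors($previous); // restore the previous setting
```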
I wonder if passing the "bad" HTML through HTML Tidy might help as a first pass? Might be worth a look, if you can get the document to be well formed, maybe you could load it as a regular XML file with DomDocument.
@Twan
You don't need a DTD for DOMDocument to parse custom XML. Just use DOMDocument->load(), and as long as the XML is well-formed, it can read it.
Once you get the files to be well-formed, that's when you can start looking at XML parsers; before that, you're S.O.L. Like Alejo said, you could look at HTML Tidy, but it looks like that's specific to HTML, and I don't know how it would go with your custom elements.
I don't consider regular expressions a valid solution here
Until you've got well-formedness, that might be your only option. Once you get the documents to that stage, you're in the clear with the DOM functions.
Take a look at the parser in the PHP FIT port. The code is clean and was originally designed for loading the dirty HTML saved by Word. It's configured to pull tables out, but can easily be adapted.
You can see the source here:
http://gerd.exit0.net/pat/PHPFIT/PHPFIT-0.1.0/Parser.phps
The unit test will show you how to use it:
http://gerd.exit0.net/pat/PHPFIT/PHPFIT-0.1.0/test/parser.phps
My quick-and-dirty solution to this problem was to run a loop that matches my list of custom tags with a regular expression. The regexp doesn't catch tags that have another custom tag nested inside them.
When there is a match, a function to process that tag is called, and it returns the "processed HTML". If that custom tag was inside another custom tag, then the parent becomes childless by the fact that actual HTML was inserted in place of the child, and it will be matched by the regexp and processed on the next iteration of the loop.
The loop ends when there are no childless custom tags left to match. Overall it's iterative (a while loop), not recursive.
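A minimal sketch of that loop (the tag name, handler, and emitted markup are illustrative, not from the answer): match only "childless" custom tags, replace them via a callback, and repeat until a pass makes no replacements:

```php
<?php
// Sketch of the iterative approach described above: innermost custom tags
// are replaced first; their parents become "childless" and match on the
// next pass of the while loop.
function renderCustomTags(string $html, array $handlers): string
{
    $names = implode('|', array_map('preg_quote', array_keys($handlers)));
    // Body may contain anything except the start of another custom tag.
    $pattern = "/<($names)>((?:(?!<(?:$names)[\\s>]).)*?)<\\/\\1>/s";

    do {
        $html = preg_replace_callback($pattern, function ($m) use ($handlers) {
            return $handlers[$m[1]]($m[2]); // childless tag -> processed HTML
        }, $html, -1, $count);
    } while ($count > 0);

    return $html;
}

$handlers = array(
    'pseud-template' => function ($body) { return "<em>$body</em>"; },
);

echo renderCustomTags(
    '<strong>This is a <pseud-template>fake tag</pseud-template></strong>',
    $handlers
);
// -> <strong>This is a <em>fake tag</em></strong>
```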
@Alan Storm
Your comment on my other answer got me to thinking:
When you load an HTML file with DOMDocument, it appears to do some level of cleanup re: well-formedness, BUT requires all your tags to be legit HTML tags. I'm looking for something that does the former, but not the latter. (Alan Storm)
Run a regex (sorry!) over the tags, and when it finds one which isn't a valid HTML element, replace it with a valid element that you know doesn't exist in any of the documents (blink comes to mind...), and give it an attribute value with the name of the illegal element, so that you can switch it back afterwards. E.g.:
$code = str_replace("<pseudo-tag>", "<blink rel=\"pseudo-tag\">", $code);
// and then back again...
$code = preg_replace('<blink rel="(.*?)">', '<\1>', $code);
obviously that code won't work, but you get the general idea?