Simple DomDocument and GetElementsByTagName [duplicate] - php

This question already has answers here:
How i can get td nodeValue with specific class?
(2 answers)
Closed 2 years ago.
I'm trying to parse a webpage for the ISBN number, the HTML looks like:
<tr>
<td>ISBN: </td>
<td itemprop="isbn">9781472223821</td>
</tr>
I currently have:
header('Content-Type:application/json');
$url = "URL Removed";
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
$res = curl_exec($ch);
curl_close($ch);
$dom = new DomDocument();
@$dom->loadHTML($res);
$searchNodes = $dom->getElementsByTagName("//td[@itemprop='isbn']");
foreach ($searchNodes as $node) {
echo $node->nodeValue, PHP_EOL;
}
When I run this I get no output. I've double-checked the XPath query in the Chrome dev tools and it correctly selects the element I'm after; I believe it's something to do with the nodeValue option. I've tried a var_dump on the $searchNodes variable and get
object(DOMNodeList)#2 (1) {
["length"]=>
int(0)
}
Is anyone able to highlight my next steps to investigate this?

getElementsByTagName expects only a single tag name. Here is a working example using DOMXPath:
$res = '
<tr>
<td>ISBN: </td>
<td itemprop="isbn">9781472223821</td>
</tr>
';
$dom = new DomDocument();
$dom->loadHTML($res);
$xpath = new DOMXPath($dom);
$searchNodes = $xpath->query("//td[@itemprop='isbn']");
foreach ($searchNodes as $node) {
echo $node->nodeValue, PHP_EOL;
}
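If you would rather stay with getElementsByTagName(), a rough equivalent (a sketch only, assuming $res holds the fetched page HTML as in your curl code) is to grab every td and filter on its itemprop attribute yourself:

libxml_use_internal_errors(true); // real-world HTML is rarely valid, so silence parser warnings
$dom = new DomDocument();
$dom->loadHTML($res);
libxml_clear_errors();
foreach ($dom->getElementsByTagName('td') as $td) {
    // keep only the cell whose itemprop attribute is "isbn"
    if ($td->getAttribute('itemprop') === 'isbn') {
        echo $td->nodeValue, PHP_EOL; // 9781472223821
    }
}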

Related

Is it possible to extract Dom Elements from htmlentities() function in php?

I appreciate the time you take to try and help me with my question.
What I am doing is trying to parse HTML fetched from a link. I use curl first to fetch the website, then I run the response through htmlentities() so it doesn't render on the page, which gives me a string, and then I use the DOM object to extract tags from it. I looked up different parsing methods on Google and learned a little about them, but when I execute my script the problem is that the string gets saved as text content (textContent) and not as a real HTML document. How can I convert the htmlentities string back into a real DOM document and extract elements from it?
The image of the var_dump is here.
Here is my script:
$curl = curl_init();
curl_setopt($curl, CURLOPT_URL, 'https://www.usatoday.com/story/news/world/2021/02/17/dubai-princess-sheikha-latifa-says-she-hostage-after-flee-attempt/6778014002/?utm_source=feedblitz&utm_medium=FeedBlitzRss&utm_campaign=usatodaycomworld-topstories');
curl_setopt($curl, CURLOPT_SSL_VERIFYPEER, false);
curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
$result = curl_exec($curl);
curl_close($curl);
$htmlentities = htmlentities($result);
// I added the code here
$htmlDom = new DOMDocument();
$htmlDom->loadHTML($htmlentities);
$htmlDom->preserveWhiteSpace = false;
$styles = $htmlDom->getElementsByTagName('style');
foreach ($styles as $style) {
$item = $style->getElementsByTagName('td');
//echo the values
echo '1: '.$item->item(0)->nodeValue.'<br />';
echo '2: '.$item->item(1)->nodeValue.'<br />';
echo '3: '.$item->item(2)->nodeValue;
}
EDIT:
what i added next to the code is this:
$htmlentities = htmlentities($result);
$htmlentities = str_replace("&quot;", '"', $htmlentities);
$htmlentities = str_replace("&#039;", "'", $htmlentities);
$htmlentities = str_replace("&lt;", "<", $htmlentities);
$htmlentities = str_replace("&gt;", ">", $htmlentities);
libxml_use_internal_errors(true);
$htmlDom = new DOMDocument();
$htmlDom->loadHTML($htmlentities);
libxml_clear_errors();
var_dump($htmlDom);
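For what it's worth, the manual str_replace chain shouldn't be needed at all: PHP's html_entity_decode() is the inverse of htmlentities(). A minimal sketch, assuming $result still holds the raw curl response as above:

$htmlentities = htmlentities($result);

// html_entity_decode() turns the entities back into real markup,
// which DOMDocument can then parse as a normal HTML document.
$decoded = html_entity_decode($htmlentities, ENT_QUOTES | ENT_HTML5, 'UTF-8');

libxml_use_internal_errors(true);
$htmlDom = new DOMDocument();
$htmlDom->loadHTML($decoded);
libxml_clear_errors();
var_dump($htmlDom->getElementsByTagName('td')->length);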

how to print whole XML using DOMDocument

I want to grab all links from a URL, but I want the output shown as XML.
For example, I want to take all links from this URL http://www.example.com/xxxx/
and print each one like this:
<a href="http://www.example.com/yyyy/">anotherxxx</a>
Here is my code, but I get this error:
Fatal error: Uncaught TypeError: Argument 1 passed to
DOMDocument::saveXML() must be an instance of DOMNode or null, string
given in C:\xampp\htdocs\sh\index.php:18 Stack trace: #0
C:\xampp\htdocs\sh\index.php(18): DOMDocument->saveXML('/') #1 {main}
thrown in C:\xampp\htdocs\sh\index.php on line 18
$url = "http://www.example.com/xxxx/";
$ch = curl_init();
$timeout = 5;
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, $timeout);
$html = curl_exec($ch);
curl_close($ch);
$dom = new DOMDocument();
@$dom->loadHTML($html);
foreach($dom->getElementsByTagName('a') as $link) {
$short_link = $link->getAttribute('href');
echo $short_link1 = $dom->saveXML($short_link);
echo "<br />";
}
Use DOMXPath to retrieve all links as
$links = $xpath->query("//a/@href");
Then loop through the links and get each one's HTML content with
$dom->saveHTML($link)
Full code here ..
$dom = new domDocument();
$dom->loadHTML($html);
$xpath = new DomXPath($dom);
$links = $xpath->query("//a/@href");
foreach($links as $link){
echo $dom->saveHTML($link);
echo "<br />";
}
The call to getAttribute() will return the attribute's value as a string. So if you just want the href then
$short_link = $link->getAttribute('href');
echo $short_link;
with...
<a href="http://www.example.com/yyyy/">anotherxxx</a>
will give you http://www.example.com/yyyy/
If you want the anchor tag itself...
foreach($dom->getElementsByTagName('a') as $link) {
echo $dom->saveXML($link);
}
Will give
<a href="http://www.example.com/yyyy/">anotherxxx</a>

PHP curl Inside Foreach

EDIT: What is really happening is that a new XML file is created each time, but the new $html information is being added to the previous one, so by the time it gets to the last element in the list being curled, it is saving parsed information from all the previous curls. Can't figure out what is wrong.
Having trouble with a curl not executing as expected. In the code below I have a foreach loop that loops through a list ($textarray) and passes each list element to a curl call; the element is also used to create an XML file with the element as the file name. The curl then returns $html, which is parsed and saved to an XML file. The script runs, the list is passed, the URL is created and passed to the curl function. I get an echo showing the correct URL, a return is made, and each return is parsed and saved to the appropriate file. The problem seems to be that the curl is not actually curling the new $url: I get the exact same information saved in every XML file. I know this is not correct, and I'm not sure why it is happening. Any help appreciated.
Function FeedXml($textarray){
$doc=new DOMDocument('1.0', 'UTF-8');
$feed=$doc->createElement("feed");
Foreach ($textarray as $text){
$url="http://xxx/xxx/".$text;
echo "PATH TO CURL".$url."<br>";
$html=curlurl($url);
$xmlsave="http://xxxx/xxx/".$text;
$dom = new DOMDocument(); //NEW dom FOR EACH SHOW
libxml_use_internal_errors(true);
$dom->loadHTML($html);
$xpath = new DOMXPath($dom);
$dom->formatOutput = true;
$dom->preserveWhiteSpace = true;
//PARSE EACH RETURN INFORMATION
$images= $dom->getElementsByTagName('img');
foreach($images as $img){
$icon= $img ->getAttribute('src');
if( preg_match('/\.(jpg|jpeg|gif)(?:[\?\#].*)?$/i', $icon) ) {
// ITEM TAG
$item= $doc->createElement("item");
$sdAttribute = $doc->createAttribute("sdImage");
$sdAttribute->value = $icon;
$item->appendChild($sdAttribute);
} // IMAGAGE FOR EACH
$feed->appendChild($item);
$doc->appendChild($feed);
$doc->save($xmlsave);
}
}
}
Function curlurl($url){
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER,true);
curl_setopt($ch,CURLOPT_FRESH_CONNECT, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($ch, CURLOPT_VERBOSE, 1);//0-FALSE 1 TRUE
curl_setopt($ch,CURLOPT_SSL_VERIFYHOST, FALSE);
curl_setopt($ch,CURLOPT_SSL_VERIFYPEER ,FALSE);
curl_setopt($ch, CURLOPT_AUTOREFERER, true);
curl_setopt($ch, CURLOPT_TIMEOUT,'10');
$html = curl_exec($ch);
$httpcode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
curl_close($ch);
echo $httpcode;
return $html;
}
Thanks for pointing out my shortcomings above. I have figured out the problem. The following needed to be moved into the foreach loop:
$doc=new DOMDocument('1.0', 'UTF-8');
$feed=$doc->createElement("feed");
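For anyone hitting the same thing, a condensed sketch of the corrected structure (same placeholder URLs and the curlurl() helper defined above) looks roughly like this:

function FeedXml($textarray){
    foreach ($textarray as $text){
        // A fresh document and <feed> element per iteration, so each saved
        // file only contains the items parsed from the current page.
        $doc  = new DOMDocument('1.0', 'UTF-8');
        $feed = $doc->createElement("feed");

        $url     = "http://xxx/xxx/".$text;     // placeholder URL from the question
        $html    = curlurl($url);
        $xmlsave = "http://xxxx/xxx/".$text;    // placeholder save path from the question

        $dom = new DOMDocument();
        libxml_use_internal_errors(true);
        $dom->loadHTML($html);
        libxml_clear_errors();

        foreach ($dom->getElementsByTagName('img') as $img){
            $icon = $img->getAttribute('src');
            if (preg_match('/\.(jpg|jpeg|gif)(?:[\?\#].*)?$/i', $icon)) {
                $item        = $doc->createElement("item");
                $sdAttribute = $doc->createAttribute("sdImage");
                $sdAttribute->value = $icon;
                $item->appendChild($sdAttribute);
                $feed->appendChild($item);   // append only when an item was actually created
            }
        }

        $doc->appendChild($feed);
        $doc->save($xmlsave);
    }
}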

PHP search for website with specific words

I'm trying to monitor a new products page of a website with specific words. I already have a basic script that searches for a single word using file_get_contents(); however this is not effective.
Looking at the page source, they are in <td> tags within a <table>.
How do I get PHP to search for the words no matter what order or capitalisation they are in? e.g.
$searchTerm = "Orange Boots";
from:
<table>
<td>Boots (RED)</td>
</table>
<table>
<td>boots (ORANGE)</td>
</table>
<table>
<td>Shirt (GREEN)</td>
</table>
Returns a match.
Sorry if it's not clear, but I hope you understand.
You can do it like this:
$newcontent = str_replace('Boots', '<span class="Red">Boots</span>', $cont);
and just write CSS for the Red class however you want to show the red colour, e.g. color: red;, and do the same thing for the rest.
But the better approach would be DOM and XPath; see the sketch below.
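A minimal DOM sketch of that idea (assuming $html already holds the fetched page content; the search words are matched case-insensitively and in any order):

$searchTerm = "Orange Boots";
$words = preg_split('/\s+/', strtolower(trim($searchTerm)));

libxml_use_internal_errors(true);
$dom = new DOMDocument();
$dom->loadHTML($html);
libxml_clear_errors();

$xpath = new DOMXPath($dom);
foreach ($xpath->query('//table//td') as $td) {
    $text = strtolower($td->nodeValue);
    // Keep the cell only if every search word appears, in any order.
    $allFound = true;
    foreach ($words as $word) {
        if (strpos($text, $word) === false) {
            $allFound = false;
            break;
        }
    }
    if ($allFound) {
        echo $td->nodeValue, PHP_EOL;   // e.g. "boots (ORANGE)"
    }
}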
If you're looking to make a quick and dirty search over that HTML block, you can try a simple regular expression with the preg_match_all() function. For example, you can try:
$html_block = file_get_contents(...);
$matches_found = preg_match_all('/(orange|boots|shirt)/i', $html_block, $matches);
$matches_found will be the number of matches found (0 if none), and $matches will be populated with the matches accordingly.
Use curl. It's much faster than file_get_contents(). Here's a starting point:
$target_url="http://www.w3schools.com/htmldom/dom_nodes.asp";
// make the cURL request to $target_url
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL,$target_url);
curl_setopt($ch, CURLOPT_FAILONERROR, true);
curl_setopt($ch, CURLOPT_AUTOREFERER, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER,true);
curl_setopt($ch, CURLOPT_TIMEOUT, 10);
$html= curl_exec($ch);
if (!$html) {exit;}
$dom = new DOMDocument();
@$dom->loadHTML($html);
$query = "(/html/body//tr)"; //this is where the search takes place
$xpath = new DOMXPath($dom);
$result = $xpath->query($query);
for ($i = 0; $i < $result->length; $i++) {
$node = $result->item($i);
echo "{$node->nodeName} - {$node->nodeValue}<br />";
}

Get div and the correct close tag preg

Now, preg has always been a tool that I like, but I can't figure out for the life of me whether what I want to do is possible, and how to do it is going over my head.
What I want is for preg_match to return me a div's innerHTML. The problem is that the div I'm trying to read has more divs inside it, and my preg keeps closing on the first closing tag it finds.
Here is my actual code:
$scrape_address = "http://isohunt.com/torrent_details/133831593/98e034bd6382e0f4ecaa9fe2b5eac01614edc3c6?tab=summary";
$ch = curl_init($scrape_address);
curl_setopt ($ch, CURLOPT_RETURNTRANSFER, '1');
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_ENCODING, "");
$data = curl_exec($ch);
preg_match('% <div id="torrent_details">(.*)</div> %six', $data, $match);
print_r($match);
This has been updated with TomcatExodus's help.
Live at :: http://megatorrentz.com/beta/details.php?hash=98e034bd6382e0f4ecaa9fe2b5eac01614edc3c6
<?php
$scrape_address = "http://isohunt.com/torrent_details/133831593/98e034bd6382e0f4ecaa9fe2b5eac01614edc3c6?tab=summary";
$ch = curl_init($scrape_address);
curl_setopt ($ch, CURLOPT_RETURNTRANSFER, '1');
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_ENCODING, "");
$data = curl_exec($ch);
$domd = new DOMDocument();
libxml_use_internal_errors(true);
$domd->loadHTML($data);
libxml_use_internal_errors(false);
$div = $domd->getElementById("torrent_details");
if ($div) {
$dom2 = new DOMDocument();
$dom2->appendChild($dom2->importNode($div, true));
echo $dom2->saveHTML();
} else {
echo "Has no element with the given ID\n";
}
Using regular expressions often leads to problems when parsing markup documents.
XPath version - independent of the source layout. The only thing you need is a div with that id.
<?php
$dom = new DOMDocument();
@$dom->loadHTMLFile($url);
$xp = new domxpath($dom);
$result = $xp->query("//*[@id = 'torrent_details']");
$div=$result->item(0);
if($result->length){
$out =new DOMDocument();
$out->appendChild($out->importNode($div, true));
echo $out->saveHTML();
}else{
echo "No such id";
}
?>
And this is the fix for Maerlyn's solution. It didn't work because getElementById() wants a DTD with the id attribute specified. I mean, you could always build a document with "apple" as the record id, so you need something that says "id" really is the id attribute for this tag.
<?php
$domd = new DOMDocument();
$domd->validateOnParse = true;
@$domd->loadHTML($data);
//this doesn't work as the DTD is not specified
//or the specified id attribute is not the attributed called "id"
//$div = $domd->getElementById("torrent_details");
/*
* workaround found here: https://fosswiki.liip.ch/display/BLOG/GetElementById+Pitfalls
* set the "id" attribute as the real id
*/
$elements = $domd->getElementsByTagName('div');
if (!is_null($elements)) {
foreach ($elements as $element) {
//try-catch needed because of elements with no id
try{
$element->setIdAttribute('id', true);
}catch(Exception $e){}
}
}
//now it works
$div = $domd->getElementById("torrent_details");
//Print its content or error
if ($div) {
$dom2 = new DOMDocument();
$dom2->appendChild($dom2->importNode($div, true));
echo $dom2->saveHTML();
} else {
echo "Has no element with the given ID\n";
}
?>
Both of the solutions work for me.
You can do this:
/<div[^>]*>(.*)<\/div>/i
Which would give you the largest possible innerHTML.
You cannot. I will not link to the famous question, because I dislike the pointless drivel on top. But still regular expressions are unfit to match nested structures.
You can use some trickery, but this is neither reliable, nor necessarily fast:
preg_match_all('#<div id="1">((<div>.*?</div>|.)*?)</div>#ims', $data, $matches);
Your regex had a problem due to the /x flag not matching the opening div. And you used a wrong assertion notation.
preg_match_all('% <div \s+ id="torrent_details">(?<innerHtml>.*)</div> %six', $html, $match);
echo $match['innerHtml'];
That one will work, but you should only need preg_match, not preg_match_all; if the page is written well, there should only be one instance of id="torrent_details" on the given page.
I'm retracting my answer. This will not work properly. Use DOM for navigating the document.
Haha, did it with a bit of tampering. Thanks for the DOMDocument idea; I just had to use SimpleXML:
$ch = curl_init($scrape_address);
curl_setopt ($ch, CURLOPT_RETURNTRANSFER, '1');
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_ENCODING, "");
$data = curl_exec($ch);
$doc = new DOMDocument();
libxml_use_internal_errors(false);
$doc->strictErrorChecking = FALSE;
libxml_use_internal_errors(true);
$doc->loadHTML($data);
$xml = simplexml_import_dom($doc);
print_r($xml->body->table->tr->td->table[2]->tr->td[0]->span[0]->div);
