I get a page using file_get_contents from a remote server, but I want to filter that page and get a DIV from it that has class "text" using PHP. I started with DOMDocument but I'm lost now.
Any help?
$file = file_get_contents("xx");
$elements = new DOMDocument();
$elements->loadHTML($file);
foreach ($elements as $element) {
    if (!is_null($element->attributes)) {
        foreach ($element->attributes as $attrName => $attrNode) {
            if ($attrName == "class" && $attrNode == "text") {
                echo $element;
            }
        }
    }
}
Once you have loaded the document into a DOMDocument instance, you can run XPath queries on it -- which might be easier than walking the DOM yourself.
For that, you can use the DOMXPath class.
For example, you should be able to do something like this:
$dom = new DOMDocument();
$dom->loadHTML($html);
$xpath = new DOMXPath($dom);
$tags = $xpath->query('//div[@class="text"]');
foreach ($tags as $tag) {
    var_dump($tag->textContent);
}
(Not tested, so you might need to adapt the XPath query a bit...)
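If you want the div's markup rather than just its text, a rough, untested sketch putting this together with the original file_get_contents call might look like the following ("xx" is the placeholder URL from the question):
// Fetch the page, then pull out the first <div class="text">.
$html = file_get_contents("xx");

$dom = new DOMDocument();
libxml_use_internal_errors(true);   // real-world HTML is rarely valid; collect parser warnings quietly
$dom->loadHTML($html);
libxml_clear_errors();

$xpath = new DOMXPath($dom);
$divs = $xpath->query('//div[@class="text"]');

if ($divs->length > 0) {
    // saveHTML() with a node argument returns that node's outer HTML
    echo $dom->saveHTML($divs->item(0));
}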
Personally, I like Simple HTML Dom Parser.
include "lib.simple_html_dom.php"
$html = file_get_html('http://scrapeyoursite.com');
$html->find('div.text')->plaintext;
Pretty simple, huh? It accommodates selectors like jQuery :)
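If there can be more than one matching div, find() without an index returns an array of elements you can loop over, e.g.:
foreach ($html->find('div.text') as $div) {
    echo $div->plaintext, "\n";
}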
You can use simple_html_dom as shown in the simple_html_dom doc, or use code like this:
include "simple_html_dom.php";
$html = new simple_html_dom();
$html->load_file('www.yoursite.com');
$con_div = $html->find('div', 0); // get the first div element
Then echo $con_div as plain text:
echo $con_div->plaintext;
The ('div', 0) means it finds the first div in the result array and shows it as plain text.
I hope it helps you.
The following code, where I try to find divs by class, is not working for Google search results; I have also tried searching by id.
include('simple_html_dom.php');
$dom = file_get_html("https://www.google.com/search?q=best+mug");
$all_divs = $dom->find("div[class='g']");
foreach ($all_divs as $div) {
    echo $div->plaintext;
}
I think it's better to use XPath for that; here is a sample of what your code could look like with XPath:
$dom = file_get_contents("https://www.google.com/search?q=best+mug");
$doc = new DOMDocument();
@$doc->loadHTML($dom);
$xpath = new DOMXPath($doc);
$all_divs = $xpath->query("//div[@class='g']");
foreach ($all_divs as $div) {
    echo $div->textContent;
}
Try it out and let me know if it works.
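One extra note: instead of silencing the parser with @, you can let libxml collect the (many) warnings Google's markup will trigger; a small sketch:
$doc = new DOMDocument();
libxml_use_internal_errors(true);   // collect parse warnings instead of printing them
$doc->loadHTML($dom);
libxml_clear_errors();              // discard the collected warnings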
I know there are similar questions, but while studying PHP I ran into this error and I want to understand why it occurs.
<?php
$url = 'http://aice.anie.it/quotazione-lme-rame/';
echo "hello!\r\n";
$html = new DOMDocument();
@$html->loadHTML($url);
$xpath = new DOMXPath($html);
$nodelist = $xpath->query(".//*[@id='table33']/tbody/tr[2]/td[3]/b");
foreach ($nodelist as $n) {
    echo $n->nodeValue . "\n";
}
?>
This prints just "hello!". I want to print the value extracted with the XPath query, but the last echo doesn't output anything.
You have some errors in your code:
You try to get the table from the URL http://aice.anie.it/quotazione-lme-rame/, but it's actually in an iframe located at http://www.aiceweb.it/it/frame_rame.asp, so fetch the iframe URL directly.
You use the function loadHTML(), which loads an HTML string. What you need is loadHTMLFile(), which takes the path or URL of an HTML document as its parameter (see http://www.php.net/manual/fr/domdocument.loadhtmlfile.php).
You assume there is a tbody element on the page, but there isn't one, so remove it from your query filter.
Working code:
$url = 'http://www.aiceweb.it/it/frame_rame.asp';
echo "hello!\r\n";
$html = new DOMDocument();
@$html->loadHTMLFile($url);
$xpath = new DOMXPath($html);
$nodelist = $xpath->query(".//*[@id='table33']/tr[2]/td[3]/b");
foreach ($nodelist as $n) {
    echo $n->nodeValue . "\n";
}
I am trying to write a script that would get the content between these div tags:
<div class="bio">
<label>Bio:</label>
<div class="value">[This Is The Content I'm Trying To Get]</div>
</div>
This is the URL I'm trying to get the contents from:
https://live.xbox.com/en-US/Profile?gamertag=EMT%20PoRsChE
How would I be able to do this?
You will want to use DOMDocument and DOMXPath
// if the below line does not work, you will need to use CURL or similar.
$theHtmlToParse = file_get_contents('http://url.to/page.html');
$doc = new DOMDocument();
$doc->loadHTML($theHtmlToParse);
$xpath = new DOMXpath($doc);
$elements = $xpath->query("*/div[@class='bio']/div[@class='value']");
// We now have a DOMNodeList of matching elements, or false on error
if ($elements !== false)
{
    foreach ($elements as $element)
    {
        echo "<br/>[" . $element->nodeName . "]";
        $nodes = $element->childNodes;
        foreach ($nodes as $node)
        {
            echo $node->nodeValue . "\n";
        }
    }
}
This should give you enough to go on :)
Yes, this is actually possible.
You might use something like visionmedia/php-selector to get the content of .value,
and Guzzle or plain cURL to fetch the source first, if you haven't already.
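A rough sketch of the fetch step with Guzzle (assuming it is installed via Composer; the parsing step is left to whichever library you pick):
require 'vendor/autoload.php';

$client = new GuzzleHttp\Client();
$response = $client->get('https://live.xbox.com/en-US/Profile?gamertag=EMT%20PoRsChE');
$html = (string) $response->getBody();   // raw page source, ready to hand to a parser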
Well, this can be done using the file_get_contents() function.
Simply pass the URL of the webpage to it, then load the result into a parser object.
Navigate through the object using -> as required.
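A minimal sketch of that approach with DOMDocument (the URL is the one from the question; adapt as needed):
$html = file_get_contents('https://live.xbox.com/en-US/Profile?gamertag=EMT%20PoRsChE');

$doc = new DOMDocument();
@$doc->loadHTML($html);   // suppress warnings from imperfect markup
$xpath = new DOMXPath($doc);

// Navigate to the nested div and read its text content.
$nodes = $xpath->query("//div[@class='bio']/div[@class='value']");
if ($nodes->length > 0) {
    echo trim($nodes->item(0)->nodeValue);
}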
I have the following code that rewrites all image tags on a page and adds the nCode image resizer to them. The code is as follows:
function ncode_the_content($content) {
    return preg_replace("/<img([^`|>]*)>/im", "<img onload=\"NcodeImageResizer.createOn(this);\"$1>", $content);
}
What I need to do is make it so that if an image has the class "noresize", it skips the replacement.
I have only managed to get it so that if there is the "noresize" class anywhere on the page it stops resizing all images instead of just the one with the correct class.
Any suggestions?
UPDATE:
Am I even remotely in the right ballpark with this?
function ncode_the_content($content) {
    //Load the HTML page
    $html = file_get_contents($content);
    //Parse it. Here we use loadHTML as a static method
    //to parse the HTML and create the DOM object in one go.
    @$dom = DOMDocument::loadHTML($html);
    //Init the XPath object
    $xpath = new DOMXpath($dom);
    //Query the DOM
    $linksnoresize = $xpath->query( 'img[@class = "noresize"]' );
    $links = $xpath->query( 'img[]' );
    //Display the results as in the previous example
    foreach ($links as $link) {
        echo $link->getAttribute('onload'), 'NcodeImageResizer.createOn(this);';
    }
    foreach ($linksnoresize as $link) {
        echo $link->getAttribute('onload'), '';
    }
}
Here's some untested code:
$dom = DOMDocument::loadHTML($content);
$images = $dom->getElementsByTagName("img");
foreach ($images as $image) {
    if (!strstr($image->getAttribute("class"), "noresize")) {
        $image->setAttribute("onload", "NcodeImageResizer.createOn(this);");
    }
}
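Note that to use this inside the original filter, the modified document still has to be serialized back to a string. A rough, untested sketch (keep in mind that loadHTML() wraps fragments in a full <html><body> skeleton, so you may want to strip that back off before returning):
function ncode_the_content($content) {
    if (trim($content) === '') {
        return $content;               // nothing to parse
    }
    $dom = new DOMDocument();
    @$dom->loadHTML($content);         // suppress warnings from imperfect markup
    foreach ($dom->getElementsByTagName("img") as $image) {
        if (!strstr($image->getAttribute("class"), "noresize")) {
            $image->setAttribute("onload", "NcodeImageResizer.createOn(this);");
        }
    }
    return $dom->saveHTML();           // serialize the modified markup back to a string
}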
But, if it were me, I would eschew any such inline event handler and instead just find the appropriate elements with Javascript.
I ended up just using pure CSS and adding a div around the images I didn't want to be resized. Forced the width and height of that div back to auto and then removed the warning message that was displayed above them. Seems to work fine. Thanks for your help :)
I've searched around for solutions to this question, but each one I find and try doesn't work.
I'm trying to grab the content of a div from a forum topic.
I've tried using preg_match, but that only displayed "Array", so then I tried this method:
$html = file_get_contents("http://www.lcs-server.co.uk/forum/index.php/topic,$id_topic");
$dom = new DOMDocument;
$dom->loadHTML($html);
$element = $dom->getElementById("msg_$id_msg");
var_dump($element);
This will show "object(DOMElement)#1 (0) { } "
The $id_topic and $id_msg are defined above this code, taken from the forum database. I did try taking the message straight from the forum database, but it displayed BB code tags. I'd like it to grab the post content and display it in HTML, as it appears on the forum post itself.
This is the code I'm using now, and it gives me "Fatal error: Cannot redeclare DOMinnerHTML()":
$html = file_get_contents("http://www.lcs-server.co.uk/forum/index.php/topic,$id_topic");
$dom = new DOMDocument;
$dom->loadHTML($html);
$domelement = $dom->getElementById("msg_$id_msg");
foreach ($domelement as $element)
{
echo DOMinnerHTML($element);
}
function DOMinnerHTML($DOMelement)
{
$innerHTML = "";
$children = $DOMelement->childNodes;
foreach ($children as $child)
{
$tmp_dom = new DOMDocument();
$tmp_dom->appendChild($tmp_dom->importNode($child, true));
$innerHTML.=trim($tmp_dom->saveHTML());
}
return $innerHTML;
}
getElementById returns a DOM node object. It does not return the HTML of the node. For that, you have to get the node's "innerHTML". This property is not officially supported by PHP's DOM extension for some reason, but it can be faked using this answer: How to get innerHTML of DOMNode?
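Untested, but something along these lines should avoid both issues -- guard the helper so it cannot be declared twice, and call it on the element directly instead of foreach-ing over it (this keeps the question's DOMinnerHTML() helper, just simplified with saveHTML()):
// Declare the helper once, guarded so a second include/run doesn't redeclare it.
if (!function_exists('DOMinnerHTML')) {
    function DOMinnerHTML($DOMelement)
    {
        $innerHTML = "";
        foreach ($DOMelement->childNodes as $child) {
            // saveHTML() with a node argument returns that node's outer HTML
            $innerHTML .= $DOMelement->ownerDocument->saveHTML($child);
        }
        return $innerHTML;
    }
}

$html = file_get_contents("http://www.lcs-server.co.uk/forum/index.php/topic,$id_topic");
$dom = new DOMDocument;
@$dom->loadHTML($html);

$element = $dom->getElementById("msg_$id_msg");
if ($element !== null) {
    // Call the helper on the element directly instead of looping over it.
    echo DOMinnerHTML($element);
}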