How to extract innerHTML using the PHP DOM [duplicate]

This question already has answers here:
How to get innerHTML of DOMNode?
(9 answers)
Closed 2 years ago.
I'm currently using nodeValue to get HTML output, but it strips the HTML tags and just gives me plain text. Does anyone know how I can modify my code to get the inner HTML of an element by its ID?
function getContent($url, $id){
    // This first section fetches the HTML from the URL
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $html = curl_exec($ch);
    curl_close($ch);
    // This second section parses the HTML and outputs it
    $newDom = new DOMDocument;
    // Parser flags must be set before loadHTML() or they have no effect
    $newDom->preserveWhiteSpace = false;
    $newDom->validateOnParse = true;
    $newDom->loadHTML($html);
    // nodeValue is the culprit: it returns the text content with all tags stripped
    $sections = $newDom->getElementById($id)->nodeValue;
    echo $sections;
}

This works for me:
$sections = $newDom->saveXML($newDom->getElementById($id));
http://www.php.net/manual/en/domdocument.savexml.php
If you have PHP 5.3.6 or later, this is also an option (saveHTML() accepts a node argument only as of 5.3.6):
$sections = $newDom->saveHTML($newDom->getElementById($id));
http://www.php.net/manual/en/domdocument.savehtml.php
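Note that both calls above return the element's outer HTML, i.e. including the element's own opening and closing tags. If you want the innerHTML proper, a minimal sketch is to serialize and concatenate the child nodes yourself (getInnerHTML() is a hypothetical helper, not part of the DOM extension):

function getInnerHTML(DOMNode $node) {
    $html = '';
    // saveHTML() accepts a node argument, so serialize each child separately
    foreach ($node->childNodes as $child) {
        $html .= $node->ownerDocument->saveHTML($child);
    }
    return $html;
}

echo getInnerHTML($newDom->getElementById($id));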

I have modified the code and it's working fine for me. Please find the code below:
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$html = curl_exec($ch);
curl_close($ch);
$newDom = new DOMDocument;
// Set parser flags before loadHTML(), and silence warnings from malformed HTML
$newDom->preserveWhiteSpace = false;
$newDom->validateOnParse = true;
libxml_use_internal_errors(true);
$newDom->loadHTML($html);
libxml_use_internal_errors(false);
// saveHTML() with a node argument returns that element's markup instead of plain text
$sections = $newDom->saveHTML($newDom->getElementById('colophon'));
echo $sections;

Related

Error when trying to get Instagram Embed page HTML code

I'm trying to get the HTML code of Instagram's embed pages for my API, but it returns a strange error and I do not know what to do, because I'm new to PHP. The code works on other websites.
I already tried it on other websites like apple.com, and the strange thing is that when I call this function on the 'normal' post page it works; the error only appears when I call it on the '/embed' URL.
This is my PHP Code:
<?php
if (isset($_GET['url'])) {
    $filename = $_GET['url'];
    $file = file_get_contents($filename);
    $dom = new DOMDocument;
    libxml_use_internal_errors(true);
    $dom->loadHTML($file);
    libxml_use_internal_errors(false);
    $bodies = $dom->getElementsByTagName('body');
    assert($bodies->length === 1);
    $body = $bodies->item(0);
    // Remove the body's children one at a time; always removing the first
    // child avoids skipping nodes while the live node list reindexes itself
    while ($body->hasChildNodes()) {
        $body->removeChild($body->firstChild);
    }
    $stringbody = $dom->saveHTML($body);
    echo $stringbody;
}
?>
I call the API like this:
https://api.com/get-website-body.php?url=http://instagr.am/p/BoLVWplBVFb/embed
My goal is to get the body of the website, like it is when I call this code on the https://apple.com URL for example.
You can fetch the URL directly if you use cURL, and it's faster than file_get_contents(). Here is the cURL code; it works with different URLs and scrapes the body content alone.
if (isset($_GET['url'])) {
    // $website_url = 'https://www.instagram.com/instagram/?__a=1';
    // $website_url = 'https://apple.com';
    // $website_url = $_GET['url'];
    $website_url = 'http://instagr.am/p/BoLVWplBVFb/embed';
    $curl = curl_init();
    //curl_setopt($curl, CURLOPT_SSL_VERIFYPEER, FALSE);
    curl_setopt($curl, CURLOPT_HEADER, false);
    curl_setopt($curl, CURLOPT_FOLLOWLOCATION, true);
    curl_setopt($curl, CURLOPT_URL, $website_url);
    curl_setopt($curl, CURLOPT_REFERER, $website_url);
    curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($curl, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows NT 6.1; rv:8.0) Gecko/20100101 Firefox/66.0');
    $str = curl_exec($curl);
    curl_close($curl);
    // $json = json_decode($str, true); // only useful for the ?__a=1 JSON endpoint above
    print_r($str); // the whole page, as-is
    // Now take the body part alone and work with it as you wish
    $dom = new DOMDocument;
    libxml_use_internal_errors(true);
    $dom->loadHTML($str);
    libxml_use_internal_errors(false);
    $bodies = $dom->getElementsByTagName('body');
    foreach ($bodies as $body) {
        echo $dom->saveHTML($body); // the full content of the body
    }
}
NOTE: With this approach you no longer need to go through https://api.com/get-website-body.php?url=....
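If you want to keep the API endpoint but have it return only the body, a minimal sketch is to wrap the fetch and the body extraction in one function (fetchBodyHTML() is a hypothetical name, not an existing API):

function fetchBodyHTML($url) {
    // Fetch the page with cURL
    $curl = curl_init($url);
    curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($curl, CURLOPT_FOLLOWLOCATION, true);
    $html = curl_exec($curl);
    curl_close($curl);
    if ($html === false) {
        return null;
    }
    // Parse it and serialize just the <body> element
    $dom = new DOMDocument;
    libxml_use_internal_errors(true);
    $dom->loadHTML($html);
    libxml_use_internal_errors(false);
    $body = $dom->getElementsByTagName('body')->item(0);
    return $body ? $dom->saveHTML($body) : null;
}

echo fetchBodyHTML($_GET['url']);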

extract specific data from webpage using php

I want to create a PHP script that alerts me when a new notice is published on my work website, at the following page URL:
http://www.mahapwd.com/nit/ueviewnotice.asp?noticeid=1767
From this page I want a variable for the Date & Time of Meeting (date and time as two separate variables),
plus the Place of Meeting and Published On.
Please help me create a working PHP script.
I tried to write the following script, but it gives too many errors:
<?php
$url1 = "http://www.mahapwd.com/nit/ueIndex.asp?district=12";
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url1);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$data = curl_exec($ch);
// Grab the first href from the index page (the pattern must be properly quoted and closed)
preg_match('/href="(.*?)"/', $data, $urldata);
$url2 = "http://www.mahapwd.com/nit/" . $urldata[1];
curl_setopt($ch, CURLOPT_URL, $url2);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$data2 = curl_exec($ch);
// Match against the notice page ($data2), not the index page ($data)
preg_match('/Published On:<\/b>(.*?)<\/font>/', $data2, $pubDt);
$PubDate = $pubDt[1];
preg_match('/Time of Meeting:<\/b>(.*?)&nbsp;/', $data2, $MtDt);
$MeetDate = $MtDt[1];
preg_match('/Time of Meeting:<\/b>' . preg_quote($MtDt[1], '/') . '&nbsp;(.*?)<\/font>/', $data2, $MtTime);
$MeetTime = $MtTime[1];
preg_match('/Place of Meeting:<\/b>(.*?)<\/font>/', $data2, $place);
$PlaceOfMeeting = $place[1];
?>
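Regular expressions over HTML are fragile; if the notice page is reasonably well-formed, a DOM/XPath version may hold up better. The paths below are assumptions about the page's markup (the labels sitting in <b> tags), so adjust them after inspecting the real page:

$dom = new DOMDocument;
libxml_use_internal_errors(true);
$dom->loadHTML($data2);
libxml_use_internal_errors(false);
$xpath = new DOMXPath($dom);
// For each label, grab the first text node that follows its <b> tag
foreach (array('Published On:', 'Time of Meeting:', 'Place of Meeting:') as $label) {
    $node = $xpath->query("//b[contains(., '$label')]/following-sibling::text()[1]")->item(0);
    echo $label, ' ', $node ? trim($node->nodeValue) : '(not found)', "\n";
}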
Hello, I have written some simple code for you. You can download simple_html_dom.php from http://simplehtmldom.sourceforge.net/
require_once "simple_html_dom.php";
$url = 'http://www.mahapwd.com/nit/ueviewnotice.asp?noticeid=1767';
// Parse the page
$html1 = file_get_html($url);
if (!$html1) {
    echo "no content";
} else {
    // Find all tables in the parsed HTML; the third one is the one you need
    $element1 = $html1->find('table');
    $input = $element1[2];
    // Now you can select cells from it
    foreach ($input->find('td') as $element) {
        // In here you can read each cell, save it to the database, then check it
    }
}
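For example, inside that loop you could read each cell's text via simple_html_dom's plaintext property and compare it with what you saved on the previous run (the database part is up to you):

foreach ($input->find('td') as $element) {
    $cellText = trim($element->plaintext);
    if ($cellText !== '') {
        echo $cellText, "\n"; // save this and diff it against the last run to detect new notices
    }
}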

Font or Unicode issue on Scraping [duplicate]

This question already has answers here:
PHP DOMDocument failing to handle utf-8 characters (☆)
(3 answers)
Closed 7 years ago.
I am trying to scrape info from a site.
The site has text like this:
127 East Zhongshan No 2 Rd; 中山东二路127号
But when I try to scrape it and echo it, it shows:
127 East Zhongshan No 2 Rd; 中山ä¸äºè·¯127å·
I also tried UTF-8.
Here is my PHP code; please help me solve this problem.
function GrabPage($site){
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_USERAGENT, $_SERVER['HTTP_USER_AGENT']);
    curl_setopt($ch, CURLOPT_TIMEOUT, 40);
    curl_setopt($ch, CURLOPT_COOKIEFILE, "cookie.txt");
    curl_setopt($ch, CURLOPT_URL, $site);
    // With RETURNTRANSFER set, curl_exec() returns the page itself;
    // grab it before closing the handle (statements after "return" never run)
    $data = curl_exec($ch);
    curl_close($ch);
    return $data;
}
$GrabData = GrabPage($site);
$dom = new DOMDocument();
@$dom->loadHTML($GrabData);
$xpath = new DOMXpath($dom);
$mainElements = $xpath->query("//div[@class='col--one-whole mv--col--one-half wv--col--one-whole'][1]/dl/dt");
foreach ($mainElements as $Names2) {
    $Name2 = $Names2->nodeValue;
    echo "$Name2";
}
First off, you need to set the charset before any output, at the top of the PHP file:
header('Content-Type: text/html; charset=utf-8');
Then convert the HTML markup you fetched with mb_convert_encoding() when loading it:
@$dom->loadHTML(mb_convert_encoding($GrabData, 'HTML-ENTITIES', 'UTF-8'));
The first thing is to check whether the captured HTML source is properly encoded. If it is, try:
utf8_decode($Name2)
This should get your string ready for printing as well as saving.
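Another common workaround, assuming the source page really is UTF-8, is to hint the encoding to libxml directly so no re-encoding is needed:

$dom = new DOMDocument();
// The prepended XML declaration tells libxml to parse the document as UTF-8
@$dom->loadHTML('<?xml encoding="utf-8"?>' . $GrabData);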

How do I extract text data from a web page? [duplicate]

This question already has answers here:
How do you parse and process HTML/XML in PHP?
(31 answers)
Closed 9 years ago.
Okay, so I have the following function that grabs the web page I need:
function login2($url2) {
    // Truncate the cookie file so each run starts with a clean session
    $fp = fopen("cookies.txt", "w");
    fclose($fp);
    $login2 = curl_init();
    curl_setopt($login2, CURLOPT_COOKIEJAR, "cookies.txt");
    curl_setopt($login2, CURLOPT_COOKIEFILE, "cookies.txt");
    curl_setopt($login2, CURLOPT_TIMEOUT, 40000); // note: CURLOPT_TIMEOUT is in seconds
    curl_setopt($login2, CURLOPT_RETURNTRANSFER, TRUE);
    curl_setopt($login2, CURLOPT_URL, $url2);
    curl_setopt($login2, CURLOPT_USERAGENT, $_SERVER['HTTP_USER_AGENT']);
    curl_setopt($login2, CURLOPT_FOLLOWLOCATION, false);
    [...]
I then issue this to use the function:
echo login2("https://example.com/clue/holes.aspx");
This echoes the page I am requesting but I only want it to echo a specific piece of data from the HTML source. Here's the specific markup:
<h4>
<label id="cooling percent" for="symbol">*</label>
4.50
</h4>
The only piece of information I want is the figure, which in this specific example is 4.50.
So how can I make my cURL code grab and echo just that figure instead of the entire page?
You can solve this with XPath:
$html = login2('https://example.com/clue/holes.aspx');
$dom = new DOMDocument();
@$dom->loadHTML($html);
$xpath = new DOMXPath($dom);
$value = $xpath->query('//label[@id="ctl00_ctl00_PageContainer_MyAccountContainer_symPound"]/following-sibling::text()')->item(0)->nodeValue;
echo $value;
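The id in that query presumably comes from the real page source. With the sample markup shown in the question you would target the label's for attribute instead, and the figure usually needs a trim():

// Hypothetical variant matched to the sample markup above
$value = $xpath->query('//label[@for="symbol"]/following-sibling::text()[1]')->item(0)->nodeValue;
echo trim($value); // "4.50"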

how to get a list of links in a webpage in PHP? [duplicate]

This question already has answers here:
Closed 11 years ago.
Possible Duplicate:
Parse Website for URLs
How do I get all the links in a webpage using PHP?
I need to get a list of the links, e.g.:
<a href="http://www.google.com">Google</a>
I want to fetch the href (http://www.google.com) and the text (Google).
The situation is: I'm building a crawler and I want it to get all the links, which will end up in a database table.
There are a couple of ways to do this, but the way I would approach it is something like the following.
Use cURL to fetch the page:
// $target_url has the url to be fetched, e.g. "http://www.website.com"
// $userAgent should be set to a friendly agent, sneaky but hey...
$userAgent = 'Googlebot/2.1 (http://www.googlebot.com/bot.html)';
$ch = curl_init();
curl_setopt($ch, CURLOPT_USERAGENT, $userAgent);
curl_setopt($ch, CURLOPT_URL, $target_url);
curl_setopt($ch, CURLOPT_FAILONERROR, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($ch, CURLOPT_AUTOREFERER, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_TIMEOUT, 10);
$html = curl_exec($ch);
if (!$html) {
    echo "<br />cURL error number: " . curl_errno($ch);
    echo "<br />cURL error: " . curl_error($ch);
    exit;
}
If all goes well, page content is now all in $html.
Let's move on and load the page in a DOM Object:
$dom = new DOMDocument();
@$dom->loadHTML($html);
So far so good, XPath to the rescue to scrape the links out of the DOM object:
$xpath = new DOMXPath($dom);
$hrefs = $xpath->evaluate("/html/body//a");
Loop through the result and get the links:
for ($i = 0; $i < $hrefs->length; $i++) {
    $href = $hrefs->item($i);
    $link = $href->getAttribute('href');
    $text = $href->nodeValue;
    // Do what you want with the link, e.g. print it out:
    echo $text , ' -> ' , $link;
    // Or save it in an array for later processing:
    $links[$i]['href'] = $link;
    $links[$i]['text'] = $text;
}
$hrefs is an object of type DOMNodeList and item() returns a DOMNode object for the specified index. So basically we’ve got a loop that retrieves each link as a DOMNode object.
This should pretty much do it for you.
The only part I am not 100% sure of is what happens when a link wraps an image instead of plain text; you would need to test for and filter out those cases.
Hope this gives you an idea of how to scrape links, happy coding.
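One thing a crawler runs into immediately is relative hrefs (/about, news.html). A minimal sketch of normalizing them before storage, assuming $target_url is the page they came from (absolutizeHref() is a hypothetical helper, and it does not handle ../ segments or query-only links):

function absolutizeHref($href, $baseUrl) {
    // Already absolute? Leave it alone
    if (parse_url($href, PHP_URL_SCHEME) !== null) {
        return $href;
    }
    $base = parse_url($baseUrl);
    $origin = $base['scheme'] . '://' . $base['host'];
    if ($href !== '' && $href[0] === '/') {
        return $origin . $href; // root-relative link
    }
    // Path-relative link: resolve against the directory of the base path
    $dir = rtrim(dirname(isset($base['path']) ? $base['path'] : '/'), '/');
    return $origin . $dir . '/' . $href;
}

echo absolutizeHref('/about', 'http://www.website.com/blog/post'); // http://www.website.com/about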
