I'm trying to extract the Open Graph properties from a link, like Facebook does, but the page takes several extra seconds to load with just a couple of links. Is there any way to speed it up, for example by using a cache?
libxml_use_internal_errors(true);
$c = file_get_contents('https://link-here');
$d = new DOMDocument();
$d->loadHTML($c);
$xp = new DOMXPath($d);
foreach ($xp->query("//meta[@property='og:title']") as $el) {
    $title = $el->getAttribute("content");
}
foreach ($xp->query("//meta[@property='og:description']") as $el) {
    $content = $el->getAttribute("content");
}
foreach ($xp->query("//meta[@property='og:image']") as $el) {
    $image = $el->getAttribute("content");
}
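One straightforward speed-up is to cache the fetched Open Graph data so each URL is only downloaded and parsed once. Here is a minimal file-based sketch, assuming a writable cache/ directory; the one-hour TTL, the JSON format, and the getOpenGraphData() name are all arbitrary choices:

// Fetch OG data for a URL, serving repeat requests from a local file cache.
function getOpenGraphData($url, $ttl = 3600)
{
    $cacheFile = 'cache/' . md5($url) . '.json'; // assumes cache/ exists and is writable

    // Serve from cache while it is still fresh.
    if (is_file($cacheFile) && (time() - filemtime($cacheFile)) < $ttl) {
        return json_decode(file_get_contents($cacheFile), true);
    }

    libxml_use_internal_errors(true);
    $doc = new DOMDocument();
    $doc->loadHTML(file_get_contents($url));
    $xp = new DOMXPath($doc);

    $data = array();
    foreach (array('og:title', 'og:description', 'og:image') as $prop) {
        foreach ($xp->query("//meta[@property='$prop']") as $el) {
            $data[$prop] = $el->getAttribute('content');
        }
    }

    file_put_contents($cacheFile, json_encode($data));
    return $data;
}

With this, only the first request for a given link pays the download-and-parse cost; subsequent page loads read a small local file instead.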
I have this script to extract data from multiple pages of the same website. There are some 120 pages.
Here is the code I'm using for a single page:
$html = file_get_contents('https://www.example.com/product?page=1');
$dom = new DOMDocument;
@$dom->loadHTML($html);
$links = $dom->getElementsByTagName('div');
foreach ($links as $link) {
    file_put_contents('products.txt', $link->getAttribute('data-product-name') . PHP_EOL, FILE_APPEND);
}
How can I do it for multiple pages? The links are incremental: the next page will be https://www.example.com/product?page=2, and so on. How can I do it without creating a different file for each link?
What about this:
function extractContent($page)
{
    $html = file_get_contents('https://www.example.com/product?page=' . $page);
    $dom = new DOMDocument;
    @$dom->loadHTML($html);
    $links = $dom->getElementsByTagName('div');
    foreach ($links as $link) {
        // skip empty attributes
        if (empty($link->getAttribute('data-product-name'))) {
            continue;
        }
        file_put_contents('products.txt', $link->getAttribute('data-product-name') . PHP_EOL, FILE_APPEND);
    }
}
for ($i = 1; $i <= 120; $i++) {
    extractContent($i);
}
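If the repeated open/append/close of products.txt on every match matters for speed, a variant that keeps one file handle open for the whole run might look like this (a sketch, reusing the same URL pattern and attribute name as above):

$fh = fopen('products.txt', 'a'); // open the output file once for the whole run
for ($i = 1; $i <= 120; $i++) {
    $html = file_get_contents('https://www.example.com/product?page=' . $i);
    if ($html === false) {
        continue; // skip pages that fail to download
    }
    $dom = new DOMDocument;
    @$dom->loadHTML($html);
    foreach ($dom->getElementsByTagName('div') as $div) {
        $name = $div->getAttribute('data-product-name');
        if ($name !== '') {
            fwrite($fh, $name . PHP_EOL);
        }
    }
}
fclose($fh);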
I'm trying to get data from HTML using DOM. I can get some data, but can't figure out how to get the rest. Here is an image highlighting the data I want.
http://i.imgur.com/Es51s5s.png
Here is the code itself:
http://pastebin.com/Re8qEivv
and here is my PHP code:
$html = file_get_contents('result.html');
$dom = new DOMDocument;
$dom->loadHTML($html);
$tr = $dom->getElementsByTagName('tr');
foreach ($tr as $row) {
    $td = $row->getElementsByTagName('td');
    $td1 = $td->item(1);
    $td2 = $td->item(2);
    foreach ($td1->childNodes as $node) {
        $title = $node->textContent;
    }
    foreach ($td2->childNodes as $node) {
        $type = $node->textContent;
    }
}
Figured it out:
$html = file_get_contents('result.html');
$dom = new DOMDocument;
$dom->loadHTML($html);
$tr = $dom->getElementsByTagName('tr');
foreach ($tr as $row) {
    $td = $row->getElementsByTagName('td');
    $td1 = $td->item(1);
    $td2 = $td->item(2);
    $title = $td1->childNodes->item(0)->textContent;
    $firstURL = $td1->getElementsByTagName('a')->item(0)->getAttribute('href');
    $type = $td2->childNodes->item(0)->textContent;
    $imageURL = $td2->getElementsByTagName('img')->item(0)->getAttribute('src');
}
I have used the following class:
http://sourceforge.net/projects/simplehtmldom/
It is a very simple and easy-to-use class. You can use
$html->find('#RosterReport > tbody', 0);
to find a specific table, and
$html->find('tr')
$html->find('td')
to find table rows or columns.
Note: $html is the variable holding the full HTML DOM content.
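Put together, a minimal sketch of that approach; the URL is hypothetical, the #RosterReport selector is taken from above, and file_get_html() is the convenience loader that ships with simplehtmldom:

include 'simple_html_dom.php';

// file_get_html() fetches a page and returns a simple_html_dom object.
$html = file_get_html('http://www.example.com/roster.html'); // hypothetical URL

// Grab the specific table, then walk its rows and cells.
$table = $html->find('#RosterReport > tbody', 0);
if ($table) {
    foreach ($table->find('tr') as $row) {
        foreach ($row->find('td') as $cell) {
            echo $cell->plaintext, "\t";
        }
        echo PHP_EOL;
    }
}

// Free the object model when done to limit memory use.
$html->clear();
unset($html);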
I'm trying to extract two elements using PHP cURL and XPath.
So far I have each element in its own foreach, but I would like to get them at the same time:
@$dom->loadHTML($html);
$xpath = new DOMXPath($dom);
$elements = $xpath->evaluate("//p[@class='row']/a/@href");
//$elements = $xpath->query("//p[@class='row']/a");
foreach ($elements as $element) {
    $url = $element->nodeValue;
    //$title = $element->nodeValue;
}
When I echo outside the foreach I only get one element; when it's echoed inside the foreach I get all of them.
My question is: how can I get both at the same time (url and title), and what's the best way to insert them into MySQL using PDO?
Thank you.
There is no need, in this case, to use XPath twice. You could do one query and navigate to the associated other node(s).
For example, find all of the hrefs that you are interested in and get their ownerElement's (the <a>) node value.
$hrefs = $xpath->query("//p[@class='row']/a/@href");
foreach ($hrefs as $href) {
    $url = $href->value;
    $title = $href->ownerElement->nodeValue;
    // Insert into db here
}
Or, find all of the <a>s that you are interested in and get their href attributes.
$anchors = $xpath->query("//p[@class='row']/a[@href]");
foreach ($anchors as $anchor) {
    $url = $anchor->getAttribute("href");
    $title = $anchor->nodeValue;
    // Insert into db here
}
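For the PDO half of the question, the usual pattern is to prepare one INSERT statement and execute it once per row. A sketch, reusing $anchors from the snippet above and assuming a links table with url and title columns (the table, column, and credential names are all hypothetical):

$pdo = new PDO('mysql:host=localhost;dbname=mydb;charset=utf8mb4', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// Prepare once, execute once per scraped row.
$stmt = $pdo->prepare('INSERT INTO links (url, title) VALUES (:url, :title)');

foreach ($anchors as $anchor) {
    $stmt->execute(array(
        ':url'   => $anchor->getAttribute('href'),
        ':title' => $anchor->nodeValue,
    ));
}

Prepared statements with bound parameters also keep the scraped values safely escaped, which matters when inserting arbitrary page content.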
You're overwriting $url on each iteration. Maybe use an array?
@$dom->loadHTML($html);
$xpath = new DOMXPath($dom);
$elements = $xpath->evaluate("//p[@class='row']/a/@href");
//$elements = $xpath->query("//p[@class='row']/a");
$urls = array();
foreach ($elements as $element) {
    array_push($urls, $element->nodeValue);
    //$title = $element->nodeValue;
}
I'm following a simplified version of the scraping tutorial by NetTuts here, which basically finds all divs with class=preview
http://net.tutsplus.com/tutorials/php/html-parsing-and-screen-scraping-with-the-simple-html-dom-library/comment-page-1/#comments
This is my code. The problem is that when I count $items I get only 1, so it's getting only the first div with class=preview, not all of them.
$articles = array();
$html = new simple_html_dom();
$html->load_file('http://net.tutsplus.com/page/76/');
$items = $html->find('div[class=preview]');
echo "count: " . count($items);
Try using DOMDocument and DOMXPath:
$file = file_get_contents('http://net.tutsplus.com/page/76/');
$dom = new DOMDocument();
@$dom->loadHTML($file);
$domx = new DOMXPath($dom);
$nodelist = $domx->evaluate("//div[@class='preview']");
foreach ($nodelist as $node) {
    print $node->nodeValue;
}
What are the advantages and disadvantages of the following libraries?
PHP Simple HTML DOM Parser
QP
phpQuery
Of the above, I've used QP, which failed to parse invalid HTML, and Simple HTML DOM Parser, which does a good job but tends to leak memory because of its object model. You can keep that under control by calling $object->clear(); unset($object); when you don't need an object anymore.
Are there any more scrapers? What are your experiences with them? I'm going to make this a community wiki; maybe we'll build a useful list of libraries that are handy for scraping.
I did some tests based on Byron's answer:
<?php
include("lib/simplehtmldom/simple_html_dom.php");
include("lib/phpQuery/phpQuery/phpQuery.php");

echo "<pre>";

$html = file_get_contents("http://stackoverflow.com/search?q=favorite+programmer+cartoon");
$data['pq'] = $data['dom'] = $data['simple_dom'] = array();

$timer_start = microtime(true);
$dom = new DOMDocument();
@$dom->loadHTML($html);
$x = new DOMXPath($dom);
foreach ($x->query("//a") as $node) {
    $data['dom'][] = $node->getAttribute("href");
}
foreach ($x->query("//img") as $node) {
    $data['dom'][] = $node->getAttribute("src");
}
foreach ($x->query("//input") as $node) {
    $data['dom'][] = $node->getAttribute("name");
}
$dom_time = microtime(true) - $timer_start;
echo "dom: \t\t $dom_time . Got " . count($data['dom']) . " items \n";

$timer_start = microtime(true);
$doc = phpQuery::newDocument($html);
foreach ($doc->find("a") as $node) {
    $data['pq'][] = $node->href;
}
foreach ($doc->find("img") as $node) {
    $data['pq'][] = $node->src;
}
foreach ($doc->find("input") as $node) {
    $data['pq'][] = $node->name;
}
$time = microtime(true) - $timer_start;
echo "PQ: \t\t $time . Got " . count($data['pq']) . " items \n";

$timer_start = microtime(true);
$simple_dom = new simple_html_dom();
$simple_dom->load($html);
foreach ($simple_dom->find("a") as $node) {
    $data['simple_dom'][] = $node->href;
}
foreach ($simple_dom->find("img") as $node) {
    $data['simple_dom'][] = $node->src;
}
foreach ($simple_dom->find("input") as $node) {
    $data['simple_dom'][] = $node->name;
}
$simple_dom_time = microtime(true) - $timer_start;
echo "simple_dom: \t $simple_dom_time . Got " . count($data['simple_dom']) . " items \n";

echo "</pre>";
and got
dom: 0.00359296798706 . Got 115 items
PQ: 0.010568857193 . Got 115 items
simple_dom: 0.0770139694214 . Got 115 items
I used to use Simple HTML DOM exclusively until some bright SO'ers showed me the light, hallelujah.
Just use the built-in DOM functions. They are written in C and are part of the PHP core, so they are faster and more efficient than any third-party solution. With Firebug, getting an XPath query is muy simple. This simple change has made my PHP-based scrapers run faster while saving my precious time.
My scrapers used to take ~60 megabytes to scrape 10 sites asynchronously with cURL. That was even with the Simple HTML DOM memory fix you mentioned.
Now my PHP processes never go above 8 megabytes.
Highly recommended.
EDIT
Okay, I did some benchmarks. The built-in DOM is at least an order of magnitude faster.
Built in php DOM: 0.007061
Simple html DOM: 0.117781
<?php
include("../lib/simple_html_dom.php");

$html = file_get_contents("http://stackoverflow.com/search?q=favorite+programmer+cartoon");
$data['dom'] = $data['simple_dom'] = array();

$timer_start = microtime(true);
$dom = new DOMDocument();
@$dom->loadHTML($html);
$x = new DOMXPath($dom);
foreach ($x->query("//a") as $node) {
    $data['dom'][] = $node->getAttribute("href");
}
foreach ($x->query("//img") as $node) {
    $data['dom'][] = $node->getAttribute("src");
}
foreach ($x->query("//input") as $node) {
    $data['dom'][] = $node->getAttribute("name");
}
$dom_time = microtime(true) - $timer_start;
echo "built in php DOM : $dom_time\n";

$timer_start = microtime(true);
$simple_dom = new simple_html_dom();
$simple_dom->load($html);
foreach ($simple_dom->find("a") as $node) {
    $data['simple_dom'][] = $node->href;
}
foreach ($simple_dom->find("img") as $node) {
    $data['simple_dom'][] = $node->src;
}
foreach ($simple_dom->find("input") as $node) {
    $data['simple_dom'][] = $node->name;
}
$simple_dom_time = microtime(true) - $timer_start;
echo "simple html DOM : $simple_dom_time\n";