I need to periodically extract all the links to news articles from the NY Times RSS feed and store them in a MySQL database. How do I go about doing this? Can I use a regular expression (in PHP) to match the links, or is there some other alternative? Thanks in advance.
UPDATE 2: I tested the code below and had to change
$links = $dom->getElementsByTagName('a');
to:
$links = $dom->getElementsByTagName('link');
It then output the links successfully. Good luck!
UPDATE: Looks like there is a complete answer here: How do you parse and process HTML/XML in PHP.
I developed a solution so that I could recurse through all the links on my website. I've removed the code that verified the domain was the same on each recursion (since the question didn't ask for this), but you can easily add it back in if you need it.
Using DOMDocument, you can parse an HTML or XML document to read the links. It is more reliable than a regex. Try something like this:
<?php
//300 seconds = 5 minutes - or however long you need so php won't time out
ini_set('max_execution_time', 300);
// store the links in an array that is passed by reference,
// so the recursion can accumulate into it.
$alinks = array();
// set the link to whatever you are reading
$link = "http://rss.nytimes.com/services/xml/rss/nyt/HomePage.xml";
// do the search
linksearch($link, $alinks);
// show results
var_dump($alinks);
function linksearch($url, &$alinks) {
    // use $queue if you want this fn to be recursive
    $queue = array();
    echo "<br>Searching: $url";
    $href = array();
    // Load the page
    $html = file_get_contents($url);
    // Create a new DOM document
    $dom = new DOMDocument;
    // Parse the HTML. The @ is used to suppress any parsing errors
    // that will be thrown if the $html string isn't valid (X)HTML.
    @$dom->loadHTML($html);
    // Get all link elements. You could also use any other tag name here,
    // like 'img' or 'table', to extract other tags.
    $links = $dom->getElementsByTagName('link');
    // Iterate over the extracted links and collect their URLs
    foreach ($links as $link) {
        // Extract the "href" attribute.
        $href[] = $link->getAttribute('href');
    }
    foreach (array_unique($href) as $link) {
        // add to the list of links found
        $queue[] = $link;
    }
    // remove duplicates
    $queue = array_unique($queue);
    // get links that haven't yet been processed
    $queue = array_diff($queue, $alinks);
    // update the array passed by reference with the new links found
    $alinks = array_merge($alinks, $queue);
    if (count($queue) > 0) {
        foreach ($queue as $link) {
            // recursive search - uncomment if you use this
            // remember to check that the domain is the same as the starting one
            // linksearch($link, $alinks);
        }
    }
}
DOM+XPath allows you to fetch nodes using expressions.
RSS Item Links
To fetch the RSS link elements (the link for each item):
$xml = file_get_contents($url);
$document = new DOMDocument();
$document->loadXml($xml);
$xpath = new DOMXPath($document);
$expression = '//channel/item/link';
foreach ($xpath->evaluate($expression) as $link) {
    var_dump($link->textContent);
}
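The original question also asked about writing these links to MySQL periodically. Here is a minimal sketch using PDO; the table name nyt_links, its single url column, and the credentials are assumptions for illustration, not from the original:
$pdo = new PDO('mysql:host=localhost;dbname=news', 'user', 'password');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
// hypothetical table: CREATE TABLE nyt_links (url VARCHAR(255) PRIMARY KEY)
$insert = $pdo->prepare('INSERT IGNORE INTO nyt_links (url) VALUES (?)');
foreach ($xpath->evaluate('//channel/item/link') as $link) {
    $insert->execute(array($link->textContent));
}
Run the script from cron (or another scheduler) to cover the "periodically" part; INSERT IGNORE relies on the primary key to skip links that are already stored.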
Atom Links
The atom:link elements have different semantics: they are part of the Atom namespace and are used to describe relations. The NYT uses the standout relation to mark featured stories. To fetch the Atom links, you need to register a prefix for the namespace. Attributes are nodes, too, so you can fetch them directly:
$xml = file_get_contents($url);
$document = new DOMDocument();
$document->loadXml($xml);
$xpath = new DOMXPath($document);
$xpath->registerNamespace('a', 'http://www.w3.org/2005/Atom');
$expression = '//channel/item/a:link[@rel="standout"]/@href';
foreach ($xpath->evaluate($expression) as $link) {
    var_dump($link->value);
}
There are other relations, like prev and next.
HTML Links (a elements)
The description elements contain HTML fragments. To extract the links from them you have to load the HTML into a separate DOM document.
$xml = file_get_contents($url);
$document = new DOMDocument();
$document->loadXml($xml);
$xpath = new DOMXPath($document);
$xpath->registerNamespace('a', 'http://www.w3.org/2005/Atom');
$expression = '//channel/item/description';
foreach ($xpath->evaluate($expression) as $description) {
    $fragment = new DOMDocument();
    $fragment->loadHtml($description->textContent);
    $fragmentXpath = new DOMXpath($fragment);
    foreach ($fragmentXpath->evaluate('//a[@href]/@href') as $link) {
        var_dump($link->value);
    }
}
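Real-world description fragments are often not perfectly valid HTML, so loadHtml() may emit warnings. As an optional variation (not part of the original answer), you can route the errors through libxml instead of the @ operator; inside the loop, the fragment could be loaded like this:
libxml_use_internal_errors(true);
$fragment = new DOMDocument();
$fragment->loadHtml($description->textContent);
libxml_clear_errors();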
Here is the code snippet being used:
$urlContent = file_get_contents('http://www.techeblog.com/');
$dom = new DOMDocument();
@$dom->loadHTML($urlContent);
$domPath = new DOMXpath($dom);
$linkList = $domPath->evaluate("/html/body/a/img");
foreach ($linkList as $link) {
    echo $link->getAttribute("src")."<br />";
}
I need to extract all the links whose child node is an image tag.
Your XPath expression will only return image tags that are inside links that are direct children of the body tag. If you want all link tags that contain images anywhere in the document, use the expression //a[img].
That said, you may want to be more specific about which images you pull. This expression will limit the results to links containing images inside the blog entries: //div[@class="entry"]//a[img].
Here is a great XPath cheat sheet.
<?php
$urlContent = file_get_contents('http://www.techeblog.com/');
$dom = new DOMDocument();
@$dom->loadHTML($urlContent);
$domPath = new DOMXpath($dom);
$linkList = $domPath->evaluate('//div[@class="entry"]//a[img]');
foreach ($linkList as $link) {
    echo $link->getAttribute("href").PHP_EOL;
}
Also, your echo is looking for an attribute called src, which will not be present in the links.
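If you also want the image source next to each link target, one hedged sketch (building on the $linkList from the code above) is to read the first img child of each matched anchor:
foreach ($linkList as $link) {
    $img = $link->getElementsByTagName('img')->item(0);
    if ($img !== null) {
        echo $link->getAttribute('href').' => '.$img->getAttribute('src').PHP_EOL;
    }
}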
I am using YQL (https://developer.yahoo.com/yql/), but it is rate-limited: 100,000 calls per day per application (identified by your Access Key), and per-IP limits of 2,000 calls per hour for /v1/public/ and 20,000 calls per hour for /v1/yql/.
I need unlimited queries. How can I extract HTML using XPath in PHP, the way YQL does?
$homepage = file_get_contents('https://google.com');
$dom = new DOMDocument();
$dom->loadHTML($homepage);
$xpath = new DOMXPath($dom);
$result = '';
foreach ($xpath->evaluate('div') as $childNode) {
    $result .= $dom->saveHtml($childNode);
}
var_dump($result);
I just found this example on the web, but it is not working.
Edit
$homepage = file_get_contents('https://google.com');
$dom = new DOMDocument();
$dom->loadHTML($homepage);
$xpath = new DOMXPath($dom);
$result = '';
foreach ($xpath->query('//a[@class="touch"]') as $childNode) {
    // If the output is <a class="touch" href="url"><span alt="demo1" title="title2">Content</span> some</a>,
    // how do I get the href/url and the child span's alt/title attributes?
    $result .= $dom->saveHtml($childNode);
}
var_dump($result);
If possible, how can I extract the full HTML to JSON/XML with PHP, like YQL does?
There are several ways you can do further processing; one is by doing another query. To get the span node, you can use this query:
$span = $xpath->query('./span', $childNode); // all spans
$span->item(0)->attributes->getNamedItem("alt")->nodeValue; // first span
What you are doing is searching under the given node.
P.S. Don't use the attributes property as an array (attributes["attributeName"]), because it doesn't work in some versions of PHP.
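Putting it together, a sketch that reads the href from each matching anchor and the alt/title from its first child span, using getAttribute() throughout:
foreach ($xpath->query('//a[@class="touch"]') as $a) {
    echo $a->getAttribute('href'), PHP_EOL;
    // search under the current node for its first span
    $span = $xpath->query('./span', $a)->item(0);
    if ($span !== null) {
        echo $span->getAttribute('alt'), ' / ', $span->getAttribute('title'), PHP_EOL;
    }
}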
$url = 'http://www.test.com/';
$dom = new DOMDocument;
@$dom->loadHTMLFile($url);
$links = $dom->getElementsByTagName('a');
foreach ($links as $link) {
    // ... process each link ...
}
I am currently using the above script to capture links on a page; however, I found that there are always duplicate links. On the page, there is a picture which is linked, followed by a text link which goes to the same URL. Is there an easy way to capture just the text link, not the image link?
As I was saying, I might take the approach of cleaning up the duplicates in the result set. I'm not sure what you are scraping, but what if a link is only ever used with an image?
You could even count the occurrences.
$url = 'http://www.test.com/';
$dom = new DOMDocument;
@$dom->loadHTMLFile($url);
$links = $dom->getElementsByTagName('a');
$distinctLinks = [];
foreach ($links as $link) {
    // use the href as the key; a DOMElement cannot be an array key
    $href = $link->getAttribute('href');
    $distinctLinks[$href] = isset($distinctLinks[$href]) ? $distinctLinks[$href] + 1 : 1;
}
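If the goal is specifically to keep the text link and drop the image link, a sketch along the same lines: skip any anchor that contains an img child.
$textLinks = array();
foreach ($links as $link) {
    // anchors that wrap an image are skipped
    if ($link->getElementsByTagName('img')->length === 0) {
        $textLinks[] = $link->getAttribute('href');
    }
}
$textLinks = array_unique($textLinks);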
Hi, I am using cURL to get data from a website. I need to get multiple items but cannot get them by tag name or ID. I have managed to put together some code that will get one item by class name, passing it through a loop; I then pass it through another loop to get the text from the element.
I have a few problems here. The first is that there must be a more convenient way of doing this. The second is that I need to get multiple elements and stack them together, i.e. title, description, tags, and a URL link.
# Create a DOM parser object and load the HTML
$dom = new DOMDocument();
$result = $dom->loadHTML($html);
$finder = new DomXPath($dom);
$nodes = $finder->query("//*[contains(concat(' ', normalize-space(@class), ' '), 'classname')]");
$tmp_dom = new DOMDocument();
foreach ($nodes as $node) {
    $tmp_dom->appendChild($tmp_dom->importNode($node, true));
}
$innerHTML = trim($tmp_dom->saveHTML());
$buffdom = new DOMDocument();
$buffdom->loadHTML($innerHTML);
# Iterate over all the <a> tags
foreach ($buffdom->getElementsByTagName('a') as $link) {
    # Show the link text
    echo $link->nodeValue, "<br />", PHP_EOL;
}
I want to stick with PHP only.
I wonder if your problem is in the line:
$nodes = $finder->query("//*[contains(concat(' ', normalize-space(@class), ' '), 'classname')]");
As it stands, this literally looks for nodes that belong to a class named 'classname', where 'classname' is not a variable but the actual name. It looks like you may have copied an example from somewhere - or did you literally name your class that?
I imagine that the data you are looking for may not be in such nodes. If you could post a short piece of the actual HTML you are trying to parse, it should be possible to do a better job of guiding you to a solution.
As an example, I made the following complete code (based on yours, but adding code to open the stackoverflow.com home page, and changing 'classname' to 'question', since there seemed to be a lot of classes with 'question' in the name, so I figured I should get a good harvest). I was not disappointed.
<?php
// create a curl resource
$ch = curl_init();
// set the url
curl_setopt($ch, CURLOPT_URL, "http://stackoverflow.com");
// return the transfer as a string
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
// $output contains the output string
$output = curl_exec($ch);
// close the curl resource to free up system resources
curl_close($ch);
//print_r($output);
$dom = new DOMDocument();
@$dom->loadHTML($output);
$finder = new DomXPath($dom);
$nodes = $finder->query("//*[contains(concat(' ', normalize-space(@class), ' '), 'question')]");
print_r($nodes);
$tmp_dom = new DOMDocument();
foreach ($nodes as $node) {
    $tmp_dom->appendChild($tmp_dom->importNode($node, true));
}
$innerHTML = trim($tmp_dom->saveHTML());
$buffdom = new DOMDocument();
@$buffdom->loadHTML($innerHTML);
# Iterate over all the <a> tags
foreach ($buffdom->getElementsByTagName('a') as $link) {
    # Show the link text
    echo $link->nodeValue, PHP_EOL;
    echo "<br />";
}
?>
This resulted in many, many lines of output. Try it - the page is at http://www.floris.us/SO/scraper.php
(or paste the above code into a page of your own). You were very, very close!
NOTE - this doesn't produce all the output you want; you need to include other properties of the node, not just print out the nodeValue, to get everything. But I figure you can take it from here (again, without actual samples of your HTML, it's impossible for anyone else to get much further than this in helping you...)
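For example, to show the link target alongside the text, the echo in the loop could become something like this (a sketch, not the only way):
foreach ($buffdom->getElementsByTagName('a') as $link) {
    echo $link->getAttribute('href'), ' => ', $link->nodeValue, '<br />', PHP_EOL;
}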
I have a page scraped with curl and am looking to grab all of the links with a certain id. As far as I can tell, the best way to do this is with DOM and XPath. The below code grabs a large number of the URLs, but cuts many of them off and grabs text that is not a URL.
$curl_scraped_page is the page scraped with curl.
$dom = new DOMDocument();
@$dom->loadHTML($curl_scraped_page);
$xpath = new DOMXPath($dom);
$hrefs = $xpath->evaluate("/html/body//a");
Am I on the right track? Do I just need to mess with the "/html/body//a" XPath syntax, or do I need to add more to capture the id attribute?
You can also do it this way, and you'll have only the a tags which have both an id and an href:
$doc = new DOMDocument();
$doc->loadHTML($curl_scraped_page);
$xpath = new DOMXPath($doc);
$hrefs = $xpath->query('//a[@href][@id]');
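query() returns a DOMNodeList, so you can then iterate it to read both attributes, e.g.:
foreach ($hrefs as $a) {
    echo $a->getAttribute('id'), ' => ', $a->getAttribute('href'), PHP_EOL;
}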
$dom = new DOMDocument();
$dom->loadHTML($curl_scraped_page);
$links = $dom->getElementsByTagName('a');
$processed_links = array();
foreach ($links as $link) {
    if ($link->hasAttribute('id') && $link->hasAttribute('href')) {
        $processed_links[$link->getAttribute('id')] = $link->getAttribute('href');
    }
}
This addresses your question.
http://simplehtmldom.sourceforge.net/
include('simple_html_dom.php');
$html = file_get_html('http://www.google.com/');
foreach($html->find('#www-core-css') as $e) echo $e->outertext . '<br>';
I think the easiest way is to combine the following 2 classes to pull information from another website:
Pull info from any HTML tag, contents or tag attribute: http://simplehtmldom.sourceforge.net/
Easy to handle curl, supports POST requests: https://github.com/php-curl-class/php-curl-class
Example:
include('path/to/curl.php');
include('path/to/simple_html_dom.php');
$url = 'http://www.example.com';
$curl = new Curl;
$html = str_get_html($curl->get($url)); //full HTML of website
$linksWithSpecificID = $html->find('a[id=foo]'); //returns array of elements
Check the Simple HTML DOM Parser manual at the link above for how to manipulate the HTML data.
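For instance, each element returned by find() exposes its attributes as properties, so (keeping the author's placeholder selector a[id=foo]) the link targets can be read like this:
foreach ($linksWithSpecificID as $element) {
    echo $element->href, '<br>', PHP_EOL;
}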