Is it possible to use a foreach loop to scrape multiple URLs from an array? I've been trying, but for some reason it only pulls from the first URL in the array and then shows the results.
include_once('../../simple_html_dom.php');

$link = array (
    'http://www.amazon.com/dp/B0038JDEOO/',
    'http://www.amazon.com/dp/B0038JDEM6/',
    'http://www.amazon.com/dp/B004CYX17O/'
);

foreach ($link as $links) {
    function scraping_IMDB($links) {
        // create HTML DOM
        $html = file_get_html($links);

        $values = array();
        foreach($html->find('input') as $element) {
            $values[$element->id=='ASIN'] = $element->value;
        }
        // get title
        $ret['ASIN'] = end($values);
        // get rating
        $ret['Name'] = $html->find('h1[class="parseasinTitle"]', 0)->innertext;
        $ret['Retail'] = $html->find('b[class="priceLarge"]', 0)->innertext;
        // clean up memory
        //$html->clear();
        // unset($html);
        return $ret;
    }
    // -----------------------------------------------------------------------------
    // test it!
    $ret = scraping_IMDB($links);
    foreach($ret as $k=>$v)
        echo '<strong>'.$k.'</strong>'.$v.'<br />';
}
Here is the code, since posting it in a comment didn't work. :) It's very dirty because I just edited one of the examples to play with and see if I could get it to do what I wanted.
include_once('../../simple_html_dom.php');

function scraping_IMDB($links) {
    // create HTML DOM
    $html = file_get_html($links);

    // What is this spaghetti code good for?
    /*
    $values = array();
    foreach($html->find('input') as $element) {
        $values[$element->id=='ASIN'] = $element->value;
    }
    // get title
    $ret['ASIN'] = end($values);
    */

    foreach($html->find('input') as $element) {
        if($element->id == 'ASIN') {
            $ret['ASIN'] = $element->value;
        }
    }
    // Or you could use the following instead of the whole foreach loop above:
    //
    // $ret['ASIN'] = $html->find('input[id="ASIN"]', 0)->value;
    //
    // if the 0 means "return the first match" or something similar.
    // I just had a look at Amazon's source code, and it contains
    // 2 HTML tags with id='ASIN'. If they were following HTML regulations,
    // there should only be ONE element with a specific id.

    // get name and price
    $ret['Name'] = $html->find('h1[class="parseasinTitle"]', 0)->innertext;
    $ret['Retail'] = $html->find('b[class="priceLarge"]', 0)->innertext;
    // clean up memory
    //$html->clear();
    // unset($html);
    return $ret;
}
// -----------------------------------------------------------------------------
// test it!
$links = array (
    'http://www.amazon.com/dp/B0038JDEOO/',
    'http://www.amazon.com/dp/B0038JDEM6/',
    'http://www.amazon.com/dp/B004CYX17O/'
);

foreach ($links as $link) {
    $ret = scraping_IMDB($link);
    foreach($ret as $k=>$v) {
        echo '<strong>'.$k.'</strong>'.$v.'<br />';
    }
}
This should do the trick.
The main problem with the original snippet is that the function was declared inside the foreach loop, so the second iteration stops with a "Cannot redeclare scraping_IMDB()" fatal error; that is why only the first URL's results ever showed up. Declaring the function once and looping over the URLs afterwards fixes it.
I have also renamed the array to 'links' instead of 'link'. It's an array of links, containing link(s), so foreach($link as $links) seemed backwards, and I changed it to foreach($links as $link).
I really need to ask this question, as the answer will help a lot of other people who read this thread. What if ... you used articles like the examples on the Simple HTML DOM site?
    $ret['Name'] = $html->find('h1[class="parseasinTitle"]', 0)->innertext;
    $ret['Retail'] = $html->find('b[class="priceLarge"]', 0)->innertext;
    return $ret;
}

$links = array (
    'http://www.amazon.com/dp/B0038JDEOO/',
    'http://www.amazon.com/dp/B0038JDEM6/',
    'http://www.amazon.com/dp/B004CYX17O/'
);

foreach ($links as $link) {
    $ret = scraping_IMDB($link);
    foreach($ret as $k=>$v) {
        echo '<strong>'.$k.'</strong>'.$v.'<br />';
    }
}
What if it's $articles?
$articles[] = $item;
}
//print_r($articles);
$links = array (
    'http://link1.com',
    'http://link2.com',
    'http://link3.com'
);
What would this area look like?
foreach ($links as $link) {
    $ret = scraping_IMDB($link);
    foreach($ret as $k=>$v) {
        echo '<strong>'.$k.'</strong>'.$v.'<br />';
    }
}
I've seen this kind of multi-link question all over Stack Overflow for the past two years, and I still cannot figure it out. It would be great to get a basic handle on how it maps onto the Simple HTML DOM examples.
Thanks.
First time posting; I'm sure I broke a bunch of rules and didn't do the code section right. I just badly needed to ask this question.
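For what it's worth, here is a rough sketch of how that area could look if the scraping function followed the article-style examples on the Simple HTML DOM site and filled an $articles array. The function name, selectors, and fields below are hypothetical, modelled on those samples; the loop at the bottom is the same pattern as the accepted code above.

function scraping_articles($url) {
    $html = file_get_html($url);
    $articles = array();
    // 'div.article', 'h2' and 'p' are placeholder selectors; use whatever
    // actually matches the pages you are scraping
    foreach ($html->find('div.article') as $article) {
        $item = array();
        $item['title'] = $article->find('h2', 0)->plaintext;
        $item['intro'] = $article->find('p', 0)->plaintext;
        $articles[] = $item;
    }
    // clean up memory
    $html->clear();
    unset($html);
    return $articles;
}

$links = array (
    'http://link1.com',
    'http://link2.com',
    'http://link3.com'
);

foreach ($links as $link) {
    $articles = scraping_articles($link);
    foreach ($articles as $item) {
        foreach ($item as $k => $v) {
            echo '<strong>'.$k.'</strong>'.$v.'<br />';
        }
    }
}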
Related
Now I am trying to write a PHP parser and I don't know why my code returns an empty array. I am using PHP Simple HTML DOM. I know my code isn't perfect, but it's only for testing.
I would appreciate any help.
public function getData() {
    // get urls from urls.txt
    foreach ($this->list_url as $i => $url) {
        // create a DOM object from the HTML page
        $this->html = file_get_html($url);
        // find all elements with class="name", because every product has a name
        $products = $this->html->find(".name");
        foreach ($products as $number => $product) {
            // get the href attribute of the product link
            $href = $this->html->find("div.name a", $number)->attr['href'];
            // create a DOM object from the product page
            $html = file_get_html($href);
            if ($html && is_object($html) && isset($html->nodes)) {
                echo "TRUE - all goodly";
            } else {
                echo "FALSE - all badly";
            }
            // get all elements with class="description"
            // $nodes is empty - WHY? The page does have content and a div.description
            $nodes = $html->find('.description');
            if (count($nodes) > 0) {
                $needle = "Производитель:"; // Russian for "Manufacturer:"
                foreach ($nodes as $short_description) {
                    if (stripos($short_description->plaintext, $needle) !== FALSE) {
                        echo "TRUE";
                        $this->data[] = $short_description->plaintext;
                    } else {
                        echo "FALSE";
                    }
                }
            } else {
                $this->data[] = '';
            }
            $html->clear();
            unset($html);
        }
        $this->html->clear();
        unset($html);
    }
    return $this->data;
}
Hi, you should inspect the element in the browser, use Copy > Copy selector, and pass that selector to the find() method to get the object.
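As a rough illustration of that advice (the selector below is purely hypothetical; replace it with whatever the inspector gives you, trimmed down to the tag/class form that Simple HTML DOM understands):

// hypothetical selector copied from the browser inspector and simplified
$nodes = $html->find('div.product-info div.description');
if (!empty($nodes)) {
    echo $nodes[0]->plaintext;
} else {
    echo "Selector matched nothing - check the page markup again";
}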
I am somewhat new to PHP, and I can't really wrap my head around what I am doing wrong here.
Problem: I am trying to get the href of a certain HTML element from a string of HTML inside an XML object/element from Reddit (if you visit the page, it would be the actual link of the video - not the reddit link, but the external YouTube link or whatever - nothing else).
Here is my code so far (code updated):
Update: Loop-mania! Got all of the hrefs, but am now trying to store them inside a global array to access a random one outside of this function.
function getXMLFeed() {
    echo "<h2>Reddit Items</h2><hr><br><br>";
    //$feedURL = file_get_contents('https://www.reddit.com/r/videos/.xml?limit=200');
    $feedURL = 'https://www.reddit.com/r/videos/.xml?limit=200';
    $xml = simplexml_load_file($feedURL);
    // define each xml entry from reddit as an item
    foreach ($xml->entry as $item) {
        foreach ($item->content as $content) {
            $newContent = (string)$content;
            $html = str_get_html($newContent);
            foreach ($html->find('table') as $table) {
                $links = $table->find('span', 0);
                //echo $links;
                foreach ($links->find('a') as $link) {
                    echo $link->href;
                }
            }
        }
    }
}
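Regarding the update about storing the hrefs and picking a random one outside the function: one possible sketch, reusing the same Simple HTML DOM calls as above, is to have the function return the collected links instead of echoing them and then use array_rand() on the result.

function getVideoLinks() {
    $feedURL = 'https://www.reddit.com/r/videos/.xml?limit=200';
    $xml = simplexml_load_file($feedURL);
    $hrefs = array();
    foreach ($xml->entry as $item) {
        foreach ($item->content as $content) {
            $html = str_get_html((string)$content);
            foreach ($html->find('table') as $table) {
                $span = $table->find('span', 0);
                if (!$span) {
                    continue;
                }
                foreach ($span->find('a') as $link) {
                    $hrefs[] = $link->href;
                }
            }
            $html->clear();
        }
    }
    return $hrefs;
}

$allLinks = getVideoLinks();
if (!empty($allLinks)) {
    // pick one link at random outside the function
    echo $allLinks[array_rand($allLinks)];
}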
XML Code:
http://pasted.co/0bcf49e8
I've also included JSON if it can be done this way; I just preferred XML:
http://pasted.co/f02180db
That is pretty much all of the code. Though, here is another piece I tried to use with DOMDocument (scrapped it).
foreach ($item->content as $content) {
    $dom = new DOMDocument();
    $dom->loadHTML($content);
    $xpath = new DOMXPath($dom);
    $classname = "/html/body/table[1]/tbody/tr/td[2]/span[1]/a";
    foreach ($dom->getElementsByTagName('table') as $node) {
        echo $dom->saveHtml($node), PHP_EOL;
        //$originalURL = $node->getAttribute('href');
    }
    //$html = $dom->saveHTML();
}
I can parse the table fine, but when it comes to getting certain elements' values (nothing has an ID or class), I can only seem to get ALL anchor tags or ALL table rows, etc.
Can anyone point me in the right direction? Let me know if there is anything else I can add here. Thanks!
Added HTML:
I am specifically trying to extract <span>[link]</span> from each table/item.
http://pastebin.com/QXa2i6qz
The following code extracts all the YouTube links from each entry's content.
function extract_youtube_link($xml) {
    $entries = $xml['entry'];
    $videos = [];

    foreach ($entries as $entry) {
        $content = html_entity_decode($entry['content']);
        preg_match_all('/<span><a href="(.*)">\[link\]/', $content, $matches);
        if (!empty($matches[1][0])) {
            $videos[] = array(
                'entry_title'       => $entry['title'],
                'author'            => preg_replace('/\/(.*)\//', '', $entry['author']['name']),
                'author_reddit_url' => $entry['author']['uri'],
                'video_url'         => $matches[1][0]
            );
        }
    }

    return $videos;
}

$xml = simplexml_load_file('reddit.xml');
$xml = json_decode(json_encode($xml), true);
$videos = extract_youtube_link($xml);

foreach ($videos as $video) {
    echo "<p>Entry Title: {$video['entry_title']}</p>";
    echo "<p>Author: {$video['author']}</p>";
    echo "<p>Author URL: {$video['author_reddit_url']}</p>";
    echo "<p>Video URL: {$video['video_url']}</p>";
    echo "<br><br>";
}
The code returns a multidimensional array whose elements contain entry_title, author, author_reddit_url and video_url. Hope it helps!
If you're looking for a specific element, you don't need to parse the whole thing. One way of doing it could be to use the DOMXPath class and query the XML directly. The documentation should guide you through:
http://php.net/manual/es/class.domxpath.php
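For example, a sketch along those lines, assuming $content already holds the decoded HTML of one reddit entry and relying on the "[link]" anchor text visible in the feed, might look like this:

$dom = new DOMDocument();
// suppress warnings caused by the imperfect HTML fragments
libxml_use_internal_errors(true);
$dom->loadHTML($content);
libxml_clear_errors();

$xpath = new DOMXPath($dom);
// grab the anchor whose text is "[link]" - no IDs or classes needed
$nodes = $xpath->query('//span/a[text()="[link]"]');
if ($nodes->length > 0) {
    echo $nodes->item(0)->getAttribute('href');
}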
This is my entire code:
// include the scraper
include('simple_html_dom.php');

// connect to the page for scraping
$html = file_get_html('http://www.niagarafallsreview.ca/news/local');

// make empty arrays
$headlines = array();
$links = array();

// look for 'h1' headings on the page
foreach ($html->find('h1') as $header) {
    $headlines[] = $header->plaintext;
}

// look for 'a' links that start with 'http://www.niagarafallsreview.ca/2016/04/'
foreach ($html->find('a[href^="http://www.niagarafallsreview.ca/2016/04/"]') as $link) {
    $links[] = $link->href;
}

// trim the headlines because the first and last were not needed
$output = array_slice($headlines, 1, -1);

// for each header, output a nice list of the headers
foreach ($output as $headers) {
    echo "<a href='#'>$headers</a>" . "<br />";
}

// make sure the links are unique and no duplicates are found
$result = array_unique($links);

// for each link, output it in a nice list
foreach ($result as $linkk) {
    echo "<a href='$linkk'>$linkk</a>" . "<br />";
}
This code produces the headings in a nice list, and also produces a nice list of the links.
My problem is that I need to combine them: I would like $headers to be the text of the link, and $linkk to be the URL in the href,
like this:
<a href='$linkk'>$headers</a>
I don't know how to do this, as I have two foreach statements. I tried to combine them but was unsuccessful.
Any help will be greatly appreciated.
Thanks.
Try this:
// include the scraper
include('simple_html_dom.php');

// connect to the page for scraping
$html = file_get_html('http://www.niagarafallsreview.ca/news/local');

// make empty arrays
$headlines = array();
$links = array();

// look for 'h1' headings on the page
foreach ($html->find('h1') as $header) {
    $headlines[] = $header->plaintext;
}

// look for 'a' links that start with 'http://www.niagarafallsreview.ca/2016/04/'
foreach ($html->find('a[href^="http://www.niagarafallsreview.ca/2016/04/"]') as $link) {
    $links[] = $link->href;
}

// trim the headlines because the first and last were not needed
$output = array_slice($headlines, 1, -1);

// make sure the links are unique and no duplicates are found
$result = array_unique($links);

// output each link with its matching headline as the link text
foreach ($result as $i => $linkk) {
    $headline = isset($output[$i]) ? $output[$i] : '(empty)';
    echo "<a href='$linkk'>$headline</a>" . "<br />";
}
Here is the foreach you are looking for:
foreach ($output as $i => $headers) {
    $linkk = $result[$i];
    echo "<a href='$linkk'>$headers</a>" . "<br />";
}
This assumes the arrays have the same length and also the correct order.
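If that assumption might not hold, a defensive variant (just a sketch) could reindex both arrays and only pair up as many entries as the shorter one contains:

$headlines = array_values($output); // reindex from 0
$urls = array_values($result);      // array_unique() keeps the old keys, so reindex too
$count = min(count($headlines), count($urls));

for ($i = 0; $i < $count; $i++) {
    echo "<a href='{$urls[$i]}'>{$headlines[$i]}</a>" . "<br />";
}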
I am trying to parse the table shown here into a multi-dimensional PHP array. I am using the following code, but for some reason it's returning an empty array. After searching around on the web, I found this site, which is where I got the parseTable() function from. From reading the comments on that website, I can see that the function works perfectly. So I'm assuming there is something wrong with the way I'm getting the HTML code from file_get_contents(). Any thoughts on what I'm doing wrong?
<?php
$data = file_get_contents('http://flow935.com/playlist/flowhis.HTM');

function parseTable($html)
{
    // Find the table
    preg_match("/<table.*?>.*?<\/[\s]*table>/s", $html, $table_html);

    // Get title for each row
    preg_match_all("/<th.*?>(.*?)<\/[\s]*th>/", $table_html[0], $matches);
    $row_headers = $matches[1];

    // Iterate each row
    preg_match_all("/<tr.*?>(.*?)<\/[\s]*tr>/s", $table_html[0], $matches);

    $table = array();
    foreach ($matches[1] as $row_html) {
        preg_match_all("/<td.*?>(.*?)<\/[\s]*td>/", $row_html, $td_matches);
        $row = array();
        for ($i = 0; $i < count($td_matches[1]); $i++) {
            $td = strip_tags(html_entity_decode($td_matches[1][$i]));
            $row[$row_headers[$i]] = $td;
        }
        if (count($row) > 0)
            $table[] = $row;
    }
    return $table;
}

$output = parseTable($data);
print_r($output);
?>
I want my output array to look something like this:
1
--> 11:33AM
--> DEV
--> IN THE DARK
2
--> 11:29AM
--> LIL' WAYNE
--> SHE WILL
3
--> 11:26AM
--> KARDINAL OFFISHALL
--> NUMBA 1 (TIDE IS HIGH)
Don't cripple yourself parsing HTML with regexps! Instead, let an HTML parser library worry about the structure of the markup for you.
I suggest you check out Simple HTML DOM (http://simplehtmldom.sourceforge.net/). It is a library specifically written to help solve this kind of web scraping problem in PHP. By using such a library, you can write your scraper in far fewer lines of code without worrying about crafting working regexps.
In principle, with Simple HTML DOM you just write something like:
$html = file_get_html('http://flow935.com/playlist/flowhis.HTM');

foreach ($html->find('tr') as $row) {
    // Parse table row here
}
This can then be extended to capture your data in some format, for instance to create an array of artists and corresponding titles:
<?php
require('simple_html_dom.php');

$table = array();

$html = file_get_html('http://flow935.com/playlist/flowhis.HTM');
foreach ($html->find('tr') as $row) {
    $time   = $row->find('td', 0)->plaintext;
    $artist = $row->find('td', 1)->plaintext;
    $title  = $row->find('td', 2)->plaintext;

    $table[$artist][$title] = true;
}

echo '<pre>';
print_r($table);
echo '</pre>';
?>
We can see that this code can be (trivially) changed to reformat the data in any other way as well.
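For instance, a sketch that collects each row as a numerically indexed entry of time, artist and title, closer to the output format shown in the question, could replace the loop body above (the guard skips any row without three <td> cells, such as the header row):

$table = array();
foreach ($html->find('tr') as $row) {
    // skip rows that don't have three <td> cells (e.g. the header row)
    if (!$row->find('td', 2)) {
        continue;
    }
    $table[] = array(
        $row->find('td', 0)->plaintext,
        $row->find('td', 1)->plaintext,
        $row->find('td', 2)->plaintext,
    );
}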
I tried simple_html_dom, but on larger files and with repeated calls to the function I am getting zend_mm_heap_corrupted on PHP 5.3 (GAH). I have also tried preg_match_all, but it fails on a larger file (around 5000 lines of HTML, which was only about 400 rows of my HTML table).
I am using the following instead, and it's working fast and not spitting out errors.
$dom = new DOMDocument();

// load the html file
$html = $dom->loadHTMLFile("htmltable.html");

// discard white space
$dom->preserveWhiteSpace = false;

// get the table by its tag name
$tables = $dom->getElementsByTagName('table');

// get all rows from the table
$rows = $tables->item(0)->getElementsByTagName('tr');

// get each header column by tag name
$cols = $rows->item(0)->getElementsByTagName('th');

$row_headers = NULL;
foreach ($cols as $node) {
    //print $node->nodeValue."\n";
    $row_headers[] = $node->nodeValue;
}

$table = array();

// get all rows from the table
$rows = $tables->item(0)->getElementsByTagName('tr');
foreach ($rows as $row) {
    // get each column by tag name
    $cols = $row->getElementsByTagName('td');
    $row = array();
    $i = 0;
    foreach ($cols as $node) {
        //print $node->nodeValue."\n";
        if ($row_headers == NULL)
            $row[] = $node->nodeValue;
        else
            $row[$row_headers[$i]] = $node->nodeValue;
        $i++;
    }
    $table[] = $row;
}

var_dump($table);
This code worked well for me.
Example of original code is here.
http://techgossipz.blogspot.co.nz/2010/02/how-to-parse-html-using-dom-with-php.html
I'm creating a "Madlibs" page where visitors can create funny stories online. The original files are in XML format, with the blanks enclosed in XML tags
(such as blablabla <PluralNoun></PluralNoun> blablabla <Verb></Verb>).
The form data is created using XSL and the results are submitted in the $_POST array. How do I insert the $_POST values between the matching XML tags and then display the result on the page? I'm sure it uses a "foreach" statement, but I'm just not familiar enough with PHP to figure out which functions to use. Any help would be great.
Thanks,
E
I'm not sure if I understood your problem quite well, but I think this might help:
// mocking some $_POST variables
$_POST['Verb'] = 'spam';
$_POST['PluralNoun'] = 'eggs';

// original template with blanks (should be loaded from a valid XML file)
$xml = 'blablabla <PluralNoun></PluralNoun> blablabla <Verb></Verb>';
$valid_xml = '<?xml version="1.0"?><xml>' . $xml . '</xml>';

$doc = DOMDocument::loadXML($valid_xml, LIBXML_NOERROR);
if ($doc !== FALSE) {
    $text = ''; // used to accumulate output while walking the XML tree
    foreach ($doc->documentElement->childNodes as $child) {
        if ($child->nodeType == XML_TEXT_NODE) { // keep text nodes
            $text .= $child->wholeText;
        } else if (array_key_exists($child->tagName, $_POST)) {
            // replace nodes whose tag matches a POST variable
            $text .= $_POST[$child->tagName];
        } else { // keep other nodes
            $text .= $doc->saveXML($child);
        }
    }
    echo $text . "\n";
} else {
    echo "Failed to parse XML\n";
}
Here is the PHP foreach syntax; hope it helps:
$arr = array('fruit1' => 'apple', 'fruit2' => 'orange');
foreach ($arr as $key => $val) {
    echo "$key = $val\n";
}
and here is the code to loop through your $_POST variables:
foreach ($_POST as $key => $val) {
    echo "$key = $val\n";
    // then you can fill each POST var into your XML
    // maybe you want to use the PHP str_replace function too
}
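A rough sketch of that str_replace idea, assuming the POST keys match the tag names used for the blanks:

// original template with blanks (in practice, load this from the XML file)
$story = 'blablabla <PluralNoun></PluralNoun> blablabla <Verb></Verb>';

foreach ($_POST as $key => $val) {
    // replace each empty tag pair with the submitted value
    $story = str_replace("<$key></$key>", htmlspecialchars($val), $story);
}

echo $story;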