Array only printing last value [duplicate] - php

This question already has answers here:
How to store values from foreach loop into an array?
(9 answers)
Closed 1 year ago.
Consider the following PHP code, which is scraping a client's old static website for his customers' emails...
$urls = explode(PHP_EOL, file_get_contents('urls.txt'));

print '<pre>'; print_r($urls); print '</pre>';
print '<strong>Results:</strong><br>';

function get_emails($url) {
    $html = file_get_contents($url);
    $dom = new DOMDocument;
    @$dom->loadHTML($html);
    $links = $dom->getElementsByTagName('a');
    foreach ($links as $link) {
        $href = $link->getAttribute('href');
        if (strpos($href, 'mailto') !== false) {
            return str_replace("mailto:", "", $href) . '<br>';
        }
    }
}

foreach ($urls as $key => $url) {
    print get_emails($url);
}
I am reading a list of urls from urls.txt, but the result only contains the email for the last url in the file; all of the others are ignored. I had hoped it would return a nice list of all his customers' emails so we can import them into the new site.
Can someone help diagnose the issue?

It's because of:-

    return str_replace("mailto:", "", $href) . '<br>';

return terminates the loop, and the whole function, as soon as the first mailto link is found.
1. Either do:-
$urls = explode(PHP_EOL, file_get_contents('urls.txt'));

print '<pre>'; print_r($urls); print '</pre>';
print '<strong>Results:</strong><br>';

function get_emails($url) {
    $html = file_get_contents($url);
    $dom = new DOMDocument;
    @$dom->loadHTML($html);
    $links = $dom->getElementsByTagName('a');
    foreach ($links as $link) {
        $href = $link->getAttribute('href');
        if (strpos($href, 'mailto') !== false) { // keep the mailto filter from the question
            echo str_replace("mailto:", "", $href) . '<br>';
        }
    }
}

foreach ($urls as $key => $url) {
    get_emails($url);
}
2. Or do it like below:-
$urls = explode(PHP_EOL, file_get_contents('urls.txt'));

print '<pre>'; print_r($urls); print '</pre>';
print '<strong>Results:</strong><br>';

function get_emails($url) {
    $html = file_get_contents($url);
    $data = array(); // define the result array
    $dom = new DOMDocument;
    @$dom->loadHTML($html);
    $links = $dom->getElementsByTagName('a');
    foreach ($links as $link) {
        $href = $link->getAttribute('href');
        if (strpos($href, 'mailto') !== false) { // keep the mailto filter from the question
            $data[] = str_replace("mailto:", "", $href) . '<br>'; // collect each value instead of returning
        }
    }
    return $data;
}

foreach ($urls as $key => $url) {
    print_r(get_emails($url));
}
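If you'd rather end up with one flat, de-duplicated list across every url instead of one print_r dump per page, you can merge the per-page arrays. A minimal sketch building on option 2 (the trim() call is an extra precaution against stray \r characters in urls.txt, not part of the original answer):

$all = array();
foreach ($urls as $url) {
    // merge each page's addresses into one flat list
    $all = array_merge($all, get_emails(trim($url)));
}
foreach (array_unique($all) as $email) {
    echo $email; // each value already ends in '<br>' from option 2
}

If you want bare addresses for the import, drop the '<br>' concatenation inside get_emails().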

Related

How to scrape data from HTML Table in PHP

Hey, I've been trying to scrape data from an HTML table and I'm not having much luck.
Website: https://www.dnr.state.mn.us/hunting/seasons.html
What I'm trying to do: grab the contents of the table and encode it into JSON, like
['event_title' => 'Waterfowl'] and ['event_date' => '09/25/21']
but I don't know how to do this. I've tried a couple of different things, but in the end I can't get it to work.
Code Example (Closest I got):
<?php
$dom = new DOMDocument;
$page = file_get_contents('https://www.dnr.state.mn.us/hunting/seasons.html');
$dom->loadHTML($page);
$xpath = new DOMXPath($dom);

foreach ($xpath->query('//tbody/tr') as $tr) {
    $tmp = []; // reset the temporary array so previous entries are removed
    foreach ($xpath->query("td[@class]", $tr) as $td) {
        $key = preg_match('~[a-z]+$~', $td->getAttribute('class'), $out) ? $out[0] : 'no_class';
        if ($key === "event-title") {
            $tmp['event_title'] = $xpath->query("a", $td);
        }
        $tmp[$key] = trim($td->textContent);
    }
    //$tmp['event_date'] = date("M. dS 'y", strtotime(preg_replace('~\.|\d+[ap]m *~', '', $tmp['date'])));
    //$result[] = $tmp;
    $marray[] = array_unique($tmp);
    print_r($marray);
}
//$array2 = var_export($result);
//print_r($array2[1]);
//var_export($result);
//echo "\n----\n";
//echo json_encode($result);
?>
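This one is shown here without its answer, so here is a hedged sketch of one way to finish it: collect each row into $result and call json_encode() once after the loop, reading textContent (a string) instead of storing the DOMNodeList that $xpath->query() returns. The key mapping below assumes the table's class names end in "title" and "date", as the asker's regex implies; the page's actual markup may differ.

<?php
$dom = new DOMDocument;
@$dom->loadHTML(file_get_contents('https://www.dnr.state.mn.us/hunting/seasons.html'));
$xpath = new DOMXPath($dom);

// the trailing-lowercase regex turns a class like "event-title" into the key "title"
$map = ['title' => 'event_title', 'date' => 'event_date'];

$result = [];
foreach ($xpath->query('//tbody/tr') as $tr) {
    $tmp = [];
    foreach ($xpath->query('td[@class]', $tr) as $td) {
        $key = preg_match('~[a-z]+$~', $td->getAttribute('class'), $out) ? $out[0] : 'no_class';
        $tmp[$map[$key] ?? $key] = trim($td->textContent); // textContent is plain text, not a node list
    }
    if ($tmp !== []) {
        $result[] = $tmp;
    }
}
echo json_encode($result, JSON_PRETTY_PRINT);
?>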

Scraping specific text from a webpage using xpath

I've searched and tried multiple ways to get this, but I'm not sure why it won't find most of the information on the webpage.
Page to scrape:
https://m.safeguardproperties.com/
Info needed:
The version number for PhotoDirect for Apple (currently 4.4.0).
XPath to the text needed (I think): /html/body/div[1]/div[2]/div[1]/div[4]/div[3]/a
Attempts:
<?php
$file = "https://m.safeguardproperties.com/";
$doc = new DOMDocument();
$doc->loadHTMLFile($file);
$xpath = new DOMXpath($doc);
$elements = $xpath->query("/html/body/div[1]/div[2]/div[1]/div[4]/div[3]/a");

echo "<PRE>";
if (!is_null($elements)) {
    foreach ($elements as $element) {
        var_dump($element);
        echo "<br/>[" . $element->nodeName . "]";
        $nodes = $element->childNodes;
        foreach ($nodes as $node) {
            echo $node->nodeValue . "\n";
        }
    }
}
echo "</PRE>";
?>
Second Attempt:
<?php
$file = "https://m.safeguardproperties.com/";
$doc = new DOMDocument();
$doc->loadHTMLFile($file);

echo '<pre>';
// trying to find all links in the document to see if I can see the correct one
$links = [];
$arr = $doc->getElementsByTagName("a");
foreach ($arr as $item) {
    $href = $item->getAttribute("href");
    $text = trim(preg_replace("/[\r\n]+/", " ", $item->nodeValue));
    $links[] = [
        'href' => $href,
        'text' => $text
    ];
}
var_dump($links);
echo '</pre>';
?>
For that particular website, the versions are loaded client-side from JSON data, so you won't find them in the base document:
http://m.safeguardproperties.com/js/photodirect.json
This was located by comparing the original document source to the finished DOM, and by inspecting the network activity in the developer console.
$url = 'https://m.safeguardproperties.com/js/photodirect.json';
$json = file_get_contents($url);
$object = json_decode($json);
echo $object->ios->version; // 4.4.0
Please respect other websites and cache your GET requests.
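A minimal sketch of that caching advice, assuming a writable cache file next to the script and an arbitrary one-hour lifetime (neither detail is from the original answer):

<?php
$url   = 'https://m.safeguardproperties.com/js/photodirect.json';
$cache = __DIR__ . '/photodirect.json.cache'; // hypothetical cache location
$ttl   = 3600; // refresh at most once per hour

// hit the remote server only when the cached copy is missing or stale
if (!is_file($cache) || time() - filemtime($cache) > $ttl) {
    file_put_contents($cache, file_get_contents($url));
}

$object = json_decode(file_get_contents($cache));
echo $object->ios->version;
?>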

PHP foreach with two 'as'?

Hi there, I am trying to combine two foreach loops, but I have a problem.
The problem is that the <a href='$link'> is the same for all results, but they should be different.
Here is the code that I am using:
<?php
$feed = file_get_contents('http://grabo.bg/rss/?city=&affid=16090');
$rss = simplexml_load_string($feed);
$doc = new DOMDocument();
@$doc->loadHTML($feed);
$tags = $doc->getElementsByTagName('link');
foreach ($tags as $tag) {
    foreach ($rss as $r) {
        $title = $r->title;
        $content = $r->content;
        $link = $tag->getAttribute('href');
        echo "<a href='$link'>$title</a> <br> $content";
    }
}
?>
Where is my mistake? Why is it not working, and how do I make it work properly?
Thanks in advance!
The two loops were iterating over different resources, so you were simply cross-joining all the records in them.
This should work to get the data you need:
<?php
$feed = file_get_contents('http://grabo.bg/rss/?city=&affid=16090');
$rss = simplexml_load_string($feed);
foreach ($rss as $key => $entry) {
    if ($key == "entry") {
        $title   = (string) $entry->title;
        $content = (string) $entry->content;
        $link    = (string) $entry->link["href"];
        echo "<a href='$link'>$title</a><br />" . $content;
    }
}

PHP: DOM get url and anchors (but not IMG)

I want to select all URLs from an HTML page into an array. Given markup like:

This is a webpage <a href="http://somesite.com/link1.php">with</a>
different kinds of <a href="http://somesite.com/link1.php"><img src="someimg.png"></a>

the output I would like is:

with => http://somesite.se/link1.php

Now I get:

<img src="someimg.png"> => http://somesite.com/link1.php
with => http://somesite.com/link1.php

I do not want the urls/links that contain an image between the opening <a> and closing </a>; only the ones with text.
My current code is:
<?php
function innerHTML($node) {
    $ret = '';
    foreach ($node->childNodes as $node) {
        $ret .= $node->ownerDocument->saveHTML($node);
    }
    return $ret;
}

$html = file_get_contents('http://somesite.com/'.$_GET['apt']);
$dom = new DOMDocument;
@$dom->loadHTML($html); // @ = Removes errors from the HTML...
$links = $dom->getElementsByTagName('a');

$result = array();
foreach ($links as $link) {
    //$node = $link->nodeValue;
    $node = innerHTML($link);
    $href = $link->getAttribute('href');
    if (preg_match('/\.pdf$/i', $href)) {
        $result[$node] = $href;
    }
}
print_r($result);
?>
Add a second preg_match to your conditional:
if (preg_match('/\.pdf$/i', $href) && !preg_match('/<img .*>/i', $node)) {
    $result[$node] = $href;
}
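Since the links are already DOM nodes, a sturdier variant (not from the original answer) is to ask each link directly whether it wraps an img element, rather than pattern-matching the serialized HTML, which can miss variants such as attributes split across lines. This sketch reuses the asker's innerHTML() helper:

foreach ($links as $link) {
    $href = $link->getAttribute('href');
    // DOMElement::getElementsByTagName() finds an <img> at any depth inside the link
    $hasImage = $link->getElementsByTagName('img')->length > 0;
    if (preg_match('/\.pdf$/i', $href) && !$hasImage) {
        $result[innerHTML($link)] = $href;
    }
}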

Extract all urls Href php

How do I convert these links to sha1, and then return the HTML with the sha1 versions already applied?
$dom = new DOMDocument;
$dom->loadHTML($html);
$links = $dom->getElementsByTagName('a');
foreach ($links as $link) {
    if (preg_match("/globo.com/i", $link->getAttribute('href'))) {
        $v = $link->getAttribute('href');
        $str = str_replace($v, 'http://www.globo.com/?id='.sha1($v), $v);
        $str2 = str_replace($v, $str, $html);
        echo $str2."";
    }
}
You can just put the href back into the element:
$dom = new DOMDocument;
$dom->loadHTML($html);
$links = $dom->getElementsByTagName('a');
foreach ($links as $link) {
    $href = $link->getAttribute('href');
    if (preg_match("/globo\.com/i", $href)) {
        $newHref = 'http://www.globo.com/?id=' . sha1($href); // hash the href we just read, not an undefined $v
        $link->setAttribute('href', $newHref);
    }
}
And then export the finished HTML using saveHTML().
echo $dom->saveHTML();
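A quick end-to-end check of that approach; the $html snippet is a made-up example, not from the question:

$html = '<p><a href="http://www.globo.com/news/1">news</a></p>'; // hypothetical input

$dom = new DOMDocument;
@$dom->loadHTML($html); // suppress warnings about the bare fragment
foreach ($dom->getElementsByTagName('a') as $link) {
    $href = $link->getAttribute('href');
    if (preg_match('/globo\.com/i', $href)) {
        $link->setAttribute('href', 'http://www.globo.com/?id=' . sha1($href));
    }
}
echo $dom->saveHTML();
// the anchor's href is now http://www.globo.com/?id=<sha1-of-the-old-href>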
