I'm trying to extract data from the anchor URLs of a webpage, like this:
require 'simple_html_dom.php';
$html = file_get_html('http://www.example.com');
foreach($html->find('a') as $element)
{
$href= $element->href;
$name=$surname=$id=0;
parse_str($href);
echo $name;
}
Now, the problem with this is that it doesn't work for some reason. All urls are in the following form:
name=James&surname=Smith&id=2311245
Now, the strange thing is, if I execute
echo $href;
I get the desired output. However, that string won't parse for some reason, and it has a length of 43 according to the strlen() function. If, however, I pass 'name=James&surname=Smith&id=2311245' as the parse_str() argument, it works just fine. What could be the problem?
I'm gonna take a guess that your target page is actually one of the rare pages that correctly encodes & as &amp; in its links. Example:
<a href="somepage.php?name=James&amp;surname=Smith&amp;id=3211245">
To parse this string, you first need to unescape the &amp;s back to plain &. You can do this with a simple str_replace if you like.
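For example (a minimal sketch using the href from the question), either of these turns the 43-character encoded href back into the 35-character query string that parse_str expects:
$href = 'name=James&amp;surname=Smith&amp;id=2311245';
// Either replace the entity directly...
$query = str_replace('&amp;', '&', $href);
// ...or decode all HTML entities in one go
$query = html_entity_decode($href);
parse_str($query, $params);
echo $params['name']; // James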
Presuming the links are absolute, you just need the query string. You can use parse_url to pull it out, then pass an out parameter to parse_str to get the values as an array:
$html = file_get_html('http://www.example.com');
foreach($html->find('a') as $element)
{
$href = html_entity_decode($element->href); // turn &amp; back into & first
$url_components = parse_url($href);
parse_str($url_components['query'], $out);
var_dump($out);
}
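Assuming an absolute link whose query string looks like the example in the question (so parse_url finds a query component), $out will then contain array('name' => 'James', 'surname' => 'Smith', 'id' => '2311245').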
I'm looking for a solution to extract only one URL from a specific webpage using PHP.
Here's a simple example of what I need:
I have a URL with many links (https://apkpure.com/mi-home/com.xiaomi.smarthome/download?from=details)
I want to scrape the link under the anchor 'click here' on that page.
Then the code must return this result https://download.apkpure.com/b/XAPK/Y29tLnhpYW9taS5zbWFydGhvbWVfNjMwNjdfYWU1M2FmOWU?_fn=TWkgSG9tZV92NS44LjdfYXBrcHVyZS5jb20ueGFwaw&as=4c5e64f6f957edac834f3631fe4e09715f2e35f6&ai=-1070628217&at=1596863870&_sa=ai%2Cat&k=24cb20f95fbf333deb01c145ce7b982b5f30d87e&_p=Y29tLnhpYW9taS5zbWFydGhvbWU&c=1%7CLIFESTYLE%7CZGV2PVhpYW9taSUyMEluYy4mdD14YXBrJnM9MTI5OTAzMTM4JnZuPTUuOC43JnZjPTYzMDY3.
I tried this:
$sourceURL="https://apkpure.com/mi-home/com.xiaomi.smarthome/download?from=details";
$htmlSource=htmlentities(file_get_contents($sourceURL));
echo strip_tags($htmlSource, "<a>");
I get the result with all links including the one I need
I need your help to extract the href value of the link I want.
Thanks in advance.
If you look at the required URL, you can see it follows a pattern: every 'Click Here' URL starts with https://download.apkpure.com, so we can use a regex to find it.
preg_match_all will return an array of strings matching our regex. implode is then used to convert the first index of the result to a string.
Here is the complete working code:
$sourceURL="https://apkpure.com/mi-home/com.xiaomi.smarthome/download?from=details";
$content=file_get_contents($sourceURL);
$content = strip_tags($content,"<a>");
preg_match_all('#\bhttps?://download\.apkpure\.com[^,\s()<>]+(?:\([\w\d]+\)|([^,[:punct:]\s]|/))#', $content, $match);
echo implode(', ', $match[0]);
The most elegant way is to use a DOM parser:
Iterate through the anchors
Check if the anchor's ID is 'download_link' (which the 'click here' link has)
Extract the href attribute value
$html = file_get_contents('https://apkpure.com/mi-home/com.xiaomi.smarthome/download?from=details');
libxml_use_internal_errors(true);
$doc = new DOMDocument();
$doc->loadHTML($html);
$href = '';
foreach($doc->getElementsByTagName('a') as $item) {
if($item->getAttribute('id') == 'download_link') {
$href = $item->getAttribute('href');
break;
}
}
echo $href;
https://download.apkpure.com/b/XAPK/Y29tLnhpYW9taS5zbWFydGhvbWVfNjMwNjdfYWU1M2FmOWU?_fn=TWkgSG9tZV92NS44LjdfYXBrcHVyZS5jb20ueGFwaw&as=6a7de2cb660007a32e4b3d61a0d3c41e5f2e7102&ai=1946881098&at=1596878986&_sa=ai%2Cat&k=9e912b1007d50d2be9af8e78bcdea86c5f31138a&_p=Y29tLnhpYW9taS5zbWFydGhvbWU&c=1%7CLIFESTYLE%7CZGV2PVhpYW9taSUyMEluYy4mdD14YXBrJnM9MTI5OTAzMTM4JnZuPTUuOC43JnZjPTYzMDY3
I am trying to display a website to a user, having downloaded it using php.
This is the script I am using:
<?php
$url = 'http://stackoverflow.com/pagecalledjohn.php';
//Download page
$site = file_get_contents($url);
//Fix relative URLs
$site = str_replace('src="','src="' . $url,$site);
$site = str_replace('url(','url(' . $url,$site);
//Display to user
echo $site;
?>
So far this script works a treat, except for a few major problems with the str_replace calls. The problem comes with relative URLs. Say our made-up pagecalledjohn.php uses an image of a cat. It is a PNG, and as I see it, it can be placed on the page using 6 different URLs:
1. src="//www.stackoverflow.com/cat.png"
2. src="http://www.stackoverflow.com/cat.png"
3. src="https://www.stackoverflow.com/cat.png"
4. src="somedirectory/cat.png"
4 is not applicable in this case but added anyway!
5. src="/cat.png"
6. src="cat.png"
Is there a way, using PHP, to search for src=" and replace it with the URL (filename removed) of the page being downloaded, but without sticking the URL in there if it is option 1, 2 or 3, and changing the procedure slightly for 4, 5 and 6?
Rather than trying to change every path reference in the source code, why don't you simply inject a <base> tag in your header to specifically indicate the base URL upon which all relative URLs should be calculated?
https://developer.mozilla.org/en-US/docs/Web/HTML/Element/base
This can be achieved using your DOM manipulation tool of choice. The example below shows how to do this using DOMDocument and related classes.
$target_domain = 'http://stackoverflow.com/';
$url = $target_domain . 'pagecalledjohn.php';
//Download page
$site = file_get_contents($url);
$dom = new DOMDocument();
if($dom->loadHTML($site) === false) {
// something went wrong in loading HTML to DOM Document
// provide error messaging and exit
}
// find <head> tag
$head_tag_list = $dom->getElementsByTagName('head');
// there should only be one <head> tag
if($head_tag_list->length !== 1) {
throw new Exception('Wow! The HTML is malformed without single head tag.');
}
$head_tag = $head_tag_list->item(0);
// find first child of head tag to later use in insertion
$head_has_children = $head_tag->hasChildNodes();
if($head_has_children) {
$head_tag_first_child = $head_tag->firstChild;
}
// create new <base> tag
$base_element = $dom->createElement('base');
$base_element->setAttribute('href', $target_domain);
// insert new base tag as first child to head tag
if($head_has_children) {
$base_node = $head_tag->insertBefore($base_element, $head_tag_first_child);
} else {
$base_node = $head_tag->appendChild($base_element);
}
echo $dom->saveHTML();
At the very minimum, if you truly want to modify all path references in the source code, I would HIGHLY recommend doing so with DOM manipulation tools (DOMDocument, DOMXPath, etc.) rather than regex. I think you will find it a much more stable solution.
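If you do go that route, here is a minimal sketch of the idea (reusing $site and $target_domain from the example above, and resolving everything against the site root, which matches the simple pagecalledjohn.php case). Only URLs that are not already absolute or protocol-relative get the prefix:
$dom = new DOMDocument();
libxml_use_internal_errors(true);
$dom->loadHTML($site);
libxml_clear_errors();
$xpath = new DOMXPath($dom);
// Visit every element that carries a src attribute
foreach ($xpath->query('//*[@src]') as $node) {
    $src = $node->getAttribute('src');
    // Skip cases 1-3 (protocol-relative and absolute URLs)
    if (!preg_match('#^(https?:)?//#i', $src)) {
        // Cases 4-6: prefix the page's base URL
        $node->setAttribute('src', rtrim($target_domain, '/') . '/' . ltrim($src, '/'));
    }
}
echo $dom->saveHTML();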
I don't know if I get your question completely right, but if you want to deal with all text sequences enclosed in src=" and ", the following pattern would do it:
~(\ssrc=")([^"]+)(")~
It has three capturing groups of which the second one contains the data you're interested in. The first and last are useful to change the whole match.
Now you can replace all instances with a callback function that changes them in place. I've created a simple string with all 6 cases you've got:
$site = <<<BUFFER
1. src="//www.stackoverflow.com/cat.png"
2. src="http://www.stackoverflow.com/cat.png"
3. src="https://www.stackoverflow.com/cat.png"
4. src="somedirectory/cat.png"
5. src="/cat.png"
6. src="cat.png"
BUFFER;
Let's ignore for a moment that there are no surrounding HTML tags; you're not parsing HTML anyway, I'm sure, as you haven't asked for an HTML parser but for a regular expression. In the following example, the match in the middle (the URL) will be enclosed in markers so that it's clear what matched.
So, to replace each of the links, let's start lightly by just highlighting them in the string:
$pattern = '~(\ssrc=")([^"]+)(")~';
echo preg_replace_callback($pattern, function ($matches) {
return $matches[1] . ">>>" . $matches[2] . "<<<" . $matches[3];
}, $site);
The output for the example given then is:
1. src=">>>//www.stackoverflow.com/cat.png<<<"
2. src=">>>http://www.stackoverflow.com/cat.png<<<"
3. src=">>>https://www.stackoverflow.com/cat.png<<<"
4. src=">>>somedirectory/cat.png<<<"
5. src=">>>/cat.png<<<"
6. src=">>>cat.png<<<"
As the way of replacing the string will need to change later, it can be extracted so that it is easier to swap out:
$callback = function($method) {
return function ($matches) use ($method) {
return $matches[1] . $method($matches[2]) . $matches[3];
};
};
This function creates the replace callback based on a replacement method you pass in as a parameter.
Such a replacement function could be:
$highlight = function($string) {
return ">>>$string<<<";
};
And it's called like the following:
$pattern = '~(\ssrc=")([^"]+)(")~';
echo preg_replace_callback($pattern, $callback($highlight), $site);
The output remains the same; this was just to illustrate how the extraction works:
1. src=">>>//www.stackoverflow.com/cat.png<<<"
2. src=">>>http://www.stackoverflow.com/cat.png<<<"
3. src=">>>https://www.stackoverflow.com/cat.png<<<"
4. src=">>>somedirectory/cat.png<<<"
5. src=">>>/cat.png<<<"
6. src=">>>cat.png<<<"
The benefit of this is that the replacement function only needs to deal with the URL match as a single string, not with the regular-expression matches array for the different groups.
Now to the second half of your question: how to replace this with the specific URL handling, like removing the filename. This can be done by parsing the URL itself and removing the filename (basename) from the path component. Thanks to the extraction, you can put this into a simple function:
$removeFilename = function ($url) {
$url = new Net_URL2($url);
$base = basename($path = $url->getPath());
$url->setPath(substr($path, 0, -strlen($base)));
return $url;
};
This code makes use of PEAR's Net_URL2 URL component (also available via Packagist and GitHub; your OS packages might have it, too). It can parse and modify URLs easily, so it is nice to have for the job.
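If you would rather not pull in the PEAR dependency, a rough built-in-only equivalent (a sketch that covers the six example cases above, not every possible URL) could be used as a drop-in for the closure:
$removeFilename = function ($url) {
    $parts = parse_url($url);
    $path  = isset($parts['path']) ? $parts['path'] : '';
    // Drop the last path segment (the file name), keeping the trailing slash
    $pos  = strrpos($path, '/');
    $path = ($pos === false) ? '' : substr($path, 0, $pos + 1);
    // Reassemble scheme and host (if present) with the trimmed path
    $scheme = isset($parts['scheme']) ? $parts['scheme'] . ':' : '';
    $host   = isset($parts['host']) ? '//' . $parts['host'] : '';
    return $scheme . $host . $path;
};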
So now the replacement is done with the new URL filename-removal function:
$pattern = '~(\ssrc=")([^"]+)(")~';
echo preg_replace_callback($pattern, $callback($removeFilename), $site);
And the result then is:
1. src="//www.stackoverflow.com/"
2. src="http://www.stackoverflow.com/"
3. src="https://www.stackoverflow.com/"
4. src="somedirectory/"
5. src="/"
6. src=""
Please note that this is exemplary. It shows how you can do it with regular expressions. You can, however, do it as well with an HTML parser. Let's make this an actual HTML fragment:
1. <img src="//www.stackoverflow.com/cat.png"/>
2. <img src="http://www.stackoverflow.com/cat.png"/>
3. <img src="https://www.stackoverflow.com/cat.png"/>
4. <img src="somedirectory/cat.png"/>
5. <img src="/cat.png"/>
6. <img src="cat.png"/>
And then process all <img> "src" attributes with the created replacement filter function:
$doc = new DOMDocument();
$saved = libxml_use_internal_errors(true);
$doc->loadHTML($site, LIBXML_HTML_NOIMPLIED | LIBXML_HTML_NODEFDTD);
libxml_use_internal_errors($saved);
$srcs = (new DOMXPath($doc))->query('//img/@src') ?: [];
foreach ($srcs as $src) {
$src->nodeValue = $removeFilename($src->nodeValue);
}
echo $doc->saveHTML();
The result then again is:
1. <img src="//www.stackoverflow.com/">
2. <img src="http://www.stackoverflow.com/">
3. <img src="https://www.stackoverflow.com/">
4. <img src="somedirectory/">
5. <img src="/">
6. <img src="">
Only a different way of parsing has been used; the replacement itself is still the same. This is just to offer two different approaches that share the same core.
I suggest doing it in more steps.
In order to not complicate the solution, let's assume that any src value is always an image (it could as well be something else, e.g. a script).
Also, let's assume that there are no spaces between the equals sign and the quotes (this can be fixed easily if there are). Finally, let's assume that the file name does not contain any escaped quotes (if it did, the regexp would be more complicated).
So you'd use the following regexp to find all image references:
src="([^"]*)". (Also, this does not cover the case where src is enclosed in single quotes, but it is easy to create a similar regexp for that.)
The processing logic itself can be done with the preg_replace_callback function instead of str_replace. You can provide a callback to this function, in which each URL is processed based on its contents.
So you could do something like this (not tested!):
$site = preg_replace_callback(
'/src="([^"]*)"/',
function ($src) {
$url = $src[1];
$ret = "";
if (preg_match('#^//#', $url)) {
// case 1.
$ret = 'src="' . $url . '"';
}
else if (preg_match('#^https?://#', $url)) {
// case 2. and 3.
$ret = 'src="' . $url . '"';
}
else {
// case 4., 5., 6.
$ret = 'src="http://your.site.com/' . $url . '"';
}
return $ret;
},
$site
);
I want to find all anchor tags whose href includes my URL, in any HTML source.
I used this code:
preg_match_all("'<a.*?href=\"(http[s]*://[^>\"]*?)\"[^>]*?>(.*?)</a>'si", $target_source, $matches);
For example, I want to find all anchor tags that include http://www.emrekadan.com.
How can I do it?
I'd simply use PHP's DOM parser for this purpose. This may seem harder than regex, but it's actually a lot easier and is the correct way to parse HTML.
$url = 'WEBSITE_TO_SEARCH_FOR';
$searchstring = 'YOUR_SEARCH_STRING';
$dom = new DOMDocument();
@$dom->loadHTMLFile($url);
$result = array();
foreach($dom->getElementsByTagName('a') as $link) {
$href = $link->getAttribute('href');
if(stripos($href, $searchstring) !== FALSE) {
$result[] = $href;
}
}
if(!empty($result)) print_r($result);
Explanation:
Loads the given URL using the loadHTMLFile() method
Finds all <a> tags and loops through them
Uses stripos() to case-insensitively check if the href contains the given search term
If it does, it's pushed into the $result array
Note: If an empty string is passed as the filename or an empty file is named, a warning will be generated. I've used @ to hide that message, but it's generally regarded as a bad practice. You can add additional checks to make sure the URL exists before trying to load it.
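For instance, one such check (an illustrative sketch, not the only way to do it) could verify that the URL responds before parsing it:
// Fetch only the response headers to see whether the URL answers at all
$headers = @get_headers($url);
if ($headers === false || strpos($headers[0], '200') === false) {
    die("Could not fetch $url");
}
$dom = new DOMDocument();
@$dom->loadHTMLFile($url);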
I'm trying to retrieve info from a webpage using simple_html_dom, like this:
<?PHP
include_once('dom/simple_html_dom.php');
$urlpart="http://w2.brreg.no/motorvogn/";
$url = "http://w2.brreg.no/motorvogn/heftelser_motorvogn.jsp?regnr=BR15597";
$html = file_get_html($url);
foreach($html->find('a') as $element)
if(preg_match('*dagb*',$element)) {
$result=$urlpart.$element->href;
$resultcontent=file_get_contents($result);
echo $resultcontent;
}
?>
The $result variable first gives me this URL:
http://w2.brreg.no/motorvogn/dagbokutskrift.jsp?dgbnr=2011365320&embnr=0&regnr=BR15597
When accessing the above URL with my browser, I get the content I expect.
When retrieving the content with $resultcontent, I get a different result, where it says in Norwegian "Invalid input".
Any ideas why?
foreach($html->find('a') as $element)
if(preg_match('*dagb*',$element)) {
$result=$urlpart.$element->href;
$resultcontent=file_get_contents(html_entity_decode($result));
echo $resultcontent;
}
This should do the trick.
The problem is with your URL query parameter.
http://w2.brreg.no/motorvogn/dagbokutskrift.jsp?dgbnr=2011365320&embnr=0&regnr=BR15597
The string '&reg' in the URL ends up converted to the symbol ®, which stops the file_get_contents call from getting the actual result.
You can use html_entity_decode function in line #11
$resultcontent=file_get_contents(html_entity_decode($result));
Ahoy there!
I can't "guess" which syntax I should use to extract the source of an image: I want just the web address, not the src= and not the quotes.
Here is my piece of code:
function get_all_images_src() {
$content = get_the_content();
preg_match_all('|src="(.*?)"|i', $content, $matches, PREG_SET_ORDER);
foreach($matches as $path) {
echo $path[0];
}
}
When I use it I got this printed:
src="http://project.bechade.fr/wp-content/uploads/2009/09/mer-300x225.jpg"
And I wish to get only this:
http://project.bechade.fr/wp-content/uploads/2009/09/mer-300x225.jpg
Any idea?
Thanks for your help.
Not exactly an answer to your question, but when parsing HTML, consider using a proper HTML parser:
foreach($html->find('img') as $element) {
echo $element->src . '<br />';
}
See: http://simplehtmldom.sourceforge.net/
$path[1] instead of $path[0]
echo $path[1];
$path[0] is the full string matched. $path[1] is the first grouping.
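Applied to your original function, that change looks like this:
function get_all_images_src() {
    $content = get_the_content();
    preg_match_all('|src="(.*?)"|i', $content, $matches, PREG_SET_ORDER);
    foreach ($matches as $path) {
        // $path[1] holds only the captured URL, without src=" and the quotes
        echo $path[1];
    }
}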
You could explode the string using " as a delimiter, and then the second item in the resulting array would be the right string:
$array = explode('"',$full_src);
$bit_you_want = $array[1];
Reworking your original function, it would be:
function get_all_images_src() {
$content = get_the_content();
preg_match_all('|src="(.*?)"|i', $content, $matches, PREG_SET_ORDER);
foreach($matches as $path) {
$src = explode('"', $path[0]);
echo $src[1];
}
}
Thanks ithcy for the right answer.
I guess I took too long to respond, because he deleted it; I just don't know where his answer went...
So here is the one I've received by mail:
'|src="(.*?)"|i' makes no sense as a
regex. try '|src="([^"]+)"|i' instead.
(Which still isn't the most robust
solution but is better than what
you've got.)
Also, what everyone else said. You
want $path[1], NOT $path[0]. You're
already extracting all the src
attributes into $matches[]. That has
nothing to do with $path[0]. If you're
not getting all of the src attributes
in the text, there is a problem
somewhere else in your code.
One more thing - you should use a real
HTML parser for this, because img tags
are not the only tags with src
attributes. If you're using this code
on raw HTML source, it's going to
match not just <img> tags but
<script> and other src-bearing tags, etc.
— ithcy
I did everything he told me to do, including using an HTML parser from Bart (2nd answer).
It works like a charm! Thank you mate...