I am trying to display a website to a user, having downloaded it using PHP.
This is the script I am using:
<?php
$url = 'http://stackoverflow.com/pagecalledjohn.php';
//Download page
$site = file_get_contents($url);
//Fix relative URLs
$site = str_replace('src="','src="' . $url,$site);
$site = str_replace('url(','url(' . $url,$site);
//Display to user
echo $site;
?>
So far this script works a treat, except for a few major problems with the str_replace calls. The problem comes with relative URLs. Suppose our made-up pagecalledjohn.php uses an image of a cat, cat.png. As I see it, the image can be placed on the page using 6 different URLs:
1. src="//www.stackoverflow.com/cat.png"
2. src="http://www.stackoverflow.com/cat.png"
3. src="https://www.stackoverflow.com/cat.png"
4. src="somedirectory/cat.png"
(Number 4 is not applicable in this case, but added anyway!)
5. src="/cat.png"
6. src="cat.png"
Is there a way, using PHP, to search for src=" and replace it with the URL of the page being downloaded (filename removed), but without sticking the URL in there if it is form 1, 2 or 3, and changing the procedure slightly for 4, 5 and 6?
Rather than trying to change every path reference in the source code, why don't you simply inject a <base> tag in your header to explicitly indicate the base URL against which all relative URLs should be resolved?
https://developer.mozilla.org/en-US/docs/Web/HTML/Element/base
This can be achieved using your DOM manipulation tool of choice. The example below shows how to do this using DOMDocument and related classes.
$target_domain = 'http://stackoverflow.com/';
$url = $target_domain . 'pagecalledjohn.php';
//Download page
$site = file_get_contents($url);
$dom = new DOMDocument();
if ($dom->loadHTML($site) === false) {
    // something went wrong in loading HTML to DOM Document
    // provide error messaging and exit
}
// find <head> tag
$head_tag_list = $dom->getElementsByTagName('head');
// there should only be one <head> tag
if ($head_tag_list->length !== 1) {
    throw new Exception('Wow! The HTML is malformed without single head tag.');
}
$head_tag = $head_tag_list->item(0);
// find first child of head tag to later use in insertion
$head_has_children = $head_tag->hasChildNodes();
if ($head_has_children) {
    $head_tag_first_child = $head_tag->firstChild;
}
// create new <base> tag
$base_element = $dom->createElement('base');
$base_element->setAttribute('href', $target_domain);
// insert new base tag as first child to head tag
if ($head_has_children) {
    $base_node = $head_tag->insertBefore($base_element, $head_tag_first_child);
} else {
    $base_node = $head_tag->appendChild($base_element);
}
echo $dom->saveHTML();
At the very minimum, if you truly want to modify all path references in the source code, I would HIGHLY recommend doing so with DOM manipulation tools (DOMDocument, DOMXPath, etc.) rather than regex. I think you will find it a much more stable solution.
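For illustration, here is a minimal sketch of what such a DOM-based rewrite could look like, reusing $site and $target_domain from above. It assumes the page sits at the site root (as pagecalledjohn.php does here), so document-relative paths can be prefixed with the root directly, and it only touches src attributes:

$dom = new DOMDocument();
libxml_use_internal_errors(true); // real-world HTML is rarely well-formed
$dom->loadHTML($site);
libxml_clear_errors();
$xpath = new DOMXPath($dom);
foreach ($xpath->query('//*[@src]') as $node) {
    $src = $node->getAttribute('src');
    // skip absolute (http://, https://) and protocol-relative (//) URLs, cases 1-3
    if (preg_match('~^(?:https?:)?//~i', $src)) {
        continue;
    }
    // cases 4-6: prefix with the site root, collapsing any leading slash
    $node->setAttribute('src', rtrim($target_domain, '/') . '/' . ltrim($src, '/'));
}
echo $dom->saveHTML();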
I don't know if I get your question completely right; if you want to deal with all text sequences enclosed in src=" and ", the following pattern could do it:
~(\ssrc=")([^"]+)(")~
It has three capturing groups, of which the second one contains the data you're interested in. The first and last are useful for rebuilding the whole match.
Now you can replace all instances with a callback function that changes each match. I've created a simple string with all six cases you've got:
$site = <<<BUFFER
1. src="//www.stackoverflow.com/cat.png"
2. src="http://www.stackoverflow.com/cat.png"
3. src="https://www.stackoverflow.com/cat.png"
4. src="somedirectory/cat.png"
5. src="/cat.png"
6. src="cat.png"
BUFFER;
Let's ignore for a moment that there are no surrounding HTML tags; you're not really parsing HTML anyway, since you asked for a regular expression rather than an HTML parser. So, to replace each of the links, let's start lightly by just highlighting them in the string; the match in the middle (the URL) will be enclosed by markers so that it's clear what matched:
$pattern = '~(\ssrc=")([^"]+)(")~';
echo preg_replace_callback($pattern, function ($matches) {
return $matches[1] . ">>>" . $matches[2] . "<<<" . $matches[3];
}, $site);
The output for the example given then is:
1. src=">>>//www.stackoverflow.com/cat.png<<<"
2. src=">>>http://www.stackoverflow.com/cat.png<<<"
3. src=">>>https://www.stackoverflow.com/cat.png<<<"
4. src=">>>somedirectory/cat.png<<<"
5. src=">>>/cat.png<<<"
6. src=">>>cat.png<<<"
As the way of replacing the string is to be changed, it can be extracted, so it is easier to change:
$callback = function ($method) {
    return function ($matches) use ($method) {
        return $matches[1] . $method($matches[2]) . $matches[3];
    };
};
This function creates the replace callback based on the replacement method you pass as a parameter.
Such a replacement function could be:
$highlight = function ($string) {
    return ">>>$string<<<";
};
And it's called like the following:
$pattern = '~(\ssrc=")([^"]+)(")~';
echo preg_replace_callback($pattern, $callback($highlight), $site);
The output remains the same; this was just to illustrate how the extraction works:
1. src=">>>//www.stackoverflow.com/cat.png<<<"
2. src=">>>http://www.stackoverflow.com/cat.png<<<"
3. src=">>>https://www.stackoverflow.com/cat.png<<<"
4. src=">>>somedirectory/cat.png<<<"
5. src=">>>/cat.png<<<"
6. src=">>>cat.png<<<"
The benefit of this is that, inside the replacement function, you only need to deal with the URL match as a single string, not with the regular-expression matches array for the different groups.
Now to the second half of your question: how to replace this with specific URL handling, like removing the filename. That can be done by parsing the URL itself and removing the filename (basename) from the path component. Thanks to the extraction, you can put this into a simple function:
$removeFilename = function ($url) {
    $url = new Net_URL2($url);
    $base = basename($path = $url->getPath());
    $url->setPath(substr($path, 0, -strlen($base)));
    return $url;
};
This code makes use of PEAR's Net_URL2 URL component (also available via Packagist and GitHub; your OS packages might have it, too). It can parse and modify URLs easily, so it is nice to have for the job.
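If you don't have it installed, the Packagist package is, as far as I know, pear/net_url2 (composer require pear/net_url2). As a side note, the class also offers RFC 3986 reference resolution, which fits this question nicely; a small sketch:

require 'vendor/autoload.php';

$base = new Net_URL2('http://stackoverflow.com/pagecalledjohn.php');
echo $base->resolve('cat.png');   // http://stackoverflow.com/cat.png
echo $base->resolve('/cat.png');  // http://stackoverflow.com/cat.png
echo $base->resolve('//www.stackoverflow.com/cat.png'); // http://www.stackoverflow.com/cat.png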
So now the replacement is done with the new filename-removing URL function:
$pattern = '~(\ssrc=")([^"]+)(")~';
echo preg_replace_callback($pattern, $callback($removeFilename), $site);
And the result then is:
1. src="//www.stackoverflow.com/"
2. src="http://www.stackoverflow.com/"
3. src="https://www.stackoverflow.com/"
4. src="somedirectory/"
5. src="/"
6. src=""
Please note that this is exemplary. It shows how you can do it with regular expressions. You can, however, do it just as well with an HTML parser. Let's make this an actual HTML fragment:
1. <img src="//www.stackoverflow.com/cat.png"/>
2. <img src="http://www.stackoverflow.com/cat.png"/>
3. <img src="https://www.stackoverflow.com/cat.png"/>
4. <img src="somedirectory/cat.png"/>
5. <img src="/cat.png"/>
6. <img src="cat.png"/>
And then process all <img> "src" attributes with the created replacement filter function:
$doc = new DOMDocument();
$saved = libxml_use_internal_errors(true);
$doc->loadHTML($site, LIBXML_HTML_NOIMPLIED | LIBXML_HTML_NODEFDTD);
libxml_use_internal_errors($saved);
$srcs = (new DOMXPath($doc))->query('//img/@src') ?: [];
foreach ($srcs as $src) {
    $src->nodeValue = $removeFilename($src->nodeValue);
}
echo $doc->saveHTML();
The result then again is:
1. <img src="//www.stackoverflow.com/cat.png">
2. <img src="http://www.stackoverflow.com/cat.png">
3. <img src="https://www.stackoverflow.com/cat.png">
4. <img src="somedirectory/cat.png">
5. <img src="/cat.png">
6. <img src="cat.png">
Only the way of parsing has changed; the replacement is still the same, so the two approaches share part of their code.
I suggest doing it in more steps.
In order not to complicate the solution, let's assume that any src value is always an image (it could just as well be something else, e.g. a script).
Also, let's assume that there are no spaces between the equals sign and the quotes (this can be fixed easily if there are). Finally, let's assume that the file name does not contain any escaped quotes (if it did, the regexp would be more complicated).
So you'd use the following regexp to find all image references: src="([^"]*)". (Also, this does not cover the case where the src value is enclosed in single quotes, but it is easy to create a similar regexp for that.)
However, the processing logic could be implemented with the preg_replace_callback function instead of str_replace. You can provide a callback to this function, where each URL can be processed based on its contents.
So you could do something like this (not tested!):
$site = preg_replace_callback(
    '~src="([^"]*)"~',
    function ($src) {
        $url = $src[1];
        $ret = "";
        if (preg_match("~^//~", $url)) {
            // case 1.
            $ret = 'src="' . $url . '"';
        }
        else if (preg_match("~^https?://~", $url)) {
            // case 2. and 3.
            $ret = 'src="' . $url . '"';
        }
        else {
            // case 4., 5., 6.
            $ret = 'src="http://your.site.com/' . $url . '"';
        }
        return $ret;
    },
    $site
);
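One caveat with this approach: prefixing the site root is only correct for document-relative paths (cases 4 and 6) when the page itself sits at the root. For pages in subdirectories, such paths should be resolved against the page's directory. A rough sketch of a helper that does this (resolve_src is my own name, not from the question):

function resolve_src($src, $page_url) {
    // cases 1-3: already absolute or protocol-relative
    if ($src === '' || preg_match('~^(?:https?:)?//~i', $src)) {
        return $src;
    }
    $parts  = parse_url($page_url);
    $origin = $parts['scheme'] . '://' . $parts['host'];
    // case 5: root-relative
    if ($src[0] === '/') {
        return $origin . $src;
    }
    // cases 4 and 6: relative to the directory of the current page
    $dir = rtrim(dirname(isset($parts['path']) ? $parts['path'] : '/'), '/');
    return $origin . $dir . '/' . $src;
}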
Related
I have a web page, for example http://example.com/some-page. If I pass this URL to my PHP function, it should grab the title and content of the page. I've tried to grab the title like this:
function page_title($url) {
    $page = @file_get_contents($url);
    if (preg_match('~<h1 class="page-title">(.*)<\/h1>~is', $page, $matches)) {
        return $matches[0];
    }
}
echo page_title('http://example.com/some-page');
What is my mistake?
Your function actually almost works. I would propose the DOM parser solution (see below), but before doing that I will point out a few weaknesses in the regular expression and the code:
the (.*) capture group is greedy, i.e. it will catch a string that is as long as possible before a closing </h1>, even across line breaks (because of the s modifier). So if your document has multiple h1 tags, it would capture up to the last one! You can fix this by making it a lazy capture: (.*?)
the actual page may have other tags, like a span, inside the title. You might want to improve the regular expression to exclude any tags that surround your title, but PHP has a function strip_tags for that purpose.
ensure that the file contents were actually retrieved; an error might have prevented correct retrieval, or your server might not allow such retrieval. And as you suppress errors using the @ prefix, you may miss them. I would suggest removing the @. You could also check the return value for false.
are you sure you want the H1 tag contents? A page often has a dedicated title tag.
The above improvements will give you this code:
function page_title($url) {
    $page = file_get_contents($url);
    if ($page === false) {
        echo "Failed to retrieve $url";
        return;
    }
    if (preg_match('~<h1 class="page-title">(.*?)<\/h1>~is', $page, $matches)) {
        return strip_tags($matches[0]);
    }
}
Although this works, you will sooner or later bump into a document that has an extra space in the h1 tag, or another attribute before class, or more than one CSS class, etc., making the match fail. The following regular expression deals with some of these problems:
~<h1\s+class\s*=\s*"([^" ]* )?page-title( [^"]*)?"[^>]*>(.*?)<\/h1\s*>~is
... but still the class attribute has to come before any other attributes, and its value must be enclosed in double quotes. That too could be solved, but the regular expression would become a monster.
The DOM way
Regular expressions are not the ideal way to extract content from HTML. Here is an alternative function based on DOM parsing:
function xpage_title($url) {
    // Create a new DOM Document to hold our webpage structure
    $xml = new DOMDocument();
    // Load the url's contents into the DOM, ignore warnings
    libxml_use_internal_errors(true);
    $success = $xml->loadHTMLFile($url);
    libxml_use_internal_errors(false);
    if (!$success) {
        echo "Failed to open $url.";
        return;
    }
    // Find first h1 with class 'page-title' and return its text contents
    foreach ($xml->getElementsByTagName('h1') as $h1) {
        // Does it have the desired class?
        if (in_array('page-title', explode(" ", $h1->getAttribute('class')))) {
            return $h1->textContent;
        }
    }
}
The above could still be improved by making use of DOMXPath, as sketched below.
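For completeness, a sketch of the DOMXPath variant. The XPath expression uses the usual (slightly clumsy) XPath 1.0 idiom for matching a single class token, since the class attribute may hold several space-separated names:

function xpath_page_title($url) {
    $doc = new DOMDocument();
    libxml_use_internal_errors(true);
    $success = $doc->loadHTMLFile($url);
    libxml_use_internal_errors(false);
    if (!$success) {
        echo "Failed to open $url.";
        return;
    }
    $xpath = new DOMXPath($doc);
    // match h1 elements whose class attribute contains the token "page-title"
    $nodes = $xpath->query(
        '//h1[contains(concat(" ", normalize-space(@class), " "), " page-title ")]'
    );
    return $nodes->length > 0 ? $nodes->item(0)->textContent : null;
}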
EDIT
You mentioned in comments you actually don't want the contents of the H1 tag because it contains more text than you want.
Then you could read the title tag and the article tag contents:
function page_title_and_content($url) {
    $page = file_get_contents($url);
    if ($page === false) {
        echo "Failed to retrieve $url";
        return null;
    }
    // PHP 5.4: $result = (object) ["title" => null, "content" => null];
    $result = new stdClass();
    $result->title = null;
    $result->content = null;
    if (preg_match('~\<title\>(.*?)\<\/title\>~is', $page, $matches)) {
        $result->title = $matches[1];
    }
    if (preg_match('~<article>(.*)<\/article>~is', $page, $matches)) {
        $result->content = $matches[1];
    }
    return $result;
}
$result = page_title_and_content('http://www.example.com/example');
echo "title: " . $result->title . "<br>";
echo "content: <br>" . $result->content . "<br>";
The above code returns an object with two properties: title and content. Note that the content property will contain HTML tags, potentially including images and such. If you don't want tags, apply strip_tags, as in the snippet below.
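For example, to reduce the content to plain text (a trivial sketch):

$plain = trim(strip_tags($result->content));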
I am trying to implement a simple function that, given a text input, returns the text modified with an xhp_a element wherever a link is detected, inside an xhp_p paragraph.
Consider this class
class Urlifier {
    protected static $reg_exUrl = "/(http|https|ftp|ftps)\:\/\/[a-zA-Z0-9\-\.]+\.[a-zA-Z]{2,3}(\/\S*)?/";
    public static function convertParagraphWithLink(?string $input): xhp_p {
        if (!$input) {
            return <p></p>;
        } else {
            if (preg_match(self::$reg_exUrl, $input, $url_match)) { // match found
                return <p>{preg_replace(self::$reg_exUrl, '<a href="'.$url_match[0].'">'.$url_match[0].'</a>', $input)}</p>;
            } else { // no link inside
                return <p>{$input}</p>;
            }
        }
    }
}
The problem here is that XHP escapes HTML, so the links are not shown as expected. I suppose this happens because I do not create a DOM hierarchy as expected (with the appendChild method, for example), and thus everything the regex replaces is just a string.
So my other approach to this problem was to use preg_replace_callback with a callback function that would create an xhp_a and add it to the hierarchy under the xhp_p, but that did not work either.
Am I wrong somewhere? If not, would there be any security risk or bigger overhead in just finding and replacing in the HTML on the client side on load, instead of on the server?
Thanks for your time!
Since XHP maintains an object hierarchy that maps to the DOM, simply replacing parts of a string won't create any new objects. To manipulate XHP objects, the corresponding methods should be used, e.g. appendChild.
Here's an example of how what you need can be achieved with XHP manipulation.
class Urlifier {
    public static function convertParagraphWithLink(
        ?string $input,
    ): xhp_p {
        $url_pattern = re"/(http|https|ftp|ftps)\:\/\/[a-zA-Z0-9\-\.]+\.[a-zA-Z]{2,3}(\/\S*)?/";
        if (HH\Lib\Str\is_empty($input)) {
            return <p/>;
        }
        $input = $input as nonnull;
        // Extract links
        $link_matches = HH\Lib\Regex\every_match($input, $url_pattern);
        $links = HH\Lib\Vec\map($link_matches, $m ==> $m[0]);
        $a_elements = HH\Lib\Vec\map($links, $link ==> <a href={$link}>{$link}</a>);
        // Extract all pieces between matches
        $texts = HH\Lib\Regex\split($input, $url_pattern);
        $p_elements = HH\Lib\Vec\map($texts, $text ==> <p>{$text}</p>);
        // Merge texts and links
        $pairs = HH\Lib\Vec\zip($p_elements, $a_elements);
        $elements = HH\Lib\Vec\flatten($pairs);
        // Because there's one more p element than a element, append last p
        $elements[] = HH\Lib\C\last($p_elements);
        $result = <p/>;
        $result->appendChild($elements);
        return $result;
    }
}
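A quick usage sketch (the input string is made up, and how you render the returned element depends on your xhp-lib version, so treat the echo line as an assumption):

$p = Urlifier::convertParagraphWithLink('See http://example.com/docs for details.');
// Older xhp-lib releases render synchronously via toString();
// newer ones are async (e.g. toStringAsync()).
echo $p->toString();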
I'm trying to get a string between 2 strings with preg_match.
The string is something like this (this is just an example):
<source src='http://website.com/384238/dsjfjsd.jpg' type='image/jpg' data-res='43543' lang='English'/>
I want the link; the data-res value is the one that varies. So I'm doing something like this:
preg_match("<source src='(.*)' type='image/jpg' data-res='43543",$input,$output);
I also tried this way
$output = trim(cut_str($input, '<source src='', ' type='image/jpg' data-res='43543'));
I think the problem is that I don't know how to represent the spaces or special characters. I'd also like advice on the best approach to solve this.
While you can do this with a regular expression, I would encourage you to use DOMDocument.
From there it is simple to grab all source tags using getElementsByTagName():
$dom = new DOMDocument;
libxml_use_internal_errors(true); // the HTML parser warns about tags it does not know, such as <source>
$dom->loadHTML($html);
libxml_use_internal_errors(false);
$source_tags = $dom->getElementsByTagName('source');
foreach ($source_tags as $source_tag) {
    echo 'Link: ' . $source_tag->attributes->getNamedItem('src')->nodeValue;
}
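As a side note, DOMElement::getAttribute() is a shorter way to read the same value:

echo 'Link: ' . $source_tag->getAttribute('src');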
This question might also help if you are interested in source tags with the data-res attribute.
Here is some code you could try:
// The Regular Expression filter: match an absolute http(s) URL
$reg_exSRC = "/(http|https):\/\/[a-zA-Z0-9\-\.]+\.[a-zA-Z]{2,3}(\/[^'\"\s]*)?/";
// The text you want to filter for urls
$text = "<source src='http://website.com/384238/dsjfjsd.jpg' type='image/jpg' data-res='43543' lang='English'/>";
// apply expression to the text
preg_match($reg_exSRC, $text, $url);
echo $url[0];
Why not parse it like this? It's faster than regex and easier to use.
$dom = new DOMDocument;
$dom->loadHTML('<source src="http://website.com/384238/dsjfjsd.jpg" type="image/jpg" data-res="43543" lang="English" />');
// We read it
$dataSource = $dom->getElementsByTagName('source');
// We loop on it
$dataRes = FALSE;
foreach ($dataSource as $data) {
    # We read the wanted field
    if (($dataAttr = $data->attributes->getNamedItem('data-res')->nodeValue) == "43543") {
        # We assign it
        $dataRes = $dataAttr;
        # Done - We end the loop here
        break;
    }
}
# We found it ?
if ($dataRes !== FALSE) {
    # Yes
    var_dump($dataRes);
} else {
    # No
    exit('Failed');
}
Warning: I did not test this code, but it should work.
I want to find, in any HTML source, all a tags whose href includes my URL.
I used this code:
preg_match_all("'<a.*?href=\"(http[s]*://[^>\"]*?)\"[^>]*?>(.*?)</a>'si", $target_source, $matches);
For example, I try to find all a tags that include http://www.emrekadan.com. How can I do it?
I'd simply use PHP's DOM parser for this purpose. This may seem harder than regex, but it's actually a lot easier and is the correct way to parse HTML.
$url = 'WEBSITE_TO_SEARCH_FOR';
$searchstring = 'YOUR_SEARCH_STRING';
$dom = new DOMDocument();
@$dom->loadHTMLFile($url);
$result = array();
foreach ($dom->getElementsByTagName('a') as $link) {
    $href = $link->getAttribute('href');
    if (stripos($href, $searchstring) !== FALSE) {
        $result[] = $href;
    }
}
if (!empty($result)) print_r($result);
Explanation:
Loads the given URL using the loadHTMLFile() method
Finds all <a> tags and loops through them
Uses stripos() to case-insensitively check if the href contains the given search term
If it does, it's pushed into the $result array
Note: If an empty string is passed as the filename or an empty file is named, a warning will be generated. I've used @ to hide that message, but it's generally regarded as bad practice. You can add additional checks to make sure the URL exists before trying to load it, as shown below.
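One such check could use get_headers() before loading (a rough sketch; note it costs an extra HTTP request and does not follow every redirect scenario):

$headers = @get_headers($url);
if ($headers === false || strpos($headers[0], '200') === false) {
    exit("Could not reach $url");
}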
Below is a link crawler that gets the URLs of a page up to a given depth. At the end of it I added a regular expression to match all the email addresses on the page that was just crawled. As you can see in the second part, it calls file_get_contents on the same page it just downloaded, meaning twice the execution time, bandwidth, etc.
The question is: how can I merge those two parts to use the first downloaded page and avoid getting it again? Thank you.
function crawler($url, $depth = 2) {
    $dom = new DOMDocument('1.0');
    if (!$parts || !@$dom->loadHTMLFile($url)) {
        return;
    }
    .
    .
    .
    //this is where the second part starts
    $text = file_get_contents($url);
    $res = preg_match_all("/[a-z0-9]+([_\\.-][a-z0-9]+)*@([a-z0-9]+([\.-][a-z0-9]+)*)+\\.[a-z]{2,}/i", $text, $matches);
}
Replace:
$text = file_get_contents($url);
with:
$text = $dom->saveHTML();
http://www.php.net/manual/en/domdocument.savehtml.php
Alternatively, in the first part of your function, you could save the HTML into a variable using file_get_contents, then pass it to $dom->loadHTML(). That way you can then reuse the variable with your regex, as sketched below.
http://www.php.net/manual/en/domdocument.loadhtml.php
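Concretely, that second approach could look like the following sketch (the elided middle of the function is left out, and names follow the question):

function crawler($url, $depth = 2) {
    $html = @file_get_contents($url);
    $dom = new DOMDocument('1.0');
    if ($html === false || !@$dom->loadHTML($html)) {
        return;
    }
    // ... link-crawling part unchanged ...
    // reuse $html instead of fetching the page a second time
    preg_match_all("/[a-z0-9]+([_\\.-][a-z0-9]+)*@([a-z0-9]+([\.-][a-z0-9]+)*)+\\.[a-z]{2,}/i", $html, $matches);
}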