I want to remove the URLs of certain sites from a string.
I used this:
<?php
$URLContent = '<p>Google</p><p>AnotherSite</p>';
$LinksToRemove = array('google.com', 'yahoo.com', 'msn.com');
$LinksToCheck = in_array('google.com', $LinksToRemove);

if (strpos($URLContent, $LinksToCheck) !== 0) {
    $URLContent = preg_replace('#<a.*?>([^>]*)</a>#i', '$1', $URLContent);
}

echo $URLContent;
?>
In this example, I want to remove the URLs of google.com, yahoo.com and msn.com only if any of them is found in the string $URLContent, but keep any other links.
The result of the previous code is:
<p>Google</p><p>AnotherSite</p>
but I want it to be:
<p>Google</p><p>AnotherSite</p>
One solution would be to explode your $URLContent and compare each value against $LinksToRemove.
It could be like this:
<?php
$URLContent = '<p>Google</p><p>AnotherSite</p>';
$urlList = explode('</p>', $URLContent);
$LinksToRemove = array('google.com', 'yahoo.com', 'msn.com');
$urlFormat = [];

foreach ($urlList as $url) {
    foreach ($LinksToRemove as $link) {
        if (str_contains($url, $link)) {
            $url = '<p>' . ucfirst(str_replace('.com', '', $link)) . '</p>';
            break;
        }
    }
    $urlFormat[] = $url;
}

$result = implode('', $urlFormat);
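A DOM-based approach is also possible. Below is a minimal sketch; it assumes the string still contains the full <a href="..."> markup (the anchors are stripped in the rendered example above, and the sample URLs here are illustrative), and it unwraps only those anchors whose href matches one of the blocked domains, keeping the link text and every other link:
<?php
// Minimal sketch: unwrap anchors whose href matches a blocked domain.
// The sample markup and URLs below are illustrative.
$URLContent    = '<p><a href="https://google.com">Google</a></p><p><a href="https://example.org">AnotherSite</a></p>';
$LinksToRemove = array('google.com', 'yahoo.com', 'msn.com');

$dom = new DOMDocument();
libxml_use_internal_errors(true);
$dom->loadHTML($URLContent, LIBXML_HTML_NOIMPLIED | LIBXML_HTML_NODEFDTD);

$anchors = $dom->getElementsByTagName('a');
for ($i = $anchors->length - 1; $i >= 0; $i--) {
    $a    = $anchors->item($i);
    $href = $a->getAttribute('href');
    foreach ($LinksToRemove as $domain) {
        if (stripos($href, $domain) !== false) {
            // Replace the <a> element with a plain text node holding its label
            $a->parentNode->replaceChild($dom->createTextNode($a->textContent), $a);
            break;
        }
    }
}

echo $dom->saveHTML();
?>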
I'm using a PHP script to create dynamic breadcrumb navigation. I found it several years ago in another forum, where it had originally been posted in 2006. I updated it by removing some deprecated functions, and it has worked well so far. It creates the crumbs from the page URL:
$path = $this->_homepath;
$html = '';
$html .= 'Start';
$parts = explode('/', $_SERVER['PHP_SELF']);
$partscount = count($parts);

for ($i = 1; $i < ($partscount - 1); $i++) {
    $path .= $parts[$i] . '/';
    $title = $parts[$i];
    if ($this->_usereplacements == true) {
        reset($this->_replacements);
        foreach ($this->_replacements as $search => $replace) {
            $title = str_replace($search, $replace, $title);
        }
    }
    if ($this->_replaceunderlines == true) {
        $title = str_replace('_', ' ', $title);
    }
    if ($this->_ucfirst == true) {
        $title = ucfirst($title);
    }
    if ($this->_ucwords == true) {
        $title = ucwords($title);
    }
    $html .= $this->_separator . '' . $title . '';
}
The current page title, i.e. the last element of the breadcrumb, is generated as follows:
$title = '';
if ($title == '') {
    $title = $parts[$partscount - 1];
    if ($this->_usereplacements == true) {
        reset($this->_replacements);
        foreach ($this->_replacements as $search => $replace) {
            $title = str_replace($search, $replace, $title);
        }
    }
    if ($this->_removeextension == true) {
        $title = substr($title, 0, strrpos($title, '.'));
    }
    if ($this->_replaceunderlines == true) {
        $title = str_replace('_', ' ', $title);
    }
    if ($this->_ucfirst == true) {
        $title = ucfirst($title);
    }
    if ($this->_ucwords == true) {
        $title = ucwords($title);
    }
}
$html .= $this->_separator . '<b>' . $title . '</b>';
I only copied the relevant parts of the script and left out, e.g., the function declarations.
After switching from PHP 7.4 to 8.0 I found that the script is messing up a bit. Consider a page with this URL: https://www.myhomepage.com/site1/site2/site3
For the crumb of site1 the script now generates the URL https://www.myhomepage.com/site1/site2/site1 instead of https://www.myhomepage.com/site1/, whereas for site2 it shows https://www.myhomepage.com/site1/site2/site1/site2 instead of https://www.myhomepage.com/site1/site2/. As you can see, the whole path https://www.myhomepage.com/site1/site2/ is always added as a prefix before the actual path of the crumb.
I haven't found a solution yet despite poring over this thing for two days. I suppose there has been some change in PHP 8.0 that causes this behaviour, but I haven't found any clues in the incompatibility list (https://www.php.net/manual/de/migration80.php). I printed $path and $title and they look like they should. When echoing $html after each iteration of the for loop it already shows the wrong URLs. That's why I think the for loop is probably the problem here. Do you have any suggestions?
Any help on this would be very much appreciated.
I finally was able to figure out what causes the strange behaviour of the script. In the declaration of the breadcrumb-constructing function there is $this->_homepath = '/';. Later on I assign $path = $this->_homepath;. When I echo $path it is empty instead of showing the aforementioned "/". So I directly set $path = '/'; and now everything works again as it should.
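In code, the change boils down to the following (a sketch; the surrounding class context is assumed):
// Before: relied on the property initialised in the class, which unexpectedly came up empty after the upgrade
// $path = $this->_homepath;

// After: set the home path explicitly so the crumb URLs are built from the site root
$path = '/';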
I have a string containing HTML and some placeholders.
The placeholders always start with {{ and end with }}.
I'm trying to encode the contents of the placeholders and then decode them later.
While they're encoded they ideally need to be valid HTML, as I want to use DOMDocument on the string. The problem I'm having is that it ends up being a mess, because the placeholders are usually something like:
<img src="{{image url="mediadir/someimage.jpg"}}"/>
Sometimes they're something like this though:
<p>Some text</p>
{{widget type="pagelink" pageid="1"}}
<div class="whatever">Content</div>
I was wondering what the best way of doing this is. Thanks!
UPDATE: CONTEXT
The overall problem is that I have a Magento site with a bunch of static links like:
Link text
And I need to replace them with widgets that point to the page, so that if the URL changes the links update. So, replace the above with something like this:
{{widget type="Magento\Cms\Block\Widget\Page\Link" anchor_text="Link Text" template="widget/link/link_block.phtml" page_id="123"}}
I have something which does this using the PHP DOMDocument functionality. It looks up CMS pages by their URL, finds the ID and replaces the anchor node with the widget text. This works fine if the page doesn't already contain any widgets or URL placeholders.
However, if it does, then the placeholders come out broken when processed through the DOMDocument saveHTML() function.
My idea for a solution was to encode the widgets and URL placeholders before passing the string to the DOMDocument loadHTML() function, and to decode them after the saveHTML() call, when it is a string again.
UPDATE: CODE
This is a cut-down version of what I've got currently. It's messy, but it does work in replacing pages with widgets.
$pageCollection = $this->pageCollectionFactory->create();
$collection = $pageCollection->load();

$findarray = array('http', 'mailto', '.pdf', '{', '}');
$findarray2 = array('mailto', '.pdf', '{', '}');
$specialurl = 'https://www.example.com';

$relative_links = 0;
$missing_pages = 0;
$fixed_links = 0;

try {
    foreach ($collection as $page) {
        $dom = new \DOMDocument();
        $content = $this->cleanMagentoCode( $page->getContent() );

        libxml_use_internal_errors(true); // Suppress warnings created by reading bad HTML
        $dom->loadHTML( $content, LIBXML_HTML_NOIMPLIED | LIBXML_HTML_NODEFDTD ); // Load HTML without doctype or html containing elements

        $elements = $dom->getElementsByTagName("a");
        for ($i = $elements->length - 1; $i >= 0; $i--) {
            $link = $elements->item($i);
            $found = false;

            // To clean up later
            if ( strpos($link->getAttribute('href'), $specialurl) !== FALSE ) {
                foreach ($findarray2 as $find) {
                    if (stripos($link->getAttribute('href'), $find) !== FALSE) {
                        $found = true;
                        break;
                    }
                }
            } else {
                foreach ($findarray as $find) {
                    if (stripos($link->getAttribute('href'), $find) !== FALSE) {
                        $found = true;
                        break;
                    }
                }
            }

            if ( strpos($link->getAttribute('href'), '#') === 0 ) {
                $found = true;
            }
            if ( $link->getAttribute('href') == '' ) {
                $found = true;
            }

            if ( !$found ) {
                $url = parse_url($link->getAttribute('href'));

                if ( isset( $url['path'] ) ) {
                    $identifier = rtrim( ltrim($url['path'], '/'), '/' );

                    try {
                        $pagelink = $this->pageRepository->getById($identifier);

                        // Fix link
                        if ($this->fixLinksFlag($input)) {
                            if ( stripos( $link->getAttribute('class'), "btn" ) !== FALSE ) {
                                $link_template = "widget/link/link_block.phtml";
                            } else {
                                $link_template = "widget/link/link_inline.phtml";
                            }

                            $widgetcode = '{{widget type="Magento\Cms\Block\Widget\Page\Link" anchor_text="' . $link->nodeValue . '" template="' . $link_template . '" page_id="' . $pagelink->getId() . '"}}';

                            $widget = $dom->createTextNode($widgetcode);
                            $link->parentNode->replaceChild($widget, $link);
                        }
                    } catch (\Exception $e) {
                        // Exception handling omitted from this cut-down version
                    }
                }
            }
        }

        $page->setContent( $this->dirtyMagentoCode( $dom->saveHTML() ) );
        $page->save();
    }
} catch (\Exception $e) {
    // Exception handling omitted from this cut-down version
}
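A minimal sketch of that encode/decode idea follows. The helper names and the token format are illustrative (not Magento API): every {{ ... }} block is swapped for an opaque text token before loadHTML(), and restored after saveHTML().
// Sketch: hide {{ ... }} placeholders from DOMDocument and restore them afterwards.
function encodePlaceholders(string $html, array &$map): string
{
    return preg_replace_callback('/\{\{.*?\}\}/s', function ($m) use (&$map) {
        $token = '@@PLACEHOLDER' . count($map) . '@@'; // plain text, survives loadHTML()/saveHTML()
        $map[$token] = $m[0];
        return $token;
    }, $html);
}

function decodePlaceholders(string $html, array $map): string
{
    return strtr($html, $map);
}

// Usage around the existing DOM code:
$map     = [];
$encoded = encodePlaceholders($content, $map);

$dom = new \DOMDocument();
libxml_use_internal_errors(true);
$dom->loadHTML($encoded, LIBXML_HTML_NOIMPLIED | LIBXML_HTML_NODEFDTD);
// ... replace the anchors as in the code above ...

$content = decodePlaceholders($dom->saveHTML(), $map);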
I'm currently working on something where I need to add UTM tags to all links, and I've got a couple of minor issues I can't figure out.
This is the code I'm using. The issue is that if a link already has a parameter like ?test=test, it refuses to add the UTM tags.
The other issue is minor, and I'm not sure it would make sense to change: instead of me having to specify a URL, it would be neat if it added UTM tags to ALL hrefs by default, without knowing the domain name.
Hope someone can help me out and push me in the right direction.
$url_modifier_domain = preg_quote('add-link.com');

$html_text = preg_replace_callback(
    '#((?:https?:)?//'.$url_modifier_domain.'(/[^\'"\#]*)?)(?=[\'"\#])#i',
    function ($matches) {
        $url_modifier = 'utm=some&medium=stuff';

        if (!isset($matches[2])) return $matches[1]."/?$url_modifier";

        $q = strpos($matches[2], '?');
        if ($q === false) return $matches[1]."?$url_modifier";
        if ($q == strlen($matches[2]) - 1) return $matches[1].$url_modifier;

        return $matches[1]."&$url_modifier";
    },
    $html);
Once you have detected the URLs, you can use parse_url() and parse_str() to take the URL apart, add utm and medium, and rebuild it without caring too much about the content of the GET parameters or the hash:
$url_modifier_domain = preg_quote('add-link.com');

$html_text = preg_replace_callback(
    '#((?:https?:)?//'.$url_modifier_domain.'(/[^\'"\#]*)?)(?=[\'"\#])#i',
    function ($matches) {
        $link = $matches[0];
        if (strpos($link, '#') !== false) {
            list($link, $hash) = explode('#', $link);
        }
        $res = parse_url($link);

        $result = '';
        if (isset($res['scheme'])) {
            $result .= $res['scheme'].'://';
        }
        if (isset($res['host'])) {
            $result .= $res['host'];
        }
        if (isset($res['path'])) {
            $result .= $res['path'];
        }
        if (isset($res['query'])) {
            parse_str($res['query'], $res['query']);
        } else {
            $res['query'] = [];
        }

        $res['query']['utm'] = 'some';
        $res['query']['medium'] = 'stuff';

        if (count($res['query']) > 0) {
            $result .= '?'.http_build_query($res['query']);
        }
        if (isset($hash)) {
            $result .= '#'.$hash;
        }

        return $result;
    },
    $html
);
As you can see, the code is longer but simpler.
Edit
I made some changes, searching for every href="xxx" in the text. If the link is not from add-link.com the script skips it; otherwise it tries to rebuild it in the best way possible:
$html = 'blabla a
a
a
a
a
a
a
a
a
a
a
';
$url_modifier_domain = preg_quote('add-link.com');

$html_text = preg_replace_callback(
    '/href="([^"]+)"/i',
    function ($matches) {
        $link = $matches[1];

        // ignoring outer links
        if (strpos($link, 'add-link.com') === false) return 'href="'.$link.'"';

        if (strpos($link, '#') !== false) {
            list($link, $hash) = explode('#', $link);
        }
        $res = parse_url($link);

        $result = '';
        if (isset($res['scheme'])) {
            $result .= $res['scheme'].'://';
        } else if (isset($res['host'])) {
            $result .= '//';
        }
        if (isset($res['host'])) {
            $result .= $res['host'];
        }
        if (isset($res['path'])) {
            $result .= $res['path'];
        } else {
            $result .= '/';
        }
        if (isset($res['query'])) {
            parse_str($res['query'], $res['query']);
        } else {
            $res['query'] = [];
        }

        $res['query']['utm'] = 'some';
        $res['query']['medium'] = 'stuff';

        if (count($res['query']) > 0) {
            $result .= '?'.http_build_query($res['query']);
        }
        if (isset($hash)) {
            $result .= '#'.$hash;
        }

        return 'href="'.$result.'"';
    },
    $html
);
var_dump($html_text);
The function below is designed to apply the rel="nofollow" attribute to all external links and to no internal links, unless the path matches a predefined root URL, defined as $my_folder below.
So given the variables...
$my_folder = 'http://localhost/mytest/go/';
$blog_url = 'http://localhost/mytest';
And the content...
internal
internal cloaked link
external
The end result, after replacement should be...
internal
internal cloaked link
external
Notice that the first link is not altered, since it's an internal link.
The link on the second line is also an internal link, but since it matches our $my_folder string, it gets the nofollow too.
The third link is the easiest, since it does not match the blog_url; it's obviously an external link.
However, in the script below, ALL of my links are getting nofollow. How can I fix the script to do what I want?
function save_rseo_nofollow($content) {
    $my_folder = $rseo['nofollow_folder'];
    $blog_url = get_bloginfo('url');
    preg_match_all('~<a.*>~isU', $content["post_content"], $matches);

    for ($i = 0; $i <= sizeof($matches[0]); $i++) {
        if (!preg_match('~nofollow~is', $matches[0][$i])
            && (preg_match('~' . $my_folder . '~', $matches[0][$i])
                || !preg_match('~' . $blog_url . '~', $matches[0][$i]))) {
            $result = trim($matches[0][$i], ">");
            $result .= ' rel="nofollow">';
            $content["post_content"] = str_replace($matches[0][$i], $result, $content["post_content"]);
        }
    }
    return $content;
}
Here is the DOMDocument solution...
$str = 'internal
internal cloaked link
external
external
external
external
';
$dom = new DOMDocument();
$dom->preserveWhiteSpace = FALSE;
$dom->loadHTML($str);

$a = $dom->getElementsByTagName('a');
$host = strtok($_SERVER['HTTP_HOST'], ':');

foreach ($a as $anchor) {
    $href = $anchor->attributes->getNamedItem('href')->nodeValue;

    if (preg_match('/^https?:\/\/' . preg_quote($host, '/') . '/', $href)) {
        continue;
    }

    $noFollowRel = 'nofollow';
    $oldRelAtt = $anchor->attributes->getNamedItem('rel');

    if ($oldRelAtt == NULL) {
        $newRel = $noFollowRel;
    } else {
        $oldRel = $oldRelAtt->nodeValue;
        $oldRel = explode(' ', $oldRel);
        if (in_array($noFollowRel, $oldRel)) {
            continue;
        }
        $oldRel[] = $noFollowRel;
        $newRel = implode(' ', $oldRel);
    }

    $newRelAtt = $dom->createAttribute('rel');
    $noFollowNode = $dom->createTextNode($newRel);
    $newRelAtt->appendChild($noFollowNode);
    $anchor->appendChild($newRelAtt);
}

var_dump($dom->saveHTML());
Output
string(509) "<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
internal
internal cloaked link
external
external
external
external
</body></html>
"
Try to make it more readable first, and only afterwards make your if rules more complex:
function save_rseo_nofollow($content) {
    $content["post_content"] =
        preg_replace_callback('~<(a\s[^>]+)>~isU', "cb2", $content["post_content"]);
    return $content;
}

function cb2($match) {
    list($original, $tag) = $match;   // regex match groups
    $my_folder = "/hostgator";        // re-add quirky config here
    $blog_url = "http://localhost/";

    if (strpos($tag, "nofollow")) {
        return $original;
    }
    elseif (strpos($tag, $blog_url) && (!$my_folder || !strpos($tag, $my_folder))) {
        return $original;
    }
    else {
        return "<$tag rel='nofollow'>";
    }
}
Gives the following output:
[post_content] =>
internal
<a href="http://localhost/mytest/go/hostgator" rel=nofollow>internal cloaked link</a>
<a href="http://cnn.com" rel=nofollow>external</a>
The problem in your original code might have been $rseo, which wasn't declared anywhere.
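A quick illustrative call (the post content here is made up; the array shape just mirrors the function's parameter):
// Illustrative test data only
$content = [
    'post_content' => '<a href="http://localhost/page">internal</a>'
        . ' <a href="http://localhost/hostgator/offer">internal cloaked link</a>'
        . ' <a href="http://cnn.com">external</a>',
];
print_r(save_rseo_nofollow($content));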
Try this one (PHP 5.3+). It can:
- skip a selected address
- allow a manually set rel parameter
And the code:
function nofollow($html, $skip = null) {
    return preg_replace_callback(
        "#(<a[^>]+?)>#is", function ($mach) use ($skip) {
            return (
                !($skip && strpos($mach[1], $skip) !== false) &&
                strpos($mach[1], 'rel=') === false
            ) ? $mach[1] . ' rel="nofollow">' : $mach[0];
        },
        $html
    );
}
Examples:
echo nofollow('something');
// will stay the same because it already contains a rel parameter
echo nofollow('something'); // ad
// add the rel="nofollow" parameter to the anchor
echo nofollow('something', 'localhost');
// skip this link as an internal link
Using regular expressions to do this job properly would be quite complicated. It would be easier to use an actual parser, such as the one from the DOM extension. DOM isn't very beginner-friendly, so what you can do is load the HTML with DOM then run the modifications with SimpleXML. They're backed by the same library, so it's easy to use one with the other.
Here's what it can look like:
$my_folder = 'http://localhost/mytest/go/';
$blog_url = 'http://localhost/mytest';
$html = '<html><body>
internal
internal cloaked link
external
</body></html>';
$dom = new DOMDocument;
$dom->loadHTML($html);
$sxe = simplexml_import_dom($dom);
// grab all <a> nodes with an href attribute
foreach ($sxe->xpath('//a[@href]') as $a)
{
    if (substr($a['href'], 0, strlen($blog_url)) === $blog_url
        && substr($a['href'], 0, strlen($my_folder)) !== $my_folder)
    {
        // skip all links that start with the URL in $blog_url, as long as they
        // don't start with the URL from $my_folder
        continue;
    }

    if (empty($a['rel']))
    {
        $a['rel'] = 'nofollow';
    }
    else
    {
        $a['rel'] .= ' nofollow';
    }
}
$new_html = $dom->saveHTML();
echo $new_html;
As you can see, it's really short and simple. Depending on your needs, you may want to use preg_match() in place of the strpos() stuff, for example:
// change the regexp to your own rules, here we match everything under
// "http://localhost/mytest/" as long as it's not followed by "go"
if (preg_match('#^http://localhost/mytest/(?!go)#', $a['href']))
{
    continue;
}
Note
I missed the last code block in the OP when I first read the question. The code I posted (and basically any solution based on DOM) is better suited to processing a whole page rather than an HTML block. Otherwise, DOM will attempt to "fix" your HTML and may add a <body> tag, a DOCTYPE, etc...
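If the input really is just a fragment, one common workaround (assuming libxml 2.7.8 or newer for the two constants) is to suppress the implied wrapper elements when loading, for example:
// Sketch: load a fragment without letting libxml add <html>/<body>/DOCTYPE
$dom = new DOMDocument;
libxml_use_internal_errors(true);
$dom->loadHTML($fragment, LIBXML_HTML_NOIMPLIED | LIBXML_HTML_NODEFDTD);
// ... apply the rel="nofollow" changes as above ...
echo $dom->saveHTML();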
Thanks @alex for your nice solution. But I was having a problem with Japanese text. I have fixed it in the following way. Also, this code can skip multiple domains via the $whiteList array.
public function addRelNoFollow($html, $whiteList = [])
{
    $dom = new \DOMDocument();
    $dom->preserveWhiteSpace = false;
    $dom->loadHTML(mb_convert_encoding($html, 'HTML-ENTITIES', 'UTF-8'));
    $a = $dom->getElementsByTagName('a');

    /** @var \DOMElement $anchor */
    foreach ($a as $anchor) {
        $href = $anchor->attributes->getNamedItem('href')->nodeValue;
        $domain = parse_url($href, PHP_URL_HOST);

        // Skip whiteList domains
        if (in_array($domain, $whiteList, true)) {
            continue;
        }

        // Check & get existing rel attribute values
        $noFollow = 'nofollow';
        $rel = $anchor->attributes->getNamedItem('rel');
        if ($rel) {
            $values = explode(' ', $rel->nodeValue);
            if (in_array($noFollow, $values, true)) {
                continue;
            }
            $values[] = $noFollow;
            $newValue = implode(' ', $values);
        } else {
            $newValue = $noFollow;
        }

        // Create new rel attribute
        $rel = $dom->createAttribute('rel');
        $node = $dom->createTextNode($newValue);
        $rel->appendChild($node);
        $anchor->appendChild($rel);
    }

    // There is a problem with saveHTML() and saveXML(): neither works correctly in Unix.
    // They do not save UTF-8 characters correctly when used in Unix, but they work in Windows.
    // So we need to do as follows. @see https://stackoverflow.com/a/20675396/1710782
    return $dom->saveHTML($dom->documentElement);
}
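A call might look like this; the markup and the whitelisted domain are only examples, and $helper stands for whatever object declares addRelNoFollow():
$html = '<a href="https://example.com/page">kept as-is</a> <a href="https://other.test/">gets nofollow</a>';
echo $helper->addRelNoFollow($html, ['example.com']);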
<?
$str = 'internal
internal cloaked link
external';

function test($x) {
    if (preg_match('#localhost/mytest/(?!go/)#i', $x[0]) > 0) return $x[0];
    return 'rel="nofollow" ' . $x[0];
}

echo preg_replace_callback('/href=[\'"][^\'"]+/i', 'test', $str);
?>
Here is another solution which has a whitelist option and adds the target="_blank" attribute.
It also checks whether there is already a rel attribute before adding a new one.
function Add_Nofollow_Attr($Content, $Whitelist = [], $Add_Target_Blank = true)
{
    $Whitelist[] = $_SERVER['HTTP_HOST'];
    foreach ($Whitelist as $Key => $Link)
    {
        $Host = preg_replace('#^https?://#', '', $Link);
        $Host = "https?://" . preg_quote($Host, '/');
        $Whitelist[$Key] = $Host;
    }

    if (preg_match_all("/<a .*?>/", $Content, $matches, PREG_SET_ORDER))
    {
        foreach ($matches as $Anchor_Tag)
        {
            $IS_Rel_Exist = $IS_Follow_Exist = $IS_Target_Blank_Exist = $Is_Valid_Tag = false;

            if (preg_match_all("/(\w+)\s*=\s*['|\"](.*?)['|\"]/", $Anchor_Tag[0], $All_matches2))
            {
                foreach ($All_matches2[1] as $Key => $Attr_Name)
                {
                    if ($Attr_Name == 'href')
                    {
                        $Is_Valid_Tag = true;
                        $Url = $All_matches2[2][$Key];

                        // bypass #.. or internal links like "/"
                        if (preg_match('/^\s*[#|\/].*/', $Url))
                        {
                            continue 2;
                        }

                        foreach ($Whitelist as $Link)
                        {
                            if (preg_match("#$Link#", $Url)) {
                                continue 3;
                            }
                        }
                    }
                    else if ($Attr_Name == 'rel')
                    {
                        $IS_Rel_Exist = true;
                        $Rel = $All_matches2[2][$Key];
                        preg_match("/[n|d]ofollow/", $Rel, $match, PREG_OFFSET_CAPTURE);
                        if (count($match) > 0)
                        {
                            $IS_Follow_Exist = true;
                        }
                        else
                        {
                            $New_Rel = 'rel="' . $Rel . ' nofollow"';
                        }
                    }
                    else if ($Attr_Name == 'target')
                    {
                        $IS_Target_Blank_Exist = true;
                    }
                }
            }

            $New_Anchor_Tag = $Anchor_Tag;
            if (!$IS_Rel_Exist)
            {
                $New_Anchor_Tag = str_replace(">", ' rel="nofollow">', $Anchor_Tag);
            }
            else if (!$IS_Follow_Exist)
            {
                $New_Anchor_Tag = preg_replace("/rel=[\"|'].*?[\"|']/", $New_Rel, $Anchor_Tag);
            }
            if ($Add_Target_Blank && !$IS_Target_Blank_Exist)
            {
                $New_Anchor_Tag = str_replace(">", ' target="_blank">', $New_Anchor_Tag);
            }
            $Content = str_replace($Anchor_Tag, $New_Anchor_Tag, $Content);
        }
    }
    return $Content;
}
To use it:
$Page_Content = 'internal
internal
google
example
stackoverflow';
$Whitelist = ["http://yoursite.com","http://localhost"];
echo Add_Nofollow_Attr($Page_Content,$Whitelist,true);
WordPress solution:
function replace__method($match) {
    list($original, $tag) = $match;  // regex match groups
    $my_folder = "/articles";        // re-add quirky config here
    $blog_url = 'https://' . $_SERVER['SERVER_NAME'];

    if (strpos($tag, "nofollow")) {
        return $original;
    }
    elseif (strpos($tag, $blog_url) && (!$my_folder || !strpos($tag, $my_folder))) {
        return $original;
    }
    else {
        return "<$tag rel='nofollow'>";
    }
}

add_filter( 'the_content', 'add_nofollow_to_external_links', 1 );

function add_nofollow_to_external_links( $content ) {
    $content = preg_replace_callback('~<(a\s[^>]+)>~isU', "replace__method", $content);
    return $content;
}
A good script which adds nofollow automatically and keeps the other attributes:
function nofollow(string $html, string $baseUrl = null) {
    return preg_replace_callback(
        '#<a([^>]*)>(.+)</a>#isU', function ($mach) use ($baseUrl) {
            list ($a, $attr, $text) = $mach;
            if (preg_match('#href=["\']([^"\']*)["\']#', $attr, $url)) {
                $url = $url[1];
                if (is_null($baseUrl) || !str_starts_with($url, $baseUrl)) {
                    if (preg_match('#rel=["\']([^"\']*)["\']#', $attr, $rel)) {
                        $relAttr = $rel[0];
                        $rel = $rel[1];
                    }
                    $rel = 'rel="' . ($rel ? (strpos($rel, 'nofollow') !== false ? $rel : $rel . ' nofollow') : 'nofollow') . '"';
                    $attr = isset($relAttr) ? str_replace($relAttr, $rel, $attr) : $attr . ' ' . $rel;
                    $a = '<a ' . $attr . '>' . $text . '</a>';
                }
            }
            return $a;
        },
        $html
    );
}
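A quick illustrative call; the markup and base URL are examples. Links under the given base URL are left untouched, every other link gets rel="nofollow" added (or appended to an existing rel):
$html = '<a href="https://mysite.test/page">internal</a> <a href="https://other.test/">external</a>';
echo nofollow($html, 'https://mysite.test');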