In PHP, I'm trying to validate the path of a URL with a regex.
The current regex that I have tested is this one:
^(\/\w+)+\.\w+(\?(\w+=[\w\d]+(&\w+=[\w\d]+)+)+)*$
public function isValidPath($urlPath)
{
if (!preg_match("#^(\/\w+)+\.\w+(\?(\w+=[\w\d]+(&\w+=[\w\d]+)+)+)*$#i", $urlPath)) { return false; }
else { return true; }
}
$arrUrl = parse_url($url);
$urlPath = $arrUrl['path'];
// valid path ?
if(isValidPath($urlPath)) { echo "OK"; }
else { echo "Invalid Path URL"; }
But it doesn't work with paths that just start with /.
- / -> valid path
- /aaa -> valid path
- /aaa/bbb -> valid path
- /aaa?q=x -> valid path
- aaa -> Not valid path
- /asd/asd./jsp -> Not valid path
- /asd/asd.jsp/ -> Not valid path
- /asd./asd.jsp -> Not valid path
- /asd///asd.js -> Not valid path
- /asd/asd.jsp&bar=baz?inga=42?quux -> Not valid path
I'm not a regex expert and I'm racking my brain trying to write one that seems very simple.
Here you go:
^\/(?!.*\/$)(?!.*[\/]{2,})(?!.*\?.*\?)(?!.*\.\/).*
Sample function:
function validateUrl($url){
if (preg_match('%^/(?!.*\/$)(?!.*[\/]{2,})(?!.*\?.*\?)(?!.*\.\/).*%im', $url)) {
return true;
} else {
return false;
}
}
I've used some negative lookaheads that exclude certain patterns.
It matches only the "valid paths" you specified.
Regex101 Demo
I use @cmorrissey's approach, which actually does not require a regex:
$result = filter_var('http://www.example.com' . $path, FILTER_VALIDATE_URL);
if ($result !== false) {
$result = true;
}
$result is then true or false depending on the path's validity. Note that paths should always start with a /; otherwise they are only a path fragment rather than a complete path.
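A rough sketch of wrapping this into the question's isValidPath() helper, reusing the dummy www.example.com host from the snippet above:
function isValidPath($urlPath)
{
    // A complete path must start with a slash
    if ($urlPath === '' || $urlPath[0] !== '/') {
        return false;
    }
    // Prefix a dummy host so FILTER_VALIDATE_URL can judge the path part
    return filter_var('http://www.example.com' . $urlPath, FILTER_VALIDATE_URL) !== false;
}
Note that FILTER_VALIDATE_URL is more permissive than the rules listed in the question (it accepts repeated slashes, for example), so treat this as a quick sanity check rather than a strict path validator.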
I know there is a LOT of info on the web regarding this subject, but I can't seem to figure it out the way I want.
I'm trying to build a function which strips the domain name from a URL:
http://blabla.com -> blabla
www.blabla.net -> blabla
http://www.blabla.eu -> blabla
Only the plain name of the domain is needed.
With parse_url I get the domain filtered but that is not enough.
I have 3 functions that strip the domain, but I still get some wrong outputs:
function prepare_array($domains)
{
$prep_domains = explode("\n", str_replace("\r", "", $domains));
$domain_array = array_map('trim', $prep_domains);
return $domain_array;
}
function test($domain)
{
$domain = explode(".", $domain);
return $domain[1];
}
function strip($url)
{
$url = trim($url);
$url = preg_replace("/^(http:\/\/)*(www.)*/is", "", $url);
$url = preg_replace("/\/.*$/is" , "" ,$url);
return $url;
}
Every possible domain, URL and extension is allowed. After the function is finished, it must return an array of only the domain names themselves.
UPDATE:
Thanks for all the suggestions!
I figured it out with the help from you all.
function test($url)
{
// Check if the url begins with http:// www. or both
// If so, replace it
if (preg_match("/^(http:\/\/|www.)/i", $url))
{
$domain = preg_replace("/^(http:\/\/)*(www.)*/is", "", $url);
}
else
{
$domain = $url;
}
// Now all that's left is the domain and the extension
// Only return the needed first part without the extension
$domain = explode(".", $domain);
return $domain[0];
}
How about
$wsArray = explode(".",$domain); //Break it up into an array.
$extension = array_pop($wsArray); //Get the Extension (last entry)
$domain = array_pop($wsArray); // Get the domain
http://php.net/manual/en/function.array-pop.php
Ah, your problem lies in the fact that TLDs can have either one or two parts, e.g. .com vs .co.uk.
What I would do is maintain a list of TLDs. With the result from parse_url, go over the list and look for a match. Strip out the TLD, explode on '.', and the last part will be in the format you want.
This does not seem as efficient as it could be but, with TLDs being added all the time, I cannot see any other deterministic way.
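A minimal sketch of that approach; the function name stripDomainName and the short TLD list are illustrative only (a real implementation would use something like the public suffix list):
function stripDomainName($url, array $tlds = array('.co.uk', '.com.au', '.com', '.net', '.org', '.eu'))
{
    // parse_url() only fills in 'host' when a scheme is present
    if (!preg_match('#^https?://#i', $url)) {
        $url = 'http://' . $url;
    }
    $host = parse_url($url, PHP_URL_HOST);
    if (empty($host)) {
        return null;
    }
    // Strip a known TLD suffix, longest first so .co.uk wins over .uk
    usort($tlds, function ($a, $b) { return strlen($b) - strlen($a); });
    foreach ($tlds as $tld) {
        if (substr($host, -strlen($tld)) === $tld) {
            $host = substr($host, 0, -strlen($tld));
            break;
        }
    }
    // The last remaining label is the bare domain name
    $parts = explode('.', $host);
    return end($parts);
}
// stripDomainName('http://www.blabla.eu') -> 'blabla'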
OK... this is messy, and you should spend some time optimizing and caching previously derived domains. You also need a friendly nameserver, and the last catch is that the domain must have an "A" record in its DNS.
This attempts to assemble the domain name in reverse order until it can resolve to a DNS "A" record.
At any rate, this was bugging me, so I hope this answer helps:
<?php
$wsHostNames = array(
"test.com",
"http://www.bbc.com/news/uk-34276525",
"google.uk.co"
);
foreach ($wsHostNames as $hostName) {
echo "checking $hostName" . PHP_EOL;
$wsWork = $hostName;
//attempt to strip out full paths to just host
$wsWork = parse_url($hostName, PHP_URL_HOST);
if ($wsWork != "") {
echo "Was able to cleanup $wsWork" . PHP_EOL;
$hostName = $wsWork;
} else {
//Probably had no path info or malformed URL
//Try to check it anyway
echo "No path to strip from $hostName" . PHP_EOL;
}
$wsArray = explode(".", $hostName); //Break it up into an array.
$wsHostName = "";
//Build domain one segment a time probably
//Code should be modified not to check for the first segment (.com)
while (!empty($wsArray)) {
$newSegment = array_pop($wsArray);
$wsHostName = $newSegment . $wsHostName;
echo "Checking $wsHostName" . PHP_EOL;
if (checkdnsrr($wsHostName, "A")) {
echo "host found $wsHostName" . PHP_EOL;
echo "Domain is $newSegment" . PHP_EOL;
continue(2);
} else {
//This segment didn't resolve - keep building
echo "No Valid A Record for $wsHostName" . PHP_EOL;
$wsHostName = "." . $wsHostName;
}
}
//if you get to here in the loop it could not resolve the host name
}
?>
Try with preg_replace.
Something like:
$domain = preg_replace($regex, '$1', $url);
regex
I have the following seemingly simple code in PHP, but the problem is that it shows all valid links as "not valid"; any help appreciated:
<?php
$m = "urllist.txt";
$n = fopen($m, "r");
while (!feof($n)) {
$l = fgets($n);
if (filter_var($l, FILTER_VALIDATE_URL) === FALSE) {
echo "NOT VALID - $l<br>";
} else {
echo "VALID - $l<br>";
}
}
fclose($n);
?>
The string returned by fgets() contains a trailing newline character that needs to be trimmed before you can validate it. Try out the following code; I hope this will help you:
<?php
$m = "urllist.txt";
$n = fopen($m, "r");
while (!feof($n)) {
$l = fgets($n);
if(filter_var(trim($l), FILTER_VALIDATE_URL)) {
echo "VALID - $l<br>";
} else {
echo "NOT VALID - $l<br>";
}
}
fclose($n);
?>
I have tried with following urls:
http://stackoverflow.com/
https://www.google.co.in/
https://www.google.co.in/?gfe_rd=cr&ei=bf4HVLOmF8XFoAOg_4HoCg&gws_rd=ssl
www.google.com
http://www.example.com
example.php?name=Peter&age=37
and get following result:
VALID - http://stackoverflow.com/
VALID - https://www.google.co.in/
VALID - https://www.google.co.in/?gfe_rd=cr&ei=bf4HVLOmF8XFoAOg_4HoCg&gws_rd=ssl
NOT VALID - www.google.com
VALID - http://www.example.com
NOT VALID - example.php?name=Peter&age=37
Maybe you have some symbols at the end of each line, such as '\n'.
I think you can just use the trim function before validating $l, like this:
filter_var(trim($l), FILTER_VALIDATE_URL) !== FALSE
Maybe this will help you.
Please try with the different filter flags available to see where it fails:
FILTER_FLAG_SCHEME_REQUIRED - Requires the URL to be an RFC-compliant URL (like http://example)
FILTER_FLAG_HOST_REQUIRED - Requires the URL to include a host name (like http://www.example.com)
FILTER_FLAG_PATH_REQUIRED - Requires the URL to have a path after the domain name (like www.example.com/example1/test2/)
FILTER_FLAG_QUERY_REQUIRED - Requires the URL to have a query string (like "example.php?name=Peter&age=37")
(cc of http://www.w3schools.com/php/filter_validate_url.asp)
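For example, a quick sketch (filter_var() returns the filtered URL on success and false on failure; the flags can be combined with a bitwise OR):
$url = "example.php?name=Peter&age=37";
var_dump(filter_var($url, FILTER_VALIDATE_URL));                             // false: no scheme or host
var_dump(filter_var($url, FILTER_VALIDATE_URL, FILTER_FLAG_PATH_REQUIRED));  // false
var_dump(filter_var("http://www.example.com/test/?q=1", FILTER_VALIDATE_URL,
                    FILTER_FLAG_PATH_REQUIRED | FILTER_FLAG_QUERY_REQUIRED)); // the URL itself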
You can try the good old regex too:
if (!preg_match("/\b(?:(?:https?|ftp):\/\/|www\.)[-a-z0-9+&##\/%?=~_|!:,.;]*[-a-z0-9+&##\/%=~_|]/i",$url))
Try this code. It should be helpful. I have tested it and it's working.
<?php
$m = "urllist.txt";
$n = fopen($m, "r");
while (!feof($n)) {
$l = fgets($n);
if(filter_var(trim($l), FILTER_VALIDATE_URL)) {
echo "URL is valid";
}
else{
echo "URL is not valid";
}
}
fclose($n);
?>
Here is the DEMO
I need to detect if a provided URL matches the one currently navigated to. Mind you the following are all valid, yet semantically equivalent URLs:
https://www.example.com/path/to/page/index.php?parameter=value
https://www.example.com/path/to/page/index.php
https://www.example.com/path/to/page/
https://www.example.com/path/to/page
http://www.example.com/path/to/page
//www.example.com/path/to/page
//www/path/to/page
../../../path/to/page
../../to/page
../page
./
The final function must return true if the given URL points back to the current page, or false if it does not. I do not have a list of expected URLs; this will be used for a client who just wants links to be disabled when they link to the current page. Note that I wish to ignore parameters, as these do not indicate the current page on this site. I got as far as using the following regex:
/^((https?:)?\/\/www(\.example\.com)\/path\/to\/page\/?(index.php)?(\?.+=.*(\&.+=.*)*)?)|(\.\/)$/i
where https?, www, \.example\.com, \/path\/to\/page, and index.php are dynamically detected with $_SERVER["PHP_SELF"] and made into regex form, but that doesn't match the relative URLs like ../../to/page.
EDIT: I got a bit farther with the regex: refiddle.com/gv8
now I'd just need PHP to dynamically create the regex for any given page.
First off, there is no way to predict the total list of valid URLs that will result in display of the current page, since you can't predict (or control) external links that might link back to the page. What if someone uses TinyURL or bit.ly? A regex will not cut the mustard.
If what you need is to insure that a link does not result in the same page, then you need to TEST it. Here's a basic concept:
Every page has a unique ID. Call it a serial number. It should be persistent. The serial number should be embedded somewhere predictable (though perhaps invisibly) within the page.
As the page is created, your PHP will need to walk through all the links for each page, visit each one, and determine whether the link resolves to a page with a serial number that matches the calling page's serial number.
If the serial number does not match, display the link as a link. Otherwise, display something else.
Obviously, this will be an arduous, resource-intensive process for page production. You really don't want to solve your problem this way.
With your "ultimate goal" comment in mind, I suspect your best approach is to be approximate. Here are some strategies...
First option is also the simplest. If you're building a content management system that USUALLY creates links in one format, just support that format. Wikipedia's approach works because a [[link]] is something THEY generate, so THEY know how it's formatted.
Second is more the direction you've gone with your question. The elements of a URL are "protocol", "host", "path" and "query string". You can break them out into a regex, and possibly get it right. You've already stated that you intend to ignore the query string. So ... start with '((https?:)?//(www\.)?example\.com)?' . $_SERVER['SCRIPT_NAME'] and add endings to suit. Other answers are already helping you with this.
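As a rough sketch of building that regex dynamically (this assumes the page is served as .../index.php, ignores the query string, and uses $link for the href being tested; adjust the endings to suit):
$host = preg_quote($_SERVER['HTTP_HOST'], '%');             // e.g. www\.example\.com
$dir  = preg_quote(dirname($_SERVER['SCRIPT_NAME']), '%');  // e.g. /path/to/page
// Optional scheme/host, optional trailing "/" or "/index.php", optional query string
$regex = '%^((https?:)?//' . $host . ')?' . $dir . '(/(index\.php)?)?(\?.*)?$%i';
if (preg_match($regex, $link)) {
    // $link points back to the current page (relative ../ forms still need the array approach below)
}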
Third option is quite a bit more complex, but gives you more fine-grained control over your test. As with the last option, you have the various URL elements. You can test for the validity of each without using a regex. For example:
$a = array(); // init array for valid URLs
// Step through each variation of our path...
foreach([$_SERVER['SCRIPT_NAME'], $_SERVER['REQUEST_URI']] as $path) {
// Step through each variation of our host...
foreach ([$_SERVER['HTTP_HOST'], explode(".", $_SERVER['HTTP_HOST'])[0]] as $server) {
// Step through each variation of our protocol...
foreach (['https://','http://','//'] as $protocol) {
// Set the URL as a key.
$a[ $protocol . $server . $path ] = 1;
}
}
// Also for each path, step through directories and parents...
$apath=explode('/', $path); // turn the path into an array
unset($apath[0]); // strip the leading slash
for( $i = 1; $i <= count($apath); $i++ ) {
if (strlen($apath[$i])) {
$a[ str_repeat("../", 1+count($apath)-$i) . implode("/", $apath) ] = 1;
// add relative paths
}
unset($apath[$i]);
}
$a[ "./" . implode("/", $apath) ] = 1; // add current directory
}
Then simply test whether the link (minus its query string) is an index within the array. Or adjust to suit; I'm sure you get the idea.
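For instance, assuming $link holds the href being tested:
$isCurrentPage = isset($a[strtok($link, '?')]); // strtok() drops the query string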
I like this third solution the best.
A regex isn't actually necessary to strip off all the query parameters. You could use strtok():
$url = strtok($url, '?');
And, to check the output for your URL array:
$url_list = <<<URL
https://www.example.com/path/to/page/index.php?parameter=value
https://www.example.com/path/to/page/index.php
...
./?parameter=value
./
URL;
$urls = explode("\n", $url_list);
foreach ($urls as $url) {
$url = strtok($url, '?'); // remove everything after ?
echo $url."\n";
}
As a function (could be improved):
function checkURLMatch($url, $url_array) {
$url = strtok($url, '?'); // remove everything after ?
if (in_array($url, $url_array)) {
// url exists in the array
return true;
} else {
// url is not in the array
return false;
}
}
See it live!
You can use this approach:
function checkURL($me, $s) {
$dir = dirname($me) . '/';
// you may need to refine this
$s = preg_filter(array('~^//~', '~/$~', '~\?.*$~', '~\.\./~'),
array('', '', '', $dir), $s);
// parse resulting URL
$url = parse_url($s);
var_dump($url);
// match parsed URL's path with self
return ($url['path'] === $me);
}
// your page's URL with stripped out .php
$me = str_replace('.php', '', $_SERVER['PHP_SELF']);
// assume this is the URL you are matching against
$s = '../page/';
// compare $me with $s
$ret = checkURL($me, $s);
var_dump($ret);
Live Demo: http://ideone.com/OZZM53
As I have been paid to work on this for the last couple of days, I wasn't just sitting around waiting for an answer. I've come up with one that works on my test platform; what does everyone else think? It feels a little bloated, but it also feels bulletproof.
Debug echoes are left in, in case you want to echo out some stuff.
global $debug;$debug = false; // toggle debug echoes and var_dumps
/**
* Returns a boolean indicating whether the given URL is the current one.
*
* @param $otherURL the other URL, as a string. Can be any URL, relative or canonical. Invalid URLs will not match.
*
* @return true iff the given URL points to the same place as the current one
*/
function isCurrentURL($otherURL)
{global $debug;
if($debug)echo"<!--\r\nisCurrentURL($otherURL)\r\n{\r\n";
if ($thisURL == $otherURL) // unlikely, but possible. Might as well check.
return true;
// BEGIN Parse other URL
$otherProtocol = parse_url($otherURL);
$otherHost = $otherProtocol["host"] or null; // if $otherProtocol["host"] is set and is not null, use it. Else, use null.
$otherDomain = explode(".", $otherHost) or $otherDomain;
$otherSubdomain = array_shift($otherDomain); // subdom only
$otherDomain = implode(".", $otherDomain); // domain only
$otherFilepath = $otherProtocol["path"] or null;
$otherProtocol = $otherProtocol["scheme"] or null;
// END Parse other URL
// BEGIN Get current URL
#if($debug){echo '$_SERVER == '; var_dump($_SERVER);}
$thisProtocol = $_SERVER["HTTP_X_FORWARDED_PROTO"]; // http or https
$thisHost = $_SERVER["HTTP_HOST"]; // subdom or subdom.domain.tld
$thisDomain = explode(".", $thisHost);
$thisSubdomain = array_shift($thisDomain); // subdom only
$thisDomain = implode(".", $thisDomain); // domain only
if ($thisDomain == "")
$thisDomain = $otherDomain;
$thisFilepath = $_SERVER["PHP_SELF"]; // /path/to/file.php
$thisURL = "$thisProtocol://$thisHost$thisFilepath";
// END Get current URL
if($debug)echo"Current URL is $thisURL ($thisProtocol, $thisSubdomain, $thisDomain, $thisFilepath).\r\n";
if($debug)echo"Other URL is $otherURL ($otherProtocol, $otherHost, $otherFilepath).\r\n";
$thisDomainRegexed = isset($thisDomain) && $thisDomain != null && $thisDomain != "" ? "(\." . str_replace(".","\.",$thisDomain) . ")?" : ""; // prepare domain for insertion into regex
// v this makes the last slash before index.php optional
$regex = "/^(($thisProtocol:)?\/\/$thisSubdomain$thisDomainRegexed)?" . preg_replace('/index\\\..+$/i','?(index\..+)?', str_replace(array(".", "/"), array("\.", "\/"), $thisFilepath)) . '$/i';
if($debug)echo "\r\nregex is $regex\r\nComparing regex against $otherURL";
if (preg_match($regex, $otherURL))
{
if($debug)echo"\r\n\tIt's a match! Returning true...\r\n}\r\n-->";
return true;
}
else
{
if($debug)echo"\r\n\tOther URL is NOT a fully-qualified URL in this subdomain. Checking if it is relative...";
if($otherURL == $thisFilepath) // somewhat likely
{
if($debug)echo"\r\n\t\tOhter URL and this filepath are an exact match! Returning true...\r\n}\r\n-->";
return true;
}
else
{
if($debug)echo"\r\n\t\tFilepath is not an exact match. Testing against regex...";
$regex = regexFilepath($thisFilepath);
if($debug)echo"\r\n\t\tNew Regex is $regex";
if($debug)echo"\r\n\t\tComparing regex against $otherFilepath...";
if (preg_match($regex, $otherFilepath))
{
if($debug)echo"\r\n\t\t\tIt's a match! Returning true...\r\n}\r\n-->";
return true;
}
}
}
if($debug)echo"\r\nI tried my hardest, but couldn't match $otherURL to $thisURL. Returning false...\r\n}\r\n-->";
return false;
}
/**
* Uses the given filepath to create a regex that will match it in any of its relative representations.
*
* @param $path the filepath to be converted
*
* @return a regex that matches all relative forms of the given filepath
*/
function regexFilepath($path)
{global $debug;
if($debug)echo"\r\nregexFilepath($path)\r\n{\r\n";
$filepathArray = explode("/", $path);
if (count($filepathArray) == 0)
throw new Exception("given parameter not a filepath: $path");
if ($filepathArray[0] == "") // this can happen if the path starts with a "/"
array_shift($filepathArray); // strip the first element off the array
$isIndex = preg_match("/^index\..+$/i", end($filepathArray));
$filename = array_pop($filepathArray);
if($debug){var_dump($filepathArray);}
$ret = '';
foreach($filepathArray as $i)
$ret = "(\.\.\/$ret$i\/)?"; // make a pseudo-recursive relative filepath
if($debug)echo "\r\n$ret";
$ret = preg_replace('/\)\?$/', '?)', $ret); // remove the last '?' and add one before the last '\/'
if($debug)echo "\r\n$ret";
$ret = '/^' . ($ret == '' ? '\.\/' : "((\.\/)|$ret)") . ($isIndex ? '(index\..+)?' : str_replace('.', '\.', $filename)) . '$/i'; // if this filepath leads to an index.php (etc.), then that filename is implied and irrelevant.
if($debug)echo"\r\n}\r\n";
return $ret;
}
This seems to match everything I need it to match, and not what I don't need it to.
I am attempting to create a PHP function which will check if the passed URL is a short URL. Something like this:
/**
* Check if a URL is a short URL
*
* @param string $url
* @return bool
*/
function _is_short_url($url){
// Code goes here
}
I know that a simpler and more reliable way would be to check for a 301 redirect, but this function aims at saving an external request just for that check. Nor should the function check against a list of URL shorteners, as that would be a less scalable approach.
Here are a few possible checks I was thinking of:
Overall URL length - May be a max of 30 characters
URL length after last '/' - May be a max of 10 characters
Number of '/' after protocol (http://) - Max 2
Max length of host
Any thoughts on a possible approach or a more exhaustive checklist for this?
EDIT: This function is just an attempt to save an external request, so it's OK to return true for a non-short URL (but it must not return false for a real short one). After passing through this function, I would expand all short URLs anyway by checking 301 redirects. This is just to eliminate the obvious ones.
I would not recommend using a regex, as it will be too complex and difficult to understand. Here is PHP code to check all your constraints:
function _is_short_url($url){
// 1. Overall URL length - May be a max of 30 charecters
if (strlen($url) > 30) return false;
$parts = parse_url($url);
// No query string & no fragment
if ($parts["query"] || $parts["fragment"]) return false;
$path = $parts["path"];
$pathParts = explode("/", $path);
// 3. Number of '/' after protocol (http://) - Max 2
if (count($pathParts) > 2) return false;
// 2. URL length after last '/' - May be a max of 10 characters
$lastPath = array_pop($pathParts);
if (strlen($lastPath) > 10) return false;
// 4. Max length of host
if (strlen($parts["host"]) > 10) return false;
return true;
}
Here is a small function which checks all your requirements. I was able to do it without using a complex regex, only preg_split. You should be able to adapt it easily.
<?php
var_dump(_isShortUrl('http://bit.ly/foo'));
function _isShortUrl($url)
{
// Check for max URL length (30)
if (strlen($url) > 30) {
return false;
}
// Check if there are more than two URL parts/slashes (more than 5 split values)
$parts = preg_split('/\//', $url);
if (count($parts) > 5) {
return false;
}
// Check for max host length (10)
$host = $parts[2];
if (strlen($host) > 10) {
return false;
}
// Check for max length of last URL part (after last slash)
$lastPart = array_pop($parts);
if (strlen($lastPart) > 10) {
return false;
}
return true;
}
If I were you, I would test whether the URL returns a 301 redirect, and then test whether the redirect points to another website:
function _is_short_url($url) {
$options['http']['method'] = 'HEAD';
stream_context_set_default($options); # don't fetch the full page
$headers = get_headers($url,1);
if ( isset($headers[0]) ) {
if (strpos($headers[0],'301')!==false && isset($headers['Location'])) {
$location = $headers['Location'];
$url = parse_url($url);
$location = parse_url($location);
if ($url['host'] != $location['host'])
return true;
}
}
return false;
}
echo (int)_is_short_url('http://bit.ly/1GoNYa');
I have an input box that tells users to enter a link from imgur.com.
I want a script to check that the link is for the specified site, but I'm not sure how to do it.
The links are as follows: http://i.imgur.com/He9hD.jpg
Please note that after the /, the text may vary, e.g. it may not be a jpg, but the main domain is always http://i.imgur.com/.
Any help appreciated.
Thanks, Josh.(Novice)
Try parse_url()
try {
if (!preg_match('#^(https?|ftp)://#', $_POST['url']) AND !substr_count($_POST['url'], '://')) {
// Handle URLs that do not have a scheme
$url = sprintf("%s://%s", 'http', $_POST['url']);
} else {
$url = $_POST['url'];
}
$input = parse_url($url);
if (!$input OR !isset($input['host'])) {
// Either the parsing has failed, or the URL was not absolute
throw new Exception("Invalid URL");
} elseif ($input['host'] != 'i.imgur.com') {
// The host does not match
throw new Exception("Invalid domain");
}
// Prepend URL with scheme, e.g. http://domain.tld
$host = sprintf("%s://%s", $input['scheme'], $input['host']);
} catch (Exception $e) {
// Handle error
}
substr($input, 0, strlen('http://i.imgur.com/')) === 'http://i.imgur.com/'
Check this, using stripos
if(stripos(trim($url), "http://i.imgur.com")===0){
// the link is from imgur.com
}
Try this:
<?php
if(preg_match('#^http://i\.imgur\.com/#', $_POST['url']))
echo 'Valid img!';
else
echo 'Img not valid...';
?>
Where $_POST['url'] is the user input.
I haven't tested this code.
$url_input = $_POST['input_box_name'];
if ( strpos($url_input, 'http://i.imgur.com/') !== 0 )
...
Several ways of doing it.. Here's one:
if ('http://i.imgur.com/' == substr($link, 0, 19)) {
...
}