I need to match the "base" URL. What I mean is:
Not match --> http://google.com
Not match --> http://www.google.com
Not match --> www.google.com
Match! --> google.com
I was trying to use a negative look behind to make sure there was no http:// or www, but it didn't seem to work correctly.
Does this have to be done with only one regex?
You could have a first regex that matches all URL-like strings. Something like this:
\b.+?\.\w{2,4}\b
And then filter all matches and keep the ones that do not match the following:
^(http://|www)
Although, to be honest, I wouldn't use a regex unless it is strictly necessary for this.
Note:
You can always find a better regex to match the URLs. The thing here is that they may not start with http:// or www, so we can't restrict the regex too much. Be ready to get matches that are not URLs at all, like:
yesterday.but in "I was there yesterday.but no one saw me"
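A minimal PHP sketch of that two-step approach (I've tightened .+? to \S+? so a candidate can't run across spaces; the sample text is just for illustration):
<?php
// Step 1: find everything that looks like something.tld.
// Step 2: drop the candidates that start with http:// or www.
$text = 'Visit google.com or http://www.google.com, see www.google.com too.';

preg_match_all('~\b\S+?\.\w{2,4}\b~', $text, $matches);

$bare = array_filter($matches[0], function ($candidate) {
    return !preg_match('~^(http://|www)~', $candidate);
});

print_r($bare); // Array ( [0] => google.com )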
I always have a kind of rough time working with regexes. I'm trying to make a regex that matches routes when the route has parameters set:
For instance:
/post/1 matches /post/{id}
/post/5/ doesn't match /post/{id}
/post/6/comments/4 matches /post/{id}/comments/{comment}
/post/a-random-slug matches /post/{id} or /post/{slug} (whatever you want to name the param)
/user matches /user, but not /user/
What I currently did is create a regex for every route, and then match the current URI against that route regex.
What I currently have is:
My regex
In this example I try to make a regex for the route: /post/{param1}/{param2}. Meaning it should match /post/, then a parameter, then another parameter, but nothing after that second parameter.
As you can see, ^(\/post\b)(\/.{1,}\/)(.{1,}\b)$ matches /post/what-is-your-name/5, and when I add another / it doesn't match anymore. However, if I add characters after that slash, it starts to match again.
Meaning that:
/post/what-is-your-name/5/ doesn't match
/post/what-is-your-name/5/more does match
Does anyone have an idea how I can accomplish the first example?
I'm far from someone who knows a lot about regexes, so if someone sees a better way to match URIs against routes then please let me know.
Hope this will help you out
Regex: ^(?:\/post\b)(?:\/[\w-]+){2}$
Regex demo
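If you'd rather generate these patterns than hand-write one per route, here is a rough sketch; routeToRegex is a hypothetical helper name of mine, and it assumes a parameter is one or more non-slash characters:
<?php
// Turn a route such as /post/{id}/comments/{comment} into an anchored
// regex, with each {param} becoming a named group of non-slash characters.
function routeToRegex(string $route): string
{
    $pattern = preg_replace('/\{(\w+)\}/', '(?P<$1>[^/]+)', $route);
    return '#^' . $pattern . '$#';
}

var_dump(preg_match(routeToRegex('/post/{id}'), '/post/1'));                               // int(1)
var_dump(preg_match(routeToRegex('/post/{id}'), '/post/5/'));                              // int(0)
var_dump(preg_match(routeToRegex('/post/{id}/comments/{comment}'), '/post/6/comments/4')); // int(1)
Because anything except a slash is allowed inside a parameter, this also covers /post/a-random-slug.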
I'm trying to write a regexp.
Some background info: I am trying to see if the REQUEST_URI of my website's URL contains another URL, like these:
http://mywebsite.com/google.com/search=xyz
However, the URL won't always contain the 'http' or the 'www', so the pattern should also match strings like:
http://mywebsite.com/yahoo.org/search=xyz
http://mywebsite.com/www.yahoo.org/search=xyz
http://mywebsite.com/msn.co.uk
http://mywebsite.com/http://msn.co.uk
There are a bunch of regexes out there to match URLs, but none I have found do an optional match on the http and www.
I'm wondering if the pattern to match could be something like:
^([a-z]).(com|ca|org|etc)(.)
I thought maybe another option was to just match any string that has a dot (.) in it (as the other REQUEST_URIs in my application typically won't contain dots).
Does this make sense to anyone?
I'd really appreciate some help with this; it's been blocking my project for weeks.
Thank you very much
-Tim
I suggest a simple approach, essentially building on what you said: just match anything with a dot in it, but handle the forward slashes too, so you capture everything and don't miss unusual URLs. Something like:
^((?:https?:\/\/)?[^./]+(?:\.[^./]+)+(?:\/.*)?)$
It reads as:
optional http:// or https://
non-dot-or-forward-slash characters
one or more sets of a dot followed by non-dot-or-forward-slash characters
optional forward slash and anything after it
The whole thing is captured into the first group.
It would match, for example:
nic.uk
nic.uk/
http://nic.uk
http://nic.uk/
https://example.com/test/?a=bcd
Verifying they are valid URLs is another story! It would also match:
index.php
It would not match:
directory/index.php
The minimal match is basically something.something, with no forward slash in it, unless it comes at least one character past the dot. So just be sure not to use that format for anything else.
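A quick way to try it in PHP (using # as the delimiter so the slashes don't need escaping; the candidate list just mirrors the examples above):
<?php
// Same pattern as above, wrapped for preg_match with '#' delimiters.
$pattern = '#^((?:https?://)?[^./]+(?:\.[^./]+)+(?:/.*)?)$#';

foreach (['nic.uk', 'http://nic.uk/', 'https://example.com/test/?a=bcd', 'index.php', 'directory/index.php'] as $candidate) {
    echo $candidate, ' => ', (preg_match($pattern, $candidate) ? 'match' : 'no match'), "\n";
}
// nic.uk, http://nic.uk/, https://example.com/test/?a=bcd and index.php match;
// directory/index.php does not.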
To match an optional part, you use a question mark ?, see Optional Items.
For example, to match an optional www. and capture the domain and the search term, the regular expression could be
(www\.)?(.+?)/search=(.+)
Note, though, that the question mark in .+? is a non-greedy quantifier; see http://www.regular-expressions.info/repeat.html.
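For instance, a quick check in PHP (the sample strings are only assumptions about the kind of REQUEST_URI involved):
<?php
// The leading (www\.)? group is optional, so both strings match;
// group 2 holds the domain and group 3 the search term.
$re = '~(www\.)?(.+?)/search=(.+)~';

preg_match($re, 'www.yahoo.org/search=xyz', $m);
echo $m[2], ' | ', $m[3], "\n"; // yahoo.org | xyz

preg_match($re, 'yahoo.org/search=xyz', $m);
echo $m[2], ' | ', $m[3], "\n"; // yahoo.org | xyz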
You might try starting your regex with
^(http://)?(www\.)?
And then the rules to match the rest of a URL.
$re = '/http:\/\/mywebsite\.com\/((?:http:\/\/)?[0-9A-Za-z]+(?:-+[0-9A-Za-z]+)*(?:\.[0-9A-Za-z]+(?:-+[0-9A-Za-z]+)*)+(?:\/.*)?)/';
https://regex101.com/r/x6vUvp/1
This obeys the DNS rule that hyphens must be surrounded by alphanumeric characters. Replace http with https? to allow https URLs as well.
According to the list of TLDs at Wikipedia, there are at least 1,519 of them, and the list isn't constant, so you may want to give the domain its own capture group so it can be verified against an online API or a file listing them all.
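A quick usage sketch with the pattern above (the sample URI is only an example):
<?php
// Pull the embedded URL (capture group 1) out of the REQUEST_URI.
$re = '/http:\/\/mywebsite\.com\/((?:http:\/\/)?[0-9A-Za-z]+(?:-+[0-9A-Za-z]+)*(?:\.[0-9A-Za-z]+(?:-+[0-9A-Za-z]+)*)+(?:\/.*)?)/';

if (preg_match($re, 'http://mywebsite.com/www.yahoo.org/search=xyz', $m)) {
    echo $m[1], "\n"; // www.yahoo.org/search=xyz
}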
Here's my two cents:
$regex = "/http:\/\/mywebsite\.com\/((http:\/\/|www\.)?[a-z]*(\.org|\.co\.uk|\.com).*)/";
See the working example.
But I'm sure you can do better!
Hope it helps.
I'm trying to create a snippet of regex that will match a URL route.
Basically, if I have this route /users/:id I want /users/100 to match, but /users/100/edit not to match.
This is what I'm using now: users/(.*)/ but because of the greedy match it's matching regardless of what's after the user ID. I need some way of "breaking" the match if there's an /edit or something else on the end of the route.
I've looked into the Regex NOT operator but with no luck.
Any advice?
Are you just trying to collect digits?
You could use users/(\d*)/
And this is how you would do it if you wanted to match everything up to a /; it uses a NOT (a negated character class): ^/users/[^/]*$
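A quick sanity check of that anchored version in PHP (# is used as the delimiter here):
<?php
$pattern = '#^/users/[^/]*$#';

var_dump(preg_match($pattern, '/users/100'));      // int(1)
var_dump(preg_match($pattern, '/users/100/edit')); // int(0)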
You can use negative lookahead:
users/(.*)/(?!edit)
This will always require a trailing slash however. Maybe a better solution would be:
users/(\d+)(?!/edit)
See this post for more information.
I'm using a pattern as described by John Gruber in this daringfireball article to auto link URLs in user comments.
I'm using it with PHP to match URLs, and I want it to also match a bare domain such as google.com, with no www and no trailing slash, but it doesn't seem to be working.
Here's the pattern (and can be seen in more detail at the article above):
$pattern = '#(?i)\b((?:https?://|www\d{0,3}[.]|[a-z0-9.\-]+[.][a-z]{2,4})(?:[^\s()<>]+|\(([^\s()<>]+|(\([^\s()<>]+\)))*\))+(?:\(([^\s()<>]+|(\([^\s()<>]+\)))*\)|[^\s`!()\[\]{};:\'".,<>?«»“”‘’]))#';
Specifically I'm looking at this particular subpattern: [a-z0-9.\-]+[.][a-z]{2,4}
This subpattern works separately, but as a part of the larger pattern, it doesn't match google.com.
[a-z0-9.\-]+[.][a-z]{2,4} works as you expect, but the rest of the pattern requires at least 1 following character:
google.com/
google.com?lang=en-us
google.com#!foo/bar
etc.
You can try allowing the tail to be optional, but it may in turn give you false-positives rather than excluding false-negatives:
$pattern = '#...“”‘’])?)#'; # '...' for brevity
# ^
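To see the tail requirement in action in PHP (the full pattern quoted above, unchanged):
<?php
// The bare domain fails because nothing follows the TLD; add any
// trailing character and it matches.
$pattern = '#(?i)\b((?:https?://|www\d{0,3}[.]|[a-z0-9.\-]+[.][a-z]{2,4})(?:[^\s()<>]+|\(([^\s()<>]+|(\([^\s()<>]+\)))*\))+(?:\(([^\s()<>]+|(\([^\s()<>]+\)))*\)|[^\s`!()\[\]{};:\'".,<>?«»“”‘’]))#';

var_dump(preg_match($pattern, 'google.com'));  // int(0)
var_dump(preg_match($pattern, 'google.com/')); // int(1)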
Works for me:
http://regexr.com?2uica
Are you sure there is nothing in your PHP that is tripping you up?
EDIT
It's because the full pattern expects to find something before the domain name, like http:// or www
I have a load of user-submitted content. It is HTML, and may contain URLs. Some of them will be <a>'s already (if the user is good) but sometimes users are lazy and just type www.something.com or at best http://www.something.com.
I can't find a decent regex to capture URLs but ignore ones that are immediately to the right of either a double quote or '>'. Anyone got one?
Jan Goyvaerts, creator of RegexBuddy, has written a response to Jeff Atwood's blog that addresses the issues Jeff had and provides a nice solution.
\b(?:(?:https?|ftp|file)://|www\.|ftp\.)[-A-Z0-9+&@#/%=~_|$?!:,.]*[A-Z0-9+&@#/%=~_|$]
In order to ignore matches that occur right next to a " or >, you could add (?<![">]) to the start of the regex, so you get
(?<![">])\b(?:(?:https?|ftp|file)://|www\.|ftp\.)[-A-Z0-9+&##/%=~_|$?!:,.]*[A-Z0-9+&##/%=~_|$]
This will match full addresses (http://...) and addresses that start with www. or ftp. - you're out of luck with addresses like ars.userfriendly.org...
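In PHP, one way to apply it could look like this (the sample HTML and the replacement markup are my own illustration, not part of the answer):
<?php
// Wrap each detected URL in an anchor tag; the look-behind skips matches
// sitting right after a double quote or '>'.
$pattern = '{(?<![">])\b(?:(?:https?|ftp|file)://|www\.|ftp\.)[-A-Z0-9+&@#/%=~_|$?!:,.]*[A-Z0-9+&@#/%=~_|$]}i';

$html = 'Lazy: www.something.com vs. tagged: <a href="http://example.org">example</a>';
echo preg_replace($pattern, '<a href="$0">$0</a>', $html), "\n";
// Lazy: <a href="www.something.com">www.something.com</a> vs. tagged: <a href="http://example.org">example</a>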
This thread is old as the hills, but I came across it while working on my own problem: that is, converting any URLs into links, but leaving alone any that are already within anchor tags. After a while, this is what popped out:
(?!(?!.*?<a)[^<]*<\/a>)(?:(?:https?|ftp|file)://|www\.|ftp\.)[-A-Z0-9+&#/%=~_|$?!:,.]*[A-Z0-9+&#/%=~_|$]
With the following input:
http://www.google.com
http://google.com
www.google.com
<p>http://www.google.com<p>
this is a normal sentence. let's hope it's ok.
www.google.com
This is the output of a preg_replace:
http://www.google.com
http://google.com
www.google.com
<p>http://www.google.com<p>
this is a normal sentence. let's hope it's ok.
www.google.com
Just wanted to contribute back to save somebody some time.
I made a slight modification to the Regex contained in the original answer:
(?<![.*">])\b(?:(?:https?|ftp|file)://|[a-z]\.)[-A-Z0-9+&#/%=~_|$?!:,.]*[A-Z0-9+&#/%=~_|$]
which allows for more subdomains, and also runs a fuller check on tags. To apply this with PHP's preg_replace, you can use:
$convertedText = preg_replace( '#(?<![.*">])\b(?:(?:https?|ftp|file)://|[a-z]\.)[-A-Z0-9+&/%=~_|$?!:,.]*[A-Z0-9+&/%=~_|$]#i', '<a href="\0" target="_blank">\0</a>', $originalText );
Note, I removed # from the regex, in order to use it as a delimiter for preg_replace. It's pretty rare that # would be used in a URL anyway.
Obviously, you can modify the replacement text, and remove target="_blank", or add rel="nofollow" etc.
Hope that helps.
To skip existing ones just use a look-behind - add (?<!href=") to the beginning of your regular expression, so it would look something like this:
/(?<!href=")http://\S*/
Obviously this isn't a complete solution for finding all types of URLs, but this should solve your problem of messing with existing ones.
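A short PHP sketch of that idea (the sample input and replacement markup are mine):
<?php
// Linkify plain http:// URLs, skipping ones already inside href="...".
$html = 'See http://example.org and <a href="http://example.org">this</a>.';
echo preg_replace('~(?<!href=")http://\S*~', '<a href="$0">$0</a>', $html), "\n";
// See <a href="http://example.org">http://example.org</a> and <a href="http://example.org">this</a>.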
if (preg_match('/\b(?<!=")(https?|ftp|file):\/\/[-A-Z0-9+&##\/%?=~_|!:,.;]*[A-Z0-9+&##\/%=~_|](?!.*".*>)(?!.*<\/a>)/i', $subject)) {
# Successful match
} else {
# Match attempt failed
}
Shameless plug: You can look here (regular expression replace a word by a link) for inspiration.
The question asked to replace some word with a certain link, unless there already was a link. So the problem you have is more or less the same thing.
All you need is a regex that matches a URL (in place of the word). The simplest assumption would be like this: a URL (optionally) starts with "http://", "ftp://" or "mailto:" and lasts as long as there are no white-space characters, line breaks, tag brackets, or quotes.
Beware, long regex ahead. Apply case-insensitively.
(href\s*=\s*['"]?)?((?:http://|ftp://|mailto:)?[^.,<>"'\s\r\n\t]+(?:\.(?![.<>"'\s\r\n])[^.,!<>"'\s\r\n\t]+)+)
Be warned: this will also match URLs that are technically invalid, and it will recognize things.formatted.like.this as a URL. It depends on your data whether that is too lax. I can fine-tune the regex if you have examples where it returns false positives.
The regex will produce two match groups. Group 2 will contain the matched thing, which is most likely a URL. Group 1 will contain either an empty string or 'href="'. You can use it as an indicator that the match occurred inside the href attribute of an existing link and that you don't have to touch that one.
Once you confirm that this does the right thing for you most of the time (with user supplied data, you can never be sure), you can do the rest in two steps, as I proposed it in the other question:
Make a link around every URL there is (unless there is something in match group 1!). This will produce doubly nested <a> tags for things that already have a link.
Scan for incorrectly nested <a> tags, removing the innermost one.
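A rough PHP sketch of those two steps, using the regex above; the sample HTML, the callback wrapper, and the step-2 cleanup pattern are my own illustration rather than anything from the original answer:
<?php
// Step 1: wrap every URL-looking thing in a link, unless group 1 shows
// that it already sits inside an href="..." attribute.
$urlPattern = '~(href\s*=\s*[\'"]?)?((?:http://|ftp://|mailto:)?[^.,<>"\'\s\r\n\t]+(?:\.(?![.<>"\'\s\r\n])[^.,!<>"\'\s\r\n\t]+)+)~i';

$html = 'Plain www.example.com and linked <a href="http://example.com">http://example.com</a>.';

$step1 = preg_replace_callback($urlPattern, function ($m) {
    return $m[1] !== ''
        ? $m[0]                                        // already in an href, leave it alone
        : '<a href="' . $m[2] . '">' . $m[2] . '</a>'; // otherwise wrap it
}, $html);

// Step 2: collapse the doubly nested anchors produced for existing links.
$step2 = preg_replace('~(<a [^>]*>)<a [^>]*>(.*?)</a>(</a>)~i', '$1$2$3', $step1);

echo $step2, "\n";
// Plain <a href="www.example.com">www.example.com</a> and linked <a href="http://example.com">http://example.com</a>.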