Auto-correcting URLs in PHP

I don't want to reinvent the wheel, but I couldn't find any library that does this well.
In my script users can save URLs. I want, when they give me a list like:
google.com
www.msn.com
http://bing.com/
and so on...
I want to be able to save them in the database in a "correct" format.
What I do is check whether the protocol is there; if it is not present I add it and then validate the URL against a RegExp.
For PHP's parse_url, any URL that contains a protocol is valid, so it didn't help a lot.
How are you handling this? Do you have an idea you would like to share with me?
Edit:
I want to filter out invalid URLs from user input (a list of URLs) and, more importantly, to try to auto-correct URLs that are invalid (e.g. missing the protocol). Once the user enters the list, it should be validated immediately (there is no time to open the URLs to check whether they really exist).
It would be great to extract parts from the URL, like parse_url does, but the problem with parse_url is that it doesn't work well with invalid URLs. I tried to parse the URL with it and add defaults for parts that are missing but required (e.g. no protocol: add http). But for "google.com", parse_url won't return "google.com" as the hostname, only as the path.
This looks like a really common problem to me, but I could not find an available solution on the internet (I found some libraries that will standardize a URL, but they won't fix a URL that is invalid).
Is there some "smart" solution to this, or should I stick with my current approach (a rough sketch follows below):
Find the first occurrence of :// and check whether the text before it is a valid protocol; add a protocol if it is missing
Find the next occurrence of / and check whether the hostname is in a valid format
For good measure, validate the whole URL once more via RegExp
I just have a feeling I will reject some valid URLs with this, and for me it is better to have a false positive than a false negative.
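Here is a rough, untested sketch of that approach, just to illustrate (the helper name normalize_url and the exact checks are arbitrary placeholders, not a finished validator):
// A rough sketch: prepend a scheme if none is present, then let
// parse_url() and filter_var() do the heavy lifting.
function normalize_url($url) {
    $url = trim($url);
    // Step 1: if there is no "scheme://" part, assume http.
    if (!preg_match('#^[a-z][a-z0-9+.-]*://#i', $url)) {
        $url = 'http://' . $url;
    }
    // Step 2: parse and check that a hostname was actually found.
    $parts = parse_url($url);
    if ($parts === false || empty($parts['host'])) {
        return false;
    }
    // Step 3: validate the whole thing once more.
    return filter_var($url, FILTER_VALIDATE_URL) ? $url : false;
}
var_dump(normalize_url('google.com'));       // "http://google.com"
var_dump(normalize_url('www.msn.com'));      // "http://www.msn.com"
var_dump(normalize_url('http://bing.com/')); // "http://bing.com/"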

I had the same problem with parse_url as the OP. This is my quick and dirty solution to auto-correct URLs (keep in mind that the code is in no way perfect, nor does it cover all cases):
Results:
http:/wwww.example.com/lorum.html => http://www.example.com/lorum.html
gopher:/ww.example.com => gopher://www.example.com
http:/www3.example.com/?q=asd&f=#asd => http://www3.example.com/?q=asd&f=#asd
asd://.example.com/folder/folder/ => http://example.com/folder/folder/
.example.com/ => http://example.com/
example.com => http://example.com
subdomain.example.com => http://subdomain.example.com
function url_parser($url) {
    // Multiple slashes mess up parse_url; replace runs of 2+ with exactly 2
    $url = preg_replace('/(\/{2,})/', '//', $url);
    $parse_url = parse_url($url);
    if (empty($parse_url["scheme"])) {
        $parse_url["scheme"] = "http";
    }
    if (empty($parse_url["host"]) && !empty($parse_url["path"])) {
        // Strip slashes from the beginning of the path
        $parse_url["host"] = ltrim($parse_url["path"], '\/');
        $parse_url["path"] = "";
    }
    $return_url = "";
    // Check if the scheme is one we accept; fall back to http otherwise
    if (!in_array($parse_url["scheme"], array("http", "https", "gopher"))) {
        $return_url .= 'http://';
    } else {
        $return_url .= $parse_url["scheme"] . '://';
    }
    // Check if the right amount of "www" is set.
    $explode_host = explode(".", $parse_url["host"]);
    // Remove empty entries
    $explode_host = array_filter($explode_host);
    // And reassign indexes
    $explode_host = array_values($explode_host);
    // Contains a subdomain
    if (count($explode_host) > 2) {
        // Check if the subdomain only contains the letter w (and is not some other subdomain).
        if (substr_count($explode_host[0], 'w') == strlen($explode_host[0])) {
            // Replace with "www" to avoid "ww", "wwww", etc.
            $explode_host[0] = "www";
        }
    }
    $return_url .= implode(".", $explode_host);
    if (!empty($parse_url["port"])) {
        $return_url .= ":" . $parse_url["port"];
    }
    if (!empty($parse_url["path"])) {
        $return_url .= $parse_url["path"];
    }
    if (!empty($parse_url["query"])) {
        $return_url .= '?' . $parse_url["query"];
    }
    if (!empty($parse_url["fragment"])) {
        $return_url .= '#' . $parse_url["fragment"];
    }
    return $return_url;
}
echo url_parser('http:/wwww.example.com/lorum.html'); // http://www.example.com/lorum.html
echo url_parser('gopher:/ww.example.com'); // gopher://www.example.com
echo url_parser('http:/www3.example.com/?q=asd&f=#asd'); // http://www3.example.com/?q=asd&f=#asd
echo url_parser('asd://.example.com/folder/folder/'); // http://example.com/folder/folder/
echo url_parser('.example.com/'); // http://example.com/
echo url_parser('example.com'); // http://example.com
echo url_parser('subdomain.example.com'); // http://subdomain.example.com

It's not 100% foolproof, but it's a one-liner.
$URL = (((strpos($URL,'https://') === false) && (strpos($URL,'http://') === false))?'http://':'' ).$URL;
EDIT
There was apparently a problem with my initial version if the hostname contained "http".
Thanks Trent
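If you want to avoid that pitfall entirely, a slightly more defensive variant (my own suggestion, not part of the original one-liner) is to anchor the check to the start of the string:
// Only treat the URL as already prefixed when it actually *starts* with a scheme,
// so a host or path containing "http" elsewhere is not misdetected.
$URL = preg_match('#^https?://#i', $URL) ? $URL : 'http://' . $URL;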

Related

if else on variable link input

I have a method of pulling YouTube video data from API links. I use WordPress and ran into a snag.
In order to pull the thumbnail, views, uploader and video title I need the user to input the 11-character code at the end of watch?v=_______. This is documented with specific instructions for the user, but what if they ignore it and paste the whole URL?
// the url 'code' the user should input.
_gXp4hdd2pk
// the wrong way, when the user pastes the whole url.
https://www.youtube.com/watch?v=_gXp4hdd2pk
If the user accidentally pastes the entire URL and not the 11-character code, is there a way I can use PHP to grab either the code or what's at the end of this URL (the 11 characters after 'watch?v=')?
Here is my PHP code to pull the data:
// $url is the code at the end of 'watch?v=' that the user inputs
$url = get_post_meta ($post->ID, 'youtube_url', $single = true);
// $code is a variable for placing the $url in a youtube link so I can output it to an API link
$code = 'http://www.youtube.com/watch?v=' . $url;
// $code is called at the end of this oembed code, allowing me to decode json data and pull elements from json to echo in my html
// echoed output returns json file. example: http://www.youtube.com/oembed?url=http://www.youtube.com/watch?v=_gXp4hdd2pk
$json = file_get_contents('http://www.youtube.com/oembed?url='.urlencode($code));
I'm looking for something like...
"If the user inputs the code, use this block of code; else if the user inputs the whole URL use a different block of code; else throw an error."
Or, if they use the whole URL, can PHP use only a specific section of that URL?
EDIT: Thank you for all the answers! I am new to PHP, so thank you all for your patience. It is difficult for graphic designers to learn PHP; even reading the PHP manual can give us headaches. All of your answers were great and the ones I've tested have worked. Thank you so much :)
Try this,
$code = 'https://www.youtube.com/watch?v=_gXp4hdd2pk';
if (filter_var($code, FILTER_VALIDATE_URL) !== FALSE) {
    // `$code` is a valid URL
    $code_arr = explode('?v=', $code);
    $query_str = explode('&', $code_arr[1]);
    $new_code = $query_str[0];
} else {
    // `$code` is not a valid URL, e.g. '_gXp4hdd2pk'
    $new_code = $code;
}
echo $new_code;
Here's a simple option, unless you want to use a regex like Nisse Engström's answer.
Using the function parse_url() you could do something like this:
$url = 'https://www.youtube.com/watch?v=_gXp4hdd2pk&list=RD_gXp4hdd2pk#t=184';
$split = parse_url($url);
$params = explode('&', $split['query']);
$video_id = str_replace('v=', '', $params[0]);
now $video_id would return:
_gXp4hdd2pk
from the $url supplied in the above code.
I suggest you read the parse_url() documentation to ensure you understand and grasp it all :-)
Update
for your comment.
You'd use something like this to make sure the parsed value is a valid URL:
// This will check whether it is a valid URL
if (filter_var($code, FILTER_VALIDATE_URL)) {
    // It's valid, as the check returned true,
    // so run the code
    $url = 'https://www.youtube.com/watch?v=_gXp4hdd2pk&list=RD_gXp4hdd2pk#t=184';
    $split = parse_url($url);
    $params = explode('&', $split['query']);
    $video_id = str_replace('v=', '', $params[0]);
} else {
    // They must have posted the bare video code, as the check returned false.
    $video_id = $code;
}
Just try as follows:
$url = "https://www.youtube.com/watch?v=_gXp4hdd2pk";
$url= explode('?v=', $url);
$endofurl = end($url);
echo $endofurl;
Replace the $url variable with the input.
I instruct my users to copy and paste the whole YouTube URL.
Then, I do this:
$video_url = 'https://www.youtube.com/watch?v=_gXp4hdd2pk'; // this is from user input
$parsed_url = parse_url($video_url);
parse_str($parsed_url['query'], $query);
$vidID = isset($query['v']) ? $query['v'] : NULL;
$url = "http://gdata.youtube.com/feeds/api/videos/". $vidID; // this is used for the Api
$m = array();
if (preg_match('#^(https?://www.youtube.com/watch\\?v=)?(.+)$#', $url, $m)) {
    $code = $m[2];
} else {
    /* No match */
}
The code uses a Regular Expression to match the user input (the subject) against a pattern. The pattern is enclosed in a pair of delimiters (#) of your choice. The rest of the pattern works like this:
^ matches the beginning of the string.
(...) creates a subpattern.
? matches 0 or 1 of the preceding character or subpattern.
https? matches "http" or "https".
\? matches "?".
(.+) matches 1 or more arbitrary characters. The . matches any character (except newline). + matches 1 or more of the preceding character or subpattern.
$ matches the end of the string.
In other words, optionally match an http or https base URL, followed by the video code.
The matches are then written to $m. $m[0] contains the entire string, $m[1] contains the first subpattern (base URL) and $m[2] contains the second subpattern (code).
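For completeness, a short usage sketch of the pattern above, using my own example values (the bare code and the full watch URL from the question):
// Both the bare code and the full URL yield the same $m[2].
foreach (array('_gXp4hdd2pk', 'https://www.youtube.com/watch?v=_gXp4hdd2pk') as $url) {
    if (preg_match('#^(https?://www.youtube.com/watch\\?v=)?(.+)$#', $url, $m)) {
        echo $m[2], "\n"; // _gXp4hdd2pk in both cases
    }
}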

Get subdomain if any

Is there any predefined method in PHP to get the sub-domain from a URL, if there is one?
The URL pattern may be:
http://www.sd.domain.com
http://domain.com
http://sd.domain.com
http://domain.com
where sd stands for sub-domain.
Now the method must return different values for each case:
case 1 -> return sd
case 2 -> return false or empty
case 3 -> return sd
case 4 -> return false or empty
I found some good links
PHP function to get the subdomain of a URL
Get subdomain from url?
but they don't specifically apply to my cases.
Any help will be most appreciated.
Thanks
Okay, here is a script I created :)
$url = $_SERVER['HTTP_HOST'];
$host = explode('.', $url);
if (!empty($host[0]) && $host[0] != 'www' && $host[0] != 'localhost') {
    $domain = $host[0];
} else {
    $domain = 'home';
}
So, there are several possibilities...
First, regular expressions of course:
(http://)?(www\.)?([^\.]*?)\.?([^\.]+)\.([^\.]+)
The entry in the third set of parentheses will be your subdomain. Of course, if your URL were https:// or www2 (seen it all...), the regex would break. So this is just a first draft to start working with.
My second idea is, just like yours, exploding the URL. I thought of something like this:
function getSubdomain($url) {
    $parts = explode('.', str_replace('http://', '', $url));
    if (count($parts) >= 3) {
        return $parts[count($parts) - 3];
    }
    return null;
}
The idea behind this function is that if a URL is split by ., the subdomain will almost always be the third-to-last entry in the resulting array. The protocol has to be stripped first (see case 3). Of course, this can certainly be done more elegantly.
I hope I could give you some ideas.
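Building on that idea, here is a slightly more defensive sketch of my own (still not a complete solution: it does not handle multi-part TLDs like .co.uk):
function getSubdomain($url) {
    // Strip any scheme (http://, https://, ...) before splitting on dots.
    $host = preg_replace('#^[a-z][a-z0-9+.-]*://#i', '', trim($url));
    // Drop any path or query string that may follow the host.
    $host = strtok($host, '/');
    $parts = explode('.', $host);
    // Ignore a leading "www." so www.sd.domain.com still yields "sd".
    if ($parts[0] === 'www') {
        array_shift($parts);
    }
    // Anything left beyond "domain.tld" is treated as the subdomain.
    if (count($parts) > 2) {
        return implode('.', array_slice($parts, 0, count($parts) - 2));
    }
    return null;
}
echo getSubdomain('http://www.sd.domain.com'); // sd
var_dump(getSubdomain('http://domain.com'));   // NULL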
Try this.
[Update] We have a constant defined, _SITE_ADDRESS, such as www.mysite.com; you could use a literal for this.
It works well in our system for what seems like that exact purpose.
public static function getSubDomain()
{
    if ($_SERVER["SERVER_NAME"] == str_ireplace('http://', '', _SITE_ADDRESS)) return ''; // base domain
    $host = str_ireplace(array("www.", _SITE_ADDRESS), "", strtolower(trim($_SERVER["HTTP_HOST"])));
    $sub = preg_replace('/\..*/', '', $host);
    if ($sub == $host) return ''; // this is likely an IP address
    return $sub;
}
There is an external note on that function but no link, so apologies to the original developer whose code this is based on.

Getting part of a string using REGEX

I have an Amazon link:
http://www.amazon.com/Pampers-Softcare-Fresh-Wipes-Count/dp/B007KXO998/ref=pd_zg_rss_ts_165796011_165796011_7?ie=UTF8&tag=elson06-20
I'm trying to get the product ID (B007KXO998 in this link) that is after dp/ and before /ref=pd_zg_rss_ts_165796011_165796011_7.
I want to get that using a regex or anything that can extract it.
The format of the URL is static; it will not change.
$string = 'http://www.amazon.com/iOttie-Windshield-INCREDIBLE-BlackBerry-Revolution/dp/B007FHX9OK?SubscriptionId=AKIAJJPPYQPVMQLOYLKQ&tag=elson06-20&linkCode=sp1&camp=2025&creative=165953&creativeASIN=B007FHX9OK';
//$string = 'http://www.amazon.com/Pampers-Softcare-Fresh-Wipes-Count/dp/B007KXO998/ref=pd_zg_rss_ts_165796011_165796011_7?ie=UTF8&tag=elson06-20';
$pid = basename((false !== strpos($string, '/ref='))
? pathinfo($string, PATHINFO_DIRNAME)
: parse_url($string, PHP_URL_PATH));
echo $pid; // Outputs B007KXO998 or B007FHX9OK, will work for both types of URLs
You don't need a regex; PHP has built-in functions to parse URLs.
Will the URLs always be in this exact format, or will it be expected to match any Amazon URL?
If the format will always be like this, then you can use #cryptic's answer. Otherwise, it would be more flexible to use a pattern like |dp/([A-Z0-9]+)|i.
This will match any alphanumeric string (case insensitive) directly following dp/ in the string. Well, the entire match will include the dp/ part, but the parenthetical portion is a sub-match which will match only the product id.
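A short example of that pattern in use; the $string value is simply the URL from the question:
$string = 'http://www.amazon.com/Pampers-Softcare-Fresh-Wipes-Count/dp/B007KXO998/ref=pd_zg_rss_ts_165796011_165796011_7?ie=UTF8&tag=elson06-20';
// $matches[1] holds only the sub-match after "dp/", i.e. the product ID.
if (preg_match('|dp/([A-Z0-9]+)|i', $string, $matches)) {
    echo $matches[1]; // B007KXO998
}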
Edit: According to this page, Amazon's product IDs (ASINs) can be present in a wide variety of URLs, making them difficult to match, and my code above won't catch them all.
One way to try to catch these would be to use parse_url to extract the host and the path portions of the URL. From there, you can check the host portion against known Amazon domain names, and you could explode the path and check each portion for an alphanumeric section which is ten characters long. Even then, the ASIN for books is the book's ISBN, and there are 13-digit versions which Amazon might use in some cases (though I don't have evidence that they do).
Here is a very basic example that I haven't thoroughly tested:
$url = get_url_from_wherever();
$url_parts = parse_url($url);
$host = $url_parts['host'];
$path = explode('/', $url_parts['path']);
$amazon_hosts = array(
    'amazon.com',   // United States
    'amazon.ca',    // Canada
    'amazon.cn',    // China
    'amazon.fr',    // France
    'amazon.it',    // Italy
    'amazon.de',    // Germany
    'amazon.es',    // Spain
    'amazon.co.jp', // Japan
    'amazon.co.uk', // United Kingdom
    'amzn.to'       // URL shortener
);
$amazon_hosts = array_map('preg_quote', $amazon_hosts);
$asin = FALSE; // initialize in case we don't find the ASIN
if (preg_match('/(^|\.)(' . implode('|', $amazon_hosts) . ')$/i', $host)) {
    // valid host
    foreach ($path as $path_component) {
        if (preg_match('/^[A-Z0-9]{10}$/i', $path_component)) {
            // this is probably the ASIN, since the string is a 10-character alphanumeric
            $asin = $path_component;
        }
    }
}
if ($asin) {
    // process ASIN
} else {
    // couldn't find an ASIN in this URL
}
Here's what I did, since I'm pretty sure that the link has always the same format:
$link = 'http://www.amazon.com/Pampers-Softcare-Fresh-Wipes-Count/dp/B007KXO998/ref=pd_zg_rss_ts_165796011_165796011_7?ie=UTF8&tag=elson06-20';
$link = parse_url($link);
$link = explode('/',$link['path']);
$link = $link[3];
echo $link; //B007KXO998

Hash url transform to a long url htaccess? php?

I have constructed my entire webpage with hashes (http://example.com/videos#video01), but the problem is that when I want to share on Facebook it obviously doesn't recognize the hash. So my question is: is there a way to transform or redirect the hash URL to a long, social-friendly URL?
Solution:
I tried one more time with bit.ly's API. I have 50 videos to show, each with a hash at the end of the URL. I made a little cache script (bit.ly has a limit) and wrote a "foreach" in PHP; it seems bit.ly accepts hashes.
Thanks anyway.
The # and everything after it is not sent to the server. In your case you're only sending http://example.com/videos.
New link format: http://example.com/videos?name=video01
Call this function toward the top of your controller or http://example.com/videos/index.php:
function redirect()
{
    if (!empty($_GET['name'])) {
        // Sanitize & validate $_GET['name']:
        // remove anything which isn't a word, whitespace, number
        // or any of the following characters -_~,;[]().
        // If you don't need to handle multi-byte characters
        // you can use preg_replace rather than mb_ereg_replace.
        $file = mb_ereg_replace("([^\w\s\d\-_~,;\[\]\(\).])", '', $_GET['name']);
        // Remove any runs of periods
        $file = mb_ereg_replace("([\.]{2,})", '', $file);
        $valid = file_exists('pathToFiles/' . $file);
        if ($valid) {
            $url = '/videos#' . $file;
        } else {
            $url = '/your404page.php';
        }
        header("Location: $url");
    }
}
Sanitization snippet from this highly ranked answer: https://stackoverflow.com/a/2021729/1296209

clean the url in php

I am trying to make a user-submitted link box. I've been trying all day and can't seem to get it working.
The goal is to turn all of these into example.com... (i.e. remove everything before the domain name)
Input is $url =
There are 4 types of URL:
www.example.com...
example.com...
http://www.example.com...
http://example.com...
Everything I make works on 1 or 2 types, but not all 4.
How can one do this?
You can use parse_url for that. For example:
function parse($url) {
    $parts = parse_url($url);
    if ($parts === false) {
        return false;
    }
    return isset($parts['scheme'])
        ? $parts['host']
        : substr($parts['path'], 0, strcspn($parts['path'], '/'));
}
This will leave the "www." part if it already exists, but it's trivial to cut that out with e.g. str_replace (see the snippet below). If the URL you give it is seriously malformed, it will return false.
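For instance, a small sketch of stripping a leading "www." from the result (anchoring the match so that "www." is only removed at the start is my own precaution):
$host = parse($url); // e.g. "www.example.com"
if ($host !== false) {
    // Only strip "www." when it is the leading label, not somewhere in the middle.
    $host = preg_replace('/^www\./i', '', $host); // "example.com"
}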
Update (an improved solution):
I realized that the above would not work correctly if you try to trick it hard enough. So instead of whipping myself trying to compensate if it does not have a scheme, I realized that this would be better:
function parse($url) {
    $parts = parse_url($url);
    if ($parts === false) {
        return false;
    }
    if (!isset($parts['scheme'])) {
        $parts = parse_url('http://' . $url);
    }
    if ($parts === false) {
        return false;
    }
    return $parts['host'];
}
Your input can be
www.example.com
example.com
http://www.example.com
http://example.com
$url_arr = parse_url($url);
echo $url_arr['host'];
The output is the host, e.g. www.example.com or example.com. Note that parse_url only fills in 'host' when the input includes a scheme, so for the bare forms above you need to add one first; see the sketch below.
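A minimal sketch of that guard, assuming you are happy to default missing schemes to http:
// parse_url() only fills in 'host' when the input has a scheme,
// so prepend one for bare inputs like "www.example.com" or "example.com".
if (!preg_match('#^https?://#i', $url)) {
    $url = 'http://' . $url;
}
$url_arr = parse_url($url);
echo $url_arr['host']; // www.example.com or example.com, depending on the input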
There are a few steps you can take to get a clean URL.
Firstly, you need to make sure there is a protocol so that parse_url works correctly, so you can do:
// Make sure it has a protocol (note: this check must use &&; with || it would always match)
if (substr($url, 0, 7) != 'http://' && substr($url, 0, 8) != 'https://') {
    $url = 'http://' . $url;
}
Now we run it through parse_url()
$segments = parse_url($url);
But this is where it gets complicated, because domain names can have 1, 2, 3, 4, 5, 6... levels, meaning that you cannot detect the domain name from the URL alone. You need a pre-compiled list of TLDs to check the last portion of the domain against, so you can then strip it off, leaving the website's domain.
There is a list available here: http://mxr.mozilla.org/mozilla-central/source/netwerk/dns/effective_tld_names.dat?raw=1
But you would be better off parsing this list into MySQL and then selecting the row where the TLD matches the end of the domain string.
Then order by length (descending) and limit to 1; if a match is found you can do something like:
$db_found_tld = 'co.uk';
$domain = 'a.b.c.domain.co.uk';
// Strip the TLD plus its leading dot from the end of the host.
$domain_name = substr($domain, 0, -strlen($db_found_tld) - 1);
This would leave a.b.c.domain, so you have removed the TLD; now the domain name can be extracted like so:
$parts = explode('.', $domain_name);
$base_domain = $parts[count($parts) - 1];
Now you have domain.
This seems very lengthy, but I hope you now see that it's not easy to get just the domain name without the TLD or subdomains.
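To make the lookup concrete, here is a hedged sketch using a plain in-memory suffix array instead of MySQL (the tiny $tlds list is a made-up sample; in practice you would load the full effective_tld_names.dat file):
// Hypothetical, trimmed-down suffix list; load the real public-suffix list in practice.
$tlds = array('com', 'co.uk', 'org.uk', 'net');
function registrable_domain($host, array $tlds) {
    // Prefer the longest matching suffix, so "co.uk" wins over a plain "uk" entry.
    usort($tlds, function ($a, $b) { return strlen($b) - strlen($a); });
    foreach ($tlds as $tld) {
        if (preg_match('/(^|\.)' . preg_quote($tld, '/') . '$/i', $host)) {
            // Strip the suffix, then keep the last remaining label plus the suffix.
            $rest  = substr($host, 0, -strlen($tld) - 1); // "a.b.c.domain"
            $parts = explode('.', $rest);
            return end($parts) . '.' . $tld;              // "domain.co.uk"
        }
    }
    return false;
}
echo registrable_domain('a.b.c.domain.co.uk', $tlds); // domain.co.uk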
