I have a set of URLs in an array. Some are bare domains (http://google.com) and some have subdomains (http://test.google.com).
I am trying to extract just the domain part from each of them without the subdomain.
parse_url($domain)
still keeps the subdomain.
Is there another way?
If you're only concerned with actual top-level domains, the simple answer is to keep the last two labels: whatever sits immediately before the last dot, plus the TLD itself.
However, if you're looking for "whatever you buy from a registrar", that is much more tricky. IANA delegates authority for each country-specific TLD to the national registrars, which means that allocation policy varies for each TLD. Famous examples include .co.uk, .org.uk, etc., but there are countless others that are less well known (for example .priv.no).
If you need a solution that will work correctly for every single TLD in existence, you will have to research the policy for each TLD, which is quite an undertaking, since many national registrars have horrible websites with unclear policies that, just to make it even more confusing, are often not available in English.
In practice, however, you probably don't need to account for every TLD or for every available subdomain within every TLD. So a practical solution is to compile a list of the known 2-part (and longer) TLDs that you need to support. Anything that doesn't match that list can be treated as a 1-part TLD. Like so:
<?php
$special_domains = array('co.uk', 'org.uk' /* ... etc */);

function getDomain($domain)
{
    global $special_domains;
    foreach ($special_domains as $special) {
        // Does the host end in ".co.uk", ".org.uk", ...?
        if (substr($domain, -strlen($special) - 1) === '.' . $special) {
            // Keep one label plus the 2-part TLD
            $prefix = substr($domain, 0, -strlen($special) - 1);
            $lastdot = strrpos($prefix, '.');
            $name = ($lastdot !== false) ? substr($prefix, $lastdot + 1) : $prefix;
            return $name . '.' . $special;
        }
    }
    // No special TLD matched: keep the last two labels
    return implode('.', array_slice(explode('.', $domain), -2));
}
?>
PS: I haven't tested this code so it may need some modification but the basic logic should be ok.
There might be a work-around for .co.uk problem.
Let's presume that if it is possible to register *.co.uk, *.org.uk, *.mil.ae and similar domains, then it is not possible to resolve the DNS of co.uk, org.uk and mil.ae themselves. I've checked some URLs and it seems to hold.
Then you can use something like this:
$testdomains = array(
    'http://google.com',
    'http://probablynotexisting.com',
    'http://subdomain.bbc.co.uk', // should resolve to bbc.co.uk, because co.uk itself cannot be pinged
    'http://bbc.co.uk'
);

foreach ($testdomains as $raw_domain) {
    $domain = join('.', array_slice(explode('.', parse_url($raw_domain, PHP_URL_HOST)), -2));
    $ip = gethostbyname($domain);
    if ($ip == $domain) { // gethostbyname() returns its argument unchanged on failure
        // failure, let's include another dot
        $domain = join('.', array_slice(explode('.', parse_url($raw_domain, PHP_URL_HOST)), -3));
        $ip = gethostbyname($domain);
        if ($ip == $domain) {
            // another failure, shall we give up and move on?
            echo $raw_domain . ": failed<br />\n";
            continue;
        }
    }
    echo $raw_domain . ' -> ' . $domain . ": ok [" . $ip . "]<br />\n";
}
The output is like this:
http://google.com -> google.com: ok [72.14.204.147]
http://probablynotexisting.com: failed
http://subdomain.bbc.co.uk -> bbc.co.uk: ok [212.58.241.131]
http://bbc.co.uk -> bbc.co.uk: ok [212.58.241.131]
Note: resolving DNS is a slow process.
Let dig do the hard work for you. Extract the required base domain from the first field in the AUTHORITY section of a dig on any sub-domain (which doesn't need to exist) of the sub-domain/domain in question. Examples (in bash, not PHP, sorry)...
dig @8.8.8.8 notexist.google.com|grep -A1 ';; AUTHORITY SECTION:'|tail -n1|sed "s/[[:space:]]\+/~/g"|cut -d'~' -f1
google.com.
or
dig @8.8.8.8 notexist.test.google.com|grep -A1 ';; AUTHORITY SECTION:'|tail -n1|sed "s/[[:space:]]\+/~/g"|cut -d'~' -f1
google.com.
or
dig @8.8.8.8 notexist.www.xn--zgb6acm.xn--mgberp4a5d4ar|grep -A1 ';; AUTHORITY SECTION:'|tail -n1|sed "s/[[:space:]]\+/~/g"|cut -d'~' -f1
xn--zgb6acm.xn--mgberp4a5d4ar.
Where
grep -A1 filters out all lines except the line with the string ;; AUTHORITY SECTION: and 1 line after it.
tail -n1 leaves only the last 1 line of the above 2 lines.
sed "s/[[:space:]]\+/~/g" replaces dig's delimiters (1 or more consecutive spaces or tabs) with a custom delimiter ~. It could be any character which never occurs on the line.
cut -d'~' -f1 extracts the first field where the fields are delimited by the custom delimiter from above.
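The same trick can be wrapped in PHP by shelling out to dig and parsing the AUTHORITY section yourself. A sketch only (it assumes dig is installed; the parsing is pulled into its own function so it can be exercised without network access):

```php
<?php
// Extract the first field of the line following ";; AUTHORITY SECTION:"
// from dig's output. Returns null when no AUTHORITY section is present.
function authorityBaseDomain($digOutput) {
    $lines = explode("\n", $digOutput);
    foreach ($lines as $i => $line) {
        if (strpos($line, ';; AUTHORITY SECTION:') !== false && isset($lines[$i + 1])) {
            $fields = preg_split('/\s+/', trim($lines[$i + 1]));
            return rtrim($fields[0], '.'); // drop the trailing root dot
        }
    }
    return null;
}

// Usage (requires dig and network access):
// $out = shell_exec('dig @8.8.8.8 notexist.test.google.com');
// echo authorityBaseDomain($out); // google.com
```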
Related
I'm working with some code used to try to find all the website URLs within a block of text. Right now we already have checks that work fine for URLs formatted like http://www.google.com or www.google.com, but we're trying to find a regex that can locate a URL in a format such as just google.com.
Right now our regex is set to search for every domain that we could find registered, which is around 1400 in total, so it looks like this:
/(\S+\.(COM|NET|ORG|CA|EDU|UK|AU|FR|PR)\S+)/i
Except with ALL 1400 domains to check in the group (the full thing is around 8400 characters long). Naturally it's running quite slowly, and we've already had the idea to simply check for the 10 or so most commonly used domains, but I wanted to check here first to see if there is a more efficient way to check for this specific formatting of website URLs, rather than singling every single one out.
You could use a double pass search.
Search for every URL-like string, e.g.:
((http|https):\/\/)?([\w-]+\.)+[\S]{2,5}
On every result do some non-regex checks: is the length sufficient, is the text after the last dot part of your TLD list, etc.
function isUrl($urlMatch) {
    $tldList = ['com', 'net'];
    $urlParts = explode(".", $urlMatch);
    $lastPart = end($urlParts);
    return in_array($lastPart, $tldList);
}
Example
function get_host($url) {
    $host = parse_url($url, PHP_URL_HOST);
    $names = explode(".", $host);
    if (count($names) == 1) {
        return $names[0];
    }
    $names = array_reverse($names);
    return $names[1] . '.' . $names[0];
}
Usage
echo get_host('https://google.com'); // google.com
echo "\n";
echo get_host('https://www.google.com'); // google.com
echo "\n";
echo get_host('https://sub1.sub2.google.com'); // google.com
echo "\n";
echo get_host('http://localhost'); // localhost
I have this code right here:
// get host name from URL
preg_match('#^(?:http://)?([^/]+)#i',
    "http://www.joomla.subdomain.php.net/index.html", $matches);
$host = $matches[1];

// get last two segments of host name
preg_match('/[^.]+\.[^.]+$/', $host, $matches);
echo "domain name is: {$matches[0]}\n";
The output will be php.net. I need just php, without the .net.
Although regexes are fine here, I'd recommend parse_url
$host = parse_url('http://www.joomla.subdomain.php.net/index.html', PHP_URL_HOST);
$domains = explode('.', $host);
echo $domains[count($domains)-2];
This will work for TLDs like .com, .org, .net, etc., but not for .co.uk or .com.mx. You'd need some more logic (most likely an array of TLDs) to parse those out.
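For completeness, here's a sketch of that extra logic, using a small, deliberately incomplete array of two-part TLDs and falling back to the last two labels when nothing matches:

```php
<?php
// A sketch: check the host against a (hypothetical, incomplete) list of
// two-part suffixes before falling back to the last two labels.
function registrableDomain($host, $twoPartTlds = array('co.uk', 'com.mx', 'com.au')) {
    $parts = explode('.', $host);
    $lastTwo = implode('.', array_slice($parts, -2));
    if (in_array($lastTwo, $twoPartTlds)) {
        return implode('.', array_slice($parts, -3)); // name + two-part TLD
    }
    return $lastTwo;
}

echo registrableDomain('www.joomla.subdomain.php.net'); // php.net
echo registrableDomain('www.example.co.uk');            // example.co.uk
```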
Group the first part of your 2nd regex into /([^.]+)\.[^.]+$/ and $matches[1] will be php
Late answer, and it doesn't work with subdomains, but it does work with any TLD (co.uk, com.de, etc.):
$domain = "somesite.co.uk";
$domain_solo = explode(".", $domain)[0];
print($domain_solo);
It's really easy:
function get_tld($domain) {
    $domain = str_replace("http://", "", $domain); // remove http://
    $domain = str_replace("www.", "", $domain);    // remove www.
    $nd = explode(".", $domain);
    $domain_name = $nd[0];
    $tld = str_replace($domain_name . ".", "", $domain);
    return $tld;
}
To get the domain name instead, simply return $domain_name. This works only with top-level domains; in the case of subdomains you will get the subdomain name.
www.example.com
foo.example.com
foo.example.co.uk
foo.bar.example.com
foo.bar.example.co.uk
I've got these URLs here, and want to always end up with 2 variables:
$domainName = "example"
$domainNameSuffix = ".com" OR ".co.uk"
If someone could get me from $url being one of the URLs all the way down to $newUrl being close to "example.co.uk", it would be a blessing.
Note that the URLs are going to be completely "random"; we might end up having "foo.bar.example2.com.au" too, so... you know... ugh. (Asking for the impossible?)
Cheers,
We had a few questions like this before, but I can't find a good one right now either. The crux is, this cannot be done reliably. You would need a long list of special TLDs (like .uk and .au) which have their own .com/.net level.
But as general approach and simple solution you could use:
preg_match('#([\w-]+)\.(\w+(\.(au|uk))?)\.?$#i', $domain, $m);
list(, $domain, $suffix) = $m;
The "domainNameSuffix" is called a top-level domain (TLD for short), and there is no easy way to extract it.
Every country has its own TLD, and some countries have opted to further subdivide their TLD. And since the number of subdomains (my.own.subdomain.example.com) is also variable, there is no easy one-regexp-fits-all.
As mentioned, you need a list. Fortunately for you there are lists publicly available: http://publicsuffix.org/
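As a tiny illustration of how such a list gets used, here's a minimal matcher over a handful of hard-coded rules. This is a sketch only: the real Public Suffix List also contains wildcard (`*.`) and exception (`!`) rules, which it ignores, and the rule set below is hypothetical.

```php
<?php
// Minimal sketch: find the longest matching suffix from a tiny rule set
// and return the registrable domain (suffix plus one more label).
function splitBySuffix($host, $suffixes = array('com', 'uk', 'co.uk', 'org.uk')) {
    $labels = explode('.', $host);
    // Try the longest candidate suffix first.
    for ($n = count($labels) - 1; $n >= 1; $n--) {
        $candidate = implode('.', array_slice($labels, -$n));
        if (in_array($candidate, $suffixes)) {
            $name = implode('.', array_slice($labels, -($n + 1)));
            return array($name, $candidate); // registrable domain, suffix
        }
    }
    return array($host, null);
}

list($domain, $suffix) = splitBySuffix('foo.bar.example.co.uk');
echo $domain; // example.co.uk
echo $suffix; // co.uk
```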
You will need to maintain a list of extensions for most accurate results I believe.
$possibleExtensions = array(
    '.com',
    '.co.uk',
    '.com.au'
);

// parse_url() needs a protocol.
$str = 'http://' . $str;

// Use parse_url() to take into account any paths
// or fragments that may end up being there.
$host = parse_url($str, PHP_URL_HOST);

foreach ($possibleExtensions as $ext) {
    if (preg_match('/' . preg_quote($ext, '/') . '\Z/', $host)) {
        $domainNameSuffix = $ext;
        // Strip the extension from the host (not from $str,
        // which may still carry a path)
        $domainName = substr($host, 0, -strlen($ext));
        var_dump($domainName, $domainNameSuffix);
        break;
    }
}
If you never have any paths or extra stuff, you can of course skip the parse_url() and the http:// adding and removal.
It worked for all your tests.
There isn't a builtin function for this.
A quick google search lead me to http://www.wallpaperama.com/forums/php-function-remove-domain-name-get-tld-splitter-split-t5824.html
This leads me to believe you need to maintain a list of valid TLD's to split URLs on.
Alright chaps, here's how I solved it, for now. Implementation of more domain names will be done at some point in the future; I don't know yet what technique I'll use.
# Setting options: single- and dual-part domain extensions
$v2_onePart = array(
    "com"
);
$v2_twoPart = array(
    "co.uk",
    "com.au"
);

$v2_url  = $_SERVER['SERVER_NAME'];  # "example.com" OR "example.com.au"
$v2_bits = explode(".", $v2_url);    # "example", "com" OR "example", "com", "au"
$v2_bits = array_reverse($v2_bits);  # "com", "example" OR "au", "com", "example"
                                     # (Reversing to eliminate foo.bar.example.com.au problems.)

switch (true) { # switch on true so each case's in_array() result is tested
    case in_array($v2_bits[1] . "." . $v2_bits[0], $v2_twoPart):
        $v2_class = $v2_bits[2] . " " . $v2_bits[1] . "_" . $v2_bits[0]; # "example com_au"
        break;
    case in_array($v2_bits[0], $v2_onePart):
        $v2_class = $v2_bits[1] . " " . $v2_bits[0]; # "example com"
        break;
}
I want to work with email addresses and split them using PHP's explode function.
It's easy to separate the user from the domain or the host like this:
list( $user, $domain ) = explode( '@', $email );
but when trying to explode the domain into domain name and domain extension, I realised that exploding on "." will not always give foo.bar; it can sometimes give foo.ba.ar, like fooooo.co.uk.
So how do I separate "fooooo.co" from "uk", leaving the co with the fooooo, so that I finally get the TLD separated from the rest?
I know that co.uk is supposed to be treated as the TLD, but it's not official, like fooooo.nat.tn or fooooo.gov.tn.
Thank You.
Just use strripos() to find the last occurrence of ".":
$blah = "hello.co.uk";
$i = strripos($blah, ".");
echo "name = " . substr($blah, 0, $i) . "\n";
echo "TLD = " . substr($blah, $i + 1) . "\n";
Better use imap_rfc822_parse_adrlist or mailparse_rfc822_parse_addresses to parse the email address if available. And for removing the “public suffix” from the domain name, see my answer to Remove domain extension.
Expanding on Oli's answer...
substr($address, (strripos($address, '.') + 1));
Will give the TLD without the '.'. Lose the +1 and you get the dot, too.
end(explode('.', $email)); will give you the TLD. To get the domain name without that, you can do any number of other string manipulation tricks, such as subtracting off that length.
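Putting the two pieces together (a sketch; note that end() expects a variable, so the explode() result is assigned first):

```php
<?php
$email = 'user@fooooo.co.uk'; // hypothetical example address

list($user, $domain) = explode('@', $email);

$parts = explode('.', $domain);
$tld   = end($parts);                             // "uk"
// Subtract the TLD's length (plus its dot) to get the rest.
$name  = substr($domain, 0, -(strlen($tld) + 1)); // "fooooo.co"
```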
I have a URL like this:
http://www.w3schools.com/PHP/func_string_str_split.asp
I want to split that URL to get only the host part. For that I am using
parse_url($url,PHP_URL_HOST);
It returns www.w3schools.com, but I want only 'w3schools.com'.
Is there any function for that, or do I have to do it manually?
There are many ways you could do this. A simple replace is the fastest if you know you always want to strip off 'www.'
$stripped=str_replace('www.', '', $domain);
A regex replace lets you bind that match to the start of the string:
$stripped=preg_replace('/^www\./', '', $domain);
If it's always the first part of the domain, regardless of whether its www, you could use explode/implode. Though it's easy to read, it's the most inefficient method:
$parts=explode('.', $domain);
array_shift($parts); //eat first element
$stripped=implode('.', $parts);
A regex achieves the same goal more efficiently:
$stripped=preg_replace('/^\w+\./', '', $domain);
Now you might imagine that the following would be more efficient than the above regex:
$period = strpos($domain, '.');
if ($period !== false)
{
    $stripped = substr($domain, $period + 1);
}
else
{
    $stripped = $domain; // there was no period
}
But I benchmarked it and found that over a million iterations, the preg_replace version consistently beat it. Typical results, normalized to the fastest (so it has a unitless time of 1):
Simple str_replace: 1
preg_replace with /^\w+\./: 1.494
strpos/substr: 1.982
explode/implode: 2.472
The above code samples always strip the first domain component, so will work just fine on domains like "www.example.com" and "www.example.co.uk" but not "example.com" or "www.department.example.com". If you need to handle domains that may already be the main domain, or have multiple subdomains (such as "foo.bar.baz.example.com") and want to reduce them to just the main domain ("example.com"), try the following. The first sample in each approach returns only the last two domain components, so won't work with "co.uk"-like domains.
explode:
$parts = explode('.', $domain);
$parts = array_slice($parts, -2);
$stripped = implode('.', $parts);
Since explode is consistently the slowest approach, there's little point in writing a version that handles "co.uk".
regex:
$stripped=preg_replace('/^.*?([^.]+\.[^.]*)$/', '$1', $domain);
This captures the final two parts from the domain and replaces the full string value with the captured part. With multiple subdomains, all the leading parts get stripped.
To work with ".co.uk"-like domains as well as a variable number of subdomains, try:
$stripped=preg_replace('/^.*?([^.]+\.(?:[^.]*|[^.]{2}\.[^.]{2}))$/', '$1', $domain);
str:
$end = strrpos($domain, '.') - strlen($domain) - 1;
$period = strrpos($domain, '.', $end);
if ($period !== false) {
    $stripped = substr($domain, $period + 1);
} else {
    $stripped = $domain;
}
Allowing for co.uk domains:
$len = strlen($domain);
if ($len < 7) {
    $stripped = $domain;
} else {
    if ($domain[$len - 3] === '.' && $domain[$len - 6] === '.') {
        $offset = -7;
    } else {
        $offset = -5;
    }
    $period = strrpos($domain, '.', $offset);
    if ($period !== false) {
        $stripped = substr($domain, $period + 1);
    } else {
        $stripped = $domain;
    }
}
The regex and str-based implementations can be made ever-so-slightly faster by sacrificing edge cases (where the primary domain component is a single letter, e.g. "a.com"):
regex:
$stripped=preg_replace('/^.*?([^.]{3,}\.(?:[^.]+|[^.]{2}\.[^.]{2}))$/', '$1', $domain);
str:
$period = strrpos($domain, '.', -7);
if ($period !== false) {
    $stripped = substr($domain, $period + 1);
} else {
    $stripped = $domain;
}
Though the behavior is changed, the rankings aren't (most of the time). Here they are, with times normalized to the quickest.
multiple subdomain regex: 1
.co.uk regex (fast): 1.01
.co.uk str (fast): 1.056
.co.uk regex (correct): 1.1
.co.uk str (correct): 1.127
multiple subdomain str: 1.282
multiple subdomain explode: 1.305
Here, the difference between times is so small that it wasn't unusual for the order to change between runs; the fast .co.uk regex, for example, often beat the basic multiple subdomain regex. Thus, the exact implementation shouldn't have a noticeable impact on speed. Instead, pick one based on simplicity and clarity. As long as you don't need to handle .co.uk domains, that would be the multiple subdomain regex approach.
You have to strip off the subdomain part yourself - there is no built-in function for this.
// $domain being www.w3schools.com
$domain = implode('.', array_slice(explode('.', $domain), -2));
The above example also works for subdomains of unlimited depth, as it'll always return the last two domain parts (the domain and the top-level domain).
If you only want to strip off www. you can simply do a str_replace(), which will indeed be faster:
$domain = str_replace('www.', '', $domain);
You need to strip off any characters before the first occurrence of the [.] character (along with the [.] itself), if and only if there is more than one occurrence of [.] in the returned string.
For example, if the returned string is www-139.in.ibm.com then the regular expression should return in.ibm.com, since that would be the domain.
If the returned string is music.domain.com then the regular expression should return domain.com.
In rare cases you can access the site without the server prefix, i.e. using http://domain.com/pageurl; in this case you would get the domain directly as domain.com, and the regex should not strip anything.
IMO this should be the pseudo-logic of the regex; if you want, I can form a regex for you that would include these things.
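For what it's worth, a sketch of that pseudo-logic in PHP (strip the first label only when the host contains more than one dot):

```php
<?php
// Strip everything up to and including the first dot, but only
// when more than one dot is present in the host.
function stripFirstLabel($host) {
    if (substr_count($host, '.') > 1) {
        return preg_replace('/^[^.]+\./', '', $host);
    }
    return $host;
}

echo stripFirstLabel('www-139.in.ibm.com'); // in.ibm.com
echo stripFirstLabel('music.domain.com');   // domain.com
echo stripFirstLabel('domain.com');         // domain.com
```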