I'm looking for a PHP router for my website. I currently have my files in the root directory, for example: index.php, details.php, signup.php, recover.php.
I want to move details.php, signup.php and recover.php into a directory named pages, and then access the links as mysite.com/details or mysite.com/?page=details instead of mysite.com/details.php... I really want to put them in a separate folder rather than the root.
Thanks everyone!
You simply want an MVC-style front controller.
I highly suggest you follow this tutorial series. No frameworks are used, yet it works.
https://www.youtube.com/watch?v=OsCTzGASImQ&list=PLfdtiltiRHWGXVHXX09fxXDi-DqInchFD
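For reference, a minimal front controller in the spirit of the mysite.com/?page=details style mentioned in the question might look like the rough sketch below. This is not the tutorial's code; the pages/ folder and page names are taken from the question, and the home.php default is just a placeholder:
<?php
// index.php - minimal front controller (rough sketch, not the tutorial's code)

// Whitelist of pages that live in the pages/ directory
$allowedPages = ['details', 'signup', 'recover'];

// Read ?page=... (basename() strips any directory components from the value)
$page = isset($_GET['page']) ? basename((string) $_GET['page']) : '';

if ($page === '') {
    // No page requested: show the home content (placeholder file name)
    include __DIR__ . '/pages/home.php';
} elseif (in_array($page, $allowedPages, true) && is_file(__DIR__ . '/pages/' . $page . '.php')) {
    include __DIR__ . '/pages/' . $page . '.php';
} else {
    http_response_code(404);
    echo 'Page not found';
}
A pretty URL like mysite.com/details would then just be a rewrite rule that sends every request to index.php with the page name passed as the page parameter.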
I wrote a script years ago, and I think it might well suit your needs.
Notes:
The script is controlled by the pattern 'location' at the top. I modified it (just inserted pages/ into the pattern) to suit your case, I hope.
You could replace every file you want to move down into the pages directory with this script - without any further modification.
The script handles most of the URI elements automatically, including port numbers. However, it does not handle log-in credentials (username and password), as many modern browsers and servers do not honor these.
Also, while this script is copyrighted, it is freeware, so you are allowed to use it in both non-commercial and commercial environments.
Code
<?php
/*
---------------------------------------------------------------------------
Redirector Script
Copyright (C) 2008, methodica.ch, http://www.methodica.ch
Purpose: Redirect GET request according the pattern 'location'
Pattern variables (embraced in curly brackets):
prot: Protocol (i.e. http | https)
host: Hostname (e.g. www.example.org)
port: Portnumber (80 | 443 | 81)
path: Path to the file (includes trailing '/', might be empty)
file: Filename
quer: Query, GET parameters (e.g. 'id=myId&mode=1&close=automatic')
Other text is taken literally
---------------------------------------------------------------------------
*/
define('location','{prot}://{host}{port}/{path}pages/{file}{quer}');
// Do not modify the code below! -------------------------------------------
// Get URI elements
$prot = ((isset($_SERVER['HTTPS'])) && (strtoupper($_SERVER['HTTPS']) == 'ON')) ? 'https' : 'http';
$server_name = getenv('SERVER_NAME');
$script_url = $prot . '://' . $server_name . ':' . getenv('SERVER_PORT') . $_SERVER['PHP_SELF'];
$xres = preg_match_all('%^(http[s]{0,1})://(.*?):[\d]*?/(.*/)*(.*)$%i', $script_url, $res, PREG_PATTERN_ORDER);
if (($xres !== false) && ($xres > 0)) {
    // Prepare variables (take the first match of each captured group)
    $prot = $res[1][0];
    $host = $res[2][0];
    $path = $res[3][0];
    $file = $res[4][0];
    $quer = $_SERVER['QUERY_STRING'];
    $xport = getenv('SERVER_PORT');
    $port = ($prot == 'https') ? (($xport == '443') ? '' : ":$xport") : (($xport == '80') ? '' : ":$xport");
    $location = location;
    // Replace variable references in pattern
    preg_match_all('/\{.*?\}/i', $location, $res, PREG_PATTERN_ORDER);
    for ($i = 0; $i < count($res[0]); $i++) {
        $varname = str_replace(array('{', '}'), '', $res[0][$i]);
        $srce = $res[0][$i];
        $dest = $$varname;
        if (is_array($dest)) $dest = $dest[0];
        // Only prepend '?' when there actually is a query string
        if ($srce == '{quer}') $dest = ($dest !== '') ? '?' . $dest : '';
        $location = str_replace($srce, $dest, $location);
    }
    // Send redirection header
    header("Location: $location");
    exit();
} else {
    // Something went awfully wrong
    echo "ERROR: Cannot parse URI:<br />'$script_url'";
    exit();
}
?>
I'm currently working on my own CRM website application, and I followed Alex's YouTube tutorial on login/register using OOP.
In addition, I need my index.php to be a dynamic content switcher: it only includes the header and footer, while the content is loaded from a folder that stores all the pages. I believe the end result should be something like www.example.com/index.php?page=profile.
I looked around and it seems like what I'm doing is similar to the MVC pattern, where index is the root file and all the content is loaded from a view folder.
I managed to get everything working correctly, but now, instead of displaying the link like www.example.com/user.php?name=jennifer,
I want it to be www.example.com/user/name/jennifer.
I tried looking around the phpacademy forum, but the forum seems to be abandoned. After some searching I managed to find a topic relevant to what I want, but the code doesn't seem to be working and I got the same error as the poster.
Here is the code:
<?php
// Define the root of the site (this page should be in the root)
define('ROOT', rtrim(__DIR__, '/') . '/');
define('PAGES', ROOT . 'pages/');

// Define "safe" files that can be loaded
$safeFiles = ["login", "register", "profile", "changepassword"];

// Get URL
if (isset($_GET['page']) && !empty($_GET['page'])) {
    $url = $_GET['page'];
} else {
    $url = '/';
}

// Remove path traversal sequences
$sanitize = array(
    // Basic
    '..', "..\\", '../', "\\",
    // Percent encoding
    '%2e%2e%2f', '%2e%2e/', '..%2f', '%2e%2e%5c', '%2e%2e', '..%5c', '%252e%252e%255c', '..%255c',
    // UTF-8 encoding
    '%c1%1c', '%c0%af', '..%c1%9c'
);
$url = str_replace($sanitize, '', $url);

// Prevent null byte (%00)
// PHP 5.6+ should take care of this automatically, but older versions do not
$url = str_replace(chr(0), '', $url);

// Filter URL
$url = filter_var($url, FILTER_SANITIZE_URL);

// Remove any extra slashes
$url = rtrim($url, '/');

// Make the URL lowercase
$url = strtolower($url);

// Build the path to the current page
$path = PAGES . $url . '.php';

// If the file is in our safe array & exists, load it!
if (in_array($url, $safeFiles) && file_exists($path)) {
    include($path);
} else {
    echo "404: Page not found!";
}
I searched around on Google but couldn't find a solution, and I noticed people asking in this forum as well, hence I hope someone can assist me in this area.
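Essentially, what I'm after is something along these lines (just a rough sketch of the idea, not tested code):
<?php
// index.php would receive every request (via a rewrite rule) and split
// the path into segments - a rough sketch only
$path = trim(parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH), '/');
$segments = ($path === '') ? array() : explode('/', $path);

// e.g. /user/name/jennifer -> page "user", params array('name' => 'jennifer')
$page = isset($segments[0]) ? $segments[0] : 'home'; // 'home' is just a placeholder default
$params = array();
for ($i = 1; $i + 1 < count($segments); $i += 2) {
    $params[$segments[$i]] = $segments[$i + 1];
}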
Is there any way to accept a URL and change its domain extension to .com?
For example, if a user were to submit www.example.in, I want to check if the URL is valid and change that to www.example.com. I have built a regex checker that can verify the URL is valid, but I'm not entirely sure how to check if the given extension is valid, and then change it to .com.
EDIT: To be clear, I am not actually visiting these URLs. I am getting them submitted as user input in a form and am simply storing them. These are operations I want to perform on the URL before storing, that is all.
Edit 2 : An example to make this clearer -
$url = 'www.example.co.uk';
$newurl = some_function($url); // placeholder for whatever the conversion function ends up being
echo $newurl;
which would yield the output
www.example.com
Are you looking for something like this on the server side, which replaces a selected list of TLDs with .com?
<?php
$url = "www.example.in";
$replacement_tld = "com";
# array of all TLDs you wish to support
$valid_tlds = array("in","co.uk");
# possible TLD source lists
# http://data.iana.org/TLD/tlds-alpha-by-domain.txt
# https://wiki.mozilla.org/TLD_List
# from http://stackoverflow.com/a/10473026/723139
function endsWith($haystack, $needle)
{
    $haystack = strtolower($haystack);
    $needle = strtolower($needle);
    return $needle === "" || substr($haystack, -strlen($needle)) === $needle;
}
foreach ($valid_tlds as $tld) {
    if (endsWith($url, $tld))
    {
        echo substr($url, 0, -strlen($tld)) . $replacement_tld . "\n";
        break;
    }
}
?>
Create an empty text file using a text editor such as notepad, and save it as htaccess.txt.
301 (Permanent) Redirect: Point an entire site to a different URL on a permanent basis. This is the most common type of redirect and is useful in most situations. In this example, we are redirecting to the "mt-example.com" domain:
# This allows you to redirect your entire website to any other domain
Redirect 301 / http://mt-example.com/
302 (Temporary) Redirect: Point an entire site to a different temporary URL. This is useful for SEO purposes when you have a temporary landing page and plan to switch back to your main landing page at a later date:
# This allows you to redirect your entire website to any other domain
Redirect 302 / http://mt-example.com/
For more details: http://kb.mediatemple.net/questions/242/How+do+I+redirect+my+site+using+a+.htaccess+file%3F
The question is not entirely clear; I'm assuming you wish to implement this logic on the PHP side.
Here's a useful function to parse such strings:
function parseUrl($url)
{
    $r  = "^(?:(?P<scheme>\w+)://)?";
    $r .= "(?:(?P<login>\w+):(?P<pass>\w+)@)?";
    $r .= "(?P<host>(?:(?P<subdomain>[\w\.\-]+)\.)?" . "(?P<domain>\w+\.(?P<extension>\w+)))";
    $r .= "(?::(?P<port>\d+))?";
    $r .= "(?P<path>[\w/]*/(?P<file>\w+(?:\.\w+)?)?)?";
    $r .= "(?:\?(?P<arg>[\w=&]+))?";
    $r .= "(?:#(?P<anchor>\w+))?";
    $r = "!$r!";

    preg_match($r, $url, $out);
    return $out;
}
You can parse URL, validate it, and then recreate from resulting array replacing anything you want.
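For example, to swap the extension for .com you could do something along these lines (a quick sketch using the function above; the rebuild only handles the host part):
$parts = parseUrl('www.example.in');
if (!empty($parts['host']) && !empty($parts['extension'])) {
    // Replace the trailing extension of the host with "com"
    $newHost = substr($parts['host'], 0, -strlen($parts['extension'])) . 'com';
    echo $newHost; // www.example.com
}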
If you want to practice regex and create your own patterns, this site is the best place to do it.
If your goal is to route users from one URL to another or to change the URI style, then you need to use mod_rewrite.
In that case you will end up configuring your web server, probably the virtual host, because it will only route the listed domains (those parked at the server).
To validate a URL in PHP you can use filter_var():
filter_var($url, FILTER_VALIDATE_URL)
and then, to get the top-level domain (TLD) and replace it with .com, you can use the following function:
$url="http://www.dslreports.in";
$ext="com";
function change_url($url,$ext)
{
if(filter_var($url, FILTER_VALIDATE_URL)) {
$tld = '';
$url_parts = parse_url( (string) $url );
if( is_array( $url_parts ) && isset( $url_parts[ 'host' ] ) )
{
$host_parts = explode( '.', $url_parts[ 'host' ] );
if( is_array( $host_parts ) && count( $host_parts ) > 0 )
{
$tld = array_pop( $host_parts );
}
}
$new_url= str_replace($tld,$ext,$url);
return $new_url;
}else{
return "Not a valid URl";
}
}
echo change_url($url,$ext);
Hope this helps!
I am trying to write a function to get a user's profile id or username from Facebook. They enter their URL into a form, and then I'm trying to figure out whether it's a Facebook profile page or some other page. The problem is that if they enter an app page or another page that has a subdomain, I would like to ignore that request.
Right now I have:
$author_url = "http://facebook.com/profile?id=12345";
if (preg_match("/facebook/i", $author_url)) {
    $parse_author_url = parse_url($author_url);
    $parse_author_url_q = $parse_author_url['query'];
    if (preg_match('/id[=]([0-9]*)/', $parse_author_url_q, $match)) {
        $fb_id = "/" . $match[1];
    } else {
        $fb_id = $parse_author_url['path'];
    }
    $grav_url = "http://graph.facebook.com" . $fb_id . "/picture?type=square";
}
echo $grav_url;
This works: if $author_url has "id=", that is used as the profile id; if not, it must be a username or page name, so that is used instead. I need to run one more check so that if the URL contains facebook but is on a subdomain, it is ignored. I believe I can do that in the first preg_match: preg_match("/facebook/i", $author_url)
Thanks!
To ignore facebook subdomains you can ensure that
$parse_author_url['host']
is facebook.com.
If it's anything else, like login.facebook.com or apps.facebook.com, you need not proceed.
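A minimal sketch of that check (the www.facebook.com variant is included here as an assumption, since profile links often carry the www. prefix):
$host = isset($parse_author_url['host']) ? strtolower($parse_author_url['host']) : '';
if ($host === 'facebook.com' || $host === 'www.facebook.com') {
    // Plain profile/page URL - go on and build $grav_url
} else {
    // login.facebook.com, apps.facebook.com, etc. - ignore the request
}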
Alternatively, you can ensure that the URL begins with http://facebook.com:
if (preg_match("#^(?:http://)?facebook\.com#i", $author_url)) {
This isn't a direct solution to what you were asking, but the parts you need are here.
I found that a subdomain resulted in an issue with parse_url: it returned an array with only $result['path'] and no 'host' or 'scheme'.
My theory here is that if parse_url returns no 'host' or 'scheme' and the string contains a domain suffix (.ext), it is a subdomain.
Here is the code:
($src is a URL; I had to sort out relative src values from subdomains):
$srcA = parse_url($src);
//..if no scheme or host, test if subdomain.
if (empty($srcA['scheme']) && empty($srcA['host'])) {
    //..this string / array is set elsewhere but for this example I will put it here
    $tld = "AC,AD,AE,AERO,AF,AG,AI,AL,AM,AN,AO,AQ,AR,ARPA,AS,ASIA,AT,AU,AW,AX,AZ,BA,BB,BD,BE,BF,BG,BH,BI,BIZ,BJ,BM,BN,BO,BR,BS,BT,BV,BW,BY,BZ,CA,CAT,CC,CD,CF,CG,CH,CI,CK,CL,CM,CN,CO,COM,COOP,CR,CU,CV,CW,CX,CY,CZ,DE,DJ,DK,DM,DO,DZ,EC,EDU,EE,EG,ER,ES,ET,EU,FI,FJ,FK,FM,FO,FR,GA,GB,GD,GE,GF,GG,GH,GI,GL,GM,GN,GOV,GP,GQ,GR,GS,GT,GU,GW,GY,HK,HM,HN,HR,HT,HU,ID,IE,IL,IM,IN,INFO,INT,IO,IQ,IR,IS,IT,JE,JM,JO,JOBS,JP,KE,KG,KH,KI,KM,KN,KP,KR,KW,KY,KZ,LA,LB,LC,LI,LK,LR,LS,LT,LU,LV,LY,MA,MC,MD,ME,MG,MH,MIL,MK,ML,MM,MN,MO,MOBI,MP,MQ,MR,MS,MT,MU,MUSEUM,MV,MW,MX,MY,MZ,NA,NAME,NC,NE,NET,NF,NG,NI,NL,NO,NP,NR,NU,NZ,OM,ORG,PA,PE,PF,PG,PH,PK,PL,PM,PN,POST,PR,PRO,PS,PT,PW,PY,QA,RE,RO,RS,RU,RW,SA,SB,SC,SD,SE,SG,SH,SI,SJ,SK,SL,SM,SN,SO,SR,ST,SU,SV,SX,SY,SZ,TC,TD,TEL,TF,TG,TH,TJ,TK,TL,TM,TN,TO,TP,TR,TRAVEL,TT,TV,TW,TZ,UA,UG,UK,US,UY,UZ,VA,VC,VE,VG,VI,VN,VU,WF,WS,XXX,YE,YT,ZA,ZM,ZW";
    $tldA = explode(',', strtolower($tld));
    $isSubdomain = false;
    foreach ($tldA as $tld) {
        if (strstr($src, '.' . $tld) != false) {
            $isSubdomain = true;
            break;
        }
    }
    //..prefixing with the $host if it is not a subdomain.
    $src = $isSubdomain ? $src : $host . '/' . $srcA['path'];
}
You could add a further confirmation by taking the part of the subdomain==true strings before the first '/' and testing it against valid hostname characters with a regex, as sketched below.
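A rough sketch of that extra check (the hostname pattern is deliberately simplified and only meant as an illustration):
$parts = explode('/', $src);
$hostPart = $parts[0]; // the part before the first '/'
// Letters, digits, dots and hyphens only - a simplified hostname test
$looksLikeHost = (bool) preg_match('/^[a-z0-9][a-z0-9.\-]*\.[a-z]{2,}$/i', $hostPart);
if ($isSubdomain && !$looksLikeHost) {
    $isSubdomain = false; // the TLD scan hit a false positive
}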
Hope this helps some people out.
I'm looking for a method (or function) to strip out the domain.ext part of any URL that's fed into the function. The domain extension can be anything (.com, .co.uk, .nl, .whatever), and the URL that's fed into it can be anything from http://www.domain.com to www.domain.com/path/script.php?=whatever.
What's the best way to go about doing this?
parse_url turns a URL into an associative array:
php > $foo = "http://www.example.com/foo/bar?hat=bowler&accessory=cane";
php > $blah = parse_url($foo);
php > print_r($blah);
Array
(
[scheme] => http
[host] => www.example.com
[path] => /foo/bar
[query] => hat=bowler&accessory=cane
)
You can also write a regular expression to get exactly what you want.
Here is my attempt at it:
$pattern = '/\w+\..{2,3}(?:\..{2,3})?(?:$|(?=\/))/i';
$url = 'http://www.example.com/foo/bar?hat=bowler&accessory=cane';
if (preg_match($pattern, $url, $matches) === 1) {
echo $matches[0];
}
The output is:
example.com
This pattern also takes into consideration domains such as 'example.com.au'.
Note: I have not consulted the relevant RFC.
You can use parse_url() to do this:
$url = 'http://www.example.com';
$domain = parse_url($url, PHP_URL_HOST);
$domain = str_replace('www.','',$domain);
In this example, $domain should contain example.com, irrespective of whether it has www or not. It also works for domains such as example.co.uk.
The following code will trim the protocol, domain, and port from an absolute URL:
$urlWithoutDomain = preg_replace('#^.+://[^/]+#', '', $url);
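For example (a quick sketch of what this produces for a sample URL):
$url = 'https://www.example.com:8080/path/script.php?x=1';
$urlWithoutDomain = preg_replace('#^.+://[^/]+#', '', $url);
echo $urlWithoutDomain; // outputs: /path/script.php?x=1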
Here are a couple of simple functions to get the root domain (example.com) from a normal or long domain (test.sub.domain.com) or URL (http://www.example.com).
/**
 * Get root domain from full domain
 * @param string $domain
 */
public function getRootDomain($domain)
{
    $domain = explode('.', $domain);
    $tld = array_pop($domain);
    $name = array_pop($domain);
    $domain = "$name.$tld";
    return $domain;
}

/**
 * Get domain name from url
 * @param string $url
 */
public function getDomainFromUrl($url)
{
    $domain = parse_url($url, PHP_URL_HOST);
    $domain = $this->getRootDomain($domain);
    return $domain;
}
Solved this...
Say we're calling dev.mysite.com and we want to extract 'mysite.com'
$requestedServerName = $_SERVER['SERVER_NAME']; // = dev.mysite.com
$thisSite = explode('.', $requestedServerName); // site name now an array
array_shift($thisSite); //chop off the first array entry eg 'dev'
$thisSite = join('.', $thisSite); //join it back together with dots ;)
echo $thisSite; //outputs 'mysite.com'
Works with mysite.co.uk too, so it should work everywhere :)
I spent some time thinking about whether it makes sense to use a regular expression for this, but in the end I think not.
firstresponder's regexp came close to convincing me it was the best way, but it didn't work on anything missing a trailing slash (so http://example.com, for instance). I fixed that with the following: '/\w+\..{2,3}(?:\..{2,3})?(?=[\/\W])/i', but then I realized that matches twice for urls like 'http://example.com/index.htm'. Oops. That wouldn't be so bad (just use the first one), but it also matches twice on something like this: 'http://abc.ed.fg.hij.kl.mn/', and the first match isn't the right one. :(
A co-worker suggested just getting the host (via parse_url()) and then taking the last two or three array parts (split on '.'). Whether it is two or three would be based on a list of domains, like 'co.uk', etc. Making up that list becomes the hard part. A sketch of that idea follows.
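A rough sketch of that idea, with a deliberately tiny list of multi-part suffixes (a real list would come from something like the Public Suffix List mentioned below):
function rootDomainFromUrl($url) {
    // Hypothetical helper illustrating the "last two or three parts" idea
    $host = parse_url($url, PHP_URL_HOST);
    if (empty($host)) {
        return null;
    }
    // Tiny sample of two-part public suffixes; a real list is much longer
    $twoPartSuffixes = array('co.uk', 'com.au', 'co.nz');
    $parts = explode('.', $host);
    $lastTwo = implode('.', array_slice($parts, -2));
    if (in_array($lastTwo, $twoPartSuffixes, true)) {
        return implode('.', array_slice($parts, -3)); // e.g. example.co.uk
    }
    return $lastTwo; // e.g. example.com
}
echo rootDomainFromUrl('http://www.example.co.uk'); // example.co.uk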
There is only one correct way to extract domain parts, and that is to use the Public Suffix List (a database of TLDs). I recommend the TLDExtract package; here is sample code:
$extract = new LayerShifter\TLDExtract\Extract();
$result = $extract->parse('www.domain.com/path/script.php?=whatever');
$result->getSubdomain(); // will return (string) 'www'
$result->getHostname(); // will return (string) 'domain'
$result->getSuffix(); // will return (string) 'com'
This function should work:
function Delete_Domain_From_Url($Url = false)
{
    if ($Url) {
        $Url_Parts = parse_url($Url);
        $Url  = isset($Url_Parts['path']) ? $Url_Parts['path'] : '';
        $Url .= isset($Url_Parts['query']) ? "?" . $Url_Parts['query'] : '';
    }
    return $Url;
}
To use it:
$Url = "https://stackoverflow.com/questions/176284/how-do-you-strip-out-the-domain-name-from-a-url-in-php";
echo Delete_Domain_From_Url($Url);
# Output:
#/questions/176284/how-do-you-strip-out-the-domain-name-from-a-url-in-php