I have a TYPO3 Web site that needs to have its home page (and only its home page) served over SSL.
My first stab at dealing with this was to install the HTTPS Enforcer extension, which lets you specify particular pages in your TYPO3 site that should be forced to HTTPS. At that level, the extension works as advertised. But the problem is that while requests for one of those pages are indeed handled over SSL, resources included inline in the page (like images) are not delivered over SSL. So you get a warning in your browser (which, depending on the browser, can range from a quiet information message to a full-out screaming warning page) telling you that the page isn't completely secure, which (understandably) freaks people out.
So my question is -- how do you get TYPO3 to deliver a complete page over SSL, including static resources? Is there some way to configure/extend HTTPS Enforcer to do that? Is there another extension that's better in this scenario? Or am I just completely out of luck?
HTTPS Enforcer does a good job.
If it's just one page, you can create a condition to change the baseUrl:
[PIDinRootline = 123]
config.baseURL = https://www.example.com/
[global]
If it should work for a whole subdomain (e.g. ssl.example.com), your condition looks like this:
[globalString = ENV:HTTP_HOST=ssl.example.com]
config.baseURL = https://ssl.example.com/
[global]
With the second approach, you can choose on a per-page basis whether a page should be encrypted or not.
A pitfall might be externally loaded resources (like the Facebook API etc.), which might not offer an SSL-encrypted service.
EDIT (from #cascaval's comment) This might be the preferred solution:
[globalString = _SERVER|HTTPS=on]
config.baseURL = https://ssl.example.com/
[global]
EDIT (from #konsolenfreddy's comment)
[globalString = ENV:TYPO3_SSL=1]
config.baseURL = https://ssl.example.com/
[global]
I guess it should be:
[globalVar = IENV:TYPO3_SSL = 1]
config.baseURL = https://ssl.example.com/
[global]
Note the "IENV": This is TYPO3 specific. "ENV" would only use the normal PHP variables in $_ENV or $_SERVER where TYPO3_SSL is not a valid key.
But what this does is only the following: it sets a <base> tag in the output, so relative links, e.g. <img src="uploads/pics/image.jpg" />, will be fetched over SSL.
If you have asset links (images, CSS, etc.) with absolute URLs in your site, this won't help. In such a case you could give the extension "https" a try (a merge of https_enforcer and another extension), or use stfl_replace to do some regex replacing of "http://" links with "https://".
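If you don't want to pull in another extension, the same regex idea can be sketched in plain PHP with an output buffer. This is only a minimal illustration (the domain is a placeholder), not the actual extension's code:
<?php
// Minimal sketch: rewrite absolute http:// links to our own domain
// into https:// in the rendered page before it is sent to the browser.
ob_start(function ($html) {
    return preg_replace('#http://(www\.example\.com)#i', 'https://$1', $html);
});
?>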
I've been trying to get the URL (including GET parameters) of a site that is displaying my image. This is because I want to extract one parameter of the URL.
A friend told me that she knew someone who could achieve this, but I don't know if he was doing it with an image. Also, I don't think I can do it with a link, because when going to external sites a warning page appears saying that you're being redirected outside; so if I put a link to my page and someone clicks it, I will get the referrer URL of the redirection warning page. I can't be sure my friend was telling the truth about this, but it's very likely true.
All I could get with the image was the IP and other details from the HTTP headers, but the referrer part is empty, and I thought the referrer contained the full URL I'm talking about.
This is what I have tried.
First, the img tag on the other site, in BBCode:
[img]http://______.com/get_image.php?i=myimage[/img]
And on my site, this PHP script (although any language that does the job would be fine for me):
<?php
// Get name of image to be displayed (non-sanitized here for simplicity)
$filename = $_GET["i"];
// Here I want to get the site where image is being viewed
if (!empty($_SERVER['HTTP_REFERER'])) {
$visitor_url = $_SERVER['HTTP_REFERER'];
} else {
$visitor_url = "none";
}
// And write the referrer to a file just to test if it works
$fp = fopen('referer.txt', 'w');
fwrite($fp, $visitor_url);
fclose($fp);
// Finally, display the image
header('Content-Type: image/png');
readfile($filename . '.png');
?>
So my questions are:
Is it possible to get the full URL of a site that is displaying my image?
If not, is there any other method to get the full URL?
Thank you in advance.
Note: I don't have any permissions on the other site where I'm posting the image; I'm just a user there. Please tell me if I'm missing something or if I have to ask this in another way, as I'm new to Stack Overflow.
Try REMOTE_HOST instead of HTTP_REFERER:
// Here I want to get the site where image is being viewed
if (!empty($_SERVER['REMOTE_HOST'])) {
$visitor_url = $_SERVER['REMOTE_HOST'];
} else {
$visitor_url = "none";
}
The web server where you are serving the image will need to be configured properly. If you're using Apache, this is done with HostNameLookups On.
See http://php.net/manual/en/reserved.variables.server.php
Normally browsers send the full referrer with all URL components, including query parameters ($_GET params). If they don't, there is no other way to obtain that URL while serving an image's content.
Sometimes sending the referrer may be blocked, e.g. in some batch URL processing by a crawler-like program/script, or by some proxies.
In PHP, reading the referrer is done via $_SERVER['HTTP_REFERER'], because it is normally just an HTTP header from the request, and it's the only $_SERVER array key with referrer info.
You added the .htaccess tag, so I think you're using the Apache web server. If you'd like to prevent the issue entirely, you can disable hotlinking by going one layer lower: instead of managing it in PHP, configure the web server not to serve content to domains other than the one you are hosting.
Check out the guide for more details.
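For reference, a typical hotlink-protection sketch in .htaccess (example.com stands in for your own domain):
RewriteEngine On
RewriteCond %{HTTP_REFERER} !^$
RewriteCond %{HTTP_REFERER} !^https?://(www\.)?example\.com/ [NC]
RewriteRule \.(png|jpe?g|gif)$ - [F]
The first condition lets through requests with an empty referrer, so direct visits and privacy-conscious browsers still get the images.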
I fixed this problem by switching my site (where image is hosted) to HTTPS. The code in my question was doing its job correctly.
It looks like HTTP_REFERER was blank because the request was coming from an HTTPS site while my site was HTTP, so it would always be sent blank. I was aware that this could be a problem, but it didn't make much sense to me because HTTP_REFERER was also blank when coming from another HTTP site (which I think is not normal), so I thought the error was elsewhere.
Usually HTTP_REFERER is sent in these cases:
from HTTP to HTTP
from HTTPS to HTTPS
from HTTP to HTTPS
But it's not sent in this case:
from HTTPS to HTTP
And in my case, I don't know why, it wasn't being sent even from HTTP to HTTP, which was confusing me.
I am a beginner in PHP and I am in the process of converting all my http links to https.
Following is my code in footer.php:
function css_generator() {
    /* #footer_background_image */
    .td-footer-wrapper::before {
        background-image: url('#footer_background_image');
    }
    $td_css_compiler->load_setting('footer_background_image');
Where can I apply the preg_replace function to replace the http link with https? The value of footer_background_image is always generated as http.
Thanks
You're looking at this the wrong way. WordPress has built-in support for HTTPS for the back end, which can be enabled in wp-config.php, and for the front end, which can be used by changing the URL in your admin -> Reading page.
If you have a bunch of hardcoded links rather than soft, WordPress-generated links, you can choose to use .htaccess to force the user over to HTTPS, as sketched below.
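A common .htaccess sketch for forcing HTTPS (this assumes Apache with mod_rewrite; adapt it to your setup):
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]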
Do note that HTTPS data cannot be cached, and this may make your site slower for visitors. Depending on the type of site, this can be a big deal.
I have my site on the server http://www.myserver.uk.com.
On this server I have two domains:
one.com and two.com
I would like to get the current domain using PHP, but if I use $_SERVER['HTTP_HOST'] then it is showing me
myserver.uk.com
instead of:
one.com or two.com
How can I get the domain, and not the server name?
Try using this:
$_SERVER['SERVER_NAME']
Or parse:
$_SERVER['REQUEST_URI']
Reference: apache_request_headers()
The best use would be
echo $_SERVER['HTTP_HOST'];
And it can be used like this:
if (strpos($_SERVER['HTTP_HOST'], 'banana.com') !== false) {
echo "Yes this is indeed the banana.com domain";
}
The code below is a good way to see all the variables in $_SERVER in a structured HTML output, with your keyword highlighted, and it halts directly after execution. Since I sometimes forget which one to use myself, I think this can be nifty.
<?php
// Change banana.com to the domain you were looking for..
$wordToHighlight = "banana.com";
$serverVarHighlighted = str_replace( $wordToHighlight, '<span style=\'background-color:#883399; color: #FFFFFF;\'>'. $wordToHighlight .'</span>', $_SERVER );
echo "<pre>";
print_r($serverVarHighlighted);
echo "</pre>";
exit();
?>
The only secure way of doing this
The only guaranteed secure method of retrieving the current domain is to store it in a secure location yourself.
Most frameworks take care of storing the domain for you, so you will want to consult the documentation for your particular framework. If you're not using a framework, consider storing the domain in one of the following places:
Secure methods of storing the domain, and who uses them:
A configuration file: Joomla, Drupal/Symfony
The database: WordPress
An environmental variable: Laravel
A service registry: Kubernetes DNS
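As a minimal illustration of the configuration-file approach (the file name and domain are placeholders, not any particular framework's API):
<?php
// config.php: pin the canonical domain yourself instead of trusting request headers.
return ['base_url' => 'https://www.example.com'];

// Elsewhere in the application:
$config = require __DIR__ . '/config.php';
$domain = parse_url($config['base_url'], PHP_URL_HOST); // "www.example.com"
?>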
The following work... but they're not secure
Hackers can make the following variables output whatever domain they want. This can lead to cache poisoning and barely noticeable phishing attacks.
$_SERVER['HTTP_HOST']
This gets the domain from the request headers, which are open to manipulation by hackers. Same with:
$_SERVER['SERVER_NAME']
This one can be made better if the Apache setting UseCanonicalName is turned on, in which case $_SERVER['SERVER_NAME'] will no longer be populated with arbitrary values and will be secure. This is, however, non-default and not as common a setup.
In popular systems
Below is how you can get the current domain in the following frameworks/systems:
WordPress
$urlparts = parse_url(home_url());
$domain = $urlparts['host'];
If you're constructing a URL in WordPress, just use home_url or site_url, or any of the other URL functions.
Laravel
request()->getHost()
The request()->getHost() function is inherited from Symfony, and has been secure since CVE-2013-4752 was patched in 2013.
Drupal
The installer does not yet take care of making this secure (issue #2404259), but in Drupal 8 there is documentation you can follow at Trusted Host Settings to secure your Drupal installation, after which the following can be used:
\Drupal::request()->getHost();
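The Trusted Host Settings themselves live in settings.php; a minimal example (the pattern is a placeholder for your own domain):
$settings['trusted_host_patterns'] = [
    '^www\.example\.com$',
];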
Other frameworks
Feel free to edit this answer to include how to get the current domain in your favorite framework. When doing so, please include a link to the relevant source code or to anything else that would help me verify that the framework is doing things securely.
Addendum
Exploitation examples:
Cache poisoning can happen if a botnet continuously requests a page using the wrong Host header. The resulting HTML will then include links to the attacker's website, where they can phish your users. At first the malicious links will only be sent back to the hacker, but if the hacker makes enough requests, the malicious version of the page will end up in your cache, where it will be distributed to other users.
A phishing attack can happen if you store links in the database based on the Host header. For example, let's say you store the absolute URL to users' profiles on a forum. By using the wrong header, a hacker could get anyone who clicks on their profile link sent to a phishing site.
Password reset poisoning can happen if a hacker uses a malicious Host header when filling out the password reset form for a different user. That user will then get an email containing a password reset link that leads to a phishing site. Another, more complex form of this skips the user having to do anything by getting the email to bounce and resend to one of the hacker's SMTP servers (for example, CVE-2017-8295).
Here are some more malicious examples
Additional Caveats and Notes:
When UseCanonicalName is turned off, $_SERVER['SERVER_NAME'] is populated with the same header $_SERVER['HTTP_HOST'] would have used anyway (plus the port). This is Apache's default setup. If you or DevOps turn this on then you're okay -- ish -- but do you really want to rely on a separate team, or yourself three years in the future, to keep what would appear to be a minor configuration at a non-default value? Even though this makes things secure, I would caution against relying on this setup.
Red Hat, however, does turn UseCanonicalName on by default [source].
If ServerAlias is used in the virtual host entry and the aliased domain is requested, $_SERVER['SERVER_NAME'] will not return the current domain, but will return the value of the ServerName directive.
If the ServerName cannot be resolved, the operating system's hostname command is used in its place [source].
If the Host header is left out, the server will behave as if UseCanonicalName was on [source].
Lastly, I just tried exploiting this on my local server and was unable to spoof the Host header. I'm not sure if there was an update to Apache that addressed this, or if I was just doing something wrong. Regardless, this header would still be exploitable in environments where virtual hosts are not being used.
A Little Rant:
This question received hundreds of thousands of views without a single mention of the security problems at hand! It shouldn't be this way, but just because a Stack Overflow answer is popular, that doesn't mean it is secure.
Using $_SERVER['HTTP_HOST'] gets me (subdomain.)maindomain.extension. It seems like the easiest solution to me.
If you're actually 'redirecting' through an iFrame, you could add a GET parameter which states the domain.
<iframe src="myserver.uk.com?domain=one.com"/>
And then you could set a session variable that persists this data throughout your application.
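A minimal sketch of that idea (the parameter name and fallback domain are illustrative):
<?php
session_start();
// Remember the domain passed in by the iframe for the rest of the session.
if (isset($_GET['domain'])) {
    $_SESSION['domain'] = $_GET['domain'];
}
$currentDomain = isset($_SESSION['domain']) ? $_SESSION['domain'] : 'one.com';
?>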
Try $_SERVER['SERVER_NAME'].
Tip: Create a PHP file that calls the function phpinfo() and see the "PHP Variables" section. There are a bunch of useful variables there that we never think of.
To get the domain:
$_SERVER['HTTP_HOST']
Domain with protocol:
// Note: $_SERVER['SERVER_PROTOCOL'] only contains something like "HTTP/1.1",
// never "https", so the scheme has to be checked via $_SERVER['HTTPS'] instead:
$protocol = (!empty($_SERVER['HTTPS']) && $_SERVER['HTTPS'] !== 'off') ? 'https' : 'http';
$domainLink = $protocol . '://' . $_SERVER['HTTP_HOST'];
Protocol, domain, and query string together:
$url = $protocol . '://' . $_SERVER['HTTP_HOST'] . '?' . $_SERVER['QUERY_STRING'];
Note that $_SERVER['SERVER_NAME'] is not reliable for multi-domain hosting!
I know this might not be entirely on the subject, but in my experience, I find storing the WWW-ness of the current URL in a variable useful.
In addition, please see my comment below, to see what this is getting at.
This is important when determining whether to dispatch Ajax calls with "www", or without:
$.ajax("url" : "www.site.com/script.php", ...
$.ajax("url" : "site.com/script.php", ...
When dispatching an Ajax call, the domain name must match the one in the browser's address bar; otherwise you will get an Uncaught SecurityError in the console.
So I came up with this solution to address the issue:
<?php
$WWW = (substr($_SERVER['SERVER_NAME'], 0, 3) === "www");
if ($WWW) {
/* We have www.example.com */
} else {
/* We have example.com */
}
?>
Then, based on whether $WWW is true or false, run the proper Ajax call.
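For instance, a hypothetical way to hand the matching base URL to your JavaScript (variable and domain names are illustrative):
<?php
// Emit an Ajax base URL that matches the address bar, so the
// same-origin policy is satisfied on both www and non-www visits.
$ajaxBase = ($WWW ? 'www.' : '') . 'site.com';
echo '<script>var ajaxBase = "//' . $ajaxBase . '/";</script>';
?>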
I know this might sound trivial, but this is such a common problem that it's easy to trip over.
Everybody is using the parse_url function, but sometimes a user may pass the argument in different formats.
To fix that, I have created a function. Check this out:
function fixDomainName($url='')
{
$strToLower = strtolower(trim($url));
$httpPregReplace = preg_replace('/^http:\/\//i', '', $strToLower);
$httpsPregReplace = preg_replace('/^https:\/\//i', '', $httpPregReplace);
$wwwPregReplace = preg_replace('/^www\./i', '', $httpsPregReplace);
$explodeToArray = explode('/', $wwwPregReplace);
$finalDomainName = trim($explodeToArray[0]);
return $finalDomainName;
}
Just pass the URL and get the domain.
For example,
echo fixDomainName('https://stackoverflow.com');
will return:
stackoverflow.com
And in another situation:
echo fixDomainName('stackoverflow.com/questions/id/slug');
And it will also return stackoverflow.com.
This quick & dirty approach works for me.
Whichever way you get the string containing the domain you want to extract, i.e. using a superglobal ($_SERVER['SERVER_NAME']) or, say, in Drupal, global $base_url, regex is your friend:
global $base_url;
preg_match("/\w+\.\w+$/", $base_url, $matches);
$domain = $matches[0];
The particular regex string I am using in the example will only capture the last two components of the $base_url string, of course, but you can add as many "\w+\." groups as desired.
Hope it helps.
What I'm trying to do is handle multiple versions of the same web application, somewhat like Google does with some of their products, where you get the "Try the new version" link.
The goal is to have both a "stable" and a "beta" version of the webapp and letting the users try out the new features without forcing them (and their bugs) on them.
Now, a very simple way of doing this would be to put each version in its own subfolder, like www.mywebapp.com/v1 and www.mywebapp.com/v2.
However, I would like this to be transparent to the user and the webapp URL to stay the same (e.g.: www.mywebapp.com/).
Which version must be loaded is determined server-side after the user logs in (e.g.: active version for the given user is stored in the DB) and may be later changed when the user clicks on the "try the new version"/"go back to the old version" links.
On the server side I must make do with MySQL, PHP and Apache.
I have already managed to get this working by placing each version into its own subfolder, then storing version information in cookies (updated by the server at each login or page refresh) and using RewriteRules to "proxy" requests from the base/versionless URL to the proper subfolder. If no cookie is set, a default folder is selected by a fallback RewriteRule.
This kludge works, but it feels extremely fragile and puts an additional burden on the Apache daemon, so here I am, asking if anybody knows a better way of doing this.
Thanks!
.htaccess allows for rewrites based on the contents of cookies. Since Apache is AWESOME at redirects and PHP is adequate, I would handle it that way.
This example tests whether there is a vers cookie. If there is, it adds 'vers=' plus whatever was in the vers cookie to the request.
RewriteEngine On
RewriteBase /
RewriteCond %{HTTP_COOKIE} vers=([^;]+) [NC]
RewriteRule ^(.*)$ /$1?vers=%1 [NC,L,QSA]
(that example can be found here)
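On the PHP side, a hypothetical sketch of consuming the rewritten vers parameter might look like this (the folder naming is illustrative):
<?php
// Pick the version folder from the rewritten query parameter, whitelisting
// characters so the cookie value can't be abused for path traversal.
$vers = isset($_GET['vers']) ? preg_replace('/[^0-9A-Za-z.]/', '', $_GET['vers']) : 'v1';
$vers = str_replace('..', '', $vers); // defuse any remaining traversal attempt
require __DIR__ . '/' . $vers . '/index.php';
?>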
Use a single RewriteRule to reroute all requests to a single route.php file in the root folder of your website.
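For instance, a minimal .htaccess sketch for that single route, assuming Apache mod_rewrite:
RewriteEngine On
# Don't rewrite the router itself; send everything else through it.
RewriteCond %{REQUEST_URI} !^/route\.php
RewriteRule ^ route.php [L,QSA]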
The route.php file should look like this:
$user = authenticateUser();
$version = $user->getPreferredVersion();
$filePath = $_SERVER['DOCUMENT_ROOT'].'v'.$version.$_SERVER['REQUEST_URI'];
if( !file_exists( $filePath ) ) {
header("Status: 404 Not Found", true, 404 );
die();
}
$pathDetails = pathinfo( $filePath );
if( $pathDetails['extension'] == 'php' ) {
require( $filePath );
} else {
if( $pathDetails['extension'] == 'jpg' ) {
header( 'Content-Type: image/jpeg' );
} elseif( $pathDetails['extension'] == 'gif' ) {
...
} elseif (...) {
...
} else {
// unsupported file type
header("Status: 404 Not Found", true, 404 );
die();
}
echo file_get_contents( $filePath );
}
This is an outline of what you should do in route.php; there are some other security and technical issues that you should take care of in that file.
I think a better way in PHP would be to simply query the database, get the desired version, and then include() it from a subfolder. That way it's transparent to the user.
For instance, assuming the user has opted in for the new beta, you save his entry in the database,
UPDATE `versions_table` SET `version` = '1.02b' WHERE `userid` = 5
And then when he accesses the page, you have something like this going on:
//PDO Connection here, skipped for example purposes
$stmt = $pdo->query('SELECT `version` from `versions_table` WHERE `userid` = 5 LIMIT 1');
//Of course 5 is only an example, in actual code that would be a variable
//representing the actual user ID.
$row = $stmt->fetch(); //Should be only one row.
include_once($row['version'].'/myApp'.'.php'); //Include 1.02b/myApp.php
While it is possible to use a routing mechanism and session information to make rules, I would rather keep your current solution. All you'd do by implementing it in PHP is push the load from Apache (which already has a very fast rewrite engine) to the PHP binary. Also, you would put your application at risk of interfering versions due to incompatibilities of data, cookies and other session-based variables.
On the other hand, you have the benefit of reusability of shared objects like libraries, domain models, data mappers, etc. But in my opinion that advantage doesn't outweigh the worse performance and the interference risk.
So in one phrase, I believe your current solution is best.
Load your versioned pages in an IFRAME, whose src would be e.g. "webapp.com/v2".
So whichever version the user selects, your address bar will read webapp.com, but the IFRAME URL keeps changing depending on the version.
There is no need to write rewrite rules.
This might help: here is what I have been using for a month now.
Keep all the files in the database (path, code, version=Decimal(3,1), flag: stable=3 / lab=2 / beta=1 / alpha=0).
Have .htaccess redirect all non-existing files (don't redirect images, CSS, JS or other static, non-versioned files) internally to loader.pm (for you, loader.ph_); don't use the same extension as your other files, for differentiation:
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^ loader.pm [L,QSA]
The loader loads the latest stable version from the database, or the beta version if no stable version exists:
SELECT code, version, flag FROM table WHERE path = ? AND flag > 0 ORDER BY flag DESC, version DESC LIMIT 1
(The flag > 0 condition keeps alpha pages that are under development from being shown.)
Output a 404 error if there are no results. And when a version is specified, like ?v=2.0, the loader uses a different query:
SELECT code, version, flag FROM table WHERE path = ? AND version <= ? ORDER BY flag DESC LIMIT 1
Assumptions:
A user wants either stable or latest, so you actually don't need more than 2 files per path, unless you want to keep old versions for showcase. If the user wants latest, use .. ORDER BY version DESC, flag DESC, ..
The user doesn't care to know which version it is; hence, instead of setting the same version for the whole website at once, we set the version of each file individually, so tracking development is easy.
Problems:
No index/primary key
You need to make sure that you don't have duplicate entries; it will not break your website, but it will not look good ;)
If the same beta & stable version exists for a file, and the user has opted for the latest (beta), then you might accidentally deliver the beta file instead of the stable file.
The query-string variable ?v= is reserved for selecting versions, but that is fine; you may choose ?var= instead.
There is a cloud platform that does this automatically upon deployment. www.cycligent.com
It is just a matter of setting a cookie once the two versions are deployed. A lot less work than some of the other answers suggest.
Full Disclosure: I work for Cycligent.
What I want to ask is whether there is a way to find out if a web-server instance has URL rewriting enabled. I need this in order to instantiate the correct type of URL handler.
Theoretically you know in advance whether you have it enabled and can use something to configure it. I would like, however, to be able to detect this setting automatically at runtime.
The URL rewrite rule would be something very simple like:
^/(.*)$ => /bootstrap.php
This guarantees that the relevant string is present in the REQUEST_URI, but doesn't pollute the _GET array.
Where my research has taken me so far:
Apache.
In my opinion Apache has a very quirky approach, since it sets the REDIRECT_SCRIPT_URI header for rewritten URLs, but not for the ones that are not rewritten.
E.g. http://host/ana/are/mere would be rewritten to index.php, so the aforementioned header would be present, but http://host/ wouldn't be rewritten.
Lighttpd.
Lighttpd with fast-cgi behaves OK, setting the REDIRECT_URI header if URL Rewrite is enabled for the current host. This is reliable.
Cherokee.
Well, for Cherokee I have found no method, as it uses (in my opinion) a more complicated approach to URL rewriting. (It's called internal redirect, and the FastCGI process doesn't know that the request was redirected.)
Also, I haven't tested other HTTP servers, such as nginx, so if someone has some input on this matter I would love to hear it.
Not the most elegant solution, but you could create a directory, insert a .htaccess file and a small PHP file, and try to open it with cURL/file_get_contents() from your actual code:
.htaccess
RewriteEngine on
RewriteRule ^(.*?)$ index.php?myparam=$1
index.php
<?php
//open with file_get_contents("http://yoursite/directory/test")
if (isset($_GET['myparam'])) { die("active"); }
?>
Although this might be acceptable during an installation, for performance reasons this shouldn't be used for every request on your site! Save the information somewhere (SQLite/text file), as sketched below.
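A hypothetical sketch of running the check once and caching the result (the file name and test URL are placeholders):
<?php
// Run the rewrite check once, then reuse the cached answer on later requests.
$cacheFile = __DIR__ . '/rewrite_status.txt';
if (!file_exists($cacheFile)) {
    $active = (@file_get_contents("http://yoursite/directory/test") === "active");
    file_put_contents($cacheFile, $active ? '1' : '0');
}
$rewriteEnabled = (file_get_contents($cacheFile) === '1');
?>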
Update
Apache-specific, but apache_get_modules()/phpinfo() in combination with array_search()/strpos() may be helpful to you.
It's already touched upon below, but I believe the following recipe is a rather waterproof solution to this problem:
Set up the redirection
Request a page through its rewritten URL
If the request returns the page in question, you have redirection set up correctly; if you get an HTTP 404 response, then it's not working.
The idea is basically that this works with just about any redirection method. It has already been mentioned, but it bears reiterating: such tricks add quite a bit of overhead and are better performed only once (at installation or from the settings panel) and then saved in the settings.
Some implementation details, choices to make and a little on how I came to this solution:
I remembered Drupal did such a check during the installation process, so I looked up how they did it. They had the JavaScript on the install page do an Ajax request (synchronously, to prevent concurrency issues with the database). This requires the user installing the software to have JavaScript turned on, but I don't think that's an unreasonable requirement.
However, I do think using PHP to request the page might be a cleaner solution. Alongside not bothering with a JavaScript requirement, it also needs less data to be sent back and forth and doesn't require the logic of the action to be spread over multiple files. I don't know if there are other (dis)advantages to either method, but this should get you going and let you explore the alternative choices yourself.
There is another choice to be made: whether to test in a test environment or on the normal site. What Drupal does is have the redirection always turned on (in the Apache case, the .htaccess file that does the redirects is simply part of the Drupal download) but only write the fancy URLs if the redirection is turned on in the settings. This has the disadvantage that it takes more work to detect which type of redirection is used, but it's still possible (you can, for example, add a GET variable showing the redirection engine either on a specific test page or even on every page, or you can redirect to a page that sets $redirectionEngine and then includes the real index). Though I don't have much experience with redirection other than with mod_rewrite on Apache, I believe this should work with just about every redirection engine.
The other option here is to use a test environment. Basically, the idea is to either create a folder and set up redirection for it, or to remove the need for file system write access and instead ship such a folder (or one folder per redirection engine). This has some disadvantages: you still need write access to set up the redirection for the main site (though maybe not for all redirection engines; I don't really know how they are all set up properly - but for Apache you will need write access if you are going to turn on redirection), it might be easier for a bot to detect what software and what version of it you are using by accessing the tests (unless you remove the test folders after testing), and you need to be able to rewrite for only a part of the site (which makes sense for any redirection engine to support, but I'm not blindly going to assume this functionality). However, this does come with the advantage of making it easier to find out which rewrite engine is being used, or basically any other aspect of the redirection. There might also be other advantages I don't know of, so I just give the options and let you pick your method yourself.
With some options left to the user, I believe this should help you set up the system in the manner that you like.
PHP has server-specific functions for Apache, IIS and NSAPI servers. I only have Apache, but as merkuro suggested, this works as expected:
<?php
// '@' suppresses the warning in case the function is unavailable:
if (in_array('mod_rewrite', @apache_get_modules())) {
    echo 'mod_rewrite enabled';
} else {
    echo 'mod_rewrite not enabled';
}
?>
As PHP's server-specific functions don't cover all the servers you'd like to test on, this probably isn't the best solution.
I'd recommend merkuro's first answer - implementing it and then testing it in a script. I believe it's the only way to get a good result.
Hope that helps!
You can programmatically check for the existence of mod_rewrite if the server is Apache by using the apache_get_modules() function in PHP:
$modules = apache_get_modules();
echo in_array('mod_rewrite', $modules) ? 'mod_rewrite detected' : 'mod_rewrite not detected';
This could be used as a first step, but it is not a foolproof method by any means. Just because mod_rewrite is loaded does not mean it is available in your environment. This also doesn't help if you are on a server that is not Apache.
There are not many consistent methods that will work across all platform combinations. But since the result is consistent, you can test for that. Set up a special redirect, and have a script use PHP's cURL or file_get_contents() to check a test URL. If the redirect was successful, you will get the expected content, and you can test easily for this.
This is a basic .htaccess I set up to redirect ajax to ajax.php:
RewriteEngine On
RewriteRule ajax ajax.php [L]
The following PHP script will attempt to get the contents of ajax. The real script name is ajax.php. If the redirect fails, then it will not get the expected contents.
error_reporting(E_ALL | E_STRICT);
$url = 'http://'.$_SERVER['HTTP_HOST'].dirname($_SERVER['REQUEST_URI']).'/ajax';
$result = json_decode(@file_get_contents($url));
echo ($result === "foobar") ? 'mod_rewrite test was successful' : 'mod_rewrite test failed';
Lastly, here is the final piece of the script, ajax.php. This returns the expected response when the redirect is successful:
echo json_encode('foobar');
I have set up a live example of this test, and I have also made the full sources available.
As all the answers already mention, actually testing it is the only way to be sure it works. But instead of redirecting to an actual page and waiting for it to load, I would just check the headers.
In my opinion this is quick enough to be used at runtime even on a regular site. If it really needs to be high-performance, then of course caching the result is better.
Just put something like the following in your .htaccess file
RewriteEngine on
RewriteRule ^redir/My/Super/Special/Hidden/Url/To/Test/$ /redir/longload.php [L,R=307]
And then you can use the following php code to check if mod_rewrite is enabled.
<?php
function HasModRewrite() {
// Parentheses are required here: PHP's ternary operator is left-associative.
$s = empty($_SERVER["HTTPS"]) ? '' : (($_SERVER["HTTPS"] == "on") ? "s" : "");
$sp = strtolower($_SERVER["SERVER_PROTOCOL"]);
$protocol = substr($sp, 0, strpos($sp, "/")) . $s;
$port = ($_SERVER["SERVER_PORT"] == "80") ? "" : (":".$_SERVER["SERVER_PORT"]);
$options['http'] = array(
'method' => "HEAD",
'follow_location' => 0,
'ignore_errors' => 1,
'timeout' => 0.2
);
$context = stream_context_create($options);
$body = file_get_contents($protocol . "://" . $_SERVER['SERVER_NAME'] . $port .'/redir/My/Super/Special/Hidden/Url/To/Test/', false, $context);
if (!empty($http_response_header))
{
return substr_count($http_response_header[0], ' 307')>0;
}
return false;
}
$st = microtime(true);
$x = HasModRewrite();
$t = microtime(true) - $st;
echo 'Loaded in: '.$t.'<hr>';
var_dump($x);
?>
output:
Loaded in: 0.002657
---------------------
bool(true)