I have configured the wildcard DNS of *.mydomain.com and it's all working properly. My question is which of these should I rely on identifying client subdomain requests?
$_SERVER["HTTP_HOST"]
$_SERVER["SERVER_NAME"]
$_SERVER["SCRIPT_URI"]
They all seem to contain the subdomain part I want, but after reading this article by Chris Shiflett: http://shiflett.org/blog/2006/mar/server-name-versus-http-host, I'm lost at sea: is there really no safe way to do this?
Any idea on accomplishing this task securely? Which approach would you prefer?
HTTP_HOST comes directly from the Host header. Apache does not clean it up in any way. Even for non-wildcard setups, the first virtual host in your config will receive requests whose Host header doesn't match any of your configured vhosts, so you have to be careful with it. Treat it like any other user data: filter it appropriately before using it.
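As a minimal sketch of that kind of filtering (assuming example.com as your base domain; adapt the pattern to your own setup):

```php
<?php
// Hypothetical base domain -- replace with your own.
$base = 'example.com';

// The Host header is user-supplied: lowercase it and strip any port.
$host = strtolower($_SERVER['HTTP_HOST']);
$host = preg_replace('/:\d+$/', '', $host);

// Accept only a single well-formed label directly under our domain.
if (preg_match('/^([a-z0-9-]+)\.' . preg_quote($base, '/') . '$/', $host, $m)) {
    $subdomain = $m[1]; // validated, safe to use
} else {
    $subdomain = null;  // unexpected Host header: fall back or reject
}
```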
I'd suggest that you get the current page URL, then use a regular expression to check it. Be sure to ignore things like www, www2, etc.
You can use any of them, but most people use HTTP_HOST.
You don't have to worry about 'security' here, since you allow a wildcard for your subdomains: you can't stop a user from entering a 'threatening' subdomain and sending the request to your server.
If you want to disallow certain subdomains, then you have several options, but that's a different question.
$subdomain = explode('.', $_SERVER['HTTP_HOST'], -2);
This always returns an array, which will be empty if there is no subdomain. Note that it can also return www as a value, which would point to your root domain anyway.
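To illustrate how the negative limit behaves (hypothetical hostnames):

```php
<?php
// The negative limit drops the last two components (domain + TLD).
print_r(explode('.', 'blog.example.com', -2));  // Array ( [0] => blog )
print_r(explode('.', 'example.com', -2));       // Array ( )
print_r(explode('.', 'a.b.example.com', -2));   // Array ( [0] => a [1] => b )
```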
Too much talk for such a little problem.
Everyone says it's dangerous, but no one bothers to write a solution, which is as simple as:
$mydomain = 'example.com';
$subdomain = '';
$matches = array();
$pat = '!([a-z0-9_-]+)\.' . preg_quote($mydomain, '!') . '$!i';
if (preg_match($pat, $_SERVER['HTTP_HOST'], $matches)) $subdomain = $matches[1];
I am writing a small script that redirects visitors to country-specific landing pages (for example, if you come from Germany you will be redirected to xyz.com/de/). The redirection happens in index.php, which calls a web service to determine the country the user is accessing the website from, then issues a 301 redirect to the new page, xyz.com/de/.
I have two questions
1- Can the same functionality be integrated with mod_rewrite? If so, what is the advantage in terms of performance and SEO quality?
2- Can mod_rewrite preserve the query string, including the GCLID, on the redirects? (At the moment I am concatenating the $_SERVER values onto the PHP redirect.)
You can install mod_geoip on your server, which enables database-based geolocation lookup directly inside Apache. Its examples cover exactly the scenario you describe.
The advantage would be much better performance, since the lookup will be done locally using a database, instead of needing to call an external web service. It also requires virtually no code once this is set up, easing maintenance. You will only have to make sure your local copy of the lookup database is regularly updated, typically with a weekly/daily cron job.
You can rewrite the URL in any way you want appending any parameters you want.
SEO-wise it should have no effect at all compared to PHP based redirects, since to the client the behaviour appears exactly the same.
mod_rewrite can't do geolocation on its own, nor can it connect to an external service.
If your PHP code is doing the 301 redirect, then you'll need to preserve the query string in your PHP code. If you have an htaccess rule doing the 301 redirect, then the query string should be passed through with the redirect.
The documentation states:
Modifying the Query String
By default, the query string is passed through unchanged. [...] When you want to erase an existing query string, end the substitution string with just a question mark. To combine new and old query strings, use the [QSA] flag.
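Putting the two together, a hedged .htaccess sketch, assuming mod_geoip2 is loaded and exposes GEOIP_COUNTRY_CODE as an environment variable, and that /de/ is your German landing page:

```apache
# Sketch only: requires mod_geoip/mod_geoip2 to be enabled so that
# GEOIP_COUNTRY_CODE is available as an environment variable.
RewriteEngine On
RewriteCond %{ENV:GEOIP_COUNTRY_CODE} ^DE$
RewriteCond %{REQUEST_URI} !^/de/
# The query string (including gclid) passes through unchanged by default.
RewriteRule ^(.*)$ /de/$1 [R=301,L]
```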
In answer to question 1.
You can do the Geo IP direction in the vhost Apache configuration if you have mod_geoip/mod_geoip2 installed.
You can also do it using mod_rewrite if mod_geoip/mod_geoip2 is installed.
In answer to question 2.
You can use mod_rewrite to keep the existing query string on the rewritten URL; the mod_rewrite documentation has examples of this.
If I belong to the no-www camp, cookies I have set on http://example.com can be read by http://sub-domain.example.com.
And regardless of the language I use (Perl / ASP.NET / PHP / JSP), there is no way I could ever work around this issue, because it is fundamental to the architecture of HTTP itself. True or false?
What concerns me here is: is there any DNS config that would prevent http://sub-domain.example.com from reading the cookies set by http://example.com?
I have a domain name http://qweop.com
I have a subdomain at http://sd.qweop.com
Now, the problem is that even though I've not set any cookies on http://sd.qweop.com, when I read the cookies there, cookies are present: it is reading the cookies from http://qweop.com.
How do I fix the problem so that the cookies from the main domain would not be read by (a request to) the sub-domain?
I've tried altering the 5th parameter of PHP's setcookie function, but it doesn't seem to do anything; that parameter appears to be useless here. I suspect it's a limitation of the HTTP infrastructure itself.
DETAILS:
http://qweop.com/set.php (try to use incognito to allow easy cookie removal)
<?php setcookie("testcookie","testvalue",time()+60*60*24*30,"/","qweop.com");?>
cookies set
http://sd.qweop.com/read.php
<?php print_r($_COOKIE); ?>
// No one had set any cookies in http://sd.qweop.com but we can see cookies here! Error!
Answer: Yes
I had better catalog the answer here after 500 hours of Google research.
Basically, we should always use www if we're planning to use any other subdomains and we want them cookie-free. This is because browsers behave differently with regard to bare-domain (no-www) cookies.
We can try our best to tell the browser "hey, set it for just the domain and not its subdomains", but as long as the URL is non-www, browsers won't be nice and behave the way we want them to.
In fact, even if the URL is the www version, they could still do whatever they want; there is currently no record of any browser that misbehaves in that case (and most likely there won't be in the future).
I believe you cannot do anything about it. You might try to set the cookie as:
setcookie('some_name', 'some_val', 0, '/', 'yourdomain');
but it will be sent to all subdomains of yourdomain, even though RFC 2109 says that if a cookie is meant to match subdomains it should be set with a leading dot, as .yourdomain. All major browsers send it to the subdomains anyway; I checked with IE, FF and Chrome.
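For what it's worth, browsers that follow the newer cookie specification (RFC 6265) treat a cookie set without any Domain attribute as host-only: it is sent back only to the exact host that set it, never to its subdomains. In PHP that means simply omitting the domain argument; a sketch:

```php
<?php
// Omitting the domain argument sends no Domain attribute at all,
// so RFC 6265 browsers treat the cookie as host-only: it goes back
// to example.com only, never to sub.example.com.
setcookie('some_name', 'some_val', 0, '/');
```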
Unfortunately, DNS config has absolutely nothing to do with cookies (as long as they belong to the same second-level domain, of course).
You can still get a practical answer if you ask a practical question, though.
My problem is not so easy to describe... for me :-) so please be lenient with me.
I have several ways to view a list. That means there are several paths to reach and create the view which displays my list. This works well with browser tabs opened in parallel, and that is desired.
If I click on an item of my list, I come to a detail view of that item.
In that view I want to know from which type of list the link was "called". The first problem is that the referrer will always be the same; the second: I should not append a GET variable to the URL (and it should not be a submitted form either).
If I store it in the session, I will overwrite the session parameter when working in a parallel tab.
What is the best way to still achieve my goal of knowing which mode the previous list was in?
You need to use something to differentiate one page from another, otherwise your server won't know what you're asking for.
You can POST your request: this will hide the URL parameters, but will hinder your back button functionality.
You can GET your request: this will make your URLs more "ugly" but you should be able to work around that by passing short, concise identifiers like www.example.com/listDetail?id=12
If you can set up mod_rewrite, then you can direct GET requests at a URL like www.example.com/listDetails/12, and Apache will rewrite the request behind the scenes to look like www.example.com/listDetails?id=12; the user will never see it, they will just see the original, clean/friendly version.
You said you don't have access to the server configuration -- I assume this is because you are on a shared server? Most shared servers already have mod_rewrite installed. And while the Apache vhost is typically the most appropriate place to put rewrite rules, they can also be put in a .htaccess file within any directory you want to control. (Sometimes the server configuration disables this, but on a shared host it is usually enabled.) Look into creating .htaccess files and how to use mod_rewrite.
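A hedged .htaccess sketch of such a rewrite (hypothetical script name and parameter):

```apache
# Hypothetical example: map the friendly URL /listDetails/12
# internally to listDetails.php?id=12; the browser keeps the clean URL.
RewriteEngine On
RewriteRule ^listDetails/([0-9]+)$ listDetails.php?id=$1 [L,QSA]
```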
By default, using sfDGP, when I try to execute an action of an application with security activated, the sign-in form appears but the URL doesn't change to "frontend_dev.php/login".
So, what should I do to make the URL change to "frontend_dev.php/login"?
Regards
Javi
It's been a while since I dipped that deep, but if I recall correctly, the security forwarding in Symfony uses an internal forward so that the server doesn't have to handle an entirely new request. When you use an internal forward like this, the URL will not change, because as far as the client is concerned you are still at the same URL you initially requested.
You would need to create your own security filter to replace the default sfBasicSecurityFilter, I believe, and then you would also probably need to modify any instances in actions or elsewhere that use forward in response to invalid/non-existent credentials.
I don't think there is an easy way to do this, and honestly it's not advisable. There are probably other solutions to what you need to achieve... Why do you need the URL to change?
If you're already on frontend_dev.php/ before you attempt to log in, this should happen normally as part of the default behaviour, unless you've been changing settings somewhere. You can always adjust the URL manually once you've signed in by adding /frontend_dev.php/; it will work, as you're authenticated anyway.
I've inherited a bad situation where, on our network, many (read: many) hosted sites do this:
include "http://www.example.com/news.php";
Yes, I know that this is bad, insecure, etc., and that it should be echo file_get_contents(...); or something like that (the output of "news.php" is just HTML), but unfortunately this is what they use now and we can't easily change it.
It used to work fine, until yesterday. We started to 301-redirect all www.example.com requests to just example.com. PHP does not appear to follow this redirect when people include the version with www, so this redirect breaks a lot of sites.
To sum it up: Is there a way to get PHP to follow those redirects? I can only make modification on the example.com side or through server-wide configuration.
You said, in a comment: "I can go and change all the includes, but it'd just be a lot of work".
Yes. That's the "bad, insecure, but-I-don't-have-a-reason-to-change-it code" coming back to bite you. It will be a lot of work; but now there is a compelling reason to change it. Sometimes, cleaning up an old mess is the simplest way out of it, although not the easiest.
Edit: I didn't mean "it's your code and your fault" - rather, "bad code is often a lot of work to fix, but it's usually less work than to keep piling hacks around it for eternity, just to keep it kinda-working".
As for "going and changing it", I'd recommend using cURL - it works much better than PHP's HTTP fopen wrappers.
Can't you use curl? In curl_setopt it has an option to follow redirects.
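A hedged sketch of that with PHP's cURL extension (the URL is the one from the question):

```php
<?php
// Fetch news.php and transparently follow the 301 to example.com.
$ch = curl_init('http://www.example.com/news.php');
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // follow Location headers
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the body as a string
curl_setopt($ch, CURLOPT_MAXREDIRS, 5);         // guard against redirect loops
$html = curl_exec($ch);
curl_close($ch);
echo $html;
```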
Let's start with the redirecting HTTP response.
<?php
error_reporting(E_ALL);
var_dump( get_headers('http://www.example.com/news.php') );
// include 'http://www.example.com/news.php'
The output should contain HTTP/1.0 301 Moved Permanently as the first entry and Location: http://example.com/news.php somewhere.
I don't think any of those solutions provided by PHP itself would help... I just don't think any of them follow headers and what not. For what it's worth, I do think, though, that this behaviour is correct: you're asking for the result of a certain request and you got it. The fact that the result is telling you to look elsewhere is, in and of itself, a valid result.
If I were you, I'd look at cURL. There's a PHP extension for it and it will allow you to tell it to follow headers and get to where you're trying to get. If this is not usable (as in, you absolutely, positively have to use the approach you currently are), you will need to revert the redirects on the 'source' server: maybe you could have it return the information or the redirect based on requesting IP address or something similar?