Is it possible to restrict a domain to only allow visitors using a specific browser?
I'm building an app and it's only been tested in Chrome so far, so I only want to allow Chrome users during beta testing. Basically, I want to whitelist browsers as I go through testing. Any suggestions on the approach I should take would be greatly appreciated.
Yes, you can.
The user agent of the browser accessing the page is available in PHP's $_SERVER array (as $_SERVER['HTTP_USER_AGENT']).
If you find that the accessing browser is not Chrome, then just send a 404 header.
You can block all other user agents with a relevant error message using .htaccess.
Put this in the .htaccess file in the root of your site:
ErrorDocument 503 "You must use Chrome to access this site"
RewriteEngine On
# Chrome's user agent string also contains "Mozilla" and "Safari",
# so test for the absence of "Chrome" instead of listing other browsers
RewriteCond %{HTTP_USER_AGENT} !Chrome [NC]
RewriteRule .* - [R=503,L]
Related
I've coded a website myself (HTML, CSS, a bit of jQuery/JavaScript and PHP, with a database connection). It's in essence just a simple portfolio with text, a carousel of photos/designs/videos and some PDF files. So, no users are involved except a simple login for myself to upload additional photos or designs. These photos, designs and videos are saved in a simple database.
Whenever I go to my website in e.g. a Safari browser, Safari lets me know that the website is 'not safe' because I don't have an SSL certificate or .htaccess file. (So, my website is http://example.com and not https://example.com.) However, it works perfectly regardless of the 'not safe' notification in the browser bar.
I've contacted the hosting company and they told me to create a .htaccess file and place it in the public_html folder.
This is the content of that file:
RewriteEngine On
RewriteCond %{HTTP:X-Forwarded-Proto} !https
SetEnvIf X-Forwarded-Proto "https" HTTPS=on
RewriteCond %{HTTPS} !=on
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
However, when I add this to my public_html folder, my website simply doesn't work. When I go to example.com it correctly redirects to https://example.com, but I get this notification:
Not Found
The requested URL was not found on this server.
Additionally, a 404 Not Found error was encountered while trying to
use an ErrorDocument to handle the request.
When I delete the .htaccess file, it works again (but is unsafe according to my browser).
The hosting company responded with: "we are not website builders so we can't help you with this", so I'm at a loss on how to fix this. Hence this post.
How do I fix this, without a CMS to depend on?
The error is not in your code; it depends on the Apache configuration. If you have SSH access to the server, look at this tool: https://certbot.eff.org/. If you don't, the hosting company has to resolve the problem for you.
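For reference, once a certificate is actually installed for the domain (for example via https://certbot.eff.org/ or the host's control panel), a redirect this simple is usually all the .htaccess needs. This is only a sketch: it assumes the certificate is already in place and the site is served directly by Apache rather than behind a proxy.
RewriteEngine On
# Redirect any plain-HTTP request to the same URL over HTTPS
RewriteCond %{HTTPS} !=on
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]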
I have an .htaccess file on a WordPress site (siteB.com) that allows access only when the site is reached through a link from one specific URL (siteA.com), and denies all other requests.
This does it for me...
RewriteEngine on
RewriteCond %{HTTP_REFERER} ^https?://siteA.com/
RewriteRule ^ - [L]
RewriteRule ^ - [F]
ErrorDocument 403 /forbidden.html
BUT it doesn't load siteB.com's stylesheet.
I'm looking for an .htaccess rule that would allow access to a site only when it is reached through a specific link. Security here is not an issue.
TL;DR: While you can try playing around with .htaccess, there is no reliable way to do what you want.
The simple answer is that the request for the stylesheet has your main page as the referrer, not siteA.com. To see this, navigate to your site, open Dev Tools (F12 in Chrome), switch to the Network tab, select your CSS file and look at the request headers.
For example, the page for this question has this URL:
http://stackoverflow.com/questions/40220527/htaccess-allow-only-specific-url-doesnt-load-its-css
And the request for CSS has this in its headers:
Referer: http://stackoverflow.com/questions/40220527/htaccess-allow-only-specific-url-doesnt-load-its-css
Overall, it's a very, very bad idea to filter based on the Referer, or any request header for that matter, as they are very easily spoofed.
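That said, if you still want to experiment with it for a non-security purpose, a sketch along these lines (hypothetical rules, assuming siteB.com is the protected site and siteA.com the allowed entry point, as in your question) also whitelists siteB.com itself as a referrer, so that the stylesheets, scripts and images requested by your own pages are not blocked:
RewriteEngine On
# Never block the error page itself
RewriteRule ^forbidden\.html$ - [L]
# Allow requests that arrive from the external entry link...
RewriteCond %{HTTP_REFERER} ^https?://(www\.)?siteA\.com/ [NC,OR]
# ...or from pages on this site itself, so assets requested by your own pages load
RewriteCond %{HTTP_REFERER} ^https?://(www\.)?siteB\.com/ [NC]
RewriteRule ^ - [L]
# Everything else gets a 403 and the custom error page
RewriteRule ^ - [F]
ErrorDocument 403 /forbidden.html
It still relies on the Referer header, so all the caveats above apply.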
I currently have a site where users can log in and do various things. The site uses SSL (HTTPS).
Is there a way to use PHP or .htaccess to make a specific link, and any link under it (i.e. https://domain.com/unsecure, https://domain.com/unsecure/randominfinit), unsecure?
Also, would this work so that a user who is logged in to their account can navigate out of /unsecure or /unsecure/randominfinit and still be logged in, without errors, browser warnings or reduced security?
I have been looking everywhere for a solution to this and have not yet been able to find one.
The reason I need to do this is that I am using an iframe to load .swf content on my secure site that is hosted on another domain/server. If you have a better idea for delivering non-SSL content in an iframe, please tell me - I am all ears.
As long as you have a valid cert you can go between HTTP and HTTPS. This will check whether the request is for the unsecure directory over HTTPS and, if so, redirect to non-HTTPS.
RewriteEngine On
RewriteCond %{REQUEST_URI} ^/unsecure [NC]
RewriteCond %{HTTPS} ^on
RewriteRule ^(.*) http://example.com/$1 [R=301,L]
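If you also want visitors sent back to HTTPS as soon as they leave /unsecure, the mirror-image rule below would do it. This is a sketch, with example.com standing in for your actual domain as in the rules above:
RewriteEngine On
# Force HTTPS everywhere except under /unsecure
RewriteCond %{REQUEST_URI} !^/unsecure [NC]
RewriteCond %{HTTPS} !=on
RewriteRule ^(.*) https://example.com/$1 [R=301,L]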
Let me know how that works for you.
A new client needs my help; their web developer messed up - they built the website on a draft/test server but forgot to block Google etc. I would appreciate help from the community here, as I am not an expert with .htaccess redirection.
As I said, another website developer set up the client's draft site on their draft server. It's been there for months, but they forgot to hide it from search engines, so the content has been indexed by Google etc - this will trigger a duplicate content penalty if we put the new website live, and the new website will effectively be useless.
I have access to the draft site/server and can modify the .htaccess file, so when the new site goes live I would like to have the correct redirects in place. There are a few subdomains on the site (it's a multi-language site), so it's a little tricky.
The website is built on WordPress.
The website structure looks like this on the test server. All page names and file names are identical; we are just moving to a new server.
http://clientdomain.testserver.com
http://it.clientdomain.testserver.com
http://fr.clientdomain.testserver.com
http://es.clientdomain.testserver.com
http://de.clientdomain.testserver.com
http://ko.clientdomain.testserver.com
http://pt.clientdomain.testserver.com
http://ru.clientdomain.testserver.com
http://tr.clientdomain.testserver.com
http://cn.clientdomain.testserver.com
The redirects will need to go here:
http://clientdomain.com
http://it.clientdomain.com
http://fr.clientdomain.com
http://es.clientdomain.com
http://de.clientdomain.com
http://ko.clientdomain.com
http://pt.clientdomain.com
http://ru.clientdomain.com
http://tr.clientdomain.com
http://cn.clientdomain.com
The existing .htaccess file on the test server looks like this:
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
# add a trailing slash to /wp-admin
RewriteRule ^wp-admin$ wp-admin/ [R=301,L]
RewriteCond %{REQUEST_FILENAME} -f [OR]
RewriteCond %{REQUEST_FILENAME} -d
RewriteRule ^ - [L]
RewriteRule ^(wp-(content|admin|includes).*) $1 [L]
RewriteRule ^(.*\.php)$ $1 [L]
RewriteRule . index.php [L]
I would really appreciate any help on this.
There are some existing threads which contain all the pieces of the .htaccess puzzle, but I am a little confused:
How can I redirect from one subdomain to another in .htaccess?
Kind Regards,
GG
If it were me I wouldn't bother messing around with redirects; get the URLs removed from the index. Google will remove them within 24 hours, sometimes much quicker nowadays.
Add the development domain to your Webmaster Tools account and verify it. Then go to Google Index -> Remove URLs.
Just enter the value / in the removal request, which tells Google to remove every URL in the index for that domain.
Then add a blocking robots.txt file to the site root:
User-agent: *
Disallow: /
And what I normally do (this has happened a couple of times to me despite robots.txt and Basic Auth protection - git disasters/shenanigans) is prompt Google to re-index the site straight away. Go to Crawl -> Fetch as Google.
Leave the input box blank so it fetches the whole site, and just hit the Fetch button. When Google has fetched it, click the 'Submit to Index' button.
You will be amazed how quickly this can happen these days; it used to take weeks if you were lucky.
EDIT
And just to make sure this doesn't happen to anyone else finding this: the best way to stop a dev site getting indexed isn't a robots.txt file or Basic Auth via the .htaccess file (as previously mentioned, these are easy to accidentally delete). You should enable Basic Auth on the development site via the vhosts file.
That way the site is blocked for everyone, not only Google...
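A minimal sketch of that vhost-level protection, assuming (hypothetically) that the dev site's files live in /var/www/dev and a password file has already been created with htpasswd:
<VirtualHost *:80>
    ServerName clientdomain.testserver.com
    # Also protect the language subdomains (it., fr., ...)
    ServerAlias *.clientdomain.testserver.com
    # Paths below are examples - adjust to the real docroot and password file
    DocumentRoot /var/www/dev
    <Directory /var/www/dev>
        AuthType Basic
        AuthName "Development site"
        AuthUserFile /etc/apache2/.htpasswd-dev
        Require valid-user
    </Directory>
</VirtualHost>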
You can use this .htaccess:
RewriteEngine On
# %1 captures the optional language prefix plus "clientdomain" (e.g. "it.clientdomain"),
# so each test subdomain redirects to the matching live subdomain, keeping the path
RewriteCond %{HTTP_HOST} ^((?:..\.)?clientdomain)\.testserver\.com [NC]
RewriteRule ^ http://%1.com%{REQUEST_URI} [NE,L,R=301]
My website has a log-in-by-OpenID feature. When a user logs in for the first time using his/her OpenID, they are redirected to a create-account page. I noticed just recently that one user logged in using her Google account and created an account for the first time. However, when she tried to log in again using the same Google account, she was faced with creating a new account again. I checked the DB and saw that although she used the same Google account, the OpenID URLs that were retrieved are different.
EDIT===================
Thanks Kobi for the information - the issue is that I need to set up my website so it always opens with www prepended, i.e. http://www.mysite.com and NOT http://mysite.com.
Owing to this subtle difference, Google OpenID recognises the two URLs as different URLs! Help please.
I realised it's an .htaccess thing, so I googled a bit and found these .htaccess commands:
RewriteCond %{HTTP_HOST} ^site.com [NC]
RewriteRule (.*) http://www.site.com/$1 [L,R=301]
However, the problem is that when I use this in my .htaccess it does redirect and ensures the link reads as www.site.com, but it messes up all the JavaScript links - actually I'm using URL rewriting here as well... my whole .htaccess file is something like this:
RewriteEngine on
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule .* index.php
RewriteRule !\.(js|ico|gif|jpg|png|css)$ index.php
AddType text/css .css
Inclusion of the two lines messes up the URL rewriting :( What do I do here?
======================
Uh, never mind, I figured it out :) I was putting the two www-redirect lines at the end, where they somehow got overridden by the other rewrite rules - putting them at the beginning fixed it :) Thanks anyway.
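For anyone else who hits this, the working order looks roughly like the sketch below (with site.com standing in for the real domain, and the two original rewrite rules consolidated into one): the www redirect has to run before the catch-all rewrite to index.php.
RewriteEngine on
# 1. Canonicalise the host first, so Google always sees www.site.com
RewriteCond %{HTTP_HOST} ^site\.com [NC]
RewriteRule (.*) http://www.site.com/$1 [L,R=301]
# 2. Only then send requests for non-files (except static assets) to index.php
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule !\.(js|ico|gif|jpg|png|css)$ index.php [L]
AddType text/css .css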
Google gives different URLs for different domains.
Is it possible your user used a different URL each time to log in? Even www at the start of the URL can change the OpenID URL Google returns.