I am having an issue with my website that I have been unsure how to fix, mainly because it does not happen consistently. Sometimes it happens, and other times it does not.
The issue has to do with sessions and redirecting. When a user completes a task, they are redirected, and a $_SESSION variable is set with a message saying that the task has been done. Sometimes, when the page the user is redirected to loads, the message is not displayed: the session variable containing the message has already been unset, because the page apparently refreshes before it is finally shown to the user. The weird thing is that for one specific redirect (when a user logs in with Facebook Connect) it works, but for all other redirects it does not.
My latest idea as to why this happens is because I am using mod_rewrite. Can mod_rewrite cause the page to refresh before it is displayed in the browser?
Here is the code which I am using for the URL rewriting:
RewriteEngine On
RewriteBase /
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.*) index.php [L]
No. mod_rewrite tells Apache to take an incoming request that matches your rule (before a response has been sent to you) and route it to a particular destination. It's like mail forwarding: you don't get the letter twice, you just get it at a different location.
UPDATING TO ADD SOLUTION
You are likely performing some actions after the header call for the redirect. Add a call to exit after header, and your problem should be solved.
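For example, a minimal sketch of the redirecting page (the session key and the target page are placeholders, not your actual names):
<?php
session_start();

// ... the task completes here ...
$_SESSION['message'] = 'Your task has been completed.'; // hypothetical key and text

header('Location: /home.php'); // hypothetical target page
exit; // stop the script so nothing after the redirect runs and clears the message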
My site redirects to the www version. I wrote the code below in .htaccess, but by mistake I wrote example.org where I meant example.com, so it redirected to example.org. I found my mistake and replaced .org with .com, but when I test it in the browser it still redirects to .org by default.
I don't know why it still redirects to the other site with .org. Is there some cache involved? How can I resolve it?
RewriteCond %{HTTP_HOST} ^example.com [NC]
RewriteRule (.*) http://www.example.org/$1 [R=301,L]
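For reference, a corrected version of the rule (with .org changed back to .com) would be:
RewriteCond %{HTTP_HOST} ^example\.com [NC]
RewriteRule (.*) http://www.example.com/$1 [R=301,L]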
www.example.com is redirected to www.example.org.
See this posting:
• The simplest and best solution is to issue another 301 redirect back again. The browser will realise it is being directed back to what it previously thought was a de-commissioned URL, and this should cause it to re-fetch that URL again to confirm that the old redirect isn't still there.
Edit: some comments throw doubt upon this, see below.
• If you don't have control over the site where the previous redirect target went to, then you are out of luck. Try to beg the site owner to redirect back to you.
And also this one:
Make the user submit a POST form on that URL and the cached redirect is gone.
How long do browsers cache HTTP 301s?
I have a PHP script that redirects to an external HTTPS page, but unfortunately Firefox (and maybe other browsers, I haven't tried yet) blocks such redirects with a really "scary" message for the most inexperienced users.
Is there a way to get around this issue without asking users to change their browser preferences?
For the redirect I'm using a simple header("Location: $url");
Thank you
In short, no.
If a user has their Firefox preferences set to show a warning when a redirect happens, you can't get around it.
You can avoid this by showing an interstitial page where users can manually click on a link.
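A minimal sketch of such an interstitial page (the target URL here is just an example):
<?php
// interstitial.php: instead of header("Location: $url"), print a page the user clicks through
$url = 'https://secure.example.com/payment'; // hypothetical external HTTPS target
?>
<p>You are about to be taken to a secure external page.</p>
<p><a href="<?php echo htmlspecialchars($url); ?>">Continue</a></p>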
# Force HTTPS: permanently (301) redirect any plain-HTTP request to the same URL over HTTPS
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
I know that I can setup a 404 error page simply enough using .htaccess.
As I understand it, if a page isn't found, an error gets sent back to the browser and the 404 page is displayed. The user may or may not be aware of this 'error'.
What I would like to do is allow the user to type in page1.php, page2.php, page3.php, etc., and go to a page that prints page1, page2, page3, etc., respectively.
The reasoning is that the logic is very similar for page1, page2, page3, etc. I think it might be easier to send the user to the same page, which then works out what to do using the PHP server variables: determine whether the requested page is valid, and then either throw a 404 error or print the correct message.
Is this possible? Or just a stupid thought?
I would like the url to be unaffected and remain as the user typed.
Thanks.
This is definitely not a stupid thought.
The instance that delivers content, or error pages if no content is found, is the web server.
If you want to handle that logic in PHP, you can tell the web server to pass all requests to the same PHP file. Then you can make use of the $_SERVER variable to determine what the user was requesting and serve the content and send the correct status code (e.g. 404).
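A minimal front-controller sketch along those lines (the page names and messages are hypothetical; the Apache configuration that routes everything here follows below):
<?php
// index.php: every request is routed here by the web server
$path = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);

// hypothetical list of "virtual" pages and what they should print
$pages = array(
    '/page1.php' => 'page1',
    '/page2.php' => 'page2',
    '/page3.php' => 'page3',
);

if (isset($pages[$path])) {
    echo $pages[$path];           // the URL in the browser stays as the user typed it
} else {
    header('HTTP/1.0 404 Not Found');
    echo 'Page not found';
}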
In Apache, you can define a Fallback Resource if you are using version 2.2.16 or later. This will route requests for non-existent files to a specified file.
<Directory /var/www/my_blog>
FallbackResource index.php
</Directory>
In older Apache versions, you can use mod_rewrite to redirect the requests:
<Directory /var/www/my_blog>
RewriteBase /my_blog
RewriteCond /var/www/my_blog/%{REQUEST_FILENAME} !-f
RewriteCond /var/www/my_blog/%{REQUEST_FILENAME} !-d
RewriteRule ^ index.php [PT]
</Directory>
I'm creating a website with php backend. I have a directory called /inc/ which contains php include files that get included when generating html webpages.
If a user tries to request any files in the /inc/ directory (by url in their browser, for example), I've made it so they get redirected to the home page. I did this in an attempt to ensure that none of these files get called externally.
I have need to call one of these files via jQuery POST request.
Here are my questions:
1) Can I somehow hide the url of the file requested in the POST?
2) Will a POST to a file in the /inc/ folder via jQuery fail, since external requests for files in the /inc/ folder get redirected to the home page? Or does the server make a distinction between POST requests, and other types of requests?
3) (OPTIONAL) How can I ensure that the POST request is done "legitimately", as opposed to a bot trying to crash my server by issuing thousands of simultaneous post requests?
Not without using a link somewhere, somehow. Remind yourself that jQuery / Ajax / XMLHttpRequests... anything pointing outwards has to point outwards. There will always be a URL, and it will always be traceable.
Options to make your URL less traceable:
1. Create a page for your javascript calls (hides it away, but doesn't really do anything)
2. Edit .htaccess options and use it to process javascript requests
3. Edit .htaccess options plus a php page for server-side processing of the javascript requests
I'll be going over option 3.
Example (it includes option 2 as well!):
#Checks if the request is made within the domain
#Edit these to your domain
RewriteCond %{HTTP_REFERER} !^.*domain\.com [NC]
RewriteCond %{HTTP_REFERER} !^.*domain\.com.*$ [NC]
#Pretends the requested page isn't there
RewriteRule \.(html|php|api.key)$ /error/404 [L]
#Set a key to your 'hidden' url
#Since this is server-based, the client won't be able to get it
#This will set the environment variable when a request is made to
#www.yourwebsite.com/the folder this .htaccess is in/request_php_script
SetEnvIf Request_URI "request_php_script" SOMEKINDOFenvironmentNAME=http://yourlink.com
#Alternatively set Env in case your apache doesn't support it
#I use both
SetEnv SOMEKINDOFNAME request_php_script
#This will send the requester to the script you want when they call
#www.yourwebsite.com/the folder this .htaccess is in/request_php_script
RewriteCond %{REQUEST_URI} request_php_script$ [NC]
#if you don't want a php script to handle javascript and still benefit from the full url obfuscation, write the following instead
#RewriteRule ^.*$ /adirectscript.php [L]
RewriteRule ^.*$ /aredirectscript.php [L]
#and yes this can be made shorter, but this works best if folders and keys in your ENV are similar to some extent
In this case you could call a php script that redirects you to the right page, but if everything is internal, I don't see why you would hide away the URL to your scripts. If you have set up the .htaccess as shown, only your page can access it; users and external sources won't be able to reach it, as they'll be redirected.
If your scripts refer to an external API key, however, then this might be useful, and you could call the redirect script:
<?php
echo file_get_contents(getEnv("SOMEKINDOFNAME"));
?>
Now when this script is called, it'll return your contents. If you want to load in whole pages instead, you can use something like what is described here:
getting a webpage content using Php
To make full use of this, you have to set your jQuery POST method to POST to www.yourwebsite.com/the folder this .htaccess is in/request_php_script.php
If 1 and 2 are done properly, as above, you shouldn't have to worry about bots from the outside trying to reach your .php scripts.
Sidenote:
You can skip the extra php script, but then the URL will show up in .har files, which means that in the end it is still reachable somewhere. Using the extra php script (give it query parameters for convenience) will obfuscate the URL enough to make it very hard to find. I've used this approach to hide away requests that involve an external API key.
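For illustration, such a forwarding script could look roughly like this (the file names and the query parameter are made up; aredirectscript.php matches the rewrite rule above):
<?php
// aredirectscript.php: map an opaque query parameter to an internal resource,
// so the real path never appears in the page source or in .har files
$map = array(
    'profile' => '/inc/get_profile.php', // hypothetical internal script
    'search'  => '/inc/search.php',      // hypothetical internal script
);

$key = isset($_GET['q']) ? $_GET['q'] : '';

if (isset($map[$key])) {
    require __DIR__ . $map[$key];
} else {
    header('HTTP/1.0 404 Not Found');
    exit;
}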
TL;DR:
• Set a .htaccess server environment variable for the page.
• Set a .htaccess rewrite condition so requests only reach the page if they originate from my own site and are made by the script.
• Set the javascript to call the specified page.
This is medium 'secure': the URL cannot be found in the script, but it can still be traced back by saving .har files.
• Create a php file that returns the page contents.
• Set .htaccess to point to the php file instead of the page.
This is highly 'secure': the URL cannot be found in the script and cannot be traced back, even when saving .har files.
1) No.
2) Depends on how you handle redirects; the best way is to try it and see.
3) Not an easy task in general. A simple approach is to detect the same client and limit its request rate. There is no way to detect a bot in general from request data alone.
As for your last comment, you can restrict access to those files with .htaccess without any need for redirects. However, you still won't be able to fetch them with AJAX. The only real reason to hide something is if there is sensitive information inside, like passwords or logins. Otherwise it doesn't really matter; nobody is interested in some hidden utility files.
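For example, a .htaccess file placed inside /inc/ can simply deny direct HTTP access; PHP's include/require still works because it reads the files from disk (Apache 2.4 syntax shown; on 2.2 use "Order deny,allow" and "Deny from all"):
# /inc/.htaccess
Require all denied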
I have two different domains that both point to my homepage in the same server.
I want to log every single access made to my homepage and log which domain the user used to access my homepage, how can I do this?
I tried mod_rewrite in Apache and logging to a MySQL database with PHP but all I could do was infinite loops.
Any ideas?
EDIT:
By your answers, I see you didn't get what I want...
As far as I know Google Analytics does not allow me to differentiate the domain being used if they both point to the same site and it also does not allow me to see that some files like images were accessed directly instead of through my webpages.
I also can't just use $_SERVER['HTTP_HOST'] because, like I just said, I want to log EVERYTHING: images and all other files, every single request, even for files that don't exist.
As for Webalizer, I never saw it differentiate between domains; it always assumes the default domain configured in the account, uses that as the root, and doesn't even display it. I'll have to check it again, but I'm not sure it will do what I want...
INFINITE LOOP:
The approach I tried involved rewriting the URLs in Apache with a simple rewrite rule pointing to a PHP script; the PHP script would log the entry into a MySQL database and then send the user back to the requested file with the header() function. Something like this:
.htaccess:
RewriteCond %{HTTP_HOST} ^(www\.)?domain1\.net [NC]
RewriteRule ^(.*)$ http://www.domain1.net/logscript?a=$1 [NC,L]
RewriteCond %{HTTP_HOST} ^(www\.)?domain2\.net [NC]
RewriteRule ^(.*)$ http://www.domain2.net/logscript?a=$1 [NC,L]
PHP Script:
$url = $_GET['a'];
$domain = $_SERVER['HTTP_HOST'];
// Code to log the entry into the MySQL database
header("Location: http://$domain/$url");
exit();
So, I access some file, that request is pointed to the PHP script, and the script logs it and redirects back to that file... However, when PHP redirects to that file, the .htaccess rules pick it up and redirect to the PHP script again, creating an infinite loop.
The best thing to do would be to parse the server logs. Those will show the domain and the request. Even most shared hosting accounts provide access to the logs.
If you're going to go the rewrite route, you could use RewriteCond to check the HTTP_REFERER value to see if the referer was a local link or not.
RewriteCond %{HTTP_HOST} ^(www\.)?domain1\.net [NC]
RewriteCond %{HTTP_REFERER} !^(.*)domain1(.*)$ [NC]
RewriteRule ^(.*)$ http://www.domain1.net/logscript?a=$1 [NC,L]
RewriteCond %{HTTP_HOST} ^(.*)domain2\.net [NC]
RewriteCond %{HTTP_REFERER} !^(.*)domain2(.*)$ [NC]
RewriteRule ^(.*)$ http://www.domain2.net/logscript?a=$1 [NC,L]
You may also want to post in the mod_rewrite forum. They have a whole section about handling domains.
If Google Analytics is not your thing,
$_SERVER['HTTP_HOST']
holds the domain that is used, you can log that (along with time, browser, filepath etc). No need for mod_rewrite I think. Check print_r($_SERVER) to see other things that might be interesting to log.
Make sure to still escape (mysql_real_escape_string()) all the logged values; it's trivially easy to inject SQL via the browser's user-agent string, for example.
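A rough logging sketch (the table and column names are made up; this uses mysqli prepared statements instead of manual escaping, which achieves the same goal):
<?php
// log_hit.php: record one request (hypothetical credentials and schema)
$db = new mysqli('localhost', 'user', 'password', 'logs');

$host  = $_SERVER['HTTP_HOST'];
$path  = $_SERVER['REQUEST_URI'];
$agent = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';

$stmt = $db->prepare(
    'INSERT INTO access_log (host, path, user_agent, hit_time) VALUES (?, ?, ?, NOW())'
);
$stmt->bind_param('sss', $host, $path, $agent);
$stmt->execute();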
So, I access some file, that request is pointed to the PHP script, and the script logs it and redirects back to that file... However, when PHP redirects to that file, the .htaccess rules pick it up and redirect to the PHP script again, creating an infinite loop.
Can you check for HTTP headers in the RewriteCond? If so, try setting an extra header alongside the redirect in PHP (by convention custom HTTP headers start with 'X-' so it could be header('X-stayhere: 1');), and if the X-stayhere header is present, the RewriteCond fails and it doesn't forward the browser to the PHP script.
If, however, you can cron a script to download the server logs and run them through some freeware logfile analyzer, I'd go with that instead. Having two redirects for every request is a fair bit of overhead (and if I were more awake I might be able to come up with different solutions).
Does Google Analytics not provide this option? Or could you not parse your server log files?
Why not use the access log facility built into Apache?
Apache has a "piped log" feature that allows you to redirect the access log to any program:
CustomLog "|/path/to/your/logger" common
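For example (a sketch; %{Host}i records the domain the visitor actually used, and the logger path is just a placeholder):
# log the Host header first so both domains can be told apart
LogFormat "%{Host}i %h %l %u %t \"%r\" %>s %b" vhost_common
# pipe every entry to your own logging program
CustomLog "|/usr/local/bin/logger.php" vhost_common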