I have inherited work on a very large PHP website. We're running into issues where certain pages cause strange redirects, and I'm looking for ideas on how to find these redirects in the PHP code.
We are not hosting the site, so we do not have shell access. Originally I was going to hook into a shutdown function or a destructor and log the call stack, but the call stack appears to contain only the exit function. Next I looked for ways to override the PHP 'header' function (most likely through namespaces), but that would involve adding the namespace to too many files.
Are there any PHP techniques, short of placing die statements everywhere or downloading all of the source code and running grep over it, that would let me find which redirect in the code is being triggered?
First
Avoid debugging on a live production site whenever possible. If you must, then I would set up "gates" to prevent other people from seeing your tests. You can do this by checking for an IP address or a custom GET variable. The following are two PHP examples.
if ($_SERVER["REMOTE_ADDR"] == "xxx.xxx.xxx.xxx") {
// Run Your Code Here
}
or
if (!empty($_GET["my_custom_get_var"])) {
// Run Your Code Here
}
Technique 1 - PHP Specific
Sometimes the easiest way to find something, or to test whether something is working, is to try to break it. In this case, for a PHP redirect (which uses the header() function) to succeed, no output may have been sent to the browser before the call. So first, see if you can trigger the dreaded "headers already sent" warning by outputting anything ahead of that call. If you can trigger that warning, you should get the file and line number where the redirect occurs. From there you can use debug_backtrace() to find the culprit of the redirect.
If output buffering is on, you will want to disable that temporarily or flush it for your test.
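As a sketch of how Technique 1 could be wired up behind the IP gate above (the file name, the log path, and the gate IP are placeholders), set_error_handler() captures the "headers already sent" warning and writes the file, line, and a backtrace to a log instead of showing anything to visitors:
<?php
// debug-redirects.php - pull this in at the very top of the entry script.
function log_redirect_warning($errno, $errstr, $errfile, $errline) {
    if (stripos($errstr, "headers already sent") !== false) {
        error_log(
            "Redirect attempted at $errfile:$errline\n" .
            print_r(debug_backtrace(), true),
            3,
            "/tmp/redirect-trace.log" // placeholder log location
        );
    }
    return false; // let PHP's normal error handling continue
}
if ($_SERVER["REMOTE_ADDR"] == "xxx.xxx.xxx.xxx") {
    set_error_handler("log_redirect_warning", E_WARNING);
    echo " "; // any output before header() triggers the warning
}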
Technique 2 - WordPress Specific
I thought I would also throw in a WordPress solution in case you are working on a very large WordPress site. As long as the code/plugin causing the redirect goes through WordPress core, you should be able to tap into its hook system. The following is a test you could run from the functions.php file of the active theme; it should show you the stack trace leading up to the redirect.
function my_custom_function($location, $status) {
    // Dump the stack trace that led to the redirect, then stop.
    print_r(debug_backtrace());
    exit;
}
add_filter("wp_redirect", "my_custom_function", 10, 2);
Related
I am not able to identify how and where it is happening. When I run a test on Pingdom, 3 out of 5 times the result shows that my website www.filliplingua.com is redirected to "/". The link to the results is below:
http://tools.pingdom.com/fpt/#!/bpgda9/www.filliplingua.com
It is a Joomla website. I even reset my .htaccess, turned off the redirect plugin in Joomla, and cleared all kinds of caches. It still shows up. Can you please help me find out how to solve this?
If the redirect originates from PHP, it is done with the header() function, and that function will throw a warning if output has already started, because the first output sends the HTTP headers.
PHP Manual about header():
Remember that header() must be called before any actual output is sent, either by normal HTML tags, blank lines in a file, or from PHP. It is a very common error to read code with include, or require, functions, or another file access function, and have spaces or empty lines that are output before header() is called. The same problem exists when using a single PHP/HTML file.
So what you can do in index.php is start the file with some output, like echo "here";. This will trigger a warning when the script tries to redirect, the redirect will fail, and in the error description (in the log or on the page, depending on your error-reporting settings) you will see which file the redirect originates from. From there, you can probably figure out why it tries to redirect.
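For example, temporarily, at the very top of index.php (revert these lines when you are done):
<?php
// Temporary debugging lines at the top of Joomla's index.php.
error_reporting(E_ALL);
ini_set('display_errors', '1'); // or rely on the error log if you cannot show errors
echo "here"; // any output before header() will break the redirect and raise the warning
// ... the original index.php code continues below ...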
Good luck!
I am using a PHP redirect in one of my projects. I do it like this:
header( 'Location: http://domain.net/?cig=warn&onum=' . $onum . '&cnum=' . $cnum . '&c=' . $c . '#Form' );
So you see, it's a normal use of PHP's header() function where some variables are included in the URL.
It's working, but testing the page with Firebug, I get this error in the console:
"NetworkError: 404 Not Found - http://domain.net/?cig=warn&onum=12&cnum=73&c=ui#Form"
Is there actually something wrong with the way I do it or is this just Firebug being picky because of all the parameters and the anchor?
Since this page lives within a WordPress environment, you must force a 200 status code by manually setting it before your headers get sent (using the WordPress status_header() function):
<?php
// This must appear before any output is sent!
status_header(200);
?>
You'll have to pull in the WordPress functions to get at this, obviously. The problem is that WordPress is intercepting the request before your page has a chance to process it. It cannot locate the "page" and sets a 404 response code, which is what eventually gets sent back out.
When I've had to pull in the WordPress functions in the past, I've done so by including these two lines on the page:
define('WP_USE_THEMES', false);
require_once("/path/to/my/site/wp-blog-header.php");
Now, that said, the pages I've done this with in the past have had the same look and feel as the ones in my WordPress installation. There may be a better way to pull in the functions alone; you'd have to consult the WordPress documentation to find out (I don't know if there is). How to do it properly might be a good question for the WordPress StackExchange site.
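Putting the two snippets above together, the top of the page being redirected to might look like this (the path is a placeholder, as above):
<?php
// A standalone page living inside the WordPress install.
define('WP_USE_THEMES', false);
require_once("/path/to/my/site/wp-blog-header.php");

// WordPress could not match this URL to a post or page and has queued a
// 404 status, so overwrite it with a 200 before any output is sent.
status_header(200);

// ... the rest of the page's markup and logic follows here ...
?>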
I am experiencing some very strange behavior when including a php file.
I need to load a script that is not on the same domain as the page that will be calling it.
I have already created a system that works using cURL, but I recently found out that many of the sites that will need access to this script do not have cURL installed.
I did, however, notice that these sites have allow_url_fopen set to on. With this knowledge I got started creating a new system that would let me just include the script on the remote site.
Just testing this out, I coded the script test.php as follows:
<?php
echo("test");
?>
I include this script on the remote page using:
<?php
include("http://mydomain.com/script.php");
?>
and it works no problem and "test" is printed at the top of the page.
However, if I add a function to the script and try to call the function from the page, it crashes.
To make it worse, this site has php errors turned off and I have no way of turning it on.
To fully make sure that I didn't just mess up the code, I made my test.php look like this:
<?php
function myfunc()
{
return "abc";
}
?>
Then on the page including the file:
<?php
include("http://mydomain.com/script.php");
echo(myfunc());
?>
And it crashes.
Any ideas would be greatly appreciated.
This is not odd behavior. Since you load the file over HTTP, it is interpreted by the remote web server before it is sent to your include statement.
Because the script has already been interpreted, none of its functions are visible to the including page; only the script's output is.
Either load it over FTP or create an API for the functions.
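If you go the API route, a minimal sketch could look like this (the endpoint name is made up, and it assumes allow_url_fopen is enabled on the consuming sites). On mydomain.com, api.php runs the function and prints only its result:
<?php
// api.php on mydomain.com
function myfunc()
{
    return "abc";
}
echo myfunc();
?>
The consuming site then fetches that output over HTTP instead of trying to include source code:
<?php
$result = file_get_contents("http://mydomain.com/api.php");
echo $result; // prints "abc"
?>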
My guess: the PHP in http://mydomain.com/test.php is interpreted by the web server of mydomain.com. All you're including is the result of that script. For a simple echo("test"), that's "test". Functions do not produce any output and are not made available to the including script. Confirm this by simply visiting http://mydomain.com/test.php in your browser and seeing what you get. You would need to stop mydomain.com from actually interpreting the PHP file and have it return the file as plain text instead.
But: this sounds like a bad idea to begin with. Cross-domain includes are an anti-pattern. Not only do they open you up to security problems, they also make every page load unnecessarily slow. If a cross-domain include is the answer, you are asking the wrong question.
You are including the client-side output of test.php rather than the server-side source code. You could rename test.php to test.phpc to prevent the server from executing the script, but exposing raw source like that is dangerous from a security point of view.
Simple problem:
I have conditions in php like so:
if (!$authorized)
show_site_404();
or like so for that matter
if (!$logged_on)
show_login_page();
These are obviously toll gates so that we don't have trespassers into parts of the system where only a specific user or only those that are logged on should be able to go.
In both these cases the code simply loads a page other than the one that was requested, using:
require( MAINPATH . 'site-404.php' );
exit();
With Apache, this was never a problem. No settings needed.
With Nginx, it sends all such calls to the frontpage. It's like it doesn't accept an internal "re-direct" if you see what I mean.
Any help appreciated.
The problem with require is that it stops your script with a fatal error if the file cannot be loaded, although with errors turned off you probably wouldn't notice, since you call exit() to stop execution anyway.
Check that the MAINPATH constant actually points to the included files; as a test, try including them from the same directory without the constant.
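As a sketch, the gate function could also send an explicit 404 status before rendering the local page, so the response is a real 404 rather than a 200 with error content (MAINPATH is the constant from the question; note that if Nginx has fastcgi_intercept_errors enabled, error statuses may be replaced by Nginx's own error_page):
function show_site_404()
{
    // Send the status line before any body output.
    header("HTTP/1.1 404 Not Found");
    // Render the local 404 page and stop.
    require(MAINPATH . 'site-404.php');
    exit();
}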
I'm developing a website, and due to user input or for other reasons, I need to show some error messages.
For this, I have a page named error.php, and I get the error number using $_GET. All error messages are stored in an array.
Example:
header( 'Location: error.php?n=11' );
But I don't want users to be able to enter an error code in the URL and see all the other error messages.
For preventing that, I thought I could whitelist the referer page, and only show the error message if the referer is found in my whitelist.
It should be fairly similar to this (haven't tested it yet ;) )
$accept = false;
$allowedReferer = array(0 => 'page1.php', 'page2.php');
if (in_array($_SERVER['HTTP_REFERER'], $allowedReferer)) {
    $accept = true;
}
if ($accept) {
    $n = $_GET['n'];
    echo "Error: " . $errorList[$n];
}
Is this method good enough to keep out prying users?
I'm doing this with PHP5
Thanks
No, it isn't remotely secure: the HTTP Referer header is trivial to spoof, and is not a required header either. I suggest you read this article for an example of exploiting code (written in PHP), or download this add-on for Firefox to do it yourself from the comfort of your own browser.
In addition, your $allowedReferer array should contain full URLs, not just the script names, otherwise the code will also be exploitable from remote referrers, e.g. from
http://www.example.org/page1.php
To summarise: you cannot restrict access to any public network resource without requiring authentication.
Rather than redirect, you could simply display the error "in place" - e.g. something as simple as adapting your present code with something like
if ($error_condition)
{
    $_GET['n'] = 11;
    include "/path/to/error.php";
    exit;
}
In practice it might be a little more sophisticated, but the idea is the same - the user is presented with an error message without redirecting. Make sure you output some kind of error header, e.g. header("HTTP/1.0 400 Bad Request"), to tell the browser that it's not really seeing the requested page.
If you do want to redirect, then you could create a "tamperproof" URL by including a hash of the error number with a salt known only to your code, e.g.
$n=11;
$secret="foobar";
$hash=md5($n.$secret);
$url="http://{$_SERVER['HTTP_HOST']}/error.php?n={$n}&hash={$hash}";
Now your error.php can check whether the supplied hash was correctly created. If it was, then in all likelihood it was created by your code, and not the user.
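On the receiving side, error.php can recompute and compare the hash before showing anything ($secret must match the value used when building the URL, and $errorList is the array of messages from the question):
$secret = "foobar";
$n      = isset($_GET['n'])    ? $_GET['n']    : '';
$hash   = isset($_GET['hash']) ? $_GET['hash'] : '';

if ($hash === md5($n . $secret) && isset($errorList[$n])) {
    echo "Error: " . $errorList[$n];
} else {
    header("HTTP/1.0 400 Bad Request");
    echo "Invalid error reference.";
}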
You shouldn't use an external redirect to get to an error page. How I structure my PHP is like this:
I have a common file that's included on every page, with common functions: handling login/logout, setting up constants and the like. Add an error() function there that you can pass error information to; it shows an error page and exits. An alternative is to use the index.php?include=pagename.php idiom to achieve common functionality, but I find that far more flaky and error-prone.
If you externally redirect the client (which you obviously need to do sometimes) never rely on the information passed via that mechanism. Like all user input it's inherently untrustworthy and should be sanitized and treated with extreme caution. Don't use cookies either (same problem). Use sessions if you need to persist information between requests.
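As a sketch, a session-based variant keeps the error code out of the URL entirely; error.php reads the value and clears it so a refresh does not re-show it ($errorList as in the question):
// Where the error occurs:
session_start();
$_SESSION['error_code'] = 11;
header('Location: error.php');
exit;

// In error.php:
session_start();
if (!isset($_SESSION['error_code'])) {
    header('Location: index.php'); // nothing to show
    exit;
}
$n = $_SESSION['error_code'];
unset($_SESSION['error_code']);
echo "Error: " . $errorList[$n];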
HTTP_REFERER can be spoofed trivially by those with enough incentives (telnet is the tool of choice there), and shouldn't be trusted.
Error messages should never reveal anything critical anyhow, so I'd suggest you to design your error messages in such a way that they can be showed to anyone.
That, or use random hashes to identify errors (instead of 11, use 98d1ud109j2, etc.) and store them in a central place in an associative array somewhere:
$errors[A_VERY_FATAL_ERROR] = "308dj10ijd";
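A sketch of how the lookup might work with such opaque codes (the codes and messages here are made up):
// Central map from opaque code to user-facing message.
$errorsByCode = array(
    '308dj10ijd'  => 'Something went wrong while saving your data.',
    '98d1ud109j2' => 'Please log in before continuing.',
);

// error.php?n=308dj10ijd
$code = isset($_GET['n']) ? $_GET['n'] : '';
echo isset($errorsByCode[$code]) ? $errorsByCode[$code] : 'Unknown error.';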
Why don’t you just include the error message script? To get rid of any output produced so far, use output buffering and clear the buffer on error:
if ($error) {
    ob_clean(); // discard anything already in the output buffer (requires an earlier ob_start())
    $errorCode = 11;
    include 'error.php';
    exit;
}
Instead of redirecting to an error page, why not include one? You can restrict direct access to the directory containing the error PHP files with .htaccess:
RedirectMatch 404 ^/?error-pages/.*$
and inside the error-pages you can have include-able pages which display errors.
With this method you can be sure that no one can directly access the pages in the error-pages directory yet you can still include them within scripts that are publicly accessible.
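The publicly accessible scripts then include the protected files as usual, for example (paths and file names are illustrative):
// e.g. in a public page
if (!$authorized) {
    include __DIR__ . '/error-pages/not-authorized.php';
    exit;
}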
If you handle errors before sending headers, you can easily create a function that outputs a basic HTML page with content and exits right after. That way there is no specific need for any other page (apart from the functions page, I guess).
It's as simple as checking whether there's a problem and, if there is, calling the function.
I use a function like this that also writes the details away when it is called, so I have my own error logs...
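A sketch of what such a function might look like (the log path, status code, and markup are placeholders):
<?php
// Hypothetical helper: log the details, show a minimal HTML error page, and stop.
function show_error($message, $logFile = '/path/to/error.log')
{
    // Write the technical details away for your own error log.
    error_log(date('c') . ' ' . $_SERVER['REQUEST_URI'] . ' ' . $message . "\n", 3, $logFile);

    // Send an error status if headers have not gone out yet.
    if (!headers_sent()) {
        header('HTTP/1.0 400 Bad Request');
    }

    echo '<!DOCTYPE html><html><head><title>Error</title></head><body>';
    echo '<h1>Something went wrong</h1><p>' . htmlspecialchars($message) . '</p>';
    echo '</body></html>';
    exit;
}

// Usage:
// if (!$valid_input) { show_error('Please fill in all required fields.'); }
?>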