Prevent scripts from stealing password in open source PHP project? - php

I'm currently developing a PHP framework. Other developers may create modules for the framework. Source code of these modules should reside in the framework directory.
Since the project is open source, modules know the location of the config file, which has the database password in it. How can I protect the password from malicious modules? Note that a module could simply require_once the config file and do harmful things!
Currently I'm storing Database passwords in a directory named config, and protecting it by a .htaccess file:
<Directory config>
order allow,deny
deny from all
</Directory>
But that is not sufficient to prevent scripts from stealing the password, is it?
I've read the thread How to secure database passwords in PHP? but it did not help me finding the answer.

In PHP, you can't. It's not a sandboxed language; any code you run gets all the permissions of the user it's running under. It can read and write files, execute commands, make network connections, and so on. You must absolutely trust any code you bring into your project to behave well.
If you need security boundaries, you would have to implement them yourself through privilege separation. Have each module run in its own process, as a user with very low privileges. Then you need some sort of inter-process communication: that could be over OS-level pipes, or by having separate .php files, each running as a different user, exposed as web services that the user-facing scripts call. Either way, it doesn't fit neatly into the usual way PHP applications work.
Or use another language such as Java, which can offer restricted code with stronger guarantees about what it is allowed to do (see SecurityManager et al).
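To illustrate the privilege-separation idea in PHP, here is a very rough sketch of talking to a module over OS-level pipes. The sudo setup, the low-privilege "module-user" account, and the module path are assumptions for illustration only.
<?php
// Run an untrusted module as a separate, low-privilege process and talk to it
// over pipes. The "module-user" account and its sudoers entry are hypothetical.
$descriptors = [
    0 => ['pipe', 'r'],  // module's stdin  (framework writes requests here)
    1 => ['pipe', 'w'],  // module's stdout (framework reads responses here)
    2 => ['pipe', 'w'],  // module's stderr
];
$process = proc_open('sudo -u module-user php modules/report/run.php', $descriptors, $pipes);
if (is_resource($process)) {
    fwrite($pipes[0], json_encode(['action' => 'render', 'params' => []]));
    fclose($pipes[0]);
    $response = stream_get_contents($pipes[1]);
    fclose($pipes[1]);
    fclose($pipes[2]);
    proc_close($process);
    // $response contains only what the module chose to print; the module
    // process never had permission to read the framework's config file.
}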

Unfortunately, PHP is not a very secure language or runtime. That said, the best approach for this sort of information is to keep a configuration file holding your username/password outside of your document root. In addition, modules should use your API to get a database connection rather than creating one of their own based on this file. The config values should not be global. Design something like this in a very OOP style and provide the necessary encapsulation to block unwarranted access.
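A minimal sketch of that kind of encapsulated API, assuming the credentials live in an ini file outside the document root (the class name and paths are made up, and, as noted in another answer, a malicious module running in the same process could still read the file directly):
<?php
// Hypothetical framework class: modules call Database::connection() and never
// handle the credentials themselves.
final class Database
{
    private static $connection = null;

    public static function connection()
    {
        if (self::$connection === null) {
            // Config file stored outside the document root (illustrative path).
            $config = parse_ini_file('/etc/myframework/db.ini');
            self::$connection = new PDO($config['dsn'], $config['user'], $config['password']);
        }
        return self::$connection;
    }
}

// A module just asks the framework for a connection:
$rows = Database::connection()->query('SELECT 1')->fetchAll();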

I've got an idea that may work for you, but it all really depends on what abilities your framework scripts have. For my idea to be plausible security-wise, you essentially need to create a sandbox for your framework files.
One idea:
What you could do (though it is probably more resource intensive) is read each module like you would a text file.
Then you need to identify everywhere that reads a file within their script. You've got things like fopen and file_get_contents to consider. One thing I'd probably do is tell the users they may only read and write files using file_get_contents and file_put_contents, then use a tool to strip out any other file read/write functions from their script (like fopen).
Then write your own functions to replace file_get_contents and file_put_contents, and make their script use your functions rather than PHP's. In your file_get_contents replacement you essentially check permissions: if the script is trying to access your config file, return a string saying "access denied"; otherwise use the real file_get_contents to read and return the file.
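A rough sketch of such replacement functions, assuming a config directory and a modules directory (the paths and function names are for illustration only):
<?php
// The module's calls to file_get_contents / file_put_contents would be
// rewritten to call these wrappers instead.
define('PROTECTED_DIR', realpath(__DIR__ . '/config'));
define('MODULE_DIR', realpath(__DIR__ . '/modules'));

function sandbox_file_get_contents($filename)
{
    $real = realpath($filename);
    // Refuse anything inside the protected config directory.
    if ($real === false || strpos($real, PROTECTED_DIR) === 0) {
        return 'access denied';
    }
    return file_get_contents($real);
}

function sandbox_file_put_contents($filename, $data)
{
    $dir = realpath(dirname($filename));
    // Only allow writes inside the modules directory.
    if ($dir === false || strpos($dir, MODULE_DIR) !== 0) {
        return false;
    }
    return file_put_contents($filename, $data);
}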
As for your file_put_contents replacement, you just need to make sure they're not writing files to your server (they shouldn't be allowed to; imagine what they could do!). Alternatively, you could probably use CHMOD to stop that happening.
Once you've essentially rewritten the module in memory, you then run it with eval.
This would take a considerable amount of work - but it's the only pure PHP way I can think of.

I am not sure if it is possible, but you could maybe build a system that scans the files in the module for any PHP code that tries to include the config file, and then warns the user about it before installing.
However it really shouldn't be your responsibility in the end.

A very good question with no good answer that I know of, however...
Have you seen runkit? It allows for sandboxing in PHP.
The official version apparently isn't well maintained any more; however, there is a version on GitHub that is quite popular: zenovich/runkit on GitHub.
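A very rough sketch of what a runkit sandbox looks like, based on the legacy documentation (the option names and the eval pseudo-method may differ in the maintained forks, so treat this as an illustration rather than working code):
<?php
// Requires the runkit extension built with thread safety.
$sandbox = new Runkit_Sandbox([
    'open_basedir'      => '/var/www/framework/modules/',
    'allow_url_fopen'   => false,
    'disable_functions' => 'exec,shell_exec,passthru,system,proc_open',
]);

// Code running inside the sandbox cannot reach files outside open_basedir,
// so reading the framework's config directory fails.
$sandbox->eval('var_dump(file_get_contents("/var/www/framework/config/config.ini"));');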
Although the best solution is perhaps a community repository where every submission is checked for security issues before being given the OK to use.
Good Luck with your project

Well, I see no problem here.
If it's a module, it can do harmful things by definition, with or without database access. It can delete files, read cookies, etc etc.
So, you have to either trust these modules (maybe after reviewing them) or refuse to use modules at all.

Don't include your actual config file in your open source project.
The way I do it is to create just a template config file, config.ini.dist.
When a user downloads your project they have to rename it to config.ini and enter their own configuration information.
This way every user will have their own database connection info like username and password. Also when you update your project and users download your newest version, their own config files will not be overwritten by the one from your program.
This is a very common way to store configuration in open source projects: you distribute a template config file and tell users that they have to rename it and enter their own configuration details.
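For example, a minimal loader, assuming an INI-style template (the file layout and key names are just for illustration; remember to tell version control to ignore config.ini itself):
<?php
$configFile = __DIR__ . '/config.ini';

if (!is_readable($configFile)) {
    die('Copy config.ini.dist to config.ini and fill in your own settings.');
}

$config = parse_ini_file($configFile, true);
$pdo = new PDO(
    "mysql:host={$config['db']['host']};dbname={$config['db']['name']}",
    $config['db']['user'],
    $config['db']['password']
);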

I don't think there is a way to prevent a module from capturing sensitive data from the framework configuration and sending it to some stranger out there. On the other hand, I don't think it should be your responsibility to protect the user from that happening.
After all, it's the user who decides to install any module, right? In theory it is they who would have to verify a module's intent.
Drupal, for example, does nothing in this direction.
There is a worse problem, anyway: what would prevent a nasty module from wiping out your entire database once it is installed?
And, by the way, what could a malicious stranger do with your database password? At the very least, you need to secure the database connection anyway, so that only trusted hosts can connect to the database server (an IP/host-based check, for example).

Related

php security - prevent loading php files that's writable

I'm trying to enforce a policy of mutually exclusive write and execute. Since the interpreter only needs read access to be able to execute a file, it becomes very tricky to avoid loading php code that is not trusted.
So, in essence, if a php file can be opened for writing by the interpreter, or if the file is owned by the user running the php process (i.e. that user can change permissions to gain write access), I'd like to stop the file from being loaded. Does anyone know of a way to achieve this?
This presumably only makes sense in a hosting-type environment where the user uploading legitimate PHP code and the php interpreter's user are different users. As such, this would probably need to be a config option, as the restriction only makes sense in certain configurations and goes straight against all "best practices" recommendations I could find on the matter.
There's a much easier way to accomplish this property than worrying about filesystem permissions:
Ship all of your code in a Phar (PHP Archive).
Use a digital signature for your Phars.
By default, PHP treats PHP archives as read-only. See SitePoint's guide for serving websites from a PHP Archive.
But even if someone can circumvent PHP's read-only handling through a misconfiguration, using a digital signature (for which the private key must NOT be stored on the web server, only the public key) means they will not be able to tamper with the Phar (unless they steal your private key).
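A minimal build-script sketch, assuming OpenSSL signing and made-up paths (it has to be run with phar.readonly turned off, e.g. php -d phar.readonly=0 build.php):
<?php
$phar = new Phar('app.phar');
$phar->buildFromDirectory(__DIR__ . '/src');
$phar->setDefaultStub('index.php');

// Sign with a private key that is NOT kept on the web server.
$privateKey = file_get_contents('/secure/offline/location/priv.pem');
$phar->setSignatureAlgorithm(Phar::OPENSSL, $privateKey);

// PHP looks for app.phar.pubkey next to the archive to verify the signature,
// so ship only app.phar and the public key.
copy('/secure/offline/location/pub.pem', 'app.phar.pubkey');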
Additionally, Snuffleupagus seems helpful for locking down what PHP scripts are allowed to do.

Custom Permissions for a Specific PHP Script

I am developing a website which provides the option that clients can upload their PHP scripts to a specific directory on my server. I want to make sure that my system is secure, and thus I do not want people to be able to use those PHP scripts to edit or view files outside of the directory they are uploaded to. In other words, if there is a file at public_html/directory1/foo.php, it should only be able to edit and view files in public_html/directory1, and should not be able to edit or view files anywhere else on the system. Is there any way of doing this?
This is super dangerous. Technically there are ways to do this if you know your way around Linux/Windows user and group configuration, Apache configuration, and PHP configuration. You'll need to run Apache under a user with extremely specific permissions and configure PHP to forbid certain types of commands (most notably the exec/system commands, but there are a lot of other ones that are likely to get you in trouble).
I'd strongly suggest you try to figure out a way to avoid giving your users the right to upload files to a folder where they'll be evaluated by the server as PHP. There's just too many things that can go wrong, and too many settings that can be overlooked.
If you do decide to go this route, do a lot of reading on secure PHP configuration and Apache Privilege Separation.
Since PHP is a server-side script, I believe you'll find it hard to properly secure your system. Having said that, you can limit those files by running the Apache server as a user which has no access to other directories; check SELinux for more info. Please note that it's really hard to do so; you might forget even one file, which can later be used to hack the system.
A better way might be running this server on top of a VM, so that even if someone hijacks the VM, you can always shut it down and restore its data.

Confused On Higher Web Root Access For Secure Login

Going to try and mess around with forms of secure login now, and the php files that connect to the database are going to be stored above the web root, public_html, so they cannot be publicly accessed.
My first question is that people are saying you cannot invoke this php file with Javascript.
That makes sense because Javascript runs client-side and could expose information, but this leaves me a bit confused on how to invoke this php file securely.
Should I have another php file below the web root that invokes the content-sensitive one above the web root?
Would this be achieved with "../../some-folder-above-web-root/some-php-above-web-root.php", and if so, doesn't that reveal the location of the php file above the web root? Or doesn't its location matter, since people (i.e. hackers) cannot access it?
All in all I really just want to know how to communicate to a script above the web root, properly and securely.
Yes, you are correct. There should be a PHP file below the web root that will access the secured PHP files above the web root. In Zend Framework, there is a single index.php file, called the bootstrapper, which does many things, including the following (a minimal sketch follows the list):
set the error display level
set the include paths
define global constants
read the configuration files
load the library classes
get the front controller
configure the database connection
determine the route, per RESTful URLs and MVC
set Exception handling
call the requested controller
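A very small sketch of such a bootstrapper, with made-up paths and a hypothetical Router class, just to illustrate the flow:
<?php
error_reporting(E_ALL);
ini_set('display_errors', '0');                       // set the error display level

define('APP_ROOT', dirname(__DIR__));                 // define global constants
set_include_path(APP_ROOT . '/library' . PATH_SEPARATOR . get_include_path());

$config = parse_ini_file(APP_ROOT . '/config/app.ini', true);   // read the configuration

spl_autoload_register(function ($class) {             // load library classes
    require str_replace('\\', '/', $class) . '.php';
});

$db = new PDO($config['db']['dsn'], $config['db']['user'], $config['db']['password']);

try {
    $router = new Router($_SERVER['REQUEST_URI']);    // determine the route
    $router->dispatch($db);                           // call the requested controller
} catch (Exception $e) {                              // exception handling
    http_response_code(500);
    echo 'Something went wrong.';
}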
I would highly suggest using an MVC framework; they are industry standard and have pre-built functionality for many common problems, including secure logins. Zend Framework implements Access Control List style security, though you can easily roll your own. Other notable frameworks are Drupal, Yii, CodeIgniter, Symfony, CakePHP, and Joomla.
Other best practices for security are:
filter all file uploads based on mimetype, NOT file extension or filetype
filter all POST and GET data, based on the database table column type and length
sanitize all SQL strings before running them
change all the default login passwords on your servers, ex: Apache, MySQL, FTP, SSH, SVN, etc.
learn how to configure php.ini, httpd.conf, etc.
disable any services, modules, and plugins, not being used in your framework, PHP, Apache, and MySQL
fuzz your code
use unit tests
learn a bit about penetration testing
You can give those files read-only permission for 'other', something like 754 (all permissions for the owner, read and execute for the group, read only for others), for example; then you can read their contents using, for example, file_get_contents and an absolute path.
A common way to do this is to have a config file (with the sensitive info inside) outside the public web dir, read it using an absolute path, and then use its values as variables.
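For example (the path and key names are assumptions for illustration):
<?php
// Read a config file stored above the web root using an absolute path.
$raw = file_get_contents('/home/myaccount/private/db-config.ini');
$config = parse_ini_string($raw, true);

$pdo = new PDO(
    "mysql:host={$config['database']['host']};dbname={$config['database']['name']}",
    $config['database']['user'],
    $config['database']['password']
);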
If you want to EXECUTE a script outside the public web path you have to give EXECUTE permission to 'other', which isn't very secure.
Also, regarding your question about JavaScript: it isn't about security. JavaScript code won't be executed on the server, where the file with sensitive info is; it will be executed in the client's browser, so there's nothing for it to read there.

Securely storing database connection details. Why use .inc at all?

I am always reading that you should always store your database credentials outside of your document root because normally you would have them set to db.inc or something similar.
I can understand this and naturally it makes perfect sense.
What I don't understand is why you are making the file into one that you either need to set apache to hide or you need to put it into a secure location in the first place.
What is the issue with making it, say, db.php? Then Apache knows to execute the script first and return the output (which would presumably be blank in most cases).
Maybe I am being dumb and missing an inherent security flaw but is there any issues with just storing your details in a .php file? I mean Wordpress and other major open source PHP applications manage to get away with it, but is this because they can't make their script talk to folders outside of www or because it is just as secure as any other method?
Maybe I am being dumb and missing an inherent security flaw but is there any issues with just storing your details in a .php file?
A tiny slip up in the configuration of Apache, and the file starts being served raw instead of being processed by the PHP engine.
I mean Wordpress and other major open source PHP applications manage to get away with it, but is this because they can't make their script talk to folders outside of www or because it is just as secure as any other method?
They accept increased risk for increased convenience.
Storing files containing (database) credentials outside the document root is always a good idea.
Say you upgrade Apache but forget to update the PHP configuration. Any file in the document root could then be downloaded without being parsed.
Wordpress, Joomla, phpBB and others are made to be portable. That is, to reside in one folder.

How to self-update PHP+MySQL CMS?

I'm writing a CMS on PHP+MySQL. I want it to be self-updatable (through one click in the admin panel). What are the best practices?
How do I compare the current version of the CMS and the version of the update (the application itself and the database)? Should it just download a zip archive, unzip it and overwrite the files? (But what to do with files that are no longer used?) How do I check whether an update was downloaded correctly? It also supports modules, and I want these modules to be downloadable from the CMS admin panel.
And how should I update MySQL tables?
Keep your code in a separate location from configuration and otherwise variable files (uploaded images, cache files, etc.)
Keep the modules separate from the main code as well.
Make sure your code has file system permissions to change itself (use SuPHP for example).
If you do these, simplest would be to completely download the new version (no incremental patches), and unzip it to a directory adjacent to the one containing the current version. Because there won't be variable files inside the code directory, you can just remove or rename the old one and rename the new one to replace it.
You can keep the version number in a global constant in the code.
As for MySQL, there's no other way than making an upgrade script for every version that changes the DB layout. Even automatic solutions to change the table definition can't know how to update the existing data.
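A hedged sketch of a per-version upgrade-script runner along those lines; the schema_version table, the file naming scheme, and the CMS_VERSION constant are all made up for illustration:
<?php
// $pdo is assumed to be an existing PDO connection.
define('CMS_VERSION', 42);   // kept in the code and bumped with every release

$currentVersion = (int) $pdo->query('SELECT version FROM schema_version')->fetchColumn();

for ($v = $currentVersion + 1; $v <= CMS_VERSION; $v++) {
    $script = __DIR__ . "/upgrades/upgrade_to_{$v}.sql";
    if (is_file($script)) {
        // Assumes each upgrade file holds a single statement, or that
        // multi-statement execution is enabled for the connection.
        $pdo->exec(file_get_contents($script));
    }
    $pdo->exec("UPDATE schema_version SET version = {$v}");
}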
A slightly more experimental solution could be to use something like the phpsvnclient library.
With features:
List all files in a given SVN repository directory
Retrieve a given revision of a file
Retrieve the log of changes made in a repository or in a given file between two revisions
Get the repository latest revision
This way you can see if there are new files, removed files or updated files and only change those in your local application.
I reckon this will be a little harder to implement, but the benefit would probably be that it is easier and quicker to add updates to your CMS.
You have two scenarios to deal with:
The web server can write to files.
The web server can not write to files.
This just dictates whether you will be decompressing a ZIP file or using FTP to update the files. In either case, your first step is to take a dump of the database and a backup of the existing files, so that the user can roll back if something goes horribly wrong. As others have said, it's important to keep anything that the user will likely customize out of the scope of the update. Wordpress does this nicely. If a user has made changes to core logic code, they are likely smart enough to resolve any merge conflicts on their own (and smart enough to know that a one-click upgrade is probably going to lose their modifications).
Your second step is to make sure that your script doesn't die if the browser is closed. This is a process that really should not be interrupted. You could accomplish this via ignore_user_abort(true);, or some other means. Or, if you like, allow the user to check a box that says "Keep going even if I get disconnected". I'm assuming that you'll be handling errors internally.
Now, depending on permissions, you can either:
Compress the files to be updated to the system /tmp directory
Compress the files to be updated to a temporary file in the home directory
Then you are ready to:
Download and decompress the update in situ, i.e. in place.
Download and decompress the update to the system's /tmp directory and use FTP to update the files in the web root
You can then:
Apply any SQL changes as needed
Ask the user if everything went OK
Roll back if things went badly
Clean up your temp directory in the system /tmp directory, or any staging files in the user's web root / home directory.
The most important aspect is making sure you can roll back changes if things went bad. The other thing to ensure is that if you use /tmp, be sure to check permissions of your staging area. 0600 should do nicely.
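A rough sketch of that flow; every path, URL and helper function here is hypothetical and only meant to show the order of operations:
<?php
ignore_user_abort(true);   // keep going even if the browser is closed
set_time_limit(0);

$staging = sys_get_temp_dir() . '/cms-update-' . uniqid();
mkdir($staging, 0700);

// 1. Back up the current files and database so we can roll back.
backup_files_and_database($staging . '/backup');            // hypothetical helper

// 2. Download the package and check its integrity before touching anything
//    (the remote copy() requires allow_url_fopen).
$package = $staging . '/update.zip';
copy('https://updates.example.com/cms-latest.zip', $package);
$expected = trim(file_get_contents('https://updates.example.com/cms-latest.zip.sha256'));
if (hash_file('sha256', $package) !== $expected) {
    die('Checksum mismatch, aborting update.');
}

// 3. Extract and copy over the web root (use FTP instead when the web
//    server cannot write to its own files).
$zip = new ZipArchive();
if ($zip->open($package) === true) {
    $zip->extractTo($staging . '/unpacked');
    $zip->close();
    copy_directory($staging . '/unpacked', '/var/www/cms');  // hypothetical helper
}

// 4. Apply SQL changes, then clean up the staging area.
apply_sql_upgrades();                                        // hypothetical helper
delete_directory($staging);                                  // hypothetical helper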
Take a look at how Wordpress and others do it. If your choice of licenses and theirs agree, you might even be able to re-use some of that code.
Good luck with your project.
There is a SQL library called SQLOO (that I created) that attempts to solve this problem. It's a little rough still, but the basic idea is that you setup the SQL schema in PHP code and then SQLOO changes the current database schema to match the code. This allows for the SQL schema and attached PHP code to be changed together and in much smaller chunks.
http://code.google.com/p/sqloo/
http://code.google.com/p/sqloo/source/browse/#svn/trunk/example <- examples
Based on experience with a number of applications, CMS and otherwise, this is a common pattern:
Upgrades are generally one-way. It's possible to take a snapshot of the full system state for a restore upon failure, but restoring usually entails losing any data/content/logs added to the system since the upgrade. Performing an incremental rollback can put data at risk if something was not converted properly (e.g. database table changes, content conversions, foreign key constraints, index creation, etc.). This is especially true if you've made customizations that rollback scripts couldn't possibly account for.
Upgrade files are packaged with some means of authentication/verification, such as md5 or sha1 hashes and/or a digital signature, to ensure they came from a trusted source and were not tampered with. This is particularly important for automated upgrade processes. Suppose a hacker exploited a vulnerability and told the updater to upgrade from a rogue source.
Application should be in an offline mode during the upgrade.
Application should perform a self-check after an upgrade.
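For instance, a signature check along those lines might look like this (the file names are made up; openssl_verify returns 1 only for a valid signature):
<?php
$package   = '/tmp/cms-update.zip';
$signature = file_get_contents('/tmp/cms-update.zip.sig');
$publicKey = openssl_pkey_get_public(file_get_contents(__DIR__ . '/update-key.pub'));

$valid = openssl_verify(
    file_get_contents($package),
    $signature,
    $publicKey,
    OPENSSL_ALGO_SHA256
);

if ($valid !== 1) {
    die('Update package failed signature verification, refusing to install.');
}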
I agree with Bart van Heukelom's answer, it's the most usual way of doing it.
The only other option would be to turn your CMS into a bunch of remote Web Services/scripts and external CSS/JS files that you host in one location only.
Then everyone using your CMS would connect to your central "CMS server" and all that would be on their (calling) server is a bunch of scripts to call your Web Services/scripts that do all the processing and output. If you went down this route you'd need to identify/authenticate each request so that you returned the corresponding data for the given CMS user.
