Securing Passwords in a Multi-Dev nginx setup - php

We have an Ubuntu 12.04 + PHP + nginx setup on our servers. Our developers have access to both the /usr/lib/php5/ and /var/www/ folders. We work on a lot of projects and at any given time have 50-100 different apps/modules, each with an active DB.
We would like to come up with a mechanism to secure our DB passwords with the following considerations:
The sysadmins create the password and register it somewhere (a file, or a sqlite db or some such)
The apps provide a key indicating which DB and what permission level they want, and this module returns an object that contains everything needed for the connection. Something like "user_manager.client1.ro" or "user_manager.client1.rw".
The mechanism should provide the specific password to the requesting app (and hence be accessible by 'www-data'), but all the other passwords must remain unreadable unless their keys are known.
We have managed to get a prototype going for this, but the central password-providing module runs in www-data space, and hence the file/sqlite store can always be accessed by any other file in /var/www/ or /usr/lib/php5, so all passwords can be compromised.
Is there a way to set things up such that the password-providing module runs with root privileges and the apps request the passwords from it? I know we can build a whole new service for this, but it seems too much to build and maintain (especially because this service becomes our single point of failure).
Any suggestions?

Using permissions, you could do something like:
1) give each developer their own user account
2) chown every folder under /var/www/ to user www-data and a specific group for that site, something like:
/var/www/site-a www-data group-a
/var/www/site-b www-data group-b
etc.
3) chmod every directory (and all subdirectories and files, with -R) to 770
4) add each developer to every group whose site they are actually developing on.

A different approach, as I mentioned in a different answer, would be to provide the keys via an API when an application asks for them.
Your trusted devs would then query the API with a unique key to get the relevant credentials. The key can be mapped to a set of credentials (for devs on several projects).
If you protect the API with either a client certificate or IP filtering, you reduce the risk of a leak: if the access key is lost, an attacker still needs to be on the right network or to hold the certificate to reach the API. I would favor the certificate if you trust the developers (per your comment).
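A minimal sketch of what querying such an API could look like from PHP, using curl with a client certificate. The endpoint URL, certificate paths, and JSON shape here are assumptions, not an existing service:

<?php
// Sketch only: fetch the credentials for one key from an internal API,
// authenticating with a client certificate.
function fetch_credentials($key)
{
    $ch = curl_init('https://secrets.internal.example/lookup?key=' . urlencode($key));
    curl_setopt_array($ch, array(
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_SSLCERT        => '/etc/ssl/app/client.crt',      // client certificate
        CURLOPT_SSLKEY         => '/etc/ssl/app/client.key',      // its private key
        CURLOPT_CAINFO         => '/etc/ssl/app/internal-ca.crt', // pin the internal CA
    ));
    $body = curl_exec($ch);
    if ($body === false) {
        throw new RuntimeException('credential lookup failed: ' . curl_error($ch));
    }
    curl_close($ch);
    return json_decode($body, true); // e.g. array('host' => ..., 'user' => ..., 'pass' => ...)
}
$creds = fetch_credentials('user_manager.client1.ro');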

The simplest solution is to run the application that manages the credentials and hands them out to the developers from a different instance of the webserver (obviously listening on a different port). That instance can then run as a different user, with permissions tightened down so that only that user has access to the secret files it needs.
But create an additional user for this; don't run it as root.
Under Apache I'd point you to suexec or suPHP, but since you don't use Apache, that's not an option for you.
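For nginx, a sketch of what that broker endpoint could look like: a second php-fpm pool runs it as its own user (call it secretsvc), and the credentials file is readable only by that user. The store path, file format, and key scheme are assumptions:

<?php
// Runs in a separate php-fpm pool as user "secretsvc", on its own port.
// /etc/app-secrets/credentials.json is mode 0400, owned by secretsvc,
// so code running as www-data can never read the whole store.
$store = json_decode(file_get_contents('/etc/app-secrets/credentials.json'), true);
$key   = isset($_GET['key']) ? $_GET['key'] : '';
if (!isset($store[$key])) {
    header('HTTP/1.1 404 Not Found'); // unknown key: reveal nothing
    exit;
}
header('Content-Type: application/json');
echo json_encode($store[$key]); // only the entry for the requested key, e.g. "user_manager.client1.ro"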

Related

How to run PHP as an individual user, specifically including file access permissions

The application produces data per user. Each user has a unique user ID and associated unique file permissions; user files are stored in individual directories with the corresponding user permissions on each directory.
The requirement is to provide secure access, via a portal (assume PHP), giving each individual user access to only that user's files. A design being considered is to mimic the directory structure and permissions in the portal environment: if it were possible to run PHP as a user, the system's permission checks could be reused for access security. (This would limit the scope of the security implementation to the login process rather than the application itself.)
Question: Is it possible to run PHP as a user and assume user file permissions?
Research has identified some similar questions, but not the direct question of running PHP as an individual user.
There are a handful of solutions, from best to worst:
1) Use something like FPM to configure separate process pools, one configured to run as each user. Only the best option if you have a small, fixed number of users; it becomes a config/admin nightmare otherwise. Basically shared hosting.
2) Stop relying on OS-level users and permissions enforcement altogether and build it into your app.
3) Create your own permissions enforcement abstraction layer in PHP. Basically #2, but without the first part, which actually makes it more complicated.
4) Use posix_seteuid() and posix_setegid() to change the effective UID and GID of the running process.
"But wait!" I hear you say, "That last option seems like exactly what I need! Why is it the worst?"
Because in order to change the UID or GID of a process that process must first be running as a user that is permitted to do such a thing. That user is root.
Running PHP as root, even briefly in order to drop to a different UID/GID, is a massive security hole. Even the most minor bug or flaw is now game over, and this is exponentially more true if you're writing a file manager.
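For concreteness, here is roughly what option 4 looks like in code (illustration only; the UID/GID values are placeholders, and note it does nothing unless the process was started as root, which is exactly the problem):

<?php
// Illustration of option 4 -- do not do this in production.
$uid = 1001; // target user's UID (placeholder)
$gid = 1001; // target user's GID (placeholder)
// Drop the group first; once the UID changes we lose the right to.
if (!posix_setegid($gid) || !posix_seteuid($uid)) {
    die('posix_sete[ug]id failed: the process must be started as root');
}
// From here on the kernel checks file access against UID/GID 1001.
echo file_get_contents('/home/someuser/private/report.txt');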
"That's fine," you retort, "this is only for internal use with trusted users, so I'm not worried about security."
NO. BAD. [bops you with a rolled-up newspaper]
Never. Trust. Users.
This view is naive at best. Sure, at best they will never intentionally break or compromise your app, but:
1) The universe is constantly manufacturing new and innovative forms of idiot.
2) Smart, well-meaning idiots like to find "workarounds" so that they don't have to bother you.
3) Compromised client machines are a threat.
4) Assuming that your internal network is not compromised, nor ever will be, is a mistake. [see #3]
5) Security auditors will crucify you.
...and the list goes on.
TL;DR: Unless you're setting up per-user vhosts/sites/apps, store the files outside of the docroot and use option #2 to gate access via PHP. If anyone catches you running PHP as root, you're going to have a bad time.
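A minimal sketch of option #2 as recommended here: the files live outside the docroot, and a PHP gate checks the logged-in user before streaming anything. Storage layout and the session field are assumptions:

<?php
// download.php -- the only path to user files goes through this check.
session_start();
if (!isset($_SESSION['user_id'])) {
    header('HTTP/1.1 403 Forbidden');
    exit;
}
$base = '/srv/userfiles/' . (int) $_SESSION['user_id'];      // outside the docroot
$file = basename(isset($_GET['file']) ? $_GET['file'] : ''); // strips any ../ tricks
$path = $base . '/' . $file;
if ($file === '' || !is_file($path)) {
    header('HTTP/1.1 404 Not Found');
    exit;
}
header('Content-Type: application/octet-stream');
readfile($path);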

Best way to run php scripts on website? [closed]

I wanted to check what would be the most appropriate way to run a PHP script on a website that does several updates and makes dynamic changes to the website.
Should these be run by putting the PHP files in the same FTP directory as the rest of the website and accessing them as webpages? If so, how can I control it so that only the web admins can access these links or PHP scripts?
Thank you!
You might use .htaccess protection on a folder containing your admin scripts.
.htaccess
AuthName "Restricted Area"
AuthType Basic
AuthUserFile /var/www/mysite/.htpasswd
AuthGroupFile /dev/null
<Files my-protected-file.php>
require valid-user
</Files>
.htpasswd (user:john, pw:john):
john:cH/Bl.u9Yl2x.
If the files you are protecting live in an FTP folder, then move the .htaccess/.htpasswd files one level up and adjust the paths, OR set correct permissions to disallow reading (see comment).
/var/www/mysite/ftp (contains your admin scripts and has ftp access)
/var/www/mysite (has no ftp access, so add your protection here)
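For reference, the hash in the sample .htpasswd above is the old DES crypt format; a line like it can be generated from PHP (the two-character salt here is arbitrary):

<?php
// Generate an .htpasswd line (DES crypt, as used above).
$user = 'john';
$pass = 'john';
echo $user . ':' . crypt($pass, 'cH') . "\n"; // a two-character salt selects DES crypt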
Put your PHP files in a non-www directory.
For example, do not put PHP source files in the public_html directory; that way they can never be accessed directly by a browser.
You gave us little detail, but I'll try my best to answer your question from as many standpoints as I can think of.
Should these be run by putting the php files in the same FTP directory as the rest of the website and accessing them as webpages? If so, how could I control it so that only the web admins can access these links or php scripts?
If I were you, I'd add my own account management system in PHP that does not use .htpasswd, and I'd protect it with HTTPS so that passwords cannot be sniffed with packet analyzers. That is significantly stronger protection than plain-HTTP basic auth via .htpasswd, where the password crosses the wire essentially in cleartext on every request.
That way, in order to execute the updates, I'd have to click a button or something similar after logging in. This allows you to easily extend your admin panel in future and makes it human-friendly.
Doing some work just by visiting a URL seems very fragile. It's very useful when it's supposed to be part of an API, so that it's bot-friendly (see also: REST API), but in this case it probably shouldn't be exposed to everybody. If it's going to be used by bots, but should be available only to "good" bots, you might want to read about REST authentication methods (most of them rely on accounts of some form anyway).
Finally, if you want to run jobs in automatic fashion, research cron tasks. Then again, nothing prevents you from creating both admin panel and cron jobs that execute the same code.
If you don't put it in your public WWW folder, you're making it accessible only to those who have SSH access (not to be confused with FTP access). These people can execute the script with the php script.php command. I guess that's not what you're looking for, since you never mentioned SSH access. Additionally, making your administrators connect to your server via SSH to do some tasks may or may not be a good idea, depending on the nature of your script and application. Other than via SSH, there is no way to execute a script outside the WWW folder, except for cron jobs.
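If the maintenance scripts do have to live somewhere the webserver could reach, a cheap extra safeguard is to refuse to run them under a web SAPI at all. A sketch:

<?php
// Top of a maintenance script: only allow execution from the
// command line (cron, SSH), never through the webserver.
if (php_sapi_name() !== 'cli') {
    header('HTTP/1.1 403 Forbidden');
    exit('Run this from the command line.');
}
// ... actual update work goes here ...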
Are you familiar with the security implications of what you are designing? I assume that you have some concerns since you mentioned authenticating admins.
Survive the Deep End
OWASP Home Page
Personally, I would not make the scripts you are describing accessible via the web interface. Furthermore, I would not use FTP. I have used cron jobs + wget/curl to call PHP scripts in the past, and the only benefit that I can see you getting from doing that would be a consistent language and consistent environment definition. If there is nothing special in your environment that these admin scripts would need and they were to be run on a schedule, then you could just as easily invoke those scripts on the server from cron via the command line (don't expose the scripts via the web interface).
Cron works best if the maintenance scripts are run on a schedule, but never need to be run manually. Do you ever use SSH? Did you know that you can execute commands on the remote server in a single command executed on your local system? It works quite well and would address your concerns about authenticating admins on the server: SSH is already a strong authentication framework when configured properly (not hard at all).
$ ssh username@server.domain.com "php /path/to/scripts/task1.php"
password:
The credentials (i.e. the username/password requested by SSH) are those that you defined on the system itself (unless you are using Kerberos, LDAP, or similar credential management infrastructure).
You can also install/update your scripts on the server without using FTP; and yes, I would keep them away from the "normal" webserver script files. Perhaps you would find it easiest to create a user account on the server - let's say you create the username maintenance - set up key authentication, and authorize admins to use the account by adding their public keys to the /home/maintenance/.ssh/authorized_keys file. This way you limit the potential damage in case of accident, angry admin, crazy girlfriend, aliens... it doesn't matter, because you only give the maintenance user the permissions and access required to do its job: it can only write to the areas that are safe for those processes to write, and its read/execute permissions are limited to the areas it actually needs. Jails or chroot are wonderful, but probably a bit too much to worry about at this point.
BTW, your FTP and web server should be running as users (not root) with limited access as well. Hopefully you are already familiar with the concept I am trying to describe.
Imagine these accounts.
admin1
Description: some webdude I decided to trust
- id_rsa
- id_rsa.pub
admin2
Description: webdude's friend; I'm skeptical
- id_rsa
- id_rsa.pub
adminN
Description: the Nth admin that I let manage the server
- id_rsa
- id_rsa.pub
youruser
- id_rsa
- id_rsa.pub
server
User: httpd / www-data
Description: the user account under which the webserver runs
User: ftpd
Description: the user account under which the FTP server runs
User: root
Description: default account; ssh login disabled for root (learn how to use SUDO)
User: admin
Description: The first user account created when you set up the server; this might be the account that you log in as remotely unless you created an account to match your username or have centralized credential management. DO NOT GIVE YOUR ADMINS ACCESS TO THIS ACCOUNT.
User: maintenance
Description: Newly created shared account that admins will be allowed to use to execute the server maintenance/update scripts as they see fit. Alternatively, you could create each admin their own account based on a template of limited privileges similar to this one; that becomes a burden if there are a lot of admins or high turnover. The main drawback of a shared account is that it becomes a little more difficult to determine "who" is logging in, because the account is shared - possible, but not as easy as just looking at the username, obviously.
Workflow: when you add/authorize a new admin to your team, you ask him/her to create an SSH key pair (with a non-empty passphrase) and send you the public key. By default that file will be called "id_rsa.pub", and it is perfectly fine if anybody in the world sees its contents - it is the "public" half of the key pair. They should keep the counterpart (similarly named "id_rsa") private, in a directory that does not allow other users to read it (this is the default for the ~/.ssh directory); if they suspect their private key has been compromised, they simply create a new pair and throw away the old one.
When you receive their public key, you add it to the server's maintenance account authorized_keys file like this: from your local system, copy the "id_rsa.pub" key they created and shared with you up to the server.
# just to set up the example (not required once you understand)
$ ssh maintenance@server.domain.com "ls ~/.ssh/"
id_rsa  id_rsa.pub  admin1_id_rsa.pub
known_hosts  authorized_keys2  ssh_config
# That's an example of the files you might see if you, admin1, and the
# maintenance account itself were already set up. The public key files are
# just for record keeping and not actually required once their contents are
# added to the authorized_keys file. If you don't keep the admins' public
# key files, then the following two commands could actually be done in a
# single step, but I've shown them as two here for clarity.
#
# Copy the new guy's public keyfile up to the server (admin2)
$ scp id_rsa.pub maintenance@server.example.com:~/.ssh/admin2_id_rsa.pub
$ ssh maintenance@server.example.com "cat ~/.ssh/admin2_id_rsa.pub >> ~/.ssh/authorized_keys2"
That's it - you just added a new admin to the server. If you want to restrict them from logging into an interactive shell while still allowing them to execute the PHP scripts remotely via SSH, you can configure the account to do that (e.g. with a forced command in authorized_keys).
Did that make sense? Let me know if not and I'll try and clarify for you.

PHP: Securing database connection credentials

Just to make sure everyone is on the same page these are the credentials I'm talking about...
$user = 'user';      // not the actual user, not root either
$pass = 'pass';      // not the actual password
$server = 'localhost';
$dbname = 'dbname';  // the schema to select
$database = mysqli_connect($server, $user, $pass, $dbname); // the 4th argument is the database name, not a boolean
So I'm talking about the passwords used to connect to the database, not the passwords in the database (which for clarification I have hashed with salt and pepper).
I have not read anything that even remotely suggests you can have 100% foolproof security here, since obviously the server needs to connect to the database and fetch content for visitors 24/7; if I am mistaken, I would love to hear how that would be possible.
So let's presume a hacker has root access (or, if that does not imply access to the PHP code, let's say they have access to all the PHP source) and they desire to access/modify the databases. If we cannot stop someone who has the PHP source, then we want to slow them down as much as possible. I can keep each site's database connection password in a separate file per site (can, as in I'm a few weeks from finishing multi-domain support), and not inside public_html (obviously). I use serialize and unserialize to store certain variables, to ensure a level of fault tolerance for when the database becomes unavailable on shared hosting (preventing site A from looking and acting like site B and vice versa), as the database can become unavailable numerous times a day (my database error logs are written when the SQL service becomes available again and catch these "away" errors). One thought that has crossed my mind is storing the passwords encrypted in one place and decrypting them in PHP just before connecting to the database, though I'd like some opinions about this as well, please.
If someone has a suggestion from the database perspective (e.g. restricting users to SELECT, INSERT, DELETE, UPDATE, etc., and not allowing DROP or TRUNCATE, as examples), my primary concern is staying SQL-neutral, as I plan to eventually migrate from MySQL to PostgreSQL (this may or may not be relevant, but better to mention it). I currently use phpMyAdmin and cPanel, and phpMyAdmin shows that the connected user is not the same as the sites' database user names; so in that regard I can still use certain commands (DROP and TRUNCATE, as examples again) with that user while restricting the site users' permissions, unless I am mistaken for some reason?
Is there a way to configure the context in which the connection credentials are accepted? For clarification, a hacker with access to the source code would not be accessing the site the same way legitimate users would.
Another idea that crossed my mind is system-based encryption: is there a near-universal (as in, on every or almost every LAMP web host) technique where the system reads/writes the file through Apache, introducing a new layer that a hacker would have to circumvent?
I am using different passwords for each user of course.
I currently am on shared hosting though hopefully my setup will scale upwards to dedicated hosting eventually.
So what are the thoughts on my security concepts and what other concepts could I try out to make my database connection credentials more secure?
Clarification: I am looking for ideas that I can pursue. If there is disagreement with any of the suggestions please ask for clarification and explain your concern in place of debating a given approach as I may or may not have even considered let alone begun to pursue a given concept. Thanks!
There is little to be gained from trying to slow down an intruder that already has root access to your system. Even if you manage to hide the credentials well enough to discourage them, they already have access to your system and can wreak havoc in a million ways including modifying the code to do whatever they wish.
Your best bet is to focus on preventing the baddies from ever penetrating your outer defenses, worry about the rest only after you've made sure you did everything you can to keep them at the gates.
Having said that, restricting database user accounts to only a certain subset of privileges is definitely not a bad thing to do if your architecture allows it.
As code_burgar says, once your box gives up root, it's too late. That being said, I had to implement additional security measures on a project I was involved with a while back. The solution was to store config files on an encrypted partition, so that people with direct access to the machine couldn't pull the passwords off by connecting the drive to another PC. This was, of course, in addition to file system permissions, so people couldn't read the file from inside the OS itself.
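An application-layer variant of the same idea, sketched below: keep the credentials encrypted on disk and decrypt them at runtime with a key stored outside the web root. The paths, file format, and key handling are assumptions; this raises the bar, but it does not stop an attacker with root:

<?php
// Sketch: credentials sit in /var/app/config.enc (IV + ciphertext);
// the key file lives outside the docroot with tight permissions.
$key        = trim(file_get_contents('/etc/app-keys/config.key'));
$blob       = file_get_contents('/var/app/config.enc');
$iv         = substr($blob, 0, 16);     // first 16 bytes: AES IV
$ciphertext = substr($blob, 16);
$json       = openssl_decrypt($ciphertext, 'aes-256-cbc', $key, OPENSSL_RAW_DATA, $iv);
$config     = json_decode($json, true); // array('server' => ..., 'user' => ..., 'pass' => ...)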
Another detail worth bringing up, if you are really paranoid about security:
$user = 'user';      // not the actual user, not root either
$pass = 'pass';      // not the actual password
$server = 'localhost';
$database = mysql_connect($server, $user, $pass); // old mysql_* API; mysqli_connect() works the same way here
unset($user, $pass, $server); // flush from memory
You can unset the critical variables after use, making it less likely that they turn up in a stray var_dump() or error dump. Note that unset() does not wipe the value from memory; it only drops the reference, so this is a tidiness measure rather than a guarantee.
Good luck, hope that helps.
You want to approach security in layers. Yes, if an attacker has root access, you're in a very bad place - but that doesn't mean you shouldn't protect yourself against lower levels of penetration. Most of these recommendations may be hard to do on shared hosting...
Assuming you're using a decent hosting provider, and recent versions of LAMP, the effort required to gain root access is substantial - unless you're a very lucrative target, it's not your biggest worry.
I'll assume you harden your server and infrastructure appropriately, and check they're configured correctly. You also need to switch off services you don't need - e.g. if you have an FTP server running, an attacker who can brute force a password doesn't need root to get in.
The first thing you should probably do is make sure that the application code has no vulnerabilities, and that you have a strong password policy. Most "hacks" are not the result of evil geniuses worrying away at your server for months until they have "root" - they are the result of silly mistakes (e.g. SQL injection), or weak password ("admin/admin" anyone?).
Next, you want to make sure that if your webserver is compromised - but not at "root" level - you can prevent the attacker from executing arbitrary SQL scripts. This means restricting the permissions of your web server to "read and execute" if at all possible so they can't upload new PHP files. It also means removing things like CPanel and phpMyAdmin - an attacker who can compromise your production server could compromise those apps, and steal passwords from you (run them on a different server if you need them).
It's definitely worth looking at the way your database permissions are set up, though this can be hard and may not yield much additional security. At the very least, create a "web user" for each client, and grant that user only SELECT, INSERT, UPDATE and DELETE on their own database.
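As a sketch of that last point, a one-off provisioning script run with the privileged admin account could create such a restricted user (names and passwords here are placeholders):

<?php
// Run once as the privileged admin account, never from the website code.
$admin = mysqli_connect('localhost', 'admin_user', 'admin_pass');
if (!$admin) {
    die('connect failed: ' . mysqli_connect_error());
}
// The web user gets row-level work only: no DROP, TRUNCATE, GRANT or FILE.
mysqli_query($admin, "CREATE USER 'client1_web'@'localhost' IDENTIFIED BY 'generated-password'");
mysqli_query($admin, "GRANT SELECT, INSERT, UPDATE, DELETE ON client1_db.* TO 'client1_web'@'localhost'");
mysqli_query($admin, "FLUSH PRIVILEGES");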
I have found a solution for PHP on Linux. Outside the web root, create a directory, say db, and in it define a class, say DBConnection.php, holding all the database connection variables and access methods. Your website is example.com and your public files live in the public_html directory; any PHP file under that directory that needs to connect and do database operations includes DBConnection.php with the following statement:
require('../db/DBConnection.php');
This file cannot be accessed via 'www.example.com/db/DBConnection.php'.
You can try this on your own website.
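A minimal sketch of what that DBConnection.php might contain (names and values are placeholders):

<?php
// db/DBConnection.php -- lives outside public_html, so it can be
// include()d by scripts but never fetched by a browser.
class DBConnection
{
    private $server = 'localhost';
    private $user   = 'site_user'; // not root
    private $pass   = 'site_pass';
    private $name   = 'site_db';

    // Open and return a mysqli link, or stop on failure.
    public function open()
    {
        $link = mysqli_connect($this->server, $this->user, $this->pass, $this->name);
        if (!$link) {
            die('connection failed: ' . mysqli_connect_error());
        }
        return $link;
    }
}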

security precautions to take when running multiple sub-sites on the same main site

I have shared hosting, and within my own user space I run three different .com domains. One serves as the hosting plan's actual master domain, and the others are subs via URL redirects and domain pointing.
One of those subs is a Wordpress blog, and I'm concerned about the ability of an attacker to use security holes in Wordpress to access the other sites under my virtual umbrella. If the blog itself gets trashed, I'm not going to lose any sleep over it. But if the other sites get nailed I'll be a pretty sad panda.
What sort of server permissions and such can I use to isolate that blog? It's entirely contained within its own sub-directory.
More details can be provided if needed, I'm new at this and may have left out some key info.
Thank you.
This is a valid concern. If not properly separated a vulnerability in one site will affect all of them.
1) The first thing you need to do is use suPHP, which forces an application to run with the rights of a specific user. This user account should not have shell access (/bin/false).
2) All three application directories need to be chown -R user /home/user/www/ and chmod -R 500 /home/user/www/. The two zeros at the end of the chmod mean that no other accounts have any access to the files; the 5 grants only read and execute rights, which is ideal, as write privileges are then disallowed for the entire web root.
3) All three applications must have a separate MySQL database and separate MySQL user accounts, where each user account only has access to its own database. These accounts should not have GRANT or FILE privileges; FILE is by far the most dangerous privilege you can give to a MySQL user account, because it can be used to upload backdoors and read files. This protects against SQL injection in one site allowing the attacker to read data from all sites.
After these three steps are taken, if one site were to be hacked, the other two will be untouched. You should also run a vulnerability scanner such as Sitewatch (commercial, but there is a free version) or Skipfish (open source). After scanning the application, run PhpSecInfo and modify your php.ini to remove as much red and yellow as possible. Keep in mind that modifying your php.ini can fool vulnerability scanners while the underlying flaw still exists, so make sure you also patch your code and keep everything up to date.
It depends on the hole the attacker finds. If you use the same user/pass for all your databases, including the WP db, then that might be a problem. Of course, file permissions are an issue too... anything the web server can access can be read, and often written to.
There are a lot of security issues at play here, but if you use ftps, ssh and update WP when they release a security fix, then you lower your chances of problems. The most secure computer is encased in cement and sunk in the Mariana Trench. But it isn't very useful. You are looking for a balance.
Chmod files to 755, and learn your chmod settings to stay on top of the classic problem: a hacker getting a shell via a URI query string due to a bug in Perl, PHP, or whatever scripting language is used. Stay aware of specific bugs in the language implementation you run, choose the most secure available version rather than the most performant one, and read insecure.org. A security-minded host (secureserver.net, say, should live up to its name) helps too; I run all my experimental and development stuff this way with no security issues reported so far.

Administrator account: Where, when and how?

Where, when and how to create the administrator account/user for a private website?
So what I am asking is: what's the preferred technique for creating that first administrator account/user? In my case it's for a private web application. I am talking about the account/user that will own the application and will, if needed, create/promote the other administrators. I guess you could call this the root user?
Here are a few ways I have encountered in other websites/web applications.
Installation wizard:
You see this a lot in blog software or forums: when you install the application, it asks you to create an administrator user. A private web application will most likely not have this.
Installation file:
A file you run to install your application. This file will create the administrator account for you.
Configuration files:
A configuration file that holds the credentials for the administrator account.
Manually insert it into a database:
Manually insert the administrator info into the database.
When:
In a bootstrapping phase. Someone has suggested seeds.rb; I personally prefer the bootstrapper gem (with some additions that allow me to parse CSV files).
This approach lets you create a rake task which can be invoked like this:
rake db:bootstrap
This will create the initial admin user, as well as any seeding data (such as the list of countries, or a default blog format, etc). The script is very flexible. You can make it ask for a password, or accept a password parameter, if you feel like it.
How:
In all cases I use declarative_authorization in order to manage user permissions.
Your admin user must return a role called 'admin' (or whatever name you choose) in the list of roles attached to it. I usually have a single role per user, mainly because I can use role inheritance (e.g. admins are also editors by default). This means that in my database there's a single field for users called "role_id"; 0 is usually the admin role, since it is the first one created.
Where:
A specific file, db/bootstrap/users.rb (or YAML, or CSV), specifies the details of a user with the admin role activated. The rake db:bootstrap task parses that file and creates the user accordingly.
I see you tagged Ruby on Rails here. In RoR you would probably use the seeds.rb file under /your_app/db.
If you are using ASP.NET, I might assume you are using MSSQL or maybe Oracle; a stored proc that runs as an install script might do the job.
I have seen PHP apps use an install.php file that, when run once, inserts the necessary data into the database and then tells the installer to delete the file before the app will run.
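A minimal sketch of that install.php pattern (table layout, credentials, and the password are placeholders; password_hash() needs PHP 5.5+):

<?php
// install.php -- run once, then delete it.
$db   = new PDO('mysql:host=localhost;dbname=app', 'app_user', 'app_pass');
$hash = password_hash('choose-a-real-password', PASSWORD_DEFAULT);
$stmt = $db->prepare('INSERT INTO users (username, password_hash, role) VALUES (?, ?, ?)');
$stmt->execute(array('admin', $hash, 'admin'));
echo "Admin account created. Now delete install.php!\n";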
So there are three ways to deal with it.
If you have user accounts on your website (and I see you have them), a config file with the administrator's credentials is very awkward: it forces you to duplicate a big part of the authentication logic. Better to keep the account in the database.
I understand you are preparing the application for yourself, not delivering it to customers, so preparing an installation wizard or installation files seems a waste of time.
I would do the simplest thing - just a raw insert. Pros: no extra work, and the same authentication mechanism as for other users. If you are using some kind of database migrations, you could create a migration that creates the root account with a dummy password you change later.
Installation wizard:
- definitely the best approach: clean, safe and user-friendly. It should be integrated with the application installer.
Installation file:
- OK, but only if there is one and only one script to run. Having more invites problems and potential security flaws (all the folks who forget to delete these files afterwards...).
Configuration files:
- to avoid. You are requiring the user to know PHP, the internals of your app, and maybe server-side configuration (anything above FTP can be "difficult").
Manually insert it into a database:
- to avoid, doubly so.
In addition, the last two solutions are impractical if you are using password hashing (e.g. md5 or sha1 with a site-specific salt), which is pretty much obligatory today: there is no ready-made hash to type in, so you would have to compute it yourself first.
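If you do go the raw-insert route anyway, the workaround is to generate the hash first and paste the output into your INSERT statement; for example, with PHP's built-in hashing (password_hash() needs PHP 5.5+):

<?php
// Prints a hash suitable for pasting into a manual INSERT statement.
echo password_hash('the-admin-password', PASSWORD_DEFAULT), "\n";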
