Backup & Security Considerations for a MySQL/PHP App in Production

Some specific questions beyond this:
1) What are your most critical security considerations? I feel like I've secured the data in transit with encryption/HTTPS, secured data at rest by encrypting sensitive data that I don't need access to, set up a firewall to restrict access to phpMyAdmin, changed root passwords, etc., but I'm not confident that I've 'checked all the boxes', so to speak. Is there a robust guide to securing MySQL/PHP applications out there somewhere? Perhaps a pen test is the only way to gain some degree of confidence?
2) What are your backup considerations? I've got master/slave replication set up for the MySQL database across two different datacenters, and weekly backups of the production server itself. The code is all in source control, but I have some uploaded documents that I'd lose if the whole thing crashed on day 6 after a backup. Any ideas on that one? I'm considering moving the document storage to a different server and backing it up nightly, or asynchronously saving each document to two separate servers on initial upload (see the sketch below). The docs aren't large and volume isn't high yet, but is that scalable?
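For illustration, a minimal sketch of the dual-write idea; both paths are hypothetical, and the replica path is assumed to be a mount of the second server (NFS, SSHFS, etc.):

$name    = basename($_FILES['doc']['name']);
$primary = '/var/www/docs/' . $name;
$replica = '/mnt/replica-docs/' . $name;

if (move_uploaded_file($_FILES['doc']['tmp_name'], $primary)) {
    if (!copy($primary, $replica)) {
        // Don't fail the upload; queue the copy for retry instead.
        error_log("replica write failed for $primary");
    }
}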

I feel like I've secured the data in transit with encryption/https
It's not really clear from this how far you have gone, so forgive me if you have already addressed the following.
Obviously HTTPS secures the transmission of data between the client and your web server, but it should not be confused with an encrypted connection between your application and the database. Admittedly the risk of data being intercepted here is much lower, but it depends on how sensitive the data is. Information on setting up encrypted database connections can be found here. As you are replicating between data centres, you should consider setting up replication to use encrypted connections as well.
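For the application side, a minimal mysqli-over-TLS sketch; the host, CA path and credentials are placeholders, and the MySQL server must already be configured with certificates:

$db = mysqli_init();
$db->ssl_set(null, null, '/etc/mysql/certs/ca-cert.pem', null, null);
$db->real_connect('db.example.com', 'appuser', 'secret', 'appdb', 3306, null, MYSQLI_CLIENT_SSL);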
Replication and backing up are not the same thing: once you have replicated the data from the master to the slave, you still need to back up the slave. You can use mysqldump, but the caveat is that mysqldump creates a logical backup, not a physical one, so restores are slow and it doesn't scale well. There's a good post here with solutions for making a physical backup instead.
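For reference, a typical logical dump taken on the slave might look like this (the output path is illustrative); --single-transaction gives a consistent InnoDB snapshot without locking tables:

mysqldump --single-transaction --routines --all-databases > /backups/all-$(date +%F).sql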
Apart from that, there are the usual security measures you should implement on any system:
Create and assign a separate user account for each process, with the minimum level of access permissions needed to perform its function.
Create an admin account with a non-generic, admin-type username.
Remove the root account and any others that are likely to get you pwned (admin etc.).
Only encrypt data if you need to be able to decrypt it again. Salt and hash sensitive data that is only needed for validation (passwords etc.), save the resulting value in the database, and validate user input against the hash (see the sketch below).
Depending upon your use case you may also consider:
In /etc/mysql/my.cnf, within the [mysqld] section, you can disallow connections from anywhere except the local machine by adding bind-address = 127.0.0.1.
Disable loading files into MySQL from the local filesystem for users without file-level privileges by adding local-infile=0.
Implementing stored procedures for common database tasks. These have several benefits (as well as some drawbacks) but are good from a security point of view, as you can grant user accounts permissions on stored procedures without granting any permissions on the tables they use.
Using database views to restrict access to some columns within a table while still allowing access to other columns within the same table. As with stored procedures, there are pros and cons to views. (A short SQL sketch of both follows this list.)
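An illustrative sketch of those last two points; all names are placeholders. The web account can execute the procedure and read the view, but holds no privileges on the underlying users table:

-- Procedure access without table access:
GRANT EXECUTE ON PROCEDURE appdb.register_user TO 'webuser'@'localhost';
-- A view exposing only the safe columns:
CREATE VIEW appdb.public_profiles AS SELECT username, created_at FROM appdb.users;
GRANT SELECT ON appdb.public_profiles TO 'webuser'@'localhost';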
There are probably a million other things which someone who understands MySQL much better than I do could reel off to you without even thinking about it, but that's all I've got.

Related

One user database, two applications at different servers

I currently run a Rails app on a single VPS with its own MySQL database. This database also contains the users table. We are currently building a second application which should share the same users: if someone registers for either, they have an account for both.
However, the problem is that this second (PHP) application must be hosted at another location. What are the best practices here? Can I directly connect to the external database without causing a big delay? Should I sync them? Create an API on the first?
I'm searching for the most maintainable method possible. Thank you.
You can allow the second host access to the first MySQL server. The best practice (as far as I'm aware) is to create a new user account, give it the required privileges on the users table, only allow access from the IP or domain of the second host, and use a secure password. You would run this query on the MySQL server:
GRANT ALL ON mydatabase.users TO 'mynewuser'@'123.123.123.123'
IDENTIFIED BY 'mysecurepassword';
Needless to say, you would replace mydatabase.users with your database and table name, mynewuser with the username you want to have access (you can put anything here; the user will be created automatically), 123.123.123.123 with the IP or domain name of your second server, and mysecurepassword with a good, long password, preferably randomly generated.
Your second server would now be able to connect to the MySQL server and have ALL privileges (you can change that to whatever privileges it needs) on the mydatabase.users table.
MySQL 5.6 GRANT Syntax
Greensql.com on MySQL remote access best practices
To minimize the small performance penalty, I would refrain from creating multiple MySQL connections (use as few as possible, preferably just one). I'm not 100% sure on the following, but I'm fairly confident it's a good idea to reduce the number of separate queries you execute as well. Instead of running 10 separate inserts, run one insert with multiple VALUES segments, e.g.
INSERT INTO mytable (id, name) VALUES
  (0, 'mia'),
  (1, 'tom'),
  (2, 'carl');
I'm sure MySQL prepared statements would also help considerably in reducing the speed penalty.
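A minimal prepared-statement sketch, assuming an existing $mysqli connection: the statement is parsed once and executed repeatedly, which also removes the injection risk of string-built queries.

$stmt = $mysqli->prepare('INSERT INTO mytable (id, name) VALUES (?, ?)');
$stmt->bind_param('is', $id, $name); // binds $id and $name by reference
foreach (array(array(0, 'mia'), array(1, 'tom'), array(2, 'carl')) as $row) {
    list($id, $name) = $row;
    $stmt->execute(); // re-uses the parsed statement each time
}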
You can connect to the database from the second application, but the delay for the connection will always be there.
Another thing to consider is security, as the database will be accessible to the outside world. You can restrict connections to only the second web server.
To reduce the delay, you have options like memcached or other caches, so that you don't have to hit the database again and again.
You can also set up MySQL replication, with the main database as master and another MySQL instance on the second application server as a slave.
The second server's MySQL syncs with the master, and all write queries (UPDATE, INSERT, DELETE, etc.) should go to the master. This should reduce the database load.
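A sketch of that read/write split; hosts and credentials are placeholders:

$read  = mysqli_connect('127.0.0.1', 'app', 'pass', 'appdb');          // local slave
$write = mysqli_connect('master.example.com', 'app', 'pass', 'appdb'); // remote master

mysqli_query($write, "INSERT INTO comments (body) VALUES ('hello')");  // writes go to the master
$result = mysqli_query($read, 'SELECT body FROM comments');            // reads stay local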

PHP: Securing database connection credentials

Just to make sure everyone is on the same page, these are the credentials I'm talking about...
$user = 'user';// not actual user, not root either
$pass = 'pass';// not actual password
$server = 'localhost';
$database = mysqli_connect($server, $user, $pass); // mysqli's optional fourth argument is the database name
So I'm talking about the passwords used to connect to the database, not the passwords in the database (which, for clarification, I have hashed with salt and pepper).
I have not read anything that remotely suggests you can have 100% foolproof security, since obviously the server needs to connect to the database and serve content to visitors 24/7; if I am mistaken, I would love to hear how that would be possible.
So let's presume a hacker has root access (or, if that does not imply access to the PHP code, let's say they have access to all the PHP source) and they want to access/modify the databases. If we cannot stop them once they have the source, we at least want to slow them down as much as possible. I can keep each site's database connection password in a separate file per site (I'm a few weeks from finishing multi-domain support), outside of public_html (obviously). I use serialize and unserialize to store certain variables, to give a level of fault tolerance for when the database becomes unavailable on shared hosting (preventing site A from looking and acting like site B and vice versa), as the database can become unavailable numerous times a day (my database error logs are written when the SQL service becomes available again and catch these "away" errors). One thought that has crossed my mind is storing the credentials in encrypted form and having PHP decrypt them at connect time, though I'd like some opinions on that as well, please.
If someone has a suggestion from the database perspective (e.g. restricting users to SELECT, INSERT, DELETE, UPDATE, etc., and not allowing DROP and TRUNCATE, as examples), my primary concern is staying SQL-neutral, as I plan to eventually migrate from MySQL to PostgreSQL (this may or may not be relevant, but better to mention it). I currently use phpMyAdmin and cPanel, and phpMyAdmin shows that the connected user is not the same as the site's database user, so in that regard I can still use certain commands (DROP and TRUNCATE again, as examples) with that user while restricting the site user's permissions, unless I am mistaken for some reason?
Is there a way to restrict the context in which the connection credentials are accepted? For clarification, a hacker with access to the source code would not be accessing the site the same way legitimate users would.
Another idea that crossed my mind is system-based encryption: is there a near-universal (as in, on every or almost every LAMP web host) technique where the system can read/write the file through Apache, introducing a new layer that a hacker would have to circumvent?
I am using different passwords for each user of course.
I am currently on shared hosting, though hopefully my setup will scale up to dedicated hosting eventually.
So what are the thoughts on my security concepts and what other concepts could I try out to make my database connection credentials more secure?
Clarification: I am looking for ideas that I can pursue. If there is disagreement with any of the suggestions please ask for clarification and explain your concern in place of debating a given approach as I may or may not have even considered let alone begun to pursue a given concept. Thanks!
There is little to be gained from trying to slow down an intruder who already has root access to your system. Even if you manage to hide the credentials well enough to discourage them, they already have access to your system and can wreak havoc in a million ways, including modifying the code to do whatever they wish.
Your best bet is to focus on preventing the baddies from ever penetrating your outer defenses, worry about the rest only after you've made sure you did everything you can to keep them at the gates.
Having said that, restricting database user accounts to only a certain subset of privileges is definitely not a bad thing to do if your architecture allows it.
As code_burgar says, once your box gives up root, it's too late. That being said, I have had to implement additional security measures on a project I was involved with a while back. The solution was to store config files on an encrypted partition, so that people with direct access to the machine couldn't pull the passwords off by connecting the drive to another PC. Of course this was in addition to file-system permissions, so people can't read the file from inside the OS itself.
Another detail worth bringing up, if you are really paranoid about security:
$user = 'user';        // not actual user, not root either
$pass = 'pass';        // not actual password
$server = 'localhost';
$database = mysql_connect($server, $user, $pass);
unset($user, $pass, $server); // Flush from memory.
You can unset the critical variables after use, ensuring they cannot be var_dumped or retrieved from memory.
Good luck, hope that helps.
You want to approach security in layers. Yes, if an attacker has root access, you're in a very bad place - but that doesn't mean you shouldn't protect yourself against lower levels of penetration. Most of these recommendations may be hard to do on shared hosting...
Assuming you're using a decent hosting provider, and recent versions of LAMP, the effort required to gain root access is substantial - unless you're a very lucrative target, it's not your biggest worry.
I'll assume you harden your server and infrastructure appropriately, and check they're configured correctly. You also need to switch off services you don't need - e.g. if you have an FTP server running, an attacker who can brute force a password doesn't need root to get in.
The first thing you should probably do is make sure that the application code has no vulnerabilities, and that you have a strong password policy. Most "hacks" are not the result of evil geniuses worrying away at your server for months until they have "root"; they are the result of silly mistakes (e.g. SQL injection) or weak passwords ("admin/admin", anyone?).
Next, you want to make sure that if your webserver is compromised - but not at "root" level - you can prevent the attacker from executing arbitrary SQL scripts. This means restricting the permissions of your web server to "read and execute" if at all possible so they can't upload new PHP files. It also means removing things like CPanel and phpMyAdmin - an attacker who can compromise your production server could compromise those apps, and steal passwords from you (run them on a different server if you need them).
It's definitely worth looking at the way your database permissions are set up, though this can be hard and may not yield much additional security. At the very least, create a "web user" for each client and grant that user only "insert, update and delete" on their own database; a sketch follows.
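An illustrative per-client grant; all names are placeholders, and note that most applications will also need SELECT in practice:

GRANT SELECT, INSERT, UPDATE, DELETE ON client1_db.* TO 'client1_web'@'localhost' IDENTIFIED BY 'long-random-password';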
I have found a solution for PHP (Linux). Above the web root, create a directory, say db, and in it a class, say DBConnection.php, that defines all the database connection variables and access methods. Now, if your website is example.com and your files live in the public_html directory, create a PHP file under that directory to connect and do all database operations, and include DBConnection.php using the following statement:
require('../db/DBConnection.php');
This file cannot be accessed via 'www.example.com/db/DBConnection.php'.
You can try this on your web site.
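For concreteness, a bare-bones sketch of what db/DBConnection.php might contain; the credentials and names are placeholders:

<?php
// Lives outside public_html, so it is never served directly by Apache.
class DBConnection
{
    private $host = 'localhost';
    private $user = 'appuser'; // placeholder
    private $pass = 'secret';  // placeholder
    private $name = 'appdb';   // placeholder

    public function connect()
    {
        return new mysqli($this->host, $this->user, $this->pass, $this->name);
    }
}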

Securely storing data

I understand the concepts of securely storing data for the most part, including storing the data on a separate server that only allows connections from the application, key-pairs for encryption, etc. However, I'm still not understanding how separating the server makes it that much more secure.
For instance, suppose I have a web server, which is hardened and secure, and it captures data from user input for storage. The data is encrypted and submitted via a DB query or web service to the DB server. The DB server only allows connections from the web server and stores the data in encrypted form. Therefore, if someone accesses the DB, the data is worthless.
But if someone accesses the web server, they will have access to the DB as well as the encryption algorithm and keys, no? That being the case, why even have the data on a different server, since the transfer of the data is just another potential point of attack?
Is there some way to hide the connection information and encryption algorithms on the web server so that, if it is compromised, access to the DB server is not gained? Obfuscation isn't enough, I wouldn't think. Any ideas are welcome.
Thanks
Brian
There's a certain amount of magical thinking and folklore in the way people design for security, and you're right: storing data on a different server on its own doesn't necessarily make things more secure unless you've done all sorts of other things too.
Managing keys is a huge part of this; doing it in the context of web applications is a subject in itself, and I'm not aware of any robust solutions for PHP. You're quite right: if your web application needs to be able to decrypt something, it needs access to the keys, and if the web app is compromised, the attacker also has access to the key.
This is why I've tended to use public-key cryptography and treated the public-facing web server as "write only": the web server encrypts using the public key, stores the result in the database, and can never decrypt it; only a separate process (not available on the public internet) can use the private key to decrypt it. This way, you can store credit card details in your database, and only the application which charges the card has the private key to decrypt them; this app runs in a secure environment, not accessible from the internet.
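A minimal sketch of that write-only pattern using PHP's libsodium extension (PHP 7.2+); $cardNumber stands in for whatever secret you store, and key generation and decryption happen only on the non-public machine:

// Generated once, offline; only the public half is copied to the web server.
$keypair = sodium_crypto_box_keypair();
$public  = sodium_crypto_box_publickey($keypair);

// On the public web server: encrypt and store only; cannot be reversed here.
$ciphertext = sodium_crypto_box_seal($cardNumber, $public);

// On the secure, non-public machine that holds the full $keypair:
$plaintext = sodium_crypto_box_seal_open($ciphertext, $keypair);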
Secondly, there are multiple levels of compromise: for instance, an attacker might get read-only access to your server's file system. If that file system includes the database, they could grab the data files, restore them to a server they control, and use the decryption key to steal your private data. If the database runs on a separate server (inaccessible from the internet), this attack route becomes impossible.
The fact that one route of attack leaves you open doesn't mean you can't protect against other attacks.
In most of my setups, the web server is in a DMZ of the firewall, and the DB is behind the firewall. I would never want to put the DB server outside the firewall. That extra level of security makes it much harder for someone to get to the data without authorization.
BTW, no web server on the net should be considered "hardened and secure". If it's available to the public, it can be hacked. It's just a matter of how hard they want to try.
You're right in your assumption that if someone hacks the webserver to the point they can log in as an admin, they can read and write the database. But that doesn't mean you should further weaken your setup by putting the DB on the web server. You want more security, not less.
EDIT:
Always think in terms of layers in your security. Separate critical parts into separate layers. This does two things: it gives the perp more problems to solve, and it gives you more time for detection and response.
So, in your scenario, access to the web server is one layer; you could then call an encryption server for a second layer (behind the firewall, which is another layer), and the encryption server could be the only machine allowed to interact with the DB server, which is another layer.
Layers make it more secure. They also, though, add burden, slowing the response time. So keep your solution balanced for your real-world requirements.
The problem here is that the keys are on the publicly-facing server which could be compromised - even if the server itself is "hardened", there may be a vulnerability in your app which gives an attacker access to keys or data.
To improve the security of your arrangement, you could move just the code that handles encrypted data (along with the keys) onto a secure machine that can be accessed only by the web server, and only through a very restricted API (i.e. the bare minimum needed). Each operation is logged in order to spot unusual behaviour, which could be symptomatic of an attempt to extract the secret data.
From a security perspective, putting the database into a separate server doesn't really help. If authentication tokens get compromised, it is game over.
However, it does make sense to separate the database and data access layer (DAL) from the business logic and presentation. That way, if the application server falls prey to unscrupulous hands, database access is restricted to specific DAL operations, which can go a long way towards putting data out of harm's way if properly implemented.
Other than that, there isn't much of a security benefit in segregating data storage into a separate server.

In what scenarios is it better to use tables for user sessions rather than native sessions?

That's about all that I need to ask. I am dealing with a site right now, and I can't see a really significant difference between storing my sessions in a database table and not doing so.
There are a couple of reasons why I sometimes store session data in a DB. Here are the two biggest:
Security concerns on a shared server: If you're running on a shared server, chances are it's easy for other users of the server to meddle their way into your temp directory and access the session data you have stored there. This isn't too common, but it can happen.
Using multiple servers: If you're scaling up to more than one server, it's best to store the session data in a database. That way the data is easily available throughout your entire server stack (or farm, depending on how big you're going). This is also attainable through a flat-file system, but a database is usually the more elegant, easier solution.
The only argument I can think of against a database is simply the number of queries you'll be running: for each page load there's an extra query to fetch the session data. However, one small extra query shouldn't make that much difference, and the two points above outweigh this small cost.
Hope that helped a bit.
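For reference, a minimal sketch of a database-backed handler; it assumes a sessions table with columns id VARCHAR(128) PRIMARY KEY, data BLOB, ts TIMESTAMP, an existing PDO connection in $pdo, and omits error handling:

class DbSessionHandler implements SessionHandlerInterface
{
    private $pdo;
    public function __construct(PDO $pdo) { $this->pdo = $pdo; }
    public function open($path, $name) { return true; }
    public function close() { return true; }
    public function read($id)
    {
        $stmt = $this->pdo->prepare('SELECT data FROM sessions WHERE id = ?');
        $stmt->execute(array($id));
        return (string) $stmt->fetchColumn(); // "" when no row exists yet
    }
    public function write($id, $data)
    {
        $stmt = $this->pdo->prepare('REPLACE INTO sessions (id, data, ts) VALUES (?, ?, NOW())');
        return $stmt->execute(array($id, $data));
    }
    public function destroy($id)
    {
        return $this->pdo->prepare('DELETE FROM sessions WHERE id = ?')->execute(array($id));
    }
    public function gc($max_lifetime)
    {
        return $this->pdo->prepare('DELETE FROM sessions WHERE ts < NOW() - INTERVAL ? SECOND')->execute(array($max_lifetime));
    }
}

session_set_save_handler(new DbSessionHandler($pdo), true);
session_start();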
On a shared host, where you have no control over who can access the directory where session files are stored. In this case, storing sessions in the DB can offer better security.
And one scenario with which I have no experience myself, but which I believe is realistic:
On a load-balanced server farm where subsequent requests from one user can be dispatched to multiple servers. In this case you could choose to have one central DB server. Without such a centralized session repository, users would lose their session data whenever they switched servers between requests.
There is a huge difference when you are using several servers with a load-balancing mechanism that doesn't guarantee that a given user will always be sent to the same server:
With file-based sessions, if the user is load-balanced to a server other than the one which served the previous page, the file containing their session will not be found (as it's on another server), and they will not have their session data.
With database-based or memcached-based sessions, the session data is available from whichever server handles the request, which is quite nice in this kind of situation.
There's also a difference when you are using shared hosting: with file-based sessions, if those are placed in the "temporary" directory of the server (like /tmp), anyone might be able to read your session files, depending on the configuration of the server. With DB-based sessions this problem doesn't exist, as each customer has a different DB and DB user.
In addition to the above posts:
Database sessions (when the session table uses the MEMORY engine) are faster.
When using file-based sessions, the session file is locked until the script ends, so a user cannot have two scripts working on the server at the same time. This matters, for example, when you write a download server: the user downloads a file, the script streams it to them with the session file still locked, and the user cannot browse the contents of the file archive at the same time.
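If that download-server case bites you, releasing the lock early helps; a minimal sketch (the $_SESSION key is hypothetical):

session_start();
$file = $_SESSION['requested_file']; // read what you need from the session first
session_write_close();               // releases the session file lock
readfile($file);                     // other requests from this user now proceed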

Securing DB and session-data on a PHP shared host

I wrote a PHP web application using SQLite, with sessions stored on the filesystem.
This is functionally fine and attractively low-maintenance. But now it needs to run on a shared host.
All web-applications on the shared host run as the same user, so my users' session data is vulnerable, as is the database, code, etc.
Many recommend storing sessions in a DBMS such as MySQL in this situation. So at first I thought I would just do that, and move the SQLite data into MySQL too. But then I realized that the MySQL credentials need to be readable by the web application user, so I'm back to square one.
I think the best solution is to run PHP as a CGI so it runs as a different user for each web application. This sounds great, but my host does not do this; it uses mod_php. Are there any drawbacks from an admin's point of view to enabling this (performance, backward compatibility, etc.)? If not, I will ask them to enable it.
Otherwise, is there anything I can do to secure my database and session data in this situation?
As long as your code is running as the shared web user, anything stored on the server is going to be vulnerable. Any other user could write a PHP script to examine any readable file on the server, including your data and PHP code.
If your hosting provider will allow it, running PHP as a CGI under a different user will help, but I expect there will be a significant performance hit, as each request will require a new process to be created. (You could look at FastCGI as a better-performing alternative.)
The other approach would be to set a cookie based on something the user provides, and use that to encrypt session data. For instance, when the user logs in, take a hash of their username, password (as just supplied by them) and the current time; encrypt the session data with that hash; and set a cookie containing the hash. On the next request you'll get the cookie back, which you can then use to decrypt the session data. Note, however, that this only protects the current session data; your user table, other data, and code will still be vulnerable.
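A rough sketch of that scheme using libsodium; the cookie name and variables are illustrative, and error handling is omitted:

// At login: derive a 32-byte key from values only the user supplies.
$key = hash('sha256', $username . $password . time(), true);
setcookie('sess_key', base64_encode($key), 0, '/', '', true, true);

// Encrypt the session payload before persisting it on the shared host:
$nonce  = random_bytes(SODIUM_CRYPTO_SECRETBOX_NONCEBYTES);
$stored = $nonce . sodium_crypto_secretbox(serialize($sessionData), $nonce, $key);

// On a later request, rebuild the key from the cookie and decrypt:
$key   = base64_decode($_COOKIE['sess_key']);
$nonce = substr($stored, 0, SODIUM_CRYPTO_SECRETBOX_NONCEBYTES);
$sessionData = unserialize(sodium_crypto_secretbox_open(
    substr($stored, SODIUM_CRYPTO_SECRETBOX_NONCEBYTES), $nonce, $key));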
In this situation, you need to decide whether the trade-off of shared hosting's low cost is acceptable given the reduced security it provides. This will depend on your application, and it may be that rather than trying to come up with a complex (and possibly not even very effective) way to add security, you're better off just accepting the risk.
I don't view security as all or nothing. There are steps you can take. Give the web DB user only the permissions it needs. Store passwords as hashes. Use OpenID login so users provide their credentials over SSL.
PHP as CGI can be slower, and some hosts may simply not want to support more than one environment.
You may need to stick with your host for some reason, but generally there are so many available that it is a good reminder for people to compare functionality and security as well as cost. I have noticed many companies starting to offer virtual machine hosting, with nearly dedicated-server-level isolation of your code from other users, at what is to me a reasonable cost.
A shared host is no way to run a web site if you are conscious about the privacy and security of your data from the sites you share the server with. Anything accessible to your web application is fair game for the others; it'll only be a matter of time before they can access it (assuming they have an incentive to do so).
"you can place your DB connection variables in a file below the web root. this will at least protect it from web access. if you're going to use file based sessions as well, you can set the session path in your user's directory and again outside the web root."
I don't have an account, so I can't downvote that... but seriously, it is not even relevant to the question.
Duh, you store stuff outside the web root. That goes for any hosting scenario and is not specific to shared hosting. We're not talking about protecting from outsiders here; we're talking about protecting from other applications on the same machine.
To the OP: I think PHP as CGI is the most secure solution, as you already suggested yourself. But as someone else said, there is a performance hit with this.
Something you might look at is moving your sessions and db to MySQL and using safe_mode and/or open_basedir.
I would solve the problem with an infrastructure change instead of a code change.
Consider upgrading to a VPS. Nowadays they can be very inexpensive; I've seen VPSes starting at $10/mo.
