I am working on a project that requires my server xyz.com to receive POST/GET callbacks at xyz.com/callback.php from abc.com (an SMS gateway provider).
Below is how I currently handle the responses in callback.php:
1. Receive the POST data from abc.com (this POST contains a unique ID).
2. Connect to the MySQL database and UPDATE users WHERE messageId = <the ID from the POST data>.
This works and the data is being updated. However, it is causing CPU overload due to the thousands of MySQL connections: whenever the API server sends a delivery receipt for an individual message, callback.php opens a MySQL connection and updates the database.
Is there a best practice to minimize the number of times I connect to MySQL?
I was thinking of doing the following, though I doubt it makes any sense.
1. Receive the POST data from the API server as before.
2. Instead of updating MySQL, simply append to a .txt file with code along these lines:

$query .= "UPDATE users SET status='$post_data_for_status' WHERE `unique`='$post_unique';";

Then, after about 10 minutes, a cron job runs a PHP file that uses mysqli_multi_query($connection, $query) to apply all the accumulated updates over a single connection. After the update, it simply unlinks the .txt file.
Even if the above method would work, I don't know how to implement it, and I am sure there is a better alternative.
Any ideas, please? Sorry for the long epistle.
What you are looking for sounds like avoiding connection churn, which is one of the benefits of a "connection pool", commonly used in Java/J2EE web applications.
PHP doesn't provide connection pooling (AFAIK), but PDO and mysqli do provide "persistent" connections, which give some of the same benefit as a connection pool (avoiding connection churn).
For mysqli, prepend "p:" to the hostname passed to mysqli_connect:
http://php.net/manual/en/mysqli.persistconns.php
For PDO, include PDO::ATTR_PERSISTENT => true in the connection options:
http://php.net/manual/en/pdo.connections.php
Be aware that with persistent connections, the MySQL connection maintains its state (session variables, temporary tables, user-defined variables, et al.). For example, if a client issues a statement such as SET time_zone = '+00:00'; or SET @foo = 'bar';, a subsequent client reusing that MySQL connection will inherit the time_zone setting or the user-defined variable @foo value; in short, whatever changes the previous clients made to the MySQL session state.
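A minimal sketch of both options (host, credentials, and database name below are placeholders):

// mysqli: the "p:" prefix on the host requests a persistent connection.
$mysqli = mysqli_connect('p:db.example.com', 'appuser', 'secret', 'appdb');

// PDO: PDO::ATTR_PERSISTENT => true in the driver options does the same.
$pdo = new PDO(
    'mysql:host=db.example.com;dbname=appdb',
    'appuser',
    'secret',
    array(PDO::ATTR_PERSISTENT => true)
);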
I am to deliver a PHP application to my client, but I don't want to share the database credentials with him. I only want the database to be accessible via my application and by no other means.
However, if I hand over the PHP code, they can read the database credentials in the database connectivity script and access the database from outside the application.
How can I possibly stop them from obtaining the database username and password, given that there is no "compiled" code?
EDIT: The application and database are on two different servers, i.e. the app runs on the client's server and uses a remote connection to the database, which is on my own server.
I don't want to share the database credentials with him.
If they have access to the client connecting to the DB, you have to. Otherwise, by definition, credentialed access would be impossible.
However you obfuscate the connection phase, once the connection is established, anyone with access to the source code can hijack it however they desire. I don't think that even encrypting the whole modeling class would be enough.
This is particularly evident using the deprecated MySQL functions: once a connection is established, any other part of the code may issue a query and have it executed in the context of the connection, with no longer any need to know the credentials. But unless you do things very carefully (and, I suspect, depending on the attacker's ability, even if you do), the same will hold true with mysqli, PDO, and so on.
What you can do is:
limit privileges on the database to the barest possible minimum. This should always be done, BTW, because you never know what vulnerabilities might be in some code, yours or somebody else's that's connected to yours (libraries, plugins, ...), and who could thereby gain access to the application.
limit connection IP to that of the client. Another thing you should always do whenever practical: there's no reason not to.
move whatever is possible into stored procedures and functions; a sketch follows this list.
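For example (a sketch; get_user is an illustrative procedure name, $mysqli is assumed to be an existing mysqli connection, and the GRANT would be run by an admin account, not by the application):

// The application account holds only EXECUTE privilege, e.g.:
//   GRANT EXECUTE ON PROCEDURE appdb.get_user TO 'appuser'@'203.0.113.10';
// get_user is an illustrative stored procedure wrapping the real SELECT.
$userId = 42; // example value
$stmt = $mysqli->prepare('CALL get_user(?)');
$stmt->bind_param('i', $userId);
$stmt->execute();
$row = $stmt->get_result()->fetch_assoc();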
If you want (and really need) even more control, you need more work:
instead of using prepared statements (and, BTW, you will now have to use "static" statements: you won't be able to prepare, say, SELECT {$field} FROM table WHERE id = ? and bind it to id), move the statements themselves to a thin layer on the database server, passing parameters via (e.g.) JSON.
So what was, say,
SQLExec("SELECT * FROM users WHERE userid = ?", "1")
can very easily become
HTTPExec("SELECT * FROM users WHERE userid = ?", 1);
but now, on the server (you'll need an HTTP server there), a script will verify that the requested query is indeed among the approved queries, and only if so will it execute the query and return the results. Depending on several factors, you might even be able to sell this as a "performance improvement". On the other hand, the "database" server is now more loaded.
Now the client cannot issue a "DROP TABLE Students;" statement, because that statement is not on the approved list.
To generate the approved statement list, you can instruct the server to approve everything and store the queries it receives; you then do an exhaustive review of the whole web app in order to trigger all queries at least once (you probably have an integration test script that already does this and verifies that the results are OK); when you've finished, all the queries just received, and only those, will be considered "valid".
If someone changes so much as a comma in one of the queries in the client code, the server will reject the query as unauthorized.
This is in the end equivalent to moving (or duplicating) the relevant application code onto your server, so there might be issues: in effect, you're no longer giving your client the full application.
(At that point you could even replace all the queries in the client code with, say, their MD5 hashes. The server does not really execute the SQL sent by the client anyway; it executes the good copy present server-side for comparison purposes, so sending the query or its MD5 signature is perfectly equivalent.)
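A bare-bones sketch of such a gatekeeper script (everything here, the endpoint shape, the $approved list, and the single integer parameter, is illustrative):

// Whitelist keyed by the MD5 of each approved statement.
$approved = array(
    md5('SELECT * FROM users WHERE userid = ?') => 'SELECT * FROM users WHERE userid = ?',
);

$request = json_decode(file_get_contents('php://input'), true);
$key = md5($request['sql']); // the client could equally send the hash itself

if (!isset($approved[$key])) {
    http_response_code(403); // not on the approved list: reject
    exit;
}

// Execute the server-side copy, never the client-supplied text.
$mysqli = mysqli_connect('localhost', 'appuser', 'secret', 'appdb'); // placeholders
$stmt = $mysqli->prepare($approved[$key]);
$stmt->bind_param('i', $request['params'][0]); // sketch: assumes one integer parameter
$stmt->execute();
echo json_encode($stmt->get_result()->fetch_all(MYSQLI_ASSOC));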
You could ionCube the config file that stores your database connection information. You can upload single files for free:
http://www.ioncube.com/online_encoder.php
To say I am new to PostgreSQL is an understatement. As such, I have spent a great deal of time in the last couple of days going over the various manuals at http://www.php.net/manual/en/ref.pgsql.php and at http://www.postgresql.org/docs/9.1/interactive/index.html.
Short form of my question:
Do different users (logged in from separate IP addresses) utilize the same connection to a PostgreSQL database behind the scenes?
Long form of the question:
In a given PHP script, the database connection $connection is defined near the very beginning of the script. That connection is then used throughout the rest of the script via $GLOBALS['connection']. Thus, in that script, a given user simply reuses the same connection over and over again.
A second user using the same script while logging in from a different location also uses a single copy of the connection.
From the manual (at http://www.php.net/manual/en/function.pg-connect.php):
If a second call is made to pg_connect() with the same connection_string as an existing connection, the existing connection will be returned unless you pass PGSQL_CONNECT_FORCE_NEW as connect_type.
So, does this mean that both users are sharing the same connection (unless the PGSQL_CONNECT_FORCE_NEW flag is sent)?
No. Every time you run a PHP script you make a new connection, unless you're using persistent connections or a connection pooler (such as PgBouncer or pgpool).
The PGSQL_CONNECT_FORCE_NEW flag means that if you call pg_connect() twice with the same parameters inside one PHP script, you really get just one connection, unless this flag is set.
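To illustrate, within a single script (the connection string is a placeholder):

// Same connection string: pg_connect() returns the existing connection.
$a = pg_connect('host=localhost dbname=test user=app');
$b = pg_connect('host=localhost dbname=test user=app'); // reuses $a

// With the flag, a genuinely new connection is opened.
$c = pg_connect('host=localhost dbname=test user=app', PGSQL_CONNECT_FORCE_NEW);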
I'm using a PHP web application to connect to MySQL. What I would like to do is set the user ID of the client who has logged in and then use that MySQL variable within views and functions to limit the data returned.
Currently, I'm simply using:
SET @UserID = 3;
And then referencing this within views/functions.
Is this a suitable and reliable method across multiple concurrent user sessions? Will the variable persist for the lifetime of that user's MySQL connection (or page load) from PHP? I obviously want to ensure no other connections can see it. It is set on every page load (or MySQL reconnection from my app).
Thanks all!
As it clearly states in the first paragraph of the MySQL user-defined variables manual page: http://dev.mysql.com/doc/refman/5.0/en/user-variables.html
User-defined variables are session-specific. That is, a user variable defined by one client cannot be seen or used by other clients. All variables for a given client session are automatically freed when that client exits.
I.e., they exist while the PHP<->MySQL connection is kept alive, per connection, and are automatically removed when the connection is closed/terminated. Unless you're using persistent connections in PHP (which you shouldn't be anyway), the MySQL variables will basically exist for the life of that particular script invocation, and will NOT be available when the same user comes back with another HTTP request later.
So, strictly speaking, what you're doing shouldn't be a problem (each PHP connection exists more or less independently). But, that said, it isn't the best approach.
I think you've got a "if all you have is a hammer, everything looks like a nail" problem. MySQL is not designed with the request lifecycle in mind, and it shouldn't need to be aware of it. PHP, on the other hand, is designed with exactly that idea.
Instead of:
mysqli_query($connection, 'SET @UserID = ' . $id);
$output = mysqli_query($connection, 'SELECT * FROM FOO WHERE ID = @UserID');
Why not just use bound variables?
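For example, with a mysqli prepared statement (table and column as in the question):

// Bind the user ID per query instead of stashing it in a MySQL session variable.
$stmt = mysqli_prepare($connection, 'SELECT * FROM FOO WHERE ID = ?');
mysqli_stmt_bind_param($stmt, 'i', $id);
mysqli_stmt_execute($stmt);
$output = mysqli_stmt_get_result($stmt);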
I'm working on a medium-sized (probably) PHP system which has MySQL connections being opened all over the place in different files, then made into global variables for later-included scripts to access. Since I'm creating another module, I'd like to avoid globals and keep the same MySQL connection for each page request. My current solution is this:
class Db {
    static public $dbConnectionArray = array();
}
For every request, the connections would be saved in the static array and referred back to later. What do you think could go wrong, and why?
I would like to hear some opinions on how best to tackle this, as I would love to reduce the number of connections opened per script run (currently, one page request opens about 6-15 MySQL connections to at least 3 different databases).
No need to reinvent the wheel. You can use MySQL persistent connections to keep connections alive (http://php.net/manual/en/function.mysql-pconnect.php).
By using persistent connections, your PHP scripts will reuse the same database connections (as long as the database name and credentials are the same).
Also, if your databases are on the same host, you should be able to use the same MySQL connection by switching between them with the mysql_select_db() function.
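A minimal sketch of that, using the same legacy mysql_* API the answer links to (host, credentials, and database names are placeholders):

// One persistent connection per host; switch databases on it as needed.
$conn = mysql_pconnect('localhost', 'appuser', 'secret');

mysql_select_db('billing', $conn);   // queries now hit `billing`
$invoices = mysql_query('SELECT * FROM invoices', $conn);

mysql_select_db('reporting', $conn); // same connection, different database
$stats = mysql_query('SELECT * FROM stats', $conn);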
I'm a learner. Is there a way to stay connected to the MySQL database as the user is taken to the next page?
For example, the db connection is made, the user is logged in, and then goes to the next page to access a table in the database. Instead of having to make the db connection again, is there a way to keep the previous connection active?
Or does it matter at all in a low-traffic site?
I read a post yesterday about something related to sessions, and the responder talked about sending a "header-type" (?) file.
Thank you.
Yes and no. Once the user goes to the next page, for all intents and purposes they are not connected to the database anymore.
Your script (on the next page) will still need to open the connection for them. mysql_pconnect() will ensure the actual connection they used is still available the next time they want it; however, it can also cause an excess number of Apache/MySQL connections to sit around uselessly.
I'd strongly suggest not using it unless your benchmarks show that it provides a significant performance gain. Typically, for most applications (especially when you're learning), I would not bother with persistent connections. Note the warning in the PHP manual.
It won't matter unless you're getting a ton of requests, but PHP has mysql_pconnect() for persistent connections to MySQL. Each instance of Apache will keep around an active connection to MySQL that can be used without reconnecting.
I believe you're looking for something like mysql_pconnect(), which establishes a persistent connection to the database.
I can't quite understand your question: once you have fetched data from the DB, you usually do some stuff with it. And when you want to fetch data from the DB, you usually go through the points below.
Some frameworks and libraries make these steps a bit easier.
Here is the usual flow:
1. Make a connection to the DB.
2. Select a DB.
3. Send a query to the DB.
4. Fetch the results.
5. Do some fun stuff with them.
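In plain PHP with mysqli, those steps might look like this (credentials, database, and table are placeholders):

// 1. + 2. Connect and select a database.
$conn = mysqli_connect('localhost', 'appuser', 'secret', 'appdb');

// 3. Send a query to the DB.
$result = mysqli_query($conn, 'SELECT id, name FROM users');

// 4. Fetch the results.
while ($row = mysqli_fetch_assoc($result)) {
    // 5. Do some fun stuff with each row.
    echo $row['id'] . ': ' . $row['name'] . "\n";
}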