I connect to MySQL using PHP's PDO like this:
$driver_options[PDO::ATTR_PERSISTENT] = true;
$db = new PDO('mysql:host='.$host.';dbname='.$db_name, $user, $pass, $driver_options);
I have two databases (let's call them database_A and database_B) on this server, and sometimes a very strange thing happens. Even though $db_name is definitely set to 'database_A', the connection is made to 'database_B'.
It happens completely at random. I can run the same script 10 times in a row and everything is fine, and then on the 11th run the problem appears.
I would never have expected this to happen, and it has given me a lot of headaches. Can anyone explain it? And is the only solution not to use persistence?
PDO::ATTR_PERSISTENT is only partially supported and depends on the PHP version and SQL server you're using.
I would recommend never setting this attribute to true in the driver options, due to its instability.
I was able to replicate your case and found that the ODBC Connection Pooling layer was caching the connection, and since your connection is set to persistent, the cache was being reset each time I made a new connection.
$driver_options[PDO::ATTR_PERSISTENT] = true;
$db = new PDO('mysql:host='.$host.';dbname='.$db_name, $user, $pass, $driver_options);
When you do the above, the PDO connection is placed in the "persistent connection pool", but the pool's purpose is not to cache the database, but rather the memory allocation, authentication and setup groundwork. That is what the time is spent on (and not much of it, at that).
Whatever else you supply in the new PDO() call is LOST.
And if you have two databases with the same credentials, you can get them swapped at random -- as you experienced.
So, do not specify the DB in the new PDO statement, but use the USE databasename SQL statement as soon as the new PDO object is ready.
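A minimal sketch of that approach, reusing the variable names from the question ($db_name is assumed to come from trusted configuration, not from user input):
$driver_options[PDO::ATTR_PERSISTENT] = true;
// No dbname in the DSN; the connection may come back from the persistent pool.
$db = new PDO('mysql:host='.$host, $user, $pass, $driver_options);
// Pin the connection to the schema we actually want, on every request.
$db->exec('USE `'.$db_name.'`');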
Or, as PankajKumar suggests, set up different credentials for the two DBs. Then the mistaken cache hit will not happen (but it is liable to happen again as soon as someone reuses those same credentials, such as 'ubuntu/ubuntu' or 'root/').
I suppose you are running this in production, or on a dev system that is not reloaded very often.
Therefore, restart all of your PHP workers/instances/threads and check whether the problem occurs again.
I believe one of these processes is holding your old configuration and silently causing your errors, at random, whenever your web server happens to hand the request to that old process.
I am working on a project that requires my server xyz.com to receive POST/GET responses to xyz.com/callback.php from abc.com (an SMS gateway provider).
Below is how I currently handle the responses in callback.php:
1. Receive the POST data from abc.com (this POST contains a unique ID)
2. Connect to the MySQL database and UPDATE users WHERE messageId=Same_ID_from_POST_data
This works and the data is updated. However, it is causing CPU overload because of the thousands of MySQL connections: whenever the API server sends a delivery receipt for an individual message, callback.php connects to MySQL and updates the database.
Is there a best practice to minimize the number of times I connect to MySQL?
I was thinking of doing the following, though I doubt it makes much sense.
1. Receive the POST data from the API server as before.
2. Instead of updating MySQL, simply append the query to a .txt file with code along these lines:
$query .= "UPDATE users SET status='$post_data_for_status' WHERE unique='$post_unique';";
Then, after about 10 minutes, use cron to run a PHP file that calls mysqli_multi_query($connection, $query) to update the table, so it all happens over a single connection. After the update, simply unlink the .txt file.
Even if the above method would work, I don't know how to do it, and I am sure there is a better alternative.
Any ideas, please? Sorry for the long epistle.
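For reference, a rough, untested sketch of that queue-and-cron idea. The queue.txt file name and the POST field names are placeholders, and a prepared-statement loop is used instead of mysqli_multi_query to avoid the quoting problems of building raw SQL strings:
// callback.php -- append one line per delivery receipt instead of touching MySQL
$line = json_encode(array('status' => $_POST['status'], 'id' => $_POST['messageId'])) . PHP_EOL;
file_put_contents(__DIR__.'/queue.txt', $line, FILE_APPEND | LOCK_EX);

// flush.php -- run from cron every ~10 minutes; one MySQL connection for the whole batch
$mysqli = new mysqli($host, $user, $pass, $db_name);
$stmt = $mysqli->prepare('UPDATE users SET status = ? WHERE messageId = ?');
foreach (file(__DIR__.'/queue.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES) as $row) {
    $data   = json_decode($row, true);
    $status = $data['status'];
    $id     = $data['id'];
    $stmt->bind_param('ss', $status, $id);
    $stmt->execute();
}
unlink(__DIR__.'/queue.txt');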
What you are looking for sounds like avoiding connection churning, which is one of the benefits we get from using a "connection pool", commonly used in Java/J2EE web applications.
PHP doesn't provide connection pooling (AFAIK). But PDO and mysqli do provide "persistent" connections, which give some of the same benefit as a connection pool (avoiding connection churning).
For mysqli, prepend "p:" to the hostname in the mysqli_connect() call.
http://php.net/manual/en/mysqli.persistconns.php
For PDO, include PDO::ATTR_PERSISTENT => true in the connection.
http://php.net/manual/en/pdo.connections.php
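For illustration, roughly like this (host, credentials and database name are placeholders):
// mysqli: prefix the host with "p:" to request a persistent connection
$mysqli = new mysqli('p:localhost', $user, $pass, $dbname);

// PDO: pass PDO::ATTR_PERSISTENT => true among the driver options
$pdo = new PDO('mysql:host=localhost;dbname='.$dbname, $user, $pass, array(
    PDO::ATTR_PERSISTENT => true,
));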
Be aware that with persistent connections, the MySQL connection maintains its state (session variables, temporary tables, user-defined variables, et al.). For example, if a client issues a statement such as SET time_zone = '+00:00'; or SET @foo = 'bar';, a subsequent client reusing that MySQL connection is going to inherit the time_zone setting or the user-defined variable @foo value; whatever changes to the MySQL session state the previous clients made.
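To make that concrete, a small sketch (the DSN and the values set are arbitrary):
// With a persistent connection, per-session state set here...
$pdo = new PDO('mysql:host=localhost;dbname=test', $user, $pass, array(PDO::ATTR_PERSISTENT => true));
$pdo->exec("SET time_zone = '+00:00'");
$pdo->exec("SET @foo = 'bar'");
// ...remains in effect for whichever later request happens to reuse this same
// underlying MySQL connection, unless that request resets the state itself.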
I have database backups kept on 3 different servers.
Whenever a database failure happens on the currently connected database server, I want my site to connect to the next specified database server automatically. The failure should also be reported to a specified email address.
In this way, each database failure should be handled by connecting to the next available database server until the failure is resolved. If all three servers fail, it can show the default WordPress message "Error establishing database connection".
Though I'd try to get to a more stable environment as well, you should be able to do this. Here's my idea:
$wpdb is set in require_wp_db() (wp-includes/load.php). If a file named "db.php" exists in WP_CONTENT_DIR (usually wp-content), it will be included before $wpdb is created.
Add a class in db.php that extends wpdb and override db_connect with custom code to change host, credentials etc depending on $this->reconnect_retries and then use parent::db_connect(). Instantiate $wpdb with your db-class.
I haven't tested this, but I don't see why it shouldn't work.
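A rough, untested sketch of that idea; the class name, the server list, and the notification address are all placeholders:
// wp-content/db.php -- loaded by require_wp_db() before $wpdb is created
class Failover_DB extends wpdb {
    // host, user, password, database for each server (placeholders)
    private $servers = array(
        array( 'host1', 'user1', 'pass1', 'db1' ),
        array( 'host2', 'user2', 'pass2', 'db2' ),
        array( 'host3', 'user3', 'pass3', 'db3' ),
    );

    public function db_connect( $allow_bail = true ) {
        $last = count( $this->servers ) - 1;
        foreach ( $this->servers as $i => $server ) {
            list( $this->dbhost, $this->dbuser, $this->dbpassword, $this->dbname ) = $server;
            // Only let WordPress "bail" (show its error page) on the last server.
            if ( parent::db_connect( $allow_bail && $i === $last ) ) {
                if ( $i > 0 ) {
                    // wp_mail() is not available this early, so use plain mail().
                    mail( 'admin@example.com', 'DB failover', 'Switched to ' . $this->dbhost );
                }
                return true;
            }
        }
        return false;
    }
}

global $wpdb;
$wpdb = new Failover_DB( DB_USER, DB_PASSWORD, DB_NAME, DB_HOST );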
You can use one of many open-source tools for failover; for MySQL automated failover I would recommend orchestrator.
If you are trying to achieve this from the PHP side, the only thing I can suggest is implementing something like load balancing or a kind of distributed setup.
What does that mean?
Assume you have 3 database servers and they are all in sync. When establishing the database connection, you can use a different server for different users/requests. This way you can avoid any single database server being overloaded.
Implementation
You could maintain a log of active users on each database server and open the connection for a new request/user accordingly, as sketched below.
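A very rough illustration of the idea: pick one of several in-sync servers per request (the host names, and random selection as the strategy, are assumptions, not a recommendation):
$hosts = array('db1.example.com', 'db2.example.com', 'db3.example.com');
$host  = $hosts[array_rand($hosts)];   // could also rotate by user ID or a request counter
$db    = new PDO("mysql:host=$host;dbname=$db_name", $user, $pass);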
This generally seems like an awkward solution; you should consider some sort of distributed system or clustered servers/databases.
But if you insist on implementing it programmatically in your code, you can wrap your connections in successive try/catch blocks.
E.g. if you use PDO, you can do something like this:
try {
    $con = new PDO("mysql:host=$lang[dhost];dbname=$lang[db]", $lang['user'], $lang['pass']);
} catch (PDOException $e) {
    // First server failed; try the second one ($lang['dhost2'] is a placeholder for its host)
    try {
        $con = new PDO("mysql:host=$lang[dhost2];dbname=$lang[db]", $lang['user'], $lang['pass']);
    } catch (PDOException $e) {
        // ...and so on for the third server, then notify by email / give up
    }
}
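The same idea in a flatter shape that is easier to extend, looping over a host list (the second and third host entries are placeholders):
$hosts = array($lang['dhost'], 'second.db.host', 'third.db.host');
$con   = null;
foreach ($hosts as $host) {
    try {
        $con = new PDO("mysql:host=$host;dbname=$lang[db]", $lang['user'], $lang['pass']);
        break;                                  // connected, stop trying further servers
    } catch (PDOException $e) {
        // optionally mail() a notification here, then fall through to the next host
    }
}
if ($con === null) {
    die('Error establishing database connection'); // all servers failed
}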
I think this problem is bigger than you think.
I don't have direct experience with this, but I can tell you that this kind of thing affects the way your systems currently work, and a real investigation (of the actual code and server architecture) is needed.
I don't think this is the kind of thing StackOverflow should do for you.
To say I am new to PostgreSQL is an understatement. As such, I have spent a great deal of time in the last couple of days going over the various manuals at http://www.php.net/manual/en/ref.pgsql.php and at http://www.postgresql.org/docs/9.1/interactive/index.html.
Short form of my question:
Do different users (logged in from separate IP addresses) utilize the same connection to a PostgreSQL database behind the scenes?
Long form of the question:
In a given PHP script, the database connection $connection is defined near the very beginning of the script. That connection is then used throughout the rest of the script via $GLOBALS['connection']. Thus, within that script, a given user simply reuses the same connection over and over again.
A second user using the same script while logging in from a different location also uses a single copy of the connection.
From the manual (at http://www.php.net/manual/en/function.pg-connect.php):
If a second call is made to pg_connect() with the same connection_string as an existing connection, the existing connection will be returned unless you pass PGSQL_CONNECT_FORCE_NEW as connect_type.
So, does this mean that both users are sharing the same connection (unless the PGSQL_CONNECT_FORCE_NEW flag is sent)?
No. Every time you run the PHP script you make a new connection, unless you're using persistent connections or a connection pooler (like pgbouncer or pgpool).
The PGSQL_CONNECT_FORCE_NEW flag means that if, inside one PHP script, you call pg_connect() twice with the same parameters, you really get just one connection, unless this flag is set, as illustrated below.
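A small sketch of that behaviour (the connection parameters are placeholders):
// Two pg_connect() calls with the same connection string inside one script
// normally return the same underlying connection resource...
$conn1 = pg_connect("host=localhost dbname=mydb user=me password=secret");
$conn2 = pg_connect("host=localhost dbname=mydb user=me password=secret");

// ...unless PGSQL_CONNECT_FORCE_NEW is passed, which opens a genuinely new connection.
$conn3 = pg_connect("host=localhost dbname=mydb user=me password=secret", PGSQL_CONNECT_FORCE_NEW);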
I am building some new additions to an existing WebApp. The old code was written using the mysql_ functions. Changing the entire application to use PDO would be a VERY man-hour-intensive thing to do. However, for all new code, I'd like to begin using PDO.
Are there any concerns I need to be aware of when using PDO within an existing application that does NOT use PDO to interface with the database? It's no problem to connect to the DB using both/either of these options at the same time when a page is loaded, correct?
While I'm at it, I am also interested to know how important it is to close a PDO connection after a page loads, or whether it is fine to leave the connection open.
Thanks all.
There's no need to close any type of database connection when a page finishes loading - PHP always does this for you (unless you make the mistake of enabling persistent connections).
However, if you use PDO AND mysql_ in the same page, connecting to the same database, it will consume twice as many connections (while the page is executing) on the server. This may or may not be a problem.
Personally I would recommend that you remain consistent within the application if you don't want to refactor it to use PDO throughout.
I have a class called User with 2 methods: one is "login", the other is "register".
Both "login" and "register" require connecting to the DB. Do I need to connect to the DB every time? Can I have something that connects only once, to reduce the time spent connecting to the DB? Thank you.
Keeping a connection to the database open requires dedicated resource on both the web server and the database server, and the number of open connections available is often very limited (in the range of 100). The connection process is usually very fast, and shouldn't be a problem. By opening and dropping connections as quickly as possible, it's usually no problem to scale up.
As always, try the simple and lightweight approach first (which would be connecting every time and dropping the connection as soon as possible). From there you can measure if there really is a problem.
Yes, you need to connect every time. But you can actually reduce the cost of connecting by using mysql_pconnect(), as long as your username, host and password are the same. It checks whether there is an active resource for the same connection parameters; if one is found and is still alive, it returns that same resource instead of creating a new connection.
Hope that helps.
In that case, class User should expect a DB connection object to be provided in the constructor.
The constructor should then save it in a property, and both the login() and register() methods would expect that property to contain a valid connection object.
And please use PDO instead of the old mysql_* functions. That gives you a connection object that you can share among all the classes that require a DB connection, as in the sketch below.
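A minimal sketch of that suggestion (the table and column names are made up for illustration):
// Connect once, then hand the same PDO object to every class that needs it.
class User
{
    private $db;

    public function __construct(PDO $db)
    {
        $this->db = $db;    // reuse this connection in every method
    }

    public function login($email, $password)
    {
        $stmt = $this->db->prepare('SELECT * FROM users WHERE email = ?');
        $stmt->execute(array($email));
        $row = $stmt->fetch(PDO::FETCH_ASSOC);
        return $row && password_verify($password, $row['password']);
    }

    public function register($email, $password)
    {
        $stmt = $this->db->prepare('INSERT INTO users (email, password) VALUES (?, ?)');
        return $stmt->execute(array($email, password_hash($password, PASSWORD_DEFAULT)));
    }
}

$pdo  = new PDO('mysql:host=localhost;dbname=myapp', $user, $pass); // one connection
$user = new User($pdo);                                             // shared by both methods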
Yes sir. Every time a user connects to your application to perform a login or register action, a connection to the DB must be opened and then closed at the end. A keep-alive option will kill your server.