I have an application that uses PDO to connect to its relevant MySQL database. I call my connection method every time a request is made. Note that I use prepared statements. My connection method (dev version) looks like this...
protected function ConnectionString()
{
    try {
        $dbh = new PDO("mysql:host=".__DB_HOSTNAME.";dbname=".__DB_NAME, __DB_USERNAME, __DB_PASSWORD);
        return $dbh;
    }
    catch (PDOException $e) {
        echo $e->getMessage();
        die();
    }
}
Recently my app's traffic has increased massively, and I'm noticing that it fails to make a lot of connections, so the catch block is being triggered far more often than normal. I assume this is because my app is not very efficient about how it connects to the database.
Would it be wise for me to implement persistent connections? Should I restructure my code so that fewer connections are requested? Or could this be a problem with the number of connections my MySQL server allows? The max is currently set to 151, which I believe is the default.
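For reference, one way to see how close the server actually gets to that limit (assuming access to a MySQL account with enough privileges; the constants just mirror the connection method above) is to compare the configured maximum against the live connection counters:

// Rough diagnostic sketch, not part of the application code.
$dbh = new PDO("mysql:host=".__DB_HOSTNAME.";dbname=".__DB_NAME, __DB_USERNAME, __DB_PASSWORD);

// Total connections the server allows.
$max = $dbh->query("SHOW VARIABLES LIKE 'max_connections'")->fetch(PDO::FETCH_ASSOC);

// Connections open right now, and the high-water mark since the server started.
$now  = $dbh->query("SHOW STATUS LIKE 'Threads_connected'")->fetch(PDO::FETCH_ASSOC);
$peak = $dbh->query("SHOW STATUS LIKE 'Max_used_connections'")->fetch(PDO::FETCH_ASSOC);

echo "max_connections: {$max['Value']}, open now: {$now['Value']}, peak: {$peak['Value']}\n";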
Any help or advice would be much appreciated.
I haven't seen your full application code, but it looks like you open a new connection for every single SQL query. If that's the case, you can implement the Singleton pattern for the DB connection, so that each request creates one handle and reuses it for every query (see the sketch below).
Note that you can't simply cache the database handle in the session and reuse it on the next visit: connection resources don't survive serialization, so they can't be stored in $_SESSION. If you want connections to outlive a single request, persistent connections (PDO::ATTR_PERSISTENT) are the mechanism PHP provides for that.
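A minimal sketch of the singleton idea, reusing the constants from the snippet above (the class name Db and the example query are just assumptions): the PDO handle is created once per request and every query goes through the same instance.

class Db
{
    private static $dbh = null;

    // Returns the one shared PDO handle for this request, creating it on first use.
    public static function get()
    {
        if (self::$dbh === null) {
            self::$dbh = new PDO(
                "mysql:host=".__DB_HOSTNAME.";dbname=".__DB_NAME,
                __DB_USERNAME,
                __DB_PASSWORD
            );
            self::$dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
        }
        return self::$dbh;
    }
}

// Every prepared statement in the request now reuses the same connection
// (hypothetical table and query, for illustration only):
$stmt = Db::get()->prepare('SELECT id FROM users WHERE email = ?');
$stmt->execute(array($email));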
This is my first time asking a question on here. I've scoured Google, Stack Overflow, etc. looking for help with the issue I'm having. We're currently using PHP 5.3.10 & MySQL 5.0.95 with Apache 2.2.21 (CentOS).
We're in the process of starting to cut over from the old mysql library to mysqli in new code, and I'm leading the charge. I've tried
making sure I explicitly close the connection to the database when I'm done with it
freeing result sets when I'm done with them
upping the connection limit to 250 from 150
There are included files (having to do with session checking, etc.) that use the old style mysql_pconnect() to validate certain things. These are included in nearly all of our code.
The new code looks something like this:
$mysqli = new mysqli('p:'.DBHOST, DBUSER, DBPASS, $_SESSION['dbname']);
if ($mysqli->connect_error) {
    throw new Exception($mysqli->connect_error, $mysqli->connect_errno);
    exit;
}
// do my stuff here, a bunch of SQL queries like:
$sql = 'SELECT * FROM MyTable';
$result = $mysqli->query($sql);
if (!$result) {
    throw new SQLException($sql, $mysqli);
    exit;
    // SQLException is an extension to mysqli_sql_exception that adds the
    // query into the messaging internally
}
while ($result && $row = $result->fetch_assoc()) {
    // do stuff here, like show it on screen, etc., all works normally
}
$result->free();   // free up the result
$mysqli->close();  // close the connection to the database
Freeing the results and closing the connection were things I added after getting a "Too many connections" error. Before doing that, I would get 3-4 new database connections each time I ran my program (viewed on the back end with SHOW PROCESSLIST).
The problem is lessened somewhat (it adds 0 to 3 new connections, rather than 3 new connections each time).
Some of my reading suggests that this could have something to do with Apache's process/thread model, plus the way a new persistent connection is opened whenever no idle one exists in the current process. Is that what's happening? Are persistent connections not well supported with mysqli? (Should I give up on persistence?)
Thanks for any suggestions you might have.
I have no experience with mysqli persistent connections, but some of your questions and expectations look odd to me.
"mysqli opens multiple new processes with the p: connect option"
Yes, that is what persistent connections are for.
"making sure I explicitly close the connection to the database when I'm done with it"
You cannot really close it explicitly; the whole point of a persistent connection is that it stays open between requests.
"I would get 3-4 new database connections each time I ran my program."
So you have to make sure you're opening only one.
You already have at least two connects in your script: one old-style mysql_pconnect() and one or more from mysqli.
How many mysqli objects are being instantiated?
How many PHP scripts are run to serve one HTTP request? Are you sure?
After all, if it bothers you so much, why are you using persistent connections at all? Do you get any real (not imaginary) benefit from them?
And if your PHP version is already 5.3, why bother rewriting away from mysql at all?
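One practical way to enforce a single connection is to funnel every include (session checks and page code alike) through one shared handle per request; a rough sketch, reusing the constants from the question (the function name is an assumption, not your actual code):

function shared_db()
{
    static $mysqli = null;

    if ($mysqli === null) {
        // One connection per request; every caller gets the same object back.
        $mysqli = new mysqli(DBHOST, DBUSER, DBPASS, $_SESSION['dbname']);
        if ($mysqli->connect_error) {
            throw new Exception($mysqli->connect_error, $mysqli->connect_errno);
        }
    }
    return $mysqli;
}

// Session-checking includes and page code both call shared_db() instead of
// opening their own mysqli or mysql_pconnect() handles.
$result = shared_db()->query('SELECT * FROM MyTable');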
I currently create a DB connection via PDO for my website like so:
try {
    self::$dbh = new PDO("mysql:host={$host};dbname={$dbName}", $dbUser, $dbPass);
    self::$dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_WARNING);
}
catch (PDOException $e) {
    return $e->getMessage();
}
I've been reading about persistent connections so I wanted to add the persistent flag like so:
self::$dbh = new PDO("mysql:host={$host};dbname={$dbName}", $dbUser, $dbPass,
    array(PDO::ATTR_PERSISTENT => true)
);
From what I've been reading, this could be dangerous if something happens mid-query, etc. It sounds like this isn't really a recommended approach.
Are there any other alternatives to maintain a persistent DB connection with MySQL?
The reason to use persistent connections is that you have a high number of PHP requests per second, and you absolutely need every last fraction of a percent of performance.
Even though creating a new MySQL connection is really pretty inexpensive (compared to connecting to Oracle or something), you may be trying to cut down this overhead. Keep in mind, though, that most sites get along just fine without doing this. It depends on how heavy your traffic is. Also, MySQL 5.6 and 5.7 have made it even more efficient to create a new connection, so the overhead is lower already if you upgrade.
The risk described in the post you linked to was that session-specific state didn't get cleaned up when a given DB connection was inherited by a subsequent PHP request.
Examples of session state include:
Unfinished transactions
Temporary tables
User variables
Connection character set
This can even be a security problem, for instance if one PHP user populates a temp table with privileged information, and then another PHP user finds they can read it.
Fortunately, in the 4 years since @Charles gave his answer, the mysqlnd driver has addressed this. It now uses mysql_change_user(), which is like a "soft disconnect" that resets all of that session state without releasing the socket. So you can get the benefit of persistent connections without risking leaking session state from one PHP request to another. See http://www.php.net/manual/en/mysqlnd.persist.php
This needs the mysqlnd driver to be enabled, which it should be if you use any reasonably up-to-date version of PHP.
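If you want to confirm which client library your PDO MySQL driver was built against, one way (the exact version string varies by build, so treat this as a sketch) is to inspect the client version reported by the connection:

// Sketch: open a persistent connection and check whether mysqlnd is the client library.
$dbh = new PDO(
    "mysql:host={$host};dbname={$dbName}",
    $dbUser,
    $dbPass,
    array(PDO::ATTR_PERSISTENT => true)
);

$client = $dbh->getAttribute(PDO::ATTR_CLIENT_VERSION);
if (strpos($client, 'mysqlnd') !== false) {
    // mysqlnd build: persistent connections get the change_user reset described above.
    echo "Client library: {$client}\n";
} else {
    // libmysqlclient build: be more cautious with PDO::ATTR_PERSISTENT.
    echo "Client library: {$client}\n";
}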
Why do you need a persistent connection? PHP is stateless and reinitializes on every request, so there are mostly no advantages, and quite a few disadvantages (e.g. sudden disconnections with no handlers), to working with persistent connections.
I'm wondering if there are any negative performance issues associated with using a Singleton class to connect to a MySQL database. In particular, I'm worried about the amount of time it will take to obtain a connection when the website is busy. Can the singleton get "bogged down"?
public static function obtain($server = null, $user = null, $pass = null, $database = null)
{
    if (!self::$instance) {
        self::$instance = new Database($server, $user, $pass, $database);
    }
    return self::$instance;
}
Even if you write that, each PHP request will still use its own connection, which is what you want.
You can use a singleton to handle your database connection because, even within a single request, your app may issue several database queries, and you will lose performance if you re-open the database connection each time.
But keep in mind that you should always write sensible queries and ask the database only for the data you need, as few times as possible. That will keep things smooth!
It doesn't really matter from a MySQL performance standpoint whether or not you use singletons in this case as each request to that page will create its own object (singleton or not) and connection.
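To make that concrete, here is a rough sketch of how obtain() behaves within one request (the argument values are made up, and the Database class internals are assumed to wrap a single MySQL connection):

// First call creates the connection; later calls in the same request reuse it.
$db1 = Database::obtain('localhost', 'user', 'pass', 'mydb');
$db2 = Database::obtain(); // same instance, no new connection opened

var_dump($db1 === $db2); // bool(true)

// A second HTTP request runs in a fresh PHP process/worker, so it gets its own
// Database instance and its own MySQL connection; the singleton cannot become
// a shared bottleneck across requests.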
I understand how transactions work and everything functions as expected, but I do not like the way I access connections to commit or rollback transactions.
I have 3 service classes that can access the same singleton connection object. I want to wrap these three things in a single transaction, so I do this:
try {
    $service1 = new ServiceOne();
    $service2 = new ServiceTwo();
    $service3 = new ServiceThree();

    $service1->insertRec1($data);
    $service2->deleteRec2($data);
    $service3->updateRec3($data);

    $service1->getSingletonConnection()->commit();
}
catch (Exception $ex) {
    $service1->getSingletonConnection()->rollback();
}
The connection object returned by getSingletonConnection is just a wrapper around the oci8 connection, and committing is oci_commit; rollback is oci_rollback.
As I said, this works because they are all accessing the same connection, but it feels wrong to access the connection through any arbitrary service object. Also, there are two different databases used in my app so I need to be sure that I retrieve and commit the correct one... not sure if there is any way around that though.
Is there a better way to handle transactions?
"it feels wrong to access the connection through any arbitrary service object."
I agree with you 100%.
It seems to me that if each service only makes up part of a database transaction, then the service cannot be directly responsible for determining the database session to use. You should select and manage the connection at the level of code that defines the transaction.
So your current code would be modified to something like:
try {
    $conn = getSingletonConnection();

    $service1 = new ServiceOne($conn);
    $service2 = new ServiceTwo($conn);
    $service3 = new ServiceThree($conn);

    $service1->insertRec1($data);
    $service2->deleteRec2($data);
    $service3->updateRec3($data);

    $conn->commit();
}
catch (Exception $ex) {
    $conn->rollback();
}
It seems like this would simplify dealing with your two-database issue, since there would only be one place to decide which connection to use, and you would hold a direct reference to that connection until you end the transaction.
If you wanted to expand from a singleton connection to a connection pool, this would be the only way I can think of to guarantee that all three service calls used the same connection.
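As a rough illustration of that injection (the constructor signature, getRawHandle() accessor, table name, and bind variable are all assumptions, not your actual code), each service would simply store the connection it is given and leave commit/rollback to the caller:

class ServiceOne
{
    private $conn;

    public function __construct($conn)
    {
        // The wrapper around the oci8 connection is passed in, so the
        // transaction boundary stays with the code that created it.
        $this->conn = $conn;
    }

    public function insertRec1(array $data)
    {
        // Execute without auto-committing; the caller decides when to
        // commit() or rollback() the shared connection.
        $stmt = oci_parse($this->conn->getRawHandle(),
            'INSERT INTO rec1 (col1) VALUES (:col1)');
        oci_bind_by_name($stmt, ':col1', $data['col1']);
        oci_execute($stmt, OCI_NO_AUTO_COMMIT);
    }
}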
There's nothing intrinsically wrong with a single connection.
If you have multiple connections, then each runs an independent transaction. You basically have two options.
Maintain the current single connection object shared by all three services.
Maintain separate connections (with the related overhead) for each service, and commit/rollback each individual connection separately (not particularly safe, because then you can't guarantee ACID consistency).
As a way around the two separate database instances you're connecting to, you could use database links so that you only ever connect to a single database (see the sketch below).
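For completeness, a hedged sketch of the db-link approach with oci8 (the link name, table, and TNS variable are made up, and the link itself has to be created on the database side by a DBA beforehand):

// Single connection to the primary database.
$conn = oci_connect($dbUser, $dbPass, $primaryDbTns);

// Work on the second database goes through a database link, so PHP never
// opens a second connection; Oracle coordinates the distributed transaction,
// so one commit/rollback on $conn covers both sides.
$stmt = oci_parse($conn, 'SELECT * FROM remote_table@other_db');
oci_execute($stmt);

while (($row = oci_fetch_assoc($stmt)) !== false) {
    // process $row
}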