I'm wondering if there are any negative performance issues associated with using a Singleton class to connect to MySQL database. In particular I'm worried in the amount of time it will take to obtain a connection when the website is busy. Can the singleton get "bogged down"?
public static function obtain($server = null, $user = null, $pass = null, $database = null){
    // Note: the arguments only matter on the first call; every later call
    // returns the existing instance regardless of what is passed in.
    if (!self::$instance){
        self::$instance = new Database($server, $user, $pass, $database);
    }
    return self::$instance;
}
Even if you write that, each PHP request will still get its own connection, which is what you want.
You can use a singleton to handle your database connection because, even within a single request, your app may send several database queries, and you will lose performance if you re-open the database connection each time.
But keep in mind that you should always write sensible queries, request only the data you need from your database, and hit it as few times as possible. That will keep things smooth!
It doesn't really matter from a MySQL performance standpoint whether or not you use singletons in this case, as each request to that page will create its own object (singleton or not) and connection.
Related
First, some background: I am working on a pre-2000 website that uses mysql_connect and mysql_* functions everywhere. It is not feasible to simply replace all of these at the moment.
I do, however, plan on slowly making the change to mysqli_* functions. I have run into an instance where I need to use mysqli_multi_query though, and was wondering if it would be better to:
Create a function that opens and closes the mysqli connection, while performing one mysqli_multi_query.
Create a function that opens a mysqli connection when needed, and open it only on pages that need it.
Simply use the mysqli_connect() function the same way I am using the mysql_connect() function and have both connect at the beginning of my scripts and close at the end, on all pages.
The trouble I am having in deciding between these is that option 1 limits the number of multi-queries I can run on one page (while also adding to the future code cleanup that needs to be done); option 2 also adds to the cleanup, albeit not as much as option 1; and option 3 might be either inefficient or unsafe, although it would let me clean up as I run into the old queries.
This website gets over 1 million visitors per month.
Anyone know what would be "best-practice" in this scenario?
"best practice" seems to be to use PDO for your MySQL connections, according to recent articles turned up during a search on the topic (for example https://phpbestpractices.org/#mysql ) although I couldn't find any specific guidance on when to open those connections if they are not strictly required on that page.
I'd suggest going with your second choice, as the abstraction makes the code more manageable and maintainable in the future, for you and for other devs. As far as I know, there is no specific drawback to using mysql_* and mysqli_* functions side by side, and it is recommended to use mysqli_* over mysql_* in all cases (see http://www.php.net/manual/en/mysqlinfo.api.choosing.php, the section under 'Recommended API').
However your code will not be as secure as it could be until you complete the transition.
I would say that whether or not to open connections when they are not strictly required is a judgement call for you to make. I'd lean towards only opening a connection when you need it, on general principles of efficiency; in practice, though, when dealing with legacy code that may prove far more trouble than it's worth. If it doesn't slow your server down too much you could live with it, so long as you recognise it's not the most efficient way to go.
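If you do go with lazy opening, a thin wrapper can keep that decision out of the page code. A minimal sketch, assuming nothing about the existing codebase (the LazyDb name and handle() method are illustrative, not from the question): the real PDO object is only created the first time something actually needs it, so pages that never query pay no connection cost.

```php
<?php
// Lazy-connection wrapper (a sketch). The factory callable builds the real
// connection; it is invoked at most once, on first use.
class LazyDb
{
    private $factory;     // callable that creates the underlying connection
    private $conn = null; // cached connection, created on demand

    public function __construct(callable $factory)
    {
        $this->factory = $factory;
    }

    // Returns the underlying connection, creating it only on the first call.
    public function handle()
    {
        if ($this->conn === null) {
            $this->conn = call_user_func($this->factory);
        }
        return $this->conn;
    }
}

// In production the factory would be something like:
//   $db = new LazyDb(function () {
//       return new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
//   });
// No connection is opened until $db->handle() is first called.
```

Pages that still use the old mysql_* functions simply never call handle(), so the migration can proceed page by page without paying for unused mysqli/PDO connections.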
PHP offers three different APIs to connect to MySQL: the mysql (outdated), mysqli, and PDO extensions.
You don't need to connect to the database on every request.
mysqli_connect() with p: host prefix
//or
PDO::__construct() with PDO::ATTR_PERSISTENT as a driver option
http://www.php.net/mysql_pconnect
In your case, I would use a PDO connection implemented as the singleton pattern with the persistent connection option, included at the top of every script.
class Database {
    private static $instance;
    private static $dsn = 'mysql:dbname=testdb;host=127.0.0.1';
    private static $user = 'dbuser';
    private static $password = 'dbpass';

    public static function getInstance() {
        if (!self::$instance) {
            // $this is not available in a static method, so the credentials
            // are static properties; note ATTR_PERSISTENT must map to true
            self::$instance = new PDO(
                self::$dsn,
                self::$user,
                self::$password,
                array(PDO::ATTR_PERSISTENT => true)
            );
        }
        return self::$instance;
    }
}
That way you can get the database instance with:
Database::getInstance();
...and don't flame me for using singletons in the legacy app! ;)
I currently create a DB connection via PDO for my website like so:
try {
self::$dbh = new PDO("mysql:host={$host};dbname={$dbName}", $dbUser, $dbPass);
self::$dbh->setAttribute( PDO::ATTR_ERRMODE, PDO::ERRMODE_WARNING );
}
catch (PDOException $e) {
return $e->getMessage();
}
I've been reading about persistent connections so I wanted to add the persistent flag like so:
self::$dbh = new PDO("mysql:host={$host};dbname={$dbName}", $dbUser, $dbPass,
array(PDO::ATTR_PERSISTENT => true
));
From what I've been reading, this could be dangerous if something happens mid-query. It sounds like this isn't really a recommended method.
Are there any other alternatives to maintain a persistent DB connection with MySQL?
The reason to use persistent connections is that you have a high number of PHP requests per second, and you absolutely need every last fraction of a percent of performance.
Even though creating a new MySQL connection is really pretty inexpensive (compared to connecting to Oracle or something), you may be trying to cut down this overhead. Keep in mind, though, that most sites get along just fine without doing this. It depends on how heavy your traffic is. Also, MySQL 5.6 and 5.7 have made it even more efficient to create a new connection, so the overhead is lower already if you upgrade.
The risk described in the post you linked to was that session-specific state didn't get cleaned up when a given DB connection was inherited by a subsequent PHP request.
Examples of session state include:
Unfinished transactions
Temporary tables
User variables
Connection character set
This can even be a security problem, for instance if one PHP user populates a temp table with privileged information, and then another PHP user finds they can read it.
Fortunately, in the 4 years since @Charles gave his answer, the mysqlnd driver has addressed this. It now uses mysql_change_user(), which acts like a "soft disconnect" that resets all that session state, but without releasing the socket. So you can get the benefit of persistent connections without risking leaking session state from one PHP request to another. See http://www.php.net/manual/en/mysqlnd.persist.php
This needs the mysqlnd driver to be enabled, which it should be if you use any reasonably up-to-date version of PHP.
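You can verify this from PHP itself; a quick sketch (whether mysqlnd is actually compiled in depends on how your PHP was built, so treat the result as informational):

```php
<?php
// Check whether the mysqlnd driver is compiled into this PHP build.
// On most PHP 5.4+ builds it is the default driver for mysqli and pdo_mysql.
$hasMysqlnd = extension_loaded('mysqlnd');

// For PDO specifically, the client version string mentions "mysqlnd" when
// it is in use (this requires an open connection):
//   $pdo->getAttribute(PDO::ATTR_CLIENT_VERSION);
```

If the check comes back false, the persistent-connection cleanup described above does not apply, and the older caveats about leaked session state still stand.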
Why do you need a persistent connection? PHP is stateless and reinitializes on every request, so there are mostly no advantages, and quite a few disadvantages (e.g. sudden disconnections with no handlers), to working with persistent connections.
BACKGROUND:
I am passing variables through AJAX to php file. The php file connects to a server and retrieves the result which it passes back to javascript. This happens every time a user clicks on the request button (about every 5 secs). Thus for each user, the php file (and so the mysql connection) is called once every 5 secs.
ISSUE:
As is apparent above, the number of mysql connections are impractically high.
QUESTION:
Is there a better architecture where instead of having so many mysql connections, I can rather have fewer connections.
I have read a little bit about mysql_pconnect. But what happens if I have to upgrade, since I read somewhere that mysqli doesn't support it? How many queries can a single mysql_pconnect handle? If anyone suggests mysql_pconnect, how should I implement it?
Is there a better architecture where, instead of having so many MySQL connections, I can rather have fewer connections?
Don't really know, but I think that for you the proposed pconnect is the best available option. Unless you have either mysqli or PDO_mysql available now?
I have read a little bit about mysql_pconnect.
But what happens if I have to upgrade since I read somewhere that mysqli doesnt support it?
You would probably need to change approach when upgrading beyond PHP 5.5, since the mysql_* extension is deprecated in 5.5 and removed in PHP 7.
How many queries can a single mysql_pconnect handle?
Unlimited, as long as the connection is kept alive. If no free connection is available, a new one is created.
If anyone suggests mysql_pconnect then how to implement it?
Change your current mysql_connect calls to mysql_pconnect. That should be all.
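For illustration, the change really is that small; a sketch (host and credentials are placeholders, and the function_exists guard is only there so the snippet still parses and runs on PHP 7+, where the mysql_* extension has been removed):

```php
<?php
// Sketch of the one-line change from mysql_connect to mysql_pconnect.
function get_db_link()
{
    if (!function_exists('mysql_pconnect')) {
        return null; // extension removed in PHP 7; migrate to mysqli/PDO
    }
    // Before: $link = mysql_connect('localhost', 'dbuser', 'dbpass');
    // After: just add the "p" -- PHP reuses an existing persistent
    // connection matching host/user/pass if one is free, otherwise it
    // opens a new one.
    $link = mysql_pconnect('localhost', 'dbuser', 'dbpass');
    return $link;
}
```

Everything downstream (mysql_query, mysql_fetch_assoc, etc.) works unchanged on the returned link.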
What you are looking for is the singleton design pattern for database connections. But it has its trade-offs too. Example code for a singleton database connection is below.
define('DB_NAME', "test");
define('DB_USER', "root");
define('DB_PASS', "rootpass");
define('DB_HOST', "localhost");
class Connection {
    private static $connection_;

    private function __construct() {
        $con = mysql_connect(DB_HOST, DB_USER, DB_PASS);
        mysql_select_db(DB_NAME, $con);
        self::$connection_ = $con;
    }

    public static function getConnection() {
        if (!self::$connection_) {
            new Connection;
        }
        return self::$connection_;
    }
}
$con = Connection::getConnection();
Read more
php singleton database connection, is this code bad practice?
How static vs singleton classes work (databases)
You can find tons of examples and information if you Google. Hope this helps.
I have an application that uses PDO to connect to its relevant MySQL database. I call my connection string every time a request is made. Note I use prepared statements. My connection string method (dev version) looks like this...
protected function ConnectionString()
{
try {
$dbh = new PDO("mysql:host=".__DB_HOSTNAME.";dbname=".__DB_NAME, __DB_USERNAME, __DB_PASSWORD);
return $dbh;
}
catch(PDOException $e)
{
echo $e->getMessage();
die();
}
}
Recently my app's traffic has increased massively and I'm noticing that my app is failing to make a lot of connections. So the Catch is being triggered a lot more than normal. I assume this is because my app is not very efficient when connecting to the database.
Would it be wise for me to implement persistent connections? Or should I restructure my code so less connections are requested. Or would this be a problem with the number of connections my MySQL databases allow? The max is currently set to 151, which I believe is the default.
Any help or advice would be much appreciated.
I haven't seen the full application code... but it seems like you open a new connection for every single SQL query.
If so, you can apply the Singleton pattern to the DB connection.
Another way to decrease DB connections is to cache the DBH and reuse the already-alive connection for subsequent queries (note, though, that a raw connection resource cannot actually be serialized into the session between requests).
I understand how transactions work and everything functions as expected, but I do not like the way I access connections to commit or rollback transactions.
I have 3 service classes that can access the same singleton connection object. I want to wrap these three things in a single transaction, so I do this:
try {
$service1 = new ServiceOne;
$service2 = new ServiceTwo;
$service3 = new ServiceThree;
$service1->insertRec1($data);
$service2->deleteRec2($data);
$service3->updateRec3($data);
$service1->getSingletonConnection()->commit();
}
catch(Exception $ex) {
$service1->getSingletonConnection()->rollback();
}
The connection object returned by getSingletonConnection is just a wrapper around the oci8 connection, and committing is oci_commit; rollback is oci_rollback.
As I said, this works because they are all accessing the same connection, but it feels wrong to access the connection through any arbitrary service object. Also, there are two different databases used in my app so I need to be sure that I retrieve and commit the correct one... not sure if there is any way around that though.
Is there a better way to handle transactions?
it feels wrong to access the connection through any arbitrary service object.
I agree with you 100%.
It seems to me that if each service only makes up part of a database transaction, then the service cannot be directly responsible for determining the database session to use. You should select and manage the connection at the level of code that defines the transaction.
So your current code would be modified to something like:
try {
$conn = getSingletonConnection();
$service1 = new ServiceOne($conn);
$service2 = new ServiceTwo($conn);
$service3 = new ServiceThree($conn);
$service1->insertRec1($data);
$service2->deleteRec2($data);
$service3->updateRec3($data);
$conn->commit();
}
catch(Exception $ex) {
$conn->rollback();
}
It seems like this would simplify dealing with your two-database issue, since there would only be one place to decide which connection to use, and you would hold a direct reference to that connection until you end the transaction.
If you wanted to expand from a singleton connection to a connection pool, this would be the only way I can think of to guarantee that all three service calls used the same connection.
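One way to enforce that rule is a small helper that owns the commit/rollback, so no service ever touches transaction state directly. A sketch; the with_transaction name is mine, and it assumes the connection wrapper exposes commit() and rollback() as in the question:

```php
<?php
// Transaction helper (illustrative, not from the original app). The
// connection is chosen once by the caller, passed to the work callback,
// and committed or rolled back in exactly one place.
function with_transaction($conn, callable $work)
{
    try {
        $result = $work($conn);
        $conn->commit();
        return $result;
    } catch (Exception $ex) {
        $conn->rollback();
        throw $ex; // let the caller decide how to report the failure
    }
}
```

The original example then shrinks to a single call, with the three service operations inside the callback:

with_transaction($conn, function ($conn) use ($data) { /* three service calls */ });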
There's nothing intrinsically wrong with a single connection.
If you have multiple connections, then each runs an independent transaction. You basically have two options:
1. Maintain the current single connection object, shared by each of the three services.
2. Maintain separate connections (with the related overheads) for each service, and commit/rollback each individual connection separately (not particularly safe, because you can't then guarantee ACID consistency).
As a way around the two separate database instances you're connecting to: use DB links so that you only connect to a single database.
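Sketched with oci8, assuming a DBA has already created the link (the remote_db link name and customers table are placeholders):

```php
<?php
// Assumes an Oracle database link already exists, created once by a DBA:
//   CREATE DATABASE LINK remote_db CONNECT TO scott IDENTIFIED BY tiger
//     USING 'remote_tns_alias';
// With the link in place, one oci8 connection can read both databases, so
// the whole transaction stays on a single connection and commits atomically.
function fetch_remote_rows($conn)
{
    // "@remote_db" routes this query through the database link.
    $stmt = oci_parse($conn, 'SELECT id, name FROM customers@remote_db');
    oci_execute($stmt);
    $rows = array();
    while (($row = oci_fetch_assoc($stmt)) !== false) {
        $rows[] = $row;
    }
    return $rows;
}
```

Since all reads and writes now go through one connection, a single oci_commit / oci_rollback covers work on both databases from the application's point of view.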