I want to implement connection pooling in PHP, similar to the way it works in Java.
Why I need this:
Let's consider a flow:
Step 1: Connect to DB --- Resource Id #12
Step 2: Some computation... takes 0.3 seconds
Step 3: Query on Solr... takes 2 seconds
Step 4: Connect to DB --- Resource Id #12 (reusing the same resource id)
Step 5: Exit
In Step 4 I reuse the same DB resource as in Step 1. However, the connection sits in the sleep state during Steps 2 and 3, so it cannot be used by any other PHP process (other clients) until the script exits.
Solutions I have considered:
Call mysql_close every time after a query is fired (sketched below). Drawback: I need to reconnect every time, which is time-consuming.
Create a Java service to handle the queries. Possible, but too time-consuming because I would have to migrate the queries; I am looking for another solution.
Explore a third-party tool such as SQL Relay, but I am not sure it would be a success, and not many well-known companies have used it.
mysql_pconnect does not solve my case.
Please suggest a solution.
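For reference, option 1 looks roughly like the sketch below. The hostname and credentials are placeholders, and sleep() calls stand in for the computation and the Solr round trip; the point is that every reconnect repeats the TCP and authentication handshake.
<?php
// Sketch of option 1 (close after every query) using mysqli.
// Placeholder host/credentials; sleep() simulates the non-DB work.

$db = new mysqli('db.example.com', 'user', 'pass', 'mydb');
$db->query('SELECT 1');   // Step 1: first query
$db->close();             // release the server-side thread immediately

usleep(300000);           // Step 2: ~0.3 s of computation
sleep(2);                 // Step 3: ~2 s Solr round trip (simulated)

// Step 4: reconnect and pay the connection cost a second time
$db = new mysqli('db.example.com', 'user', 'pass', 'mydb');
$db->query('SELECT 1');
$db->close();             // Step 5: exit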
One way that you can apply scalability techniques to this pool model is to allow on the fly changes to your pool distribution. If you have a particular permalink that is extremely popular for some reason, you could move slaves from the primary pool to the comments pool to help it out. By isolating load, you’ve managed to give yourself more flexibility. You can add slaves to any pool, move them between pools, and in the end dial-in the performance that you need at your current traffic level.
There’s one additional benefit that you get from MySQL database pooling, which is a much higher hit rate on your query cache. MySQL (and most database systems) have a query cache built into them. This cache holds the results of recent queries. If the same query is re-executed, the cached results can be returned quickly.
If you have 20 database slaves and execute the same query twice in a row, you only have a 1/20th chance of hitting the same slave and getting a cached result. But by sending certain classes of queries to a smaller set of servers you can drastically increase the chance of a cache hit and get greater performance.
You will need to handle database pooling within your code - a natural extension of the basic load balancing code in Part 1. Let’s look at how we might extend that code to handle arbitrary database pools:
<?php
class DB {
    // Configuration information:
    private static $user = 'testUser';
    private static $pass = 'testPass';
    private static $config = array(
        'write' =>
            array('mysql:dbname=MyDB;host=10.1.2.3'),
        'primary' =>
            array('mysql:dbname=MyDB;host=10.1.2.7',
                  'mysql:dbname=MyDB;host=10.1.2.8',
                  'mysql:dbname=MyDB;host=10.1.2.9'),
        'batch' =>
            array('mysql:dbname=MyDB;host=10.1.2.12'),
        'comments' =>
            array('mysql:dbname=MyDB;host=10.1.2.27',
                  'mysql:dbname=MyDB;host=10.1.2.28'),
    );

    // Static method to return a database connection to a certain pool
    public static function getConnection($pool) {
        // Make a copy of the server array, to modify as we go:
        $servers = self::$config[$pool];
        $connection = false;

        // Keep trying to make a connection:
        while (!$connection && count($servers)) {
            $key = array_rand($servers);
            try {
                $connection = new PDO($servers[$key],
                    self::$user, self::$pass);
            } catch (PDOException $e) {}

            if (!$connection) {
                // Couldn't connect to this server, so remove it:
                unset($servers[$key]);
            }
        }

        // If we never connected to any database, throw an exception:
        if (!$connection) {
            throw new Exception("Failed Pool: {$pool}");
        }
        return $connection;
    }
}

// Do something comment-related:
$comments = DB::getConnection('comments');
. . .
?>
When starting my Yii2/PHP application, how can I check if / wait until the database is up?
Currently with MySQL I use:
$time = time();
$ok = false;
do {
    try {
        $pdo = new PDO($dsn, $username, $password);
        if ($pdo->query("SELECT 1 FROM INFORMATION_SCHEMA.SCHEMATA")) {
            $ok = true;
        }
    } catch (\Exception $e) {
        sleep(1);
    }
} while (!$ok && time() < $time + 30);
Now I want to make my application run with both MySQL and PostgreSQL.
But SELECT 1 FROM INFORMATION_SCHEMA.SCHEMATA does not work in PostgreSQL.
Is there a SQL-statement (using PDO database connectivity) that works on both database systems to check if the database is up and running?
Yii2 has a property to check whether a connection has been established; it is really not necessary to write a script for this, since the framework provides an abstraction for the databases it supports (the $isActive property).
$isActive — public read-only property: whether the DB connection is established.
public boolean getIsActive ( )
You can do the check in your default controller in the following way:
<?php
class DefaultController extends Controller
{
    public function init()
    {
        if (!Yii::$app->db->isActive) {
            // The connection does not exist.
        }
        parent::init();
    }
}
It is not good practice to force an application to wait for a database connection unless there are very specific requirements. The availability of the database should be a mandatory prerequisite for the application to start; the application itself should not "wait" for the database to become available.
There are ways to run Docker containers in an orderly manner or with specific startup requirements; this link could give you a better option than delegating the wait to the application.
You could use SELECT 1, which is standard SQL.
You can use dbfiddle to test against various servers.
The server could go away at any time, so checking the error response of every query is a much better approach.
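As a minimal sketch (reusing the DSN, credentials, and 30-second budget from the question; waitForDatabase is just an illustrative name), the portable version of the wait loop could look like this:
<?php
// Sketch: portable "wait until the database is up" loop using SELECT 1.
// $dsn may be 'mysql:host=...;dbname=...' or 'pgsql:host=...;dbname=...'.
function waitForDatabase(string $dsn, string $username, string $password, int $timeout = 30): ?PDO
{
    $deadline = time() + $timeout;
    do {
        try {
            $pdo = new PDO($dsn, $username, $password, array(
                PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
            ));
            if ($pdo->query('SELECT 1') !== false) {
                return $pdo;       // the server answered, so it is up
            }
        } catch (\Exception $e) {
            sleep(1);              // not reachable yet, try again
        }
    } while (time() < $deadline);
    return null;                   // gave up after $timeout seconds
}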
I understand there is no connection pooling in PHP (see Connection pooling in PHP), and we are currently using Pear DB.
I have legacy cron job code which uses a Pear DB connection.
while (true) {
    ...
    foreach ($keys as $key) {
        $connection_string = get_connection_string_based_on_key($key);
        $DB = & \DB::connect($connection_string);
        ...
        // Avoid resource leakage.
        $DB->disconnect();
    }
}
We have found that DB::connect is a performance hotspot, so I plan to build a pseudo connection pool:
$pool = array();
while (true) {
    ...
    foreach ($keys as $key) {
        $connection_string = get_connection_string_based_on_key($key);
        if (array_key_exists($connection_string, $pool)) {
            $DB = $pool[$connection_string];
        } else {
            $DB = & \DB::connect($connection_string);
            $pool[$connection_string] = $DB;
        }
        ...
        // No $DB->disconnect(), as we want the
        // DB connection to remain valid inside the pool.
    }
}
The cron job might run for several days, weeks, or months. I was wondering, are there any gotchas behind such a pseudo connection pool? For instance:
Will a DB connection remain valid after it has stayed inside the pool for a long period (say, a week)?
Could we run out of DB resources? If yes, what is a suitable mechanism to handle an ever-growing pool?
This is not really a question about your PHP code. The connection timeout and the maximum number of simultaneous connections have to be configured in your database system.
When using MySQL:
Connections:
http://www.electrictoolbox.com/update-max-connections-mysql/
Timeout:
How can I change the default Mysql connection timeout when connecting through python?
I think connect_timeout=0 means that a MySQL server will try to keep the connection open as long as possible.
As far as I know, there is no configuration option for unlimited connections (beyond what system resources allow).
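To address the two concerns directly, here is a hedged sketch of the pseudo pool (shown with PDO for brevity rather than Pear DB; MAX_POOL_SIZE and getPooledConnection are illustrative names). A connection that sits idle for a week will almost certainly have been dropped by the server's wait_timeout, so probe each pooled handle before reuse and reconnect if it is dead, and cap the pool size so it cannot grow without bound.
<?php
// Sketch of a pseudo connection pool that survives a long-running loop.
const MAX_POOL_SIZE = 50;

function getPooledConnection(array &$pool, string $dsn, string $user, string $pass): PDO
{
    if (isset($pool[$dsn])) {
        try {
            // Cheap probe: a stale handle (e.g. killed by wait_timeout
            // after idling for a week) will throw here.
            $pool[$dsn]->query('SELECT 1');
            return $pool[$dsn];
        } catch (\Exception $e) {
            unset($pool[$dsn]);    // drop the dead handle and reconnect below
        }
    }
    if (count($pool) >= MAX_POOL_SIZE) {
        array_shift($pool);        // evict the oldest entry to cap growth
    }
    $pool[$dsn] = new PDO($dsn, $user, $pass, array(PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION));
    return $pool[$dsn];
}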
I am running some tests on a cluster I have set up. Right now, I have a three-node cluster with one master, one slave, and one arbiter.
I have a connection string like
mongodb://admin:pass@the_slave_node,the_master_node
I was under the impression that one of the features of the connection string was that supplying more than one host would give a certain degree of resiliency on the client side. I was expecting that when I took down the_slave_node, the PHP driver would move on and try connecting to the_master_node; however, this doesn't seem to be the case, and instead I get the error:
The MongoCursor object has not been correctly initialized by its constructor
I know that MongoClient is responsible for making the initial connections, and indeed it is that way in the code. So this error is an indication to me that the MongoClient didn't connect properly and that I didn't implement correct error checking. However, that is a different issue --
How do I guarantee that the MongoClient will connect to at least one of the hosts in the host list, given that at least one host is up and some hosts are down?
Thank you
The MongoCursor object has not been correctly initialized by its constructor
This error should only ever happen if you are constructing your own MongoCursor and overriding its constructor.
This would happen with, for example:
class MyCursor extends MongoCursor {
    function __construct(MongoClient $connection, $ns, array $query = array(), array $fields = array()) {
        /* Do some work, forgetting to call parent::__construct(....); */
    }
}
If you are not extending any of the classes, then this error is definitely a bug and you should report it please :)
How do I guarantee that the MongoClient will connect to at least one of the hosts
Put at least one member of each datacenter into your seed list.
Tune the various timeout options, and plan for the case when the primary is down (e.g. which servers should you be reading from instead?)
I suspect you may have forgotten to specify the "replicaSet" option, since your connection string does not mention it?
The following snippet is what I recommend (adapt at will), especially when you require full consistency when possible (e.g. always reading from a primary).
<?php
$seedList = "hostname1:port,hostname2:port";
$options = array(
    // If the server is down, don't wait forever
    "connectTimeoutMS" => 500,
    // When the server goes down in the middle of an operation, don't wait forever
    "socketTimeoutMS" => 5000,
    "replicaSet" => "ReplicasetName",
    "w" => "majority",
    // Don't wait forever for majority write acknowledgment
    "wtimeout" => 500,
    // When the primary goes down, allow reading from secondaries
    "readPreference" => MongoClient::RP_PRIMARY_PREFERRED,
    // When the primary is down, prioritize reading from our local datacenter.
    // If that datacenter is down too, fall back to any server available.
    "readPreferenceTags" => array("dc:is", ""),
);

try {
    $mc = new MongoClient($seedList, $options);
} catch (Exception $e) {
    /* I always have some sort of "Skynet"/"Ground control"/"Houston, we have a
       problem" system to automate taking down (or putting into automated
       maintenance mode) my webservers in case of epic failure. */
    automateMaintenance($e);
}
I am using Zend Framework for my PHP development, and here is a small function I use to execute a query. This is not about an error; the code works fine. But I want to know the concept behind it.
/**
 * Get dataset by executing sql statement
 *
 * @param string $sql - SQL Statement to be executed
 *
 * @return bool
 */
public function executeQuery($sql)
{
    $this->sqlStatement = $sql;
    if ($this->isDebug)
    {
        echo $sql;
        exit;
    }
    $objSQL = $this->objDB->getAdapter()->prepare($sql);
    try
    {
        return $objSQL->execute();
    }
    catch (Exception $error)
    {
        $this->logMessage($error->getMessage() . " SQL : " . $sql);
        return false;
    }
    return false;
}
Below are the areas that are unclear to me:
How does Zend_Db_Table_Abstract maintain database connections?
Does it create a new connection every time I call this function, or does it do some connection pooling?
I didn't write any code to open or close the database connection, so will Zend Framework automatically close connections?
If the connection is opened and closed every time I execute this function, is there any performance issue?
Thank you; I appreciate your suggestions and opinions on this.
Creating Connection
Creating an instance of an Adapter class does not immediately connect to the RDBMS server. The Adapter saves the connection parameters, and makes the actual connection on demand, the first time you need to execute a query. This ensures that creating an Adapter object is quick and inexpensive. You can create an instance of an Adapter even if you are not certain that you need to run any database queries during the current request your application is serving.
If you need to force the Adapter to connect to the RDBMS, use the getConnection() method. This method returns an object for the connection as represented by the respective PHP database extension. For example, if you use any of the Adapter classes for PDO drivers, then getConnection() returns the PDO object, after initiating it as a live connection to the specific database.
It can be useful to force the connection if you want to catch any exceptions it throws as a result of invalid account credentials, or other failure to connect to the RDBMS server. These exceptions are not thrown until the connection is made, so it can help simplify your application code if you handle the exceptions in one place, instead of at the time of the first query against the database.
Additionally, an adapter can be serialized to store it, for example, in a session variable. This can be very useful not only for the adapter itself, but also for other objects that aggregate it, like a Zend_Db_Select object. By default, adapters are allowed to be serialized; if you don't want this, you should consider passing the Zend_Db::ALLOW_SERIALIZATION option with FALSE, see the example above. To respect the lazy connection principle, the adapter won't reconnect itself after being unserialized; you must then call getConnection() yourself. You can make the adapter reconnect automatically by passing Zend_Db::AUTO_RECONNECT_ON_UNSERIALIZE with TRUE as an adapter option.
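For example, forcing the connection up front keeps the failure handling in one place. A minimal sketch (the adapter settings are placeholders, and handleDatabaseOutage is an illustrative handler):
<?php
// Sketch: force the lazy adapter to connect now, so credential or
// connectivity failures surface here instead of at the first query.
$adapter = Zend_Db::factory('Pdo_Mysql', array(
    'host'     => 'db.example.com',
    'username' => 'webuser',
    'password' => 'secret',
    'dbname'   => 'mydb',
));

try {
    $adapter->getConnection();          // the actual connection happens here
} catch (Zend_Db_Adapter_Exception $e) {
    handleDatabaseOutage($e);           // bad credentials or unreachable RDBMS
} catch (Zend_Exception $e) {
    handleDatabaseOutage($e);           // factory/adapter class failed to load
}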
Closing a Connection
Normally it is not necessary to close a database connection. PHP automatically cleans up all resources at the end of a request. Database extensions are designed to close the connection as the reference to the resource object is cleaned up.
However, if you have a long-duration PHP script that initiates many database connections, you might need to close the connection, to avoid exhausting the capacity of your RDBMS server. You can use the Adapter's closeConnection() method to explicitly close the underlying database connection.
Since release 1.7.2, you can check whether you are currently connected to the RDBMS server with the isConnected() method. This means that a connection resource has been initiated and wasn't closed. This function is not currently able to detect, for example, a server-side closing of the connection. It is used internally to close the connection, and it allows you to close the connection multiple times without errors. This was already the case before 1.7.2 for PDO adapters, but not for the others.
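In a long-duration script, that might look roughly like the following sketch (getShardConfigs and the per-iteration query are illustrative; the adapter calls are the documented Zend_Db ones):
<?php
// Sketch: a long-running job that explicitly releases each connection
// with closeConnection() so idle sessions do not pile up on the server.
foreach (getShardConfigs() as $config) {        // illustrative helper
    $adapter = Zend_Db::factory('Pdo_Mysql', $config);

    // ... run the queries this iteration needs ...
    $rows = $adapter->fetchAll('SELECT 1');

    if ($adapter->isConnected()) {              // available since 1.7.2
        $adapter->closeConnection();            // free the connection now
    }
}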
I have a web page with some MySQL queries.
Must I open a new DB connection and close it after every query, or is it more efficient to open a single DB connection at the top of the script and close it at the bottom?
Thanks!
Never create a new connection for every query; it's very wasteful and will overburden your MySQL server.
Creating one connection at the top of your script is enough - all subsequent queries will use that connection until the end of the page. You can even use that connection in other scripts that are include()ed into the script containing your mysql_connect() call.
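A minimal sketch of that pattern (shown with mysqli instead of the old mysql_* functions; host, credentials, and file names are placeholders):
<?php
// db.php -- included by every page; the connection is opened once here.
$link = mysqli_connect('localhost', 'webuser', 'secret', 'mydb');

// page.php -- require 'db.php'; then reuse $link for every query:
$result = mysqli_query($link, 'SELECT 1');
// ... all later queries on this page use the same $link ...
// No mysqli_close() needed; PHP closes the link when the request ends.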
I prefer to create a class called Database which, upon creation, opens a database connection using mysqli (the OOP counterpart of mysql).
This way I do not have to worry about that, and I have a static class that has access to the database.
class MyAPI {
    private static $database = false;

    public static function GetDatabase() {
        if (MyAPI::$database === false) {
            MyAPI::$database = new Database(/* credentials can go here */);
        }
        return MyAPI::$database;
    }
}
Now your system, with the inclusion of your API and your Database class, can access the database, and the initialization code does not have to be peppered around your files depending on which part of the program is running; the database connection will not be created until it is actually needed.
Just for fun, I also like to have a function in my Database class that performs a query and returns its results, so I do not have to repeat the SAME while loop over and over throughout my code wherever I make these calls (see the sketch below).
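That helper might look roughly like this sketch (the Database class wrapping mysqli is assumed from the description above; fetchAll is an illustrative method name):
class Database {
    private $connection;

    public function __construct($host, $user, $pass, $dbname) {
        $this->connection = new mysqli($host, $user, $pass, $dbname);
    }

    // Run a SELECT and return every row, so callers never repeat the fetch loop.
    public function fetchAll($sql) {
        $result = $this->connection->query($sql);
        $rows = array();
        while ($row = $result->fetch_assoc()) {   // the while loop lives here, once
            $rows[] = $row;
        }
        $result->free();
        return $rows;
    }
}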
I also forgot to answer your question: no multiple connections! It's baaaad.
Only create one connection. You don't even have to close it manually.
Multiple connections can be better for security and tracking.
If you are going through a firewall between the front end and the back end, there will be a timeout defined, so a single long-lived connection can be a problem.
But if your web server is on the same host as MySQL, a single connection will be more efficient.