I'm using the CakePHP ORM package inside a Gearman Worker.
Package : https://packagist.org/packages/cakephp/orm
$connectionObject = ConnectionManager::get('Backend');
$usersTable = TableRegistry::get('Users', ['connection' => $connectionObject]);
$countActiveUsers = $usersTable->find()->where(['active' => 1])->count();
I'm trying to find a way to disconnect from the database when a job finishes processing, because right now, even when there is no job in the queue, the connection between the worker and the database remains open.
Thanks in advance!
You can use disconnect() on the Connection object.
$connectionObject->disconnect();
See: http://api.cakephp.org/3.2/source-class-Cake.Database.Connection.html#190-198
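A minimal sketch of where that call fits in a worker loop, assuming the pecl gearman extension and the 'Backend' connection from the question (the job name countActiveUsers is hypothetical):
<?php
use Cake\Datasource\ConnectionManager;
use Cake\ORM\TableRegistry;

$worker = new GearmanWorker();
$worker->addServer('127.0.0.1', 4730);

$worker->addFunction('countActiveUsers', function (GearmanJob $job) {
    $connectionObject = ConnectionManager::get('Backend');
    $usersTable = TableRegistry::get('Users', ['connection' => $connectionObject]);
    $count = $usersTable->find()->where(['active' => 1])->count();

    // Drop the MySQL connection before going back to waiting for jobs,
    // so the idle worker holds no open connection:
    $connectionObject->disconnect();

    return (string)$count;
});

while ($worker->work());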
Related
I am trying to insert a record into a database from data coming in to a React socket server. I am lost on how to do the operation in a non-blocking way.
use React\EventLoop\Factory;
use React\Socket\ConnectionInterface;
use React\Socket\Server;

$loop = Factory::create();
$server = new Server('127.0.0.1:4040', $loop);
$database = new Database();

$server->on('connection', function (ConnectionInterface $conn) use ($database) {
    $conn->write('Welcome, you can start writing your notes now...');
    $conn->on('data', function ($data) use ($conn, $database) {
        $database->write($data);
        $conn->write('I am supposed to execute before database write');
    });
});

$loop->run();
The write method in Database has a sleep(10) seconds before executing the SQL statement, so I was expecting the next message, "I am supposed to...", to be printed immediately.
My expectation was that whenever there is an I/O operation, it would be handed off to the event loop and not block the call stack, as per the definition of an event loop and non-blocking I/O.
How can I perform the same operation in a non-blocking way?
Thanks
Hey, ReactPHP core team member here. The loop expects everything to be asynchronous, so putting a sleep() in your $database->write($data); will block the loop. Your database connection has to utilise the event loop for it to be non-blocking. My suggestion would be to look at https://github.com/friends-of-reactphp/mysql or https://github.com/voryx/PgAsync, or check the list at https://github.com/reactphp/react/wiki/Users#databases, depending on your database. ReactPHP won't magically make everything non-blocking; you have to use packages that take care of that for you.
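As a rough sketch of what that looks like with friends-of-reactphp/mysql (the DSN, table name, and column are assumptions for illustration, not part of the original question):
<?php
require 'vendor/autoload.php';

use React\EventLoop\Factory;
use React\MySQL\Factory as MySQLFactory;
use React\Socket\ConnectionInterface;
use React\Socket\Server;

$loop = Factory::create();
$server = new Server('127.0.0.1:4040', $loop);

// A lazy connection queues queries until the connection is ready:
$mysql = new MySQLFactory($loop);
$database = $mysql->createLazyConnection('user:pass@localhost/notesdb');

$server->on('connection', function (ConnectionInterface $conn) use ($database) {
    $conn->write('Welcome, you can start writing your notes now...');
    $conn->on('data', function ($data) use ($conn, $database) {
        // query() returns a promise immediately; the loop keeps running
        // while MySQL does its work, so the next write is not delayed:
        $database->query('INSERT INTO notes (body) VALUES (?)', [$data])
            ->then(function () use ($conn) {
                $conn->write('Note saved');
            });
        $conn->write('I am supposed to execute before database write');
    });
});

$loop->run();
Because the insert is asynchronous, "I am supposed to execute before database write" really is written first, and "Note saved" follows once MySQL answers.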
I want to implement connection pooling in PHP in a similar way to how it works in Java.
Why I need this:
Let's consider a flow:
Step 1: Connect to DB --- Resource Id #12
Step 2: Some computation... taking ~0.3 seconds
Step 3: Query on Solr... taking ~2 seconds
Step 4: Connect to DB --- Resource Id #12 (reusing the same resource id)
Step 5: Exit
Though in step 4 I am using the same DB resource as in step 1, the connection sits in the sleep state during both step 2 and step 3, and therefore can't be used by any other PHP process (other clients) until exit.
Possible solutions:
Call mysql_close every time after a query is fired. Drawback: you need to reconnect every time, which is time-consuming.
Create a Java service to handle the queries (possible, but too time-consuming, and I am looking for a solution that doesn't force me to migrate my queries).
Explore a third-party tool like SQL Relay, but I am not sure it would be a success, and not many reputable companies have used it.
mysql_pconnect is not solving my case.
Please suggest.
One way that you can apply scalability techniques to this pool model is to allow on-the-fly changes to your pool distribution. If you have a particular permalink that is extremely popular for some reason, you could move slaves from the primary pool to the comments pool to help it out. By isolating load, you've managed to give yourself more flexibility. You can add slaves to any pool, move them between pools, and in the end dial in the performance that you need at your current traffic level.
There's one additional benefit that you get from MySQL database pooling, which is a much higher hit rate on your query cache. MySQL (like most database systems) has a query cache built into it. This cache holds the results of recent queries. If the same query is re-executed, the cached results can be returned quickly.
If you have 20 database slaves and execute the same query twice in a row, you only have a 1/20th chance of hitting the same slave and getting a cached result. But by sending certain classes of queries to a smaller set of servers, you can drastically increase the chance of a cache hit and get greater performance.
You will need to handle database pooling within your code - a natural extension of the basic load balancing code in Part 1. Let’s look at how we might extend that code to handle arbitrary database pools:
<?php
class DB {
    // Configuration information:
    private static $user = 'testUser';
    private static $pass = 'testPass';
    private static $config = array(
        'write' =>
            array('mysql:dbname=MyDB;host=10.1.2.3'),
        'primary' =>
            array('mysql:dbname=MyDB;host=10.1.2.7',
                  'mysql:dbname=MyDB;host=10.1.2.8',
                  'mysql:dbname=MyDB;host=10.1.2.9'),
        'batch' =>
            array('mysql:dbname=MyDB;host=10.1.2.12'),
        'comments' =>
            array('mysql:dbname=MyDB;host=10.1.2.27',
                  'mysql:dbname=MyDB;host=10.1.2.28'),
    );

    // Static method to return a database connection to a certain pool
    public static function getConnection($pool) {
        // Make a copy of the server array, to modify as we go:
        $servers = self::$config[$pool];
        $connection = false;

        // Keep trying to make a connection:
        while (!$connection && count($servers)) {
            $key = array_rand($servers);
            try {
                $connection = new PDO($servers[$key],
                    self::$user, self::$pass);
            } catch (PDOException $e) {}

            if (!$connection) {
                // Couldn't connect to this server, so remove it:
                unset($servers[$key]);
            }
        }

        // If we never connected to any database, throw an exception:
        if (!$connection) {
            throw new Exception("Failed Pool: {$pool}");
        }
        return $connection;
    }
}

// Do something Comment related
$comments = DB::getConnection('comments');
. . .
?>
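For instance, reads and writes can then be pointed at different pools; a small usage sketch (the users table and its columns are made up for illustration):
<?php
// Reads can go to the general-purpose 'primary' pool:
$read = DB::getConnection('primary');
$activeUsers = $read->query('SELECT COUNT(*) FROM users WHERE active = 1')
                    ->fetchColumn();

// Writes always go to the single-master 'write' pool:
$write = DB::getConnection('write');
$update = $write->prepare('UPDATE users SET active = 0 WHERE id = ?');
$update->execute(array(42));
?>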
I'm using the Slim framework to provide an API to our clients. I'm focused on Java, so that could be the point, because the problem is related to database connections.
Every request connects to the database through PDO, like
$dbh = new PDO($param1, $param2, $param3); // PHP 7.1
So, as PHP doesn't have a connection pool, I understand each request (maybe several at the same time) starts a new connection to the database.
But I'm experiencing a strange kind of situation here.
If one request starts a transaction for inserting or updating something, the next requests will wait until that transaction finishes. But each request is on a different thread, so I'm thinking of table locks? Or something like each request getting the same connection.
THE PROBLEM:
So if they are on different threads with different connections, why do
they need to wait until the transaction ends for the first request?
My database source class is something like
class DataBaseConfig {
    public static function getConn()
    {
        $dbh = new PDO($param1, $param2, $param3);
        $dbh->exec("set names utf8");
        return $dbh;
    }
}
And I use it like
$db = DataBaseConfig::getConn();
$db->beginTransaction();
//code
$db->commit();
I'm surely missing something, so can someone help me with this problem?
Thanks!
NEWS! EDIT
@YourCommonSense thank you, it was a session configuration issue after all.
I didn't know that PHP's session_start() locks the session file, so that's
what was happening. Thanks a lot!
link: https://ma.ttias.be/php-session-locking-prevent-sessions-blocking-in-requests/
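For anyone hitting the same issue, the usual pattern is to release the session lock before the long-running work; a minimal sketch reusing the DataBaseConfig class from the question:
<?php
session_start();

// Read whatever session data the request needs first:
$userId = isset($_SESSION['user_id']) ? $_SESSION['user_id'] : null;

// Release the session file lock; without this, PHP serializes all
// requests that share the same session:
session_write_close();

// The long-running transaction no longer blocks parallel requests:
$db = DataBaseConfig::getConn();
$db->beginTransaction();
// ... inserts/updates ...
$db->commit();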
I have a long running daemon (Symfony2 Command) that gets work off a work queue in Redis, and performs those jobs and writes to the database using the orm.
I noticed that there is a tendency for the worker to die because the connection to MySQL timed out while the worker is idling waiting for work.
Specifically, I see this in the log: MySQL Server has gone away.
Is there any way I can have Doctrine automatically reconnect? Or is there some way I can manually catch the exception and reconnect the Doctrine ORM?
Thanks
I'm using this in my symfony2 beanstalkd daemon Command worker:
$em = $this->getContainer()->get('doctrine')->getManager();

// If the connection dropped while the worker was idle, ping() fails,
// so force a clean reconnect before doing any work:
if ($em->getConnection()->ping() === false) {
    $em->getConnection()->close();
    $em->getConnection()->connect();
}
It appears that whenever there is any error/exception encountered by the EntityManager in Doctrine, the connection is closed and the EntityManager is dead.
Since generally everything is wrapped in a transaction, and that transaction is executed when $entityManager->flush() is called, you can try to catch the exception and attempt to re-execute, or give up.
You may wish to examine the exact nature of the exception with more specific catch on the type, whether PDOException or something else.
For a "MySQL server has gone away" exception, you can try to reconnect by resetting the EntityManager.
$managerRegistry = $this->getContainer()->get('doctrine');
$managerRegistry->resetEntityManager();
$em = $managerRegistry->getEntityManager();
This should make the $em usable again. Note that you would have to re-persist everything again, since this $em is new.
I had the same problem with a PHP Gearman worker and Doctrine 2.
The cleanest solution that I came up with is: just close and reopen the connection at each job:
<?php
public function doWork($job)
{
    /* @var $em \Doctrine\ORM\EntityManager */
    $em = Zend_Registry::getInstance()->entitymanager;

    // Close and reopen the connection at the start of each job:
    $em->getConnection()->close();
    $em->getConnection()->connect();
}
Update
The solution above doesn't cope with transaction status. The Doctrine\DBAL\Connection::close() method doesn't reset the $_transactionNestingLevel value, so if you don't commit a transaction, Doctrine will be out of sync on the transaction status with the underlying DBMS. This could lead to Doctrine silently ignoring begin/commit/rollback statements and eventually to data not being committed to the DBMS.
In other words: be sure to commit/rollback transactions if you use this method.
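A sketch of that advice applied to the job above, so an open transaction is always resolved before the connection is recycled (the flush placement is illustrative):
<?php
public function doWork($job)
{
    /* @var $em \Doctrine\ORM\EntityManager */
    $em = Zend_Registry::getInstance()->entitymanager;
    $em->getConnection()->close();
    $em->getConnection()->connect();

    $em->getConnection()->beginTransaction();
    try {
        // ... the actual work ...
        $em->flush();
        $em->getConnection()->commit();
    } catch (\Exception $e) {
        // Roll back so $_transactionNestingLevel stays in sync
        // with the underlying DBMS:
        $em->getConnection()->rollBack();
        throw $e;
    }
}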
With this wrapper it worked for me:
https://github.com/doctrine/dbal/issues/1454
In your daemon you can add a method to restart the connection, possibly before every query. I was facing similar problems using a Gearman worker.
I keep my connection data in the Zend registry, so it looks like this:
private function resetDoctrineConnection()
{
    $doctrineManager = Doctrine_Manager::getInstance();
    $doctrineManager->reset();

    $dsn = Zend_Registry::get('dsn');
    $doctrineManager->setAttribute(Doctrine_Core::ATTR_AUTO_ACCESSOR_OVERRIDE, true);
    Doctrine_Manager::connection($dsn, 'doctrine');
}
If it is a daemon, you may need to call it statically.
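A short sketch of how it might be wired into the worker (the processJob callback name is hypothetical):
public function processJob($job)
{
    // Reconnect up front so an idle timeout ("MySQL server has gone
    // away") never hits the job's real queries:
    $this->resetDoctrineConnection();

    // ... run the job's Doctrine queries ...
}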
I'm using gearman to distribute long running tasks across multiple worker servers. For one of my worker tasks, I attempt to invoke another background job. The background job is performed by another worker successfully... but that worker process doesn't respond to any new jobs that are added to gearman afterwards.
Anyone know what might be going on? Is this a feature of gearman?
EDIT:
Also, if I restart my workers, they repeat the task that was queued by the other worker. Gearman appears not to recognize that the job has completed.
EDIT 2:
tried:
var_dump($this->conn);
var_dump($this->handle);
From within the worker function that's called from my other worker, this is the output I receive:
NULL
string(0) ""
EDIT 3:
Well, I came up with a hacky way to solve this. The following is the relevant snippet of code. I'm using CodeIgniter for my project, and my Gearman servers are stored in an array. I simply test in my job code whether the connection is null, and if so re-establish it using a random Gearman server. I'm sure this sucks, so if anyone has improved insight I would very much appreciate it.
class Net_Gearman_Job_notification_vc_friends_new_user extends Net_Gearman_Job_Common
{
    private $CI;

    function __construct()
    {
        $this->CI =& get_instance();

        if (!$this->conn) {
            $gearman = $this->CI->config->item('gearman');
            $servers = $gearman['servers'];
            $key = array_rand($servers);
            $this->conn = Net_Gearman_Connection::connect($servers[$key]);
        }
    }
}
Figured it out! Pretty stupid actually: I forgot to call
parent::__construct(); in my constructor... oops.
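For reference, the fixed constructor just adds the parent call first (the parent's argument list here is assumed from the Net_Gearman package):
function __construct($conn, $handle)
{
    // The missing call: Net_Gearman_Job_Common sets up $this->conn
    // and $this->handle before the job runs:
    parent::__construct($conn, $handle);
    $this->CI =& get_instance();
}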