I'm writing unit tests for a Symfony 2 / Doctrine 2 application and I'm encountering what looks like a concurrency issue.
The code looks like this:
$obj = new Obj();
$obj->setName('test');
... etc ...
$em->persist($obj);
$em->flush();
...
$qb = $em->getRepository('Obj');
// Select object using DQL
$this->assertTrue($obj !== null);
At this point $obj is often null, i.e. the DQL query has failed to find it. If I add a breakpoint and pause execution somewhere before the DQL runs, $obj is always found. Without the pause it is usually not found, though occasionally it is.
I have tried wrapping the insertion in a transaction:
$em->getConnection()->beginTransaction();
$obj = new Obj();
$obj->setName('test');
... etc ...
$em->persist($obj);
$em->flush();
$em->getConnection()->commit();
This doesn't seem to help.
I have tried adding a pause between the insert and the DQL query:
sleep(1);
With the pause in place I consistently get the expected behaviour, hence my conclusion that this is a concurrency issue, or at least something to do with Doctrine not writing through immediately on flush.
Is there any way with Doctrine 2 to force a write to the database to complete? Or an event to listen for? Or am I doing something else wrong here?
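As an aside on the "is there an event to listen for" part: Doctrine 2 fires a postFlush event at the end of flush(), once its SQL has been executed. A minimal, illustrative sketch (the listener class name here is made up, not part of the original code):
use Doctrine\ORM\Event\PostFlushEventArgs;
use Doctrine\ORM\Events;

class FlushCompletedListener
{
    public function postFlush(PostFlushEventArgs $args)
    {
        // The flush's INSERT/UPDATE/DELETE statements have been executed at this point
    }
}

$em->getEventManager()->addEventListener(Events::postFlush, new FlushCompletedListener());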
Having followed zerkms' advice to enable logging I quickly tracked the problem down to a fault in my own code. There was no concurrency issue.
On my Mac I did the following:
Edit /usr/local/etc/my.cnf and add these lines:
general_log_file = /tmp/query.log
general_log = 1
Then restart MySQL so the change is picked up:
brew services restart mysql
Next I tailed the file:
tail -f /tmp/query.log
I saw the two queries - an insert and a select - executing in the right order. Running those queries directly in MySQL client revealed the error in my code.
Note: putting the log in /tmp/ means the OS cleans it up automatically, so old query logs don't pile up.
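Alternatively, assuming your MySQL user has the SUPER privilege, you can switch the general log on at runtime from the test itself instead of editing my.cnf; a rough sketch through the DBAL connection:
// Sketch only: enable the MySQL general query log for the duration of a test run
$conn = $em->getConnection();
$conn->exec("SET GLOBAL general_log_file = '/tmp/query.log'");
$conn->exec("SET GLOBAL general_log = 'ON'");
// ... run the code under test ...
$conn->exec("SET GLOBAL general_log = 'OFF'");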
Related
I have a Symfony Command that uses the Doctrine Paginator on PHP 7.0.22. The command must process data from a large table, so I do it in chunks of 100 items. The issue is that after a few hundred loops it fills up the 256M of RAM. As measures against OOM (out of memory) I use:
$em->getConnection()->getConfiguration()->setSQLLogger(null); - disables the SQL logger, which otherwise fills memory with logged queries in scripts that run many SQL commands
$em->clear(); - detaches all objects from Doctrine at the end of every loop
I've put some dumps with memory_get_usage() to check what's going on, and it seems that the garbage collector doesn't reclaim as much as the command adds at every $paginator->getIterator()->getArrayCopy(); call.
I've even tried to manually collect the garbage in every loop with gc_collect_cycles(), but it makes no difference: the command starts at 18M and grows by ~2M every few hundred items. I've also tried to manually unset the results and the query builder... nothing. I removed all the data processing and kept only the select query and the paginator, and got the same behaviour.
Does anyone have an idea where I should look next?
Note: 256M should be more than enough for this kind of operation, so please don't recommend solutions that suggest increasing the allowed memory.
The stripped-down execute() method looks something like this:
protected function execute(InputInterface $input, OutputInterface $output)
{
    // Remove SQL logger to avoid out of memory errors
    $em = $this->getEntityManager(); // method defined in base class
    $em->getConnection()->getConfiguration()->setSQLLogger(null);

    $firstResult = 0;

    // Get latest ID
    $maxId = $this->getMaxIdInTable('AppBundle:MyEntity'); // method defined in base class
    $this->getLogger()->info('Working for max media id: ' . $maxId);

    do {
        // Get data
        $dbItemsQuery = $em->createQueryBuilder()
            ->select('m')
            ->from('AppBundle:MyEntity', 'm')
            ->where('m.id <= :maxId')
            ->setParameter('maxId', $maxId)
            ->setFirstResult($firstResult)
            ->setMaxResults(self::PAGE_SIZE)
        ;
        $paginator = new Paginator($dbItemsQuery);
        $dbItems = $paginator->getIterator()->getArrayCopy();

        $totalCount = count($paginator);
        $currentPageCount = count($dbItems);

        // Clear Doctrine objects from memory
        $em->clear();

        // Update first result
        $firstResult += $currentPageCount;
        $output->writeln($firstResult);
    } while ($currentPageCount == self::PAGE_SIZE);

    // Finish message
    $output->writeln("\n\n<info>Done running <comment>" . $this->getName() . "</comment></info>\n");
}
The memory leak was generated by the Doctrine Paginator. I replaced it with a native query using a Doctrine prepared statement, and that fixed it.
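As an illustration only (the table and column names below are assumptions, not the real schema), such a replacement can look roughly like a DBAL prepared statement paged with an explicit LIMIT/OFFSET:
$conn = $em->getConnection();
$stmt = $conn->prepare(
    'SELECT m.* FROM my_entity m WHERE m.id <= :maxId ORDER BY m.id LIMIT :limit OFFSET :offset'
);
do {
    $stmt->bindValue('maxId', $maxId, \PDO::PARAM_INT);
    $stmt->bindValue('limit', self::PAGE_SIZE, \PDO::PARAM_INT);
    $stmt->bindValue('offset', $firstResult, \PDO::PARAM_INT);
    $stmt->execute();

    // Plain arrays: nothing is hydrated or tracked by the UnitOfWork
    $rows = $stmt->fetchAll(\PDO::FETCH_ASSOC);
    $currentPageCount = count($rows);

    // ... process $rows ...

    $firstResult += $currentPageCount;
} while ($currentPageCount == self::PAGE_SIZE);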
Other things that you should take into consideration:
If you replace the Doctrine Paginator, you need to rebuild the pagination functionality yourself by adding a limit/offset to your query (see the sketch above).
Run your command with the --no-debug flag or --env=prod, or maybe both. The point is that commands run in the dev environment by default, which enables some data collectors that are not used in the prod environment. See more on this topic in the Symfony documentation - How to Use the Console.
Edit: In my particular case I was also using the eightpoints/guzzle-bundle bundle, which wraps the Guzzle HTTP library (I had some API calls in my command). This bundle was also leaking, apparently through some middleware. To fix it, I had to instantiate the Guzzle client directly, without the EightPoints bundle.
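For example, something as simple as constructing the client yourself (Guzzle 6 syntax; the options shown are placeholders, not the real configuration):
$client = new \GuzzleHttp\Client([
    'base_uri' => 'https://api.example.com', // assumption: whatever base URI your API calls use
    'timeout'  => 10,
]);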
I'm creating a console command for my bundle with Symfony 2. This command executes several requests to the database (MySQL). In order to debug my command I need to know how many SQL queries have been executed during the command's execution and, if possible, to see those queries (like the Symfony profiler does).
I have the same problem with AJAX requests: when I make an AJAX request, I can't tell how many queries have been executed during the request.
You can enable Doctrine logging like this:
$doctrine = $this->get('doctrine'); // or $this->getDoctrine() in a controller
$connection = $doctrine->getConnection();
// $doctrine->getManager() did not work for me
// (resulted in $stack->queries being an empty array)
$stack = new \Doctrine\DBAL\Logging\DebugStack();
$connection->getConfiguration()->setSQLLogger($stack);
... // do some queries
var_dump($stack->queries);
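Since $stack->queries is a plain array (each entry holds the SQL, its parameters and the execution time in executionMS), the number of executed queries is simply:
echo count($stack->queries);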
You can also have a look at: http://vvv.tobiassjosten.net/symfony/logging-doctrine-queries-in-symfony2/
To give Caesar what is Caesar's, I found it here: Count queries to database in Doctrine2
You can put all this logic into the domain model and treat the command only as an invoker. Then you can drive the same domain model from a web controller and use the profiler to diagnose it.
A second thing is that you should have an integration test for this, which also lets you verify the execution time.
I'm using Symfony 2.7
Instead of using Doctrine, I've built a service called RawSQLManager which runs all my raw SQL statements.
Randomly, when I request the same URL multiple times, I get a critical error in my dev.log file. I don't understand how this is possible.
At the beginning of my project I built this function:
public function getFolderInfo($id_folder)
{
    $sql = 'SELECT zs.ID, zs.Nom, zs.Principale, zs.Archive, zs.DateHeureModification
            FROM Zones_Stockages zs
            WHERE zs.ID = :id_folder;';
    $params['id_folder'] = $id_folder;

    $stmt = $this->conn->prepare($sql);
    $stmt->execute($params);

    return $stmt->fetch();
}
Now I no longer use this function; it has been deleted and nothing in my whole project calls it.
But here is my log:
[2016-01-05 10:16:09] doctrine.DEBUG: SELECT t0.Principale AS Principale_1, t0.Archive AS Archive_2, t0.Nom AS Nom_3, t0.DateHeureModification AS DateHeureModification_4, t0.ID AS ID_5, t0.Abonnements_ID AS Abonnements_ID_6 FROM Zones_Stockages t0 WHERE t0.ID = ? ["7385"] []
As you can see, that is essentially the same statement, yet nowhere in my project is there a similar one.
[EDIT]: there is no Abonnements_ID in my original statement.
I've cleared my cache with the following :
app/console doctrine:cache:clear-metadata
app/console doctrine:cache:clear-query
app/console doctrine:cache:clear-result
and even deleted my cache folder. The statement is still executed.
This weird query runs on every request I make to the server, and it is sometimes followed by a critical error involving my Abonnements entity. I don't understand what that has to do with the previous call, nor how to start debugging what's wrong with this entity.
[2016-01-05 10:16:09] request.CRITICAL: Uncaught PHP Exception Symfony\Component\Debug\Exception\ContextErrorException: "Warning: rename(C:\wamp\www\app\cache\dev/doctrine/orm/Proxies\__CG__AppBundleEntityAbonnements.php.568b89d923b5a9.13683491,C:\wamp\www\app\cache\dev/doctrine/orm/Proxies\__CG__AppBundleEntityAbonnements.php): " at C:\wamp\www\vendor\doctrine\common\lib\Doctrine\Common\Proxy\ProxyGenerator.php line 306 {"exception":"[object] (Symfony\\Component\\Debug\\Exception\\ContextErrorException(code: 0): Warning: rename(C:\\wamp\\www\\app\\cache\\dev/doctrine/orm/Proxies\\__CG__AppBundleEntityAbonnements.php.568b89d923b5a9.13683491,C:\\wamp\\www\\app\\cache\\dev/doctrine/orm/Proxies\\__CG__AppBundleEntityAbonnements.php): at C:\\wamp\\www\\vendor\\doctrine\\common\\lib\\Doctrine\\Common\\Proxy\\ProxyGenerator.php:306)"} []
So, two issues:
A call to a ghost statement.
A critical error that happens randomly on the same URL, with the same parameters.
What did I miss?
About the "ghost statement", the reason can only be: the code is somehow still present in the application ... or some cache is in place (for example Opcache or something like that in the web server).
About the "critical error", the line of code that triggers it is rename($tmpFileName, $fileName); inside the code that creates Doctrine proxies for entities. Since this is just a file rename, this could be a permission issue in your cache directory.
The problem is that Doctrine's proxy class generation code doesn't handle concurrent requests very well. It works on Unix-like systems, but not on Windows, where you can't rename over a file that is currently open.
See the configuration of the DoctrineBundle. You'll most likely have auto_generate_proxy_classes set to "%kernel.debug%" (this is the default setting in the Symfony Standard Edition).
Try changing auto_generate_proxy_classes to false. You'll now have to manually clear the cache if you change your entities, but that error should be gone.
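For reference, in a standard Symfony 2 layout that setting lives under the doctrine section of app/config/config.yml, roughly:
doctrine:
    orm:
        auto_generate_proxy_classes: false  # the Standard Edition default is "%kernel.debug%"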
I have a long-running daemon (a Symfony2 Command) that gets work off a work queue in Redis, performs those jobs and writes to the database using the ORM.
I noticed that the worker tends to die because the connection to MySQL times out while the worker is idling, waiting for work.
Specifically, I see this in the log: MySQL Server has gone away.
Is there any way I can have Doctrine reconnect automatically? Or is there some way I can manually catch the exception and reconnect the Doctrine ORM?
Thanks
I'm using this in my symfony2 beanstalkd daemon Command worker:
$em = $this->getContainer()->get('doctrine')->getManager();

if ($em->getConnection()->ping() === false) {
    $em->getConnection()->close();
    $em->getConnection()->connect();
}
It appears that whenever there is any error/exception encountered by the EntityManager in Doctrine, the connection is closed and the EntityManager is dead.
Since generally everything is wrapped in a transaction that is executed when $entityManager->flush() is called, you can try to catch the exception and either attempt to re-execute or give up.
You may wish to examine the exact nature of the exception with a more specific catch on the type, whether PDOException or something else.
For a "MySQL server has gone away" exception, you can try to reconnect by resetting the EntityManager.
$managerRegistry = $this->getContainer()->get('doctrine');
$em = $managerRegistry->getEntityManager();
$managerRegistry->resetEntityManager();
This should make the $em usable again. Note that you would have to re-persist everything again, since this $em is new.
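Putting those pieces together, a rough sketch of a catch-and-retry around flush() (same registry calls as above; $entity stands for whatever you were saving, and you will probably want a narrower catch once you know the exact exception type):
try {
    $em->flush();
} catch (\Exception $e) {
    $managerRegistry = $this->getContainer()->get('doctrine');
    $managerRegistry->resetEntityManager();
    $em = $managerRegistry->getEntityManager(); // fresh, open EntityManager
    $em->persist($entity);                      // the old UnitOfWork is gone, so re-persist
    $em->flush();
}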
I had the same problem with a PHP Gearman worker and Doctrine 2.
The cleanest solution that I came up with is to just close and reopen the connection for each job:
<?php

public function doWork($job)
{
    /* @var $em \Doctrine\ORM\EntityManager */
    $em = Zend_Registry::getInstance()->entitymanager;
    $em->getConnection()->close();
    $em->getConnection()->connect();
}
Update
The solution above doesn't cope with transaction status. That is, the Doctrine\DBAL\Connection::close() method doesn't reset the $_transactionNestingLevel value, so if you don't commit a transaction, Doctrine will be out of sync with the underlying DBMS on the transaction status. This can lead to Doctrine silently ignoring begin/commit/rollback statements and eventually to data not being committed to the DBMS.
In other words: be sure to commit/rollback transactions if you use this method.
With this wrapper it worked for me:
https://github.com/doctrine/dbal/issues/1454
In your daemon you can add a method that resets the connection, possibly before every query. I was facing similar problems using a Gearman worker.
I keep my connection data in the Zend registry, so it looks like this:
private function resetDoctrineConnection()
{
    $doctrineManager = Doctrine_Manager::getInstance();
    $doctrineManager->reset();

    $dsn = Zend_Registry::get('dsn');
    $manager = Doctrine_Manager::getInstance();
    $manager->setAttribute(Doctrine_Core::ATTR_AUTO_ACCESSOR_OVERRIDE, true);
    Doctrine_Manager::connection($dsn, 'doctrine');
}
If it is a daemon you may need to call it statically.
I'm having problems with a batch insertion of objects into a database using symfony 1.4 and doctrine 1.2.
My model has a certain kind of object called "Sector", each of which has several objects of type "Cupo" (usually ranging from 50 up to 200000). These objects are pretty small; just a short identifier string and one or two integers. Whenever a group of Sectors are created by the user, I need to automatically add all these instances of "Cupo" to the database. In case anything goes wrong, I'm using a doctrine transaction to roll back everything. The problem is that I can only create around 2000 instances before php runs out of memory. It currently has a 128MB limit, which should be more than enough for handling objects that use less than 100 bytes. I've tried increasing the memory limit up to 512MB, but php still crashes and that doesn't solve the problem. Am I doing the batch insertion correctly or is there a better way?
Here's the error:
Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 71 bytes) in /Users/yo/Sites/grifoo/lib/vendor/symfony/lib/log/sfVarLogger.class.php on line 170
And here's the code:
public function save($conn = null)
{
    $conn = $conn ? $conn : Doctrine_Manager::connection();
    $conn->beginTransaction();
    try {
        $evento = $this->object;
        foreach ($evento->getSectores() as $s) {
            for ($j = 0; $j < $s->getCapacity(); $j++) {
                $cupo = new Cupo();
                $cupo->setActivo($s->getActivo());
                $cupo->setEventoId($s->getEventoId());
                $cupo->setNombre($j);
                $cupo->setSector($s);
                $cupo->save();
            }
        }
        $conn->commit();

        return;
    } catch (Exception $e) {
        $conn->rollback();
        throw $e;
    }
}
Once again, this code works fine for less than 1000 objects, but anything bigger than 1500 fails. Thanks for the help.
I tried doing
$cupo->save();
$cupo->free();
$cupo = null;
(but substituting my own code) and I'm still getting memory overflows. Any other ideas, SO?
Update:
I created a new environment in my databases.yml that looks like this:
all:
  doctrine:
    class: sfDoctrineDatabase
    param:
      dsn: 'mysql:host=localhost;dbname=.......'
      username: .....
      password: .....
      profiler: false
The profiler: false entry disables Doctrine's query logging, which normally keeps a copy of every query you make. It didn't stop the memory leak, but I was able to get about twice as far through my data import as I could without it.
Update 2
I added
Doctrine_Manager::connection()->setAttribute(Doctrine_Core::ATTR_AUTO_FREE_QUERY_OBJECTS, true );
before running my queries, and changed
$cupo = null;
to
unset($cupo);
And now my script has been churning away happily. I'm pretty sure it will finish without running out of RAM this time.
Update 3
Yup. That's the winning combo.
I have just written a "daemonized" script with symfony 1.4, and setting the following stopped the memory hogging:
sfConfig::set('sf_debug', false);
For a symfony task, I also faced this issue and did the following. It worked for me.
Disable debug mode. Add the following before the DB connection is initialized:
sfConfig::set('sf_debug', false);
Set the auto-free query objects attribute on the DB connection:
$connection->setAttribute(Doctrine_Core::ATTR_AUTO_FREE_QUERY_OBJECTS, true );
Free every object after use:
$object_name->free()
Unset all arrays after use: unset($array_name)
Check all Doctrine queries used in the task and free each query after use with $q->free() (this is good practice whenever you run queries). A condensed sketch combining these measures follows below.
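Put together, a condensed sketch of the original loop with these measures applied (Doctrine 1.2 / symfony 1.4, reusing $conn and $evento from the question):
sfConfig::set('sf_debug', false);
$conn->setAttribute(Doctrine_Core::ATTR_AUTO_FREE_QUERY_OBJECTS, true);
foreach ($evento->getSectores() as $s) {
    for ($j = 0; $j < $s->getCapacity(); $j++) {
        $cupo = new Cupo();
        $cupo->setActivo($s->getActivo());
        $cupo->setEventoId($s->getEventoId());
        $cupo->setNombre($j);
        $cupo->setSector($s);
        $cupo->save($conn);
        $cupo->free(true); // also free related components
        unset($cupo);
    }
}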
That's all. Hope it may help someone.
Doctrine leaks and there's not much you can do about it. Make sure you use $q->free() whenever applicable to minimize the effect.
Doctrine is not meant for maintenance scripts. The only way to work around this problem is to break your script into parts that each perform a portion of the task. One way to do that is to add a start parameter to your script; after a certain number of objects has been processed, the script calls itself again with a higher start value. This works well for me, although it makes writing maintenance scripts more cumbersome.
Try unset($cupo); after every save. That should help. Another option is to split the script and do some batch processing.
Try to break the circular reference, which usually causes memory leaks, with
$cupo->save();
$cupo->free(); //this call
as described in the Doctrine manual.
For me, I just initialized the task like this:
// initialize the database connection
$databaseManager = new sfDatabaseManager($this->configuration);
$connection = $databaseManager->getDatabase($options['connection'])->getConnection();
$config = ProjectConfiguration::getApplicationConfiguration('frontend', 'prod', true);
sfContext::createInstance($config);
(with the prod configuration) and used free() after save() on Doctrine objects.
The memory stays stable at around 25 MB (memory_get_usage() reports ~26.88 MB) with PHP 5.3 on Debian Squeeze.
Periodically close and re-open the connection - not sure why, but it seems PDO is retaining references.
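In Doctrine 1 terms that can be as simple as the following between batches (a sketch, assuming the current connection comes from Doctrine_Manager):
$conn = Doctrine_Manager::connection();
$conn->close();   // drop the underlying PDO handle
$conn->connect(); // re-establish the connection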
What works for me is calling the free method like this:
$cupo->save();
$cupo->free(true); // free also the related components
unset($cupo);