How to get all the queries executed by Doctrine? - php

I'm creating a console command for my bundle with Symfony 2. This command executes several requests to the database (MySQL). In order to debug my command, I need to know how many SQL queries have been executed during the command's execution and, if possible, to display those queries (like the Symfony profiler does).
I have the same problem with AJAX requests. When I make an AJAX request, I can't tell how many queries have been executed during the request.

You can enable Doctrine's SQL logging like this:
$doctrine = $this->getContainer()->get('doctrine'); // in a controller: $this->get('doctrine') or $this->getDoctrine()
$conn = $doctrine->getConnection();
// $doctrine->getManager() did not work for me
// (resulted in $stack->queries being an empty array)
$stack = new \Doctrine\DBAL\Logging\DebugStack();
$conn->getConfiguration()->setSQLLogger($stack);
// ... do some queries ...
var_dump($stack->queries);
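If you want to display the statements the way the profiler does, you can iterate the stack afterwards; a minimal sketch:
echo count($stack->queries) . " queries executed\n";
foreach ($stack->queries as $query) {
    echo $query['sql'] . "\n"; // each entry also holds 'params', 'types' and 'executionMS'
}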
For more details, see: http://vvv.tobiassjosten.net/symfony/logging-doctrine-queries-in-symfony2/
To render unto Caesar what is Caesar's: I found this here: Count queries to database in Doctrine2

You can put all of this logic into the domain model and treat the command only as an invoker. Then you can drive the same domain model from a web controller and use the profiler to diagnose it.
You should also have an integration test for this, and that test can verify the execution time as well.
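A rough sketch of such a test; $this->em and $this->commandTester (a Symfony CommandTester wrapping the command) are hypothetical fixtures you would set up yourself, and the thresholds are arbitrary:
public function testCommandQueriesAndExecutionTime()
{
    $stack = new \Doctrine\DBAL\Logging\DebugStack();
    $this->em->getConnection()->getConfiguration()->setSQLLogger($stack);
    $start = microtime(true);

    $this->commandTester->execute([]);

    $this->assertLessThan(5.0, microtime(true) - $start, 'Command took too long');
    $this->assertLessThan(100, count($stack->queries), 'Command ran too many queries');
}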

Related

PHP apcu not persistent in Laravel queued/dispatched jobs

(Laravel 8, PHP 8)
Hi. I have a bunch of data in the PHP APCu cache that I can access across my Laravel application with the apcu_* functions.
I decided I should fire an async job to process some of that data for the user during a session and throw the results in the database.
So I made a middleware that fires (correctly) when the user accesses the page, and (correctly) dispatches a job called "MemoryProvider".
The dispatch command promptly instantiates the MemoryProvider class, running its constructor, and then queues the job for execution.
About a second later, the queue is processed and the handle method in MemoryProvider is run.
I check the contents of the PHP cache with apcu_cache_info() and apcu_exists() in the middleware, in the MemoryProvider constructor, and in its handle method.
The problem:
The PHP cache appears populated throughout my Laravel app.
The PHP cache appears populated in the middleware.
The PHP cache appears populated in the job's constructor.
The PHP cache appears EMPTY in the job's handle method.
Here's the middleware:
public function handle($request, Closure $next)
{
    $a = apcu_cache_info();      // 250,000 entries
    $b = apcu_exists('the:2:0'); // true

    MemoryProvider::dispatch($request);

    return $next($request);
}
Here's the job's (MemoryProvider) constructor:
public function __construct(Request $request)
{
    $this->request = $request->all();
    $a = apcu_cache_info();      // 250,000 entries
    $b = apcu_exists('the:2:0'); // true
}
And here's the job's (MemoryProvider) handle method:
public function handle()
{
    $a = apcu_cache_info();      // 0 entries
    $b = apcu_exists('the:2:0'); // false
}
Question: is this a PHP limitation or a Laravel problem? And how can I access the contents of my PHP cache in an async class?
p.s. I have apc.enable_cli=1 in php.ini
I found the answer. Apparently, it's a PHP limitation.
According to a good explanation given by gview back in 2017, a CLI process doesn't share state or memory with other CLI processes, so the APCu memory space will never be shared this way.
I did find a workaround for my specific case: instead of running an async process to handle the heavy work in the background, I can get the same effect by simply issuing an AJAX request. The request is handled independently by PHP, with full access to the APC cache, and I can populate my database and let the user know when it's all done (or gradually done, as is the case).
I wish I had thought of this sooner.
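A minimal sketch of that workaround (the route, controller, and method names here are hypothetical):
// routes/web.php
Route::post('/memory/process', [MemoryController::class, 'process']);

// app/Http/Controllers/MemoryController.php
public function process(Request $request)
{
    // Runs in a normal web request, so it sees the same APCu cache as the rest of the app
    $entries = apcu_cache_info()['num_entries']; // populated here, unlike in the queued job
    // ... do the heavy processing and write the results to the database ...
    return response()->json(['done' => true]);
}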

Laravel Mock should be called at least once but called 0 times

I have an artisan command that fires a job called PasswordResetJob, which iterates over Org records and calls the method forcePasswordReset on a repository class OrgRepository; the method updates the users table. The whole process works fine.
Now I'm trying to write a Laravel test that mocks the OrgRepository class and asserts that the forcePasswordReset method is called at least once, which should be the case given the conditions I set up in the test. In the test, I call the artisan command to fire the job (I'm using the sync queue for testing); this works fine, as the job runs and the users table gets updated, which I can see directly in my database.
However, the test fails with the error: Mockery\Exception\InvalidCountException : Method forcePasswordReset() from Mockery_2_Repositories_OrgRepository should be called at least 1 times but called 0 times.
The artisan call within the test is:
Artisan::call('shisiah:implement-org-password-reset');
I have tried making the artisan call both before and after this mock initialization, but I still get the same error. Here is the mock initialization within the test:
$this->spy(OrgRepository::class, function ($mock) {
    $mock->shouldHaveReceived('forcePasswordReset');
});
What am I missing? I have gone through the documentation and searched Google for hours. Please let me know if you need any additional information. I'm using Laravel version 6.0.
Edit:
I pass the OrgRepository class into the handle method of the job class, like this:
public function handle(OrgRepository $repository)
{
    // Get orgs
    $orgs = Org::where('status', true)->get();

    foreach ($orgs as $org) {
        $repository->forcePasswordReset($org);
    }
}
The problem is that you are initializing your spy after your job has already run, which means the job used the real class instead of the spy.
You have to do something like this in your test:
$spy = $this->spy(OrgRepository::class);
// run your job
$spy->shouldHaveReceived('forcePasswordReset');
We tell Laravel to use the spy instead of the repository, run the job, and then assert that the method was called.
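Putting it together, the test body would look something like this:
public function testForcePasswordResetIsCalled()
{
    $spy = $this->spy(OrgRepository::class);

    Artisan::call('shisiah:implement-org-password-reset');

    $spy->shouldHaveReceived('forcePasswordReset');
}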
Jeffrey Way explains it pretty well in this screencast.

Doctrine Paginator fills up memory

I have a Symfony command that uses the Doctrine Paginator on PHP 7.0.22. The command must process data from a large table, so I do it in chunks of 100 items. The issue is that after a few hundred loops it fills up 256M of RAM. As measures against OOM (out-of-memory), I use:
$em->getConnection()->getConfiguration()->setSQLLogger(null); - disables the SQL logger, which otherwise fills memory with logged queries in scripts that run many SQL commands
$em->clear(); - detaches all objects from Doctrine at the end of every loop
I've put some dumps with memory_get_usage() to check what's going on, and it seems that the garbage collector doesn't free as much as the command allocates on every $paginator->getIterator()->getArrayCopy(); call.
I've even tried to collect the garbage manually on every loop with gc_collect_cycles(), but still no difference: the command starts at 18M and grows by ~2M every few hundred items. I also tried manually unsetting the results and the query builder... nothing. I removed all the data processing, kept only the select query and the paginator, and got the same behaviour.
Does anyone have any idea where I should look next?
Note: 256M should be more than enough for this kind of operations, so please don't recommend solutions that suggest increasing allowed memory.
The stripped-down execute() method looks something like this:
protected function execute(InputInterface $input, OutputInterface $output)
{
    // Remove SQL logger to avoid out-of-memory errors
    $em = $this->getEntityManager(); // method defined in base class
    $em->getConnection()->getConfiguration()->setSQLLogger(null);
    $firstResult = 0;

    // Get latest ID
    $maxId = $this->getMaxIdInTable('AppBundle:MyEntity'); // method defined in base class
    $this->getLogger()->info('Working for max media id: ' . $maxId);

    do {
        // Get data
        $dbItemsQuery = $em->createQueryBuilder()
            ->select('m')
            ->from('AppBundle:MyEntity', 'm')
            ->where('m.id <= :maxId')
            ->setParameter('maxId', $maxId)
            ->setFirstResult($firstResult)
            ->setMaxResults(self::PAGE_SIZE)
        ;
        $paginator = new Paginator($dbItemsQuery);
        $dbItems = $paginator->getIterator()->getArrayCopy();
        $totalCount = count($paginator);
        $currentPageCount = count($dbItems);

        // Clear Doctrine objects from memory
        $em->clear();

        // Update first result
        $firstResult += $currentPageCount;
        $output->writeln($firstResult);
    } while ($currentPageCount == self::PAGE_SIZE);

    // Finish message
    $output->writeln("\n\n<info>Done running <comment>" . $this->getName() . "</comment></info>\n");
}
The memory leak was generated by the Doctrine Paginator. I replaced it with a native query using Doctrine prepared statements, which fixed it.
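A rough sketch of that replacement; the table name my_entity and the DBAL 2 statement API used here are assumptions:
$conn = $em->getConnection();
$stmt = $conn->prepare(
    'SELECT * FROM my_entity WHERE id <= :maxId ORDER BY id LIMIT :limit OFFSET :offset'
);
do {
    $stmt->bindValue('maxId', $maxId, \PDO::PARAM_INT);
    $stmt->bindValue('limit', self::PAGE_SIZE, \PDO::PARAM_INT);
    $stmt->bindValue('offset', $firstResult, \PDO::PARAM_INT);
    $stmt->execute();
    $rows = $stmt->fetchAll();
    // Plain arrays: no entity hydration, so nothing accumulates in the UnitOfWork
    $firstResult += count($rows);
} while (count($rows) == self::PAGE_SIZE);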
Other things that you should take into consideration:
If you are replacing the Doctrine Paginator, you will have to rebuild the pagination functionality by adding a limit to your query, as sketched above.
Run your command with the --no-debug flag or with --env=prod (or both). The thing is that commands run in the dev environment by default, which enables some data collectors that are not used in the prod environment. See more on this topic in the Symfony documentation - How to Use the Console.
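For example (my:command is a placeholder for your command name):
php app/console my:command --env=prod --no-debug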
Edit: In my particular case I was also using the eightpoints/guzzle-bundle bundle, which wraps the Guzzle HTTP library (I had some API calls in my command). This bundle was also leaking memory, apparently through some middleware. To fix this, I had to instantiate the Guzzle client directly, without the EightPoints bundle.

Doctrine 2 Concurrency issues

I'm writing unit tests for a Symfony 2 / Doctrine 2 application and I'm encountering what looks like a concurrency issue.
The code looks like this:
$obj = new Obj();
$obj->setName('test');
... etc ...
$em->persist($obj);
$em->flush();
...
$repo = $em->getRepository('Obj');
// Select object using DQL
$this->assertTrue($obj !== null);
At this point $obj is often null, i.e. the DQL query has failed to find it. If I add a breakpoint and pause execution somewhere before executing the DQL, $obj is always found. Without the pause it's usually not found, but occasionally it is.
I have tried wrapping the insertion in a transaction:
$em->getConnection()->beginTransaction();
$obj = new Obj();
$obj->setName('test');
... etc ...
$em->persist($obj);
$em->flush();
$em->getConnection()->commit();
This doesn't seem to help.
I have tried adding a pause between the insert and the DQL query:
sleep(1);
This consistently results in the expected behaviour, hence my conclusion that this is a concurrency issue, or at least something to do with Doctrine not immediately writing through on flush.
Is there any way with Doctrine 2 to force a write to the database to complete? Or an event to listen for? Or am I doing something else wrong here?
Having followed zerkms' advice to enable logging, I quickly tracked the problem down to a fault in my own code. There was no concurrency issue.
On my Mac I did the following:
Edit /usr/local/etc/my.cnf and add these lines:
general_log_file = /tmp/query.log
general_log = 1
Then restart MySQL:
brew services restart mysql
Next I tailed the file:
tail -f /tmp/query.log
I saw the two queries - an insert and a select - executing in the right order. Running those queries directly in the MySQL client revealed the error in my code.
Note: putting the log into /tmp/ ensures the logs are deleted every time I log out.

Doctrine2 connection timeout in daemon

I have a long-running daemon (a Symfony2 Command) that takes jobs off a work queue in Redis, performs them, and writes to the database using the ORM.
I noticed that the worker tends to die because the connection to MySQL times out while the worker is idling, waiting for work.
Specifically, I see this in the log: MySQL server has gone away.
Is there any way I can have Doctrine automatically reconnect? Or is there some way I can manually catch the exception and reconnect the Doctrine ORM?
Thanks
I'm using this in my Symfony2 beanstalkd daemon command worker:
$em = $this->getContainer()->get('doctrine')->getManager();

if ($em->getConnection()->ping() === false) {
    $em->getConnection()->close();
    $em->getConnection()->connect();
}
It appears that whenever the EntityManager in Doctrine encounters any error/exception, the connection is closed and the EntityManager is dead.
Since generally everything is wrapped in a transaction, and that transaction is executed when $entityManager->flush() is called, you can try to catch the exception and either attempt to re-execute or give up.
You may wish to examine the exact nature of the exception with a more specific catch on the type, whether PDOException or something else.
For a 'MySQL server has gone away' exception, you can try to reconnect by resetting the EntityManager:
$managerRegistry = $this->getContainer()->get('doctrine');
$managerRegistry->resetEntityManager();
$em = $managerRegistry->getEntityManager(); // fetch the manager after the reset
This should give you a usable $em again. Note that you will have to re-persist everything, since this $em is new.
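For example, wrapped around the flush (a sketch only; the retry policy is up to you):
try {
    $em->flush();
} catch (\Doctrine\DBAL\DBALException $e) {
    $managerRegistry->resetEntityManager();
    $em = $managerRegistry->getEntityManager();
    // Re-persist your entities with the fresh manager, then retry or give up
}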
I had the same problem with a PHP Gearman worker and Doctrine 2.
The cleanest solution that I came up with is to just close and reopen the connection for each job:
<?php
public function doWork($job)
{
    /** @var \Doctrine\ORM\EntityManager $em */
    $em = Zend_Registry::getInstance()->entitymanager;
    $em->getConnection()->close();
    $em->getConnection()->connect();
    // ... perform the actual job ...
}
Update
The solution above doesn't cope with the transaction status: the Doctrine\DBAL\Connection::close() method doesn't reset the $_transactionNestingLevel value, so if you don't commit a transaction, Doctrine's view of the transaction status will get out of sync with the underlying DBMS. This can lead to Doctrine silently ignoring begin/commit/rollback statements, and eventually to data not being committed to the DBMS.
In other words: be sure to commit/rollback transactions if you use this method.
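One way to guard against that is to check the transaction state before closing (assuming your DBAL version exposes isTransactionActive()):
if ($em->getConnection()->isTransactionActive()) {
    // Either commit or roll back; leaving it open is what desynchronizes the nesting level
    $em->getConnection()->rollBack();
}
$em->getConnection()->close();
$em->getConnection()->connect();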
With this wrapper it worked for me:
https://github.com/doctrine/dbal/issues/1454
In your daemon you can add a method that restarts the connection, possibly before every query. I was facing similar problems using a Gearman worker.
I keep my connection data in the Zend registry, so it looks like this:
private function resetDoctrineConnection()
{
    // Doctrine_Manager is a singleton, so one getInstance() call is enough
    $manager = Doctrine_Manager::getInstance();
    $manager->reset();
    $dsn = Zend_Registry::get('dsn');
    $manager->setAttribute(Doctrine_Core::ATTR_AUTO_ACCESSOR_OVERRIDE, true);
    Doctrine_Manager::connection($dsn, 'doctrine');
}
If it is a daemon, you may need to call it statically.
