Parallel PHPUnit testing in integration tests - php

As the time needed to run the complete PHPUnit suite grows, our team has started wondering whether it is possible to run unit tests in parallel. I recently read an article about Paraunit, and Sebastian Bergmann also wrote that he will add parallelism to PHPUnit 3.7.
But the problem remains with integration tests, or, more generally, tests that interact with the DB. For the sake of consistency, the test DB has to be reset and fixtures loaded after each test. In parallel runs, however, there is a problem with race conditions, because all processes use the same DB.
So to be able to run integration tests in parallel, we have to assign its own database to each process. I would like to ask if someone has thoughts about how this problem can be solved. Maybe there are already implemented solutions to this problem in other xUnit implementations.
In my team we are using MongoDB, so one solution would be to programmatically create a config file for each PHPUnit process, with a DB name generated for that process, and in the setUp() method we could clone the main test DB into this temporary one. But before we start implementing this approach, I would like to ask for your ideas on the topic.
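For illustration, here is a rough sketch of that setUp() idea, assuming the mongodb/mongodb library; the connection URI, the template database name main_testdb and the per-process naming scheme are only placeholders:

// Hypothetical setUp(): clone a shared template DB into a per-process database.
protected function setUp(): void
{
    $client = new MongoDB\Client('mongodb://localhost:27017');
    $this->dbName = 'testdb_' . getmypid();              // one database per PHPUnit process
    $target = $client->selectDatabase($this->dbName);

    $template = $client->selectDatabase('main_testdb');  // the shared, fixture-loaded test DB
    foreach ($template->listCollections() as $info) {
        $name = $info->getName();
        $docs = $template->selectCollection($name)->find()->toArray();
        if ($docs) {
            $target->selectCollection($name)->insertMany($docs);
        }
    }
}

protected function tearDown(): void
{
    (new MongoDB\Client('mongodb://localhost:27017'))->dropDatabase($this->dbName);
}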

This is a good question: preparing for parallel unit tests is going to require learning some new Best Practices, and I suspect some of them are going to slow our tests down.
At the highest level, the advice is: avoid testing with a database wherever possible. Abstract all interactions with your database, and then mock that class. But you've already noted your question is about integration tests, where this is not possible.
When using PDO, I generally use sqlite::memory:, so each test gets its own database. It is anonymous and automatically cleaned up when the test ends. (But I noted some problems with this when your real application is not using SQLite: see "Suggestions to avoid DB deps when using an in-memory sqlite DB to speed up unit tests".)
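A minimal sketch of that pattern, assuming PDO with the sqlite driver; the schema and fixture rows are placeholders:

protected function setUp(): void
{
    // Fresh, private in-memory database for every test; it disappears
    // automatically when the PDO object is destroyed at the end of the test.
    $this->pdo = new PDO('sqlite::memory:');
    $this->pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
    $this->pdo->exec('CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)');
    $this->pdo->exec("INSERT INTO users (name) VALUES ('fixture-user')");
}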
When using a database that does not have an in-memory option, create the database with a random name. If the parallelization is at the PHPUnit process level, which is quite coarse, you could use the process PID, but that has no real advantage over a random name. (I know PHP is single-threaded, but perhaps in the future we will have a custom PHPUnit module that uses threads to run tests in parallel; we might as well be ready for that.)
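And a sketch of the random-name variant, assuming MySQL over PDO and PHP 7+ for random_bytes(); credentials and fixture loading are placeholders:

protected function setUp(): void
{
    $this->dbName = 'test_' . bin2hex(random_bytes(6));   // random, collision-safe name
    $this->pdo = new PDO('mysql:host=localhost', 'test_user', 'secret');
    $this->pdo->exec("CREATE DATABASE `{$this->dbName}`");
    $this->pdo->exec("USE `{$this->dbName}`");
    // ... create schema and load fixtures here ...
}

protected function tearDown(): void
{
    $this->pdo->exec("DROP DATABASE `{$this->dbName}`");
}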
If you have the xUnit Test Patterns book, chapter 13 is about testing databases (relatively short). Chapters 8 and 9 on transient vs. persistent fixtures are useful too. And, of course, most of the book is on abstraction layers to make mocking easier :-)

There is also this awesome library, fastest, for executing tests in parallel. It is optimized for functional/integration tests and gives an easy way to work with N databases in parallel.
Our old codebase used to run in 30 minutes; it now runs in 7 minutes with 4 processors.
Features
Functional tests can use one database per processor via an environment variable.
Tests are randomized by default.
It is not coupled to PHPUnit; you can run any command.
It is developed in PHP with no dependencies.
As input you can use a phpunit.xml.dist file or a pipe.
It includes a Behat extension to easily pipe scenarios into fastest.
Verbosity can be increased with the -v option.
Usage
find tests/ -name "*Test.php" | ./bin/fastest "bin/phpunit -c app {};"
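For the database-per-processor feature, fastest exposes an environment variable identifying the worker; a rough sketch of how a test bootstrap could use it (ENV_TEST_CHANNEL comes from fastest's documentation, everything else here is an assumption about your own setup):

// e.g. in tests/bootstrap.php or wherever the test DB connection is configured
$channel = getenv('ENV_TEST_CHANNEL') ?: '1';            // worker number injected by fastest
define('TEST_DATABASE_NAME', 'myapp_test_' . $channel);  // myapp_test_1, myapp_test_2, ...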

You can avoid integration test conflicts in two ways:
running in parallel only those tests which use very different tables of your database, so they don't conflict;
creating a new database for conflicting tests.
Of course, you can combine these two solutions. I don't know of any PHPUnit test runner which supports either of these approaches, so I think you have to write your own test runner to speed up the process... By the way, you can still group your integration tests and run only a few of them at once while you are developing...
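A rough illustration of the grouping idea, assuming PHPUnit's @group annotation; the group names and the table split are placeholders:

/**
 * @group orders
 */
class OrderRepositoryTest extends PHPUnit\Framework\TestCase
{
    // ... touches only the orders / order_items tables ...
}

/**
 * @group users
 */
class UserRepositoryTest extends PHPUnit\Framework\TestCase
{
    // ... touches only the users table ...
}

// Non-conflicting groups can then run as separate processes, e.g.:
//   phpunit --group orders &
//   phpunit --group users &
//   wait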
Be aware that the same conflicts can cause concurrency issues under heavy load in PHP. For example, if you lock two files in reverse order in two separate controller actions, your application can end up in a deadlock... I am looking for a way to test concurrency issues in PHP, but no luck so far. I don't currently have time to write my own solution, and I am not sure I could manage it; it's pretty hard stuff... :S

In case your application is coupled to a specific vendor, e.g. PostgreSQL, you can create separate stacks with Docker and docker-compose. Then group tests together by purpose, e.g. model tests, controller tests, etc.
For each group, deploy a specific stack in your pipeline using docker-compose and run the tests via Docker. The idea is to have a separate environment with a separate database for each group, so you avoid the conflicts.
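A rough sketch of what that could look like in a CI script, assuming a docker-compose.yml with app and db services; the project names, group names and service names are placeholders:

# One isolated stack (own containers, network and database) per test group.
docker-compose -p model_tests up -d db
docker-compose -p model_tests run --rm app vendor/bin/phpunit --group model
docker-compose -p model_tests down -v

docker-compose -p controller_tests up -d db
docker-compose -p controller_tests run --rm app vendor/bin/phpunit --group controller
docker-compose -p controller_tests down -v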

Related

Lowering Laravel's unit test memory usage

I have an application written in PHP using the Laravel framework.
And I have a test suite consisting of approximately 200 phpUnit test methods.
The problem I have is that this test suite is taking a painfully long time to run (~10 mins), and has also now started running out of memory on our test server.
I could adjust the max memory in php.ini to solve the latter issue, but that's not really the point: I shouldn't have to -- I don't want to have to allocate 500mb or a gig to a single PHP process. (because, you know, the server might need to do other stuff at the same time)
I've done some investigation, and it seems like the main reason that it's so slow and uses so much RAM is because it sets up a whole new Laravel environment for every test.
That's understandable, but I'm looking for ways to cut it down - either the memory usage or the time taken, or better yet both.
Can anyone suggest ways of streamlining my code or my tests that will help with this?
Thank you.
Are all of your tests functional, or are some of them unit tests? If you're unit testing your classes, you don't need to initialize the Laravel environment at all; just get an instance of the class involved and set its dependencies.
Check the "Am I Writing Unit Or Functional Tests?" section of the tutorial on writing Laravel controller tests:
https://medium.com/laravel-4/laravel-4-controller-testing-48414f4782d0
So you can extend the Laravel test class when you need the Laravel services, but if you're testing libraries, you can use plain PHPUnit tests.
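A minimal sketch of the difference, using a hypothetical PriceCalculator class; the plain test never boots the framework, so it stays fast and cheap on memory, while TestCase in the second class stands for whatever Laravel base test class your version provides:

// Plain unit test: no Laravel bootstrap at all.
class PriceCalculatorTest extends PHPUnit\Framework\TestCase
{
    public function testAppliesDiscount()
    {
        $calculator = new PriceCalculator();
        $this->assertSame(90.0, $calculator->total(100.0, 0.10));
    }
}

// Functional test: extend the framework's TestCase only when Laravel services
// (routing, container, database) are actually needed.
class CheckoutPageTest extends TestCase
{
    // ...
}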

Unit Test code generation

We have a project that has been developed for 2 years with a poorly designed architecture. As of today there are no unit tests at all.
The current version of the system works satisfactorily, but we vitally need to refactor the core modules.
The budget is also limited, so we cannot hire a sufficient number of developers to write unit tests.
Is it a possible strategy to automatically generate unit test code which covers, for example, interaction with data, on the assumption that the system currently works fine and its output can be converted into XML fixtures for unit testing?
This approach would give us a way to quickly start refactoring the existing code and receive immediate feedback if some core functionality is broken by the changes.
I would be wary of any tools that claim to be able to automatically determine and encode an arbitrary application's requirements into nice unit tests.
Instead, I would spend a little time setting up at least some high-level functional tests. These might be in code, using the full stack to load a predefined set of inputs and checking against known results, for instance. Or perhaps even higher-level with an automation tool like Selenium or FitNesse (depending on what type of app you're building). Focus on testing the most important pieces of your system first, since time is always limited.
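A minimal sketch of such a characterization test, assuming a hypothetical ReportGenerator entry point and an expected_report.xml fixture recorded from the current, working system:

class ReportGeneratorCharacterizationTest extends PHPUnit\Framework\TestCase
{
    public function testOutputMatchesRecordedBehaviour()
    {
        // Run the real stack against a fixed, known input.
        $input  = simplexml_load_file(__DIR__ . '/fixtures/known_input.xml');
        $output = (new ReportGenerator())->generate($input);

        // Compare against output captured before refactoring started.
        $this->assertXmlStringEqualsXmlFile(
            __DIR__ . '/fixtures/expected_report.xml',
            $output
        );
    }
}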
Moving forward, I'd recommend getting a copy of Michael Feathers' Working Effectively with Legacy Code, which deals with exactly the problem you face: needing to make updates to a large, untested codebase while making sure you don't break existing functionality in the process.

phpunit - testing is painfully slow

I am diving deeper and deeper in the world of unit testing.
One issue I encountered, and this is where I would like feedback, is when one runs multiple test suites, maybe it is just me but I need to use the parameter --process-isolation for my tests to pass. I can run any of my suites individually without a problem, but running the 6-7 suites I have so far with 180 assertions spread between them fails if I run without --process-isolation. The problem is that using this parameter makes the test run last for 35 mins, versus the usual 2.5 minutes. That's a loooong wait.
The problem is related to using mocked DI containers for specific tests; the containers are not properly re-initialised when test suites run back to back. Static properties set on the DI container to test for expected failures make the tests in the following suite fail. The container has a parameter that can hold a contained object in a static var, to return the same instance at every call: a singleton in disguise. And this runs fine at the application level; it's just a nuisance for testing.
I could avoid that container parameter and code the application to not use static properties, but avoiding a useful language construct for the sake of a methodology seems like overkill.
Maybe I am doing something wrong (I sure hope so!) but I have the impression if one wants to run tests with the SUT in a clean state for every test, there is no getting around using --process-isolation. This makes testing very time consuming and takes the joy out of it a little bit. I have bypassed the issue somewhat by running suites and tests individually when I am coding, and running the suite in background before major commits.
Is what I am experiencing normal, and is there a way to counter this? How do you testers out there ensure testing time is reasonable? How are statics handled so as to not influence testing?
Any insight appreciated/comment appreciated.
You have several problems.
The first is process isolation. Normally, it should not be necessary and you only want to use it to find out which specific test is the one that fatally breaks your tests. As you noted yourself, it's awfully slow, which is something you cannot fix.
You might want to disable backing up global vars which saves some milliseconds per test, though.
The second problem, which leads to your first problem, is that your code is not testable because static vars are kept during tests - my most-hated singleton problem.
You can solve that problem by providing a "cleanup" or "reset" method in your dependency containers. Those will get called from the setUp() method in your main test case class and reset everything to a clean state.
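A rough sketch of that reset hook, assuming a hypothetical container that caches instances in a static array and a reasonably recent PHPUnit:

class Container
{
    protected static $instances = [];

    public static function get($name, callable $factory)
    {
        if (!isset(self::$instances[$name])) {
            self::$instances[$name] = $factory();   // the "singleton in disguise"
        }
        return self::$instances[$name];
    }

    // Cleanup hook for tests: forget every cached instance.
    public static function reset()
    {
        self::$instances = [];
    }
}

// In the project-wide base test case:
abstract class BaseTestCase extends PHPUnit\Framework\TestCase
{
    protected function setUp(): void
    {
        Container::reset();   // every test starts from a clean container
    }
}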
Speed
Regarding the runtime of tests: I recently wrote a blog entry about finding out which tests were too slow. Generally, tests are too slow if you can't run them after saving a file or after each commit on your own box. 10 seconds is barely acceptable for me. The more tests you have, the slower running them will be.
If you really have 35 minutes, then split up your tests into sensible groups so that you can run the necessary ones on your own machine - only the tests that test the code you changed. Pyrus, the next-gen PEAR installer, has the nifty feature to automatically detect and run the tests that need to be run, depending on what files you changed.
PHPUnit does not have that, but you can emulate it by hand with phpunit --group ... :)
Always take care to mock web services and databases, or at least to run the database with only the data necessary for each single test. Waiting 3 seconds for a web service response in a test that verifies whether you can save a user to the database is something you never want.
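For the web service part, a minimal sketch using a PHPUnit test double instead of the real HTTP client; ApiClient and UserRepository are hypothetical names for your own abstractions:

public function testSaveUserDoesNotCallTheRealWebService()
{
    // Stub the client so no network request (and no 3-second wait) happens.
    $api = $this->createMock(ApiClient::class);
    $api->method('notifyUserCreated')->willReturn(true);

    $repository = new UserRepository($this->pdo, $api);
    $userId = $repository->save(['name' => 'Alice']);

    $this->assertGreaterThan(0, $userId);
}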
One of the things I usually do when I test with MySQL instead of SQLite's :memory: is add Hash::setRounds(5); inside the tests/CreatesApplication.php trait, like this. In my experience this makes the tests, especially with MySQL, much faster:
// At the top of tests/CreatesApplication.php:
use Illuminate\Contracts\Console\Kernel;
use Illuminate\Support\Facades\Hash;

public function createApplication()
{
    $app = require __DIR__ . '/../bootstrap/app.php';
    $app->make(Kernel::class)->bootstrap();

    // Lower the bcrypt cost so hashing does not dominate test runtime.
    Hash::setRounds(5);

    return $app;
}
A few tricks:
Filter your test cases. For example, if you want to test a single file, just run
phpunit --filter 'Default_My_Test'
Remove code coverage from your phpunit.xml file. If you do want a coverage report, generate it on demand with:
phpunit --coverage-html ./report reportTest

Speeding up PHP continuous integration build server on Hudson CI

I'm trying to speed up my builds some and was looking for some thoughts on how to do so. I currently use Hudson as a continuous integration server for a PHP project.
I use an Ant build.xml file to do the build, using a file similar to Sebastian Bergmann's php-hudson-template. At the moment, though (due to some weird problems with Hudson crashing otherwise), I'm only running phpDocumentor, phpcpd, and phpUnit. phpUnit does generate Clover code-coverage reports, too.
Here are some possible bottlenecks:
phpDocumentor: Takes 180 seconds. There are some large included libraries in my project, such as awsninja, DirectedEdge, oauthsimple, and phpMailer. I'm not sure that I really need to be developing documentation for these. I'm also not sure how to ignore whole subdirectories using my build.xml file.
phpUnit: Takes 120 seconds. This is the only portion of the build that's not run as a parallelTask. The more tests that get written, the longer this time will increase. Really not sure what to do about this, aside from maybe running multiple Hudson build slaves and doling out separate test suites to each slave. But I also have no idea how to go about that, either.
phpcpd: Takes 97 seconds. I'm sure that I can eliminate some parsing and conversion time by ignoring those included libraries. Not sure how to do this in my build.xml file.
My server: Right now I'm using a single Linode server. It seems to get pretty taxed by the whole process.
Any other possible bottlenecks you can think of I'll add to the list.
What are some solutions for reducing my build time?
I'm not a PHP expert at all, but you ought to be able to split your PHPUnit tests onto multiple Hudson slaves if you need to. I would just split your test suite up and run each subset as a separate, parallel Hudson job. If you have a machine with multiple CPUs / cores you can run multiple slaves on it.
One obvious thing you didn't mention: how about just upgrading your hardware, or taking a look at what else is running on the Hudson host and possibly taking up resources?
phpDocumentor: phpdoc -h reveals the -i option, which allows you to specify a comma-separated list of files/directories to ignore. This can be added to the arguments tag of your phpdoc tag in build.xml.
phpUnit: I noticed it can be laggy if I am running tests against a database, but I am not aware of any way to improve this.
One possible thing that might help would be to not run the documenter every time, and only run it as part of a build that happens once a day (or something similar).
I just recently started using these tools and these are few things I discovered.
When we had a similar problem, we resorted to running the documentation in a separate overnight build (along with our functional test scripts in Selenium, as this is also pretty slow). This way, our main CI build wasn't slowed down by generating our API documentation.
However, I note that PHP Documentor has now been updated to version 2, which has significant speed improvements over the slow old version 1. It looks like it's in the region of two to three times faster than v1. This will make a big difference to your CI process. See http://phpdoc.org/ for more info.
Alternatively, you could take a look at apiGen and phpDox, both of which are alternatives to PHPDoc. They are both definitely faster than PHPDoc v1; I haven't compared them with v2 yet.

Symfony Unit Testing and Excessive Memory Leaks?

We're currently having issues with memory leaks when using unit tests with Symfony 1.x, to the point where a decent number of tests eats 512MB of memory.
Currently we've tried:
Using a phpunit plugin
Using lime
Restricting the tests to a few sfPropelData loads + functional tests repeated a few times
Switching to PHP 5.3.3 to handle circular references
Inspecting memory usage with Xdebug, which didn't give much insight
Soon we'll be trying:
Only the functional tests
Replacing sfPropelData loads with plain SQL files
Only functional tests without any ORM calls
Valgrind?
I'm thinking maybe the static variables within symfony aren't getting cleaned up or the PDO layer is itself leaking memory. Of course, the last option is to figure out a way to run a suite of tests in its own process.
We're progressing through the areas it could be in, and I don't expect anyone to actually help us work out the details; I'm just throwing this question out there to see if anyone has experienced this, where they found the leak to be, or what they did to get around it.
Also, any input on other tools that can assist, like valgrind?
Only functional tests without any ORM calls
That is one thing I always try to prevent: use mock objects instead of accessing the database.
What you can try is the new PHPUnit version that has process isolation for tests (activated via an additional command line parameter). That should help you. It is slower, but it helps with the memory problem.
