Code is executed before PHPUnit's initialization

I was reading my project's code coverage report, and I noticed something strange: a line was uncovered, but I was sure that line got executed during the tests. So, I added a var_dump() before it and this is what I got when executing the tests:
bool(true)
PHPUnit 3.5.5 by Sebastian Bergmann.
...
This is weird. How is it possible that a line is executed before PHPUnit's initialization? I believe this is the reason why code coverage says that line is uncovered.
Any hints?
EDIT: Here's some code. It's an IRC framework that makes use of the Doctrine Common library to read annotations and also uses the Symfony ClassLoader and EventDispatcher components. This is the offending method:
/**
 * Returns this module's reflection.
 *
 * @return \ReflectionClass
 * @access public
 */
static public function getReflection()
{
    // The var_dump() displaying bool(true) is executed before PHPUnit's
    // output, while the other ones are correctly executed after it.
    var_dump(is_null(self::$reflection));

    if (null === self::$reflection) {
        // This line is reported as uncovered, but it must be executed since I'm
        // accessing the reflection!
        self::$reflection = new \ReflectionClass(get_called_class());
    }

    return self::$reflection;
}
What do you think?

Then why are all the other var_dump() calls (that method gets executed many times in the application) shown after PHPUnit's output? And why isn't that line reported in code coverage even though it's executed?
I assume (but that's just a guess, as it's hard to say since you haven't shown the code) that it's related to code that gets executed on file inclusion, rather than after the actual test functions are executed or the test cases get instantiated.

The accessor must be getting called outside of a unit test, which initializes self::$reflection. After that, all further calls to getReflection() skip the if block, so it will never be counted as covered. PHPUnit instantiates all of the test case classes (one per test method, data provider method, and data provider argument array) before running any of the tests or tracing code coverage. Look for a test case that calls getReflection() from its constructor, or for code outside the class itself that executes when its file is loaded.
I forget whether the test cases are instantiated before PHPUnit outputs its version banner and cannot check right now, but I believe they are. The other possibility is that you're calling getReflection() from bootstrap.php, but you probably already checked for that.
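To make the failure mode concrete, here is a minimal sketch (with a hypothetical test file and class name) of the kind of inclusion-time call both answers describe:

<?php
// ModuleTest.php (hypothetical). PHPUnit includes this file while
// collecting tests, so the top-level call below runs before the
// version banner is printed and before coverage collection starts.
$reflection = Module::getReflection(); // initializes self::$reflection

class ModuleTest extends PHPUnit_Framework_TestCase
{
    public function testReflection()
    {
        // self::$reflection is already set, so the line inside the
        // if block is never executed again while coverage is active.
        $this->assertInstanceOf('ReflectionClass', Module::getReflection());
    }
}

Moving such a call inside a test method (or out of any file PHPUnit loads during test discovery) should make the line show up as covered.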

Related

PHPUnit false positives when running tests in separate processes

I have my test class like this:
/**
 * @runTestsInSeparateProcesses
 */
class ProfileTest extends TestCase
{
    public function testFalsePositive()
    {
        $this->assertFalse(true);
    }
}
And the weird thing is that this test class passes successfully.
If I remove the "runTestsInSeparateProcesses" annotation, it gives me the correct result (a failing test).
Another weird thing is that in my other test cases there should be a "call to a member function on null" error, yet PHPUnit cheerfully informed me that the tests passed.
I'm sure the phpunit command picks up my test class.
I'm using PHPUnit v.7.4.1.
Can anybody tell me what's going on?
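One thing worth trying (an assumption about the cause, not a confirmed diagnosis): with process isolation, PHPUnit serializes the parent process's global state into each child process, and problems during that step can mask real test results. The @preserveGlobalState annotation turns that serialization off:

/**
 * @runTestsInSeparateProcesses
 * @preserveGlobalState disabled
 */
class ProfileTest extends TestCase
{
    public function testFalsePositive()
    {
        // With global-state serialization disabled, the child process
        // is more likely to report this assertion failure correctly.
        $this->assertFalse(true);
    }
}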

How to use PHPUnit to test a method that calls other methods of the same class, but returns no value

How do you write a unit test for a method that calls other methods of the same class, but doesn't return a value? (Let's say with PHPUnit.)
For example, let's say that I have the following class:
class MyClass {
    public function doEverything() {
        $this->doA();
        $this->doB();
        $this->doC();
    }

    public function doA() {
        // do something, return nothing
    }

    public function doB() {
        // do something, return nothing
    }

    public function doC() {
        // do something, return nothing
    }
}
How would you test doEverything()?
EDIT:
I'm asking this because, from what I've read, it seems like pretty much every method should have its own dedicated unit test. Of course, you also have functional and integration tests, but those target specific routines, so to speak (not necessarily on a per-method level).
But if pretty much every method needs its own unit test, I'm thinking it would be "best practice" to unit test all of the above methods. Yes/no?
Okay! I've figured it out! As might be expected, mocking is what I need in this situation; mocking a sibling method is called partial mocking. There's some pretty great info about PHPUnit mocking in this article by Juan Treminio.
So to test doEverything() in the above class, I would need to do something like this:
public function testDoEverything()
{
    // Any methods not specified in setMethods will execute perfectly normally,
    // and any methods that ARE specified return null (or whatever you specify).
    $mock = $this->getMockBuilder('\MyClass')
                 ->setMethods(array('doA', 'doB', 'doC'))
                 ->getMock();

    // doA() should be called once
    $mock->expects($this->once())
         ->method('doA');

    // doB() should be called once
    $mock->expects($this->once())
         ->method('doB');

    // doC() should be called once
    $mock->expects($this->once())
         ->method('doC');

    // Call doEverything() and see if it calls the methods
    // the way the expectations above specify.
    $mock->doEverything();
}
That's it! Pretty easy!
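A note if you're on a newer PHPUnit (roughly 8.4 onward): setMethods() was deprecated and later removed in favor of onlyMethods(). A sketch of the same test against the newer API:

public function testDoEverything()
{
    // onlyMethods() stubs just the listed methods; everything else
    // on the mock keeps its real implementation.
    $mock = $this->getMockBuilder(\MyClass::class)
                 ->onlyMethods(['doA', 'doB', 'doC'])
                 ->getMock();

    $mock->expects($this->once())->method('doA');
    $mock->expects($this->once())->method('doB');
    $mock->expects($this->once())->method('doC');

    $mock->doEverything();
}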
BONUS: If you use Laravel and Codeception...
I'm using the Laravel Framework as well as Codeception, which made this a little trickier to figure out. If you use Laravel and Codeception, you'll need to do a little more to get it working, since Laravel's autoloading doesn't hook into the PHPUnit tests by default. You'll basically need to update your unit.suite.yml to include Laravel4, as shown below:
# Codeception Test Suite Configuration
# suite for unit (internal) tests.
class_name: UnitTester
modules:
    enabled: [Asserts, UnitHelper, Laravel4]
Once you've updated your file, don't forget to run php codecept.phar build to update your configuration.
While your mocking test does achieve your goal, I would argue that you've decreased confidence in the code. Compare the original trivial method to the complicated test that exercises it. The only way the method under test can fail is if you forget one of the method calls or mistype a name. But with all that additional code you're now twice as likely to do exactly that, and the test code itself has no tests!
Rule: If your test code is more complicated than the code under test, it needs its own tests.
Given the above, you're better off finding another way to test the original code. For the method as written (three method calls with no parameters), inspection by eyeball is sufficient. But I suspect that the method does have some side effects somewhere, otherwise you could delete it.
Unit testing is about testing the class as a unit, not each method individually. Testing each method in isolation is a good indication that you're writing your tests after the code. Employing Test-Driven Development and writing your tests first will help you design a better class that is more easily testable.

How to skip/mark incomplete entire test suite in PHPUnit?

Description
I have a TestSuite which I need to mark as skipped (the entire test suite - not the specific test cases within the suite).
class AllTests
{
    public static function suite()
    {
        // this does not work the same as within a TestCase:
        // throw new \PHPUnit_Framework_SkippedTestError("Out of order");

        $Suite = new \PHPUnit_Framework_TestSuite(__NAMESPACE__);
        $Suite->addTestSuite(translators\AllTests::cls());
        $Suite->addTestSuite(TlScopeTest::cls());
        $Suite->addTestSuite(TlNsTest::cls());
        $Suite->addTestSuite(TlElementTest::cls());
        $Suite->addTestSuite(TlItemTest::cls());
        $Suite->addTestSuite(LangCodeTest::cls());
        $Suite->addTestSuite(TlElemClassTagTest::cls());

        return $Suite;
    }
}
As you can see, throwing the PHPUnit_Framework_SkippedTestError exception does not work. It is not caught by PHPUnit, and it breaks the execution like any uncaught exception (which is understandable, as the suite() method is invoked while building the test hierarchy, before the tests are actually run).
I've seen an exception class named PHPUnit_Framework_SkippedTestSuiteError, but I have no clue how to take advantage of it. Any ideas?
Motivation
I have a TestSuite which aggregates many test cases as well as other test suites. Almost every test in it fails because of a change I made in the core of my code.
The problem is that this package is not crucial, and it is scheduled to be fixed later. Until then I have to run the tests for every other package, but when I do, the PHPUnit output gets flooded with errors coming from the package in question. This forces me to check every time whether any of the failures comes from another package.
This, as you might suspect, is very susceptible to human error, i.e. I could miss a failure which actually is important.
I could run only the test suite I am currently working on, but then I lose track of whether my changes in one package cause a failure in another package.
I do not want to comment out that test suite, because I'm afraid that I (or someone who takes over the code after me) could forget about it entirely.
OK, so I'll put it together:
The AllTests class has to be refactored to extend PHPUnit_Framework_TestSuite.
This makes the class a fully-fledged TestSuite and makes it possible to implement a setUp method at the suite level.
The setUp method is called by the test runner (not by the builder), so it is safe to throw a SkippedTestError exception from it.
The corresponding method to do just that within a test suite is called markTestSuiteSkipped (notice the Suite in the method name).
The entire class would look like this:
class AllTests extends \PHPUnit_Framework_TestSuite
{
    protected function setUp()
    {
        $this->markTestSuiteSkipped("zzz");
    }

    public static function suite()
    {
        $Suite = new self(__NAMESPACE__);
        $Suite->addTestSuite(translators\AllTests::cls());
        $Suite->addTestSuite(TlScopeTest::cls());
        $Suite->addTestSuite(TlNsTest::cls());
        $Suite->addTestSuite(TlElementTest::cls());
        $Suite->addTestSuite(TlItemTest::cls());
        $Suite->addTestSuite(LangCodeTest::cls());
        $Suite->addTestSuite(TlElemClassTagTest::cls());

        return $Suite;
    }
}
The output is a pretty block of S letters, which definitely indicates that we skipped a lot of tests. This cannot escape our attention, and yet it allows our tests to pass.
You could also mark each test as skipped from within the test case itself.
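For completeness, a minimal sketch (reusing one of the test case names above) of skipping every test in a single test case rather than the whole suite:

class TlScopeTest extends \PHPUnit_Framework_TestCase
{
    protected function setUp()
    {
        // Runs before every test method in this case, so each one is
        // reported as S (skipped) instead of failing.
        $this->markTestSkipped("Out of order until the core change is fixed.");
    }
}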

Is there a way to have PHPUnit determine code coverage for @method declarations?

I have a number of methods declared on a class with the standard phpdoc @method syntax, for example:
/**
 * @method string magicMethod(int $arg1, array $arg2) Method description.
 */
class ... { }
Is it possible to configure PHPUnit to check against these comments when determining method-level code coverage? Currently, my coverage is at 100%, even though I've only touched about 10% of these magic methods so far.
Code coverage can only be calculated based on existing code, not on "virtual" methods.
To get a more realistic statistic, you should reduce the amount of coverage that gets generated unintentionally. With the default configuration, PHPUnit generates coverage for every line of code that gets executed, which is bad: if you unintentionally run across lines that are not verified by an assertion, the coverage doesn't tell you anything at all (apart from the fact that no error occurred there).
When you have a look at the code coverage chapter in the manual, you'll see that you can specify which methods a test covers, and then only those methods generate coverage statistics (section "Specifying covered methods").
The method I prefer is to set the option mapTestClassNameToCoveredClassName="true" in the phpunit.xml file and add all classes to be tested to the whitelist. That way, coverage is automatically restricted to the class that has the same name (less the "Test" suffix) as the test class. So if you have a test "MyGreatModelTest", it will only create coverage for methods of the "MyGreatModel" class, and nowhere else.
And if you add the whole directory with your code to the whitelist, you will also catch all files that generated 0% coverage and thus were not included in the statistics so far.
Beware: these settings might hurt your feelings, but they will give you a more realistic picture of which lines of code really get run during a test, and which are only passed as a side effect.
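A sketch of what that configuration might look like in phpunit.xml (the src path is an assumption):

<!-- Restrict coverage to the class matching each test's name, and
     whitelist the whole source tree so files with 0% coverage show up too. -->
<phpunit mapTestClassNameToCoveredClassName="true">
    <filter>
        <whitelist addUncoveredFilesFromWhitelist="true">
            <directory suffix=".php">src</directory>
        </whitelist>
    </filter>
</phpunit>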
PHPUnit uses its own annotations, applied to your TestCase class. It doesn't parse annotations on the tested class.
To restrict the source code lines used during code coverage analysis for a specific test, you have to use the @covers annotation.
In case of using magic methods in the tested class:
/**
 * @covers My\Class::__call
 */
public function testMyMagicMethod()
{
    $this->assertSomething($this->subject->magicMethod());
}
As __call() is the real method called in your tested class, the covered source code lines should be in it.
I know this is super old, but the answer you're looking for is to mock the __call method.
$this->clientMock->expects(static::at(1))
    ->method('__call')
    ->with('get', [RedisAdapter::CONNECTION_TEST])
    ->will(static::returnValue(RedisAdapter::CONNECTION_TEST_VALUE));
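Worth noting: static::at() is deprecated in newer PHPUnit versions. A sketch of the same expectation without it, assuming the test only needs a single call:

$this->clientMock->expects(static::once())
    ->method('__call')
    ->with('get', [RedisAdapter::CONNECTION_TEST])
    ->willReturn(RedisAdapter::CONNECTION_TEST_VALUE);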

TDD, PHPUnit doubts

class MyClass {
    private $numeric;

    public function MyMethod($numeric)
    {
        if (! is_numeric($numeric)) {
            throw new InvalidArgumentException();
        }
        $this->numeric = $numeric;
    }
}
1 - Is it necessary to test whether the class exists?
PHPUnit has several methods that run automatically, such as assertPreConditions, setUp, and others. Is it necessary, within these methods, to check whether the class exists using assertTrue and class_exists? Example:
protected function assertPreConditions()
{
    $this->assertTrue(class_exists("MyClass"), "The class does not exist.");
}
2 - Is it necessary to check whether a method exists? If yes, should this be a separate test or part of each unit test?
Suppose we have a method that accepts only numeric parameters, so we have two tests: one with a correct parameter, and another with an incorrect one expecting an exception, right? Which is the correct way to write these tests?
This way:
public function testIfMyMethodExists()
{
    $this->assertTrue(method_exists($MyInstance, "MyMethod"), "The method does not exist.");
}

/**
 * @depends testIfMyMethodExists
 */
public function testMyMethodWithAValidArgument()
{
    //[...]
}

/**
 * @depends testIfMyMethodExists
 * @expectedException InvalidArgumentException
 */
public function testMyMethodWithAnInvalidArgument()
{
    //[...]
}
Or this way?
public function testMyMethodWithAValidArgument()
{
    $this->assertTrue(method_exists($MyInstance, "MyMethod"), "The method does not exist.");
}

/**
 * @expectedException InvalidArgumentException
 */
public function testMyMethodWithAnInvalidArgument()
{
    $this->assertTrue(method_exists($MyInstance, "MyMethod"), "The method does not exist.");
    //[...]
}
And why?
3 - What is the real purpose of the @covers and @coversNothing annotations?
I was reading a document by Sebastian Bergmann, the creator of PHPUnit, saying that as good practice we should always write @covers and @coversNothing on test methods and classes, and add these options to the XML:
mapTestClassNameToCoveredClassName="true"
forceCoversAnnotation="true"
and in the whitelist:
<whitelist addUncoveredFilesFromWhitelist="true">
    <directory suffix=".php"></directory>
</whitelist>
But what is the real need for it?
4 - What is the correct way to test a constructor that calls another method?
Everything seems fine, except in the tests.
Even if I test the method MyMethod with valid arguments, and with invalid arguments expecting an exception, that does not happen when I enter an incorrect value in the constructor (the test fails).
And if I test with a valid argument, the code coverage does not reach 100%.
public function __construct($numeric)
{
    $this->MyMethod($numeric);
}
I write tests for methods that should exist, to verify that the code does what is expected of it. I also test the instance of the class (and inherited class definitions) to ensure the object created is what was supposed to be created. If FOO extends BAR, then I test that my created object is an instance of FOO and an instance of BAR. If the class gets changed to inherit from something else, or has the extends removed, my test will once again tell the developer to check the code and make sure this change is desired. Potentially some inherited functions are being called on FOO, and without the extends from BAR this code will break.
I write tests for the various code paths that are there to be executed. Therefore, if I expect the function to throw an exception when bad data is passed, I write a test for that. I do this also to help document our source code: the tests show the expected behavior. If someone removes the exception (say, new functionality accepts a different parameter type), then the test should be updated to show that this is now allowed. Potentially, the change to the parameters could cause a problem elsewhere. Testing this way ensures that years later I know the code required this to be a number, and that I should re-factor carefully if I change the parameter types.
Using Test-Driven Development (TDD) may lead you to not write the code that throws the exception, as you write a test first and then only the code needed to make it pass. As such, you might not be testing all the parameters and their types or values, but I try to do this as best I can, to validate reasonable data input and avoid the Garbage In/Garbage Out (GIGO) problem.
All these tests also give me a good code coverage metric, as the majority of the code base is tested and the tests step through all the lines in the class files. However, testing to this level, and trying to achieve a high code coverage metric, is really a choice for your team to make.
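A minimal sketch of the inheritance check described above (FOO and BAR are the hypothetical names from this answer):

class BAR {}
class FOO extends BAR {}

class FooTest extends \PHPUnit_Framework_TestCase
{
    public function testCreatesExpectedInstance()
    {
        $object = new FOO();

        // Both assertions fail if FOO stops extending BAR or the
        // hierarchy otherwise changes, flagging the change for review.
        $this->assertInstanceOf('FOO', $object);
        $this->assertInstanceOf('BAR', $object);
    }
}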
I can't see any reason for, or advantage in, writing this kind of test. If the class or the method doesn't exist, your tests will fail anyway, so you gain nothing from writing them.
Maybe the one exception to the above (point 1) could be a situation where you always build the SUT using the PHPUnit mock framework (theoretically, a situation where your SUT is a mock with some other mocked method you don't need in a particular test is possible, but I can't imagine a real situation which leads to it).
In my opinion, if a test covers a flow path which is completely included in another test, that test is redundant and unnecessary.
edit:
Regarding point 3: because you want to know exactly which flow path was invoked by a particular unit test. Let's say you have two methods, A and B, and method B invokes method A. If you don't have the @covers annotation, a test for method B could generate code coverage for method A, and you couldn't say whether your unit tests for A cover its code 100%.
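A minimal sketch of that situation, with a hypothetical class whose method b() calls its method a():

class Calculator
{
    public function a($x) { return $x + 1; }

    public function b($x) { return $this->a($x) * 2; } // b() calls a()
}

class CalculatorTest extends \PHPUnit_Framework_TestCase
{
    /**
     * @covers Calculator::b
     */
    public function testB()
    {
        // Without the @covers annotation, the lines of a() executed
        // here would also count as covered, hiding the fact that a()
        // has no direct test of its own.
        $this->assertSame(6, (new Calculator())->b(2));
    }
}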
