I am writing a Zend Framework application and unit testing it with PHPUnit. In general, things are going swimmingly; however, I have a small but annoying issue with PHPUnit and code coverage: it sometimes tells me that a particular line is not tested, and I don't know how to force it to be tested.
In the following code, for example, I launch two tests: one with a GET request, one with a POST request. The tests pass, and that's all fine. When I look at the code coverage, however, it shows me that the 'else' line is not executed.
public function editAction()
{
    $request = $this->getRequest();

    if ($request->isPost()) {
        // do actions related to POST
    } else {
        // do action related to GET
    }
}
Any ideas? As a side issue, do you usually persevere with unit tests until you get 100% code coverage? Or is this not really practical?
Thanks muchly...
I was the project lead on the Zend Framework a few years ago, through the release of ZF 1.0. I worked pretty hard on raising the coverage of testing for all components, and we had a policy that a component must have a certain minimum code coverage to be adopted into ZF from the incubator.
However, you're right, trying to get 100% code coverage from tests for all your classes isn't really practical. Some of the classes in ZF have 100% coverage, but for these, one or more of the following was true:
The class was trivially simple.
The tests took extraordinary work to write. E.g. complex setup code to create conditions to exercise all the obscure corner cases. Look at the unit tests for Zend_Db that I wrote! Though it's beneficial to force yourself to test these corner cases, because I guarantee that it'll lead you to code that you need to fix.
The class had to be refactored to be more "testable". This is often a good thing anyway, because you end up with better OO code, with less coupling, fewer static elements, etc. See the classes and tests for Zend_Log.
But we also realized that 100% code coverage is sometimes an artificial goal. A test suite that achieves less than 100% coverage may nevertheless be adequate. And a test suite that does reach 100% coverage doesn't necessarily assure quality.
It would be very easy to get 100% code coverage for the following function. But did we test for division by zero?
function div($numerator, $denominator) {
    return $numerator / $denominator;
}
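For illustration, a single test like the following sketch (not part of ZF, just an example) already gives div() 100% line coverage while never touching the zero-denominator case:
class DivTest extends PHPUnit_Framework_TestCase
{
    public function testDiv()
    {
        // full line coverage of div(), yet division by zero is never exercised
        $this->assertEquals(2, div(10, 5));
    }
}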
So you should use code coverage as one metric of testing, but not the ultimate goal.
What matters is the code you've replaced with comments. The closing brace of a block will be shown as executable in the code coverage report if it's possible to fall through to the end of the block. Look for a branch that you aren't actually testing:
if ($request->isPost()) {
    if ($x < 5) {
        return '<';
    } elseif ($x > 5) {
        return '>';
    }
    // Do you have a test for $x == 5?
}
As for the 100% code coverage goal, I agree wholeheartedly with Bill. For some framework classes that I write I will strive to make 100%, but I know that doesn't mean I've truly tested every possibility. Often, when I find myself working overly hard to achieve 100% coverage, it's probably OCD kicking in. :)
Just . . . one . . . more . . . test . . .
If that is all there is to your test, then I would assume your test looks like what Matthew described:
class UserControllerTest extends Zend_Test_PHPUnit_ControllerTestCase
{
    // [...]

    public function testSomething()
    {
        $this->request
             ->setMethod('POST')
             ->setPost(array(
                 'username' => 'foobar',
                 'password' => 'foobar'
             ));

        $this->dispatch('/user/edit'); // adjust the URL to whatever routes to editAction()

        // assert that the right things happened
    }
}
and in that case I don't see any reason why it shouldn't be easy to get 100% code coverage.
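For the else branch from your question, a matching GET test would be enough. This is only a sketch; it assumes /user/edit routes to editAction() and that the method lives inside the same UserControllerTest class shown above:
public function testEditActionWithGet()
{
    $this->request->setMethod('GET');

    $this->dispatch('/user/edit');

    // assert whatever the GET branch is supposed to do, e.g. that the
    // edit action was dispatched and the form view was rendered
    $this->assertAction('edit');
}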
But yes: It is pretty hard to test Zend Framework Controllers and at some point you either have to try really hard to get all your application logic out of your controllers or just live with it.
The same doesn't apply to your models, though. Those should be really easy to test, even in a ZF application.
The purpose that code coverage serves is to tell you which parts of your code base are not even getting executed. It doesn't tell you what is really tested, and it can only serve as a lower bound when judging the quality of your test suite (and if you don't use @covers, even that might lie to you).
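A tiny, hypothetical illustration of the @covers point (Greeter and GreeterTest are made-up names):
class Greeter
{
    public function greet($name)
    {
        return 'Hello ' . $name;
    }

    public function shout($name)
    {
        return strtoupper($this->greet($name)) . '!';
    }
}

class GreeterTest extends PHPUnit_Framework_TestCase
{
    /**
     * @covers Greeter::shout
     */
    public function testShout()
    {
        // greet() executes too, but thanks to @covers only shout() is
        // credited as covered by this test
        $greeter = new Greeter();
        $this->assertEquals('HELLO TOM!', $greeter->shout('Tom'));
    }
}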
In short: if you have big controllers and an architecture that isn't easy to change, you may have to settle for partially tested controllers, but don't apply the same logic to your models. Nothing in ZF prevents you from testing those properly.
For example, I have a class that takes raw data, analyses it and returns numbers for a report. Let's call it SomeReportDataProvider
class SomeReportDataProvider
{
    public function report(array $data)
    {
        $data = $this->prepare($data);
        $report = [];

        foreach ($data as $row) {
            if ($row->somefield == 'somevalue') {
                $report = $this->doThis($report, $row);
            } else {
                $report = $this->doThat($report, $row);
            }
        }

        // ... do something else
        $report = $this->postProcess($report);

        return $report;
    }

    protected function doThis(array $report, $row)
    {
        // ... do some math, probably call some other methods
    }

    protected function doThat(array $report, $row)
    {
        // ... do other math
    }

    // ... and so on
}
So the class really does only one thing: step-by-step processing of raw data for some report. It's not big, most likely 4-6 methods of 5-10 lines each. All of its methods are tightly related and serve one purpose, so I would say that it has high cohesion. But what's the best way to test the class?
If I approach it with the mentality of "test behavior, not implementation", I should test only its single public method. The advantage of this approach is that it's easy to refactor the class later without even touching the tests, and I would be sure that it still exhibits exactly the same behavior. But it can also be really hard to cover all cases through a single method; there are too many possible code paths.
I can make most (probably all) methods public and test them in isolation, but then I'm breaking the encapsulation of the class and its algorithm and testing its implementation details. Any refactoring would be harder, and unwanted changes in behavior more likely.
As a last option, I can break it into small classes, most likely with one or two methods each. But is it really a good idea to break a highly cohesive class into smaller, tightly coupled classes that do very specific things and aren't reused anywhere else? It would also still be more difficult to refactor, and probably harder for other developers to quickly get a full picture of how it works.
I always try to go for splitting up the classes as much as possible, like you say in your last option, because in my experience that’s the best way to simplify tests, which is by itself a very valid reason...
Also, I don't agree with you: I think this approach makes it easier to refactor in the future, and easier to understand, rather than a big class with everything together...
Looking at your code, I can see a bunch of separate “roles”: ReporterInterface, Reporter, ReportDataPreparator, ThisDoer, ThatDoer, ReportPostProcessor (obviously you can find better names :)
You might want to reuse some of them in the future, but even if that's not the case and all of them are very specific to reporting, you can just put them in a separate namespace and folder (like a "reporting module").
This reporting module has one unique API, which is your ReporterInterface, and all the rest of your system only needs to care about that interface, regardless of whether that reporter uses private methods, other classes, or a whole system in the background - they just have to call $reporter->report($data)...
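To make that concrete, here is a rough sketch of that layout; the class names are just the hypothetical roles listed above, and the method bodies are placeholders rather than your actual math:
interface ReporterInterface
{
    public function report(array $data);
}

// Small, single-purpose collaborators; each is trivial to unit test on its own.
class ReportDataPreparator
{
    public function prepare(array $data)
    {
        return $data; // e.g. filter and normalize the raw rows
    }
}

class ReportPostProcessor
{
    public function process(array $report)
    {
        return $report; // e.g. totals, rounding
    }
}

class Reporter implements ReporterInterface
{
    private $preparator;
    private $postProcessor;

    public function __construct(ReportDataPreparator $preparator,
                                ReportPostProcessor $postProcessor)
    {
        $this->preparator = $preparator;
        $this->postProcessor = $postProcessor;
    }

    public function report(array $data)
    {
        $report = array();

        foreach ($this->preparator->prepare($data) as $row) {
            // the per-row math from doThis()/doThat() would live in its own
            // collaborator as well; left out here to keep the sketch short
            $report[] = $row;
        }

        return $this->postProcessor->process($report);
    }
}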
So, from the perspective of the rest of the system, nothing changes, your reporting services are still all together, and your unit tests are much easier to write and maintain…
I am new to PHPUnit; in fact, I started today. And from what I have read so far, I only understand what this script does.
class UserTest extends PHPUnit_Framework_TestCase
{
    protected $user;

    protected function setUp() {
        $this->user = new User();
        $this->user->setName("Tom");
    }

    protected function tearDown() {
        unset($this->user);
    }

    // test the talk method
    public function testTalk() {
        $expected = "Hello world!";
        $actual = $this->user->talk();
        $this->assertEquals($expected, $actual);
    }
}
For this class:
<?php

class User {
    protected $name;

    public function getName() {
        return $this->name;
    }

    public function setName($name) {
        $this->name = $name;
    }

    public function talk() {
        return "Hello world!";
    }
}
Ok, so I have established that the test returns an Ok/Fail result based on the equality assertion, but I am looking for more. I need actual ways to test a more complex class whose outcome, unlike in this example, cannot be guessed easily.
Say I write a script that does polling. How, or in what ways, can I test whether its methods/classes work? The above code only checks whether a method's outcome is 'Hello world!'. But that is too easy a thing to test; I need to test complex things and there aren't many tutorials for that.
Testing a class in a unit test is supposed to NOT be complex.
And the reason for this to be true is that in a well designed system, you can test that class in isolation, and do not add the complexity of the system behind that class - because that system does not exist during the test, it is mocked.
The classic example for this is that a User class usually asks a database for info, but in the test case, that database is really hard to set up, prepare with data, and destroy afterwards. And using a real database also slows things down. So you design the User class to accept a database object from the outside, and in the test case you give to it a mock object of the database.
That way, it is really simple to simulate all kinds of return values from the database, and additionally you can check whether the database object gets the right parameters without the complexity of dealing with a real database.
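Here is a minimal sketch of that idea; the Database interface, its findNameById() method and the reworked User class are made up for this illustration and are not part of the code in the question:
interface Database
{
    public function findNameById($id);
}

class User
{
    private $db;

    public function __construct(Database $db)
    {
        $this->db = $db;
    }

    public function getName($id)
    {
        return $this->db->findNameById($id);
    }
}

class UserTest extends PHPUnit_Framework_TestCase
{
    public function testGetNameAsksTheDatabase()
    {
        // no real database: a mock simulates the return value and lets us
        // verify that the right parameter was passed
        $db = $this->getMock('Database');
        $db->expects($this->once())
           ->method('findNameById')
           ->with(42)
           ->will($this->returnValue('Tom'));

        $user = new User($db);

        $this->assertEquals('Tom', $user->getName(42));
    }
}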
You can only do proper mock object injection if your classes are designed to allow dependency injection. That is the principle of NOT creating objects inside other objects, but requiring the outside world to supply them. Have a quick look at this video for an explanation:
Dependency Injection - Programming With Anthony (Jan 2013)
And remember that creating good tests needs some experience. Good for you to start experimenting.
But that is too easy a thing to test; I need to test complex things and there aren't many tutorials for that.
The more complex the software, the more complex the tests. It's a bit like a snake biting its own tail. You normally want to prevent that, so as a rule of thumb: write simple tests to ensure that complex software is tested and runs.
This does not always work 100%, but it works better than having no tests. PHPUnit was designed for unit tests (see xUnit Test Patterns), but you can also use it to run other kinds of tests. For that you group tests. Different people do this differently, depending on different things. For example:
Small tests
Medium tests
Large tests
or (not equivalent):
Unit tests
Integration tests
Acceptance tests
or (perhaps equivalent, see the TestPyramid):
Unit tests
Service tests
UI tests
and what not. When you start with testing, it's probably good to start with unit tests and, as Sven already answered, keep them simple. Get comfortable with TDD, read some slides and books. And welcome to the world of automated testing.
What is Unit test, Integration Test, Smoke test, Regression Test?
P.S. Yes, simple getters and setters are too easy to test. If all a setter does is store to a private member and the getter gets it back, you can trust that PHP works; writing a test for that is a waste of time and will only lead to cruft. It clearly shows that somebody wrote the test after the code had been written. Instead, write the test first and see it fail (red), then hack together the code as quickly as possible to get the test to pass (green). You can improve the code later on, as the test already shows it's working.
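A tiny, made-up red/green illustration (slugify() is a hypothetical helper): the test comes first and fails while the function doesn't exist yet, then the minimal code below turns it green.
class SlugifyTest extends PHPUnit_Framework_TestCase
{
    public function testSlugifyReplacesSpacesWithDashes()
    {
        $this->assertEquals('hello-world', slugify('Hello World'));
    }
}

// The quickest code that makes the test pass; it can be cleaned up later,
// protected by the test.
function slugify($text)
{
    return strtolower(str_replace(' ', '-', $text));
}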
Context
I recently inherited the development and maintenance of a well-programmed PHP application (sarcasm). The application is based on commercial software (which I will not name), and there is a layer of customization (ours) built on top of it.
Unfortunately, this application uses a ton of globals and singletons (pun intended). I have built test cases for all the things we've overridden. However, a lot of things rely on some global state, which can cause race conditions and all sorts of weird stuff.
Randomizing the tests
In order to catch most of these weird-o-tons (I like to call them that), I have built a PHPUnit TestDecorator, as documented in the manual. This one:
class PHPUnit_Extensions_Randomizer extends PHPUnit_Extensions_TestDecorator
{
    public function __construct(PHPUnit_Framework_Test $test)
    {
        $tests = $test->tests();
        $shuffle = array();

        foreach ($tests as $t) {
            if ($t instanceof PHPUnit_Framework_TestSuite) {
                $shuffle = array_merge($shuffle, $t->tests());
            } else {
                $shuffle[] = $t;
            }
        }

        shuffle($shuffle);

        $suite = new PHPUnit_Framework_TestSuite();
        foreach ($shuffle as $t) {
            $suite->addTest($t);
        }

        parent::__construct($suite);
    }
}
It basically randomizes the test order to make sure a test doesn't depend on global state that may or may not be correct.
The question
The problem arose when it came time to load my custom decorator. I have not found anywhere in the manual, Google, or Stack Overflow how to do it.
When analyzing the code, I saw that PHPUnit itself instantiates the RepeatedTest decorator in the TextUI_TestRunner::doRun() method. I know I could subclass the TestRunner, override doRun() to create my decorator and then call parent::doRun() with my decorator instance as the argument, then override TextUI_Command and create a new CLI script to use my classes instead of the built-in ones.
Before I (re-)invent the wheel, I was just wondering: is it possible to load a custom decorator without subclassing the TestRunner?
Thanks.
With current PHPUnit versions there is, for the most part, no easy way to plug in your own code. The only thing that offers interchangeability is the TestRunner, and what you described makes sense to me.
I'm not aware of any other way to provide test decorators or swap out most of the other classes PHPUnit uses.
The way you want to change the test execution order seems to work out, even though I'm not sure how well it would shuffle nested suites.
Another way to achieve this that might be less hassle would be to create a subclass of PHPUnit_Framework_TestSuite and addTest() your tests in random order.
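A minimal sketch of that suggestion, assuming the PHPUnit 3.x API (tests(), setName() and addTest() all exist on PHPUnit_Framework_TestSuite); unlike the decorator above, it keeps nested suites nested:
class RandomizedTestSuite extends PHPUnit_Framework_TestSuite
{
    // Build a new suite containing the same tests as $original, in random order.
    public static function fromSuite(PHPUnit_Framework_TestSuite $original)
    {
        $tests = $original->tests();
        shuffle($tests);

        $suite = new self();
        $suite->setName($original->getName());

        foreach ($tests as $test) {
            $suite->addTest($test);
        }

        return $suite;
    }
}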
If that doesn't work out, another option would be to use the XML configuration file, build the test suite from <file> tags, and have some code shuffle those tags before every run. AFAIK PHPUnit doesn't sort them in any way, so execution will follow the shuffled order.
Are you looking to see whether every test really works on its own and want to find interdependent tests?
Or do you really want to see if something breaks horribly when you do a lot of stuff that should not change anything in the wrong order?
I'm asking just in case you haven't considered --process-isolation yet. (Which I assume you have, but asking isn't going to hurt and might save you some time and effort :) )
When you run every test with a fresh set of globals you will at least find all test interdependencies, and that's just one CLI switch away; it makes sure every test in your suite works fine on its own.
I understand that 100% code coverage is just a goal to shoot for, but it's annoying to have a line containing a closing brace counted as not covered because it follows a method call whose sole purpose is to throw an exception. Here's a simple example from my base test case class to demonstrate:
function checkForSkipAllTests() {
    if (self::$_skipAllTests) {
        self::markTestSkipped(); // [1] always throws an exception
    }                            // [2] shown as executable but not covered
}
Since [1] always exits the method, line [2] is not actually reachable. Is there any way to tell Xdebug this by annotating the markTestSkipped() method itself?
Your pull request got merged, so starting with php-code-coverage 1.1.2, which should come out rather soon (with PHPUnit 3.6.3 or 3.6.4), you will be able to write:
private static function checkForSkipAllTests() {
    if (self::$_skipAllTests) {
        self::markTestSkipped();
    } // @codeCoverageIgnore
}
Also, further in the future, when Xdebug is able to provide 'conditional' coverage, I think I remember discussion about the whole issue going away with that refactoring, as the closing brace will just count as 'covered' when the last statement in a function terminates the function... But I might be wrong on that.
You can surround the line with start/end comments to have PHP_CodeCoverage ignore it, but that means doing it everywhere the method is called.
function checkForSkipAllTests() {
    if (self::$_skipAllTests) {
        self::markTestSkipped();
        // @codeCoverageIgnoreStart
    }
    // @codeCoverageIgnoreEnd
}
This is a maintenance nightmare and prone to error. I would really like to avoid this solution.
I understand that 100% code coverage is just a goal to shoot for, but it's annoying to have a line containing a closing brace counted as not covered because it follows a method call whose sole purpose is to throw an exception. Here's a simple example from my base test case class to demonstrate:
Indeed, 100% code coverage is not a goal in itself, but it's nice to have, especially if it takes you zero extra time. I do wonder, though: your tests are not the files that are to be tested. I never test my tests, nor am I interested in their code coverage. I already know which tests were run, which succeeded, which failed and which were skipped. This is what PHPUnit brings to the table for me; .....S...F is enough feedback.
My tests are in a separate directory, which isn't included in code coverage; covering it just seems useless, in my eyes. Anyway, if you're set on having code coverage reports for your test cases, you might want to simply get rid of the }, like so:
function checkForSkipAllTests() {
    if (self::$_skipAllTests)
        self::markTestSkipped();
}
Yeah, I know that having an if without curly brackets will make me the least cool person answering your question, but it seems like a much easier solution than having some annotations which magically work.
I'm currently working on unit testing a custom class I made, which is based on the singleton design pattern. Based on the code coverage report I have 95.45% of it covered. I am using PHPUnit to do the unit testing and I have been through this article by Sebastian Bergmann.
The only problem I am left with is testing against class cloning through the magic method __clone(). I have set that method as private to prevent cloning:
private final function __clone()
{}
What would be the best way to write a test to make sure that the singleton isn't "clonable"? (The same test could eventually be used to test __construct().)
Not really part of the question, but is it just me, or do the tests run awfully slowly on a Windows box compared to a *nix box?
Keep in mind that code coverage is not a measure of how correct your program is, nor does 100% coverage mean you've executed every code path. For example, the ternary operator
$a ? $b : $c
and compound boolean expressions
if ($a < 1 || $b > 6)
are counted as single statements even though you may execute only a portion of them due to short-circuiting. Also, omitting the braces around single-statement if, while, etc. blocks turns the whole thing into a single statement.
The following will appear as a single statement in the code coverage report so you can't tell if you've executed both cases (true and false).
if (...)
    foo();
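As a small, made-up illustration of the short-circuit point: the single test below is enough for the whole return line to show as covered, even though the second operand of || and the 'in' arm of the ternary never run.
function isOutOfRange($a, $b)
{
    return ($a < 1 || $b > 6) ? 'out' : 'in';
}

class IsOutOfRangeTest extends PHPUnit_Framework_TestCase
{
    public function testOutOfRange()
    {
        // $a < 1 short-circuits, so $b > 6 and the 'in' arm are never
        // evaluated, yet line coverage reports the whole line as executed
        $this->assertEquals('out', isOutOfRange(0, 3));
    }
}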
I feel that
private final function __clone() { }
is too simple to fail. Testing that the method throws an exception (using reflection, no less, which your clients won't do) is testing the PHP interpreter, which is out of scope in my book.
[For the record, I too get a little OC when it comes to reaching 100% code coverage, but keeping the above facts in mind helps to alleviate it so I can move on to writing better code.]
Call clone or the constructor and check whether an exception has been thrown.
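If you prefer not to rely on a catchable error, one hypothetical alternative is to assert the method's visibility via reflection; this sketch assumes a class literally named Singleton, and as the previous answer points out, such a test arguably just exercises the PHP interpreter:
class SingletonTest extends PHPUnit_Framework_TestCase
{
    public function testCloneIsNotPubliclyCallable()
    {
        // verify that __clone() cannot be called from client code
        $method = new ReflectionMethod('Singleton', '__clone');

        $this->assertTrue($method->isPrivate());
        $this->assertTrue($method->isFinal());
    }
}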