Reaching 100% Code Coverage with PHPUnit - php

I've been in the process of creating a test suite for a project, and while I realize 100% coverage isn't the metric one should strive for, there is a strange bit in the code coverage report on which I would like some clarification.
See screenshot:
Because the last line of the method being tested is a return, the final line (which is just a closing bracket) shows up as never being executed, and as a consequence the whole method is flagged as not executed in the overview. (Either that, or I'm not reading the report correctly.)
The complete method:
static public function &getDomain($domain = null) {
    $domain = $domain ?: self::domain();
    if (! array_key_exists($domain, self::$domains)) {
        self::$domains[$domain] = new Config();
    }
    return self::$domains[$domain];
}
Is there a reason for this, or is it a glitch?
(Yes, I read through How to get 100% Code Coverage with PHPUnit; a different case, although similar.)
Edit:
Trudging on through the report, I noticed the same is true for a switch statement elsewhere in the code. So this behaviour is at least to some extent consistent, but baffling to me nonetheless.
Edit2:
I'm running PHPUnit 3.6.7, PHP 5.4.0RC5, and XDebug 2.2.0-dev on OS X.

First off: 100% code coverage is a great metric to strive for. It's just not always achievable with a sane amount of effort, and it's not always important to do so :)
The issue comes from xDebug telling PHPUnit that this line is executable but not covered.
For simple cases xDebug can tell that the line is NOT reachable so you get 100% code coverage there.
See the simple example below.
2nd Update
The issue is now fixed in the xDebug bug tracker, so building a new version of xDebug will solve those issues :)
Update (see below for issues with php 5.3.x)
Since you are running PHP 5.4 and the DEV version of xDebug, I've installed those and tested it. I ran into the same issues as you, with the same output you've commented on.
I'm not 100% sure whether the issue comes from php-code-coverage (the PHPUnit module that consumes xDebug's data); it might also be an issue with the xDebug dev version.
I've filed a bug with php-code-coverage and we'll figure out where the issue comes from.
For PHP 5.3.x issues:
For more complex cases this CAN fail.
For the code you showed all I can say is that "It works for me" (complex sample below).
Maybe update your xDebug and PHPUnit versions and try again.
I've seen it fail with current versions, but sometimes it depends on how the whole class looks.
Removing ?: operators and other single-line multi-statement things might also help out.
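For instance, the ?: from the method above could be unfolded so each branch ends up on its own executable line (an illustrative rewrite only; behaviour is unchanged, since ?: falls back on any falsy value):

    // Instead of: $domain = $domain ?: self::domain();
    // an equivalent multi-line form gives xDebug separate lines to track:
    if (! $domain) {
        $domain = self::domain();
    }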
There is ongoing refactoring in xDebug to avoid more of those cases, as far as I'm aware. xDebug eventually wants to be able to provide "statement coverage", and that should fix a lot of those cases. For now there is not much one can do here.
While // @codeCoverageIgnoreStart and // @codeCoverageIgnoreEnd will get this line "covered", it looks really ugly and usually does more harm than good.
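Just for completeness, this is roughly what that would look like for the method in question (a sketch only; as said above, not recommended):

    static public function &getDomain($domain = null) {
        $domain = $domain ?: self::domain();
        if (! array_key_exists($domain, self::$domains)) {
            self::$domains[$domain] = new Config();
        }
        return self::$domains[$domain];
        // @codeCoverageIgnoreStart
    }
    // @codeCoverageIgnoreEnd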
For another case where this happens see the question and answers from:
what-to-do-when-project-coding-standards-conflicts-with-unit-test-code-coverage
Simple example:
<?php
class FooTest extends PHPUnit_Framework_TestCase {
    public function testBar() {
        $x = new Foo();
        $this->assertSame(1, $x->bar());
    }
}

<?php
class Foo {
    public function bar() {
        return 1;
    }
}
produces:
phpunit --coverage-text mep.php
PHPUnit 3.6.7 by Sebastian Bergmann.
.
Time: 0 seconds, Memory: 3.50Mb
OK (1 test, 1 assertion)
Generating textual code coverage report, this may take a moment.
Code Coverage Report
2012-01-10 15:54:56
Summary:
Classes: 100.00% (2/2)
Methods: 100.00% (1/1)
Lines: 100.00% (1/1)
Foo
Methods: 100.00% ( 1/ 1) Lines: 100.00% ( 1/ 1)
Complex example:
<?php
require __DIR__ . '/foo.php';

class FooTest extends PHPUnit_Framework_TestCase {
    public function testBar() {
        $this->assertSame('b', Foo::getDomain('a'));
        $this->assertInstanceOf('Config', Foo::getDomain('foo'));
    }
}

<?php
class Foo {
    static $domains = array('a' => 'b');

    static public function &getDomain($domain = null) {
        $domain = $domain ?: self::domain();
        if (! array_key_exists($domain, self::$domains)) {
            self::$domains[$domain] = new Config();
        }
        return self::$domains[$domain];
    }
}

class Config {}
produces:
PHPUnit 3.6.7 by Sebastian Bergmann.
.
Time: 0 seconds, Memory: 3.50Mb
OK (1 test, 2 assertions)
Generating textual code coverage report, this may take a moment.
Code Coverage Report
2012-01-10 15:55:55
Summary:
Classes: 100.00% (2/2)
Methods: 100.00% (1/1)
Lines: 100.00% (5/5)
Foo
Methods: 100.00% ( 1/ 1) Lines: 100.00% ( 5/ 5)

Much of the problem here is the insistence on getting 100% execution coverage of "lines".
(Managers like this idea; it is a simple model they can understand.) Many lines aren't "executable": whitespace, gaps between function declarations, comments, declarations, "pure syntax" such as the closing "}" of a switch or class declaration, or complex statements split across multiple source lines.
What you really want to know is: "is all the executable code covered?" This distinction seems silly, yet it leads to a solution. XDebug tracks what gets executed by line number, and your XDebug-based scheme thus reports ranges of executed lines. And so you get the troubles discussed in this thread, including the clunky workarounds of having to annotate the code with "don't count me" comments, putting "}" on the same line as the last executable statement, etc. No programmer is really willing to do that, let alone maintain it.
If one defines executable code as code which can be called or is controlled by a conditional (what the compiler people call "basic blocks"), and the coverage tracking is done that way, then the layout of the code and the silly cases simply disappear. A test coverage tool of this type collects what is called "branch coverage", and you get 100% "branch coverage" by literally executing all the executable code. In addition, it will pick up those funny cases where you have a conditional within a line (using "x?y:z") or two conventional statements on one line, e.g.,
if (...) { if (...) stmt1; else stmt2; stmt3 }
Since XDebug tracks by line, I believe it treats this as one statement, and considers it covered if control gets to the line, when in fact there are 5 parts to actually test.
Our PHP Test Coverage tool implements these ideas. In particular, it understands that code following a return statement isn't executable, and it will tell you that you haven't executed it, if it is non-empty. That makes the OP's original problem just vanish. No more playing games to get "real" coverage numbers.
As with all choices, sometimes there is a downside. Our tool has a code instrument component that only runs under Windows; instrumented PHP code can run anywhere and the processing/display is done by a platform independent Java program. So this might be awkward for OP's OSX system. The instrumenter works fine across NFS-capable file systems, so he could arguably run the instrumenter on a PC and instrument his OSX files.
This particular problem was raised by someone trying to push his coverage numbers up; the problem was IMHO artificial and can be cured by stepping around the artificiality. There's another way to push up your numbers without writing more tests, and that's finding and removing duplicate code. If you remove duplicates, there's less code to test, and testing the one remaining copy in effect tests the (now nonexistent) other copy, so it is easier to get higher numbers. You can read more about this here.

With regard to your switch statement code coverage issue, simply add a "default" case which doesn't do anything and you'll get full coverage; for example:
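A sketch of that suggestion (hypothetical switch; the no-op default just gives the coverage tool an executable branch at the otherwise "unreached" end of the switch):

    switch ($name) {
        case 'terry':
            return 'blah';
        case 'lucky':
            return 'blahblah';
        default:
            // intentionally empty
            break;
    }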

Here is how to get 100% coverage of a switch statement:
Ensure there is at least one test that sends a case that doesn't exist.
So, if you have:
switch ($name) {
    case 'terry':
        return 'blah';
    case 'lucky':
        return 'blahblah';
    case 'gerard':
        return 'blahblah';
}
ensure that at least one of your tests sends a name that is neither 'terry', 'lucky', nor 'gerard', as in the sketch below.
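A minimal sketch of such a test (NameMapper and getBlah() are made-up names standing in for whatever class and method contain the switch):

    public function testUnknownNameHitsNoCase()
    {
        $mapper = new NameMapper();                 // hypothetical class under test
        // no case matches, so the switch falls through and the method returns NULL
        $this->assertNull($mapper->getBlah('nobody'));
    }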

Related

PHPUnit: creating custom code coverage reporter / logger

To contextualise: in similar tools, PHPMD and PHPCS, one can specify a custom formatter for result output, e.g.:
PHPMD:
vendor/bin/phpmd test \\my\\namespace\\renderers\\phpmd\\AdamFormat phpmd.xml
PHPCS:
vendor/bin/phpcs --standard=phpcs.xml --report=./src/renderers/phpcs/AdamFormat.php
I'm looking to do the same thing for PHPUnit, but have thus far drawn a blank (investigation, googling, searching here). Looking at the code of PHPUnit, it all seems a bit hard-codey to me:
Code coverage handler:
if (isset($arguments['coverageClover'])) {
    $this->printer->write(
        "\nGenerating code coverage report in Clover XML format ..."
    );

    try {
        $writer = new CloverReport;
Logging:
if (isset($arguments['testdoxHTMLFile'])) {
    $result->addListener(
        new HtmlResultPrinter(
I have not spotted anything in the docs that suggests otherwise. Seems like an odd shortfall to me.
So two questions:
Am I reading things right? PHPUnit doesn't support this?
Presuming the answer is "yes: not supported", has anyone had any success with a tactic for circumventing this in a non-Heath-Robinson manner?
I realise that one can use the --coverage-php option to output the results as PHP variables for another process to then utilise to do [whatever], but that seems like an inside-out approach to me, and falls into the Heath Robinson category.
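For reference, a rough sketch of that inside-out --coverage-php route (it assumes, as the php-code-coverage versions I've looked at do, that the generated file returns the coverage object when included; MyCoverageFormat is a made-up renderer class):

    <?php
    // Run beforehand: vendor/bin/phpunit --coverage-php build/coverage.php
    // The file written by --coverage-php hands back the coverage object when included.
    $coverage = include 'build/coverage.php';

    // MyCoverageFormat is hypothetical; it would walk the coverage object and
    // print whatever custom report is needed.
    $formatter = new MyCoverageFormat();
    $formatter->render($coverage);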

Should I be unit testing every piece of code

I have been starting unit testing recently and am wondering, should I be writing unit tests for 100% code coverage?
This seems futile when I end up writing more unit testing code than production code.
I am writing a PHP Codeigniter project and sometimes it seems I write so much code just to test one small function.
For example, this unit test:
public function testLogin(){
    //setup
    $this->CI->load->library("form_validation");
    $this->realFormValidation = new $this->CI->form_validation;
    $this->CI->form_validation = $this->getMock("CI_Form_validation");
    $this->realAuth = new $this->CI->auth;
    $this->CI->auth = $this->getMock("Auth", array("logIn"));
    $this->CI->auth->expects($this->once())
        ->method("logIn")
        ->will($this->returnValue(TRUE));
    //test
    $this->CI->form_validation->expects($this->once())
        ->method("run")
        ->will($this->returnValue(TRUE));
    $_POST["login"] = TRUE;
    $this->CI->login();
    $out = $this->CI->output->get_headers();
    //check new header ends with dashboard
    $this->assertStringEndsWith("dashboard", $out[0][0]);
    //tear down
    $this->CI->form_validation = $this->realFormValidation;
    $this->CI->auth = $this->realAuth;
}

public function badLoginProvider(){
    return array(
        array(FALSE, FALSE),
        array(TRUE, FALSE)
    );
}
/**
 * @dataProvider badLoginProvider
 */
public function testBadLogin($formSubmitted, $validationResult){
    //setup
    $this->CI->load->library("form_validation");
    $this->realFormValidation = new $this->CI->form_validation;
    $this->CI->form_validation = $this->getMock("CI_Form_validation");
    //test
    $this->CI->form_validation->expects($this->any())
        ->method("run")
        ->will($this->returnValue($validationResult));
    $_POST["login"] = $formSubmitted;
    $this->CI->login();
    //check it went to the login page
    $out = output();
    $this->assertGreaterThan(0, preg_match('/Login/i', $out));
    //tear down
    $this->CI->form_validation = $this->realFormValidation;
}
For this production code
public function login(){
    if($this->input->post("login")){
        $this->load->library('form_validation');
        $username = $this->input->post('username');
        $this->form_validation->set_rules('username', 'Username', 'required');
        $this->form_validation->set_rules('password', 'Password', "required|callback_userPassCheck[$username]");
        if ($this->form_validation->run() === FALSE) {
            $this->load->helper("form");
            $this->load->view('dashboard/login');
        }
        else{
            $this->load->model('auth');
            echo "valid";
            $this->auth->logIn($this->input->post('username'), $this->input->post('password'), $this->input->post('remember_me'));
            $this->load->helper('url');
            redirect('dashboard');
        }
    }
    else{
        $this->load->helper("form");
        $this->load->view('dashboard/login');
    }
}
Where am I going so wrong?
In my opinion, it's normal for there to be more test code than production code. But test code tends to be straightforward; once you get the hang of it, writing tests is almost a no-brainer.
Having said that, if you discover your test code is too complicated to write, or cannot cover all the execution paths in your production code, that's a good indicator that some refactoring is needed: your method may be too long, attempt to do several things, have too many external dependencies, etc.
Another point is that it's good to have high test coverage, but it does not need to be 100% or some other very high number. Sometimes there is code that has no logic, like code that simply delegates tasks to others. In that case you can skip testing it and use the @codeCoverageIgnore annotation to exclude it from your code coverage, as sketched below.
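A quick sketch of that annotation on a purely delegating method (class, method and property names are made up for illustration):

    class OrderController
    {
        /**
         * Pure delegation, no logic of its own worth testing.
         *
         * @codeCoverageIgnore
         */
        public function ship($orderId)
        {
            return $this->shippingService->ship($orderId);
        }
    }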
In my opinion it's logical that tests are much more code, because you have to test multiple scenarios, provide test data, and check that data for every case.
Typically a test coverage of 80% is a good value. In most cases it's not necessary to test 100% of the code, because you should not test, for example, setters and getters. Don't test just for the statistics ;)
The answer is it depends, but generally no. If you are publishing a library then lots of tests are important and can even help create the examples for the docs.
For internal projects you would probably want to focus your tests on complex functions and on things which would be bad if they were to go wrong. For each test, ask: what is the value in having the test here rather than in the parent function?
What you want to avoid is testing anything that relies on implementation details, say private methods/functions; otherwise, if you change the structure, you'll find you have to repeatedly rewrite the entire suite of tests.
It is better to test at a higher level: the public functions, or anything at the boundary between modules. A few tests at the highest level you can manage should yield reasonable coverage and ensure the code works in the way it is actually called. This is not to say lower-level functions shouldn't have tests, but at that level it's more about checking edge cases than having a test for the typical case of every function.
Instead of creating tests to increase coverage, create tests to cover bugs as you find and fix them, and create tests for new functionality when you would have to test it manually anyway. Create tests to guard against bad things which must not happen. Fragile tests which easily break during refactors should be removed or changed to be less dependent on the implementation of the function.

Explicit waits in phpunit selenium2 extension

For C# there is a way to write a statement for waiting until an element on a page appears:
WebDriverWait wait = new WebDriverWait(driver, TimeSpan.FromSeconds(10));
IWebElement myDynamicElement = wait.Until<IWebElement>((d) =>
{
    return d.FindElement(By.Id("someDynamicElement"));
});
But is there a way to do the same in PHPUnit's Selenium extension?
Note 1
The only thing I've found is $this->timeouts()->implicitWait(), but obviously it's not what I'm looking for.
Note 2
This question is about Selenium 2 and, accordingly, the PHPUnit_Selenium2 extension.
The implicitWait that you found is what you can use instead of waitForCondition.
As the WebDriver API specification (which you also found ;)) states:
implicit - Set the amount of time the driver should wait when searching for elements. When searching for a single element, the driver should poll the page until an element is found or the timeout expires, whichever occurs first.
For example, this code will wait up to 30 seconds for an element to appear before clicking on it:
public function testClick()
{
    $this->timeouts()->implicitWait(30000);
    $this->url('http://test/test.html');
    $elm = $this->clickOnElement('test');
}
The drawback is it's set for the life of the session and may slow down other tests unless it's set back to 0.
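If you want something closer to an explicit, per-element wait, one workaround is to poll yourself from a helper in your test case class. A rough sketch (byId() and fail() are the standard PHPUnit_Extensions_Selenium2TestCase/PHPUnit API; the helper name and timings are made up):

    // Inside a class extending PHPUnit_Extensions_Selenium2TestCase
    protected function waitForElementById($id, $timeoutInSeconds = 10)
    {
        $deadline = time() + $timeoutInSeconds;
        do {
            try {
                return $this->byId($id);   // returns the element as soon as it is found
            } catch (Exception $e) {
                usleep(250000);            // not there yet, poll again in 250 ms
            }
        } while (time() < $deadline);
        $this->fail("Element '$id' did not appear within $timeoutInSeconds seconds.");
    }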
From my experience it is very hard to debug Selenium test cases from PHPUnit, not to mention maintaining them. The approach I use in my projects is to use Selenium IDE, store test cases as .html files, and only invoke them via PHPUnit. If there is anything wrong, I can launch them from the IDE and debug in a much easier way. And Selenium IDE has waitForElementPresent, waitForTextPresent and probably some other methods which can solve your issue. If you want to give it a try, you can use this method in your class inheriting from the Selenium test case class:
$this->runSelenese("/path/to/test/case.html");

PHPUnit: Multiple assertions in a single test, only first failure seen

The next weirdness I'm seeing with PHPUnit:
class DummyTest extends PHPUnit_Framework_TestCase {
    public function testDummy() {
        $this->assertTrue(false, 'assert1');
        $this->assertTrue(false, 'assert2');
    }
    public function testDummy2() {
        $this->assertTrue(false, 'assert3');
    }
}
As soon as the first assertion fails in a test, the rest of the test is ignored.
So (with a simple call of phpunit DummyTest.php):
The above code will display 2 tests, 2 assertions, 2 failures. What?
If I make all the tests pass, then I'll get OK (2 tests, 3 assertions). Good.
If I make all the tests pass except for assert2, I get 2 tests, 3 assertions, 1 failure. Good.
I don't get it, but PHPUnit's been around for ages, surely it has to be me?
Not only are the counts not what I'd expect, but only the error message for the first failed assert in the code above is displayed.
(BTW, I'm analyzing the xml format generated by PHPUnit for CI rather than testing real code, hence the practice of multiple assertions in the one test.)
First off: That is expected behavior.
Every test method will stop executing once an assertion fails.
For an example where the opposite would be very annoying:
class DummyTest extends PHPUnit_Framework_TestCase {
    public function testDummy() {
        $foo = get_me_my_foo();
        $this->assertInstanceOf("MyObject", $foo);
        $this->assertTrue($foo->doStuff());
    }
}
If PHPUnit didn't stop after the first assertion, you'd get an E_FATAL (call to a member function on a non-object) and the whole PHP process would die.
So to formulate nice assertions and small tests it's more practical that way.
For another example:
When "asserting that an array has a size of X, then asserting that it contains a,b and c" you don't care about the fact that it doesn't contain those values if the size is 0.
If a test fails you usually just need the message "why it failed" and then, while fixing it, you'll automatically make sure the other assertions also pass.
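To illustrate that "size first, then contents" case with plain PHPUnit assertions (the array is a made-up stand-in for the value under test):

    public function testTagsAreComplete()
    {
        $tags = array('a', 'b', 'c');
        $this->assertCount(3, $tags);     // if this fails, the checks below are moot anyway
        $this->assertContains('a', $tags);
        $this->assertContains('b', $tags);
        $this->assertContains('c', $tags);
    }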
On an additional note, there are also people arguing that you should only have one assertion per test case. While I don't practice that (and I'm not sure if I like it), I wanted to note it ;)
Welcome to unit testing. Each test function should test one element or process (process being a series of actions that a user might take). (This is a unit, and why it is called "unit testing.") The only time you should have multiple assertions in a test function is if part of the test is dependent on the previous part being successful.
I use this for Selenium testing web pages. So, I might want to assert that I am in the right place every time I navigate to a new page. For instance, if I go to a web page, then log in, then change my profile, I would assert that I got to the right place when I logged in, because the test would no longer make sense if my login failed. This prevents me from getting additional error messages when only one problem was actually encountered.
On the other side, if I have two separate processes to test, I would not test one and then continue on to test the other in the same function, because an error in the first process would mask any problems in the second. Instead, I would write one test function for each process. (And, if one process depends on the success of the other, for instance posting something to a page and then removing the post, I would use the @depends annotation to prevent the second test from running if the first fails, as sketched below.)
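A minimal sketch of that @depends pattern (createPost() and removePost() are hypothetical helpers of the test class):

    public function testCreatePost()
    {
        $postId = $this->createPost('hello');   // hypothetical helper
        $this->assertNotNull($postId);
        return $postId;                          // handed to the dependent test below
    }

    /**
     * @depends testCreatePost
     */
    public function testRemovePost($postId)
    {
        // skipped automatically if testCreatePost fails
        $this->assertTrue($this->removePost($postId)); // hypothetical helper
    }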
In short, if your first assert failing does not make the second one impossible to test, then they should be in separate functions. (Yes, this might result in redundant code. When unit testing, forget all that you have learned about eliminating redundant code. That, or write non-test functions and call them from the test functions, though this can make unit tests harder to read, and thus harder to update when changes are made to the subject of the tests.)
I realize that this question is 2 years old, however the only answer was not very clear about why. I hope that this helps others understand unit testing better.

simple (non-unit) test framework, similar to .phpt, should evaluate output/headers/errors/results

I'm looking for a simpler test framework. I had a look at a few PHPUnit and SimpleTest scripts and I find the required syntactic sugar appalling. SnapTest sounded nice, but was just as cumbersome. Apache More::Test was too procedural, even for my taste. And Symfony lime-test was no exception in that regard.
BDD tools like http://everzet.com/Behat/#basics are very nice, but even two abstraction levels higher than desired.
Moreover I've been using throwaway test scripts till now. And I'm wondering if instead of throwing them away, there is a testing framework/tool which simplifies using them for automated tests. Specifically I'd like to use something that:
evaluates output (print/echo), or even return values/objects
serializes and saves it away as probe/comparison data
allows classifying that comparison output as a passed test or a failure
also collects headers, warnings or error messages (which might also be expected output)
in addition to a few $test->assert() or test::fail() states
Basically I'm too lazy to do the test framework's work: manually pre-defining, boolean-evaluating and classifying the expected output. Also I don't find it entertaining to needlessly wrap test methods into classes; plain include scripts or functions should suffice. Furthermore it shouldn't be difficult to autorun through the test scripts with a pre-initialized base and test environment.
The old .phpt scripts with their --EXPECT-- output come close, but still require too much manual setup. Also I'd prefer a web GUI to run the tests. Is there a modern revival of such test scripts? (Plus some header/error/result evaluation and eventually unit test::assert methods.)
Edit, I'll have to give an example. This is your typical PHPUnit test:
class Test_Something extends PHPUnit_Test_Case_Or_Whatever {
    function tearUp() {
        app::__construct(...);
    }
    function testMyFunctionForProperResults() {
        $this->assertFalse(my_func(false));
        $this->assertMatch(my_func("xyzABC"), "/Z.+c/");
        $this->assertTrue(my_func(123) == 321);
    }
}
Instead I'd like to use plain PHP with less intermingled test API:
function test_my_function_for_proper_results() {
    assert::false(my_func(false));
    print my_func("xyz_ABC");
    return my_func(123);
}
Well, that's actually three tests wrapped in one. But just to highlight: the first version needs manual testing. What I want is to send/return the test data to the test framework. It's the task of the framework to compare results, not just to be spoon-fed booleans. Or imagine I get a bloated array result or object chain, which I don't want to manually list in the test scripts.
For the record, I've now discovered Shinpuru.
http://arkanis.de/projects/shinpuru/
Which looks promising for real world test cases, and uses PHP5.3-style anonymous functions instead of introspection-class wrappers.
Have to say, it isn't obvious how your example of a simplified test case could be implemented. Unfortunately the convolutedness is, more or less, something that has to be lived with. That said, I've seen cases where PHPUnit is extended to simplify things, as well as to add web test runners, tests for headers, output, etc. (thinking SilverStripe here; they're doing a lot of what you want with PHPUnit). That might be your best bet. For example:
evaluates output (print/echo): enable output buffering and assert against the buffer result
collects headers, warnings or error messages: register your own handler that stores the error message, or wget against URLs and compare the result (headers and all)
Etc.
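For instance, the output-buffering idea boils down to something like this inside a PHPUnit test (render_page() is a made-up stand-in for legacy code that echoes its result):

    public function testRenderedPageContainsTitle()
    {
        ob_start();
        render_page('home');                 // code under test prints directly
        $html = ob_get_clean();

        $this->assertContains('<title>Home</title>', $html);
    }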
