I have recently started unit testing and am wondering: should I be writing unit tests for 100% code coverage?
This seems futile when I end up writing more unit testing code than production code.
I am working on a PHP CodeIgniter project, and sometimes it seems I write a huge amount of code just to test one small function.
For example, this unit test:
public function testLogin(){
    //setup
    $this->CI->load->library("form_validation");
    $this->realFormValidation = new $this->CI->form_validation;
    $this->CI->form_validation = $this->getMock("CI_Form_validation");
    $this->realAuth = new $this->CI->auth;
    $this->CI->auth = $this->getMock("Auth", array("logIn"));
    $this->CI->auth->expects($this->once())
        ->method("logIn")
        ->will($this->returnValue(TRUE));
    //test
    $this->CI->form_validation->expects($this->once())
        ->method("run")
        ->will($this->returnValue(TRUE));
    $_POST["login"] = TRUE;
    $this->CI->login();
    $out = $this->CI->output->get_headers();
    //check new header ends with dashboard
    $this->assertStringEndsWith("dashboard", $out[0][0]);
    //tear down
    $this->CI->form_validation = $this->realFormValidation;
    $this->CI->auth = $this->realAuth;
}

public function badLoginProvider(){
    return array(
        array(FALSE, FALSE),
        array(TRUE, FALSE)
    );
}

/**
 * @dataProvider badLoginProvider
 */
public function testBadLogin($formSubmitted, $validationResult){
    //setup
    $this->CI->load->library("form_validation");
    $this->realFormValidation = new $this->CI->form_validation;
    $this->CI->form_validation = $this->getMock("CI_Form_validation");
    //test
    $this->CI->form_validation->expects($this->any())
        ->method("run")
        ->will($this->returnValue($validationResult));
    $_POST["login"] = $formSubmitted;
    $this->CI->login();
    //check it went to the login page
    $out = output();
    $this->assertGreaterThan(0, preg_match('/Login/i', $out));
    //tear down
    $this->CI->form_validation = $this->realFormValidation;
}
For this production code:
public function login(){
    if($this->input->post("login")){
        $this->load->library('form_validation');
        $username = $this->input->post('username');
        $this->form_validation->set_rules('username', 'Username', 'required');
        $this->form_validation->set_rules('password', 'Password', "required|callback_userPassCheck[$username]");
        if ($this->form_validation->run() === FALSE) {
            $this->load->helper("form");
            $this->load->view('dashboard/login');
        }
        else {
            $this->load->model('auth');
            echo "valid";
            $this->auth->logIn($this->input->post('username'), $this->input->post('password'), $this->input->post('remember_me'));
            $this->load->helper('url');
            redirect('dashboard');
        }
    }
    else {
        $this->load->helper("form");
        $this->load->view('dashboard/login');
    }
}
Where am I going so wrong?
In my opinion, it's normal for there to be more test code than production code. But test code tends to be straightforward; once you get the hang of it, writing tests becomes almost a no-brainer.
Having said that, if you discover your test code is too complicated to write, or struggles to cover all the execution paths in your production code, that's a good indicator that some refactoring is due: your method may be too long, or attempt to do several things, or have too many external dependencies, etc.
Another point: it's good to have high test coverage, but it does not need to be 100% or some other very high number. Sometimes there is code that has no logic of its own, such as code that simply delegates tasks to other objects. In that case you can skip testing it and use the @codeCoverageIgnore annotation to exclude it from your coverage report.
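For instance, a thin delegating method can be excluded like this (a minimal sketch; the controller, library, and method names are invented for illustration):

class ReportController extends CI_Controller
{
    /**
     * Just delegates to the exporter, no logic of its own.
     *
     * @codeCoverageIgnore
     */
    public function download()
    {
        $this->load->library('report_exporter');
        $this->report_exporter->export();
    }
}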
In my opinion, it's logical that tests amount to much more code, because you have to cover multiple scenarios, provide test data, and check the results for every case.
Typically a test coverage of 80% is a good value. In most cases it's not necessary to test 100% of the code, because you should not test, for example, trivial setters and getters. Don't test just for the statistics ;)
The answer is: it depends, but generally no. If you are publishing a library, then lots of tests are important and can even help create the examples for the docs.
For internal projects you would probably want to focus your tests on complex functions and things that would be bad if they went wrong. For each test, ask what the value is in having the test here rather than in the parent function.
What you want to avoid testing is anything that relies on implementation details, such as private methods/functions; otherwise, if you change the structure, you'll find you have to repeatedly rewrite the entire suite of tests.
It is better to test at a higher level: the public functions, or anything at the boundary between modules. A few tests at the highest level you can manage should yield reasonable coverage and ensure the code works in the way it is actually called. This is not to say lower-level functions shouldn't have tests, but at that level it's more about checking edge cases than having a test for the typical case against every function.
Instead of creating tests to increase coverage, create tests to cover bugs as you find and fix them, and create tests for new functionality when you would have to test it manually anyway. Create tests to guard against bad things which must not happen. Fragile tests which easily break during refactoring should be removed or changed to be less dependent on the implementation of the function.
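For example, a test that pins down a bug you have just fixed can be very small (a sketch; the slugify() function and its expected output are made up here):

class SlugifyTest extends PHPUnit_Framework_TestCase
{
    // Regression test: accented characters used to produce an empty slug.
    public function testAccentedCharactersStillProduceASlug()
    {
        $this->assertEquals('creme-brulee', slugify('Crème Brûlée'));
    }
}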
I'm trying to get familiar with unit testing in PHP with a small API in Lumen.
Writing the first few tests was pretty nice with the help of some tutorials, but now I have encountered a point where I have to mock/stub a dependency.
My controller depends on a specific custom interface type hinted in the constructor.
Of course, I defined this interface/implementation-binding within a ServiceProvider.
public function __construct(CustomValidatorContract $validator)
{
    // App\Contracts\CustomValidatorContract
    $this->validator = $validator;
}

public function resize(Request $request)
{
    // Illuminate\Contracts\Validation\Validator
    $validation = $this->validator->validate($request->all());

    if ($validation->fails()) {
        $response = array_merge(
            $validation
                ->errors() // Illuminate\Support\MessageBag
                ->toArray(),
            ['error' => 'Invalid request data.']
        );

        // response is a global helper
        return response()->json($response, 400, ['Content-Type' => 'application/json']);
    }
}
As you can see, my CustomValidatorContract has a method validate() which returns an instance of Illuminate\Contracts\Validation\Validator (the validation result). This in turn returns an instance of Illuminate\Support\MessageBag when errors() is called. MessageBag then has a toArray()-method.
Now I want to test the behavior of my controller in case the validation fails.
/** @test */
public function failing_validation_returns_400()
{
    $EmptyErrorMessageBag = $this->createMock(MessageBag::class);
    $EmptyErrorMessageBag
        ->expects($this->any())
        ->method('toArray')
        ->willReturn(array());

    /** @var ValidationResult&\PHPUnit\Framework\MockObject\MockObject $AlwaysFailsTrueValidationResult */
    $AlwaysFailsTrueValidationResult = $this->createStub(ValidationResult::class);
    $AlwaysFailsTrueValidationResult
        ->expects($this->atLeastOnce())
        ->method('fails')
        ->willReturn(true);
    $AlwaysFailsTrueValidationResult
        ->expects($this->atLeastOnce())
        ->method('errors')
        ->willReturn($EmptyErrorMessageBag);

    /** @var Validator&\PHPUnit\Framework\MockObject\MockObject $CustomValidatorAlwaysFailsTrue */
    $CustomValidatorAlwaysFailsTrue = $this->createStub(Validator::class);
    $CustomValidatorAlwaysFailsTrue
        ->expects($this->once())
        ->method('validate')
        ->willReturn($AlwaysFailsTrueValidationResult);

    $controller = new ImageResizeController($CustomValidatorAlwaysFailsTrue);
    $response = $controller->resize(new Request);

    $this->assertEquals(400, $response->status());
    $this->assertEquals(
        'application/json',
        $response->headers->get('Content-Type')
    );
    $this->assertJson($response->getContent());

    $response = json_decode($response->getContent(), true);
    $this->assertArrayHasKey('error', $response);
}
This is a test that runs ok - but can someone please tell me if there is a better way to write this? It doesn't feel right.
Is this big stack of mock objects needed because I'm using a framework in the background? Or is there something wrong with my architecture that makes this feel so "overengineered"?
Thanks
What you are doing is not unit testing, because you are not testing a single unit of your application. This is an integration test, performed with a unit testing framework, which is why it intuitively looks wrong.
Unit testing and integration testing happen at different times, in different places, and require different approaches and tools: the former tests every single class and function of your code, while the latter couldn't care less about those; it just calls APIs and validates responses. Also, integration testing doesn't imply mocking anything, because its goal is to test how well your units integrate with each other.
You'll have a hard time maintaining tests like that, because every time you change CustomValidatorContract you'll have to fix all the tests involving it. This is how unit testing improves code design: it requires the design to be as loosely coupled as possible (so you can pick a single unit and use it without having to boot the entire app), to respect SRP and OCP, etc.
You don't need to test third-party code; pick an already tested library instead. You don't need to test side effects either, because the environment is just like a third-party service and should be tested separately (return response() is a side effect). It also seriously slows down the testing.
All of that leads to the idea that you only want to test your CustomValidatorContract in isolation. You don't even need to mock anything there: just instantiate the validator, give it a few sets of input data, and check how it goes.
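A sketch of that idea, assuming a concrete ImageResizeValidator class implementing CustomValidatorContract (the class name and the validation rules are invented here):

class ImageResizeValidatorTest extends \PHPUnit\Framework\TestCase
{
    public function test_missing_dimensions_fail_validation()
    {
        // No framework bootstrapping, no mocks: just the unit itself.
        $validator = new ImageResizeValidator();

        $result = $validator->validate(['height' => 100]);

        $this->assertTrue($result->fails());
        $this->assertArrayHasKey('width', $result->errors()->toArray());
    }
}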
This is a test that runs ok - but can someone please tell me if there is a better way to write this? It doesn't feel right. Is this big stack of mock objects needed because I'm using a framework in the background? Or is there something wrong with my architecture that makes this feel so "overengineered"?
The big stack of mock objects indicates that your test subject is tightly coupled to many different things.
If you want to support simpler tests, then you need to make the design simpler.
In other words, instead of Controller.resize being one enormous monolithic thing that knows all of the details about everything, think about a design where resize only knows about the surface of things, and how to delegate work to other (more easily tested) pieces.
This is normal, in the sense that TDD is a lot about choosing designs that support better testing.
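As an illustration, resize could be reduced to orchestration only, so that each collaborator can be tested on its own (a rough sketch; ImageResizer is a hypothetical collaborator, not part of the original code):

public function __construct(CustomValidatorContract $validator, ImageResizer $resizer)
{
    $this->validator = $validator;
    $this->resizer = $resizer;
}

public function resize(Request $request)
{
    $validation = $this->validator->validate($request->all());

    if ($validation->fails()) {
        // Turning errors into a 400 response is the only detail left in the controller.
        return response()->json(
            array_merge($validation->errors()->toArray(), ['error' => 'Invalid request data.']),
            400,
            ['Content-Type' => 'application/json']
        );
    }

    // Everything else is delegated to a separately testable service.
    return response()->json($this->resizer->resize($request->all()));
}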
I am trying to test some of my controllers through unit testing, but something strange is happening. I have the following code in my test case:
public function test_username_registration_too_short()
{
    $result = $this->action('POST', 'App\\Controllers\\API\\UserController@store', null, [
        'username' => 'foo'
    ]);
    $this->assertEquals('not_saved', $result->getContent());

    // $result = $this->action('POST', 'App\\Controllers\\API\\UserController@store', null, [
    //     'username' => 'foo'
    // ]);
    // $this->assertEquals('not_saved', $result->getContent());
}

public function test_username_registration_too_short_run_2()
{
    $result = $this->action('POST', 'App\\Controllers\\API\\UserController@store', null, [
        'username' => 'foo'
    ]);
    $this->assertEquals('not_saved', $result->getContent());
}
When I run this, the initial too_short test passes, but the exact same code on run 2 does not pass (it even manages to save the user). Yet if I put that same code twice in the same method (what is commented out now), it works perfectly. I have nothing in my setUp or tearDown methods, and I am a bit lost here.
The code in the controller is the following:
$user = new User(Input::all());

if ($user->save())
{
    return 'saved';
}

return 'not_saved';
I'm not going to stop repeating myself on this question. There's a similar answer to a (somewhat) similar question. TL;DR: don't use a unit testing framework for functional/integration testing.
This is the area of functional testing, and there is a fabulous framework for it called Behat. You should do your own research, but essentially, while PHPUnit is great at testing more or less independent blocks of functionality, it sucks at testing bigger things like full request execution. Later you will start experiencing issues with session errors, a misconfigured environment, etc., all because each request is supposed to be executed in its own separate space and you force it into doing the opposite. Behat, on the other hand, works in a very different way: for each scenario (post robot, view non-existing page) it sends a fresh request to the server and checks the result. It is mostly used for final testing of everything working together, by making assertions on the final result (response object / HTML / JSON).
If you want to test your code the proper way, consider using the right tools for the job. Once you know your way around Behat you'll fall in love with it, plus you can use PHPUnit from within Behat to make individual assertions.
Firstly, I will say that I come from the Java world (this is important, really).
I have been coding PHP for a while. One of the problems I have encountered is that, due to the lack of a compilation step, errors that could easily be detected at compile time (for example, the wrong number of parameters passed to a function) can sometimes pass silently.
Such errors could be detected as code coverage increases by adding unit tests. The question is: does it make sense, for example, to test constructors in order to check that the passed parameters are correct? I do not mean only the number of parameters, but also their content (for example, if a parameter is null, certain objects should throw an exception in order to avoid creating a "dirty" object).
The question is, am I too contaminated by years of Java code? Because, after all, increasing the code coverage to "discover" misused functions feels like a (really) primitive way of compiling.
Also, I would like to note that I already use a development environment (PHPStorm), and we are also using tools like PHPCodeSniffer.
Any ideas/suggestions?
This is a good question that can be answered on a number of levels:
Language characteristics
Test coverage
CASE tools
1. Language characteristics
As you have pointed out the characteristics of the PHP language differ markedly from the more strongly-typed languages such as Java. This raises a serious issue where programmers coming from the more strongly-typed languages such as Java and C# may not be aware of the implications of PHP's behaviour (such as those you have described). This introduces the possibility of mistakes on the part of the programmer (for example, a programmer who may have been less careful using Java because they know the compiler will catch incorrect parameters may not apply the appropriate care when developing in PHP).
Consequently, better programmer education/supervision is needed to address this issue (such as in-house company coding standards, pair programming, code review). It also (as you have pointed out) raises the question of whether test coverage should be increased to check for such mistakes as would have been caught by a compiler.
2. Test Coverage
The argument for test coverage is very project-specific. In the real world, the level of test coverage is primarily dictated by the error tolerance of the customer (which in turn is dictated by the consequences of an error occurring in your system). If you are developing software that is to run on a real-time control system, then obviously you will test more. In your question you identify PHP as the language of choice; this could apply equally to the ever-increasing number of web-enabled frontends for critical systems infrastructure. On the other side of the coin, if you are just developing a simple newsletter app for a model railroad club's website, then your customer may not care about the possibility of a bug in the constructor.
3. CASE Tools
Ultimately it would be desirable for a CASE tool to be available which can detect these errors, such as missing parameters. If there are no suitable tools out there, why not create one of your own? The creation of a CASE tool is not out of reach for most programmers, particularly if you can hook into an open-source parsing engine for your language. If you are open-source inclined this may be a good project to kick-start, or perhaps your company could market such a solution.
Conclusion
In your case whether or not to test the constructors basically comes down to the question: what will the consequences of a failure in my system be? If it makes financial sense to expend extra resources on testing your constructors in order to avoid such failures, then you should do so. Otherwise it may be possible to get by with lesser testing such as pair programming or code reviews.
Do you want the constructor to throw an exception if invalid parameters are passed? Do you want it to behave that same way tomorrow, and next week, and next year? Then you write a test to verify that it does.
Tests verify that your code behaves as you want it to. Failing on invalid parameters is code behavior just as much as calculating sales tax or displaying a user's profile page.
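A sketch of such a test, using a hypothetical Money class that rejects a null amount:

class MoneyTest extends PHPUnit_Framework_TestCase
{
    public function testConstructorRejectsNullAmount()
    {
        // Pin down today's behaviour so an accidental change fails loudly.
        $this->setExpectedException('InvalidArgumentException');
        new Money(null, 'EUR');
    }
}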
We test constructors, as well as the order of the parameters, the defaults when not provided, and then some actual settings. For instance:
class UTIL_CATEGORY_SCOPE extends UTIL_DEPARTMENT_SCOPE
{
    function __construct($CategoryNo = NULL, $CategoryName = NULL)
    {
        parent::__construct(); // Do Not Pass fields to ensure that the array is checked when all fields are defined.
        $this->DeclareClassFields_();
        $this->CategoryName = $CategoryName;
        $this->CategoryNo = $CategoryNo;
    }

    private function DeclareClassFields_()
    {
        $this->Fields['CategoryNo'] = new UTIL_ICAP_FIELD_PAIR_FIRST('CCL', 6, ML('Category'), 8);
        $this->Fields['CategoryName'] = new UTIL_ICAP_FIELD_PAIR_SECOND('CCL', 32, ML('Name'), 15, array(), array(), NULL, UTIL_ICAP_FIELD::EDIT_DENY, UTIL_ICAP_FIELD::UPDATE_DENY, 'DES');
    }
}
We then create our tests to check not only the constructor and its parameter order, but also that the class and its inheritance have not changed.
public function testObjectCreation()
{
    $CategoryInfo = new UTIL_CATEGORY_SCOPE();
    $this->assertInstanceOf('UTIL_CATEGORY_SCOPE', $CategoryInfo);
    $this->assertInstanceOf('UTIL_DEPARTMENT_SCOPE', $CategoryInfo);
    $this->assertInstanceOf('UTIL_DATA_STRUCTURE', $CategoryInfo); // Inherited from UTIL_DEPARTMENT_SCOPE
}

public function testConstructFieldOrder()
{
    $CategoryInfo = new UTIL_CATEGORY_SCOPE(1500, 'Category Name');
    $this->assertEquals(1500, $CategoryInfo->CategoryNo);
    $this->assertEquals('Category Name', $CategoryInfo->CategoryName);
}

public function testConstructDefaults()
{
    $CategoryInfo = new UTIL_CATEGORY_SCOPE();
    $this->assertNull($CategoryInfo->CategoryNo);
    $this->assertNull($CategoryInfo->CategoryName);
}

public function testFieldsCreated()
{
    $CategoryInfo = new UTIL_CATEGORY_SCOPE();
    $this->assertArrayHasKey('CategoryNo', $CategoryInfo->Fields);
    $this->assertArrayHasKey('CategoryName', $CategoryInfo->Fields);
    $this->assertArrayHasKey('DeptNo', $CategoryInfo->Fields); // Inherited from Parent
    $this->assertArrayHasKey('DeptName', $CategoryInfo->Fields); // Inherited from Parent
}
The next weirdness I'm seeing with PHPUnit:
class DummyTest extends PHPUnit_Framework_TestCase {
    public function testDummy() {
        $this->assertTrue(false, 'assert1');
        $this->assertTrue(false, 'assert2');
    }

    public function testDummy2() {
        $this->assertTrue(false, 'assert3');
    }
}
As soon as the first assertion fails in a test, the rest of the test is ignored.
So (with a simple call of phpunit DummyTest.php):
The above code will display 2 tests, 2 assertions, 2 failures. What?
If I make all the tests pass, then I'll get OK (2 tests, 3 assertions). Good.
If I only make all the tests pass except for assert2, I get 2 tests, 3 assertions, 1 failure. Good.
I don't get it, but PHPUnit's been around for ages, surely it has to be me?
Not only are the counts not what I'd expect, but only the error message for the first failed assert in the code above is displayed.
(BTW, I'm analyzing the xml format generated by PHPUnit for CI rather than testing real code, hence the practice of multiple assertions in the one test.)
First off: That is expected behavior.
Every test method will stop executing once an assertion fails.
For an example of where the opposite would be very annoying:
class DummyTest extends PHPUnit_Framework_TestCase {
    public function testDummy() {
        $foo = get_me_my_foo();
        $this->assertInstanceOf("MyObject", $foo);
        $this->assertTrue($foo->doStuff());
    }
}
If PHPUnit didn't stop after the first assertion, you'd get an E_FATAL (call to a member function on a non-object) and the whole PHP process would die.
So, for formulating nice assertions and small tests, it's more practical that way.
For another example:
When asserting that an array has a size of X and then asserting that it contains a, b and c, you don't care that it doesn't contain those values if the size is 0.
If a test fails you usually just need the message "why it failed" and then, while fixing it, you'll automatically make sure the other assertions also pass.
On an additional note, there are also people who argue that you should only have one assertion per test case. While I don't practice that (and I'm not sure if I like it), I wanted to note it ;)
Welcome to unit testing. Each test function should test one element or process (process being a series of actions that a user might take). (This is a unit, and why it is called "unit testing.") The only time you should have multiple assertions in a test function is if part of the test is dependent on the previous part being successful.
I use this for Selenium testing of web pages. So, I might want to assert that I am in the right place every time I navigate to a new page. For instance, if I go to a web page, then log in, then change my profile, I would assert that I got to the right place when I logged in, because the test would no longer make sense if my login failed. This prevents me from getting additional error messages when only one problem was actually encountered.
On the other side, if I have two separate processes to test, I would not test one and then continue on to test the other in the same function, because an error in the first process would mask any problems in the second. Instead, I would write one test function for each process. (And if one process depended on the success of the other, for instance posting something to a page and then removing the post, I would use the @depends annotation to prevent the second test from running if the first fails.)
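A minimal sketch of that post/remove example (the $this->api helper and its methods are hypothetical):

public function testCreatePost()
{
    $post = $this->api->createPost('Hello world');
    $this->assertNotNull($post->id);

    return $post; // handed to the dependent test below
}

/**
 * @depends testCreatePost
 */
public function testRemovePost($post)
{
    // Skipped automatically by PHPUnit if testCreatePost failed.
    $this->assertTrue($this->api->removePost($post->id));
}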
In short, if your first assert failing does not make the second one impossible to test, then they should be in separate functions. (Yes, this might result in redundant code. When unit testing, forget all that you have learned about eliminating redundant code. That, or make non-test functions and call them from the test functions. This can make unit tests harder to read, and thus harder to update when changes are made to the subject of the tests though.)
I realize that this question is two years old; however, the only answer was not very clear about why. I hope this helps others understand unit testing better.
I'm looking for a simpler test framework. I had a look at a few PHPUnit and SimpleTest scripts, and I find the required syntactic sugar appalling. SnapTest sounded nice, but was just as cumbersome. Apache More::Test was too procedural, even for my taste. And Symfony's lime-test was no exception in that regard.
BDD tools like http://everzet.com/Behat/#basics are very nice, but sit even two abstraction levels higher than desired.
Moreover, I've been using throwaway test scripts until now, and I'm wondering whether, instead of throwing them away, there is a testing framework/tool which simplifies using them for automated tests. Specifically, I'd like to use something that:
evaluates output (print/echo), or even return values/objects
serializes and saves it away as probe/comparison data
lets me classify that comparison output as a passed test or a failure
also collects headers, warning or error messages (which might also be expected output)
in addition to a few $test->assert() or test::fail() states
Basically I'm too lazy to do the test framework's work: manually pre-defining, boolean-evaluating, and classifying the expected output. I also don't find it entertaining to needlessly wrap test methods into classes; plain include scripts or functions should suffice. Furthermore, it shouldn't be difficult to run automatically through the test scripts with a pre-initialized base and test environment.
The old .phpt scripts with their --EXPECT-- output come close, but still require too much manual setup. Also, I'd prefer a web GUI to run the tests. Is there a modern rehash of such test scripts? (plus some header/error/result evaluation and possibly unit test::assert methods)
Edit: I'll have to give an example. This is your typical PHPUnit test:
class Test_Something extends PHPUnit_Test_Case_Or_Whatever {
    function tearUp() {
        app::__construct(...);
    }

    function testMyFunctionForProperResults() {
        $this->assertFalse(my_func(false));
        $this->assertMatch(my_func("xyzABC"), "/Z.+c/");
        $this->assertTrue(my_func(123) == 321);
    }
}
Instead I'd like to use plain PHP with less intermingled test API:
function test_my_function_for_proper_results() {
    assert::false(my_func(false));
    print my_func("xyz_ABC");
    return my_func(123);
}
Well, that's actually three tests wrapped into one. But just to highlight: in the first version I have to evaluate the results manually. What I want is to send/return the test data to the test framework; it's the framework's task to compare results, not just to be spoon-fed booleans. Or imagine I get a bloated array result or object chain, which I don't want to list manually in the test scripts.
For the record, I've now discovered Shinpuru.
http://arkanis.de/projects/shinpuru/
It looks promising for real-world test cases, and uses PHP 5.3-style anonymous functions instead of introspection-class wrappers.
I have to say, it isn't obvious how your example of a simplified test case could be implemented. Unfortunately, the convolutedness is, more or less, something that has to be lived with. That said, I've seen cases where PHPUnit is extended to simplify things, as well as to add web test runners, tests for headers, output, etc. (thinking of SilverStripe here - they're doing a lot of what you want with PHPUnit). That might be your best bet. For example:
evaluate output (print/echo): enable output buffering and assert against the buffer result (see the sketch after this list)
collect headers, warnings or error messages: register your own handler that stores the error message
wget against URLs and compare the result (headers and all)
Etc.
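For instance, the output-buffering idea might look like this (render_widget() stands in for any legacy code that echoes its result directly):

public function testRenderWidgetOutput()
{
    ob_start();
    render_widget(42);            // code under test echoes directly
    $output = ob_get_clean();     // capture and empty the buffer

    $this->assertContains('widget', $output);
}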