Unit testing accessors (getters and setters) - PHP

Given the following methods:
public function setFoo($foo) {
    $this->_foo = $foo;
    return $this;
}

public function getFoo() {
    return $this->_foo;
}
Assuming they may be changed to be more complex in the future:
How would you write unit tests for those methods?
Just one test method?
Should I skip those tests?
What about code coverage?
How about the @covers annotation?
Maybe some universal test method to implement in the abstract test case?
(I use Netbeans 7)
This seems like a waste of time, but I wouldn't mind if the IDE generated those test methods automatically.
To quote from a comment on Sebastian Bergmann's blog:
(it's like testing getters and setters -- fail!). In any case, if they were to fail, wouldn't the methods that depend on them fail?
So, what about the code coverage?

If you do TDD you should write a test for getters and setters, too. Do not write a single line of code without a test for it - even if your code is very simple.
It's a kind of religious war whether to use a tandem of getter and setter for your test or to isolate each by accessing protected class members using your unit test framework's capabilities. As a black box tester I prefer to tie my unit test code to the public API instead of tying it to the concrete implementation details. I expect change. I want to encourage the developers to refactor existing code. And class internals should not affect "external code" (unit tests in this case). I don't want to break unit tests when internals change; I want them to break when the public API changes or when behavior changes. Granted, a failing tandem test does not pinpoint the one-and-only source of the problem: I have to look at the getter AND the setter to figure out what caused it. But most of the time your getter is very simple (fewer than 5 lines of code: e.g. a return and an optional null-check with an exception), so checking it first is no big deal and not time consuming. And checking the happy path of the setter is most of the time only a little more complex (even if you have some validation checks).
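To illustrate, such a tandem test through the public API might look like this minimal sketch (PHPUnit; the class name Foo and its accessors are taken from the question above):

use PHPUnit\Framework\TestCase;

class FooTest extends TestCase
{
    public function testFooSurvivesTheSetGetRoundTrip()
    {
        $object = new Foo();

        // The fluent setter is expected to return the object itself ...
        $this->assertSame($object, $object->setFoo('bar'));

        // ... and the getter reflects what the setter stored. Only the
        // public API is exercised; the protected $_foo stays internal.
        $this->assertSame('bar', $object->getFoo());
    }
}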
Try to isolate your test cases - write a test for a SUT (subject under test) that validates its correctness without relying on other methods (except in the tandem case above). The more you isolate the test, the more precisely your tests spot the problem.
Depending on your test strategy you may want to cover the happy path only (pragmatic programmer), or the sad paths, too. I prefer to cover all execution paths. When I think I have discovered all execution paths I check code coverage to identify dead code (not to identify whether there are uncovered execution paths - 100% code coverage is a misleading indicator).
It is best practice for black box testers to use PHPUnit in strict mode and to use @covers to hide collateral coverage.
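As a sketch of what that looks like (again assuming the question's Foo class), @covers restricts which executed lines count toward coverage:

use PHPUnit\Framework\TestCase;

class FooCoversTest extends TestCase
{
    /**
     * @covers Foo::setFoo
     * @covers Foo::getFoo
     */
    public function testAccessors()
    {
        // Only lines inside setFoo()/getFoo() count as covered here;
        // anything else executed is collateral coverage and is hidden.
        $this->assertSame('bar', (new Foo())->setFoo('bar')->getFoo());
    }
}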
When you write unit tests, your test of class A should be executed independently of class B. So your unit tests for class A should not call / cover methods of class B.
If you want to identify obsolete getters/setters and other "dead" methods (which are not used by production code), use static code analysis for that. The metric you are interested in is called "afferent coupling at method level (MethodCa)". Unfortunately this metric (ca) is not available at method level in PHP Depend (see: http://pdepend.org/documentation/software-metrics/index.html and http://pdepend.org/documentation/software-metrics/afferent-coupling.html). If you really need it, feel free to contribute it to PHP Depend. An option to exclude calls from the same class would be helpful to get a result without "collateral" calls. If you identify a "dead" method, try to figure out whether it is meant to be used in the near future (the counterpart of another method that has a @deprecated annotation); otherwise remove it. In case it is used in the same class only, make it private / protected. Do not apply this rule to library code.
Plan B:
If you have acceptance tests (integration tests, regression tests, etc.) you can run those tests without running unit tests at the same time and without PHPUnit's strict mode. This can produce a code coverage result very similar to what you would get by analysing your production code. But in most cases your non-unit tests do not exercise the code as thoroughly as your production code does. It depends on your discipline whether this plan B is "equal enough" to production code to get a meaningful result.
Further reading:
- Book: Pragmatic Programmer
- Book: Clean Code

Good question.
I usually try not to test getters & setters directly, since I see a greater benefit in testing only the methods that actually do something.
Especially when not using TDD, this has the added benefit of showing me setters that I don't use in my unit tests, which tells me that either my tests are incomplete or that the setter is not used/needed: "If I can execute all the 'real' code without using that setter, why is it there?"
When using fluent setters I sometimes write a test checking the 'fluent' part of the setters, but usually that is covered in other tests.
To answer your list:
Just one test method?
That is my least favorite option. All or none. Testing only one is not easy for other people to understand and looks 'random', or needs to be documented somehow.
Edit after comment:
Yes, for "trivial" get/set testing I'd only use one method per property maybe depending on the case even only one method for the whole class (for value objects with many getters and setters I don't want to write/maintain many tests)
How would you write unit tests for those methods?
Should I skip those tests?
I wouldn't skip them. Maybe the getters, depending on how many you have (I tend to write only getters I actually need), but the task of having a class completely covered shouldn't fail because of getters.
What about code coverage?
How about the @covers annotation?
With @covers my take is always "use it everywhere or don't use it at all". Mixing the two 'styles' of testing takes away some of the benefits of the annotation and looks 'unfinished' to me.
Maybe some universal test method to implement in the abstract test case?
For something like value objects that could work nicely. It might break (or get more complicated) once you pass in objects / arrays with type hinting, but I'd personally prefer it over writing manual tests for 500 getters and setters.
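A sketch of what such a universal accessor test could look like, assuming each subclass supplies the subject and a property/value list through a data provider:

use PHPUnit\Framework\TestCase;

abstract class AccessorTestCase extends TestCase
{
    /** Create the object whose accessors are under test. */
    abstract protected function createSubject();

    /** @return array of [propertyName, sampleValue] pairs */
    abstract public function accessorProvider();

    /**
     * @dataProvider accessorProvider
     */
    public function testAccessorRoundTrip($property, $value)
    {
        $subject = $this->createSubject();
        $setter  = 'set' . ucfirst($property);
        $getter  = 'get' . ucfirst($property);

        $subject->$setter($value);

        $this->assertSame($value, $subject->$getter());
    }
}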

This is a common question but strangely can't find a dupe on SO.
You could write unit tests for accessors, but the majority of practitioners do not. That is, if the accessors do not contain any custom logic, I would not write unit tests to verify that field access works.
Instead I would rely on the consumers of these accessors to ensure that the accessors work. E.g. if getFoo and setFoo don't work, the callers of these methods should break. So by writing unit tests for the calling methods, the accessors get verified.
This also means that code coverage should not be a problem. If you find accessors that are not covered after all test suites are run, maybe they are redundant / unused. Delete them.
Try to write a test that illustrates a scenario where a client will use that accessor. E.g. the snippet below (C#/NUnit) shows how the tooltip property for the pause button toggles based on its current mode.
[Test]
public void UpdatesTogglePauseTooltipBasedOnState()
{
    Assert.That(_mainViewModel.TogglePauseTooltip, Is.EqualTo(Strings.Main_PauseAllBeacons));
    _mainViewModel.TogglePauseCommand.Execute(null);
    Assert.That(_mainViewModel.TogglePauseTooltip, Is.EqualTo(Strings.Main_ResumeAllBeacons));
    _mainViewModel.TogglePauseCommand.Execute(null);
    Assert.That(_mainViewModel.TogglePauseTooltip, Is.EqualTo(Strings.Main_PauseAllBeacons));
}

Related

phpspec unit testing - using ioc / service registry for delivering the concrete class to test

I am new to testing and I'm not sure I'm going about this the right way:
I don't want to unit test a specific class, but rather whatever class gets resolved out of my IoC container. In the IoC container I bind my interfaces to concrete classes, like so:
Example (I'm using Laravel 5):
// in a service provider
public function register() {
    $this->app->bind('FooInterface', function() {
        return new SomeConcreteFoo;
    });
}
Then I want to write unit tests against FooInterface and not SomeConcreteFoo, which could be swapped out for some other class at a later point.
The reason I want to do this is that it seems to me that the relevant testing should target whatever my IoC container returns, since that is what I'll be using in the application. It also seems to me that the testing should be done at the interface level, since that is where I define the expectations the rest of the app has of my class.
I'm having a hard time finding any information on how to do this, which suggests to me that I might be thinking about this the wrong way. For instance, maybe what I'm trying to accomplish is more like an integration test than a unit test.
So the first question is: am I thinking about testing this the right way? In case I'm not, do you have any suggestions regarding best practice for an alternative test path?
The second question: in case my thinking is sound, how do I set up phpspec to make use of the Laravel IoC container, so that I can test against whatever the IoC returns?
I don't want to unit test a specific class, but rather whatever class gets resolved out of my IoC container. In the IoC container I bind my interfaces to concrete classes, like so [...]
This is not how unit tests are written. Unit testing is about describing a class behaviour in isolation, so the only real object actually created is the class under test (and sometimes simple value objects).
Then I want to write unit tests against FooInterface and not SomeConcreteFoo, which could be swapped out for some other class at a later point.
This is indeed how you should write your unit tests. Prefer interfaces for collaborators.
Each mocking framework supports this feature, and will create test doubles for you, without forcing you to provide specific implementations.
class BarSpec extends ObjectBehavior
{
    function it_does_amazing_things(FooInterface $foo)
    {
        $results = ['a', 'b', 'c'];

        $foo->find('something')->willReturn($results);

        $this->findMeSomething()->shouldReturn($results);
    }
}
In this particular example, PhpSpec will use Prophecy (its mocking framework) to create a test double of FooInterface and inject it into the example method. What you do with this object determines whether it's a fake, a stub or a mock.
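For instance, building on the example above, configuring only a return value makes the double a stub, while adding an expectation turns it into a mock (a sketch using the same assumed FooInterface):

function it_passes_the_query_to_its_collaborator(FooInterface $foo)
{
    $foo->find('something')->willReturn(['a']); // stub: canned answer
    $foo->find('something')->shouldBeCalled();  // mock: expectation

    $this->findMeSomething();
}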
The reason I want to do this is that it seems to me that the relevant testing should target whatever my IoC container returns, since that is what I'll be using in the application.
As explained above, unit tests focus on the behaviour of a single class. Its collaborators are usually faked. This is for several reasons. One of them is speed. Another one is feedback: if a test fails, we get clear feedback about which class is broken. If you were creating all collaborators instead of using test doubles, a single bug could make your whole test suite red. I won't even mention how hard it would be to maintain and create all the needed objects (although containers could help here).
Remember that writing unit tests is more a design activity, rather than a testing activity.
For instance, maybe what I'm trying to accomplish is more like an integration test than a unit test.
Indeed. Read more about the test pyramid. Most of your tests should be unit tests. Then you should have some number of integration and acceptance tests, which exercise more than a single class at once. The reason you'd want more unit tests than integration tests is that the latter are more fragile and harder to maintain/change.
Use PHPUnit for integration tests. PhpSpec is not the right tool for this job. PhpSpec is great for designing your classes (writing unit tests), especially if you do it test-first.
The second question: in case my thinking is sound, how do I set up phpspec to make use of the Laravel IoC container, so that I can test against whatever the IoC returns?
You don't. You could think of using the container in integration tests though.
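A hedged sketch of such an integration test, assuming Laravel 5's base test case (which boots the application and exposes $this->app) and the FooInterface from the question:

// extends Laravel's base TestCase, e.g. tests/TestCase.php
class FooIntegrationTest extends TestCase
{
    public function testContainerResolvedFooHonoursTheContract()
    {
        // Exercise whatever concrete class the container is bound to.
        $foo = $this->app->make('FooInterface');

        $this->assertInternalType('array', $foo->find('something'));
    }
}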
Some reading:
Test Driven Development by Example by Kent Beck
Growing Object-Oriented Software, Guided by Tests by Steve Freeman
PHP Test Doubles Patterns with Prophecy
My top ten favourite PhpSpec limitations
Conceptual difference between Mockery and Prophecy
Economy of Tests

Unit Testing (PHP): When to fake/mock dependencies and when not to

Is it better to fake dependencies (for example Doctrine) for unit tests, or to use the real ones?
In a unit test, you use only ONE real instance of a class, and that is the class that you want to test.
ALL dependencies of that class should be mocked, unless there is a reason not to.
A reason not to mock would be if data objects are being used that have no dependencies themselves - you can use the real object and test whether it received the correct data afterwards.
Another reason not to mock would be if the configuration of the mock is too complicated - in that case, you have a reason to refactor the code instead, because if mocking a class is too complicated, the API of that class might be too complicated, too.
But the general answer: You want to always mock every dependency, every time.
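A minimal sketch of that rule with PHPUnit's built-in mocks (ReportRepository and ReportGenerator are hypothetical names, not a real API):

use PHPUnit\Framework\TestCase;

class ReportGeneratorTest extends TestCase
{
    public function testRendersTotalsFromTheRepository()
    {
        // The dependency is mocked ...
        $repository = $this->createMock(ReportRepository::class);
        $repository->method('findTotals')->willReturn([42]);

        // ... so the class under test is the only real instance.
        $generator = new ReportGenerator($repository);

        $this->assertSame('Totals: 42', $generator->render());
    }
}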
I'll give you an example for the "too-complicated-so-refactor" case.
I was using a "Zend_Session_Namespace" object for internal data storage of a model. That instance got injected into the model, so mocking was not an issue.
But the internal implementation of the real "Namespace" class made me mock all the calls to __set and __get in the exact order in which they were used in the model. And that sucked, because every time I decided to reorder the reading and writing of a value in my code, I had to change the mocking in the tests, although nothing was broken. Refactoring the code should not lead to broken tests or force you to change them.
The refactoring added a new object that separates the "Zend_Session_Namespace" from the model. I created an object that extends "ArrayObject" and contains the "Namespace". On creation, all the values get read from the Namespace and added to the ArrayObject, and on every write the value gets passed to the Namespace object as well.
I now had the situation that I could use a real extended ArrayObject for all my tests, which in itself only needed an unconfigured mocked instance of "Zend_Session_Namespace", because I did not need to test whether the values were correctly stored in the session when I tested the model. I only needed a data store that gets used inside the model.
To test that the session gets correctly read and written, I have tests for that ArrayObject itself.
So in the end I am using a real instance of the model, and a real instance of the data store together with a mocked instance of "Zend_Session_Namespace" which does nothing. I deliberately chose to separate "model stuff" and "session save stuff" which had been mixed into the model class before -> "single responsibility principle".
The testing really got easier that way. And I'd say that this is also a code smell: If creating and configuring the mock classes is complicated, or needs a lot of changes when you change the tested class, it is time to think about refactoring. There is something wrong there.
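A rough sketch of that separation (names are illustrative and the real code differed; this assumes ZF1's Zend_Session_Namespace, whose getIterator() returns an ArrayObject):

// The model talks only to this store; the session hides behind it.
class SessionBackedStore extends ArrayObject
{
    private $namespace;

    public function __construct(Zend_Session_Namespace $namespace)
    {
        $this->namespace = $namespace;
        // Read all session values once, up front.
        parent::__construct($namespace->getIterator()->getArrayCopy());
    }

    public function offsetSet($key, $value)
    {
        parent::offsetSet($key, $value);
        // Mirror every write back into the session.
        $this->namespace->$key = $value;
    }
}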
Mocking should be done for a reason. Good reasons are:
You cannot easily make the depended-on component (DOC) behave as intended for your tests.
Calling the DOC causes non-deterministic behaviour (date/time, randomness, network connections).
The test setup is overly complex and/or maintenance intensive (e.g. a need for external files).
The original DOC brings portability problems for your test code.
Using the original DOC causes unacceptably long build / execution times.
The DOC has stability (maturity) issues that make the tests unreliable, or, worse, the DOC is not even available yet.
For example, you (typically) don't mock standard library math functions like sin or cos, because they don't have any of the above-mentioned problems.
Why is it recommendable to avoid mocking where unnecessary?
For one thing, mocking increases test complexity.
Secondly, mocking makes your tests dependent on the inner workings of your code, namely on how the code interacts with the DOCs. That would be acceptable for white box tests, where the implemented algorithm is tested, but it is not desirable for black box tests.

Writing mocks/stubs for an object before you have written the class for that object?

I'm designing a class that has two dependencies. One of the dependency classes has been written and tested. The other has not yet been written.
It has occurred to me that, because the remaining dependency will be written to facilitate the class that will use it, I should write the latter first and design the interface of the former as I go along, learning what it should do.
That seems to me to be a great way to write code. After all, as long as the main class gets a mock in its constructor, I can write and test it without it being aware that its dependency doesn't exist; then I can create the dependency once I am sure I know what I need.
So: how do I do this? Create a skeleton class that I modify as I go along? Perhaps something like:
class NonExistantSkeleton
{
    public function requiredMethod1()
    {
    }

    public function newlyDiscoveredRequirement()
    {
    }
}
and then mock it using PHPUnit, setting up stubs, etc., to keep my class under development happy?
Is this the way to go?
It seems like a nice way to develop code - and seems to me to make more sense than developing a dependency without really knowing for sure how it's going to be used.
In short:
Yes. At least that's what I'm doing right now.
Longer Version:
If the expected collaborators of your class don't exist at the point in time when you need them in your tests for the class you are building, you have a few options:
Mock non existing classes (which phpunit can do)
Create class skeletons and mock those
Just create interfaces and get mocks for those (which phpunit can do too)
Maybe you don't need any of the above depending on the object
If you program against an interface anyway, then all you need to do is create that interface and tell PHPUnit to create a stub/mock from it (a sketch follows the list below):
+ No new class without a test
+ Using interfaces when appropriate is considered nicer/better than just type-hinting against concrete classes
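A sketch of that interface-first variant (PaymentGateway and Order are invented names; Order would be the class under development):

use PHPUnit\Framework\TestCase;

// The collaborator exists only as an interface so far.
interface PaymentGateway
{
    public function charge($amount);
}

class OrderTest extends TestCase
{
    public function testCheckoutChargesTheGateway()
    {
        // PHPUnit builds the test double straight from the interface.
        $gateway = $this->createMock(PaymentGateway::class);
        $gateway->expects($this->once())
                ->method('charge')
                ->with(100);

        $order = new Order($gateway); // the class under test
        $order->checkout(100);
    }
}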
When mocking non-existing classes you get some drawbacks that I don't like:
- High mock maintenance cost
- Changing the methods on those classes is slow and tedious
- Once you have created the real class you have to rework the mocks again
so I'd advise against that.
The middle way would be to just create the empty class skeleton with its methods and use those for mocking.
I quite like that approach in cases where there is no interface to hint against, as it is fast and produces stable test code.
Having barebone classes with public APIs is, for me, no violation of TDD.
There are classes you don't need to mock.
Data transfer objects and value objects can always be created anywhere using new in your production code, so your tests can also just use the real objects.
It helps keep your tests a little cleaner, as you don't need to mock/expect a lot of getter/setter methods and so on.
If you follow a test-driven development methodology then the usual approach is as follows:
Figure out what your classes are meant to do, and what their public-facing APIs should be.
Implement "empty" classes that consist of nothing but the public methods signitures with empty bodies (as you have done in the code example you gave).
Work out an implementation strategy. This means working out which classes depend on each other and implementing them in an order that ensures a dependent class isn't implemented until the classes it depends on are finished, or at least sufficiently functional to develop against. This means doing the classes with no dependencies first, then the classes that depend only on the completed classes, and so on.
Write your tests. It's possible to write the tests now because you know what the black box for each of your classes looks like, what it needs to take as input and what it's supposed to return as output.
Run the tests. You should get 0% success, but also 100% code coverage. This is now your baseline.
Start to implement your classes according to your implementation strategy. Run your unit tests from time to time during this process, say once you've got a class completed, to make sure that it meets its specification as laid down in the unit test. Ideally, each test should show an increase in test passes whilst maintaining 100% code coverage.
EDIT: As edorian pointed out, PHP interfaces are a huge help here because PHPUnit can generate mocks and stubs from interfaces as well as from classes. They're also an excellent tool in reducing coupling and improving substitutability in general. They allow you to substitute any class that implements the expected interface, instead of just subclasses of the expected class.

Unit Testing: Specific testing & Flow of Control

I am quite new to unit testing and testing in general. I am developing with PHPUnit, but as my question is more general / a design question, the actual environment shouldn't be of too much importance.
I assume that it is good practice to write your test cases as specifically as possible. For example (each assertion more specific than the last):
assertNotEmpty($myObject);              // myObject is not empty
assertInternalType('array', $myObject); // myObject is an array
assertGreaterThan(0, count($myObject)); // myObject actually has entries
If that is correct, here is my question:
Is it an accepted practice to write some flow control inside a test case if the state of the object one is testing against depends on external sources (i.e. a DB) - or even in general?
Like:
if ($myObject !== null) {
    if (count($myObject) > 0) {
        // assert some business logic
    } else {
        // assert some different business logic
    }
}
Is this kind of flow control inside a test case admissible, or is it a "code smell" that should be avoided? If it is OK, are there any tips or practices which should be kept in mind here?
Paul's answer addresses test method scope and assertions, but one thing your question's code implied is that you would test A if the returned object had value X but test B if it had value Y. In other words, your test would expect multiple values and test different things based on what it actually got.
In general, you will be a happier tester if each of your tests has a single, known, correct value. You achieve this by using fixed, known test data, often by setting it up inside the test itself. Here are three common paths:
Fill a database with a set of fixed data used by all tests. This will evolve as you add more tests and functionality, and it can become cumbersome to keep it up-to-date as things move. If you have tests that modify the data, you either need to reset the data after each test or roll back the changes.
Create a streamlined data set for each test. During setUp() the test drops and recreates the database using its specific data set. It can make writing tests easier initially, but you still must update the datasets as the objects evolve. This can also make running the tests take longer.
Use mocks for your data access objects when not testing those DAOs directly. This allows you to specify in the test exactly what data should be returned and when. Since you're not testing the DAO code, it's okay to mock it out. This makes the tests run quickly and means you don't need to manage data sets. You still have to manage the mock data, however, and write the mocking code. There are many mocking frameworks depending on your preference, including PHPUnit's own built-in one.
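As an illustration of the third path, a sketch where the DAO is mocked so the test sees one fixed, known record (UserDao and UserService are made-up names):

use PHPUnit\Framework\TestCase;

class UserServiceTest extends TestCase
{
    private $service;

    protected function setUp(): void
    {
        // Fixed, known data: exactly one active user.
        $dao = $this->createMock(UserDao::class);
        $dao->method('findActive')->willReturn([
            ['id' => 1, 'name' => 'Alice'],
        ]);

        $this->service = new UserService($dao);
    }

    public function testCountsActiveUsers()
    {
        // A single, known, correct expected value - no flow control.
        $this->assertSame(1, $this->service->countActive());
    }
}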
It's okay to have SOME control flow within your test cases, but in general, understand that your unit tests will work out best if they are disjoint, that is, they each test for different things. The reason this works out well is that when your test cases fail, you can see precisely what the failure is from the test case that failed, as opposed to needing to dig deeper inside a larger test case to see what went wrong. The usual metric is a single assertion per unit test case. That said, there are exceptions to every rule, and that's certainly one of them; there's nothing necessarily wrong with having a couple of assertions in your test case, particularly when the setup / teardown of the test scenario is especially difficult. However, the real code smell you want to avoid is the situation where you have one "test" which tests all the code paths.

Unit testing and encapsulation

I'm trying to get into unit testing, but there's one thing bothering me.
I have a PHP class which I want to unit test. It takes some parameters and then spits out HTML. The problem is that the main functionality is calculating some values and conditions, and these are what I want to test. But I have put this in a private method, because normally nobody needs to know about it. But this way I am not able to unit test the class, because I have no means of testing the result of the method.
I have found this article about the subject. The conclusion of the article is using reflection to test the private methods.
Where do you stand on this subject?
You should have the logic in its own class and then unit test that class, so you don't have to reach through the HTML in order to test the logic.
As a rule:
You should never test private methods. Private methods exist in order to make the public methods pass their tests.
If you can delete the private methods without breaking the public methods, you don't need the private methods and can delete them.
If you can't delete the private methods without breaking the public methods, then the private methods are being tested.
If you follow the practice of TDD, it would be hard to get into this situation because every line of code is written to make unit tests pass. There should be no "stray" code within your class.
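A minimal sketch of that extraction, with made-up names: the calculation that used to be a private method becomes a small class with a public API, and the HTML-producing class merely delegates to it:

// Testable in isolation, through its public API.
class PriceCalculator
{
    public function totalFor(array $items)
    {
        $total = 0;
        foreach ($items as $item) {
            $total += $item['price'] * $item['quantity'];
        }
        return $total;
    }
}

// Only concerned with presentation; the logic lives elsewhere.
class InvoiceRenderer
{
    private $calculator;

    public function __construct(PriceCalculator $calculator)
    {
        $this->calculator = $calculator;
    }

    public function render(array $items)
    {
        return '<p>Total: ' . $this->calculator->totalFor($items) . '</p>';
    }
}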
I agree with Tormod; private methods should not be tested. Separating the logic from the presentation is a good idea in general and would allow you to test the logic separately from the presentation. Also, writing tests for the logic is a really good way of catching subtle cases where the logic and presentation aren't properly separated.
(Using reflection to test private methods sounds like a really bad idea to me.)
Unit testing is about improving the probability of correctness of execution.
Encapsulation is about minimising the number of potential dependencies with the highest change propagation probability.
Unit testing is about runtime; encapsulation is about source code.
They're practically orthogonal: neither should influence the other. Making a private method public just to test it is not a good idea: that's unit testing unnecessarily degrading your encapsulation.
Copy your entire source code to a test directory and then remove all instances of the modifier "private". Then write your tests against this deprivatised directory. This decouples unit testing from encapsulation concerns.
Automate the copying, deprivatising and unit test running with a script such as the one below.
#!/bin/bash
rm -rf code-copy
echo Creating code-copy ...
mkdir code-copy
cp -r ../www code-copy/
for i in $(find code-copy -name "*.php" -follow); do
    sed -i 's/private/public/g' "$i"
done
php run_tests.php
Calculating values and conditions is your business logic. The business logic is much more stable than the visual representation. You should test against a well-defined interface. This will save you from having to change your tests whenever the GUI changes, which would be necessary if you tested your code through the GUI. (You will be able to add other clients as well.)
If you want to test the rendering (and you should) do it separately.
Here's a nice article on why integration tests won't work. (Your test is not really a unit test; it tests two aspects of the application.)
With this setup there won't be private methods to test.
The usual solution for testing private methods is to extract them into a new class and delegate. This is what Tormod suggested, but you commented that it does not make sense to you.
What you could also do is make the method public but rely on some sort of naming convention to mark its privacy: for instance privateGetNumberOfPages(), or _getNumberOfPages(). This would be moral encapsulation: it won't prevent anyone from invoking the method, but no one could say they didn't know it was private.
That way you can invoke the method in your unit test but document (not enforce) that it is a private method. This works well in some teams, but not in all.
Another possibility, albeit not the best one design-wise, is to make the method protected and have the test case class inherit from the tested class, so that the test can invoke the method while encapsulation is still enforced for other callers. This is possible in PHP, since a subclass can call the protected methods it inherits.
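A sketch of that approach (PageCounter and its page count are invented for illustration):

use PHPUnit\Framework\TestCase;

class PageCounter
{
    protected function getNumberOfPages()
    {
        return 42; // stand-in for the real calculation
    }
}

// The test subclass widens visibility without touching production code.
class TestablePageCounter extends PageCounter
{
    public function exposedNumberOfPages()
    {
        return $this->getNumberOfPages();
    }
}

class PageCounterTest extends TestCase
{
    public function testNumberOfPages()
    {
        $counter = new TestablePageCounter();

        $this->assertSame(42, $counter->exposedNumberOfPages());
    }
}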
I find that if I want to test a private method, it ends up being complex enough to warrant moving to a new class and becoming public. I usually follow these steps: make the method public, test it thoroughly, notice the duplication this creates in the tests, then refactor to push the newly-public method out to a new class.
