I'm trying to get into unit testing, but there's one thing bothering me.
I have a PHP class which I want to unit test. It takes some parameters and then spits out HTML. The problem is that the main functionality is calculating some values and conditions, and those are what I want to test. But I have put this logic in a private method, because normally nobody needs to know about it. This way, though, I am unable to unit test the class, because I have no means of checking the result of the method.
I have found this article about the subject. The conclusion of the article is using reflection to test the private methods.
Where do you stand on this?
You should have the logic in its own class and then unit test that class, so you don't have to reach through the HTML in order to test the logic.
As a rule:
You should never test private methods. The private methods exist in order to make the public methods pass their tests.
If you can delete the private methods without breaking the public methods, you don't need the private methods and can delete them.
If you can't delete the private methods without breaking the public methods, then the private methods are already being tested through the public ones.
If you follow the practice of TDD, it would be hard to get into this situation because every line of code is written to make unit tests pass. There should be no "stray" code within your class.
I agree with Tormod; private methods should not be tested. Separating the logic from the presentation is a good idea in general and would allow you to test the logic separately from the presentation. Also, writing tests for the logic is a really good way of catching subtle cases where the logic and presentation aren't properly separated.
(Using reflection to test private methods sounds like a really bad idea to me.)
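To make that separation concrete, here is a minimal sketch (hypothetical names; PHPUnit assumed): the calculation moves out of the HTML-producing class into a small class of its own, which can then be tested directly through its public API.
class PriceCalculator
{
    // Pure logic, no HTML: trivially testable in isolation
    public function totalWithTax($net, $taxRate)
    {
        return $net * (1 + $taxRate);
    }
}

class PriceCalculatorTest extends PHPUnit_Framework_TestCase
{
    public function testTotalWithTax()
    {
        $calculator = new PriceCalculator();
        $this->assertSame(125.0, $calculator->totalWithTax(100.0, 0.25));
    }
}
The HTML-rendering class would then take a PriceCalculator (or an interface it implements) and deal only with presentation.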
Unit testing is about improving the probability of correctness of execution.
Encapsulation is about minimising the number of potential dependencies with the highest change propagation probability.
Unit testing is about runtime; encapsulation is about source code.
They're practically orthogonal: neither should influence the other. Making a private method public just to test it is not a good idea: that's unit testing unnecessarily degrading your encapsulation.
Copy your entire source code to a test directory and then replace every instance of the modifier "private" with "public". Then write your tests against this deprivatised copy. This decouples unit testing from encapsulation concerns.
Automate this copying, deprivatising and unit-test running with a script such as the one below.
Regards,
Ed.
#!/bin/bash
rm -rf code-copy
echo Creating code-copy ...
mkdir code-copy
cp -r ../www code-copy/
# Make every private member public in the copy so the tests can reach it
for i in $(find code-copy -name "*.php" -follow); do
    sed -i 's/private/public/g' $i
done
php run_tests.php
Calculating values and conditions is your business logic. The business logic is much more stable than the visual representation. You should test against a well-defined interface. This will save you from having to change your tests whenever the GUI changes, which you would have to do if you tested your code through the GUI. (You will also be able to add other clients later.)
If you want to test the rendering (and you should) do it separately.
Here's a nice article on why integration tests won't work. (Your test is not really a unit test; it tests two aspects of the application at once.)
With this setup there won't be private methods to test.
The usual solution to test the private methods is to extract them into a new class and to delegate. This is what Tormod suggested but you commented that this does not make any sense to you.
What you could also do is make the method public but rely on some sort of naming convention to mark it as private: for instance privateGetNumberOfPages(), or _getNumberOfPages(). This would be moral encapsulation: it won't prevent anyone from invoking the method, but no one could say they didn't know it was private.
That way you can invoke the method in your unit test but document (though not enforce) that it is a private method. This works well in some teams, but not in all.
Another possibility, albeit not the best one design-wise, is to make the method protected and have the test case class inherit from the tested class, so that the test can invoke the method while encapsulation is still enforced for everyone else. I'm not sure this is possible in PHP.
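For what it's worth, this does work in PHP: a subclass may call (or re-expose) a protected method of its parent. A minimal sketch with hypothetical names, picking up the getNumberOfPages() example from above:
class PageFormatter
{
    protected function getNumberOfPages()
    {
        return 42;
    }
}

// Test-only subclass living next to the test case; it wraps the
// protected method so the test can call it directly.
class TestablePageFormatter extends PageFormatter
{
    public function exposedGetNumberOfPages()
    {
        return $this->getNumberOfPages();
    }
}
The test then instantiates TestablePageFormatter instead of PageFormatter; production code keeps seeing only the protected method.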
I find that if I want to test a private method, then it ends up being complex enough to warrant moving to a new class and becoming public. I usually follow these steps: make the method public, test it thoroughly, notice the duplication this creates in the tests, refactor to push the newly-public method out to a new class.
I am new to testing and I'm not sure I go about this the right way:
I don't want to unit test a specific class, but rather whatever class gets resolved out of my IoC container. In the IoC container I bind my interfaces to concrete classes, like so:
Example (I'm using Laravel 5):
// in a service provider
public function register()
{
    $this->app->bind('FooInterface', function () {
        return new SomeConcreteFoo;
    });
}
Then I want to write unit tests against FooInterface and not SomeConcreteFoo, which could be swapped out with some other class at a later point.
The reason I want to do this is that it seems to me that the relevant testing should target whatever my IoC container returns, since that is what I'll be using in the application. It also seems to me that the testing should be done at the interface level, since that is where I define the expectations the rest of the app has of my class.
I'm having a hard time finding any information on how to do this, which suggests to me that I might be thinking about this the wrong way. For instance, maybe what I'm trying to accomplish is more like an integration test than a unit test.
So the first question is: am I thinking about testing this the right way? In case I'm not, do you have any suggestions for a better testing approach?
The second question: in case my thinking is sound, how do I set up phpspec to make use of the Laravel IoC container, so that I can test against whatever it returns?
I don't want to unit test a specific class, but rather whatever class gets resolved out of my IoC container. In the IoC container I bind my interfaces to concrete classes, like so [...]
This is not how unit tests are written. Unit testing is about describing a class behaviour in isolation, so the only real object actually created is the class under test (and sometimes simple value objects).
Then I want to write unit tests against FooInterface and not SomeConcreteFoo, which could be swapped out with some other class at a later point.
This is indeed how you should write your unit tests. Prefer interfaces for collaborators.
Each mocking framework supports this feature, and will create test doubles for you, without forcing you to provide specific implementations.
class BarSpec extends ObjectBehavior
{
    function it_does_amazing_things(FooInterface $foo)
    {
        $results = ['a', 'b', 'c'];

        // $foo is a test double created by PhpSpec; stub its behaviour
        $foo->find('something')->willReturn($results);

        $this->findMeSomething()->shouldReturn($results);
    }
}
In this particular example, PhpSpec will use Prophecy (its mocking framework) to create a test double of FooInterface and will inject it into the example method. What you do with this object determines whether it's a fake, a stub, or a mock.
The reason I want to do this is that it seems to me that the relevant testing should target whatever my IoC container returns, since that is what I'll be using in the application.
As explained above, unit tests focus on the behaviour of a single class. Its collaborators are usually faked. This is for several reasons. One of them is speed. Another one is feedback: if a test fails, we get clear feedback on which class is broken. If you were creating all collaborators instead of using test doubles, a single bug could make your whole test suite red. I won't even mention how hard it would be to maintain and create all the needed objects (although containers could help here).
Remember that writing unit tests is more a design activity than a testing activity.
For instance maybe what I'm trying to accomplish is more like an integration test rather than a unit test.
Indeed. Read more about the test pyramid. Most of your tests should be unit tests. Then you should have some number of integration and acceptance tests, which exercise more than a single class at once. The reason you want more unit tests than integration tests is that the latter are more fragile and harder to maintain and change.
Use PHPUnit for integration tests. PhpSpec is not the right tool for this job. PhpSpec is great for designing your classes (writing unit tests), especially if you do it test-first.
The second question: in case my thinking is sound, how do I set up phpspec to make use of the Laravel IoC container, so that I can test against whatever it returns?
You don't. You could think of using the container in integration tests though.
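As a rough sketch of that idea (assuming Laravel 5's base TestCase, which boots the application; FooInterface and its find() method are taken from the examples above):
class FooResolutionTest extends TestCase
{
    public function testContainerResolvesAWorkingFoo()
    {
        // Ask the container for whatever concrete class is currently bound
        $foo = $this->app->make('FooInterface');

        $this->assertInstanceOf('FooInterface', $foo);

        // Exercise behaviour promised by the interface,
        // not details of SomeConcreteFoo
        $this->assertInternalType('array', $foo->find('something'));
    }
}
Swapping the binding to another implementation later would leave this test untouched, which is exactly the property the question asks for.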
Some reading:
Test Driven Development by Example by Kent Beck
Growing Object-Oriented Software, Guided by Tests by Steve Freeman
PHP Test Doubles Patterns with Prophecy
My top ten favourite PhpSpec limitations
Conceptual difference between Mockery and Prophecy
Economy of Tests
It's time for some more seemingly simple questions that I just can't seem to find the answer to.
I'm developing a library with TDD (PHP). To my understanding, when using TDD, you should not write any production code without first writing a failing test to warrant it.
I have a mutator method that appends data to a private array. How should I test that? Should I just test the various accessors instead? Should the test for the accessor cover the mutator method?
Is it OK for a test to test an accessor and a mutator, or should these be separate tests?
My library requires a dependency, which I will inject through the constructor. What test code might prompt me to write the constructor code?
Sorry for such noobish questions. I've been studying TDD quite a lot, and thought I had it all figured out, but as soon as I try to make use of it, all these little questions come to mind. Obviously I want to make sure that I implement it effectively and to the best of my knowledge.
Perhaps I'm being too strict? Perhaps the injection is tested implicitly using a mock and checking expectations of a method that makes use of the injected class?
I understand these questions might be subjective, and the answers might be based on people's opinions, but I'm fine with that. I just want to get started in a way that makes sense and works.
Many thanks in advance.
I would test the setter and getter methods together, because that is by far the simplest way to do it without having to change the visibility of your array, which you shouldn't do. Your injected class will be tested implicitly by these tests.
In general try to write your unit tests from the perspective of another user trying to use your class under test. You need to think, what is this class supposed to do or what is its contract (i.e. this class holds an array of objects that users can add and remove from), then write tests to be certain it satisfies that contract. After that, write just enough code to get the test to pass.
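A minimal sketch of that style (hypothetical TagList class with a private array; PHPUnit assumed):
class TagListTest extends PHPUnit_Framework_TestCase
{
    public function testAddedTagIsRetrievable()
    {
        $tags = new TagList();
        $tags->add('php');

        // The private array is never touched directly: the accessor
        // observes the effect of the mutator, covering both at once.
        $this->assertSame(array('php'), $tags->getTags());
    }
}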
I'm designing a class that has two dependencies. One of the dependency classes has been written and tested. The other has not yet been written.
It has occurred to me that, because the remaining dependency will be written to facilitate the class that uses it, I should write the consuming class first and design the dependency's interface as I go along, learning what it should do.
That seems to me to be a great way to make code. After all, as long as the main class gets a mock in its constructor, I can write it and test it without it being aware that its dependency doesn't exist; then I can create the dependency once I am sure I know what I need.
So: how do I do this? Do I create a skeleton class that I modify as I go along? Perhaps something like:
class NonExistantSkeleton
{
    public function requiredMethod1()
    {
    }

    public function newlyDiscoveredRequirement()
    {
    }
}
and then mock it using PHPUnit, setting up stubs etc. to keep my class under development happy?
Is this the way to go?
It seems like a nice way to develop code - and it seems to me to make more sense than developing a dependency without really knowing for sure how it's going to be used.
In short:
Yes. At least that's what I'm doing right now.
Longer Version:
If the expected collaborators of your class don't exist at the point in time where you need them in your tests for the class you are building, you have a few options:
Mock non-existing classes (which PHPUnit can do)
Create class skeletons and mock those
Just create interfaces and get mocks for those (which PHPUnit can do too)
Maybe you don't need any of the above, depending on the object
If you program against an interface anyway, then all you need to do is create that interface and tell PHPUnit to create a stub/mock from it, as sketched below.
+No new class without a test
+Using interfaces when appropriate is considered nicer/better than just type-hinting against classes
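A quick sketch of that interface route (hypothetical interface and consumer; getMock() is the classic PHPUnit API for this):
interface MailerInterface
{
    public function send($message);
}

class NotifierTest extends PHPUnit_Framework_TestCase
{
    public function testPassesMessageToMailer()
    {
        // PHPUnit builds the test double straight from the interface;
        // no concrete implementation has to exist yet.
        $mailer = $this->getMock('MailerInterface');
        $mailer->expects($this->once())
               ->method('send')
               ->with('hello');

        $notifier = new Notifier($mailer);
        $notifier->notify('hello');
    }
}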
When mocking non-existing classes you get some drawbacks that I don't like:
-High mock maintenance cost
-Changing the methods on those classes is slow and tedious
-Once you've actually created the class you have to rework the mocks
so I'd advise against that.
The middle way would be to just create the empty class skeleton with its methods and use that for mocking.
I quite like that way in cases where there is no interface to hint against, as it is fast and creates stable test code.
Having barebone classes with public APIs is, for me, no violation of TDD.
There are classes you don't need to mock.
Data transfer objects and value objects can always be created anywhere using new in your production code, so your tests can also just use the real objects.
It helps to keep your tests a little cleaner as you don't need to mock/expect a lot of getter/setter methods and so on.
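For example (Invoice and Money being hypothetical classes, Money a value object):
class InvoiceTest extends PHPUnit_Framework_TestCase
{
    public function testTotalsItsLines()
    {
        // Value objects are cheap and side-effect free,
        // so the test just uses the real thing - no mocks needed.
        $invoice = new Invoice();
        $invoice->addLine(new Money(100, 'EUR'));
        $invoice->addLine(new Money(50, 'EUR'));

        $this->assertSame(150, $invoice->total()->getAmount());
    }
}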
If you follow a test-driven development methodology then the usual approach is as follows:
Figure out what your classes are meant to do, and what their public-facing APIs should be.
Implement "empty" classes that consist of nothing but the public methods signitures with empty bodies (as you have done in the code example you gave).
Work out an implementation strategy. This means working out which classes are dependant on each other and implementing them in an order that means that dependant classes aren't implemented until the classes it depends on are finished, or at least sufficiently functional to develop against. This means doing the classes with no dependencies first, then the classes that depend only on the completed classes, and so on.
Write your tests. It's possible to write the tests now because you know what the black box for your classes look like, what they need to take as input and what they're supposed to return as output.
Run the tests. You should get 0% success, but also 100% code coverage. This is now your baseline.
Start to implement your classes according to your implementation strategy. Run your unit tests from time to time during this process, say once you've got a class completed, to make sure that it meets the specification laid down in its unit tests. Ideally, each run should show an increase in passing tests whilst maintaining 100% code coverage.
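As a tiny illustration of the write-tests-before-implementing steps, here is a first test against the skeleton from the question (assuming, for the sake of illustration, that requiredMethod1() is eventually meant to return a value):
class NonExistantSkeletonTest extends PHPUnit_Framework_TestCase
{
    public function testRequiredMethod1ReturnsAValue()
    {
        $skeleton = new NonExistantSkeleton();

        // Red first: the empty body returns null.
        // The test starts passing once the method is implemented.
        $this->assertNotNull($skeleton->requiredMethod1());
    }
}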
EDIT: As edorian pointed out, PHP interfaces are a huge help here because PHPUnit can generate mocks and stubs from interfaces as well as from classes. They're also an excellent tool in reducing coupling and improving substitutability in general. They allow you to substitute any class that implements the expected interface, instead of just subclasses of the expected class.
Given the following methods:
public function setFoo($foo) {
    $this->_foo = $foo;
    return $this;
}

public function getFoo() {
    return $this->_foo;
}
Assuming they may be changed to be more complex in the future:
How would you write unit tests for those methods?
Just one test method?
Should I skip those tests?
What about code coverage?
How about the @covers annotation?
Maybe some universal test method to implement in the abstract test case?
(I use Netbeans 7)
This seems like a waste of time, but I wouldn't mind if the IDE generated those test methods automatically.
To quote from a comment on Sebastian Bergmann's blog:
(it's like testing getters and setters -- fail!). In any case, if they were to fail, wouldn't the methods that depend on them fail?
So, what about the code coverage?
If you do TDD you should write a test for getters and setters too. Do not write a single line of code without a test for it - even if your code is very simple.
It's a kind of religious war whether to test getter and setter in tandem or to isolate each by accessing protected class members through your unit test framework's capabilities. As a black-box tester I prefer to tie my unit test code to the public API instead of tying it to concrete implementation details. I expect change. I want to encourage the developers to refactor existing code, and class internals should not affect "external code" (unit tests in this case). I don't want to break unit tests when internals change; I want them to break when the public API or the behaviour changes. Granted: in the case of a failing unit test this does not pin-point the one-and-only source of the problem; I have to look in the getter AND the setter to figure out what caused it. But most of the time your getter is very simple (fewer than five lines of code, e.g. a return and an optional null check with an exception), so checking it first is no big deal and not time-consuming. And checking the happy path of the setter is most of the time only a little more complex (even if you have some validation checks).
Try to isolate your test cases - write a test for a SUT (subject under test) that validates its correctness without relying on other methods (apart from the getter/setter tandem above). The more you isolate the test, the more precisely your tests spot the problem.
Depending on your test strategy you may want to cover the happy path only (pragmatic programmer), or the sad paths too. I prefer to cover all execution paths. When I think I have discovered all execution paths, I check code coverage to identify dead code (not to identify whether there are uncovered execution paths - 100% code coverage is a misleading indicator).
It is best practice for black-box testers to run PHPUnit in strict mode and to use @covers to hide collateral coverage.
When you write unit tests, your test of class A should execute independently of class B. So your unit tests for class A should not call / cover methods of class B.
If you want to identify obsolete getters/setters and other "dead" methods (which are not used by production code), use static code analysis for that. The metric you are interested in is called afferent coupling at method level (MethodCa). Unfortunately this metric is not available at method level in PHP Depend (see: http://pdepend.org/documentation/software-metrics/index.html and http://pdepend.org/documentation/software-metrics/afferent-coupling.html). If you really need it, feel free to contribute it to PHP Depend; an option to exclude calls from the same class would be helpful to get a result without "collateral" calls. If you identify a "dead" method, try to figure out whether it is meant to be used in the near future (the counterpart of another method carrying a @deprecated annotation); otherwise remove it. If it is used only within its own class, make it private / protected. Do not apply this rule to library code.
Plan B:
If you have acceptance tests (integration tests, regression tests, etc.) you can run those without running the unit tests at the same time and without PHPUnit's strict mode. This can give a code coverage result very similar to an analysis of your production code. But in most cases your non-unit tests are not as strong as your production code. It depends on your discipline whether this plan B is "equal enough" to production code to get a meaningful result.
Further reading:
- Book: Pragmatic Programmer
- Book: Clean Code
Good question.
I usually try not to test getters & setters directly, since I see a greater benefit in testing only the methods that actually do something.
Especially when not using TDD, this has the added benefit of showing me setters that I don't use in my unit tests, telling me that either my tests are incomplete or that the setter is not used/needed. "If I can execute all the 'real' code without using that setter, why is it there?"
When using fluent setters I sometimes write a test checking the 'fluent' part of the setters, but usually that is covered in other tests.
To answer your list:
Just one test method?
That is my least favorite option. All or none: testing only one is not easy for other people to understand and looks 'random', or needs to be documented somehow.
Edit after comment:
Yes, for "trivial" get/set testing I'd only use one method per property maybe depending on the case even only one method for the whole class (for value objects with many getters and setters I don't want to write/maintain many tests)
How would you write unit tests for those methods?
Should I skip those tests?
I wouldn't skip them. Maybe the getters, depending on how many you have (I tend to write only getters I actually need), but the goal of having a class completely covered shouldn't fail because of getters.
What about code coverage?
How about #covers annotation?
With @covers my take is always "use it everywhere or don't use it at all". Mixing the two 'styles' of testing takes away some of the benefits of the annotation and looks 'unfinished' to me.
Maybe some universal test method to implement in the abstract test case?
For something like value objects that could work nicely. It might break (or get more complicated) once you pass in objects/arrays with type hinting, but I'd personally prefer it over writing manual tests for 500 getters and setters.
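One possible shape for such a universal helper, sketched here (it assumes plain set/get naming; the property names and sample values are supplied by the concrete test case):
abstract class ValueObjectTestCase extends PHPUnit_Framework_TestCase
{
    // Pushes each sample value through its setter and asserts the
    // matching getter returns it unchanged.
    protected function assertGettersAndSetters($object, array $samples)
    {
        foreach ($samples as $property => $value) {
            $setter = 'set' . ucfirst($property);
            $getter = 'get' . ucfirst($property);

            $object->$setter($value);
            $this->assertSame($value, $object->$getter(), "Property: $property");
        }
    }
}
A concrete test case would then call something like $this->assertGettersAndSetters(new Foo(), array('foo' => 'bar'));.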
This is a common question, but strangely I can't find a dupe on SO.
You could write unit tests for accessors, but the majority of practitioners do not; i.e. if the accessors do not have any custom logic, I would not write unit tests just to verify that field access works.
Instead I would rely on the consumers of these accessors to ensure that they work. E.g. if getFoo and setFoo don't work, the callers of these methods will break. So by writing unit tests for the calling methods, the accessors get verified.
This also means that code coverage should not be a problem. If you find accessors that are not covered after all test suites are run, maybe they are redundant / unused. Delete them.
Try to write a test that illustrates a scenario where a client will use that accessor. E.g. the snippet below (C#/NUnit) shows how the tooltip property for the pause button toggles based on its current state.
[Test]
public void UpdatesTogglePauseTooltipBasedOnState()
{
    Assert.That(_mainViewModel.TogglePauseTooltip, Is.EqualTo(Strings.Main_PauseAllBeacons));
    _mainViewModel.TogglePauseCommand.Execute(null);
    Assert.That(_mainViewModel.TogglePauseTooltip, Is.EqualTo(Strings.Main_ResumeAllBeacons));
    _mainViewModel.TogglePauseCommand.Execute(null);
    Assert.That(_mainViewModel.TogglePauseTooltip, Is.EqualTo(Strings.Main_PauseAllBeacons));
}
I just started practicing TDD in my projects. I'm developing a project now using PHP/Zend/MySQL and PHPUnit/DbUnit for testing. I'm just a bit stuck on reconciling encapsulation with the test-driven approach. My idea of encapsulation is to hide access to several object functionalities. To make it more clear: private and protected functions are not directly testable (unless you create a public function to call them).
So I end up converting some private and protected functions to public ones just to be able to test them. I'm really violating the principle of encapsulation to make individual functions testable. Is this the correct way of doing it?
There is a pretty standard answer to this in TDD circles. If there is functionality in a class that you both want hidden and directly tested, you should sprout a class with that functionality. This is a great example of how TDD improves your design.
In the original class, that extraneous functionality is gone, wrapped within the sprouted class, so the original class's design is simpler, and better conforms to the Single Responsibility Principle. In the sprouted class, the extracted functionality is its raison d'être, therefore it's appropriate for it to be public, and therefore it's testable without test-only modifications.
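Sketched with hypothetical names, the refactoring looks like this: the private calculation becomes the public face of a sprouted class, and the original class delegates to it.
class TotalsCalculator
{
    // Formerly a private method of Report; now public and testable
    public function calculate(array $amounts)
    {
        return array_sum($amounts);
    }
}

class Report
{
    private $calculator;

    public function __construct(TotalsCalculator $calculator)
    {
        $this->calculator = $calculator;
    }

    public function renderHtml(array $amounts)
    {
        return '<p>Total: ' . $this->calculator->calculate($amounts) . '</p>';
    }
}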
With respect to Carl Manaster's fine answer, there are some drawbacks you should at least consider before embarking on the path Carl suggests.
The most significant is this: we use encapsulation to minimise the number of potential dependencies that carry the greatest probability of change propagation. In your case, you have encapsulated private methods inside your class: they are not available to other classes and thus there are no potential dependencies on them; the cost of any changes you make to them is minimised and has a low probability of propagating to other classes.
It seems that Carl suggests moving some private methods from your class into a new class, and making those methods public (so that you can test them). (Incidentally, why not just make them public in the original class?)
By doing this, you remove the barrier to other classes' forming dependencies on those methods, which will potentially increase the cost of changing those methods should any other class take to using them.
You may judge this down-side minor and a worthwhile price to pay for being able to test your private methods, but at least be aware of it. In a small number of cases, it may indeed be worthwhile, but if you institute this throughout your code-base then you'll drastically increase the probability that these dependencies will form, increasing the cost of your maintenance cycle to an unknown degree.
For these reasons, I disagree with Carl that his suggestion is, ” … a great example of how TDD improves your design.”
Furthermore, he states, ”In the original class, that extraneous functionality is gone, wrapped within the sprouted class, so the original class' design is simpler, and better conforms to the Single Responsibility Principle.”
I would argue that the functionality being moved is not at all "extraneous". Also, "simpler" is not well defined: it certainly may be the case that a class's simplicity is inversely proportional to its size, but that does not mean that a system of simplest-possible classes will be the simplest possible system. If that were the case, all classes would contain only one method and a system would have an enormous number of classes; the removal of this hierarchical layer of multiple methods within classes, it could be argued, would make the system much more complicated.
The Single Responsibility Principle (SRP) is, furthermore, notoriously subjective and entirely dependent on the level of abstraction of the observer. It is not at all the case that removing a method from a class automatically improves its conformity to the SRP. A Printer class, with 10 methods, has the single responsibility of printing at the level of abstraction of the class. One of its methods may be checkPrinterConnected() and one may be checkPaper(); at the method level, these are clearly separate responsibilities, but they do not automatically suggest that the class should be broken down into further classes.
Carl finishes, "In the sprouted class, the extracted functionality is its raison d'être, therefore it's appropriate for it to be public, and therefore it's testable without test-only modifications." A functionality's importance (its raison-d'être-ness) is not the basis for the appropriateness of its being public. The basis for making functionality public is minimising the interface exposed to the client, such that the class's functionality is available for use while the client's independence from the functionality's implementation is maximised. Of course, if you are moving just one method into the sprouted class, then it has to be public. If you are moving more than one method, however, you must make public those methods which are essential to the client's successful use of the class; these public methods may be far less important than some of the private methods from which you wish to shield your client. (In any case, I'm not a fan of this "raison d'être" phrase, as the importance of a method is also not well defined.)
An alternative approach to Carl's suggestion depends on how large you envisage your system growing. If it will grow to fewer than a few thousand classes, then you might consider having a script copy your source code to a new directory, change all occurrences of "private" to "public" in the copied source, and then write your tests against that copy. This has the downside of the time it takes to copy the code, but the benefit of preserving encapsulation in your original source while making all methods testable in the copied version.
Below is the script I use for this purpose.
Regards,
Ed Kirwan
#!/bin/bash
rm -rf code-copy
echo Creating code-copy ...
mkdir code-copy
cp -r ../www code-copy/
# Make every private member public in the copy so the tests can reach it
for i in $(find code-copy -name "*.php" -follow); do
    sed -i 's/private/public/g' $i
done
php run_tests.php
I have just read a great article on letting mock objects drive your design:
http://www.mockobjects.com/files/usingmocksandtests.pdf
When Carl says "you should sprout a class with that functionality", the author of this article explains how your tests, through the use of mock objects, can guide you in designing your class so that you 1) don't need to worry about not being able to test private parts, and more importantly 2) improve your design by (to paraphrase Carl's quote) discovering collaborators and roles with the right responsibilities.
The author takes you through an example step by step to make his point very clear.
Here's another article with the same approach:
http://www.methodsandtools.com/archive/archive.php?id=90
A quote:
Many who start with TDD struggle with getting a grip on dependencies. To test an object, you exercise some behaviour and then verify whether the object is in an expected state. Because OO design focuses on behaviour, the state of an object is usually hidden (encapsulated). To be able to verify whether an object behaves as expected, you sometimes need to access its internal state and introduce special methods to expose it, like a getter method or a property that retrieves the internal state.
Apart from not wanting objects cluttering their interfaces and exposing their private parts, we don't want to introduce unnecessary dependencies with such extra getters either. Our tests would become too tightly coupled and focused on implementation details.
A group of agile software development pioneers from the United Kingdom were also struggling with this back in 1999. They had to add additional getter methods to validate the state of objects. Their manager didn't like all this breaking of encapsulation and declared: I want no getters in the code! (Mackinnon et al., 2000 & Freeman et al., 2004)
The team came up with the idea to focus on interactions rather than state. They created special objects to replace the collaborators of objects under test. These special objects contained specifications for expected method calls. They called these objects mock objects, or mocks for short. The original ideas have been refined, resulting in several mock object frameworks for all common programming languages: Java (jMock, EasyMock, Mockito), .NET (NMock, RhinoMocks), Python (PythonMock, Mock.py), Ruby (Mocha, RSpec), C++ (mockpp, amop). See www.mockobjects.com for more information and links.