Given this code:
function testMe($a)
{
    if ($a)
    {
        return 1;
    }
    else
    {
        return $this->testMe(true);
    }
}
testMe() cannot be mocked, because then I can't call it. On the other hand, it must be mocked…
I'd say your question has some philosophical potential. I'll try to answer it the way you ask it but before, allow me to comment on your comprehension:
testMe() cannot be mocked, because then I can't call it. On the other hand, it must be mocked… [italics by me]
The unfinished sentence is wrong. You probably sensed that already, because you didn't finish it. In a unit test you don't mock the unit; you put the unit under test, and here the unit is the method. That is the smallest part (unit) that can be tested.
What perhaps actually creates a bit of confusion is the recursion within the method.
So you actually ask how to unit-test recursion or a single method call within that recursion. But do you really need to test it?
I would say no. And that is because the recursion is an implementation detail of the method. From the outside it should not make any difference if you exchange the internal algorithm from recursion to a stack based loop for example.
But despite the fact that I say you don't need that (and I hope you have followed the argument I outlined), it technically is possible, for your very specific scenario, to test such a method call without rewriting the code under test: by re-binding $this. You can then replace the subject under test ($this) with a mock when the recursive call happens, so that you have two methods: the one under test and the mocked one that is accessible via $this->testMe().
This could be done by instantiating the subject under test, creating a mock, using PHP's reflection to obtain the closure of testMe(), re-binding $this on that closure to the mock, and then calling the closure for your test assertions.
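For illustration only, a rough sketch of how that could look, assuming the method lives in a class called Foo and that PHPUnit's mocking is available (the class name and framework are assumptions, and this is not a recommendation):

$subject = new Foo();

// The mock receives the recursive $this->testMe(true) call.
$mock = $this->createMock(Foo::class);
$mock->expects($this->once())
    ->method('testMe')
    ->with(true)
    ->willReturn(1);

// Obtain the method as a closure and re-bind $this to the mock.
$closure = (new ReflectionMethod(Foo::class, 'testMe'))->getClosure($subject);
$bound = Closure::bind($closure, $mock, Foo::class);

// Calling with false takes the else branch, which now goes to the mock.
$this->assertSame(1, $bound(false));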
I would not call this a unit test any longer because, as I outlined earlier, you're testing internals/privates. It is more useful for exercising fragments of the recursion under certain circumstances and similarly detailed things, more to prove or debug fragments of code than anything else. You normally only need that under very specific circumstances, when the code is really valuable.
Don't let your little confusion about the recursion convince you that this is an important place to test on its own. But if you want to get your fingers dirty, it's perhaps worth playing with to learn about PHP reflection, closures, and re-binding $this.
In the past I have always stumbled across a certain problem with phpspec.
Let's assume I have a method which calls multiple methods on another object:
class Caller {
    public function call(){
        $this->receiver->method1();
        ...
        $this->receiver->method2();
    }
}
In BDD I would first write a test which makes sure method1 will be called.
function it_calls_method1_of_receiver(Receiver $receiver){
    $receiver->method1()->shouldBeCalled();
    $this->call();
}
And then I would write the next test to ensure method2 will be called.
function it_calls_method2_of_receiver(Receiver $receiver){
    $receiver->method2()->shouldBeCalled();
    $this->call();
}
But this test fails in phpspec because method1 gets called before method2.
To satisfy phpspec, I have to check for both method calls.
function it_calls_method2_of_receiver(Receiver $receiver){
    $receiver->method1()->shouldBeCalled();
    $receiver->method2()->shouldBeCalled();
    $this->call();
}
My problem with that is that it bloats up every test. In this example it's just one extra line, but imagine a method which builds an object with a lot of setters.
I would need to write out all the setters for every test. It would get quite hard to see the purpose of each test, since every test is big and looks the same.
I'm quite sure this is not a problem with phpspec or BDD, but rather a problem with my architecture. What would be a better (more testable) way to write this?
For example:
public function handleRequest($request, $endpoint){
    $endpoint->setRequest($request);
    $endpoint->validate();
    $endpoint->handle();
}
Here I validate whether a request provides all necessary info for a specific endpoint (or throw an exception) and then handle the request. I chose this pattern to separate validation from the endpoint logic.
Prophecy, the mocking framework used by PhpSpec, is very opinionated. It follows the mockist approach (London School of TDD), which holds that we should describe one behaviour at a time.
A mock is a test, so you want to keep one mock per test. You can mock all the calls, but that will not look elegant. The recommended approach is to separate the behaviour you are testing and select the mock you need for that behaviour, stubbing the rest of the calls. If you see yourself creating loads of stubs in one test, that indicates feature envy: you should consider moving the behaviour to the callee, or adding a man in the middle.
Say you decide to go ahead and describe the code you have, without refactoring. If you are interested in the second call, as per your example, you should stub the other calls using willReturn, or similar. E.g. $endpoint->setRequest(Argument::type(Request::class))->willReturn() instead of shouldBeCalled().
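To make that concrete, here is a minimal sketch of such a spec for the handleRequest() example, assuming Endpoint and Request collaborators (the names are illustrative):

function it_handles_the_request(Endpoint $endpoint, Request $request)
{
    // Stub the calls this example is not describing...
    $endpoint->setRequest($request)->willReturn();
    $endpoint->validate()->willReturn();

    // ...and mock only the behaviour under description.
    $endpoint->handle()->shouldBeCalled();

    $this->handleRequest($request, $endpoint);
}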
I am quite new to OOP, so this is really a basic OOP question, in the context of a Laravel Controller.
I'm attempting to create a notification system that creates Notification objects when certain other objects are created, edited, deleted, etc. So, for example, if a User is edited, then I want to generate a Notification regarding this edit. Following this example, I've created a UserObserver that calls NotificationController::store() when a User is saved.
class UserObserver extends BaseObserver
{
    public function saved($user)
    {
        $data = [
            // omitted from example
        ];
        NotificationController::store($data);
    }
}
In order to make this work, I had to make NotificationController::store() static.
class NotificationController extends \BaseController {

    public static function store($data)
    {
        // validation omitted from example
        $notification = Notification::create($data);
    }
}
I'm only vaguely familiar with what static means, so there's more than likely something inherently wrong with what I'm doing here, but this seems to get the job done, more or less. However, everything that I've read indicates that static functions are generally bad practice. Is what I'm doing here "wrong," per se? How could I do this better?
I will have several other Observer classes that will need to call this same NotificationController::store(), and I want NotificationController::store() to handle any validation of $data.
I am just starting to learn about unit testing. Would what I've done here make anything difficult with regard to testing?
I've written about statics extensively here: How Not To Kill Your Testability Using Statics. The gist of it as applies to your case is as follows:
Static function calls couple code. It is not possible to substitute static function calls with anything else, or to skip those calls, for whatever reason. NotificationController::store() is essentially in the same class of things as substr(). Now, you probably wouldn't want to substitute a call to substr() with anything else; but there are a ton of reasons why you may want to substitute NotificationController, now or later.
Unit testing is just one very obvious use case where substitution is very desirable. If you want to test the UserObserver::saved function just by itself, because it contains a complex algorithm which you need to test with all possible inputs to ensure it's working correctly, you cannot decouple that algorithm from the call to NotificationController::store(). And that function in turn probably calls some Model::save() method, which in turn wants to talk to a database. You'd need to set up this whole environment which all this other unrelated code requires (and which may or may not contain bugs of its own), so that it becomes essentially impossible to test this one function by itself.
If your code looked more like this:
class UserObserver extends BaseObserver
{
    public function saved($user)
    {
        $data = [
            // omitted from example
        ];
        $this->NotificationController->store($data);
    }
}
Well, $this->NotificationController is obviously a variable which can be substituted at some point. Most typically this object would be injected at the time you instantiate the class:
new UserObserver($notificationController)
You could simply inject a mock object which allows any methods to be called, but which simply does nothing. Then you could test UserObserver::saved() in isolation and ensure it's actually bug free.
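As a rough sketch of what that could look like, assuming store() is turned into an instance method and PHPUnit is the test framework (the property and class names are illustrative):

class UserObserver extends BaseObserver
{
    private $notificationController;

    public function __construct(NotificationController $notificationController)
    {
        $this->notificationController = $notificationController;
    }

    public function saved($user)
    {
        $data = [
            // omitted from example
        ];
        $this->notificationController->store($data);
    }
}

// In a test, a mock now stands in for the real controller:
$controller = $this->createMock(NotificationController::class);
$controller->expects($this->once())->method('store');

$observer = new UserObserver($controller);
$observer->saved(new User()); // store() must be called exactly once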
In general, using dependency injected code makes your application more flexible and allows you to take it apart. This is necessary for unit testing, but it will also come in handy later in scenarios you can't even imagine right now, yet will be stumped by half a year from now when you need to restructure and refactor your application for some new feature you want to implement.
Caveat: I have never written a single line of Laravel code, but as I understand it, it does support some form of dependency injection. If that's actually really the case, you should definitely use that capability. Otherwise be very aware of what parts of your code you're coupling to what other parts and how this will impact your ability to take it apart and refactor later.
I have some pattern that works great for me, but that I have some difficulty explaining to fellow programmers. I am looking for some justification or literature reference.
I personally work with PHP, but this would also be applicable to Java, JavaScript, C++, and similar languages. Examples will be in PHP or pseudocode; I hope you can live with this.
The idea is to use a lazy evaluation container for intermediate results, to avoid multiple computation of the same intermediate value.
"Dynamic programming":
http://en.wikipedia.org/wiki/Dynamic_programming
The dynamic programming approach seeks to solve each subproblem only once, thus reducing the number of computations: once the solution to a given subproblem has been computed, it is stored or "memo-ized": the next time the same solution is needed, it is simply looked up
Lazy evaluation container:
class LazyEvaluationContainer {
    protected $values = array();

    function get($key) {
        if (isset($this->values[$key])) {
            return $this->values[$key];
        }
        if (method_exists($this, $key)) {
            return $this->values[$key] = $this->$key();
        }
        throw new Exception("Key $key not supported.");
    }

    protected function foo() {
        // Make sure that bar() runs only once.
        return $this->get('bar') + $this->get('bar');
    }

    protected function bar() {
        // ... expensive computation.
    }
}
Similar containers are used e.g. as dependency injection containers (DIC).
Details
I usually use some variation of this.
Is it possible to have the actual data methods in a different object than the data computation methods?
Is it possible to have computation methods with parameters, using a cache with a nested array?
In PHP it is possible to use magic methods (__get() or __call()) for the main retrieval method. In combination with "@property" in the class docblock, this allows type hints for each "virtual" property (see the sketch after this list).
I often use method names like "get_someValue()", where "someValue" is the actual key, to distinguish from regular methods.
Is it possible to distribute the data computation to more than one object, to get some kind of separation of concerns?
Is it possible to pre-initialize some values?
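As an illustration of the magic-getter variant mentioned above, here is a minimal sketch, assuming the keys map to protected compute methods of the same name (this is simplified, not my exact code):

/**
 * @property-read int $foo
 * @property-read int $bar
 */
class LazyValues {
    protected $values = array();

    public function __get($key) {
        // Compute and memoize the value on first access only.
        if (!array_key_exists($key, $this->values)) {
            $this->values[$key] = $this->$key();
        }
        return $this->values[$key];
    }

    protected function foo() {
        // bar is computed only once, even though it is read twice.
        return $this->bar + $this->bar;
    }

    protected function bar() {
        return 42; // stand-in for an expensive computation
    }
}

$v = new LazyValues();
echo $v->foo; // 84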
EDIT: Questions
There is already a nice answer talking about a cute mechanic in Spring @Configuration classes.
To make this more useful and interesting, I extend/clarify the question a bit:
Is storing intermediate values from dynamic programming a legitimate use case for this?
What are the best practices to implement this in PHP? Is some of the stuff in "Details" bad and ugly?
If I understand you correctly, this is quite a standard procedure, although, as you rightly admit, associated with DI (or bootstrapping applications).
A concrete, canonical example would be any Spring @Configuration class with lazy bean definitions; I think it displays exactly the same behavior as you describe, although the actual code that accomplishes it is hidden from view (and generated behind the scenes). Actual Java code could look like this:
@Configuration
public class Whatever {

    @Bean @Lazy
    public OneThing createOneThing() {
        return new OneThing();
    }

    @Bean @Lazy
    public SomeOtherThing createSomeOtherThing() {
        return new SomeOtherThing();
    }

    // here the magic begins:
    @Bean @Lazy
    public SomeThirdThing getSomeThirdThing() {
        return new SomeThirdThing(this.createOneThing(), this.createOneThing(), this.createOneThing(), createSomeOtherThing());
    }
}
Each method marked with @Bean @Lazy represents one "resource" that will be created once it is needed (i.e. when the method is called) and - no matter how many times the method appears to be called - the object will only be created once (due to some magic that changes the actual code during loading). So even though createOneThing() appears to be called three times inside getSomeThirdThing(), only one call will occur (and that only after someone tries to call getSomeThirdThing() or calls getBean(SomeThirdThing.class) on the ApplicationContext).
I think you cannot have a universal lazy evaluation container for everything.
Let's first discuss what you really have there. I don't think it's lazy evaluation. Lazy evaluation is defined as delaying an evaluation to the point where the value is really needed, and sharing an already evaluated value with further requests for that value.
The typical example that comes to my mind is a database connection. You'd prepare everything to be able to use that connection when it is needed, but only when there really is a database query needed, the connection is created, and then shared with subsequent queries.
The typical implementation would be to pass the connection string to the constructor, store it internally, and when there is a call to the query method, first the method to return the connection handle is called, which will create and save that handle with the connection string if it does not exist. Later calls to that object will reuse the existing connection.
Such a database object would qualify for lazy evaluating the database connection: It is only created when really needed, and it is then shared for every other query.
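A minimal sketch of such a lazily connecting object (PDO is used purely for illustration, and error handling is omitted):

class LazyConnection
{
    private $dsn;
    private $handle;

    public function __construct($dsn)
    {
        // Nothing is connected yet; we only remember how to connect.
        $this->dsn = $dsn;
    }

    private function connection()
    {
        if ($this->handle === null) {
            // The handle is created on first use only, then reused.
            $this->handle = new PDO($this->dsn);
        }
        return $this->handle;
    }

    public function query($sql)
    {
        return $this->connection()->query($sql);
    }
}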
When I look at your implementation, it would not qualify as "evaluate only if really needed"; it only stores a value that was created once. So it really is only some sort of cache.
It also does not really solve the problem of evaluating the expensive computation only once globally. If you have two instances, you will run the expensive function twice. But on the other hand, NOT evaluating it twice would introduce global state - which should be considered a bad thing unless explicitly declared. Usually it makes code very hard to test properly. Personally I'd avoid that.
Is it possible to have the actual data methods in a different object than the data computation methods?
If you have a look at how the Zend Framework offers the cache pattern (\Zend\Cache\Pattern\{Callback,Class,Object}Cache), you'd see that the real working class is getting a decorator wrapped around it. All the internal stuff of getting the values stored and read them back is handled internally, from the outside you'd call your methods just like before.
The downside is that you do not have an object of the type of the original class. So if you use type hinting, you cannot pass a decorated caching object instead of the original object. The solution is to implement an interface. The original class implements it with the real functions, and then you create another class that extends the cache decorator and implements the interface as well. This object will pass the type hinting checks, but you are forced to manually implement all interface methods, which do nothing more than pass the call to the internal magic function that would otherwise intercept them.
interface Foo
{
    public function foo();
}

class FooExpensive implements Foo
{
    public function foo()
    {
        sleep(100);
        return "bar";
    }
}

class FooCached extends \Zend\Cache\Pattern\ObjectCache implements Foo
{
    public function foo()
    {
        // internally uses an instance of FooExpensive to calculate only once
        $args = func_get_args();
        return $this->call(__FUNCTION__, $args);
    }
}
I have found it impossible in PHP to implement a cache without at least these two classes and one interface (but on the other hand, implementing against an interface is a good thing, so it shouldn't bother you). You cannot simply use the native cache object directly.
Is it possible to have computation methods with parameters, using a cache with a nested array?
Parameters are working in the above implementation, and they are used in the internal generation of a cache key. You should probably have a look at the \Zend\Cache\Pattern\CallbackCache::generateCallbackKey method.
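For illustration, a hedged sketch of how parameterised computation methods could be cached inside your LazyEvaluationContainer, keyed by the method name plus its serialised arguments; this is similar in spirit to generateCallbackKey, not the actual Zend implementation:

protected function getCached($method, array $args = array())
{
    // Build a cache key from the method name and its arguments.
    $key = $method . ':' . md5(serialize($args));
    if (!array_key_exists($key, $this->values)) {
        $this->values[$key] = call_user_func_array(array($this, $method), $args);
    }
    return $this->values[$key];
}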
In PHP it is possible to use magic methods (__get() or __call()) for the main retrieval method. In combination with "@property" in the class docblock, this allows type hints for each "virtual" property.
Magic methods are evil. A documentation block should be considered outdated, as it is not real working code. While I find it acceptable to use a magic getter and setter in really easy-to-understand value-object code, which allows storing any value in any property just like stdClass, I do recommend being very careful with __call().
I often use method names like "get_someValue()", where "someValue" is the actual key, to distinguish from regular methods.
I would consider this a violation of PSR-1: "4.3. Methods: Method names MUST be declared in camelCase()." And is there a reason to mark these methods as something special? Are they special at all? They do return the value, don't they?
Is it possible to distribute the data computation to more than one object, to get some kind of separation of concerns?
If you cache a complex construction of objects, this is completely possible.
Is it possible to pre-initialize some values?
This should not be the concern of a cache, but of the implementation itself. What is the point in NOT executing an expensive computation, only to return a preset value? If that is a real use case (like instantly returning NULL if a parameter is outside of the defined range), it must be part of the implementation itself. You should not rely on an additional layer around the object to return a value in such cases.
Is storing intermediate values from dynamic programming a legitimate use case for this?
Do you have a dynamic programming problem? There is this sentence on the Wikipedia page you linked:
There are two key attributes that a problem must have in order for dynamic programming to be applicable: optimal substructure and overlapping subproblems. If a problem can be solved by combining optimal solutions to non-overlapping subproblems, the strategy is called "divide and conquer" instead.
I think that there are already existing patterns that seem to solve the lazy evaluation part of your example: Singleton, ServiceLocator, Factory. (I'm not promoting singletons here!)
There also is the concept of "promises": objects are returned that promise to provide the real value later when asked, but as long as the value isn't needed right now, they act as the value's stand-in and can be passed along instead. You might want to read this blog posting: http://blog.ircmaxell.com/2013/01/promise-for-clean-code.html
What are the best practices to implement this in PHP? Is some of the stuff in "Details" bad and ugly?
You used an example that probably comes close to the Fibonacci example. The aspect I don't like about that example is that you use a single instance to collect all values. In a way, you are aggregating global state here - which probably is what this whole concept is about. But global state is evil, and I don't like that extra layer. And you haven't really solved the problem of parameters enough.
I wonder why there really are two calls to bar() inside foo(). The more obvious method would be to fetch the result once directly in foo(), and then "add" it to itself.
All in all, I'm not too impressed until now. I cannot anticipate a real use case for such a general purpose solution on this simple level. I do like IDE auto suggest support, and I do not like duck-typing (passing an object that only simulates being compatible, but without being able to ensure the instance).
I would like to use PHP's assert function in my unit testing framework. It has the advantage of being able to see the expression being evaluated (including comments) within the error message.
The problem is that each method containing tests may have more than one assert statement, and I would like to keep track of how many actual assert statements have been run. assert does not give me a way to count how many times it has been run, only how many times it has failed (within the failure callback).
I tried to abstract the assert statement into a function so that I can add a counting mechanism.
private function assertTrue($expression) {
    $this->testCount++;
    assert($expression);
}
This does not work, however, because any variables within the expression are now out of scope.
$var = true;
$this->assertTrue('$var == true'); // fails
Any advice on how I can use assert in my unit testing while being able to count the number of actual tests?
The two ideas I have come up with are to make users do the counting themselves:
$this->testCount++;
assert('$foo');
$this->testCount++;
assert('$bar');
or to make users put only one assert in each test method (I could then count the number of methods run). But neither of these solutions is very enforceable, and both make coding more difficult. Any ideas on how to accomplish this? Or should I just strip assert() from my testing framework?
In PHPUnit, all of the assert*() methods take an additional $message parameter, which you can take advantage of:
$this->assertTrue($var, 'Expected $var to be true.');
If the assertion fails, the message is output with the failure in the post-test report.
This is more useful generally than outputting the actual expression because then you can comment on the significance of the failure:
$this->assertTrue($var, 'Expected result of x to be true when y and z.');
A bit of a cheeky answer here, but open vim and type:
:%s/assert(\(.\+\));/assert(\1) ? $assertSuccesses++ : $assertFailures++;/g
(In principle, replace all assert() calls with assert() ? $success++ : $fail++;)
More seriously, providing a mechanism to count tests is really a responsibility a bit beyond the scope of the assert() function. Presumably you want this for an "X/Y tests succeeded" type indicator. You should be doing this in a testing framework, recording what each test is, its outcome and any other debug information.
You are restricted by the fact that assert() must be called in the same scope in which the variables you are testing live. That leaves -- as far as I can tell -- solutions that require extra code, that modify the source before runtime (preprocessing), or that extend PHP at the C level. This is my proposed solution involving extra code.
class UnitTest {

    // controller that runs the tests
    public function runTests() {
        // the unit test is called, creating a new variable holder
        // and passing it to the unit test.
        $this->testAbc($this->newVarScope());
    }

    // keeps an active reference to the variable holder
    private $var_scope;

    // refreshes and returns the variable holder
    private function newVarScope() {
        $this->var_scope = new stdClass;
        return $this->var_scope;
    }

    // number of times $this->assert was called
    public $assert_count = 0;

    // our assert wrapper
    private function assert($__expr) {
        ++$this->assert_count;
        extract(get_object_vars($this->var_scope));
        assert($__expr);
    }

    // an example unit test
    private function testAbc($v) {
        $v->foo = true;
        $this->assert('$foo == true');
    }
}
Downfalls to this approach: all variables used in unit testing must be declared as $v->* rather than $*, whereas variables written in the assert statement are still written as $*. Secondly, the warning emitted by assert() will not report the line number at which $this->assert() was called.
For more consistency you could move the assert() method to the variable holder class, as that way you could think about each unit test operating on a test bed, rather than having some sort of magical assert call.
That's not something which unit testing is intended to do (remember it originated in compiled languages).
And PHP's semantics do not help much with what you are trying to do either.
But you could accomplish it with some syntactic overhead still:
assert('$what == "ever"') and $your->assertCount();
Or even:
$this->assertCount(assert('...'));
To still get the assertion string for conditions that succeeded, you could only resort to debug_backtrace() and some heuristic string extraction.
This is not enforced much either (short of running a precompiler/regex over the test scripts). But I would look at this from the upside: not every check might be significant enough to warrant recording. A wrapper method thus allows opting out.
It's hard to give an answer without knowing how your framework has been built, but I'll give it a shot.
Instead of directly calling the methods of your unit testing class (methods like assertTrue()), you could use PHP's magic method __call(). Using this, you could increase an internal counter every time the assertTrue() method is called. Actually, you can do whatever you want every time any method is called.
Remember that __call() is invoked if you try to call a method that does not exist. So you would have to change all your method names and call them internally from __call(). For instance, you'd have a method called fAssertTrue(), but the unit testing class would use assertTrue(). Since assertTrue() is not defined, the __call() method would be invoked, and there you would call fAssertTrue().
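A rough sketch of that idea, assuming the real assertions are renamed with an "f" prefix (the class name, the prefix and the counter property are illustrative):

class CountingAssertions
{
    public $assertCount = 0;

    public function __call($name, $args)
    {
        // assertTrue() does not exist, so __call() intercepts it...
        $internal = 'f' . ucfirst($name); // e.g. assertTrue -> fAssertTrue
        if (!method_exists($this, $internal)) {
            throw new BadMethodCallException("Unknown assertion $name");
        }
        // ...counts it, and forwards to the real implementation.
        $this->assertCount++;
        return call_user_func_array(array($this, $internal), $args);
    }

    protected function fAssertTrue($value)
    {
        assert($value);
    }
}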
Since you're passing the expression already (which might lead, correct me if I'm wrong, to quoting hell):
$this->assertTrue('$var == true'); // fails with assert($expression);
Why not add a tiny extra layer of complexity, and avoid the quoting hell, by using a closure instead?
$this->assertTrue(function() use ($var) {
    return $var == true;
}); // succeeds with assert($expression());
Simple:
$this->assertTrue($var == true);
(without quotes!)
It will be evaluated in caller space, so assertTrue() will be passed just false or true.
As others have pointed out, this might not be the best way of testing, but that's another question entirely... ;)
I currently have a method within my class that has to call other methods, some in the same object and others from other objects.
class MyClass
{
    public function myMethod()
    {
        $var1 = $this->otherMethod1();
        $var2 = $this->otherMethod2();
        $var3 = $this->otherMethod3();

        $otherObject = new OtherClass();
        $var4 = $otherObject->someMethod();

        # some processing goes on with these 4 variables
        # then the method returns something else
        return $var5;
    }
}
I'm new to the whole TDD game, but some of what I understand to be key premises of more testable code are composition and loose coupling, together with some strategy for Dependency Injection/Inversion of Control.
How do I go about refactoring a method into something more testable in this particular situation?
Do I pass the $this object reference to the method as a parameter, so that I can easily mock/stub the collaborating methods? Is this recommended or is it going overboard?
class MyClass
{
    public function myMethod($self, $other)
    {
        # $self == $this
        $var1 = $self->otherMethod1();
        $var2 = $self->otherMethod2();
        $var3 = $self->otherMethod3();

        $var4 = $other->someMethod();

        # ...
        return $var5;
    }
}
Also, it is obvious to me that dependencies are a pretty big deal with TDD, as one has to think about how to inject a stub/mock into the method under test. Do most TDDers use DI/IoC as a primary strategy for public dependencies? At which point does it become exaggerated? Can you give some pointers on doing this efficiently?
These are some good questions... let me first say that I do not really know JS at all, but I am a unit tester and have dealt with these issues. I first want to point out that JsUnit exists if you are not using it.
I wouldn't worry too much about your method calling other methods within the same class... this is bound to happen. What worries me more is the creation of the other object, depending on how complicated it is.
For example, if you are instantiating a class that does all kinds of operations over the network, that is too heavy for a simple unit test. What you would prefer to do is mock out the dependency on that class so that you can have the object produce the result you would expect to receive from its operations on the network, without incurring the overhead of going on the network: network failures, time, etc...
Passing in the other object is a bit messy. What people typically do is have a factory method to instantiate the other object. The factory method can decide, based on whether or not you are testing (typically via a flag), whether to instantiate the real object or the mock. In fact, you may want to make the other object a member of your class and, within the constructor, call the factory, or make the decision right there whether to instantiate the mock or the real thing. Within the setup function or within your test cases you can set special conditions on the mock object so that it will return the proper value.
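A rough sketch of that factory-method idea, where the $testing flag and the OtherClassMock name are illustrative placeholders rather than any framework API:

class MyClass
{
    private $other;

    public function __construct($testing = false)
    {
        // The constructor decides which collaborator to build.
        $this->other = $this->createOther($testing);
    }

    protected function createOther($testing)
    {
        return $testing ? new OtherClassMock() : new OtherClass();
    }

    public function myMethod()
    {
        return $this->other->someMethod();
    }
}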
Also, just make sure you have tests for your other functions in the same class... I hope this helps!
Looks like the whole idea of this class is not quite correct. In TDD you are testing classes, not methods. If a method has its own responsibility and provides its own (separately testable) functionality, it should be moved to a separate class. Otherwise it just breaks the whole OOP encapsulation thing. Particularly, it breaks the Single Responsibility Principle.
In your case, I would extract the tested method into another class and inject $var1, $var2, $var3 and $other as dependencies. The $other should be mocked, as should any object which the tested class depends on.
class TestMyClass extends MyTestFrameworkUnitTestBase{
    function testMyClass()
    {
        $myClass = new MyClass();
        $myClass->setVar1('asdf');
        $myClass->setVar2(23);
        $myClass->setVar3(78);

        $otherMock = getMockForClassOther();
        $myClass->setOther($otherMock);

        $this->assertEquals('result', $myClass->myMethod());
    }
}
The basic rule I use is: if I want to test something, I should make it a class. This is not always true in PHP, though it works in 90% of cases (based on my experience).
I might be wrong, but I am under the impression that objects/classes should be black boxes to their clients, and so also to their testing clients (encapsulation, I think, is the term I am looking for).
There's a few things you can do:
The best thing to do is mock, here's one such library: http://code.google.com/p/php-mock-function
It should let you mock out only the specific functions you want.
If that doesn't work, the next best thing is to provide the implementation of method2 as a method of an object within the MyClass class. I find this one of the easier methods if you can't mock methods directly:
class MyClass {
    private $method2Impl;

    function __construct($method2Impl) {
        $this->method2Impl = $method2Impl;
    }

    function method2() {
        return $this->method2Impl->call();
    }
}
Another option is to add an "under test" flag, so that the method behaves differently. I don't recommend this either - eventually you'll have differing code paths, each with their own bugs.
Another option would be to subclass and override the behaviors you need. I -really- don't suggest this since you'll end up customizing your overridden mock to the point that it'll have bugs itself :).
Finally, if you need to mock out a method because it's too complicated, that can be a good sign that it should move into its own object and be used via composition (essentially the method2Impl technique I mentioned above).
Possibly this is more a matter of the Single Responsibility Principle being violated, which feeds into the TDD issues.
That's a GOOD thing; it means TDD is exposing design flaws. Or so the story goes.
If those methods are not public, and are just you breaking apart your code into more digestible chunks, honestly, I wouldn't care.
If those methods are public, then you've got an issue, following the rule 'any public method of a class instance must be callable at any point'. That is to say, if you're requiring some sort of ordering of method calls, then it's time to break that class up.