The next weirdness I'm seeing with PHPUnit:
class DummyTest extends PHPUnit_Framework_TestCase {
    public function testDummy() {
        $this->assertTrue(false, 'assert1');
        $this->assertTrue(false, 'assert2');
    }
    public function testDummy2() {
        $this->assertTrue(false, 'assert3');
    }
}
As soon as the first assertion fails in a test, the rest of the test is ignored.
So (with a simple call of phpunit DummyTest.php):
The above code will display 2 tests, 2 assertions, 2 failures. What?
If I make all the tests pass, then I'll get OK (2 tests, 3 assertions). Good.
If I make all the tests pass except for assert2, I get 2 tests, 3 assertions, 1 failure. Good.
I don't get it, but PHPUnit's been around for ages, surely it has to be me?
Not only are the counts not what I'd expect, but only the error message for the first failed assert in the code above is displayed.
(BTW, I'm analyzing the xml format generated by PHPUnit for CI rather than testing real code, hence the practice of multiple assertions in the one test.)
First off: That is expected behavior.
Every test method will stop executing once an assertion fails.
For an example where the opposite behavior would be very annoying:
class DummyTest extends PHPUnit_Framework_TestCase {
    public function testDummy() {
        $foo = get_me_my_foo();
        $this->assertInstanceOf("MyObject", $foo);
        $this->assertTrue($foo->doStuff());
    }
}
If PHPUnit didn't stop after the first assertion, you'd get an E_FATAL (call to a member function on a non-object) and the whole PHP process would die.
So, for formulating nice assertions and small tests, it's more practical that way.
For another example:
When "asserting that an array has a size of X, then asserting that it contains a,b and c" you don't care about the fact that it doesn't contain those values if the size is 0.
If a test fails you usually just need the message "why it failed" and then, while fixing it, you'll automatically make sure the other assertions also pass.
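A minimal sketch of that array example (the cart object and its getItems() method are made up for illustration):

public function testCartContents() {
    $items = $this->cart->getItems();
    // If the count check fails, the contains checks below would only be
    // noise, so stopping at the first failure is exactly what we want.
    $this->assertCount(3, $items);
    $this->assertContains('a', $items);
    $this->assertContains('b', $items);
    $this->assertContains('c', $items);
}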
On an additional note, there are also people arguing that you should only have one assertion per test case. I don't practice that (and I'm not sure I like it), but I wanted to note it ;)
Welcome to unit testing. Each test function should test one element or process (process being a series of actions that a user might take). (This is a unit, and why it is called "unit testing.") The only time you should have multiple assertions in a test function is if part of the test is dependent on the previous part being successful.
I use this for Selenium testing web pages. So, I might want to assert that I am in the right place every time I navigate to a new page. For instance, if I go to a web page, then login, then change my profile, I would assert that I got to the right place when I logged in, because the test would no longer make sense if my login failed. This prevents me from getting additional error messages, when only one problem was actually encountered.
On the other side, if I have two separate processes to test, I would not test one, then continue on to test the other in the same function, because an error in the first process would mask any problems in the second. Instead, I would write one test function for each process. (And, if one process depended on the success of the other, for instance, post something to a page, then remove the post, I would use the @depends annotation to prevent the second test from running if the first fails; see the sketch below.)
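For illustration, a sketch of that post/remove pair using @depends (the $this->poster object and its methods are invented for this example):

public function testCreatePost() {
    $id = $this->poster->create('hello world');
    $this->assertNotEmpty($id);
    return $id; // PHPUnit passes this to dependent tests as an argument
}

/**
 * @depends testCreatePost
 */
public function testRemovePost($id) {
    // PHPUnit skips this test automatically if testCreatePost failed.
    $this->assertTrue($this->poster->remove($id));
}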
In short, if your first assert failing does not make the second one impossible to test, then they should be in separate functions. (Yes, this might result in redundant code. When unit testing, forget all that you have learned about eliminating redundant code. That, or make non-test functions and call them from the test functions. This can make unit tests harder to read, and thus harder to update when changes are made to the subject of the tests though.)
I realize that this question is 2 years old, however the only answer was not very clear about why. I hope that this helps others understand unit testing better.
I'm testing an object that is designed to check whether a user owns a given email address. On invocation of the "tryEmail" method, it sends a message with a confirmation link to the given email address. My test looks like this:
public function testSendingWasSuccessful() {
    $confirmationObject = $this->getMock('LT\EmailConfirmation\Model\ConfirmationObjectInterface');
    $testType = 'test.type';
    $testEmail = 'test@example.com';
    $testData = [];

    // EmailTester should create a new confirmation object.
    $this->manager->expects(static::once())
        ->method('create')->with($testType, $testEmail)
        ->willReturn($confirmationObject);

    // Then it should send the confirmation message.
    $this->mailer->expects(static::once())
        ->method('send')->with(static::identicalTo($confirmationObject))
        ->willReturn(true);

    // And save the confirmation object.
    $this->manager->expects(static::once())
        ->method('save')->with(static::identicalTo($confirmationObject));

    $tester = new EmailTester($this->repository, $this->manager, $this->confirmationHandler, $this->mailer);
    static::assertTrue($tester->tryEmail($testType, $testEmail, $testData));
}
Now you can see what is possibly wrong with it: it contains multiple assertions. Why did I decide to use those assertions inside one test? Because they depend on each other. The confirmation message should be sent only if a new confirmation object has been created, the confirmation object should be saved only if the confirmation message was sent, and at the end, the output of the "tryEmail" method, using those mocked methods, is asserted.
However, I feel like I accidentally just described the implementation of the "tryEmail" method with my assertions. But this seems to be required for full coverage of the method, and to make sure it always works as it should. I can imagine bugs slipping through if I removed any of those assertions. For example: static::identicalTo($confirmationObject), which basically checks that the object passed to the mailer is the same as the one created before. If I changed the interface of the mailer, I would also have to change this test of the EmailTester, so it seems like I'm doing something wrong here. At the same time, however, how can I check the above assertion without introducing this coupling? Or should I just leave this untested?
Am I doing it right or wrong? How could I improve on it? When to use assertions on mocks really?
Added: I just had a thought: is it not true that a test class is supposed to test the implementation (i.e. that the implementation conforms to the interface)? That would mean that describing the implementation in the test is actually a good thing, because it makes sure the implementation works correctly. It would also mean that the coupling level of the implementation carries over to the test and is unavoidable. Am I wrong here?
The rule of "one assertion per test" is to keep your tests focused on one particular behavior of the code being tested. Having multiple assertions in a test is not a bad thing.
When using a mock object, I prefer to have some sort of assertions on methods being replaced. That way I ensure that the system will be using the dependencies as expected.
Your test class is there to confirm the behaviors of your code. The assertions that you have would be any of the checks you would do manually to ensure that the class was behaving as you expected. Since you are expecting particular methods to be called in a specific manner, you want to have an assertion for them.
One issue that I see with the test is that you have a mock object returning a mock object. This is usually a code smell that means you are not passing the correct dependencies. You could move the creation of your LT\EmailConfirmation\Model\ConfirmationObjectInterface object out of the method and pass it as a dependency of your method, replacing the first two parameters with this object.
You also don't seem to be using the third parameter at all in this test so it doesn't appear to be necessary.
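Roughly, and only as a sketch (the reworked signature is purely illustrative), the suggested refactoring could look like this:

// Before: tryEmail() asks the manager to create the confirmation object,
// forcing the test to set up a mock that returns another mock.
$tester->tryEmail($testType, $testEmail, $testData);

// After: the caller supplies the confirmation object as a dependency,
// replacing the first two parameters.
$confirmationObject = $this->manager->create($testType, $testEmail);
$tester->tryEmail($confirmationObject, $testData);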
I have several integration tests with PHPUnit, and during the tests some log lines are written to files on the system.
I would like to check whether a given line was written during a test. Is that possible?
example:
/** @test */
function action_that_writes_to_log() {
    $this->call('GET', 'path/to/action', [], [], $requestXml);
    // I want this:
    $this->assertFileHas('the log line written', '/log/file/path.log');
}
The obvious way:
Implementing a custom assertion method, like the one you propose: assertFileHas. It's quite easy: just check whether the string appears in the file. The problem you can run into is that the line may already exist from another test, or from a previous run of the same test. A possible solution is to delete the log contents before each test or test class, depending on your needs. You would need a method that empties the logs and call it from setUp or setUpBeforeClass.
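A minimal sketch of both ideas, reusing the log path from your example (the helper name follows your proposal; assertContains on a string haystack works in PHPUnit of this era):

protected function setUp() {
    // Empty the log before each test so lines from other tests
    // or previous runs can't cause false positives.
    file_put_contents('/log/file/path.log', '');
}

protected function assertFileHas($line, $file) {
    $this->assertFileExists($file);
    $this->assertContains($line, file_get_contents($file),
        "Expected to find '$line' in $file");
}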
I would go with another approach: mocking the logging component, and checking that the right call is being done:
$logger_mock->expects($this->once())
    ->method('log')
    ->with($this->equalTo('the log line written'));
This makes it easy to test that the components are logging the right messages, but you also need to implement a test verifying that the logger is capable of actually writing to the file. It's easier to implement that test once, and then just check that each component calls the logging method.
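That one file-writing test could be a small integration test along these lines (FileLogger and its constructor are hypothetical names):

public function testLoggerWritesToFile() {
    $file = tempnam(sys_get_temp_dir(), 'log');
    $logger = new FileLogger($file); // hypothetical concrete logger
    $logger->log('the log line written');
    $this->assertContains('the log line written', file_get_contents($file));
    unlink($file);
}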
I have been starting unit testing recently and am wondering, should I be writing unit tests for 100% code coverage?
This seems futile when I end up writing more unit testing code than production code.
I am writing a PHP Codeigniter project and sometimes it seems I write so much code just to test one small function.
For Example this Unit test
public function testLogin() {
    // setup
    $this->CI->load->library("form_validation");
    $this->realFormValidation = new $this->CI->form_validation;
    $this->CI->form_validation = $this->getMock("CI_Form_validation");
    $this->realAuth = new $this->CI->auth;
    $this->CI->auth = $this->getMock("Auth", array("logIn"));
    $this->CI->auth->expects($this->once())
        ->method("logIn")
        ->will($this->returnValue(TRUE));
    // test
    $this->CI->form_validation->expects($this->once())
        ->method("run")
        ->will($this->returnValue(TRUE));
    $_POST["login"] = TRUE;
    $this->CI->login();
    $out = $this->CI->output->get_headers();
    // check new header ends with dashboard
    $this->assertStringEndsWith("dashboard", $out[0][0]);
    // tear down
    $this->CI->form_validation = $this->realFormValidation;
    $this->CI->auth = $this->realAuth;
}
public function badLoginProvider() {
    return array(
        array(FALSE, FALSE),
        array(TRUE, FALSE)
    );
}
/**
 * @dataProvider badLoginProvider
 */
public function testBadLogin($formSubmitted, $validationResult) {
    // setup
    $this->CI->load->library("form_validation");
    $this->realFormValidation = new $this->CI->form_validation;
    $this->CI->form_validation = $this->getMock("CI_Form_validation");
    // test
    $this->CI->form_validation->expects($this->any())
        ->method("run")
        ->will($this->returnValue($validationResult));
    $_POST["login"] = $formSubmitted;
    $this->CI->login();
    // check it went to the login page
    $out = output();
    $this->assertGreaterThan(0, preg_match('/Login/i', $out));
    // tear down
    $this->CI->form_validation = $this->realFormValidation;
}
For this production code
public function login() {
    if ($this->input->post("login")) {
        $this->load->library('form_validation');
        $username = $this->input->post('username');
        $this->form_validation->set_rules('username', 'Username', 'required');
        $this->form_validation->set_rules('password', 'Password', "required|callback_userPassCheck[$username]");
        if ($this->form_validation->run() === FALSE) {
            $this->load->helper("form");
            $this->load->view('dashboard/login');
        } else {
            $this->load->model('auth');
            echo "valid";
            $this->auth->logIn($this->input->post('username'), $this->input->post('password'), $this->input->post('remember_me'));
            $this->load->helper('url');
            redirect('dashboard');
        }
    } else {
        $this->load->helper("form");
        $this->load->view('dashboard/login');
    }
}
Where am I going so wrong?
In my opinion, it's normal for there to be more test code than production code. But test code tends to be straightforward; once you get the hang of it, writing tests is almost a no-brainer task.
Having said that, if you discover your test code is too complicated to write, or can't cover all the execution paths in your production code, that's a good indicator that some refactoring is needed: your method may be too long, or attempt to do several things, or have too many external dependencies, etc.
Another point is that it's good to have high test coverage, but it does not need to be 100% or some very high number. Sometimes there is code that has no logic, like code that simply delegates tasks to others. In that case you can skip testing it and use the @codeCoverageIgnore annotation to exclude it from your code coverage.
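For example, a pure delegation method could be excluded from the coverage report like this (the method itself is made up):

/**
 * @codeCoverageIgnore
 */
public function sendWelcomeMail($user) {
    // No logic of its own, just delegation; not worth a dedicated test.
    $this->mailer->send($user->getEmail(), 'welcome');
}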
In my opinion it's logical that tests are much more code, because you have to test multiple scenarios, provide test data, and check that data for every case.
Typically a test coverage of 80% is a good value. In most cases it's not necessary to test 100% of the code, because you should not test, for example, getters and setters. Don't test just for the statistics ;)
The answer is it depends, but generally no. If you are publishing a library then lots of tests are important and can even help create the examples for the docs.
For internal projects, you would probably want to focus your tests around complex functions and things which would be bad if they were to go wrong. For each test, ask: what is the value in having the test here rather than in the parent function?
What you want to avoid is testing anything that relies on implementation details, or, say, private methods/functions; otherwise, if you change the structure, you'll find you have to repeatedly rewrite the entire suite of tests.
It is better to test at a higher level: the public functions, or anything at the boundary between modules. A few tests at the highest level you can manage should yield reasonable coverage and ensure the code works in the way it is actually called. This is not to say lower-level functions shouldn't have tests, but at that level it's more to check edge cases than to have a test for the typical case against every function.
Instead of creating tests to increase coverage, create tests to cover bugs as you find and fix them, and create tests against new functionality when you would have to manually test it anyway. Create tests to guard against bad things which must not happen. Fragile tests which easily break during refactors should be removed or changed to be less dependent on the implementation of the function.
In PHP I wrote my own debug function which has two arguments: a text and a message level. (I could also use the PHP functions for triggering errors.) For debugging in development, I sometimes use it like this:
debug($xmlobject->asXML(),MY_CONSTANT);
Now I want to know: is there a performance cost in non-debug execution, because the arguments are evaluated regardless of whether they will be used inside the function? And how do I do it right, so that they are only evaluated when I need them?
Thanks for your help,
Robert
If you write the following portion of code:
debug($xmlobject->asXML(),MY_CONSTANT);
Then, no matter what the debug() function does, $xmlobject->asXML() will be called and executed.
If you do not want that expression to be evaluated, you must not call it; I see two possible solutions:
Remove the useless-in-production calls to the debug() function, not leaving any debugging code in your source files,
Or make sure they are only executed when needed.
In the second case, a possibility would be to define a constant to configure whether or not you are in debug mode, and then only call debug() when needed:
if (DEBUG) {
    debug($xmlobject->asXML(), MY_CONSTANT);
}
Of course, this makes writing debugging code a bit harder... and there is a bit of a performance impact (but far smaller than executing the actual code for nothing).
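Another option, sketched here under the assumption that you can change the debug() signature (and that you're on PHP 5.3+ for closures), is to pass a closure, so the expensive expression is only evaluated when debugging is enabled:

function debug_lazy($producer, $level) {
    if (!DEBUG) {
        return; // $producer is never called, so asXML() never runs
    }
    debug($producer(), $level);
}

debug_lazy(function () use ($xmlobject) {
    return $xmlobject->asXML();
}, MY_CONSTANT);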
The arguments are passed by value, so the method ->asXML() is always executed.
I'm looking for a simpler test framework. I had a look at a few PHPUnit and SimpleTest scripts and I find the required syntactic sugar appalling. SnapTest sounded nice, but was just as cumbersome. Apache More::Test was too procedural, even for my taste. And Symfony's lime-test was no different in that regard.
BDD tools like http://everzet.com/Behat/#basics are very nice, but even two abstraction levels higher than desired.
Moreover I've been using throwaway test scripts till now. And I'm wondering if instead of throwing them away, there is a testing framework/tool which simplifies using them for automated tests. Specifically I'd like to use something that:
evaluates output (print/echo), or even return values/objects
serializes and saves it away as probe/comparison data
allows to classify that comparison output as passed test or failure
also collects headers, warning or error messages (which might also be expected output)
in addition to a few $test->assert() or test::fail() states
Basically I'm too lazy to do the test framework's work: manually pre-defining or boolean-evaluating and classifying the expected output. Also, I don't find it entertaining to needlessly wrap test methods into classes; plain include scripts or functions should suffice. Furthermore, it shouldn't be difficult to auto-run through the test scripts with a pre-initialized base and test environment.
The old .phpt scripts with their --EXPECT-- output come close, but still require too much manual setup. Also, I'd prefer a web GUI to run the tests. Is there a modern revival of such test scripts? (Plus some header/error/result evaluation and eventually unit test::assert methods.)
Edit, I'll have to give an example. This is your typical PHPUnit test:
class Test_Something extends PHPUnit_Test_Case_Or_Whatever {
    function setUp() {
        app::__construct(...);
    }
    function testMyFunctionForProperResults() {
        $this->assertFalse(my_func(false));
        $this->assertMatch(my_func("xyzABC"), "/Z.+c/");
        $this->assertTrue(my_func(123) == 321);
    }
}
Instead I'd like to use plain PHP with less intermingled test API:
function test_my_function_for_proper_results() {
    assert::false(my_func(false));
    print my_func("xyz_ABC");
    return my_func(123);
}
Well, that's actually three tests wrapped in one. But just to highlight: the first version requires manually pre-defining the expected values. What I want is to send/return the test data to the test framework. It's the task of the framework to compare results, not just be spoon-fed booleans. Or imagine I get a bloated array result or object chain which I don't want to manually list in the test scripts.
For the record, I've now discovered Shinpuru.
http://arkanis.de/projects/shinpuru/
Which looks promising for real world test cases, and uses PHP5.3-style anonymous functions instead of introspection-class wrappers.
Have to say - it isn't obvious how your example of a simplified test case would be possible to implement. Unfortunately the convolutedness is - more or less - something that has to be lived with. That said, I've seen cases where PHPUnit is extended to simplify things, as well as adding web test runners, tests for headers, output etc (thinking SilverStripe here - they're doing a lot of what you want with PHPUnit). That might be your best bet. For example:
evaluates output (print/echo): enable output buffering and assert against the buffer result (see the sketch below)
collect headers, warnings or error messages: register your own handler that stores the error message
wget against URLs and compare the result (headers and all)
Etc.
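A rough sketch of the first two points combined (function and variable names are illustrative only):

function run_test_script($script) {
    $errors = array();
    set_error_handler(function ($errno, $errstr) use (&$errors) {
        $errors[] = $errstr; // store warnings/notices instead of printing them
        return true;
    });
    ob_start();
    $result = include $script; // a plain include script, no class wrapper
    $output = ob_get_clean();
    restore_error_handler();
    return array($output, $result, $errors);
}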