I am about to start from scratch with the PHPUnit tests for a project, so I was looking into some projects (like Zend) to see how they are doing things and how they organize their tests.
Most things are pretty clear; the only thing I have some problems with is how to organize the test suites properly.
Zend has an AllTests.php which loads the other test suites.
Looking at the class, though, it uses PHPUnit_Framework_TestSuite to create a suite object and then adds the other suites to it. But if I look in the PHPUnit docs for organizing tests, for PHPUnit versions after 3.4 there is only a description for XML or file hierarchy; the one using classes to organize the tests was removed.
I haven't found anything saying that this method is deprecated, and projects like Zend are still using it.
But if it is deprecated, how would I organize tests in the same structure with the XML configuration? Executing all tests is no problem, but how would I organize the tests (in the XML) if I only wanted to execute a few of them? Maybe by creating several XML files where I only specify a few tests/test suites to be run?
So if I wanted to test only module1 and module2 of the application, would I have an extra XML file for each, defining test suites only for those modules (the classes used by the module), and also one that defines a test suite for all tests?
Or would it be better to use the @group annotation on the specific tests to mark them as belonging to module1 or module2?
Thanks in advance for pointing me to some best practices.
I'll start off by linking to the manual and then go into what I've seen and heard in the field.
Organizing phpunit test suites
Module / Test folder organization in the file system
My recommended approach is to combine the file system layout with an XML config.
tests/
    unit/
        module1/
        module2/
    integration/
    functional/
with a phpunit.xml containing simply:

<testsuites>
  <testsuite name="My whole project">
    <directory>tests</directory>
  </testsuite>
</testsuites>
You can split the test suites if you want to, but that's a project-to-project choice.
Running phpunit will then execute ALL tests, and running phpunit tests/unit/module1 will run all tests of module1.
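If you do split them, each suite gets a name you can target. A possible split, as a sketch (the suite names are just examples; recent PHPUnit versions can also run a single suite by name with phpunit --testsuite unit):

<testsuites>
  <testsuite name="unit">
    <directory>tests/unit</directory>
  </testsuite>
  <testsuite name="integration">
    <directory>tests/integration</directory>
  </testsuite>
</testsuites>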
Organization of the "unit" folder
The most common approach here is to mirror your source/ directory structure in your tests/unit/ folder structure.
You have one TestClass per ProductionClass anyway, so it's a good approach in my book.
In-file organization
One class per file.
It's not going to work anyway if you have more than one test class in one file, so avoid that pitfall.
Don't have a test namespace
It just makes writing the test more verbose, as you need an additional use statement, so I'd say the test class should go in the same namespace as the production class. That is nothing PHPUnit forces you to do; I've just found it to be easier, with no drawbacks.
Executing only a few tests
For example, phpunit --filter Factory executes all FactoryTests, while phpunit tests/unit/logger/ executes everything logging-related.
You can use @group tags for things like issue numbers or stories, but for "modules" I'd use the folder layout.
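For illustration, a minimal sketch of a grouped test (the class name and group tag are hypothetical); you can run just that group with phpunit --group issue-123:

<?php
class LoggerTest extends PHPUnit_Framework_TestCase
{
    /**
     * @group issue-123
     */
    public function testLogsToFile()
    {
        // ... exercise the logger and assert on the result
    }
}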
Multiple xml files
It can be useful to create multiple xml files if you want to have:
one without code coverage
one just for the unit tests (but not for the functional, integration, or long-running tests)
other common "filter" cases
phpBB3, for example, does that with their phpunit.xml files.
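As a sketch, a stripped-down config for the "just the unit tests, no coverage" case (the file name and paths are assumptions); run it with phpunit -c phpunit-unit.xml:

<!-- phpunit-unit.xml -->
<phpunit bootstrap="tests/bootstrap.php">
  <testsuites>
    <testsuite name="unit">
      <directory>tests/unit</directory>
    </testsuite>
  </testsuites>
  <!-- deliberately no coverage/logging section -->
</phpunit>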
Code coverage for your tests
As it is related to starting a new project with tests:
My suggestion is to use @covers tags as described in my blog (only for unit tests; always cover all non-public functions; always use @covers tags).
Don't generate coverage for your integration tests. It gives you a false sense of security.
Always use whitelisting to include all of your production code so the numbers don't lie to you!
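A minimal sketch of both pieces, borrowing Sebastian Bergmann's BankAccount example (method names from memory; the <whitelist> element is the pre-PHPUnit-9 syntax):

<?php
class BankAccountTest extends PHPUnit_Framework_TestCase
{
    /**
     * @covers BankAccount::depositMoney
     */
    public function testDepositIncreasesBalance()
    {
        $account = new BankAccount();
        $account->depositMoney(100);
        $this->assertEquals(100, $account->getBalance());
    }
}

and the whitelist in phpunit.xml:

<filter>
  <whitelist>
    <directory suffix=".php">src</directory>
  </whitelist>
</filter>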
Autoloading and bootstrapping your tests
You don't need any sort of autoloading for your tests; PHPUnit will take care of that.
Use the <phpunit bootstrap="file"> attribute to specify your test bootstrap. tests/bootstrap.php is a nice place to put it. There you can set up your application's autoloader and so on (or call your application's bootstrap, for that matter).
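A minimal sketch of such a bootstrap (the Composer autoloader path is an assumption; substitute your application's own setup):

<?php
// tests/bootstrap.php
error_reporting(E_ALL | E_STRICT);
require __DIR__ . '/../vendor/autoload.php'; // or your application's bootstrap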
Summary
Use the xml configuration for pretty much everything
Separate unit and integration tests
Your unit test folders should mirror your application's folder structure
To execute only specific tests, use phpunit --filter or phpunit tests/unit/module1
Use strict mode from the get-go and never turn it off.
Sample projects to look at
Sebastian Bergmann's "Bank Account" example project
phpBB3, even though they have to fight with some legacy ;)
Symfony2
Doctrine2
Basic Directory Structure:
I have been experimenting with keeping the test code right next to the code being tested, literally in the same directory, with a slightly different file name than the file it is testing. So far I am liking this approach. The idea is that you don't have to spend time and energy keeping the directory structure in sync between your code and your test code. So if you change the name of the directory the code is in, you don't also need to go and find and change the directory name for the test code. This also means you spend less time looking for the test code that goes with some code, as it is right there next to it. It even makes it less of a hassle to create the test file in the first place, because you don't have to first find the tests directory, possibly create a new subdirectory to match the one you are writing tests for, and then create the test file. You just create the test file right there.
One huge advantage of this is that it means the other employees (not you, because you would never do this) will be less likely to avoid writing test code in the first place because it is just too much work. Even as they add methods to existing classes, they will be more likely to add tests to the existing test code, because of the low friction of finding it.
One disadvantage is this makes it harder to release your production code without the tests accompanying it. Although if you use strict naming conventions it still might be possible. For example, I have been using ClassName.php, ClassNameUnitTest.php, and ClassNameIntegrationTest.php. When I want to run all the unit tests, there is a suite that looks for files ending in UnitTest.php. The integration test suite works similarly. If I wanted to, I could use a similar technique to prevent the tests from getting released to production.
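As a sketch, such suffix-based suites could be declared in phpunit.xml like this (the paths are assumptions; the <directory> element accepts a suffix attribute):

<testsuites>
  <testsuite name="unit">
    <directory suffix="UnitTest.php">application/src</directory>
  </testsuite>
  <testsuite name="integration">
    <directory suffix="IntegrationTest.php">application/src</directory>
  </testsuite>
</testsuites>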
Another disadvantage of this approach is that when you are just looking for actual code, not test code, it takes a little more effort to differentiate between the two. But I feel this is actually a good thing, as it forces us to feel the pain of the reality that test code is code too: it adds its own maintenance costs and is just as vital a part of the code as anything else, not just something off to the side somewhere.
One test class per class:
This is far from experimental for most programmers, but it is for me. I am experimenting with having only one test class per class being tested. In the past I had an entire directory for each class being tested, with several test classes inside that directory. Each test class set up the class being tested in a certain way and then had a bunch of methods, each one making a different assertion.

But then I started noticing that certain conditions I would get these objects into had stuff in common with conditions in other test classes. The duplication became too much to handle, so I started creating abstractions to remove it. The test code became very difficult to understand and maintain. I realized this, but I couldn't see an alternative that made sense to me. Having just one test class per class seemed like it would not be able to cover nearly enough situations without being overwhelmed by all that test code inside one test class.

Now I have a different perspective on it. Even if I was right, that duplication is a huge dampener on other programmers, and myself, wanting to write and maintain the tests. So now I am experimenting with forcing myself to have one test class per class being tested. If I run into too many things to test in that one test class, I am experimenting with seeing this as an indication that the class being tested is doing too much and should be broken up into multiple classes. For removing duplication, I try to stick to simpler abstractions as much as possible, so that everything can exist in one readable test class.
UPDATE
I am still using and liking this approach, but I have found a very good technique for reducing the amount of test code and the amount of duplication: write reusable assertion methods inside the test class itself and use them heavily from the test methods in that class. It helps me to come up with the right kinds of assertion methods if I think of them as an internal DSL (something Uncle Bob promotes; actually, he promotes making real internal DSLs). Sometimes you can take this DSL concept even further by accepting a string parameter whose value says what kind of check you are trying to perform. For example, I once made a reusable assertion method that accepted $left, $comparesAs, and $right parameters. This made the tests very short and readable, as the code read something like $this->assertCmp('a', '<', 'b').
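A minimal sketch of such a reusable assertion (the class and test names are illustrative, using the old PHPUnit_Framework_TestCase base class):

<?php
class OrderingTest extends PHPUnit_Framework_TestCase
{
    // Reusable assertion that reads like a tiny DSL: assertCmp('a', '<', 'b')
    protected function assertCmp($left, $comparesAs, $right)
    {
        switch ($comparesAs) {
            case '<':  $this->assertLessThan($right, $left);    break;
            case '>':  $this->assertGreaterThan($right, $left); break;
            case '==': $this->assertEquals($right, $left);      break;
            default:   $this->fail("Unknown comparison '$comparesAs'");
        }
    }

    public function testStringsCompareLexically()
    {
        $this->assertCmp('a', '<', 'b');
    }
}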
Honestly, I can't emphasize that point enough, it is the entire foundation of making writing tests something that is sustainable (that you and the other programmers want to keep doing). It makes it possible for the value that tests add to outweigh what they take away. The point is not that you need to use that exact technique, the point is you need to use some kind of reusable abstractions that allow you to write short and readable tests. It might seem like I'm getting off topic from the question, but I'm really not. If you don't do this, you will eventually fall into the trap of needing to create multiple test classes per class being tested, and things really break down from there.
Related
I just started to learn the concept of BDD.
I am learning PHPSpec and Behat for that, but it is not clear to me why I need to use both. I understand that Behat is for functional/acceptance testing and PHPSpec is mainly for unit testing, but the articles and videos I found on this basically test the code twice: once with Behat (with scenarios) and once with PHPSpec. Can someone explain to me, with easy examples, what the difference is and when I need to use Behat and when PHPSpec?
Thanks for the answers in advance,
Br.
Well, before starting with the answer, I would like to point out that what follows is a more general answer than a "phpspec and behat" one.
As you correctly stated, phpspec is a tool designed for writing unit tests, whereas Behat is for other kinds of tests (let's say from integration up to e2e/smoke tests). So far so good. Now we can abstract and distinguish between unit test tools and other testing tools.
Let's start by defining what a unit test is and what it is not. A unit test is a kind of test run against a "small" part (the unit) of a system. Typically its focus is on a single class or method (though not always). Unit tests promote fast refactoring, as they run fast and in isolation. Please focus on fast and isolation; we'll come back to them soon.
Other kinds of tests are more cumbersome and oriented toward testing the interaction between components, a whole feature, or the "whole" system as a user would use it. Why cumbersome? Because you may have to set up a database, run a webserver, or use a browser simulator like Selenium, and so on. Because of this, these kinds of tests are much slower than unit tests. Moreover, when an error arises from a non-unit test, since you are testing a "whole" functionality, it is more painful to hunt down the bug, whereas with unit tests you at least know which class (or group of classes) is causing it.
Having said that, remember the statement about unit test speed and isolation? Well, speed helps you to "fail faster" (you don't need to wait for the whole system bootstrap, whatever that means for your project) and to fail in a more "localized" way (isolation).
My suggestion is to follow the test pyramid: a lot of unit tests (all possible permutations of a method's input/output, for instance) and only what's valuable for integration tests and above. Just to give an example: you can test a repository (which interacts with the DB, so something you cannot do in a unit test) for a particular query, you can test that the login page or homepage can be reached (as a check of the application's health), and so on.
My answer is just a short summary; I hope it drives you in the right direction.
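To make the split concrete, here is a minimal phpspec-style spec for a hypothetical Calculator class; a spec describes one unit in isolation, whereas Behat would instead start from a plain-language .feature file describing user-visible behaviour:

<?php
// spec/CalculatorSpec.php -- unit level: fast and isolated
namespace spec;

use PhpSpec\ObjectBehavior;

class CalculatorSpec extends ObjectBehavior
{
    function it_adds_two_numbers()
    {
        // $this proxies to the Calculator under specification
        $this->add(1, 2)->shouldReturn(3);
    }
}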
Moving to Codeception from Behat and still getting used to its concepts and where things go.
In the hypothetical case that my tests are 100% driven from .feature files, does this mean that all of the test code could be in contexts? That there wouldn't be anything in any functional tests that extend PHPUnit_Framework_TestCase? (Assuming that all my functional tests would extend that.)
Codeception is not driven by Gherkin the way Behat is. If you are moving away from Behat, you will write functions in classes in Codeception directly; you are not going to start from a Gherkin script and then derive the executable specs (in your context files and page objects).
In brief, the two flows:
Behat
Write the BDD scripts/Gherkin features. These are totally abstract and should usually be logical descriptions of the use cases your system implements. A product owner can start writing these, for instance, when a user story is created. This requires no programming logic.
For each line in the feature, implement an executable specification (a function in a Context class) that handles that action (see the sketch after this list).
In Behat you usually also use page objects (I'm unsure whether this can be done in Codeception too, but I don't see why not if you can import the page object library).
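As a sketch of step 2, a hypothetical Behat context method bound to one Gherkin line (Behat 3 contexts implement the Context interface):

<?php
use Behat\Behat\Context\Context;

class FeatureContext implements Context
{
    /**
     * Matches a feature line such as: Given I am logged in as "admin"
     *
     * @Given I am logged in as :username
     */
    public function iAmLoggedInAs($username)
    {
        // drive the application here, e.g. through a page object or Mink
    }
}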
Codeception
You write executable specifications as a first step, for instance in a Cept class. A developer is needed here, as this is actual PHP code/classes.
When you run Codeception, it prints out a list of all the statements it has run, just like a report.
The above is a very simplified description, as your question is also very generic. I hope it answers your question.
I am doing TDD on a project using PHP. Until now, I write unit tests, make them fail, and then write the least amount of code to fulfill the test. After the project has been completed, I write acceptance tests using CasperJS.
Of late I have been looking into Codeception and Behat and some other test frameworks and have been reading about different types of tests like Unit Testing, Integration Testing et al.
Nowhere could I find the correct order of testing.
What I want to know is when I sit down to design the project, I do:
Requirement Analysis
Technology Stack Selection
Enumerate the Resources/Business Entities
Then decide what goes into Models, what stays as Services, etc.
Database Design
Do the list of Models, Controllers, Services necessary
Write tests before writing the individual classes, using PHPUnit
Once API is ready, write CasperJS tests to verify behavior.
While this is not exact, it is a good indication of how I run my shop. So, where do integration testing and behavior testing fit in?
This really feels like an opinion-based question, so don't be surprised if it gets closed as such. There really isn't a perfect answer, and deciding how and when to write tests really depends on the project and on you.
You could try to work out all the user stories and behaviors and write the acceptance tests before your step 3. This could help illuminate dark corners in the plan.
Or, you could write acceptance tests before starting a feature. This could help to get you in the mindset of what needs to be done with a given feature, its scope, and edge cases.
Or, you could write acceptance tests after the project is finished. This could serve as a final checklist of expected behaviors before handing off to the customer for whatever acceptance testing they want to do.
I'm sure there are other points in your workflow where writing acceptance tests might be appropriate, but these are three points where I've found myself writing such tests. IMO, the best place is right before starting a feature. At that point, I have a user story, I'm familiar with the code I've already written, and I have an idea of what the new code is expected to do.
The acceptance tests can be organized to guide coding in the same way unit tests do, but at a broader level. Still iterate through "write failing test, write code to make test pass, write failing test," but also have a larger loop driven by the acceptance tests. Once you get to a point in the inner cycle where you think you'll have a passing acceptance test, check by running the whole suite.
There is another way in which you can ask "where integration and behavior testing fit in," and that's in the sense of "where does that testing fit in with the rest of my testing and code?" This is a little less grey. Unit testing should be run often. The entire unit test suite. Often. So it needs to be incredibly fast. You should be able to know if you broke something internal to your project immediately.
Integration tests are there to verify the ins and outs are working as expected. Outside of your app, your dependencies aren't going to change, and if they do, it should be a big deal that you should be aware of. So there's a clear demarcation between your code and their code. Your unit tests can carry you all the way to that interface. Integration tests verify that the interface you coded for, really is the interface they're providing. You don't need to run these with every little code change. You do need to run them, but maybe only every commit. They can be slower.
Acceptance tests are similar to integration tests, only rather than enlisting an external dependency to verify that the interfaces match, they define the interface. You could hold off on running them until near release, but the more often you run them, the more value they provide.
YMMV.
How do you manage your PHPUnit files in your projects?
Do you add them to your git repository or do you ignore them?
Do you use the @assert tag in your PHPDoc comments?
Setup
I'm not using PHP currently, but I'm working with Python unit testing and Sphinx documentation in git. We add our tests to git and even have certain test-passing requirements for pushing to the remote devel and master branches (stricter on master than on devel). This assures a bit of code quality (test coverage should also be evaluated, but that's not implemented yet :)).
We keep the test files in a separate directory next to the top-level source directory, in the subdirectories where they belong, prefixed with test_, so that the unit testing framework finds them automagically.
For documentation it's similar: we just put the Sphinx doc files into their own subdirectory (docs), which in our case is an independent git submodule; this might change in the future.
Rationale
We want to be able to track changes in the tests, as changes should be rare. Frequent changes indicate immature code.
Other team members need access to the tests, otherwise they're useless. If they change code in some places, they must be able to verify it doesn't break anything.
Documentation belongs with the code. In the case of Python, the code directly contains the documentation, so we have to keep the two together, as the docs are generated from the code.
Having the tests and the docs in the repository allows for automated testing and doc building on the remote server, which gives us instantaneously updated documentation and testing feedback. The implementation of "code quality" restrictions based on test results also works that way (it's actually more a reminder for people to run tests, as code quality cannot be checked with tests without also looking at test coverage). Refs are rejected by the git server if tests do not pass.
For example, we require that on master all tests pass or are skipped (sadly, we need "skipped", as some tests require OpenGL, which is not available on headless machines), while on devel it's okay if tests just "behave as expected" (i.e. pass, skip, or expected failure; no unexpected success, error, or failure).
Yes to keeping them in git. The other conventions I picked up by looking at projects, including phpunit itself. (A look at the doctrine2 example shows it seems to follow the same convention.)
I keep tests in a top-level tests directory. Under that I have meaningfully named subdirectories, usually following the main project directory structure. I have a functional subdirectory for tests that test multiple components together (where applicable).
I create a phpunit.xml.dist telling it where to find the tests (which also immediately tells anyone looking at the source code that we use phpunit, and by looking at the XML file they can understand the convention too).
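For illustration, a minimal phpunit.xml.dist following that convention (a sketch; the bootstrap path and attributes are assumptions):

<phpunit bootstrap="tests/bootstrap.php" colors="true">
  <testsuites>
    <testsuite name="Project Test Suite">
      <directory>tests</directory>
    </testsuite>
  </testsuites>
</phpunit>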
I don't use @assert or the skeleton generator. It feels like a toy feature: you do some typing in one place (your source file) to save some typing in another place (your unit test file). But then you'll expand on the tests in the unit test files (see my next paragraph), maybe even deleting some of the original asserts, and now the @assert entries in the original source file are out of date and misleading to anyone looking at just that code.
You have also lost a lot of power that you end up needing for real-world testing of real-world classes (simplistic BankAccount example, I'm looking at you). No setUp()/tearDown(). No instance variables. No support for all the other built-in assert functions, let alone custom ones. No @depends and @dataProvider.
One more reason against @assert, and for maintaining a separate tests directory tree: I like different people to write the tests and the actual code, where possible. When tests fail, it sometimes points to a misunderstanding of the original project specs by either your coder or your tester. When code and tests live close together, it is tempting to change them at the same time, especially late on a Friday afternoon when you have a date.
We store our tests right with the code files, so developers see the tests to execute and make sure they change the tests as required. We simply add an extension of .test to the file. This way, we can automatically include the original file in each test file, which may then be created from a template. When we release the code, the build process deletes the .test files from all directories.
/application/src/
Foo.php
Foo.php.test
/application/src/CLASS/
FOO_BAR.class
FOO_BAR.class.test
require_once(substr(__FILE__, 0, -5)); // strip '.test' extension
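Putting it together, a hypothetical Foo.php.test generated from such a template might look like this:

<?php
// Foo.php.test -- lives next to Foo.php; deleted by the release build
require_once(substr(__FILE__, 0, -5)); // strips '.test', includes Foo.php

class FooTest extends PHPUnit_Framework_TestCase
{
    public function testFooCanBeConstructed()
    {
        $this->assertInstanceOf('Foo', new Foo());
    }
}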
I do some PHP with Kohana 3 (IDE: NetBeans) and got excited about the idea of writing tests for my code. It sounds like a pretty cool thing to do, but I have a few complications and worries.
Why is using the Kohana unittest module in the browser about 5 times faster than running the tests in NetBeans or on the command line?
How could I exclude all of Kohana's internal tests? In the PHPUnit XML configuration file?
Why, when I run any test, do I get two entries for it in the NetBeans panel: one with a yellow triangle (it says 'file x skipped') and one with the normal test result? I get those double entries for every test, including the native Kohana ones. I don't mind, but it's strange.
All over the Web I see examples, tutorials, and screencasts of PHPUnit with sample classes and methods that add two numbers, display a name, or do some other trivial thing. I've learned to do those kinds of assertions, but how could I test my code in Kohana? My models are 90% ORM stuff. Controllers? How? Any how-tos and examples are welcome.
I've seen in a Ruby tutorial about RSpec a way to test the DB by using a testing-environment database and rolling back after the tests finish. User actions like clicking links were also simulated. Is this possible with PHPUnit?
There has always been a lot of discussion about what has to be tested and what doesn't. Generally, my opinion is that you shouldn't test things that should just work, like the database driver and connection; this has little to do with your code. Some argue that you should be able to test it anyway, but in most environments this isn't an easy thing to do and is usually a big hassle.
Generally, controller actions should be tested, as well as any helpers or modules you've written. Usually one uses a mocking framework to get around the database; the good thing about this is a gigantic speed increase in your testing. There are several PHP mocking frameworks as well, I suppose.
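As an illustration, a sketch of stubbing the database layer away with PHPUnit's built-in mocking (all class and method names here are hypothetical):

<?php
class WelcomeControllerTest extends PHPUnit_Framework_TestCase
{
    public function testWelcomeMessageUsesTheUserName()
    {
        // Stub the model so no real database is touched
        $model = $this->getMockBuilder('Model_User')
                      ->disableOriginalConstructor()
                      ->getMock();
        $model->expects($this->once())
              ->method('find_name')
              ->with(42)
              ->will($this->returnValue('Alice'));

        $controller = new Welcome_Controller($model);
        $this->assertSame('Hello Alice', $controller->welcome(42));
    }
}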
Another great thing to keep in mind is that you also have user testing. This cannot be simulated with the kind of tests you write in Kohana. For that it is interesting to look at http://seleniumhq.org/
It's probably better to split such a rambling question into multiple SO questions.
No idea. Perhaps there's an invocation overhead when NetBeans invokes phpunit, compared to Apache passing the request to PHP.
That might be possible, or you could find a way to set the following option: --exclude-group kohana (see the sketch after this list).
No idea, sorry.
AFAIK PHPUnit can't do client-interaction tests. How to do system behaviour testing could be a question of its own.
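Regarding point 2, both the CLI flag and the XML <groups> element can exclude a group; a sketch, assuming Kohana's internal tests are tagged with a "kohana" @group:

phpunit --exclude-group kohana

or, in the PHPUnit XML configuration:

<groups>
  <exclude>
    <group>kohana</group>
  </exclude>
</groups>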