I'm adding unit tests to a legacy PHP application that uses a MySQL compatible database. I want to write genuine unit tests that don't touch the database.
How should I avoid accidentally using a database connection?
Lots of parts of the application use static method calls to get a reference to a database connection wrapper object. If I'm looking at one of these calls, I know how to use dependency injection and test doubles to avoid hitting the database from a test, but I'm not sure what to do about all the database queries I'm not currently looking at, which could be some way down the call stack from the method I'm trying to test.
I've considered adding a public method to the database access class that would be called from the PHPUnit bootstrap file and set a static variable to make any further database access impossible, but I'm not keen on adding a method to the application code purely for the sake of the tests, especially one that would be harmful if called in production.
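For concreteness, this is roughly what I had in mind (Db and getConnection() are simplified stand-ins for our real wrapper class):

```php
<?php
// Simplified stand-in for our real connection wrapper.
class Db
{
    private static bool $blocked = false;

    // The method I'm reluctant to add: test-only, and harmful if it were
    // ever called in production.
    public static function blockAccess(): void
    {
        self::$blocked = true;
    }

    public static function getConnection(): PDO
    {
        if (self::$blocked) {
            // Any code path that reaches the database from a unit test
            // fails loudly instead of silently hitting MySQL.
            throw new RuntimeException('Database access is blocked during unit tests.');
        }
        return new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
    }
}

// In the PHPUnit bootstrap file:
// Db::blockAccess();
```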
Adding tests to a legacy application can be delicate, especially unit tests. The main problem you will likely have is that most tests will be hard to write and will easily become unintelligible, because they involve massive amounts of setting up and mocking. At the same time you will likely not have much freedom to refactor the code so that it becomes easier to test, because that would lead to ripple effects through the code base.
That's why I usually prefer end-to-end tests. You can cover lots of ground without having to test close to the implementation, and those tests are usually more useful when you want to do large-scale refactoring or migrate the legacy code base later, because they ensure that the most important features still work as expected.
For this approach you will need to test through the database, just not the live database. In the beginning it's probably easiest to just use a copy, but it's absolutely worthwhile to create a trimmed-down database with some test fixtures from scratch.

You can then use something like Selenium to test your application through the web interface by describing the actions you take on the site (go to URL x, fill out a form and submit it) and the expected outcome (I should now be on URL y, and there should be a new entry in the database). As you can see, these kinds of tests are written very close to what you see on the website and not so much around the implementation or the single units. This is intended: in a migration you might want to rip out large chunks and rewrite them. The unit tests will become completely useless then, because the implementation might change drastically, but those end-to-end tests describing the functionality of the site will still remain valid.
There are multiple ways you can go about this. If you are familiar with PHPUnit you might want to try the Selenium extension. You should find tutorials for this online, for example this one: https://www.sitepoint.com/using-selenium-with-phpunit/
Another popular option for these kinds of tests is Behat with the MinkExtension. In both cases the hardest part is setting up Selenium, but once you are able to write a simple test (one that, for example, goes to your front page and checks for some text snippet) and get it running, you can write tests really fast.
One big downside of these tests is that they are very slow, because they do full web requests and in some cases have to wait for JavaScript. So you should probably not test everything; instead, try to focus on the most important features. If you have an e-commerce project, maybe go through a very generic checkout procedure, then expand on the different variations that are important to you, e.g. logged-in user vs. new user, or adding vouchers to the basket. Another good way to start is to write very stupid tests that just check whether your URLs are actually accessible: go to a URL and check for the status code and some expected text snippet. Those are not really that useful in terms of making sure your application behaves correctly, but they still give you some safety as to whether random 500 errors appear out of the blue.
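As a concrete starting point, one of those "dumb" accessibility tests might look like this with PHPUnit. This is just a sketch: the URL and expected text snippet are placeholders, and it assumes the cURL extension and a fairly recent PHPUnit (for assertStringContainsString).

```php
<?php
use PHPUnit\Framework\TestCase;

class SmokeTest extends TestCase
{
    // Fetch a URL and return [status code, body].
    private function fetch(string $url): array
    {
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        $body = curl_exec($ch);
        $status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
        curl_close($ch);
        return [$status, (string) $body];
    }

    public function testFrontPageIsUp(): void
    {
        [$status, $body] = $this->fetch('http://localhost/'); // placeholder URL
        $this->assertSame(200, $status);
        $this->assertStringContainsString('Welcome', $body); // placeholder snippet
    }
}
```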
This is probably the best approach for making sure your app works well and for making it easier to upgrade, refactor or migrate your application or parts of it. Additionally, whenever you add new features, try to write some actual unit tests for them. It's probably easiest if they are not too entangled with the old parts of the code. In the best-case scenario you won't have to worry too much about the database, because you can replace the data you get from the database with some instances you prepare yourself in the test, and then just test the feature. Obviously, if it's something like a simple "we want a form that adds this data to the database", you will probably still not want to write a unit test, but rather one of those bigger end-to-end tests instead.
In PHPUnit, using @depends you can make a test method depend on another one; that way, should the first test fail, the test that depends on it will be skipped. Is there a way to do that with test classes?
For example, say I have a page whose layout I test, checking that the images are showing up and that the links are correct; this will be one test class. On this page is a link to a form; for that page I would create a new test class checking its layout, validation etc.
What I want to try and pull off is: if any test of the first test class fails, then the second one should be skipped, since the first page has to be correct, as it is the page a user will see before reaching the second (unless they type the second page's URL in, but I have to assume users are stupid and as such don't know how the address bar works).
I should also note that all the tests are stored in TFS, so while the team will have the same tests, we may have different phpunit.xml files (one person apparently uses phpunit.xml.dist and won't change from it because it is mentioned once in the Magento TAF, even though I use .xml and have had no problems). As such, trying to enforce an order in phpunit.xml won't be all that useful (and enforcing the order in the .xml won't provide this dependency behaviour anyway).
Just because we can do a thing, does not mean we should. Having tests that depend on other tests' successful execution is a bad idea. Every decent text on testing tells you this. Listen to them.
I don't think PHPUnit has a way to make one test class depend on another. (I started writing down some hacks that sprang to mind, but they were so ugly and fragile that I deleted them again.)
I think the best approach is one big class for all your functional tests. Then you can use @depends as much as you need to. <-- That is where my answer to your actual question ends :-)
In your comment on Ross's answer you say: "I was taught that if you have a large number of (test) methods in one class then you should break it into separate classes." To see why we are allowed to break this rule, you have to go below the surface to why it is usually a good idea: a lot of code in one class suggests the class is doing too much, making it harder to change and harder to test. So you use the Extract Class refactoring to split classes up into finer functionality. But never mechanically: each class should still be a nice, clean abstraction of something.
In unit testing the class is better thought of as a way to collect related tests together. When one test depends on another then obviously they are related, so they should be in the same class.
If a 2000-line file makes you unhappy, one thing you can do, and should do, is Extract Parent Class. Into your parent class go all the helper functions and custom asserts. You will leave all the actual tests in the derived class, but go through each of them, see what common functionality could be moved to a shared function, and then put that shared function in the parent class.
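A minimal sketch of that Extract Parent Class shape (all names here are invented for illustration):

```php
<?php
use PHPUnit\Framework\TestCase;

// Shared fixture helpers and custom asserts live in the parent class.
abstract class FunctionalTestCase extends TestCase
{
    protected function createTestUser(string $name = 'alice'): array
    {
        // common fixture code shared by many tests
        return ['id' => 1, 'name' => $name];
    }

    protected function assertPageContains(string $needle, string $html): void
    {
        $this->assertStringContainsString($needle, $html, "Page should contain '$needle'");
    }
}

// The (potentially very long) derived class keeps only the actual tests.
class SiteFunctionalTest extends FunctionalTestCase
{
    public function testProfilePageShowsUserName(): void
    {
        $user = $this->createTestUser();
        $html = '<h1>alice</h1>'; // stand-in for a real page fetch
        $this->assertPageContains($user['name'], $html);
    }
}
```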
Responding to Ross's suggestion that @depends is evil, I prefer to think of it as helping you find the balance between idealism and real-world constraints. In the ideal world you want all your tests to be completely independent. That means each test needs to do its own fixture creation and tear-down. If using a database, it should be creating its own database (with a unique name, so that in the future the tests can be run in parallel), then creating the tables, filling them with data, etc. (Use the helper functions in your parent class to share this common fixture code.)
On the other hand, we want our tests to finish in under 100 milliseconds, so that they don't interrupt our creative flow. Fixture sharing helps speed up tests at the cost of removing the independence.
For functional tests of a website I would recommend using @depends for obvious things like login. If most of your tests will first log into the site, then it makes a lot of sense to make a loginTest() and have all other tests @depends on it. If login is not working, you know for sure that all your other tests are going to fail... and would be wasting huge amounts of the most valuable resource, programmer time, in the process.
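For illustration, the login dependency might look like this (a sketch with placeholder session handling; note that @depends also passes the first test's return value to its dependents):

```php
<?php
use PHPUnit\Framework\TestCase;

class CheckoutTest extends TestCase
{
    public function testLogin(): array
    {
        // pretend a real login happened here
        $session = ['user' => 'alice', 'token' => 'abc123'];
        $this->assertNotEmpty($session['token']);
        return $session; // handed to every test that @depends on this one
    }

    /**
     * @depends testLogin
     */
    public function testBasketIsEmptyAfterLogin(array $session): void
    {
        // if testLogin() failed, this test is reported as skipped
        $this->assertSame('alice', $session['user']);
        // ... assertions against the basket for this session ...
    }
}
```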
When it is not so clear-cut, I'd err on the side of idealism, and come back and optimize later, if you need to.
I want to write some unit tests in my project (I am new to testing), but the tutorials online seem to show examples testing only the simplest stuff.
What I want to test is the case where sending a POST to addAction in my SurveyController results in adding corresponding rows to my survey and question tables (one-to-many).
What are the best practices for testing database-related stuff? Do I create a separate DB for my test environment and run tests against it? Is that the only and right option?
It depends on your circumstances.
Here is my take on this:
The idea is to test but also be DRY (Don't Repeat Yourself). Your tests must cover all, or at least as many, different cases as possible to ensure that your component is thoroughly tested and ready to be released.
If you use an already developed framework to access your database like Doctrine, Zend Framework, PhalconPHP etc. then you can assume that the framework has been tested and not test the actual CRUD operations. You can concentrate on what your own code does.
Some people might want to test even that, but in my view it is overkill and a waste of resources. One can actually include the tests of the particular framework with their own if they just want to have more tests :)
If, however, you were responsible for the database layer classes and their interaction with your application, then yes, tests are a must. You might not run them all the time, but whenever you need to prove that a database operation works or not through some piece of code, you need to have them.
Finally, you can use mock tests, as Mark Baker suggested, and assume in your code that the database will respond as you expect it to (since it has already been tested). You can then see how your application reacts to different responses or results.
Mocking database operations will actually make your tests run faster (along with the other benefits that come with this strategy) since there won't be any database interactions in the tests themselves. This can become really handy in a project with hundreds if not thousands of tests and continuous integration.
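To make the mocking strategy concrete, here is a sketch using PHPUnit's built-in mocks. SurveyRepository and SurveyService are invented stand-ins for your own data layer and service code, and the constructor syntax assumes PHP 8:

```php
<?php
use PHPUnit\Framework\TestCase;

// Hypothetical database-facing interface.
interface SurveyRepository
{
    public function save(array $survey): int; // returns the new row id
}

// Hypothetical code under test: uses the repository, knows nothing of PDO.
class SurveyService
{
    public function __construct(private SurveyRepository $repo) {}

    public function create(array $data): int
    {
        return $this->repo->save($data);
    }
}

class SurveyServiceTest extends TestCase
{
    public function testCreatePersistsSurvey(): void
    {
        // No database is touched; we only assert how our code uses the layer.
        $repo = $this->createMock(SurveyRepository::class);
        $repo->expects($this->once())
             ->method('save')
             ->with(['title' => 'Customer survey'])
             ->willReturn(42);

        $service = new SurveyService($repo);
        $this->assertSame(42, $service->create(['title' => 'Customer survey']));
    }
}
```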
HTH
I'm new to unit testing and I'm trying to get started with PHPUnit on an existing project I'm working on.
The problem I'm facing is that I have lots of unit tests that require a database, which is fair enough. I've set up an SQLite DB for the sole purpose of unit testing. Sometimes I want to drop and re-create the database for new tests (by this I mean for each separate class), to prevent unnecessary data clashes.
However, sometimes I don't want this to happen if I have unit tests in the same class that depend upon one another; these may need access to data that was saved in a previous test.
I am currently getting a "fresh" database in the setUp() function of each class. What I didn't anticipate is that this function (like __construct()) runs for every test case within said class, not once per class.
Is there a way that I can flush the database with each test class? Or am I going about the entire process incorrectly?
Any tips appreciated, thanks.
I also started using PHPUnit fairly recently (about a year ago). The first thing I did was to set up unit tests for the project I was working on at the time. I decided it was a good idea to test the data access layer too and did a similar thing to you. It took me days to set up and I ended up with unit tests that took 8 minutes to run! 99% of that time was spent setting up and tearing down the test database. What a disaster!
What I did was to refactor the project so that only one class actually needed to talk to the database and had integration tests for this, but no unit tests. That meant that my project now had to use dependency injection to facilitate testing. I ended up with a suite of tests that run in around 2-3 seconds and a project that practically wrote itself. It is a dream to maintain and make changes/additions to, I wish all my code had been written that way.
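To illustrate the shape of that refactoring (all names invented, PHP 8 syntax, and a static Db::getConnection() standing in for whatever your legacy code calls):

```php
<?php
// Before: hard to unit test, because the class reaches out to a static
// connection wrapper internally.
class ReportGenerator
{
    public function totalSales(): float
    {
        $row = Db::getConnection()->fetchRow('SELECT SUM(amount) AS t FROM sales');
        return (float) $row['t'];
    }
}

// After: the single database-facing gateway is injected, so everything
// above it can be unit tested with a fake or mock; only the gateway's
// implementation needs integration tests.
interface SalesGateway
{
    public function sumOfSales(): float;
}

class TestableReportGenerator
{
    public function __construct(private SalesGateway $gateway) {}

    public function totalSales(): float
    {
        return $this->gateway->sumOfSales();
    }
}
```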
Basically, what I am trying to say in my long-winded way is that you should change your code to be easy to test, not try to force the unit tests to fit code that wasn't designed in a test-driven way. The time you invest in that now (if you can) will be repaid later with dividends!
Bite the bullet and refactor now!
The setUp and tearDown functions do exactly as you mention: they set up the test environment before each test execution and then clean up after each test case execution.
What you probably want to do is set up your database data provider at the suite level.
Suite level fixtures
This is probably not really the best way to get truly isolated unit tests (for that, you could set up a mock data provider for the DB, etc.), but it is something you can do that should meet your immediate needs.
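Concretely, PHPUnit's class-level fixture hooks give you a fresh database per test class rather than per test: setUpBeforeClass() runs once before the first test of the class, and tearDownAfterClass() once after the last. A sketch with an in-memory SQLite database (the schema and names are placeholders, and the signatures assume a reasonably recent PHPUnit):

```php
<?php
use PHPUnit\Framework\TestCase;

class SurveyPersistenceTest extends TestCase
{
    private static ?PDO $db = null;

    public static function setUpBeforeClass(): void
    {
        // one database for the whole class, created exactly once
        self::$db = new PDO('sqlite::memory:');
        self::$db->exec('CREATE TABLE survey (id INTEGER PRIMARY KEY, title TEXT)');
    }

    public static function tearDownAfterClass(): void
    {
        self::$db = null; // drops the in-memory database
    }

    public function testInsertSurvey(): void
    {
        self::$db->exec("INSERT INTO survey (title) VALUES ('first')");
        $count = self::$db->query('SELECT COUNT(*) FROM survey')->fetchColumn();
        $this->assertSame(1, (int) $count);
    }

    /**
     * @depends testInsertSurvey
     */
    public function testEarlierDataIsStillThere(): void
    {
        // because the fixture is class-level, rows inserted by earlier
        // tests in this class survive
        $count = self::$db->query('SELECT COUNT(*) FROM survey')->fetchColumn();
        $this->assertSame(1, (int) $count);
    }
}
```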
I've got a big project written in PHP and Javascript. The problem is that it's become so big and unmaintainable that changing some little portion of the code will upset and probably break a whole lot of other portions.
I'm really bad at testing my own code (as a matter of fact, others point this out daily), which makes it even more difficult to maintain the project.
The project itself isn't that complicated or complex, it's more the way it's built that makes it complex: we don't have predefined rules or lists to follow when doing our testing. This often results in lots of bugs and unhappy customers.
We started discussing this at the office and came up with the idea of starting to use test-driven development instead of the "develop like hell and maybe test later" approach (which almost always ends up as "fix bugs all the time").
After that background, the things I need help with are the following:
How do I implement a test framework into an already existing project? (3 years in the making and counting.)
What kinds of frameworks are there for testing? I figure I'll need one framework for JavaScript and one for PHP.
What's the best approach for testing the graphical user interface?
I've never used Unit Testing before so this is really uncharted territory for me.
G'day,
Edit: I've just had a quick look through the first chapter of "The Art of Unit Testing" which is also available as a free PDF at the book's website. It'll give you a good overview of what you are trying to do with a unit test.
I'm assuming you're going to use an xUnit type framework. Some initial high-level thoughts are:
Edit: make sure that everyone is in agreement as to what constitutes a good unit test. I'd suggest using the above overview chapter as a good starting point and, if needed, take it from there. Imagine having people run off enthusiastically to create lots of unit tests while having different understandings of what a "good" unit test is. It'd be terrible to find out in the future that 25% of your unit tests aren't useful, repeatable, reliable, etc.
add tests to cover small chunks of code at a time. That is, don't create a single, monolithic task to add tests for the existing code base.
modify any existing processes to make sure new tests are added for any new code written. Make it part of the code review process that unit tests must be provided for new functionality.
extend any existing bugfix processes to make sure that new tests are created to demonstrate the presence of the bug and then prove its absence. N.B. Don't forget to roll back your candidate fix to introduce the bug again, to verify that it is only that single patch that has corrected the problem and that it is not being fixed by a combination of factors.
Edit: as you start to build up the number of your tests, start running them as nightly regression tests to check nothing has been broken by new functionality.
make a successful run of all existing tests an entry criterion for the review process of a candidate bugfix.
Edit: start keeping a catalogue of test types, i.e. test code fragments, to make the creation of new tests easier. No sense in reinventing the wheel all the time. The unit test(s) written to test opening a file in one part of the code base is/are going to be similar to the unit test(s) written to test code that opens a different file in a different part of the code base. Catalogue these to make them easy to find.
Edit: where you are only modifying a couple of methods of an existing class, create a test suite to hold the complete set of tests for the class, then only add the individual tests for the methods you are modifying to this test suite. This uses xUnit terminology, as I'm now assuming you'll be using an xUnit framework like PHPUnit.
use a standard convention for naming your test suites and tests, e.g. testSuite_classA, which will then contain individual tests named after the function and scenario under test, for example test_fopen_bad_name and test_fopen_bad_perms (see the sketch after this list). This helps minimise the noise when moving around the code base and looking at other people's tests. It also has the benefit of helping people when they come to name their tests in the first place, by freeing up their minds to work on the more interesting stuff, like the tests themselves.
Edit: I wouldn't use TDD at this stage. By definition, TDD needs all the tests present before the changes are in place, so you would have failing tests all over the place as you add new testSuites to cover the classes you are working on. Instead, add the new testSuite and then add the individual tests as required, so you don't get a lot of noise in your test results from failing tests. And, as Yishai points out, adding the task of learning TDD at this point in time will really slow you down. Put learning TDD down as a task to be done when you have some spare time; it's not that difficult.
as a corollary of this, you'll need a tool to keep track of those existing classes where the testSuite exists but where tests have not yet been written to cover the other member functions of the class. This way you can keep track of where your test coverage has holes. I'm talking at a high level here, where you can generate a list of classes and the specific member functions for which no tests currently exist. A standard naming convention for the tests and testSuites will greatly help you here.
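Here is the sketch referred to above, showing the naming convention in PHPUnit terms (FileLoader and its behaviour are invented purely for illustration):

```php
<?php
use PHPUnit\Framework\TestCase;

// Invented class under test.
class FileLoader
{
    /** @return resource */
    public function open(string $path)
    {
        if (!is_file($path)) {
            throw new InvalidArgumentException("No such file: $path");
        }
        $handle = @fopen($path, 'rb');
        if ($handle === false) {
            throw new RuntimeException("Cannot open (permissions?): $path");
        }
        return $handle;
    }
}

// One suite class per production class; tests named function_scenario.
class TestSuite_FileLoader extends TestCase
{
    public function test_fopen_bad_name(): void
    {
        $this->expectException(InvalidArgumentException::class);
        (new FileLoader())->open('no/such/file.txt');
    }

    // test_fopen_bad_perms would follow the same pattern, with a fixture
    // file made unreadable in setUp().
}
```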
I'll add more points as I think of them.
HTH
You should get yourself a copy of Working Effectively with Legacy Code. It will give you good guidance on how to introduce tests into code that was not written to be tested.
TDD is great, but you do need to start with just putting existing code under test to make sure that changes you make don't change existing required behavior while introducing changes.
However, introducing TDD now will slow you down a lot before you get back going, because retrofitting tests, even only in the area you are changing, is going to get complicated before it gets simple.
Just to add to the other excellent answers, I'd agree that going from 0% to 100% coverage in one go is unrealistic - but that you should definitely add unit tests every time you fix a bug.
You say that there are quite a lot of bugs and unhappy customers - I'd be very positive about incorporating strict TDD into the bugfixing process, which is much easier than implementing it overall. After all, if there really is a bug there that needs to be fixed, then creating a test that reproduces it serves various goals:
It's likely to be a minimal test case to demonstrate that there really is an issue
With confidence that the (currently failing) test highlights the reported problem, you'll know for sure if your changes have fixed it
It will forever stand as a regression test that will prevent this same issue recurring in future.
Introducing tests to an existing project is difficult and likely to be a long process, but doing them at the same time as fixing bugs is such an ideal time to do so (parallel to introducing tests gradually in a "normal" sense) that it would be a shame not to take that chance and make lemonade from your bug reports. :-)
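As a sketch of what such a bug-first test might look like (the ticket number, function and behaviour are all invented for illustration):

```php
<?php
use PHPUnit\Framework\TestCase;

// The fix for the hypothetical bug: clamp the total at zero instead of
// letting it go negative when a voucher exceeds the basket value.
function applyVoucher(float $basketTotal, float $voucher): float
{
    return max(0.0, $basketTotal - $voucher);
}

class OrderTotalRegressionTest extends TestCase
{
    public function testVoucherLargerThanBasketClampsTotalToZero(): void
    {
        // Written to fail before the fix (when it returned -5.0), and to
        // stand forever after as a regression test for ticket #4711.
        $this->assertSame(0.0, applyVoucher(10.00, 15.00));
    }
}
```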
From a planning perspective, I think you have three basic choices:
take a cycle to retrofit the code with unit tests
designate part of the team to retrofit the code with unit tests
introduce unit tests gradually as you work on the code
The first approach may well last a lot longer than you anticipate, and your visible productivity will take a hit. If you use it, you will need to get buy-in from all your stakeholders. However, you might use it to kickstart the process.
The problem with the second approach is that you create a distinction between coders and test writers. The coders will not feel any ownership for test maintenance. I think this approach is worth avoiding.
The third approach is the most organic, and it gets you into test-driven development from the get go. It may take some time for a useful body of unit tests to accumulate. The slow pace of test accumulation might actually be an advantage in that it gives you time to get good at writing tests.
All things considered, I think I'd opt for a modest sprint in the spirit of approach 1, followed by a commitment to approach 3.
For the general principles of unit testing I recommend the book xUnit Test Patterns: Refactoring Test Code by Gerard Meszaros.
I've used PHPUnit with good results. PHPUnit, like other JUnit-derived projects, requires that the code to be tested be organized into classes. If your project is not object-oriented, then you'll need to start refactoring loose procedural code into functions, and those functions into classes.
I've not personally used a JavaScript framework, though I would imagine that these frameworks also require that your code be structured into (at least) callable functions, if not full-blown objects.
For testing GUI applications, you may benefit from using Selenium, though a checklist written by a programmer with good QA instincts might work just fine. I've found that using MediaWiki or your favorite Wiki engine is a good place to store checklists and related project documentation.
Implementing a framework is in most cases a complex task, because you are essentially rebuilding your old code around some new, solid framework parts. The old parts must start to communicate with the framework: they must receive callbacks and return states and then somehow surface those to the user, and in fact you suddenly have two systems to test.
If your application itself isn't that complex, and it has only become complex due to the lack of testing, it might be a better option to rebuild the application. Put some common frameworks like Zend to the test, gather your requirements, find out whether the tested framework suits those requirements, and decide whether it's useful to start over.
I'm not very well versed in unit testing, but NetBeans has a built-in unit testing suite.
If the code is really messy, it's possible that it will be very hard to do any unit testing. Only sufficiently loosely coupled and sufficiently well designed components can be unit tested easily. However, functional testing may be a lot easier to implement in your case. I would recommend taking a look at Selenium. With this framework, you will be able to test your GUI and the backend at the same time. However, most probably, it won't help you catch the bugs as well as you could with unit testing.
Maybe this list will help you and your mates to re-structure everything:
Use UML, to design and handle exceptions (http://en.wikipedia.org/wiki/Unified_Modeling_Language)
Use BPMS, to design your work-flow so you won't struggle (http://en.wikipedia.org/wiki/Business_process_management)
Get a list of PHP frameworks that also support JavaScript integration (e.g. Zend with jQuery)
Compare these frameworks and take the one which best matches your project design and the coding structure used before
You might also consider using things like ezComponents and DTrace for debugging and testing
Do not be afraid of changes ;)
For GUI testing you may want to take a look at Selenium (as Ignas R pointed out already), or you may want to take a look at this tool as well: STIQ.
Best of luck!
In some cases, doing automated testing may not be such a good idea, especially when the code base is dirty and the PHP mixes its behavior with JavaScript.
It could be better to start with a simple checklist (with links, to make it faster) of the tests that should be done (manually) on the application before each delivery.
When coding in a 3-year-old minefield, better to protect yourself with plenty of error checking. The 15 minutes spent writing THE proper error message for each case will not be wasted.
Use the bridging method: bridge the ugly, lengthy function fold() with a call to fnew(), which is a wrapper around some clean classes; call both fold() and fnew(), compare the results, log any differences, put the code into production and wait for what you catch. When doing this, always use one cycle for refactoring and another cycle for changing results (do not even fix bugs in the old behavior at first; just bridge it).
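A rough sketch of that bridge, reusing the answer's fold()/fnew() names (the function bodies and CleanImplementation are placeholders standing in for your real code):

```php
<?php
// Invented stand-in for the new, clean classes.
class CleanImplementation
{
    public function run(array $input): array
    {
        return array_map('strtoupper', $input); // placeholder logic
    }
}

// The ugly, trusted legacy function (imagine 400 lines here).
function fold(array $input): array
{
    return array_map('strtoupper', $input);
}

// Thin wrapper around the new implementation.
function fnew(array $input): array
{
    return (new CleanImplementation())->run($input);
}

// The bridge: callers get exactly the old behaviour, while every call
// also exercises the new code and logs any disagreement.
function fold_bridged(array $input): array
{
    $old = fold($input);

    try {
        $new = fnew($input);
        if ($new !== $old) {
            error_log('fold/fnew mismatch for input ' . json_encode($input));
        }
    } catch (Throwable $e) {
        error_log('fnew threw: ' . $e->getMessage());
    }

    return $old; // production behaviour stays exactly as before
}
```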
I agree with KOHb, Selenium is a must!
Also have a look at PHPure.
Their software records the inputs and outputs of a working PHP website, and then automatically writes PHPUnit tests for functions that do not access external sources (DB, files, etc.).
It's not a 100% solution, but it's a great start.