I'm working on a huge project at work. We have about 200 database tables and, accordingly, a huge number of Models, Actions and so on.
How should I begin to write tests for this?
My biggest problem is this: with symfony you can test with the lime framework, which is great. But most of the code writes, deletes or does other things with the database. So how can I test a Model without interacting with the database?
I want to do unit tests because I've had problems with bugs in my code introduced while refactoring functions, but I don't even know how to start. The examples from the documentation cover only a very small function. What does it look like when the action file is over 700 lines of code?
Great question. I've personally run into this all over the place.
Here's what I've found so far:
1) Get a dev database. DO NOT test on a prod database!
2) It may sound trite, but start small and simple.
3) I don't know what your field is (e-commerce database, contacts database, etc.) but say it is an e-commerce DB. Start by testing creating some order models and saving them. Maybe recreate a real order in a test harness so it saves to the DB. Now you can quickly create 1000 orders to run tests on, WAY faster than manually doing web checkouts. For maximum benefit, create a model of something you are currently working on so you can use it during your testing (see the sketch after this list).
4) Now start testing the various methods that your model provides. Again, stick to the ones that are relevant to what you are currently trying to fix/work with. Don't worry about testing everything, just test some stuff, and make sure you can repeat your tests.
5) Need to test controllers? Cool, now you have a model to work with that you don't care about messing up, because it isn't real... Need some variations? Create more test suites that build different models to fit each of your needs. Models can be complex, but you should be able to write some test functions that create variations of your various models. Then run your controllers against those...
6) Keep plucking away at code coverage.
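To make point 3 concrete, here's the sketch I promised: a tiny factory for seeding test orders into the dev database. Every name in it (Order, OrderLine, makeTestOrder) is a hypothetical stand-in for your own models.

```php
<?php
// Hypothetical order factory for seeding a dev database; Order and
// OrderLine stand in for whatever models your project actually has.
function makeTestOrder(int $lineCount = 3): Order
{
    $order = new Order();
    $order->setCustomerEmail('test+' . uniqid() . '@example.com');
    for ($i = 0; $i < $lineCount; $i++) {
        $order->addLine(new OrderLine('SKU-' . $i, 1, 9.99));
    }
    $order->save(); // persists to the dev database only!
    return $order;
}

// Seed a thousand orders, far faster than manual web checkouts.
for ($n = 0; $n < 1000; $n++) {
    makeTestOrder();
}
```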
WARNING: Be careful about being the only one running the unit tests. You will quickly become the most effective problem solver, but people will then try to get YOU to fix everything...
700 lines in a single controller action? Testing has a way of exposing issues in your code beyond the obvious assertion failures. If something is difficult to test, for whatever reason, there is something wrong with the code.
When starting to test a project, big or small, the focus should be code coverage. Don't worry so much about edge cases at first (unless the situation calls for it). When starting to test a project I like to start with the models as they are the most straightforward. From there move on to controller testing and see how it goes.
I'm adding unit tests to a legacy PHP application that uses a MySQL-compatible database. I want to write genuine unit tests that don't touch the database.
How should I avoid accidentally using a database connection?
Lots of parts of the application use static method calls to get a reference to a database connection wrapper object. When I'm looking at one of these calls, I know how to use dependency injection and test doubles to avoid hitting the database from a test, but I'm not sure what to do about all the database queries that I'm not looking at at any one time, which could be some way down the call stack from the method I'm trying to test.
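For example, this is the kind of refactoring I know how to do when I can see the call (class names here are hypothetical):

```php
<?php
use PHPUnit\Framework\TestCase;

// Hypothetical wrapper interface extracted from the static calls.
interface DbConnectionWrapper
{
    public function query(string $sql, array $params = []): array;
}

final class CustomerLookup
{
    // Inject the wrapper instead of calling a static Db::getInstance().
    public function __construct(private DbConnectionWrapper $db) {}

    public function emailFor(int $id): ?string
    {
        $rows = $this->db->query('SELECT email FROM customer WHERE id = ?', [$id]);
        return $rows[0]['email'] ?? null;
    }
}

final class CustomerLookupTest extends TestCase
{
    public function testEmailForReturnsNullWhenMissing(): void
    {
        $db = $this->createStub(DbConnectionWrapper::class);
        $db->method('query')->willReturn([]); // no database touched
        $this->assertNull((new CustomerLookup($db))->emailFor(1));
    }
}
```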
I've considered adding a public method to the database access class that would be called from the PHPUnit bootstrap file and set a static variable to make any further database access impossible, but I'm not keen on adding a function to the application code purely for the sake of the tests that would be harmful if called in production.
Adding tests to a legacy application can be delicate, especially unit tests. The main problem you will likely have is that most tests will be hard to write and will easily become unintelligible, because they involve a massive amount of setting up and mocking. At the same time, you will likely not have much freedom to refactor the code so it becomes easier to test, because that would lead to ripple effects in the code base.
That's why I usually prefer end-to-end tests. You can cover lots of ground without having to test close to the implementation, and those tests are usually more useful when you want to do large-scale refactoring or migrate the legacy code base later, because you ensure that the most important features you were using still work as expected.
For this approach you will need to test through the database, just not the live database. In the beginning it's probably easiest to just make a copy, but it's absolutely worthwhile to create a trimmed-down database with some test fixtures from scratch. You can then use something like Selenium to test your application through the web interface, by describing the actions you take on the site (go to URL x, fill out a form and submit it) and the expected outcome (I should now be on URL y, and there should be a new entry in the database). As you can see, these kinds of tests are written very close to what you see on the website and not so much around the implementation or the single units. This is intentional, because in a migration you might want to rip out large chunks and rewrite them. The unit tests would become completely useless then, because the implementation might change drastically, but those end-to-end tests describing the functionality of the site will remain valid.
There are multiple ways you can go about this. If you are familiar with PHPUnit, you might want to try the Selenium extension. You should find tutorials for this online, for example this one: https://www.sitepoint.com/using-selenium-with-phpunit/
Another popular option for these kinds of tests is Behat with the MinkExtension. In both cases the hardest part is setting up Selenium; once you are able to write and run a simple test that, for example, goes to your front page and checks for some text snippet, you can write tests really fast.
One big downside of these tests is that they are very slow, because they do full web requests and in some cases have to wait for JavaScript. So you should probably not test everything; instead try to focus on the most important features. If you have an e-commerce project, maybe go through a very generic checkout procedure, then expand with different variations that are important to you, e.g. logged-in user vs. new user, or adding vouchers to the basket. Another good way to start is to write very stupid tests that just check whether your URLs are actually accessible: go to a URL and check for the status code and some expected text snippet. Those are not really that useful in terms of making sure your application behaves correctly, but they still give you some safety as to whether random 500 errors appear out of the blue.
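To sketch one of those "very stupid" URL tests (the base URL, paths and snippets are made up; any HTTP client would do instead of curl):

```php
<?php
use PHPUnit\Framework\TestCase;

final class SmokeTest extends TestCase
{
    // Hypothetical base URL of a local test instance.
    private const BASE_URL = 'http://localhost:8080';

    /** @dataProvider pageProvider */
    public function testPageIsAccessible(string $path, string $snippet): void
    {
        $ch = curl_init(self::BASE_URL . $path);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        $body = (string) curl_exec($ch);
        $status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
        curl_close($ch);

        $this->assertSame(200, $status, "Unexpected status for $path");
        $this->assertStringContainsString($snippet, $body);
    }

    public static function pageProvider(): array
    {
        return [
            'front page' => ['/', 'Welcome'],
            'checkout'   => ['/checkout', 'Your basket'],
        ];
    }
}
```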
This is probably the best approach for making sure your app works well and for making it easier to upgrade, refactor or migrate your application or parts of it. Additionally, whenever you add new features, try to write some actual unit tests for them. That's easiest if they are not too entangled with the old parts of the code. In the best case you won't have to worry much about the database, because you can replace the data you get from the database with instances you prepare yourself in the test, and then just test the feature. Obviously, if it's something simple like "we want a form that adds this data to the database", you will probably still not want to write a unit test, but one of those bigger end-to-end tests instead.
When I started making a mobile app (that uses Laravel on the server) I decided not to dig into testing or coding standards, as I thought it was better to actually get a working app first.
A few months later I have a working app, but my code doesn't follow any standards, nor have I written any tests.
The current directory structure I'm using is:
app/controllers: Contains all the controllers the app uses. The controllers aren't exactly thin; they contain most of the logic, and some of them even have multiple conditional statements (if/else). All the database interactions occur in the controllers.
app/models: Define relationships with other models and contain certain functions relevant to the particular model, e.g. a validate function.
app/libraries: Contains custom helper functions.
app/database: Contains the migrations and seeds.
My app currently works; the slack is probably because I work alone on it.
My concerns:
Should I go ahead and release the app and then see if it's even worth making the effort to refactor, or should I refactor first?
I do wish to refactor the code, but I'm unsure as to what approach I should take. Should I first get the standards right and then make my code testable? Or should I not worry about standards (and continue using classmap to autoload) and just try to make my code testable?
How should I structure my files?
Where should I place interfaces, abstract classes, etc.?
Note: I am digging into testing and coding standards from whatever resources I can find, but if you guys could point me to some resources I would appreciate it.
Oh dear. The classic definition of legacy code is code with no unit tests. So now you are in that unfortunate position where, if you ever have to amend/enhance/reuse this code base, you are going to absorb a significant cost. The further away from your current implementation you get, the higher that cost is going to get. That's called technical debt. Having testable code doesn't get rid of it, though; it simply reduces the interest rate.
Should you do it? Well, if you release now and it achieves some popularity, you will then need to do more to it...
If it's going to be received with total indifference or screams of dismay, then there's no point in releasing it at all; in fact, that would be counter-productive.
Should you choose to carry on with the code base, I can't see any efficient way to refactor for a coding standard without a unit test to prove you haven't broken the app in doing so. If you find a coding standard where testable and tested aren't a key part of it, ignore it; it's drivel.
Of course, you still have the problem of how to change code to make it testable without breaking it. I suspect you can iterate towards that with, say, delegation in your fat controllers without too much difficulty.
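As a rough sketch of that delegation idea (all names invented for illustration): pull a slice of logic out of the fat controller into a plain class you can construct in a test.

```php
<?php
// Before: logic buried in a fat controller action.
// After: the same logic delegated to a small, testable class.

// Hypothetical pricing rule extracted from a controller.
final class DiscountCalculator
{
    public function discountFor(float $total, bool $isReturning): float
    {
        if ($total <= 0) {
            return 0.0;
        }
        return $isReturning ? $total * 0.05 : 0.0;
    }
}

// The controller keeps only a thin call site:
// $discount = (new DiscountCalculator())->discountFor($total, $user->isReturning());
```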
Two key points.
Have at least some test first, even if it's a wholly manual one, say a printout of the web page, before you do anything. A set of integration tests, perhaps using an automation suite, would be very useful, especially if you already have a coupling problem.
The other is: don't make too many changes at once, and by "too many" I mean think about how many operations in the app will be affected by this code.
Michael Feathers' (no affiliation whatsoever) Working Effectively with Legacy Code would be a good purchase if you want to learn more.
Hopefully this lesson has already been learnt. Don't do this again.
Last but not least, doing this sort of exercise is a great learning experience; rest assured you will run into situations (lots of them) where this perennial mistake has been made.
I want to write some unit tests in my project (I am new to testing), but the tutorials online seem to show examples testing only the simplest stuff.
What I want to test is the case where sending a POST to addAction in my SurveyController results in adding the corresponding rows to my survey and question tables (one-to-many).
What are the best practices for testing database-related stuff? Do I create a separate DB for my test environment and run tests on it? Is that the only and right option?
It depends on your circumstances.
Here is my take on this:
The idea is to test but also be DRY (Don't repeat yourself). Your tests must cover all or as many different cases as possible to ensure that your component is thoroughly tested and ready to be released.
If you use an already-developed framework to access your database, like Doctrine, Zend Framework, PhalconPHP etc., then you can assume that the framework has been tested and skip testing the actual CRUD operations. You can concentrate on what your own code does.
Some people might want to test even that, but in my view it is overkill and a waste of resources. One can actually include the tests of the particular framework with their own if they just want to have more tests :)
If, however, you were responsible for the database layer classes and their interaction with your application, then yes, tests are a must. You might not run them every time, but whenever you need to prove whether a database operation works through some piece of code, you need to have them.
Finally, you can use mock tests, as Mark Baker suggested, and assume that the database will respond as you expect it to (since it has already been tested). You can then see how your application reacts to different responses or results.
Mocking database operations will actually make your tests run faster (along with the other benefits that come with this strategy), since there won't be any database interactions in the tests themselves. This can become really handy in a project with hundreds, if not thousands, of tests and continuous integration.
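For instance, a minimal PHPUnit mock along those lines; SurveyRepository and SurveyService are hypothetical stand-ins for your own classes:

```php
<?php
use PHPUnit\Framework\TestCase;

// Hypothetical seam between your code and the database layer.
interface SurveyRepository
{
    public function save(array $survey): int; // returns the new row id
}

final class SurveyService
{
    public function __construct(private SurveyRepository $repo) {}

    public function create(array $data): int
    {
        if (empty($data['title'])) {
            throw new InvalidArgumentException('A survey needs a title');
        }
        return $this->repo->save($data);
    }
}

final class SurveyServiceTest extends TestCase
{
    public function testCreateDelegatesToRepository(): void
    {
        $repo = $this->createMock(SurveyRepository::class);
        $repo->expects($this->once())
             ->method('save')
             ->with(['title' => 'Customer feedback'])
             ->willReturn(42);

        $service = new SurveyService($repo);
        $this->assertSame(42, $service->create(['title' => 'Customer feedback']));
    }
}
```

The test proves your validation and delegation logic without any database interaction at all.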
HTH
I'm writing unit tests for a project (written in PHP, using PHPUnit) that has its entire environment (loaded components, events, configuration, cache, per-environment singletons, etc.) held in an object which all the components use to interact with each other (using a mediator pattern).
In order to make the unit tests run faster, I'm sharing the environment object and some other objects among tests in the same test case (using PHPUnit's setUpBeforeClass() and static properties). For example, in my test case for the view object (as in the V of MVC), I share the view manager object, which acts as a factory for view objects and is responsible for the actual rendering.
Even though, to the best of my knowledge, the objects I share shouldn't affect the integrity of the tests (in the view case, for example, the environment and view manager objects are shared, but a separate view object is created for every test, and that is the object actually being tested), it just feels increasingly wrong to me.
I would prefer it if each test used a completely isolated environment and couldn't affect other tests in the same test case in any way. However, that would make the tests run much slower, and it feels like a big price to pay for something whose downside I can't really pinpoint and that mainly just "feels wrong".
What do you think? Can you pinpoint any downsides, so I can convince myself it's worth the longer execution time? Or am I just overreacting and it's completely fine?
I share your feelings, so maybe I'll just state my goals and the solution I used when I faced that issue:
Devs should have a test suite that runs very, very fast
At least single test cases should execute in less than a second
I really want to be sure I don't have interdependencies in my test cases
I'm going to assume you have a Continuous Integration server running. If not, a cronjob might do, but consider setting up Jenkins; it's really, really easy.
For normal usage:
Just share as many fixtures as you need to get the speed you need. It might not be pretty, and there might be better solutions along the way, but if you have something that is expensive to create, just do it once.
I'd suggest helper methods like getFoo() { if (!self::$foo) { /* ...create... */ } return self::$foo; } over setUpBeforeClass(), because they can make sharing easier, but mainly because of the following point.
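Here is a sketch of that helper-method pattern; Environment is a placeholder for whatever expensive object you share:

```php
<?php
use PHPUnit\Framework\TestCase;

final class ViewTest extends TestCase
{
    private static ?Environment $env = null;

    // Lazily build the expensive shared fixture on first use, then
    // reuse it across the tests of this case.
    private static function getEnvironment(): Environment
    {
        if (self::$env === null) {
            self::$env = new Environment(/* expensive setup here */);
        }
        return self::$env;
    }

    public function testRenderedViewIsNotEmpty(): void
    {
        // A fresh view per test; only the environment is shared.
        $view = self::getEnvironment()->createView('example');
        $this->assertNotSame('', $view->render());
    }
}
```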
Once a night:
Run your test suite with --process-isolation, and in that bootstrap recreate your complete database and everything.
It might run for 6 hours (disable code coverage for that!), but who cares. Your fixtures will be recreated for every single test case, since each one runs in a new PHP process and the static vars don't exist.
This way you can be sure, once a day, that you haven't created interdependent tests. That's good enough to remember what you did (and you can run with --filter and --process-isolation if you need to fix something).
Like writing "normal" code, when you write test cases, it's fine to rely on knowledge of how fixture objects work.
If a given factory method is documented as generating new instances every time, then I see only downside in creating the factory anew for each test, especially if the factory creation is itself expensive.
It helps to keep in mind a key goal around writing unit tests. You want to know within 5-10 minutes whether you broke the build. That way you can go out to lunch, go to a meeting, go home, etc. after you get the "all clear". If you know that some part of the fixture is reusable without creating interactions, then you should use that knowledge to make your tests even more comprehensive within that 5-10 minute window. I understand the purist impulse here, but it buys you nothing in terms of test independence, and it unnecessarily limits what your test suite will accomplish for you.
Perhaps this is blasphemous, but if you can still manage a significant threshold of code coverage and you can also guarantee that no state pollution can exist (and that is your real issue: are you making sure that each test will not be affected by the data left over from the test before?), I see no problem with leaving things the way they are.
Let the tests run quickly, and when a bug is found (which is inevitable in integration testing) then you have reason to invest the time in localizing one particular test or set of tests. If you have a toolkit which works for the general case, however, I would leave it as is.
I've got a big project written in PHP and Javascript. The problem is that it's become so big and unmaintainable that changing some little portion of the code will upset and probably break a whole lot of other portions.
I'm really bad at testing my own code (as a matter of fact, others point this out daily), which makes it even more difficult to maintain the project.
The project itself isn't that complicated or complex, it's more the way it's built that makes it complex: we don't have predefined rules or lists to follow when doing our testing. This often results in lots of bugs and unhappy customers.
We started discussing this at the office and came up with the idea of starting to use test-driven development instead of "develop like hell and maybe test later" (which almost always ends up being "fix bugs all the time").
After that background, the things I need help with are the following:
How do I implement a test framework in an already existing project? (Three years in the making and counting.)
What kinds of frameworks are there for testing? I figure I'll need one framework for JavaScript and one for PHP.
What's the best approach for testing the graphical user interface?
I've never used Unit Testing before so this is really uncharted territory for me.
G'day,
Edit: I've just had a quick look through the first chapter of "The Art of Unit Testing" which is also available as a free PDF at the book's website. It'll give you a good overview of what you are trying to do with a unit test.
I'm assuming you're going to use an xUnit-type framework. Some initial high-level thoughts:
Edit: make sure that everyone is in agreement as to what constitutes a good unit test. I'd suggest using the overview chapter above as a starting point and, if needed, taking it from there. Imagine having people run off enthusiastically to create lots of unit tests while each having a different understanding of what makes a "good" unit test. It'd be terrible to find out in the future that 25% of your unit tests aren't useful, repeatable, reliable, etc.
add tests to cover small chunks of code at a time. That is, don't create a single, monolithic task to add tests for the existing code base.
modify any existing processes to make sure new tests are added for any new code written. Make it part of the code review process that unit tests must be provided for the new functionality.
extend any existing bugfix processes to make sure that new tests are created, first to show the presence of the bug and then to prove its absence. N.B. Don't forget to roll back your candidate fix to introduce the bug again, to verify that it is only that single patch that has corrected the problem and that it is not being fixed by a combination of factors.
Edit: as you start to build up the number of your tests, start running them as nightly regression tests to check nothing has been broken by new functionality.
make a successful run of all existing tests an entry criterion for the review process of a candidate bugfix.
Edit: start keeping a catalogue of test types, i.e. test code fragments, to make the creation of new tests easier. No sense in reinventing the wheel all the time. The unit test(s) written to test opening a file in one part of the code base will be similar to the unit test(s) written to test code that opens a different file in a different part of the code base. Catalogue these to make them easy to find.
Edit: where you are only modifying a couple of methods of an existing class, create a test suite to hold the complete set of tests for the class, then add only the individual tests for the methods you are modifying to this test suite. This uses xUnit terminology, as I'm now assuming you'll be using an xUnit framework like PHPUnit.
use a standard convention for naming your test suites and tests, e.g. testSuite_classA, which will then contain individual tests named after the function and case under test, for example test_fopen_bad_name and test_fopen_bad_perms (there's a small skeleton after this list). This helps minimise the noise when moving around the code base and looking at other people's tests. It also has the benefit of helping people when they come to name their tests in the first place, by freeing up their minds to work on the more interesting stuff, like the tests themselves.
Edit: I wouldn't use TDD at this stage. By definition, TDD needs all tests present before the changes are in place, so you would have failing tests all over the place as you add new testSuites to cover classes that you are working on. Instead, add the new testSuite and then add the individual tests as required, so you don't get a lot of noise in your test results from failing tests. And, as Yishai points out, adding the task of learning TDD at this point in time will really slow you down. Put learning TDD down as a task to be done when you have some spare time. It's not that difficult.
as a corollary of this, you'll need a tool to keep track of those existing classes where the testSuite exists but tests have not yet been written to cover the other member functions of the class. This way you can keep track of where your test coverage has holes. I'm talking at a high level here, where you can generate a list of classes and the specific member functions for which no tests currently exist. A standard naming convention for the tests and testSuites will greatly help you here.
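For illustration, a skeleton following that naming convention (the class under test and its behaviour are invented):

```php
<?php
use PHPUnit\Framework\TestCase;

// Suite named after the class under test; individual tests named
// after the function and the condition being exercised.
final class testSuite_FileLoader extends TestCase
{
    public function test_fopen_bad_name(): void
    {
        $loader = new FileLoader(); // hypothetical class under test
        $this->assertFalse($loader->open('/no/such/file.txt'));
    }

    public function test_fopen_bad_perms(): void
    {
        $loader = new FileLoader();
        $this->assertFalse($loader->open('/root/protected.txt'));
    }
}
```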
I'll add more points as I think of them.
HTH
You should get yourself a copy of Working Effectively with Legacy Code. It will give you good guidance on how to introduce tests into code that was not written to be tested.
TDD is great, but you do need to start with just putting existing code under test to make sure that changes you make don't change existing required behavior while introducing changes.
However, introducing TDD now will slow you down a lot before you get back going, because retrofitting tests, even only in the area you are changing, is going to get complicated before it gets simple.
Just to add to the other excellent answers, I'd agree that going from 0% to 100% coverage in one go is unrealistic - but that you should definitely add unit tests every time you fix a bug.
You say that there are quite a lot of bugs and unhappy customers - I'd be very positive about incorporating strict TDD into the bugfixing process, which is much easier than implementing it overall. After all, if there really is a bug there that needs to be fixed, then creating a test that reproduces it serves various goals:
It's likely to be a minimal test case to demonstrate that there really is an issue
With confidence that the (currently failing) test highlights the reported problem, you'll know for sure if your changes have fixed it
It will forever stand as a regression test that will prevent this same issue recurring in future.
Introducing tests to an existing project is difficult and likely to be a long process, but doing it while fixing bugs is such an ideal opportunity (in parallel with introducing tests gradually in the "normal" sense) that it would be a shame not to take the chance and make lemonade from your bug reports. :-)
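As a hedged example of such a bug-first regression test (the class and the bug are invented):

```php
<?php
use PHPUnit\Framework\TestCase;

final class InvoiceFormatterTest extends TestCase
{
    // Regression test for a hypothetical bug report: negative totals
    // were rendered without their minus sign. Written first (failing),
    // then kept forever so the same bug cannot silently return.
    public function testNegativeTotalsKeepTheirSign(): void
    {
        $formatter = new InvoiceFormatter(); // hypothetical class
        $this->assertSame('-12.50', $formatter->format(-12.5));
    }
}
```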
From a planning perspective, I think you have three basic choices:
take a cycle to retrofit the code with unit tests
designate part of the team to retrofit the code with unit tests
introduce unit tests gradually as you work on the code
The first approach may well last a lot longer than you anticipate, and your visible productivity will take a hit. If you use it, you will need to get buy-in from all your stakeholders. However, you might use it to kickstart the process.
The problem with the second approach is that you create a distinction between coders and test writers. The coders will not feel any ownership for test maintenance. I think this approach is worth avoiding.
The third approach is the most organic, and it gets you into test-driven development from the get go. It may take some time for a useful body of unit tests to accumulate. The slow pace of test accumulation might actually be an advantage in that it gives you time to get good at writing tests.
All things considered, I think I'd opt for a modest sprint in the spirit of approach 1, followed by a commitment to approach 3.
For the general principles of unit testing I recommend the book xUnit Test Patterns: Refactoring Test Code by Gerard Meszaros.
I've used PHPUnit with good results. PHPUnit, like other JUnit-derived projects, requires that the code to be tested be organized into classes. If your project is not object-oriented, then you'll need to start refactoring loose code into functions, and functions into classes.
I've not personally used a JavaScript framework, though I would imagine that these frameworks also require your code to be structured into (at least) callable functions, if not full-blown objects.
For testing GUI applications, you may benefit from using Selenium, though a checklist written by a programmer with good QA instincts might work just fine. I've found that using MediaWiki or your favorite Wiki engine is a good place to store checklists and related project documentation.
Implementing a framework is in most cases a complex task, because you kinda start rebuilding your old code with some new, solid framework parts. Those old parts must start to communicate with the framework. The old parts must receive some callbacks and return states, and must then somehow point that out to the user, and in fact you suddenly have two systems to test.
If you say that your application itself isn't that complex but has become so due to lack of testing, it might be a better option to rebuild the application. Put some common frameworks like Zend to the test, gather your requirements, find out if the tested framework suits the requirements, and decide if it's useful to start over.
I'm not very sure about unit testing, but NetBeans has a built-in unit-testing suite.
If the code is really messy, it may be very hard to do any unit testing; only sufficiently loosely coupled and well-designed components can be unit tested easily. However, functional testing may be a lot easier to implement in your case. I would recommend taking a look at Selenium. With this framework, you will be able to test your GUI and the backend at the same time. It most probably won't catch bugs as precisely as unit testing could, though.
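For example, with the php-webdriver bindings a Selenium-backed script can drive the GUI while the real backend runs underneath (the URLs, form fields and expected text are made up):

```php
<?php
use Facebook\WebDriver\Remote\DesiredCapabilities;
use Facebook\WebDriver\Remote\RemoteWebDriver;
use Facebook\WebDriver\WebDriverBy;

// Assumes a Selenium server listening on its default port.
$driver = RemoteWebDriver::create('http://localhost:4444/wd/hub', DesiredCapabilities::chrome());

try {
    $driver->get('http://localhost:8080/login'); // hypothetical URL
    $driver->findElement(WebDriverBy::name('username'))->sendKeys('demo');
    $driver->findElement(WebDriverBy::name('password'))->sendKeys('secret');
    $driver->findElement(WebDriverBy::cssSelector('button[type=submit]'))->click();

    // The rendered page exercises both the GUI and the backend.
    assert(str_contains($driver->getPageSource(), 'Welcome, demo'));
} finally {
    $driver->quit();
}
```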
Maybe this list will help you and your mates restructure everything:
Use UML to design and handle exceptions (http://en.wikipedia.org/wiki/Unified_Modeling_Language)
Use BPMS to design your workflow so you won't struggle (http://en.wikipedia.org/wiki/Business_process_management)
Get a list of PHP frameworks which also support JavaScript (e.g. Zend with jQuery)
Compare these frameworks and take the one which best matches your project design and the coding structure used before
You might also consider using things like ezComponents and DTrace for debugging and testing
Do not be afraid of changes ;)
For GUI testing you may want to take a look at Selenium (as Ignas R already pointed out), or you may want to look at this tool as well: STIQ.
Best of luck!
In some cases, automated testing may not be such a good idea, especially when the code base is dirty and PHP mixes its behavior with JavaScript.
It could be better to start with a simple checklist (with links, to make it faster) of the tests that should be done (manually) on the thing before each delivery.
When coding in a 3-year-old minefield, you'd better protect yourself with plenty of error checking. The 15 minutes spent writing THE proper error message for each case will not be wasted.
Use the bridging method: bridge the ugly, lengthy function fold() with a call to fnew(), which is a wrapper around some clean classes; call both fold() and fnew(), compare the results, log the differences, throw the code into production and wait for the fish to bite. When doing this, always use one cycle for refactoring and another cycle for changing results (do not even fix bugs in the old behavior; just bridge it).
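A minimal sketch of that bridge; fold() and fnew() are the names from this answer, everything else is illustrative:

```php
<?php
// Call both the legacy function and its clean replacement, log any
// divergence, and keep returning the old result in production.
function fold_bridged(array $input)
{
    $old = fold($input);  // ugly legacy implementation
    $new = fnew($input);  // wrapper around the new, clean classes

    if ($old !== $new) {
        error_log('fold/fnew mismatch: ' . json_encode($input));
    }

    return $old; // old behaviour stays authoritative until the logs go quiet
}
```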
I agree with KOHb, Selenium is a must !
Also have a look at PHPure. Their software records the inputs and outputs from a working PHP website and then automatically writes PHPUnit tests for functions that do not access external sources (DB, files, etc.).
It's not a 100% solution, but it's a great start.