With tests, using @depends you can make a test method depend on another one; that way, should the first test fail, the test that depends on it will be skipped. Is there a way to do that with test classes?
For example, say I have a page whose layout I test, checking that the images show up and that the links are correct; this would be one test class. On this page is a link to a form, and for that page I would create a new test class checking its layout, validation, etc.
What I want to pull off is: if any test in the first test class fails, then the second class should be skipped, since the first page has to be correct before a user will ever reach the second one (unless they type the second page's URL in directly, but I have to assume users don't know how the address bar works).
I should also note that all the tests are stored in TFS, so while the team shares the same tests we may have different phpunit.xml files (one person uses phpunit.xml.dist and won't change from it because it is mentioned once in the Magento TAF, even though I use phpunit.xml and have had no problems). As such, trying to enforce an order in phpunit.xml won't be all that useful (and enforcing the order in the .xml wouldn't give this dependency behaviour anyway).
Just because we can do a thing, does not mean we should. Having tests that depend on other tests' successful execution is a bad idea. Every decent text on testing tells you this. Listen to them.
I don't think PHPUnit has a way to make one test class depend on another. (I started writing down some hacks that spring to mind, but they were so ugly and fragile that I deleted them again.)
I think the best approach is one big class for all your functional tests. Then you can use @depends as much as you need to. <-- That is where my answer to your actual question ends :-)
In your comment on Ross's answer you say: "I was taught that if you have a large number of (test) methods in one class then you should break it into separate classes." To see why we are allowed to break this rule, you have to look below the surface at why it is usually a good idea: a lot of code in one class suggests the class is doing too much, which makes it harder to change and harder to test. So you use the Extract Class refactoring to split classes up into finer functionality. But never mechanically: each class should still be a nice, clean abstraction of something.
In unit testing the class is better thought of as a way to collect related tests together. When one test depends on another then obviously they are related, so they should be in the same class.
If a 2000-line file makes you unhappy, one thing you can do, and should do, is Extract Parent Class. Into your parent class go all the helper functions and custom asserts. You leave all the actual tests in the derived class, but go through each of them and see what common functionality could be moved to a shared function, and then put that shared function in the parent class.
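A minimal sketch of that layout, assuming a reasonably recent PHPUnit with the namespaced TestCase (the page-loading helper and the custom assert are made-up examples):

```php
<?php
use PHPUnit\Framework\TestCase;

// Parent class: shared helpers and custom asserts only, no actual tests.
abstract class FunctionalTestCase extends TestCase
{
    // Hypothetical helper: fetch a page and return it as a DOM for inspection.
    protected function loadPage($path)
    {
        $html = file_get_contents('http://localhost' . $path);
        $dom = new DOMDocument();
        @$dom->loadHTML($html); // suppress warnings from real-world HTML
        return $dom;
    }

    // Hypothetical custom assert shared by every derived test class.
    protected function assertPageHasImage(DOMDocument $dom, $src)
    {
        $sources = [];
        foreach ($dom->getElementsByTagName('img') as $img) {
            $sources[] = $img->getAttribute('src');
        }
        $this->assertContains($src, $sources, "No <img> with src '$src' found on the page");
    }
}

// Derived class: keeps all the actual tests.
class LandingPageTest extends FunctionalTestCase
{
    public function testLogoIsShown()
    {
        $dom = $this->loadPage('/');
        $this->assertPageHasImage($dom, '/images/logo.png');
    }
}
```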
Responding to Ross's suggestion that @depends is evil, I prefer to think of it as helping you find the balance between idealism and real-world constraints. In the ideal world you want all your tests to be completely independent. That means each test needs to do its own fixture creation and tear down. If using a database, it should be creating its own database (with a unique name, so in the future they can be run in parallel), then creating the tables, filling them with data, etc. (Use the helper functions in your parent class to share this common fixture code.)
On the other hand, we want our tests to finish in under 100 milliseconds, so that they don't interrupt our creative flow. Fixture sharing helps speed up tests at the cost of removing the independence.
For functional tests of a website I would recommend using @depends for obvious things like login. If most of your tests first log into the site, then it makes a lot of sense to make a loginTest() and have all other tests depend on it. If login is not working, you know for sure that all your other tests are going to fail... and waste huge amounts of the most valuable resource, programmer time, in the process.
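For example (a sketch only; loginAs() and fetchPage() are hypothetical helpers, perhaps living in the parent class described above):

```php
<?php
use PHPUnit\Framework\TestCase;

class SiteTest extends TestCase
{
    // Log in once and hand the session on to every test that @depends on this one.
    public function testLogin()
    {
        $session = $this->loginAs('user@example.com', 'secret'); // hypothetical helper
        $this->assertNotEmpty($session);
        return $session; // passed as an argument to dependent tests
    }

    /**
     * @depends testLogin
     */
    public function testDashboardGreetsTheUser($session)
    {
        // PHPUnit skips this test automatically if testLogin failed.
        $html = $this->fetchPage('/dashboard', $session); // hypothetical helper
        $this->assertTrue(strpos($html, 'Welcome') !== false);
    }
}
```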
When it is not so clear-cut, I'd err on the side of idealism, and come back and optimize later, if you need to.
Related
I'm adding unit tests to a legacy PHP application that uses a MySQL compatible database. I want to write genuine unit tests that don't touch the database.
How should I avoid accidentally using a database connection?
Lots of parts of the application use static method calls to get a reference to a database connection wrapper object. If I'm looking at one of these calls I know how to use dependency injection and test doubles to avoid hitting the database from a test, but I'm not sure what to do about all the database queries that I'm not looking at at any one time, which could be some way down the call stack from the method I'm trying to test.
I've considered adding a public method to the database access class that would be called from the PHPUnit bootstrap file and set a static variable to make any further database access impossible, but I'm not keen on adding a function to the application code purely for the sake of the tests that would be harmful if called in production.
Adding tests to a legacy application can be delicate, especially unit tests. The main problem you will likely have is that most tests will be hard to write and will easily become unintelligible, because they involve a massive amount of setting up and mocking. At the same time you will likely not have much freedom to refactor the code so that it becomes easier to test, because that would cause ripple effects through the code base.
That's why I usually prefer end-to-end tests. You can cover lots of ground without having to test close to the implementation, and those tests are usually more useful when you want to do large-scale refactoring or migrate the legacy code base later, because they ensure that the most important features still work as expected.
For this approach you will need to test through the database, just not the live database. In the beginning it's probably easiest to just make a copy, but it's absolutely worthwhile to create a trimmed-down database with some test fixtures from scratch. You can then use something like Selenium to test your application through the web interface by describing the actions you take on the site (go to URL x, fill out a form and submit it) and the expected outcome (I should be on URL y now and there should be a new entry in the database). As you can see, these tests are written very close to what you see on the website and not so much around the implementation or the individual units. This is intentional, because in a migration you might want to rip out large chunks and rewrite them. The unit tests would become useless then, because the implementation might change drastically, but those end-to-end tests describing the functionality of the site will still remain valid.
There are multiple ways you can go about this. If you are familiar with PHPUnit you might want to try the Selenium extension. You should find tutorials for this online, for example this one: https://www.sitepoint.com/using-selenium-with-phpunit/
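A rough sketch of what a test with that extension looks like (it needs the phpunit/phpunit-selenium package and a running Selenium server; the URL, field names and expected text are placeholders):

```php
<?php
class ContactFormTest extends PHPUnit_Extensions_Selenium2TestCase
{
    protected function setUp()
    {
        $this->setBrowser('firefox');
        $this->setBrowserUrl('http://localhost/'); // base URL of the app under test
    }

    public function testContactFormSubmits()
    {
        $this->url('/contact');                                // go to URL x
        $this->byName('email')->value('visitor@example.com');  // fill out the form
        $this->byName('message')->value('Hello there');
        $this->byCssSelector('form')->submit();                // and submit it

        // describe the expected outcome
        $this->assertContains('/thank-you', $this->url());
        $this->assertContains('Thank you', $this->byTag('body')->text());
    }
}
```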
Another popular option for this kind of test is Behat with the MinkExtension. In both cases the hardest part is setting up Selenium, but once you are able to write a simple test that, for example, goes to your front page and checks for some text snippet, and get that running, you can write new tests really fast.
One big downside of these tests is that they are very slow, because they do full web requests and in some cases have to wait for JavaScript. So you should probably not test everything; instead, focus on the most important features. If you have an e-commerce project, maybe go through a very generic checkout procedure, then expand on the variations that are important to you, e.g. logged-in user vs. new user, or adding vouchers to the basket. Another good way to start is to write very dumb tests that just check whether your URLs are actually accessible: go to the URL and check for the status code and some expected text snippet. Those are not really that useful in terms of making sure your application behaves correctly, but they still give you some safety that no random 500 errors appear out of the blue.
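A minimal sketch of such a smoke test with plain PHPUnit and curl (the paths and text snippets are placeholders):

```php
<?php
use PHPUnit\Framework\TestCase;

class SmokeTest extends TestCase
{
    public function urlProvider()
    {
        return [
            ['/',         'Welcome'],
            ['/login',    'Password'],
            ['/checkout', 'Basket'],
        ];
    }

    /**
     * @dataProvider urlProvider
     */
    public function testUrlIsReachable($path, $expectedSnippet)
    {
        $ch = curl_init('http://localhost' . $path);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
        $body   = curl_exec($ch);
        $status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
        curl_close($ch);

        $this->assertSame(200, $status, "Unexpected status code for $path");
        $this->assertTrue(strpos($body, $expectedSnippet) !== false,
            "Expected to find '$expectedSnippet' on $path");
    }
}
```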
This is probably the best approach for making sure your app works well and for making it easier to upgrade, refactor or migrate your application or parts of it. Additionally, whenever you add new features, try to write some actual unit tests for them. It's probably easiest if they are not too connected with the old parts of the code. In the best case you won't have to worry too much about the database, because you can replace the data you get from the database with instances you prepare yourself in the test and then just test the feature. Obviously, if it's something as simple as "we want a form that adds this data to the database", you will probably still not want to write a unit test, but write one of those bigger end-to-end tests instead.
When I started making a mobile app (which uses Laravel on the server) I decided not to dig into testing or coding standards, as I thought it was better to get a working app first.
A few months later I have a working app, but my code doesn't follow any standards, nor have I written any tests.
The current directory structure I'm using is:
app/controllers : contains all the controllers the app uses. The controllers aren't exactly thin: they contain most of the logic, and some of them even have multiple conditional statements (if/else). All the database interactions occur in the controllers.
app/models : define relationships with other models and contain certain functions relevant to the particular model, e.g. a validate function.
app/libraries : contains custom helper functions.
app/database : contains the migrations and seeds.
My app currently works, and the reason for the slack is probably that I work alone on it.
My concerns:
Should I go ahead and release the app and then see whether it's even worth making the effort to refactor, or should I refactor first?
I do wish to refactor the code, but I'm unsure what approach to take. Should I first get the standards right and then make my code testable? Or should I not worry about standards (and continue using classmap to autoload) and just try to make my code testable?
How should I structure my files?
Where should I place interfaces, abstract classes, etc.?
Note: I am digging into testing and coding standards with whatever resources I can find, but if you could point me to some resources I would appreciate it.
Oh dear. The classic definition of legacy code is code with no unit tests. So now you are in that unfortunate position where, if you ever have to amend, enhance or reuse this code base, you are going to have to absorb a significant cost. The further away from your current implementation you get, the higher that cost is going to get. That's called technical debt. Having testable code doesn't get rid of it, though; it simply reduces the interest rate.
Should you do it? Well, if you release now and it achieves some popularity, you will then need to do more to it...
If it's going to be received with total indifference or screams of dismay, then there's no point in releasing it at all; in fact, that would be counter-productive.
Should you choose to carry on with the code base, I can't see any efficient way to refactor towards a coding standard without a unit test to prove you haven't broken the app in doing so. If you find a coding standard where testable and tested aren't a key part of it, ignore it; it's drivel.
Of course you still have the problem of how to change code to make it testable without breaking it. I suspect you can iterate towards that, say with delegation in your fat controllers, without too much difficulty.
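As a sketch of that iteration (Laravel 4-flavoured to match the question's layout; the class, model and column names are invented): move the logic out of the fat controller into a plain class you can unit test, and let the controller delegate to it.

```php
<?php
// Plain class with the extracted logic: no framework, no database, easy to unit test.
class OrderTotalCalculator
{
    public function total(array $items, $voucherDiscount = 0)
    {
        $sum = 0;
        foreach ($items as $item) {
            $sum += $item['price'] * $item['qty'];
        }
        return max(0, $sum - $voucherDiscount);
    }
}

// The controller keeps the framework plumbing but delegates the interesting work.
class OrderController extends BaseController
{
    public function show($id)
    {
        $order      = Order::findOrFail($id); // hypothetical Eloquent model
        $calculator = new OrderTotalCalculator();
        $total      = $calculator->total($order->items->toArray(), $order->voucher_discount);

        return View::make('orders.show', compact('order', 'total'));
    }
}
```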
Two key points.
Have at least some test first, even if it's a wholly manual one, say a printout of the web page, before you do anything. A set of integration tests, perhaps using an automation suite, would be very useful, especially if you already have a coupling problem.
The other is: don't make too many changes at once, and by "too many" I mean consider how many operations in the app will be affected by the code you're touching.
Michael Feathers' (no affiliation whatsoever) Working Effectively with Legacy Code would be a good purchase if you want to learn more.
Hopefully this lesson has already been learnt. Don't do this again.
Last but not least, doing this sort of exercise is a great learning experience; rest assured you will run into situations (lots of them) where this perennial mistake has been made.
I have a question about how to implement Behat/Mink functional tests.
In my web app, I have users that can access some data sheets if they have the required credentials (i.e. no access / read only / write).
I want to be able to test all the possible contexts via Behat/Mink.
The question is: what is the best practice for such testing?
Some devs told me that I should create a scenario for each type of user I would like to use, and then reuse the user created there in the other tests.
But I am not very comfortable with this idea: I believe it introduces coupling between my tests. If the test that creates the user fails, then the test that checks access to my datasheet for that specific user will also fail.
So I thought I could use some fixtures instead: before testing my app, I run a script that inserts all the profiles I need. I would still have some tests dedicated to creating users, and I would use the fixtures to check whether a specific user is allowed to access a specific datasheet.
The downside of this solution is that I will have to maintain the fixture set.
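Roughly what I have in mind for that script (table and column names are just examples):

```php
<?php
// load_fixtures.php - run once before the Behat suite to (re)create the known profiles.
$pdo = new PDO('mysql:host=localhost;dbname=app_test', 'test_user', 'test_password');

$pdo->exec('TRUNCATE TABLE users');

$profiles = [
    ['alice@example.com', 'no_access'],
    ['bob@example.com',   'read_only'],
    ['carol@example.com', 'write'],
];

$insert = $pdo->prepare('INSERT INTO users (email, credential) VALUES (?, ?)');
foreach ($profiles as $profile) {
    $insert->execute($profile);
}
```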
Do you have any suggestions / ideas?
Hi user3333860 (what a username XD),
I'm not an expert in testing, and these days I'm more into Ruby/RSpec, but I personally think that both solutions are good and are used.
Use a feature to create your User:
If your test for creating a user fails, it probably means that user creation itself is broken.
So the fact that other tests then fail doesn't seem like a drawback to me. But I do understand that you don't want coupling between your tests.
The main point is: are your tests run in a fixed order or randomly (e.g. RSpec doesn't always launch tests in the same order), and are you ready to have the same test run multiple times in different features so that your other tests can complete successfully?
Use fixtures:
Well, it is also a good and popular solution (in fact, the one I use), and you already pinpointed the fact that you will have to maintain them.
In the end, I'd take the fixtures path, but ONLY with a tool like FactoryGirl that helps you maintain your object templates (here is the PHP version of it):
https://github.com/breerly/factory-girl-php
Hope I helped with your dilemma.
I'm writing unit tests for a project (written in PHP, using PHPUnit) that has its entire environment (loaded components, events, configuration, cache, per-environment singletons, etc.) held in an object which all the components use to interact with each other (using a mediator pattern).
In order to make the unit tests run faster, I'm sharing the environment object and some other objects among tests in the same test case, using PHPUnit's setUpBeforeClass() and static properties. For example, in my test case for view objects (the V of MVC), I share the view manager object, which acts as a factory for view objects and is responsible for the actual rendering.
Even though, to the best of my knowledge, the objects I share shouldn't affect the integrity of the tests (in the views case, for example, the environment and view manager objects are shared, but a separate view object is created for every test, and that view is the object actually being tested), it feels increasingly wrong to me.
I would prefer it if each test used a completely isolated environment and couldn't affect other tests in the same test case in any way. However, that would make the tests run much slower, and that feels like a big price for something whose downside I can't really pinpoint and which mainly just "feels wrong".
What do you think? Can you pinpoint any downsides, so I can convince myself it's worth the longer execution time? Or am I just overreacting and it's completely fine?
I share your feelings, so maybe I'll just state my goals and the solution I came up with when I faced that issue:
Devs should have a test suite that runs very very fast
At least single test cases should execute in less than a second
I really want to be sure I don't have interdependencies in my test cases
I'm going to assume you have a Continuous Integration server running. If not, a cron job might do, but consider setting up Jenkins; it's really, really easy.
For normal usage:
Just share as many fixtures as you need to get the speed you need. It might not be pretty, and there might be better solutions along the way, but if you have something that is expensive to create, just create it once.
I'd suggest helper methods like getFoo() { if (!self::$foo) { ...create it... } return self::$foo; } over setUpBeforeClass(), because they can make sharing easier, but mainly because of the following point.
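In code, that idea looks roughly like this (the Environment class and its methods are stand-ins for whatever your expensive shared fixture is):

```php
<?php
use PHPUnit\Framework\TestCase;

class ViewTest extends TestCase
{
    private static $environment;

    // Lazily build the expensive shared fixture on first use, instead of in setUpBeforeClass().
    private static function getEnvironment()
    {
        if (self::$environment === null) {
            self::$environment = new Environment(); // hypothetical expensive setup
            self::$environment->loadComponents();
        }
        return self::$environment;
    }

    public function testViewRenders()
    {
        $viewManager = self::getEnvironment()->getViewManager(); // shared across tests
        $view        = $viewManager->create('example');          // but a fresh view per test
        $this->assertNotNull($view);
    }
}
```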
Once a night:
Run your test suite with --process-isolation, and in the test bootstrap recreate your complete database and everything.
It might run for 6 hours (disable code coverage for that!), but who cares? Your fixtures will be recreated for every single test case, since each test runs in a new PHP process and the static vars don't exist.
This way you can be sure, once a day, that you haven't created dependent tests. That's good enough to remember what you did (and you can run with --filter and --process-isolation if you need to fix something).
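A sketch of what that nightly bootstrap could look like (credentials and the schema path are placeholders); you would run it with something like phpunit --process-isolation --bootstrap tests/bootstrap-nightly.php:

```php
<?php
// tests/bootstrap-nightly.php - wipe and rebuild the whole test database before the slow run.
$pdo = new PDO('mysql:host=localhost', 'test_user', 'test_password');
$pdo->exec('DROP DATABASE IF EXISTS app_test');
$pdo->exec('CREATE DATABASE app_test');
$pdo->exec('USE app_test');

// Naive schema import: split on ';' (fine for a simple schema.sql, adjust for stored procedures).
$statements = array_filter(array_map('trim', explode(';', file_get_contents(__DIR__ . '/schema.sql'))));
foreach ($statements as $statement) {
    $pdo->exec($statement);
}

// Whatever the normal bootstrap does (autoloading etc.) still has to happen here too.
require __DIR__ . '/../vendor/autoload.php';
```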
Like writing "normal" code, when you write test cases, it's fine to rely on knowledge of how fixture objects work.
If a given factory method is documented as generating new instances every time, then I see only downside in recreating the factory itself for each test, especially if creating the factory is expensive.
It helps to keep in mind a key goal around writing unit tests. You want to know within 5-10 minutes whether you broke the build. That way you can go out to lunch, go to a meeting, go home, etc. after you get the "all clear". If you know that some part of the fixture is reusable without creating interactions, then you should use that knowledge to make your tests even more comprehensive within that 5-10 minute window. I understand the purist impulse here, but it buys you nothing in terms of test independence, and it unnecessarily limits what your test suite will accomplish for you.
Perhaps this is blasphemous, but if you can still manage a significant threshold of code coverage, and you can also guarantee that no state pollution can exist (and that is your real issue: are you making sure that each test will not be affected by the data left over from the test before?), I see no problem with leaving things the way they are.
Let the tests run quickly, and when a bug is found (which is inevitable in integration testing) then you have reason to invest the time in localizing one particular test or set of tests. If you have a toolkit which works for the general case, however, I would leave it as is.
On our development team, we decided to give unit testing a try. We use SimpleTest. However, it's been a tough road. After a week, I have only created one unit test, which tests a certain helper file. That's it. The rest (controllers, models, views, libraries) don't have unit tests yet, and I plan not to test a majority of them. Views, for example, are too trivial to test, so I pass on testing those. Next, the controllers: I plan my controllers not to do complex stuff, so that they only pass information between the models and the views; I'd move the more complex stuff to libraries or helpers.
Now for my questions:
1) Am I doing it wrong? So far, I can't see anything error-prone enough to need a unit test. Most of the stuff (right now) is just CRUD.
2) Do we really need to unit test controllers? Since a controller's job is just minor processing of the data passed between view and model, I see very little incentive to unit test it.
3) If I use WebTestCase to test controllers, would that still be considered a unit test? Or is it already an integration test?
4) Suppose you got me to test my controller: how would I test it? As far as I know, CodeIgniter follows the Front Controller pattern through index.php, so how would I handle (mock?) that?
If you are still open to suggestions for another unit testing framework for CodeIgniter, I suggest you try Toast. I found it easy to use, and it doesn't interfere much with my development process.
I only use unit tests to test my libraries, helpers, and models. My controllers don't have much code: they only get POST and URI parameters, sanitize them using trim or intval, pass them to a library or model, then pass the result to the view.
Views have almost no code to test, since they just display everything to the browser; mostly they need CSS and JS debugging.
Models almost always need testing, since they deal with data. Without unit tests, I found it difficult to trace some bugs, especially in complex computations.
Libraries and helpers do repetitive work, so they need unit tests to make sure the logic in them is doing the work correctly.
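For the helper/library case, a small SimpleTest sketch (format_price() is a made-up helper; adjust the require path to your project):

```php
<?php
require_once 'simpletest/autorun.php';
require_once __DIR__ . '/../application/helpers/price_helper.php'; // made-up helper file

class PriceHelperTest extends UnitTestCase
{
    function testFormatsPriceWithTwoDecimals()
    {
        $this->assertEqual('1,234.50', format_price(1234.5));
    }

    function testNegativePriceIsClampedToZero()
    {
        $this->assertEqual('0.00', format_price(-10));
    }
}
```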
I hope this helps.
Are you doing something wrong? I don't think so.
Do we really need to unit test controllers? I don't. Maybe I should. Seems like a lot of work, though.
If I use WebTestCase to test controllers, would that still be considered a unit test? Or is it already an integration test? WebTestCase would be an interesting approach to testing controllers if some meaningful output could be detected; for example, detecting that no error has occurred when calling /some/specific/path.
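For example, something along these lines with SimpleTest's WebTestCase (the URL and the expected text are placeholders):

```php
<?php
require_once 'simpletest/autorun.php';
require_once 'simpletest/web_tester.php';

class ControllerSmokeTest extends WebTestCase
{
    function testSomeSpecificPathRendersWithoutError()
    {
        $this->get('http://localhost/index.php/some/specific/path');
        $this->assertResponse(200);
        $this->assertNoText('A PHP Error was encountered'); // CodeIgniter's default error banner
        $this->assertText('Expected page heading');
    }
}
```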
Suppose you got me to test my controller, how would I test it? That's a tough one. You would presumably need to initialize part of the application environment in order to do anything worthwhile.
Most articles/books tell you to define your tests before you start coding. Maybe I've tried that, but I'm usually too impatient. It seems to get in the way of quick prototyping, but maybe defining the unit tests is a way of quick prototyping.
I've found that deciding what to test in PHP is a challenge. I think you have to pick your battles. If it's critical that a method return a value of a specific type, then test for that. If a lot of things happen automatically when an object is instantiated, you could test for that too.
In general, what I do -- right or wrong -- is get everything working, then create some basic tests and then add tests as needed based on any problems I encounter. The idea is to never have a repeat problem and each test will insure that the application behaves the same through its life-cycle.
As for specifics, I'm using Zend Framework, but it will be similar in CodeIgniter. I also use SimpleTest (with my own class wrapper). I may or may not unit test models, and I have never implemented tests for controllers or views; it seems like too much work for too little benefit. Most frameworks "fail early", so issues are typically obvious. But any common library code makes for an easier target, and bugs there, especially logic bugs, are harder to detect. Set up your tests to make sure things work as expected, so that your framework-specific code encounters few problems.