I have a question about how to implement Behat/Mink functional tests.
In my web app, users can access certain data sheets if they have the required permissions (i.e. no access / read only / write).
I want to be able to test all the possible contexts via behat/mink.
The question is: what is the best practice for such testing?
Some developers told me that I should create a scenario for each type of user I need, and then reuse the user created there in other tests.
But I am not very comfortable with this idea: I believe it introduces coupling between my tests. If the test that creates the user fails, then the test that checks that specific user's access to a datasheet will also fail.
So I thought I could use fixtures instead: before testing my app, I run a script that inserts all the profiles I need. I keep some tests dedicated to creating users, and I use the fixtures to check whether a specific user is allowed to access a specific datasheet.
The downside of this solution is that I will have to maintain the fixture set.
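For illustration, here is roughly the kind of fixture-loading hook I have in mind; the UserFixtures helper, the tag and the profiles are only placeholders for whatever my real script would insert:

<?php

use Behat\Behat\Context\Context;
use Behat\Behat\Hook\Scope\BeforeScenarioScope;

class AccessContext implements Context
{
    /**
     * @BeforeScenario @access
     * Inserts the user profiles needed by the access-control scenarios.
     * UserFixtures is a placeholder for whatever actually writes to the database.
     */
    public function loadUserProfiles(BeforeScenarioScope $scope)
    {
        UserFixtures::insert([
            ['login' => 'no_access_user', 'rights' => 'none'],
            ['login' => 'read_only_user', 'rights' => 'read'],
            ['login' => 'writer_user',    'rights' => 'write'],
        ]);
    }
}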
Do you have any suggestions or ideas?
Hi user3333860 (what a username XD),
I'm not an expert in testing, and these days I'm more into Ruby/RSpec, but I personally think that both solutions are good and both are used in practice.
Use a feature to create your User:
If your test for creating a user fails, it probably means that user creation itself is broken.
So the fact that other tests then fail doesn't seem like a drawback to me. But I do understand that you don't want coupling between your tests.
The main questions are: are your tests run in a fixed order or randomly (e.g. RSpec doesn't always run tests in the same order), and are you willing to repeat the same user-creation steps in different features so that your other tests can complete on their own?
Use fixtures:
This is also a good and popular solution (in fact, the one I use), and you have already pinpointed its drawback: you will have to maintain them.
In the end, I'd take the fixtures path ONLY with a helper tool like FactoryGirl, which helps you maintain your object templates (here is a PHP port of it):
https://github.com/breerly/factory-girl-php
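I don't remember that library's exact API, so here is just a hand-rolled sketch of the factory idea (not the actual FactoryGirl PHP API); the point is to keep your object templates in a single place:

<?php

// Hand-rolled illustration of the factory idea; attribute names are made up.
class UserFactory
{
    private static $defaults = [
        'login'  => 'user_%d',
        'email'  => 'user_%d@example.com',
        'rights' => 'read',
    ];

    private static $sequence = 0;

    public static function build(array $overrides = [])
    {
        self::$sequence++;

        $attributes = array_merge(self::$defaults, $overrides);
        foreach ($attributes as $key => $value) {
            if (is_string($value)) {
                $attributes[$key] = sprintf($value, self::$sequence);
            }
        }

        return new User($attributes); // User is your own entity class
    }
}

// Usage in a fixture script or a Behat hook:
$writer = UserFactory::build(['rights' => 'write']);
$reader = UserFactory::build(); // defaults to read-only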
Hope I helped with your dilemma.
I'm adding unit tests to a legacy PHP application that uses a MySQL-compatible database. I want to write genuine unit tests that don't touch the database.
How should I avoid accidentally using a database connection?
Lots of parts of the application use static method calls to get a reference to a database connection wrapper object. When I'm looking at one of these calls I know how to use dependency injection and test doubles to avoid hitting the database from a test, but I'm not sure what to do about all the database queries I'm not currently looking at, which could be some way down the call stack from the method I'm trying to test.
I've considered adding a public method to the database access class that would be called from the PHPUnit bootstrap file and set a static variable to make any further database access impossible, but I'm not keen on adding a function to the application code purely for the sake of the tests that would be harmful if called in production.
Adding tests to a legacy application can be delicate, especially unit tests. The main problem you will likely have is that most tests will be hard to write and will easily become unintelligible, because they involve a massive amount of setup and mocking. At the same time you will likely not have much freedom to refactor the classes so that they become easier to test, because that would cause ripple effects through the code base.
That's why I usually prefer end-to-end tests. You can cover a lot of ground without testing close to the implementation, and those tests are usually more useful when you later want to do large-scale refactoring or migrate the legacy code base, because they ensure that the most important features still work as expected.
For this approach you will need to test through the database, just not the live database. In the beginning it's probably easiest to just make a copy, but it's absolutely worthwhile to create a trimmed-down database with some test fixtures from scratch. You can then use something like Selenium to test your application through the web interface by describing the actions you take on the site (go to URL x, fill out a form and submit it) and the expected outcome (I should now be on URL y and there should be a new entry in the database). As you can see, these tests are written very close to what you see on the website and not so much around the implementation or the individual units. This is intentional, because in a migration you might want to rip out large chunks and rewrite them. The unit tests will become completely useless then, because the implementation might change drastically, but those end-to-end tests describing the functionality of the site will still remain valid.
There are multiple ways you can go about this. If you are familiar with PHPUnit you might want to try the Selenium extension. You should find tutorials for this online, for example this one: https://www.sitepoint.com/using-selenium-with-phpunit/
Another popular option for this kind of test is Behat with the MinkExtension. In both cases the hardest part is setting up Selenium, but once you have a simple test running, for example one that goes to your front page and checks for some text snippet, you can write further tests really fast.
One big downside of these tests is that they are very slow, because they make full web requests and in some cases have to wait for JavaScript. So you should probably not test everything; instead, focus on the most important features. If you have an e-commerce project, for example, go through a very generic checkout procedure, then expand on the variations that matter to you, e.g. logged-in user vs. new user, or adding vouchers to the basket. Another good way to start is to write very simple smoke tests that just check whether your URLs are actually accessible: go to a URL and check for a status code and some expected text snippet. Those are not really that useful in terms of making sure your application behaves correctly, but they still give you some safety against random 500 errors appearing out of the blue.
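A smoke-test step like that, written as a Mink context, could look roughly like this; it is only a sketch, and it assumes a non-JavaScript driver such as Goutte, since getStatusCode() is not available through Selenium:

<?php

use Behat\MinkExtension\Context\MinkContext;

class SmokeContext extends MinkContext
{
    /**
     * @Then :path should answer with status :code and mention :text
     */
    public function assertPageIsUp($path, $code, $text)
    {
        $session = $this->getSession();
        $session->visit($this->locatePath($path)); // resolved against base_url in behat.yml

        if ((int) $session->getStatusCode() !== (int) $code) {
            throw new \RuntimeException(sprintf(
                'Expected status %d for %s, got %d',
                $code, $path, $session->getStatusCode()
            ));
        }

        if (!$session->getPage()->hasContent($text)) {
            throw new \RuntimeException(sprintf('"%s" not found on %s', $text, $path));
        }
    }
}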
This is probably the best approach to making sure your app works well and to making it easier to upgrade, refactor or migrate your application or parts of it. Additionally, whenever you add new features, try to write some actual unit tests for them. That is easiest if they are not too entangled with the old parts of the code. In the best case you won't have to worry much about the database, because you can replace the data you get from it with instances you prepare yourself in the test and then just test the feature. Obviously, if it's something as simple as "we want a form that adds this data to the database", you will probably still not want to write a unit test, but rather one of those bigger end-to-end tests.
I'm creating a PHP script which should process the POST data it receives (from AJAX or elsewhere) and pass it on to another script.
I'm wondering how to develop it in a "BDD way".
So far I've covered the "processing part" by writing features with Behat and creating the required building blocks (classes) using phpspec.
But I'm blocked when it comes to testing the following behaviours:
the script only processes / accepts POST data,
the script sends only valid data further after processing,
the script sends back errors in case of invalid data.
It seems to me that I could write the tests against the script itself, but then I'm wondering:
whether it's a good idea (it seems simple enough, but also a bit messy because there is not much isolation)
how to do this elegantly in Behat (it feels messy to have to run my local server manually and hardcode its URL in my tests/contexts, but maybe that's just how it's done)
Any ideas or suggestions?
BDD:
Let's try to change your mindset a bit. BDD is about collaboration, and the automation part (the tests) is only the last piece of it.
Your acceptance tests, which is what you write via Behat, should only cover a specification of your feature via examples. That means: don't focus on testing all possible scenarios as you would with unit/integration tests, but specify only the minimum that describes your feature well enough to reveal its intention.
In most cases the examples cover only positive scenarios, and 1-5 of them are enough.
A little help here: ask yourself what you would mention if these examples were part of the documentation for customers. Specification by example is nothing more and nothing less than documentation of the application that can be automatically tested.
Testing levels:
Unfortunately, I don't know the technical background of your script, so the answer will be more theoretical.
Acceptance tests can run at several levels; the higher the level you test at, the more you cover, but the more expensive the tests are to create and maintain:
1. UI
2. HTTP request via the infrastructure
3. Initiate the application and inject a fake request
4. Call a controller directly
5. Call an application service that processes the domain logic
Here is my personal practice. Because BDD works best together with TDD, I always start at level 5), and sometimes I also add the higher level 3) to be sure that the application works correctly as a whole. I use levels 2) and 1) very rarely, as I don't need to test my infrastructure via acceptance tests; that's not their purpose.
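As a rough sketch of level 3), you can boot the application and inject a fake POST request; the example below assumes a Symfony-style HttpKernel, and the class and route names are purely illustrative:

<?php

use Behat\Behat\Context\Context;
use Symfony\Component\HttpFoundation\Request;

class PostProcessingContext implements Context
{
    private $kernel;
    private $response;

    public function __construct()
    {
        $this->kernel = new \AppKernel('test', true); // hypothetical application kernel
    }

    /**
     * @When I POST the payload :payload to :path
     */
    public function iPostThePayloadTo($payload, $path)
    {
        // Build a fake POST request instead of going through a real web server.
        $request = Request::create($path, 'POST', json_decode($payload, true));
        $this->response = $this->kernel->handle($request);
    }

    /**
     * @Then the response status code should be :code
     */
    public function theResponseStatusCodeShouldBe($code)
    {
        if ($this->response->getStatusCode() !== (int) $code) {
            throw new \RuntimeException(sprintf(
                'Expected %d, got %d', $code, $this->response->getStatusCode()
            ));
        }
    }
}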
This is more of a comment rather than a proper answer, but it would not fit well in a comment either...
So first of all, it's good that you're trying to test everything. But in order to do BDD (and hence use Behat), there has to be some sort of benefit for a stakeholder or part of the business with whom you should have a conversation about the feature at hand.
The fact that you're just describing a script that receives a particular input and transforms it into a particular output sounds pretty much like what you'd expect from a unit test, right?
On the other hand, if this particular script is solving a stakeholder's need and there's a story or scenario that this script helps to satisfy, I assume there would be some sort of change in the state of the system which you can test for. If so, then go ahead and describe it using Behat, having a conversation with the appropriate stakeholders. I guess you'd need to set up an environment for your system in which you'd run your script and then check that the system's state has changed appropriately.
With tests, using @depends you can make a test method depend on another one; that way, should the first test fail, the test that depends on it will be skipped. Is there a way to do that with test classes?
For example, say I have a page whose layout I test, checking that the images show up and that the links are correct; this is one test class. On that page there is a link to a form, and for that form page I would create a new test class checking its layout, validation etc.
What I want to pull off is this: if any test of the first test class fails, then the second class should be skipped, since the first page has to be correct, as it is the page a user will see before reaching the second (unless they type the second page's URL in directly, but we have to assume users are stupid and as such don't know how the address bar works).
I should also note that all the tests are stored in TFS, so while the team will have the same tests, we may have different phpunit.xml files (one person apparently uses phpunit.xml.dist and won't change from it because it is mentioned once in the Magento TAF, even though I use .xml and have had no problems). As such, trying to enforce an order in phpunit.xml won't be all that useful (and enforcing the order in the .xml wouldn't express this dependency anyway).
Just because we can do a thing, does not mean we should. Having tests that depend on other tests' successful execution is a bad idea. Every decent text on testing tells you this. Listen to them.
I don't think PHPUnit has a way to make one test class depend on another. (I started writing down some hacks that spring to mind, but they were so ugly and fragile that I deleted them again.)
I think the best approach is one big class for all your functional tests. Then you can use @depends as much as you need to. <-- That is where my answer to your actual question ends :-)
In your comment on Ross's answer you say: "I was taught that if you have a large number of (test) methods in one class then you should break it into separate classes." To see why we are allowed to break this rule, you have to go below the surface to why it is usually a good idea: a lot of code in one class suggests the class is doing too much, making it harder to change and harder to test. So you use the Extract Class refactoring to split classes up into finer-grained functionality. But never mechanically: each class should still be a nice, clean abstraction of something.
In unit testing the class is better thought of as a way to collect related tests together. When one test depends on another then obviously they are related, so they should be in the same class.
If a 2000-line file makes you unhappy, one thing you can do, and should do, is Extract Parent Class. Into your parent class go all the helper functions and custom asserts. You will leave all the actual tests in the derived class, but go through each of them and see what common functionality could be moved to a shared function, and then put that shared function in the parent class.
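A rough sketch of that split; the SQLite stand-in, the table and the helper names are only illustrative:

<?php

use PHPUnit\Framework\TestCase;

// Parent class holding shared fixture code and custom asserts.
abstract class FunctionalTestCase extends TestCase
{
    protected static $pdo;

    protected static function db()
    {
        if (self::$pdo === null) {
            self::$pdo = new \PDO('sqlite::memory:'); // stand-in for your real test database
            self::$pdo->exec('CREATE TABLE users (id INTEGER PRIMARY KEY, login TEXT)');
        }

        return self::$pdo;
    }

    // Custom assert shared by all functional tests.
    protected function assertTableRowCount($expected, $table)
    {
        $count = (int) self::db()->query("SELECT COUNT(*) FROM $table")->fetchColumn();
        $this->assertSame($expected, $count, "Unexpected row count in $table");
    }
}

// The actual tests stay in the derived class.
class UserPagesTest extends FunctionalTestCase
{
    public function testSignupCreatesARow()
    {
        self::db()->exec("INSERT INTO users (login) VALUES ('alice')");
        $this->assertTableRowCount(1, 'users');
    }
}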
Responding to Ross's suggestion that @depends is evil: I prefer to think of it as helping you find the balance between idealism and real-world constraints. In the ideal world you want all your tests to be completely independent. That means each test needs to do its own fixture creation and tear-down. If using a database, it should be creating its own database (with a unique name, so that in the future the tests can run in parallel), then creating the tables, filling them with data, etc. (Use the helper functions in your parent class to share this common fixture code.)
On the other hand, we want our tests to finish in under 100 milliseconds, so that they don't interrupt our creative flow. Fixture sharing helps speed up tests at the cost of removing the independence.
For functional tests of a website I would recommend using @depends for obvious things like login. If most of your tests first log into the site, then it makes a lot of sense to write a loginTest() and have every other test carry @depends on it. If login is not working, you know for sure that all your other tests are going to fail... and would waste huge amounts of the most valuable programmer resource in the process.
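A minimal sketch of that pattern (the login mechanics are stubbed out and the names are illustrative):

<?php

use PHPUnit\Framework\TestCase;

class SiteTest extends TestCase
{
    public function testLogin()
    {
        // Drive the real login form here (the Mink/Selenium calls are omitted in this sketch).
        $sessionCookie = 'stub-session-cookie';

        $this->assertNotEmpty($sessionCookie);

        return $sessionCookie; // handed to every dependent test
    }

    /**
     * @depends testLogin
     */
    public function testDashboardGreetsTheUser($sessionCookie)
    {
        // Skipped automatically if testLogin fails, so you don't pay for
        // a pile of slow, guaranteed-to-fail requests.
        $this->assertNotEmpty($sessionCookie);
    }
}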
When it is not so clear-cut, I'd err on the side of idealism, and come back and optimize later, if you need to.
I want to write some unit tests in my project (I am new to testing), but the tutorials online seem to show examples testing only the simplest stuff.
What I want to test is the case where sending a POST to addAction in my SurveyController results in corresponding rows being added to my survey and question tables (one-to-many).
What are the best practices for testing database-related stuff? Do I create a separate database for my test environment and run the tests against it? Is that the only and right option?
It depends on your circumstances.
Here is my take on this:
The idea is to test but also be DRY (Don't repeat yourself). Your tests must cover all or as many different cases as possible to ensure that your component is thoroughly tested and ready to be released.
If you use an established framework to access your database, like Doctrine, Zend Framework, PhalconPHP etc., then you can assume that the framework has been tested and skip testing the actual CRUD operations; you can concentrate on what your own code does.
Some people might want to test even that, but in my view it is overkill and a waste of resources. You can always run the particular framework's own tests alongside yours if you just want to have more tests :)
If, however, you are responsible for the database layer classes and their interaction with your application, then yes, tests are a must. You might not run them all the time, but you need them whenever you have to prove that a database operation does or doesn't work through some piece of code.
Finally, you can use mocks, as Mark Baker suggested, and assume that the database will respond as you expect it to (since it has already been tested). You can then see how your application reacts to different responses or results.
Mocking database operations will actually make your tests run faster (along with the other benefits that come with this strategy) since there won't be any database interactions in the tests themselves. This can become really handy in a project with hundreds if not thousands of tests and continuous integration.
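A rough sketch of what such a mocked test could look like; SurveyRepository, its save() method and the controller constructor are assumptions made for the sake of the example, not something defined in the question:

<?php

use PHPUnit\Framework\TestCase;

class SurveyControllerTest extends TestCase
{
    public function testAddActionPersistsSurveyWithItsQuestions()
    {
        // Mock the persistence layer instead of hitting MySQL.
        $repository = $this->createMock(SurveyRepository::class);

        // Expect exactly one save() call with a survey carrying two questions.
        $repository->expects($this->once())
            ->method('save')
            ->with($this->callback(function ($survey) {
                return count($survey->getQuestions()) === 2;
            }));

        $controller = new SurveyController($repository);
        $controller->addAction([
            'title'     => 'Customer feedback',
            'questions' => ['How did you hear about us?', 'Would you recommend us?'],
        ]);
    }
}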
HTH
I'm writing unit tests for a project (written in PHP, using PHPUnit) that has its entire environment (loaded components, events, configuration, cache, per-environment singletons, etc.) held in an object which all the components use to interact with each other (a mediator pattern).
In order to make the unit tests run faster, I'm sharing the environment object and some other objects among tests in the same test case (using PHPUnit's setUpBeforeClass() and static properties). For example, in my test case for the view object (as in the V of MVC), I share the view manager object, which acts as a factory for view objects and is responsible for the actual rendering.
Even though, to the best of my knowledge, the objects I share shouldn't affect the integrity of the tests (in the view case, for example, the environment and view manager objects are shared, but a separate view object is created for every test, and that is the object actually being tested), it just feels increasingly wrong to me.
I would prefer it if each test used a completely isolated environment and couldn't affect other tests in the same test case in any way. However, that would make the tests run much slower, and it feels like a big price to pay for something whose downside I can't really pinpoint and which mainly just "feels wrong".
What do you think? Can you pinpoint any downsides, so I can convince myself it's worth the longer execution time? Or am I just overreacting and it's completely fine?
I share your feelings, so maybe I'll just state my goals and the solution I used when I faced that issue:
Devs should have a test suite that runs very, very fast
At least individual test cases should execute in less than a second
I really want to be sure I don't have interdependencies between my test cases
I'm going to assume you have a continuous integration server running. If not, a cron job might do, but consider setting up Jenkins; it's really, really easy.
For normal usage:
Just share as many fixtures as you need to get the speed you need. It might not be pretty, and there might be better solutions along the way, but if you have something that is expensive to create, just create it once.
I'd suggest lazy helper methods like getFoo() { if (!self::$foo) { /* create */ } return self::$foo; } over setUpBeforeClass(), because they can make sharing easier, but mainly because of the "Once a night" point below.
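Spelled out, such a helper looks roughly like this (createEnvironment() is just a stand-in for whatever is expensive to build in your setup):

<?php

use PHPUnit\Framework\TestCase;

abstract class SharedEnvironmentTestCase extends TestCase
{
    private static $environment;

    // Lazily created once, then shared by every test in the same PHP process.
    protected static function getEnvironment()
    {
        if (self::$environment === null) {
            self::$environment = self::createEnvironment();
        }

        return self::$environment;
    }

    private static function createEnvironment()
    {
        // ... build configuration, cache, view manager, per-environment singletons, etc. ...
        return new \stdClass(); // placeholder for your real environment object
    }
}

Because the static property lives in the base class, the same instance is reused by every test case class that extends it, which is what makes this easier to share than setUpBeforeClass() in each individual class.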
Once a night:
Run your test suite with --process-isolation, and in that run's bootstrap recreate your complete database and everything else.
It might run for 6 hours (disable code coverage for that!), but who cares? Your fixtures will be recreated for every single test case, since each one runs in a new PHP process and the static vars don't carry over.
This way you can be sure, once a day, that you haven't created interdependent tests. That's frequent enough that you still remember what you did (and you can run with --filter and --process-isolation if you need to fix something).
Like writing "normal" code, when you write test cases, it's fine to rely on knowledge of how fixture objects work.
If a given factory method is documented as generating new instances every time, then I see only downside in creating the factory itself anew for each test, especially if creating the factory is expensive.
It helps to keep in mind a key goal around writing unit tests. You want to know within 5-10 minutes whether you broke the build. That way you can go out to lunch, go to a meeting, go home, etc. after you get the "all clear". If you know that some part of the fixture is reusable without creating interactions, then you should use that knowledge to make your tests even more comprehensive within that 5-10 minute window. I understand the purist impulse here, but it buys you nothing in terms of test independence, and it unnecessarily limits what your test suite will accomplish for you.
Perhaps this is blasphemous, but if you can still manage a significant threshold of code coverage and you can also guarantee that no state pollution can exist (and that is your real issue: are you making sure that each test will not be affected by the data left over from the test before?), then I see no problem with leaving things the way they are.
Let the tests run quickly, and when a bug is found (which is inevitable in integration testing) then you have reason to invest the time in localizing one particular test or set of tests. If you have a toolkit which works for the general case, however, I would leave it as is.