I have some confusing questions about testing.
We're using Slim as the framework for our system. As far as I know, unit testing is the lowest level of testing, e.g. verifying that a class or method works as expected.
To test other functions of the system: for example, the system provides an API to search product information. This search function is designed as follows (a rough controller sketch follows after the list):
Clients post a keyword to the API endpoint, e.g. /search.
Handle the request and inject a SearchService into the controller.
Pass the keyword to the SearchService.
Push the keyword into the database, e.g. a search_history table.
Fetch the search results back in the controller and respond to the clients.
This is how the search API is designed.
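For illustration, here is a minimal sketch of that flow, assuming Slim 3 (where $this inside a route closure is the DI container); SearchService, its methods, and the container key are hypothetical names, not your actual code:

```php
<?php
// Hypothetical /search route wiring up the flow described above.
$app->post('/search', function ($request, $response) {
    /** @var SearchService $search */
    $search  = $this->get('searchService');          // inject SearchService
    $keyword = trim($request->getParsedBodyParam('keyword', ''));

    if ($keyword === '') {
        return $response->withJson(['error' => 'No keyword provided'], 422);
    }

    $search->logKeyword($keyword);                   // push into search_history
    $results = $search->findProducts($keyword);      // fetch the search results

    return $response->withJson(['results' => $results]);
});
```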
We created a functional test for the search API called SearchTest. This is how we did it (a sketch follows below):
Use the $this->runApp() method (from the Slim skeleton's base test case) to send requests to the API endpoint.
Write several test cases covering different scenarios, such as "no keyword provided", "no search results", etc.
Assert that each response's body and HTTP status are as expected.
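Concretely, the test might look something like this (a sketch assuming the Slim skeleton's BaseTestCase with its runApp($method, $path, $data) helper; the 422 status and JSON shape are assumptions for illustration):

```php
<?php
class SearchTest extends BaseTestCase
{
    public function testNoKeywordIsRejected()
    {
        $response = $this->runApp('POST', '/search', ['keyword' => '']);

        // Only the observable outcome is asserted, not the internals.
        $this->assertEquals(422, $response->getStatusCode());
    }

    public function testSearchReturnsResults()
    {
        $response = $this->runApp('POST', '/search', ['keyword' => 'phone']);

        $this->assertEquals(200, $response->getStatusCode());
        $payload = json_decode((string) $response->getBody(), true);
        $this->assertArrayHasKey('results', $payload);
    }
}
```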
Here is what confuses us:
Is sending requests and asserting on the responses all there is to it? Have we misunderstood so-called "functional testing"?
We shouldn't care about how it works internally; we just need to send requests and assert on the responses, right?
If not, should we check that dependencies or underlying components work as expected? For example, should we connect to the database and check whether the keyword was actually pushed into search_history?
Automated testing (and testing in general) is considered good practice in software engineering. However, there is a lot of heated discussion about the boundaries between testing methodologies and how test types are classified.
In that sense, a practical way to implement a testing methodology correctly is to learn from those you consider respectable in your software development context AND to make sure the practice you adopt serves the goal you want to achieve. Try to be consistent about this.
Just as a reference, let's take this definition:
Functional testing refers to activities that verify a specific action or function of the code. These are usually found in the code requirements documentation, although some development methodologies work from use cases or user stories. Functional tests tend to answer the question of "can the user do this" or "does this particular feature work."
Considering this approach, you want your tests to give you confidence that the user is able to execute some function (or feature) provided by your application. It does not matter (here) how that feature is provided; just put yourself in your user's shoes and ask, "Am I getting what I expect from this action?" The answer to that question should hint at the assertions for your test.
Non-functional testing refers to aspects of the software that may not be related to a specific function or user action, such as scalability or other performance, behavior under certain constraints, or security. Testing will determine the breaking point, the point at which extremes of scalability or performance leads to unstable execution. Non-functional requirements tend to be those that reflect the quality of the product, particularly in the context of the suitability perspective of its users.
If you want to test how a certain function is executed, it's probably a unit test that will help you make those assertions.
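For example, a unit test can verify the interaction your functional test deliberately ignores, i.e. that the keyword really gets pushed to search_history. This is only a sketch; SearchHistoryRepository, ProductRepository, and the SearchService constructor are assumptions about your code:

```php
<?php
use PHPUnit\Framework\TestCase;

class SearchServiceTest extends TestCase
{
    public function testKeywordIsPushedToSearchHistory()
    {
        // Mock the repositories so no real database is involved.
        $history = $this->createMock(SearchHistoryRepository::class);
        $history->expects($this->once())
                ->method('save')
                ->with('phone');                      // the "how" being verified

        $products = $this->createMock(ProductRepository::class);
        $products->method('searchByKeyword')->willReturn([]);

        $service = new SearchService($history, $products);
        $service->search('phone');
    }
}
```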
But again, remember that a test's name does not impose a sharp limit on what the test should do; the point is to know which classification best fits the type of assertions you are making (that helps you structure your tests better and gives you a clear idea of what you are testing). Try to be consistent, and focus on testing what you really need confidence in.
Related
I'm creating a PHP script that should process the POST data it receives (from AJAX or elsewhere) and send it on (to another script).
I'm wondering how to develop it in a "BDD way".
So far I've done the "processing part" by writing features with Behat and creating the required building blocks (classes) using phpspec.
But I'm stuck when it comes to testing the following features:
the script only processes / accepts POST data,
the script sends only valid data further after processing,
the script sends back errors in case of invalid data.
It seems to me that I could write the tests against the script itself, but then I'm wondering:
whether it's a good idea (it seems simple enough, but a bit messy, because there isn't much isolation)
how to do this elegantly in Behat (it seems messy to me to have to run my local server manually and hard-code its URL in my tests/contexts, but maybe that's just how it's done)
Any ideas or suggestions?
BDD:
Let's try to change your mindset a bit. BDD is about collaboration, and the automation part (the tests) is only the last piece of it.
Your acceptance tests, which is what you write via Behat, should only cover a specification of your feature via examples. That means: don't try to test every possible scenario, as you would with unit/integration tests, but specify only the minimum that describes your feature well enough to reveal its intention.
In most cases, the examples cover only positive scenarios, and 1-5 of them are enough.
A little help here: ask yourself what you would mention if these examples were part of the documentation for your customers. Specification by example is nothing more and nothing less than application documentation that can be tested automatically.
Testing level:
Unfortunately, I don't know the technical background of your script, so the answer will be mostly theory.
There are several levels of acceptance tests; the higher the level you test at, the more you cover, but the more expensive the tests become to create and maintain:
1. UI
2. HTTP request via the infrastructure
3. Initiate the application and inject a fake request
4. Call a controller directly
5. Call an application service which processes the domain logic
Here is my personal practice: because BDD works best with TDD, I always start at point 5), and sometimes I also add the higher level 3) to be sure the application works correctly as a whole. I use levels 2) and 1) very rarely, as I don't need to test my infrastructure via acceptance tests; that's not their purpose. A sketch of level 5) follows below.
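As a hypothetical sketch of level 5) (the service, its methods, and the step wording are made-up stand-ins for your script's logic), the Behat context calls the application service directly, with no HTTP or UI involved:

```php
<?php
use Behat\Behat\Context\Context;

class DataProcessingContext implements Context
{
    private $service;    // hypothetical application service
    private $result;

    public function __construct()
    {
        $this->service = new ProcessDataService();
    }

    /** @When I submit the payload :payload */
    public function iSubmitThePayload($payload)
    {
        $this->result = $this->service->process(json_decode($payload, true));
    }

    /** @Then the payload should be accepted */
    public function thePayloadShouldBeAccepted()
    {
        if (empty($this->result['valid'])) {
            throw new RuntimeException('Expected the payload to be accepted.');
        }
    }
}
```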
This is more of a comment than a proper answer, but it wouldn't fit well in a comment either...
First of all, it's good that you're trying to test all the things. But in order to do BDD (and hence use Behat), there has to be some sort of benefit for a stakeholder or part of the business with whom you should have a conversation about the feature at hand.
The fact that you're just describing a script that receives a particular input and transforms it into a particular output sounds pretty much like what you'd expect from a unit test, right?
On the other hand, if this particular script solves a stakeholder's need, and there's a story or scenario that the script helps satisfy, I assume there will be some change in the state of the system which you can test for. If so, go ahead and describe it using Behat, having a conversation with the appropriate stakeholders. I expect you'd need to set up an environment for your system, run your script in it, and then check that the system's state has changed appropriately.
When I started making a mobile app (that uses Laravel on the server) I decided not to dig into testing or coding standards, as I thought it was better to get a working app first.
A few months later I have a working app, but my code doesn't follow any standards, nor have I written any tests.
The current directory structure I'm using is:
app/controllers: Contains all the controllers the app uses. The controllers aren't exactly thin: they contain most of the logic, and some even have multiple conditional statements (if/else). All database interaction happens in the controllers.
app/models: Defines relationships with other models and contains functions specific to the particular model, e.g. a validate function.
app/libraries: Contains custom helper functions.
app/database: Contains the migrations and seeds.
My app currently works; the slack is probably because I work on it alone.
My concerns:
Should I go ahead and release the app and then see whether it's even worth the effort to refactor, or should I refactor first?
I do want to refactor the code, but I'm unsure which approach to take. Should I first get the standards right and then make my code testable? Or should I not worry about standards (and continue using a classmap to autoload) and just try to make my code testable?
How should I structure my files?
Where should I place interfaces, abstract classes, etc.?
Note: I am digging into testing and coding standards with whatever resources I can find, but if you could point me to some resources I would appreciate it.
Oh dear. The classic definition of legacy code is code without unit tests. So now you are in the unfortunate position where, if you ever have to amend/enhance/reuse this code base, you are going to absorb a significant cost. The further you get from your current implementation, the higher that cost becomes. That's called technical debt. Having testable code doesn't get rid of it, though; it simply reduces the interest rate.
Should you do it? Well, if you release now and it achieves some popularity, you will then need to do more to it...
If it's going to be received with total indifference or screams of dismay, then there's no point in releasing it at all; in fact, that would be counter-productive.
Should you choose to carry on with the code base, I can't see any efficient way to refactor to a coding standard without unit tests to prove you haven't broken the app in doing so. If you find a coding standard where testable and tested aren't a key part of it, ignore it; it's drivel.
Of course, you still have the problem of how to change the code to make it testable without breaking it. I suspect you can iterate towards that without too much difficulty, say by delegating from your fat controllers to service classes, as sketched below.
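As a hypothetical sketch (the class and method names are made up, not taken from the question), delegation means moving the logic and database access out of the controller into a class you can unit-test in isolation:

```php
<?php
class OrderService
{
    // Logic that used to live in the fat controller goes here, where it
    // can be tested without spinning up a full HTTP request cycle.
    public function openOrdersForUser($userId)
    {
        return Order::where('user_id', $userId)
                    ->where('status', 'open')
                    ->get();
    }
}

class OrderController extends BaseController
{
    protected $orders;

    public function __construct(OrderService $orders)
    {
        $this->orders = $orders;   // resolved by Laravel's IoC container
    }

    public function index($userId)
    {
        // The controller is now thin: it just maps HTTP to a service call.
        return Response::json($this->orders->openOrdersForUser($userId));
    }
}
```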
Two key points.
Have at least some test first, even if it's a wholly manual one, say a printout of the web page, before you do anything. A set of integration tests, perhaps using an automation suite, would be very useful, especially if you already have a coupling problem.
The other is: don't make too many changes at once, where "too many" is measured by how many operations in the app will be affected by the code you're changing.
Michael Feathers' (no affiliation whatsoever) Working Effectively with Legacy Code would be a good purchase if you want to learn more.
Hopefully this lesson has already been learnt. Don't do this again.
Last but not least, doing this sort of exercise is a great learning experience; rest assured you will run into situations (lots of them) where this perennial mistake has been made.
I have a question on how to implement behat/mink functional tests.
In my web app, I have users that can access some data sheets if they have the required credentials (i.e. no access/ read only / write).
I want to be able to test all the possible contexts via behat/mink.
The question is: what is the best practice for such testing?
Some devs told me that I have to create a scenario for each type of user I would like to use, and then reuse the user created there in other tests.
But I am not very comfortable with this idea: I believe it introduces coupling between my tests. If the test that creates the user fails, then the test that checks that user's access to my datasheet will also fail.
So I thought I could use fixtures instead: before testing my app, I run a script that inserts all the profiles I need. I keep some tests dedicated to creating users, and I use the fixtures to check whether a specific user is allowed to access a specific datasheet.
The downside of this solution is that I will have to maintain the fixture set.
Do you have any suggestions or ideas?
Hi user3333860 (what a username XD),
I'm not an expert in testing, and these days I'm more into Ruby/RSpec, but I personally think both solutions are good and both are used.
Use a feature to create your User:
If your test for creating a user fails, it may mean that your user creation code is also messed up.
So the fact that other tests then fail doesn't seem like a drawback to me. But I do understand that you don't want coupling between your tests.
The main point is: are your tests run in a fixed order or randomly (e.g. RSpec doesn't always launch tests in the same order), and are you prepared to have the same setup run multiple times in different features so that your other tests can complete successfully?
Use fixtures:
Well, it is also a good and popular solution (in fact, the one I use), and you have already pinpointed the drawback: you will have to maintain them.
In the end, I'd take the fixtures path ONLY with a helper tool like FactoryGirl, which helps you maintain your object templates (here is the PHP port of it):
https://github.com/breerly/factory-girl-php
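Whatever tool you pick, the shape is roughly this (a hypothetical sketch using a plain PDO insert in a Behat hook, not the factory-girl-php API; table names and credentials are placeholders):

```php
<?php
use Behat\Behat\Context\Context;

class FixtureContext implements Context
{
    private $db;

    public function __construct()
    {
        // Placeholder connection details for the test database.
        $this->db = new PDO('mysql:host=localhost;dbname=app_test', 'test', 'test');
    }

    /** @BeforeScenario */
    public function loadUserFixtures()
    {
        // Re-create the known profiles before every scenario, so no test
        // depends on a user created by another scenario.
        $this->db->exec('DELETE FROM users');
        $insert = $this->db->prepare(
            'INSERT INTO users (login, access_level) VALUES (?, ?)'
        );
        foreach ([['alice', 'write'], ['bob', 'read_only'], ['eve', 'none']] as $row) {
            $insert->execute($row);
        }
    }
}
```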
Hope I helped with your dilemma.
I want to write some unit tests in my project (I am new to testing), but online tutorials seem to show examples that test only the simplest stuff.
What I want to test is the case where sending a POST to addAction in my SurveyController results in the corresponding rows being added to my survey and question tables (one-to-many).
What are the best practices for testing database-related code? Do I create a separate database for my test environment and run the tests against it? Is that the only, and the right, option?
It depends on your circumstances.
Here is my take on this:
The idea is to test but also be DRY (Don't repeat yourself). Your tests must cover all or as many different cases as possible to ensure that your component is thoroughly tested and ready to be released.
If you use an already-developed framework to access your database, such as Doctrine, Zend Framework, or PhalconPHP, then you can assume the framework has been tested and skip testing the actual CRUD operations. You can concentrate on what your own code does.
Some people might want to test even that, but in my view it is overkill and a waste of resources. Those who simply want more tests can always include the particular framework's own test suite alongside their own :)
If, however, you are responsible for the database-layer classes and their interaction with your application, then yes, tests are a must. You might not run them every time, but when you need to prove that a database operation works (or doesn't) through some piece of code, you need to have them.
Finally, you can use mocks, as Mark Baker suggested, and assume the database will respond the way you expect it to (since it has already been tested). You can then see how your application reacts to different responses or results; a sketch follows below.
Mocking database operations will actually make your tests run faster (along with the other benefits of this strategy), since there are no database interactions in the tests themselves. This becomes really handy in a project with hundreds, if not thousands, of tests and continuous integration.
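For instance (a sketch only; SurveyRepository and SurveyService are hypothetical stand-ins for whatever sits between your controller and the database):

```php
<?php
use PHPUnit\Framework\TestCase;

class SurveyServiceTest extends TestCase
{
    public function testAddCreatesSurveyAndQuestions()
    {
        $repository = $this->createMock(SurveyRepository::class);

        // Expect one survey row plus one row per question; no real DB needed.
        $repository->expects($this->once())
                   ->method('insertSurvey')
                   ->with('Customer feedback')
                   ->willReturn(42);                 // pretend new survey id
        $repository->expects($this->exactly(2))
                   ->method('insertQuestion')
                   ->with(42, $this->isType('string'));

        $service = new SurveyService($repository);
        $service->add('Customer feedback', ['How did we do?', 'Would you return?']);
    }
}
```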
HTH
Currently we have a lot of web pages that either have SQL statements embedded in them or call a specific PHP script that does one specific job (e.g. getNames.php) as part of an AJAX callback. Neither is particularly maintainable.
I was thinking about using a REST-like API to get the necessary data to the client and then munge the data into something usable. This is attractive, as it lessens the burden of maintaining highly complex SQL in code and allows centralisation of data access (so just one AJAX call to get the data, not lots of little ones). It also allows the database to change while lessening the impact on the client.
However there are two problems I can see with this strategy:
The site is a game, so I need the REST-like API to be protected from abuse/cheating as much as possible.
All the REST API examples I've seen use a controller that handles requests at the site root. That's not ideal for me, since we are at //company/games/game/ and there is already an index.php at the root (//company/).
What options and strategies do I have for the two constraints I listed?
Well, you're asking for opinion, but I'm seasoned enough (having written many, many API schemes over the years) that I'm willing to open myself up to net abuse. The key here, and this should provide a basis for answering both of your questions, is that REST is simply a set of principles. Sure, there are people who follow a RESTful pattern explicitly, but that isn't practical for most.
Take the Flickr "REST" API for instance... a call may look like this: http://api.flickr.com/services/rest/?method=flickr.favorites.getContext&api_key=a114adf91150953107987e4c3dc14df8&photo_id=6033564557&format=json&nojsoncallback=1&api_sig=0d2c215992d643ef6fe4a085805f7059
Not very RESTful from a patterning perspective; however, it contains all of the elements of REST and is a fine enough model. You can understand what it is doing at a glance, and you can easily build on top of it.
In the end, REST is a set of principles, not a protocol, and not even a pattern in and of itself. You're free to implement it however you want. There's always an interoperability intermediate layer, and the point is simply to make it understandable; many of the REST patterns actually get in the way of that, favoring form over function.
In fact, most of the patterns I've seen are insufficient for anything particularly advanced, but that's part of the point of REST: Keep It Simple (Stupid). A sketch of this single-endpoint style, which also sidesteps your root-controller constraint, follows below.
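As a hypothetical sketch (the method names and signing scheme are made up): a single Flickr-style endpoint can live anywhere, e.g. //company/games/game/api.php, which avoids touching the root index.php, and an HMAC signature over the parameters raises the bar against casual tampering:

```php
<?php
// api.php - a single REST-like endpoint in the game's own directory.
const API_SECRET = 'replace-with-a-real-secret';

$params = $_POST;                                  // e.g. method=game.getScores&...
$sig    = isset($params['api_sig']) ? $params['api_sig'] : '';
unset($params['api_sig']);

// Verify an HMAC over the sorted parameters before doing any work.
ksort($params);
$expected = hash_hmac('sha256', http_build_query($params), API_SECRET);
if (!hash_equals($expected, $sig)) {
    http_response_code(403);
    exit(json_encode(['error' => 'bad signature']));
}

// Dispatch on the "method" parameter, whitelisting what may be called.
$handlers = [
    'game.getScores' => function () { /* query and return scores */ return []; },
    'game.getNames'  => function () { /* replaces getNames.php   */ return []; },
];

$method = isset($params['method']) ? $params['method'] : '';
if (!isset($handlers[$method])) {
    http_response_code(400);
    exit(json_encode(['error' => 'unknown method']));
}

header('Content-Type: application/json');
echo json_encode(['result' => $handlers[$method]()]);
```

Bear in mind that any secret shipped to a browser client can be extracted, so for a game the real anti-cheating defence is still server-side: sessions, rate limiting, and validating every move on the server.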