Laravel Unit testing controllers error when using multiple methods - php

I am trying to test some of my controllers through unit testing, but there is something strange happening. I have the following code in my test case:
public function test_username_registration_too_short()
{
    $result = $this->action('POST', 'App\\Controllers\\API\\UserController@store', null, [
        'username' => 'foo'
    ]);
    $this->assertEquals('not_saved', $result->getContent());

    // $result = $this->action('POST', 'App\\Controllers\\API\\UserController@store', null, [
    //     'username' => 'foo'
    // ]);
    // $this->assertEquals('not_saved', $result->getContent());
}

public function test_username_registration_too_short_run_2()
{
    $result = $this->action('POST', 'App\\Controllers\\API\\UserController@store', null, [
        'username' => 'foo'
    ]);
    $this->assertEquals('not_saved', $result->getContent());
}
When I run this, the initial too_short test passes, but the exact same code on run 2 does not pass (it even manages to save the user). Yet if I put that same code twice in the same method (which is what is commented out now), it works perfectly? I have nothing in my setUp or tearDown methods, and I am a bit lost here.
The code in the controller is the following:
$user = new User(Input::all());

if ($user->save())
{
    return 'saved';
}

return 'not_saved';

I'm going to keep repeating myself on this question: there's a similar answer to a (somewhat) similar question. TL;DR: don't use a unit testing framework for functional / integration testing.
This is the area of functional testing, and there is a fabulous framework for it called Behat. You should do your own research, but essentially, while PHPUnit is great at testing more or less independent blocks of functionality, it sucks at testing bigger things like full request execution. Later you will start experiencing issues with session errors, a misconfigured environment, etc., all because each request is supposed to be executed in its own separate space and you force it into doing the opposite. Behat, on the other hand, works in a very different way: for each scenario (post robot, view non-existing page), it sends a fresh request to the server and checks the result. It is mostly used for final testing of everything working together, by making assertions on the final result (response object / HTML / JSON).
If you want to test your code the proper way, consider using the right tools for it. Once you know your way around Behat you'll fall in love with it; plus, you can use PHPUnit from within Behat to make individual assertions.
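To give a rough idea of the shape this takes, here is a minimal sketch of a Behat context. It assumes behat/behat and Guzzle are installed, and the endpoint URL, field name, and step wording are invented for illustration rather than taken from the question:

<?php
// features/bootstrap/FeatureContext.php -- each step below matches a line in a
// .feature file such as:
//   Scenario: Registration with a too-short username
//     When I register with the username "foo"
//     Then the registration should be rejected
use Behat\Behat\Context\Context;
use GuzzleHttp\Client;
use PHPUnit\Framework\Assert;

class FeatureContext implements Context
{
    private $client;
    private $response;

    public function __construct()
    {
        // A fresh context is built for every scenario, and every step sends a
        // real HTTP request, so two registrations never share in-memory state.
        $this->client = new Client(['base_uri' => 'http://localhost:8000', 'http_errors' => false]);
    }

    /**
     * @When I register with the username :username
     */
    public function iRegisterWithTheUsername($username)
    {
        $this->response = $this->client->post('/api/users', ['form_params' => ['username' => $username]]);
    }

    /**
     * @Then the registration should be rejected
     */
    public function theRegistrationShouldBeRejected()
    {
        // PHPUnit assertions can be used inside Behat steps, as mentioned above.
        Assert::assertSame('not_saved', (string) $this->response->getBody());
    }
}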


How can I do a partial integration test (phpunit)?

I am working on an extension (app) for nextcloud (which is based on Symfony). I have a helper class to extract data from the request that is passed by the HTTP server to PHP. A much-reduced version could look something like this (just to get the point across):
<?php

namespace OCA\Cookbook\Helpers;

class RequestHelper {
    public function getJson(){
        if($_SERVER['Request_Method' === 'PUT']){ // Notice the typos: should be $_SERVER['REQUEST_METHOD'] === 'PUT' and file_get_contents()
            $raw = file_get_content('php://input');
            return json_decode($raw, true);
        } else { /* ... */ }
    }
}
Now I want to test this code. Of course, I can do some unit testing and mock the $_SERVER variable. Potentially I would have to extract the file_get_contents call into its own method and do a partial mock of that class. I get that. The question is: how much is this test worth?
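For concreteness, such a test might look roughly like the sketch below. It assumes the file_get_contents() call has been extracted into a hypothetical protected method getRawInput() so that it can be stubbed:

<?php
use OCA\Cookbook\Helpers\RequestHelper;
use PHPUnit\Framework\TestCase;

class RequestHelperTest extends TestCase {
    public function testGetJsonForPutRequests() {
        // The superglobal is simply faked here, which already illustrates the
        // problem: the test only proves the method works for the value below.
        $_SERVER['REQUEST_METHOD'] = 'PUT';

        // Partial mock: stub only the (hypothetical) raw-input accessor.
        // onlyMethods() in recent PHPUnit versions; setMethods() in older ones.
        $helper = $this->getMockBuilder(RequestHelper::class)
            ->onlyMethods(['getRawInput'])
            ->getMock();
        $helper->method('getRawInput')
            ->willReturn('{"name": "Lasagna"}');

        $this->assertSame(['name' => 'Lasagna'], $helper->getJson());
    }
}

Written this way the test would likely trip over the REQUEST_METHOD typo; the concern is rather what happens when the test is written by mirroring the class line by line, as described next.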
If I just mimic the behavior of that class (white box testing) in my test cases, I might even copy and paste the typo I intentionally included here. And while this code is an MWE, real code will get more complex and should be compatible with different HTTP servers (like Apache, nginx, lighttpd, etc.).
So, ideally, I would like to do some automated testing in my CI process that uses a real HTTP server with different versions/programs to see if the integration is working correctly. Welcome to integration testing.
I could now run the nextcloud server with my extension included in a test environment and test some real API endpoints. This is more like functional testing as everything is tested (server, NC core, my code and the DB):
phpunit <---> HTTP server <---> nextcloud core <---> extension code <---> DB
                                                           ^
                                                           |
                                                           +--> RequestHelper
Apart from speed, I would have to take care to cover all possible paths through the class RequestHelper (the device under test, DUT). This seems a bit brittle to me in the long run.
All I could think of was adding a simple endpoint only for testing the functionality of the DUT, something like a pure echo endpoint. For production use, I do not feel comfortable having something like this lying around.
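Just to make that idea concrete (and not as a recommendation), such a test-only echo endpoint could be as small as the following sketch; the controller name and route are made up:

<?php
namespace OCA\Cookbook\Controller;

use OCP\AppFramework\Controller;
use OCP\AppFramework\Http\JSONResponse;
use OCA\Cookbook\Helpers\RequestHelper;

class DebugController extends Controller {
    /** @var RequestHelper */
    private $requestHelper;

    public function __construct($appName, $request, RequestHelper $requestHelper) {
        parent::__construct($appName, $request);
        $this->requestHelper = $requestHelper;
    }

    /** Hypothetical PUT /apps/cookbook/debug/echo-json route: return whatever getJson() decoded. */
    public function echoJson() {
        return new JSONResponse($this->requestHelper->getJson());
    }
}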
I am therefore looking for an integration test with a partial mock of the app (mocking the business logic + DB) to test the route between the HTTP server and my DUT. In other words, I want to test the integration of the HTTP server, nextcloud core, my controller, and the DUT above without any business logic of my app.
How can I realize such test cases?
Edit 1
As I learned from the comments that the problem statement was not as clear as I thought, I will try to explain a bit more, at the cost of the simplicity of the use case.
There is the nextcloud core that can be seen as a framework from the perspective of the app. So, there can be controller classes that can be used as targets for URL/API endpoints. So for example /apps/cookbook/recipe/15 with a GET method will fetch the recipe with id 15. Similarly, with PUT there can be a JSON uploaded to update that recipe.
So, inside the corresponding controller the structure is like
class RecipeController extends Controller {
    /* Here the PUT /apps/cookbook/recipe/{id} endpoint will be routed */
    public function update($id){
        $json = $this->requestHelper->getJson(); // Call to helper

        // Here comes the business logic,
        // aka calls to other classes that will save and update the state
        // and perform the DB operation
        $this->service->doSomething($json);

        // Return an answer if the operation terminated successfully
        return new JsonResponse(['state' => 'ok'], 200);
    }
}
I want to test the getJson() method against different servers. Here I want to mock at least the $this->service->doSomething($json) call to be a no-op. Ideally, I would like to spy on the resulting $json variable and make assertions on exactly that.
No doubt, in my test class it would be something like
class TestResponseHandler extends TestCase {
    public function setUp() { /* Set up the HTTP daemon as a system service */ }

    public function testGetJson() {
        // Create Guzzle client
        $client = new Client([
            'base_uri' => 'http://localhost:8080/apps/cookbook',
        ]);

        // Run the API call
        $headers = ...;
        $body = ...;
        $response = $client->put('recipe/15', ['headers' => $headers, 'body' => $body]);

        // Check the response body
        // ....
    }
}
Now, there are two PHP interpreters running: first, the one (A) that runs phpunit (and makes the HTTP request); second, the one (B) associated with the HTTP server listening on localhost:8080.
As the code above with the call to getJson() runs inside PHP interpreter (B), outside the phpunit instance, I cannot mock it directly, as far as I understand. I would have to change the main app's code if I am not mistaken.
Of course, I could provide (more or less) useful data in the test function and let the service->doSomething() method do its job, but then I am no longer testing only a subset of functions; I am doing functional or system testing. It also makes it harder to write well-aimed test cases if all these side effects need to be taken into account.

TDD: best practices mocking stacks of objects

I'm trying to get familiar with unit testing in PHP with a small API in Lumen.
Writing the first few tests was pretty nice with the help of some tutorials, but now I have encountered a point where I have to mock/stub a dependency.
My controller depends on a specific custom interface type hinted in the constructor.
Of course, I defined this interface/implementation-binding within a ServiceProvider.
public function __construct(CustomValidatorContract $validator)
{
    // App\Contracts\CustomValidatorContract
    $this->validator = $validator;
}

public function resize(Request $request)
{
    // Illuminate\Contracts\Validation\Validator
    $validation = $this->validator->validate($request->all());

    if ($validation->fails()) {
        $response = array_merge(
            $validation
                ->errors() // Illuminate\Support\MessageBag
                ->toArray(),
            ['error' => 'Invalid request data.']
        );

        // response is global helper
        return response()->json($response, 400, ['Content-Type' => 'application/json']);
    }
}
As you can see, my CustomValidatorContract has a method validate() which returns an instance of Illuminate\Contracts\Validation\Validator (the validation result). This in turn returns an instance of Illuminate\Support\MessageBag when errors() is called, and MessageBag has a toArray() method.
Now I want to test the behavior of my controller in case the validation fails.
/** @test */
public function failing_validation_returns_400()
{
    $EmptyErrorMessageBag = $this->createMock(MessageBag::class);
    $EmptyErrorMessageBag
        ->expects($this->any())
        ->method('toArray')
        ->willReturn(array());

    /** @var ValidationResult&\PHPUnit\Framework\MockObject\MockObject $AlwaysFailsTrueValidationResult */
    $AlwaysFailsTrueValidationResult = $this->createStub(ValidationResult::class);
    $AlwaysFailsTrueValidationResult
        ->expects($this->atLeastOnce())
        ->method('fails')
        ->willReturn(true);
    $AlwaysFailsTrueValidationResult
        ->expects($this->atLeastOnce())
        ->method('errors')
        ->willReturn($EmptyErrorMessageBag);

    /** @var Validator&\PHPUnit\Framework\MockObject\MockObject $CustomValidatorAlwaysFailsTrue */
    $CustomValidatorAlwaysFailsTrue = $this->createStub(Validator::class);
    $CustomValidatorAlwaysFailsTrue
        ->expects($this->once())
        ->method('validate')
        ->willReturn($AlwaysFailsTrueValidationResult);

    $controller = new ImageResizeController($CustomValidatorAlwaysFailsTrue);
    $response = $controller->resize(new Request);

    $this->assertEquals(400, $response->status());
    $this->assertEquals(
        'application/json',
        $response->headers->get('Content-Type')
    );
    $this->assertJson($response->getContent());

    $response = json_decode($response->getContent(), true);
    $this->assertArrayHasKey('error', $response);
}
This is a test that runs ok - but can someone please tell me if there is a better way to write this? It doesn't feel right.
Is this big stack of mock objects needed because I'm using a framework in the background? Or is there something wrong with my architecture that makes this feel so "overengineered"?
Thanks
What you are doing is not unit testing, because you are not testing a single unit of your application. This is an integration test performed with a unit testing framework, and that is the reason it looks intuitively wrong.
Unit testing and integration testing happen at different times, in different places, and require different approaches and tools - the former tests every single class and function of your code, while the latter couldn't care less about those; it just requests APIs and validates responses. Also, integration testing doesn't imply mocking anything, because its goal is to test how well your units integrate with each other.
You'll have a hard time supporting tests like that, because every time you change CustomValidatorContract you'll have to fix all the tests involving it. This is how UT improves code design: by requiring it to be as loosely coupled as possible (so you can pick a single unit and use it without the need to boot the entire app), respecting SRP & OCP, etc.
You don't need to test 3rd party code; pick an already-tested one instead. You don't need to test side effects either, because the environment is just like a 3rd party service and should be tested separately (return response() is a side effect). Also, it seriously slows down the testing.
All that leads to the idea that you only want to test your CustomValidatorContract in isolation. You don't even need to mock anything there: just instantiate the validator, give it a few sets of input data, and check how it goes.
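A sketch of what that looks like; ImageResizeValidator stands in for whatever concrete class implements CustomValidatorContract, and the field names are invented:

use PHPUnit\Framework\TestCase;

class ImageResizeValidatorTest extends TestCase
{
    public function invalidInputProvider()
    {
        return [
            'missing width'  => [['height' => 100]],
            'negative width' => [['width' => -5, 'height' => 100]],
        ];
    }

    /**
     * @dataProvider invalidInputProvider
     */
    public function test_invalid_input_fails_validation(array $input)
    {
        // No mocks: feed the real validator some input and inspect the result.
        $validator = new ImageResizeValidator();

        $this->assertTrue($validator->validate($input)->fails());
    }
}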
This is a test that runs ok - but can someone please tell me if there is a better way to write this? It doesn't feel right. Is this big stack of mock objects needed because I'm using a framework in the background? Or is there something wrong with my architecture that makes this feel so "overengineered"?
The big stack of mock objects indicates that your test subject is tightly coupled to many different things.
If you want to support simpler tests, then you need to make the design simpler.
In other words, instead of Controller.resize being one enormous monolithic thing that knows all of the details about everything, think about a design where resize only knows about the surface of things and delegates the work to other (more easily tested) pieces.
This is normal, in the sense that TDD is a lot about choosing designs that support better testing.
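One possible direction, purely as a sketch: pull the error-response building out of the controller into a small collaborator (ValidationErrorResponder is an invented name), so that resize() only has to say "validation failed, delegate":

class ValidationErrorResponder
{
    /** Build the 400 JSON response from a failed validation result. */
    public function respond(\Illuminate\Contracts\Validation\Validator $validation)
    {
        $payload = array_merge(
            $validation->errors()->toArray(),
            ['error' => 'Invalid request data.']
        );

        return response()->json($payload, 400, ['Content-Type' => 'application/json']);
    }
}

// In the controller, the failure branch then shrinks to:
//     if ($validation->fails()) {
//         return $this->errorResponder->respond($validation);
//     }
// and the responder can be covered by one small test of its own.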

How can I test a bot detection script?

I have written a bot detection script using PHP. I want to test that script by sending bots to click on links so I can know if the script works. How can I do that?
Here is the PHP code:
function bot_detected($USER_AGENT){

    $crawlers = array(
        'Googlebot',
        'msnbot',
        'Yahoo',
        'Lycos',
        'facebookexternalhit'
    );

    $crawlers_agents = implode('|', $crawlers);

    if(strpos($crawlers_agents, $USER_AGENT) === false){
        return false;
    } else {
        return TRUE;
    }
}
You could use a chrome plugin to set a custom user agent so you can test it, for example this one
The heart of testing is simulating the environment.
First, download a testing suite. I recommend and use PHPUnit. This will allow you to write tests, kept in separate files, that can survive code alterations. Without a testing suite, you will inevitably write a program called a driver and do the same thing, but driver files often get lost or forgotten, because each one is coded on an as-needed basis, and there is usually no system in place for storing drivers together or using a consistent, predictable naming scheme. For these reasons, I recommend learning a testing suite like PHPUnit, which will force you to think about test longevity and file naming conventions.
Once you have a testing suite selected, start by designing your test suite. Your short program is really just a function call, so you need the test to pass multiple values to your function, and then test the response to ensure that you get the predicted result.
In a hybrid PHP-pseudocode, this might look like the following:
require 'myfile.php';

class MyTest extends \PHPUnit\Framework\TestCase {

    /**
     * Provides parameters and expected results to the test method.
     */
    public function providerOfTestCases(){
        return [
            'Googlebot Test Case' => [ 'Googlebot', true ],
            'msnbot Test Case'    => [ 'msnbot', true ],
            // ... one case per crawler name ...
            'nonbot test case'    => [ 'randomStringData', false ]
        ];
    }

    /**
     * @dataProvider providerOfTestCases
     */
    public function testBotDetector( $userString, $expectedResult ){
        $functionResult = bot_detected( $userString );
        $message_on_failure = "When testing $userString, we expect "
            . ( $expectedResult ? "TRUE" : "FALSE" )
            . " but instead the function outputs "
            . ( $functionResult ? "TRUE" : "FALSE" );
        $this->assertEquals( $expectedResult, $functionResult, $message_on_failure );
    }
}
This test, for such a simple function, will mostly tell you what you already know: that for each string in your list of bot names you get a TRUE result.
In addition to this, I would add a logging function to your production system to keep track of all the $USER_AGENT values that are tested. The biggest problem with a function like the one you have written is that it relies on your preset data list being accurate. There is no way to test, in advance, that the values you have listed are actually the values that are delivered to your system. By logging all the values that are tested, you can regularly inspect the log for new values that should be considered, and for possible mistakes.
This second process relies on the comment made by @RiggsFolly on your original post. Your log files will only be filled by actual bot visits, so you must be patient as you wait for the logs to fill. Check the logs regularly, and make sure that you are seeing the values that you expect to see.
Remember to include in your log the result of your function's output so that you can triple-check the performance of your function.
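As a rough sketch of that logging idea (the log location and format are arbitrary choices, not requirements):

function bot_detected_logged($USER_AGENT) {
    $result = bot_detected($USER_AGENT);

    // Record every user agent that was checked, together with the verdict, so the
    // crawler list can later be reviewed against the traffic that really arrived.
    $line = date('c') . "\t" . ($result ? 'BOT' : 'HUMAN') . "\t" . $USER_AGENT . PHP_EOL;
    file_put_contents(__DIR__ . '/bot_detection.log', $line, FILE_APPEND);

    return $result;
}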
I hope all of this has been helpful. Happy coding!

phpunit is letting a test pass when it should fail

I'm following along with this Laracasts video https://laracasts.com/lessons/tdd-by-example
In the lesson we are creating a test in PHPunit for the store method on the SupportController
Here is what the test looks like so far.
public function test_submits_support_request_upon_form_submission()
{
    $this->call('POST', 'support/store', ['name' => 'john']);
}
I do not have a route yet that matches 'support/store'.
When I run this I get no failures. In the video, he got a Symfony\Component\HttpKernel\Exception\NotFoundHttpException error.
This makes sense because the route the test is trying to hit doesn't exist yet. I'm using Laravel 5 and he's using 4 in the video. Is there something I need to change in order to get this to work correctly?
It's entirely possible that Laravel 5 doesn't throw an exception when you call the call method -- even if it doesn't match a route. If a test method doesn't throw an exception and contains no assertions, it will pass. That said, depending on how you've configured phpunit, it should warn you that your test contains no assertions.
I'd start with
What does $this->call('POST', 'support/store', ['name' => 'john']); return?
Are you sure your test class is included in the run?
Are you sure the test runner runs your class?
Answer these, and I suspect an answer to your question will emerge.
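One way to answer the first question and make the failure visible again is to assert on whatever call() returns. Assuming Laravel 5's default exception handler turns the missing route into a 404 response rather than letting NotFoundHttpException escape to PHPUnit, something like this would keep failing until the route exists:

public function test_submits_support_request_upon_form_submission()
{
    $response = $this->call('POST', 'support/store', ['name' => 'john']);

    // Fails with a 404-vs-200 mismatch while support/store is not routed.
    $this->assertEquals(200, $response->getStatusCode());
}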

Should I be unit testing every piece of code

I have been starting unit testing recently and am wondering, should I be writing unit tests for 100% code coverage?
This seems futile when I end up writing more unit testing code than production code.
I am writing a PHP CodeIgniter project and sometimes it seems I write so much code just to test one small function.
For example, this unit test:
public function testLogin(){
    //setup
    $this->CI->load->library("form_validation");
    $this->realFormValidation = new $this->CI->form_validation;
    $this->CI->form_validation = $this->getMock("CI_Form_validation");

    $this->realAuth = new $this->CI->auth;
    $this->CI->auth = $this->getMock("Auth", array("logIn"));
    $this->CI->auth->expects($this->once())
        ->method("logIn")
        ->will($this->returnValue(TRUE));

    //test
    $this->CI->form_validation->expects($this->once())
        ->method("run")
        ->will($this->returnValue(TRUE));

    $_POST["login"] = TRUE;
    $this->CI->login();

    $out = $this->CI->output->get_headers();
    //check new header ends with dashboard
    $this->assertStringEndsWith("dashboard", $out[0][0]);

    //tear down
    $this->CI->form_validation = $this->realFormValidation;
    $this->CI->auth = $this->realAuth;
}

public function badLoginProvider(){
    return array(
        array(FALSE, FALSE),
        array(TRUE, FALSE)
    );
}

/**
 * @dataProvider badLoginProvider
 */
public function testBadLogin($formSubmitted, $validationResult){
    //setup
    $this->CI->load->library("form_validation");
    $this->realFormValidation = new $this->CI->form_validation;
    $this->CI->form_validation = $this->getMock("CI_Form_validation");

    //test
    $this->CI->form_validation->expects($this->any())
        ->method("run")
        ->will($this->returnValue($validationResult));

    $_POST["login"] = $formSubmitted;
    $this->CI->login();

    //check it went to the login page
    $out = output();
    $this->assertGreaterThan(0, preg_match('/Login/i', $out));

    //tear down
    $this->CI->form_validation = $this->realFormValidation;
}
For this production code
public function login(){
    if($this->input->post("login")){
        $this->load->library('form_validation');

        $username = $this->input->post('username');
        $this->form_validation->set_rules('username', 'Username', 'required');
        $this->form_validation->set_rules('password', 'Password', "required|callback_userPassCheck[$username]");

        if ($this->form_validation->run() === FALSE) {
            $this->load->helper("form");
            $this->load->view('dashboard/login');
        }
        else{
            $this->load->model('auth');
            echo "valid";
            $this->auth->logIn($this->input->post('username'), $this->input->post('password'), $this->input->post('remember_me'));
            $this->load->helper('url');
            redirect('dashboard');
        }
    }
    else{
        $this->load->helper("form");
        $this->load->view('dashboard/login');
    }
}
Where am I going so wrong?
In my opinion, it's normal for there to be more test code than production code. But test code tends to be straightforward; once you get the hang of it, writing tests is a no-brainer task.
Having said that, if you discover your test code is too complicated to write or cannot cover all the execution paths in your production code, that's a good indicator that some refactoring is needed: your method may be too long, attempt to do several things, have too many external dependencies, etc.
Another point is that it's good to have high test coverage, but it does not need to be 100% or some other very high number. Sometimes there is code that has no logic of its own, like code that simply delegates tasks to others. In that case you can skip testing it and use the @codeCoverageIgnore annotation to exclude it from your code coverage.
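For example, a pure delegation method (invented here for illustration) can be excluded like this:

/**
 * Simply hands the work to the mailer -- no logic of its own to test.
 *
 * @codeCoverageIgnore
 */
public function sendWelcomeMail($user)
{
    $this->mailer->send(new WelcomeMail($user));
}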
In my opinion it's logical that tests are much more code, because you have to test multiple scenarios, provide test data, and check that data for every case.
Typically a test coverage of 80% is a good value. In most cases it's not necessary to test 100% of the code, because you should not test, for example, setters and getters. Don't test just for the statistics ;)
The answer is it depends, but generally no. If you are publishing a library then lots of tests are important and can even help provide the examples for the docs.
For internal projects you would probably want to focus your tests on complex functions and things which would be bad if they went wrong. For each test, think: what is the value in having the test here rather than in the parent function?
What you want to avoid is testing anything that relies on implementation details, such as private methods/functions; otherwise, if you change the structure, you'll find you have to repeatedly rewrite the entire suite of tests.
It is better to test at a higher level: the public functions, or anything at the boundary between modules. A few tests at the highest level you can manage should yield reasonable coverage and ensure the code works in the way it is actually called. This is not to say lower-level functions shouldn't have tests, but at that level it's more about checking edge cases than having a test for the typical case against every function.
Instead of creating tests to increase coverage, create tests to cover bugs as you find and fix them, and create tests for new functionality when you would have to test it manually anyway. Create tests to guard against bad things which must not happen. Fragile tests which easily break during refactors should be removed or changed to be less dependent on the implementation of the function.
