Laravel remembers original response during http tests - php

Given the following pest test:
it('allows admins to create courses', function () {
    $admin = User::factory()->admin()->create();
    actingAs($admin);

    $this->get('/courses')->assertDontSee('WebTechnologies');

    $this->followingRedirects()->post('/courses', [
        'course-name' => 'WebTechnologies',
    ])->assertStatus(200)->assertSee('WebTechnologies');
});
The above should fully work; however, the second request post('/courses')...
fails saying that:
Failed asserting that <...> contains "WebTechnologies".
If I remove the first request:
it('allows admins to create courses', function () {
    $admin = User::factory()->admin()->create();
    actingAs($admin);

    $this->followingRedirects()->post('/courses', [
        'course-name' => 'WebTechnologies',
    ])->assertStatus(200)->assertSee('WebTechnologies');
});
The test passes.
If I remove the second request instead:
it('allows admins to create courses', function () {
    $admin = User::factory()->admin()->create();
    actingAs($admin);

    $this->get('/courses')->assertDontSee('WebTechnologies');
});
It also passes.
So why should the combination of the two cause them to fail? I feel Laravel is caching the original response, but I can't find anything within the documentation supporting this claim.

I have created an issue about this on laravel/sanctum, as my problem was about authentication and related things...
https://github.com/laravel/sanctum/issues/377
One of the maintainers of Laravel said:
You can't perform two HTTP requests in the same test method. That's not supported.
I would have wanted a much clearer explanation of why it's not supported,
but I guess we'll never know (unless we dive deep into the Laravel framework and trace the request).
UPDATE:
My guess is that, knowing how Laravel works, for each REAL request Laravel initializes a new instance of the app...
but when it comes to tests, Laravel initializes the app for each test case, NOT for each request, therefore making the second request invalid.
Here is the file that creates the request when doing a test:
vendor/laravel/framework/src/Illuminate/Foundation/Testing/Concerns/MakesHttpRequests.php
It's in the call method, line 526 (Laravel v9.26.1).
As you can see, Laravel only uses one app instance and does not rebuild the app.
Line 528: $kernel = $this->app->make(HttpKernel::class);
https://laravel.com/docs/9.x/container#the-make-method
The $kernel variable is an instance of vendor/laravel/framework/src/Illuminate/Foundation/Http/Kernel.php.
My guess here is that HttpKernel::class is a singleton.
P.S. I could do a little more of a deep dive, but I've procrastinated too much already by answering this question. It was fun though.
TL;DR.
You can't perform two HTTP requests in the same test method. That's not supported.
UPDATE:
I was not able to stop myself...
I found Laravel initializing the Kernel as a singleton in
/{project_dir}/bootstrap/app.php:29-32
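In a default Laravel 9 skeleton that section looks roughly like this (quoted from memory, so treat it as a sketch rather than the exact file contents):

$app->singleton(
    Illuminate\Contracts\Http\Kernel::class,
    App\Http\Kernel::class
);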

Please make sure not to use any classic singleton pattern that isn't created through a singleton binding or a facade.
https://laravel.com/docs/9.x/container#binding-a-singleton
$this->app->singleton(Transistor::class, function ($app) {
    return new Transistor($app->make(PodcastParser::class));
});
The Laravel app won't be completely restarted during tests, unlike with separate incoming HTTP requests - even if you call different API endpoints in your tests.
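Since the app isn't rebuilt between requests made inside a single test, one way to restructure the original scenario is to keep each HTTP call in its own test - each test method gets a freshly bootstrapped application. For example:

it('does not list the course before it is created', function () {
    actingAs(User::factory()->admin()->create());

    $this->get('/courses')->assertDontSee('WebTechnologies');
});

it('allows admins to create courses', function () {
    actingAs(User::factory()->admin()->create());

    $this->followingRedirects()->post('/courses', [
        'course-name' => 'WebTechnologies',
    ])->assertStatus(200)->assertSee('WebTechnologies');
});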

Related

How can I do a partial integration test (phpunit)?

I am working on an extension (app) of nextcloud (which is based on Symfony). I have a helper class to extract data from the request that is passed by the HTTP server to PHP. A much-reduced version could look something like this (to get the point across):
<?php

namespace OCA\Cookbook\Helpers;

class RequestHelper {
    public function getJson() {
        if ($_SERVER['Request_Method'] === 'PUT') { // Notice the typos, should be REQUEST_METHOD
            $raw = file_get_content('php://input');
            return json_decode($raw, true);
        } else { /* ... */ }
    }
}
Now I want to test this code. Of course, I can do some unit testing and mock the $_SERVER variable. Potentially I would have to extract the file_get_content call into its own method and do a partial mock of that class. I get that. The question is: How much is this test worth?
If I just mimic the behavior of that class (white box testing) in my test cases I might even copy and paste the typo I intentionally included here. As this code is an MWE, real code might get more complex and should be compatible with different HTTP servers (like apache, nginx, lighttpd etc).
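To illustrate, such a white-box unit test could look roughly like this (a sketch; it assumes the file_get_content call has been extracted into a hypothetical readRawInput() method so it can be stubbed on a partial mock):

use OCA\Cookbook\Helpers\RequestHelper;
use PHPUnit\Framework\TestCase;

class RequestHelperTest extends TestCase {
    public function testGetJsonForPutRequests(): void {
        $_SERVER['REQUEST_METHOD'] = 'PUT';

        // Partial mock: only the (hypothetical) raw-input reader is stubbed out
        $helper = $this->getMockBuilder(RequestHelper::class)
            ->onlyMethods(['readRawInput'])
            ->getMock();
        $helper->method('readRawInput')
            ->willReturn('{"name": "Pancakes"}');

        $this->assertSame(['name' => 'Pancakes'], $helper->getJson());
    }
}

Whether a test like this catches the REQUEST_METHOD typo above depends entirely on the test not copying it, which is exactly the concern with white-box testing.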
So, ideally, I would like to do some automated testing in my CI process that uses a real HTTP server with different versions/programs to see if the integration is working correctly. Welcome to integration testing.
I could now run the nextcloud server with my extension included in a test environment and test some real API endpoints. This is more like functional testing as everything is tested (server, NC core, my code and the DB):
phpunit <---> HTTP server <---> nextcloud core <---> extension code <---> DB
                                                           ^
                                                           |
                                                           +--> RequestHelper
Apart from speed, I would have to carefully make sure that all possible paths through the class RequestHelper (device under test, DUT) are exercised. This seems a bit brittle to me in the long run.
All I could think of is adding a simple endpoint only for testing the functionality of the DUT, something like a pure echo endpoint or so. For production use, I do not feel comfortable having something like this lying around.
I am therefore looking for an integration test with a partial mock of the app (mocking the business logic + DB) to test the route between the HTTP server and my DUT. In other words, I want to test the integration of the HTTP server, nextcloud core, my controller, and the DUT above without any business logic of my app.
How can I realize such test cases?
Edit 1
As I found from the comments the problem statement was not so obviously clear, I try to explain a bit at the cost of the simplicity of the use-case.
There is the nextcloud core that can be seen as a framework from the perspective of the app. So, there can be controller classes that can be used as targets for URL/API endpoints. So for example /apps/cookbook/recipe/15 with a GET method will fetch the recipe with id 15. Similarly, with PUT there can be a JSON uploaded to update that recipe.
So, inside the corresponding controller the structure is like
class RecipeController extends Controller {
    /* Here the PUT /apps/cookbook/recipe/{id} endpoint will be routed */
    public function update($id) {
        $json = $this->requestHelper->getJson(); // Call to helper

        // Here comes the business logic,
        // aka calls to other classes that will save and update the state
        // and perform the DB operation
        $this->service->doSomething($json);

        // Return an answer if the operation terminated successfully
        return new JsonResponse(['state' => 'ok'], 200);
    }
}
I want to test the getJson() method against different servers. Here I want to mock at least the $this->service->doSomething($json) to be a no-op. Ideally, I would like to spy into the resulting $json variable to test that exactly.
No doubt, in my test class it would be something like
use GuzzleHttp\Client;
use PHPUnit\Framework\TestCase;

class TestResponseHandler extends TestCase {
    protected function setUp(): void { /* Set up the HTTP daemon as a system service */ }

    public function testGetJson() {
        // Create Guzzle client
        $client = new Client([
            'base_uri' => 'http://localhost:8080/apps/cookbook',
        ]);

        // Run the API call
        $headers = ...;
        $body = ...;
        $response = $client->put('recipe/15', [
            'headers' => $headers,
            'body' => $body,
        ]);

        // Check the response body
        // ....
    }
}
Now, I have two PHP interpreters running: first, the one (A) that runs phpunit (and makes the HTTP request); second, the one (B) associated with the HTTP server listening on localhost:8080.
As the code above with the call to getJson() is running inside a PHP interpreter (B) outside the phpunit instance I cannot mock directly as far as I understand. I would have to change the main app's code if I am not mistaken.
Of course, I could provide (more or less) useful data in the test function and let the service->doSomething() method do its job but then I am no longer testing only a subset of functions but I am doing functional or system testing. Also, this makes it harder to generate well-aimed test cases if all these side-effects need to be taken into account.

Log contextual data (without specifying it explicitly)

I'm working on a multi-tenant app where I need to log a lot more data than what I pass to the log facade. What I mean is, every time I do this...
Log::info('something happened');
I get this:
[2017-02-15 18:12:55] local.INFO: something happened
But I want to get this:
[2017-02-15 18:12:55] [my ec2 instance id] [client_id] local.INFO: something happened
As you can see I'm logging the EC2 instance ID and my app's client ID. I'm of course simplifying this, as I need to log a lot more stuff in there. When I consume and aggregate these logs, having these extra fields makes them incredibly handy for figuring out what went wrong and where.
In the Zend Framework, I usually subclass the logger and add these extra fields in my subclass but I'm not sure how I can do that with Laravel. I can't find where the logger is instantiated so that I can plug my custom logger in (if that is even the way to go in Laravel).
So, I'm not asking how to get the EC2 instance ID and the other stuff, I'm only asking what the proper way to "hot wire" the Laravel logger is to be able to plug this in.
Just an idea... the logger in Laravel is really a Monolog instance... You could push a processor onto it and do whatever processing you want for each entry... like so...
<?php

$logger->pushProcessor(function ($record) {
    $record['extra']['dummy'] = 'Hello world!';

    return $record;
});
As per the Laravel doc you can hook up into the monolog config at boot...
Custom Monolog Configuration
If you would like to have complete control over how Monolog is
configured for your application, you may use the application's
configureMonologUsing method. You should place a call to this method
in your bootstrap/app.php file right before the $app variable is
returned by the file:
$app->configureMonologUsing(function ($monolog) {
    $monolog->pushHandler(...);
});

return $app;
So instead just push a processor on the $monolog instance passed to the hook...
Just an idea, I have not tried this in Laravel but used Monolog before...
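Putting the two together, something like this in bootstrap/app.php might do it (an untested sketch; getEc2InstanceId() and getClientId() are hypothetical helpers you would implement yourself):

$app->configureMonologUsing(function ($monolog) {
    $monolog->pushProcessor(function ($record) {
        // Placeholders for however you resolve these values in your app
        $record['extra']['instance_id'] = getEc2InstanceId();
        $record['extra']['client_id'] = getClientId();

        return $record;
    });
});

return $app;

The values land in the record's extra array; if you want them rendered at the front of the line as in the example above, you would likely also give the handler a custom LineFormatter format string that references %extra%.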

Testing Laravel Service Providers

I'm (we're) creating a package that acts as a core component for our future CMS and of course that package needs some unit tests.
When the package registers, the first thing it does is set the back/frontend context like this:
class FoundationServiceProvider extends ServiceProvider
{
    // ... stuff ...

    public function register()
    {
        // Switch the context.
        // URLs containing '/admin' will get the backend context,
        // all other URLs will get the frontend context.
        $this->app['build.context'] = request()->segment(1) === 'admin'
            ? Context::BACKEND
            : Context::FRONTEND;
    }
}
So when I visit the /admin url, the app('build.context') variable will be set to backend, otherwise it will be set to frontend.
To test this I've created the following test:
class ServiceProviderTest extends \TestCase
{
    public function test_that_we_get_the_backend_context()
    {
        $this->visit('admin');

        $this->assertEquals(Context::BACKEND, app('build.context'));
    }
}
When I'm running the code in the browser (navigating to /admin) the context will get picked up and calling app('build.context') will return backend, but when running this test, I always get 'frontend'.
Is there something I did not notice, or is there some incorrect code in my phpunit usage?
Thanks in advance
Well, this is a tricky situation. As I understand it, Laravel initiates two instances of the framework when running tests - one that is running the tests and another that is being manipulated through instructions. You can see it in the tests/TestCase.php file.
So in your case you are manipulating one instance, but checking the context of another (the one that did not visit /admin and is just running the tests). I don't know if there's a way to access the manipulated instance directly - there's nothing helpful in the documentation on this issue.
One workaround would be to create a route just for testing purposes, something like /admin/test_context, which would output the current context, and then check it with
$this->visit('admin/test_context')->see(Context::BACKEND);
Not too elegant, but that should work. Otherwise, look around in laravel, maybe you will find some undocumented feature.
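For example, something along these lines (a rough sketch; ideally the route would only be registered in the testing environment):

// Test-only endpoint that simply echoes the current context
Route::get('admin/test_context', function () {
    return app('build.context');
});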

Programmatically add exception from CSRF check from Laravel package

The Problem in a Nutshell
I'm looking for a way to remove VerifyCsrfToken from the global middleware pipeline from within a package without the user having to modify App\Http\Middleware\VerifyCsrfToken. Is this possible?
The Use Case
I'm developing a package that would make it easy to securely add push-to-deploy functionality to any Laravel project. I'm starting with Github. Github uses webhooks to notify 3rd party apps about events, such as pushes or releases. In other words, I would register a URL like http://myapp.com/deploy at Github, and Github will send a POST request to that URL with a payload containing details about the event whenever it happens, and I could use that event to trigger a new deployment. Obviously, I don't want to trigger a deployment on the off chance that some random (or perhaps malicious) agent other than the Github service hits that URL. As such, Github has a process for securing your webhooks. This involves registering a secret key with Github that they will use to send a special, securely hashed header along with the request that you can use to verify it.
My approach to making this secure involves:
Random Unique URL/Route and Secret Key
First, I automatically generate two random, unique strings that are stored in the .env file and used as a secret key and a secret route within my app. In the .env file this looks like:
AUTODEPLOY_SECRET=BHBfCiC0bjIDCAGH2I54JACwKNrC2dqn
AUTODEPLOY_ROUTE=UG2Yu8QzHY6KbxvLNxcRs0HVy9lQnKsx
The config for this package creates two keys, auto-deploy.secret and auto-deploy.route that I can access when registering the route so that it never gets published in any repo:
Route::post(config('auto-deploy.route'), 'MyController@index');
I can then go to Github and register my webhook with that URL and the secret key.
In this way, both the deployment URL and the key used to authenticate the request will remain secret, and prevent a malicious agent from triggering random deployments on the site.
Global Middleware for Authenticating Webhook Requests
The next part of the approach involves creating a piece of global middleware for the Laravel app that would catch and authenticate the webhook requests. I am able to make sure that my middleware gets executed near the beginning of the queue by using an approach demonstrated in this Laracasts discussion thread. In the ServiceProvider for my package, I can prepend a new global middleware class as follows:
public function boot(Illuminate\Contracts\Http\Kernel $kernel)
{
    // register the middleware
    $kernel->prependMiddleware(Middleware\VerifyWebhookRequest::class);

    // load my route
    include __DIR__.'/routes.php';
}
My Route looks like:
Route::post(
    config('auto-deploy.route'), [
        'as' => 'autodeployroute',
        'uses' => 'MyPackage\AutoDeploy\Controllers\DeployController@index',
    ]
);
And then my middleware would implement a handle() method that looks something like:
public function handle($request, Closure $next)
{
    if ($request->path() === config('auto-deploy.route')) {
        if ($request->secure()) {
            // handle authenticating webhook request
            if (/* webhook request is authentic */) {
                // continue on to controller
                return $next($request);
            } else {
                // abort if not authenticated
                abort(403);
            }
        } else {
            // request NOT submitted via HTTPS
            abort(403);
        }
    }

    // Passthrough if it's not our secret route
    return $next($request);
}
This function works right up until the continue on to controller bit.
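(For what it's worth, the /* webhook request is authentic */ placeholder boils down to recomputing GitHub's HMAC over the raw request body with the shared secret and comparing it to the X-Hub-Signature header GitHub sends - roughly like this, a sketch rather than code from the package:)

$expected = 'sha1='.hash_hmac('sha1', $request->getContent(), config('auto-deploy.secret'));

if (! hash_equals($expected, (string) $request->header('X-Hub-Signature'))) {
    abort(403);
}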
The Problem in Detail
Of course the problem here is that since this is a POST request, and there is no session() and no way to get a CSRF token in advance, the global VerifyCsrfToken middleware generates a TokenMismatchException and aborts. I have read through numerous forum threads, and dug through the source code, but I can't find any clean and easy way to disable the VerifyCsrfToken middleware for this one request. I have tried several workarounds, but I don't like them for various reasons.
Workaround Attempt #1: Have user modify VerifyCsrfToken middleware
The documented and supported method for solving this problem is to add the URL to the $except array in the App\Http\Middleware\VerifyCsrfToken class, e.g.
// The URIs that should be excluded from CSRF verification
protected $except = [
'UG2Yu8QzHY6KbxvLNxcRs0HVy9lQnKsx',
];
The problem with this, obviously, is that when this code gets checked into the repo, it will be visible to anyone who happens to look. To get around this I tried:
protected $except = [
    config('auto-deploy.route'),
];
But PHP didn't like it (a property initializer has to be a constant expression, so you can't call config() there). I also tried using the route name here:
protected $except = [
    'autodeployroute',
];
But this doesn't work either. It has to be the actual URL. The thing that actually does work is to override the constructor:
protected $except = [];

public function __construct(\Illuminate\Contracts\Encryption\Encrypter $encrypter)
{
    parent::__construct($encrypter);

    $this->except[] = config('auto-deploy.route');
}
But this would have to be part of the installation instructions, and would be an unusual install step for a Laravel package. I have a feeling this is the solution I'll end up adopting, as I guess it's not really that difficult to ask users to do this. And it has the upside of at least possibly making them conscious that the package they're about to install circumvents some of Laravel's built in security.
Workaround Attempt #2: catch the TokenMismatchException
The next thing I tried was to see if I could just catch the exception, then ignore it and move on, i.e.:
public function handle($request, Closure $next)
{
    if ($request->secure() && $request->path() === config('auto-deploy.route')) {
        if ($request->secure()) {
            // handle authenticating webhook request
            if (/* webhook request is authentic */) {
                // try to continue on to controller
                try {
                    // this will eventually trigger the CSRF verification
                    $response = $next($request);
                } catch (TokenMismatchException $e) {
                    // but, maybe we can just ignore it and move on...
                    return $response;
                }
            } else {
                // abort if not authenticated
                abort(403);
            }
        } else {
            // request NOT submitted via HTTPS
            abort(403);
        }
    }

    // Passthrough if it's not our secret route
    return $next($request);
}
Yeah, go ahead and laugh at me now. Silly wabbit, that's not how try/catch works! Of course $response is undefined within the catch block. And if I try doing $next($request) in the catch block, it just bangs up against the TokenMismatchException again.
Workaround Attempt #3: Run ALL of my code in the middleware
Of course, I could just forget about using a Controller for the deploy logic and trigger everything from the middleware's handle() method. The request lifecycle would end there, and I would never let the rest of the middleware propagate. I can't help feeling that there's something inelegant about that, and that it departs from the overall design patterns upon which Laravel is built so much that it would end up making maintenance and collaboration difficult moving forward. At least I know it would work.
Workaround Attempt #4: Modify the Pipeline
Philip Brown has an excellent tutorial describing the Pipeline pattern and how it gets implemented in Laravel. Laravel's middleware uses this pattern. I thought maybe, just maybe, there was a way to get access to the Pipeline object that queues up the middleware packages, loop through them, and remove the CSRF one for my route. Best I can tell, there are ways to add new elements to the pipeline, but no way to find out what's in it or to modify it in any way. If you know of a way, please let me know!!!
Workaround Attempt #5: Use the WithoutMiddleware trait
I haven't investigated this one quite as thoroughly, yet, but it appears that this trait was added recently to allow testing routes without having to worry about middleware. It's clearly NOT meant for production, and disabling the middleware would mean that I'd have to come up with a whole new solution for figuring out how to get my package to do its thing. I decided this was not the way to go.
Workaround Attempt #6: Give up. Just use Forge or Envoyer
Why reinvent the wheel? Why not just pay for one or both of these service that already supports push-to-deploy rather than go to the trouble of rolling my own package? Well, for one, I only pay $5/month for my server, so somehow the economics of paying another $5 or $10 per month for one of these services doesn't feel right. I'm a teacher who builds apps to support my teaching. None of them generate revenue, and although I could probably afford it, this kinda thing adds up over time.
Discussion
Okay, so I've spent the better part of two solid days banging my head against this problem, which is what brought me here looking for help. Do you have a solution? If you've read this far, perhaps you'll indulge a couple of closing thoughts.
Thought #1: Bravo to the Laravel guys for taking security seriously!
I'm really impressed with how difficult it is to write a package that circumvents the built-in security mechanisms. I'm not talking about "circumvention" in the I'm-trying-to-do-something-bad way, but in the sense that I'm trying to write a legitimate package that would save me and lots of other people time, but would, in effect, be asking them to "trust me" with the security of their applications by potentially opening them up to malicious deployment triggers. This should be tough to get right, and it is.
Thought #2: Maybe I shouldn't be doing this
Frequently if something is hard or impossible to implement in code, that is by design. Maybe it's Bad Design™ on my part to want to automate the entire installation process for this package. Maybe this is the code telling me, "Don't do that!" What do you think?
In summary, here are two questions:
Do you know a way to do this that I haven't thought of?
Is this bad design? Should I not do it?
Thanks for reading, and thank you for your thoughtful answers.
P.S. Before someone says it, I know this might be a duplicate, but I provided much more detail than the other poster, and he never found a solution, either.
I know it is not good practice to use the Reflection API in production code, but this is the only solution I could think of where no additional configuration is needed. This is more a proof of concept, and I would not use it in production code.
I think a better and more stable solution is to have the user update their middleware to work with your package.
tl;dr - you can place this in your package's boot code:
// Just remove CSRF middleware when we hit the deploy route
if (request()->is(config('auto-deploy.route')))
{
    // Create a reflection object of the app instance
    $appReflector = new ReflectionObject(app());

    // When dumping the App instance, it turns out that the
    // global middleware is registered at:
    //   Application
    //     -> instances
    //       -> Illuminate\Contracts\Http\Kernel
    //         -> ... Somewhere in the 'middleware' array
    //
    // The 'instances' property of the App object is not accessible
    // by default, so we have to make it accessible in order to
    // get and set its value.
    $instancesProperty = $appReflector->getProperty('instances');
    $instancesProperty->setAccessible(true);
    $instances = $instancesProperty->getValue(app());

    $kernel = $instances['Illuminate\Contracts\Http\Kernel'];

    // Now we have the Kernel instance.
    // Again, we have to set the accessibility of its property.
    $kernelReflector = new ReflectionObject($kernel);
    $middlewareProperty = $kernelReflector->getProperty('middleware');
    $middlewareProperty->setAccessible(true);
    $middlewareArray = $middlewareProperty->getValue($kernel);

    // The $middlewareArray contains all global middleware.
    // We search for the CSRF entry and remove it if it exists.
    foreach ($middlewareArray as $i => $middleware)
    {
        if ($middleware == 'App\Http\Middleware\VerifyCsrfToken')
        {
            unset($middlewareArray[$i]);
            break;
        }
    }

    // The last thing we have to do is to update the altered
    // middleware array on the Kernel instance.
    $middlewareProperty->setValue($kernel, $middlewareArray);
}
I haven't tested this with Laravel 5.1 - for 5.2 it works.
So you could create a Route::group where you can explicitly say which middleware you want to use.
For example in your ServiceProvider you could do something like this:
\Route::group([
    'middleware' => ['only-middleware-you-need']
], function () {
    require __DIR__ . '/routes.php';
});
So just exclude VerifyCsrfToken middleware, and put what you need.
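For example, the group could apply only the middleware the webhook route actually needs, such as the package's own verification middleware (a sketch; the full class name is assumed from the package namespace used above):

\Route::group([
    'middleware' => [
        \MyPackage\AutoDeploy\Middleware\VerifyWebhookRequest::class,
    ],
], function () {
    require __DIR__ . '/routes.php';
});

In Laravel versions where VerifyCsrfToken lives in the web middleware group (5.2+), the route then simply never passes through it.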

How to create a Forbidden Response in Symfony2?

I just discovered SensioLabsInsight and found very interesting tips on how to write good code. It would be great if there was some explanation on why (or why not) something should be used - even for basic stuff like exit and die. It would help me to explain things to people I work with.
So my question is specifically for AccessDeniedHttpException - it says:
Symfony applications should not throw AccessDeniedHttpException
So how do I return 403 Forbidden from the application controller or EventListener?
What is the best practice?
To be honest I thought it would be
throw new AccessDeniedHttpException()
Since for 404 you have
throw $this->createNotFoundException()
But it looks like I was wrong.
I think it means that you must throw AccessDeniedException instead of directly throwing AccessDeniedHttpException.
The main reason is that AccessDeniedException is caught by the event listener in Symfony\Component\Security\Http\Firewall\ExceptionListener, and then you can do some work with it. Check the onKernelException function.
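In a controller that would look something like this (the createAccessDeniedException() shortcut exists on the base Controller in newer Symfony 2.x versions, analogous to createNotFoundException()):

use Symfony\Component\Security\Core\Exception\AccessDeniedException;

// Inside a controller action:
throw new AccessDeniedException('You are not allowed to do this.');

// Or, via the base Controller shortcut:
throw $this->createAccessDeniedException('You are not allowed to do this.');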
That sentence has to be considered with the whole architecture of Symfony in mind.
In the Symfony framework there is a whole subsystem devoted to security, applying the two-step Authentication + Authorization process.
That said, in the architecture of Symfony the controllers, which are basically what the framework leaves to you to develop and so are "the application", will be called only if Authentication + Authorization has passed.
So that sentence says that you should not need to throw that exception, because that is the job of the Security component. Doing so is not forbidden or even made impossible, but it is not the way the framework was designed to work.
This can happen in two situations:
Your application is particular and you need to do it that way.
You are doing the security work outside of the framework's way of doing it. That is your choice; just evaluate the costs/benefits of not using the framework features and writing your own.
Looking here http://symfony.com/doc/current/cookbook/security/custom_authentication_provider.html it seems like you might be able to throw an AuthenticationException which returns a 403 Response (?)
Here is Controller::createNotFoundException() implementation:
public function createNotFoundException($message = 'Not Found', \Exception $previous = null)
{
return new NotFoundHttpException($message, $previous);
}
It creates a slightly different exception.
I don't know the reason for this tip. Maybe it's because in a controller or event listener you can directly return the Response, without throwing an exception and thus triggering other event listeners.
Symfony uses event listeners to handle exceptions. You can create your own listeners and manage the response. This might be useful for an API. For example, I have used it to return pretty JSON responses in the dev environment (with a stack trace and additional debugging info).
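A rough sketch of such a listener for the Symfony 2 era (you would also register it as a service tagged with kernel.event_listener for the kernel.exception event):

use Symfony\Component\HttpFoundation\JsonResponse;
use Symfony\Component\HttpKernel\Event\GetResponseForExceptionEvent;
use Symfony\Component\Security\Core\Exception\AccessDeniedException;

class ApiExceptionListener
{
    public function onKernelException(GetResponseForExceptionEvent $event)
    {
        $exception = $event->getException();

        if ($exception instanceof AccessDeniedException) {
            // Turn the security exception into a plain 403 response
            $event->setResponse(new JsonResponse(['error' => 'Forbidden'], 403));
        }
    }
}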
