Codeception has documentation for functional tests at: https://codeception.com/docs/05-UnitTests
So, following that on my Laravel/Homestead project, I do the following:
In functional.suite.yml:
class_name: FunctionalTester
modules:
    enabled:
        - Laravel5
        - \Helper\Functional
My test:
<?php

class LoginCest
{
    public function _before(FunctionalTester $I)
    {
    }

    public function _after(FunctionalTester $I)
    {
    }

    // tests
    public function tryLogin(FunctionalTester $I)
    {
        $I->amOnPage('/login');
        $I->fillField('email', 'someemail');
        $I->fillField('password', 'somepw');
        $I->click('Login');
        $I->see('some text');
    }
}
So when I run the test, it fails:
There was 1 error:
---------
1) LoginCest: Try login
Test tests/functional/LoginCest.php:tryLogin
[ExternalUrlException] Codeception\Module\Laravel5 can't open external URL: http://myapp.test/login
Scenario Steps:
4. $I->click("Login") at tests/functional/LoginCest.php:20
3. $I->fillField("password","somepw") at tests/functional/LoginCest.php:19
2. $I->fillField("email","someemail") at tests/functional/LoginCest.php:18
1. $I->amOnPage("/login") at tests/functional/LoginCest.php:17
#1 Codeception\Lib\InnerBrowser->click
#2 /home/vagrant/Code/my-app/tests/_support/_generated/FunctionalTesterActions.php:1114
#3 /home/vagrant/Code/my-app/tests/functional/LoginCest.php:20
#4 LoginCest->tryLogin
ERRORS!
Tests: 1, Assertions: 0, Errors: 1.
My app URL is myapp.test, which is running on Homestead.
Looking at my LoginController, I see:
$this->redirectTo();
at the very end.
Now I understand that functional tests do not require a webserver, and I could probably make this work as an acceptance test. But I am really having a hard time understanding why anyone would use Codeception for functional tests if you can't even specify a URL. Also, why would Codeception use a login example for functional tests when others may face similar issues?
Some background first.
A URL consists of these main parts: PROTOCOL://DOMAIN:PORT/URI?QUERY_STRING#HASH
HASH is only used on the client side and is not passed to the webserver, so it can't be used for routing.
PROTOCOL and PORT can be used for routing, but that is very unusual.
Some websites display different content depending on the DOMAIN used to access them,
but most only use the URI and/or QUERY_STRING parts for routing and displaying the right page.
The main distinction of functional testing with Codeception is that it doesn't require a webserver, and because of that it mostly doesn't care about domain names.
Website code usually doesn't care whether it is accessed using http://myapp.test/ or http://google.com/ and happily returns your front page for either.
Though if a click on a http://google.com/ link rendered your front page, it would almost certainly be wrong.
To prevent that, an external domain check was implemented years ago.
All internal links in your website must have no domain component, or the domain must match the one passed in the Host header.
An exception is made for domains that are used for domain-based routing; such domains can be used in tests.
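A hedged sketch of how that check plays out in the failing test above (assuming a stock Laravel auth scaffold, where the login form action and the post-login redirect are built from APP_URL):

$I->amOnPage('/login');   // URI without a domain - always treated as internal
$I->click('Login');       // submits the form; if its action (or the redirect that
                          // follows) is an absolute URL such as http://myapp.test/login
                          // and that domain does not match the Host header the module
                          // sends, ExternalUrlException is raised - the error shown above.
// Keeping generated URLs relative, or making the app URL in the test
// environment match the Host the module uses, avoids the exception.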
I am working on an extension (app) for nextcloud (which is based on Symfony). I have a helper class to extract data from the request that is passed by the HTTP server to PHP. A much-reduced version could look something like this (to get the point across):
<?php

namespace OCA\Cookbook\Helpers;

class RequestHelper {
    public function getJson() {
        if ($_SERVER['Request_Method'] === 'PUT') { // Notice the typos, should be REQUEST_METHOD
            $raw = file_get_content('php://input');
            return json_decode($raw, true);
        } else { /* ... */ }
    }
}
Now I want to test this code. Of course, I can do some unit testing and mock the $_SERVER variable. Potentially I would have to extract the file_get_content call into its own method and do a partial mock of that class. I get that. The question is: how much is this test worth?
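For reference, a minimal sketch of what that partial-mock unit test could look like (assuming the php://input read is extracted into a readRawInput() method; that method name is made up here):

<?php
use PHPUnit\Framework\TestCase;
use OCA\Cookbook\Helpers\RequestHelper;

class RequestHelperTest extends TestCase
{
    public function testGetJsonDecodesPutBody(): void
    {
        $_SERVER['REQUEST_METHOD'] = 'PUT';

        // Partial mock: only the hypothetical raw-input read is replaced.
        $helper = $this->getMockBuilder(RequestHelper::class)
            ->onlyMethods(['readRawInput'])
            ->getMock();
        $helper->method('readRawInput')
            ->willReturn('{"name":"Pancakes"}');

        // Against the buggy snippet above this would (rightly) fail,
        // since the class checks the misspelled key.
        $this->assertSame(['name' => 'Pancakes'], $helper->getJson());
    }
}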
If I just mimic the behavior of that class (white-box testing) in my test cases, I might even copy and paste the typo I intentionally included here. As this code is an MWE, real code might get more complex and should be compatible with different HTTP servers (like Apache, nginx, lighttpd, etc.).
So, ideally, I would like to do some automated testing in my CI process that uses a real HTTP server with different versions/programs to see if the integration is working correctly. Welcome to integration testing.
I could now run the nextcloud server with my extension included in a test environment and test some real API endpoints. This is more like functional testing as everything is tested (server, NC core, my code and the DB):
phpunit <---> HTTP server <---> nextcloud core <---> extension code <---> DB
                                                            ^
                                                            |
                                                            +--> RequestHelper
Apart from speed, I would have to take care to test all possible paths through the class RequestHelper (the device under test, DUT). This seems a bit brittle to me in the long run.
All I could think of is adding a simple endpoint only for testing the functionality of the DUT, something like a pure echo endpoint. For production use, I do not feel comfortable having something like that lying around.
I am therefore looking for an integration test with a partial mock of the app (mocking the business logic + DB) to test the route between the HTTP server and my DUT. In other words, I want to test the integration of the HTTP server, nextcloud core, my controller, and the DUT above without any business logic of my app.
How can I realize such test cases?
Edit 1
As I gathered from the comments that the problem statement was not entirely clear, I will try to explain a bit more, at the cost of the simplicity of the use case.
There is the nextcloud core, which can be seen as a framework from the perspective of the app. So, there can be controller classes that serve as targets for URL/API endpoints. For example, /apps/cookbook/recipe/15 with a GET method will fetch the recipe with id 15. Similarly, with PUT a JSON payload can be uploaded to update that recipe.
So, inside the corresponding controller the structure looks like this:
class RecipeController extends Controller {
    /* Here the PUT /apps/cookbook/recipe/{id} endpoint will be routed */
    public function update($id) {
        $json = $this->requestHelper->getJson(); // Call to helper

        // Here comes the business logic,
        // aka calls to other classes that will save and update the state
        // and perform the DB operation
        $this->service->doSomething($json);

        // Return an answer if the operation terminated successfully
        return new JsonResponse(['state' => 'ok'], 200);
    }
}
I want to test the getJson() method against different servers. Here I want to mock at least the $this->service->doSomething($json) call to be a no-op. Ideally, I would like to spy on the resulting $json variable and test exactly that.
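To make that concrete, this is the kind of expectation I have in mind, if only it could run inside the server's interpreter (sketch only; RecipeService is a placeholder name for the injected service):

$service = $this->createMock(RecipeService::class);
$service->expects($this->once())
    ->method('doSomething')
    ->with($this->callback(function ($json) {
        // Spy on the decoded request body instead of executing business logic.
        return ($json['name'] ?? null) === 'Pancakes';
    }));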
No doubt, in my test class it would be something like
class TestResponseHandler extends TestCase {
    public function setUp() { /* Set up the HTTP daemon as a system service */ }

    public function testGetJson() {
        // Create Guzzle client
        $client = new Client([
            'base_uri' => 'http://localhost:8080/apps/cookbook/',
        ]);

        // Run the API call
        $headers = ...;
        $body = ...;
        $response = $client->put('recipe/15', ['headers' => $headers, 'body' => $body]);

        // Check the response body
        // ....
    }
}
Now, I have two PHP interpreters running: first, the one (A) that runs phpunit (and makes the HTTP request); second, the one (B) associated with the HTTP server listening on localhost:8080.
As the code above with the call to getJson() runs inside PHP interpreter (B), outside the phpunit instance, I cannot mock it directly, as far as I understand. I would have to change the main app's code if I am not mistaken.
Of course, I could provide (more or less) useful data in the test function and let the service->doSomething() method do its job, but then I am no longer testing only a subset of functions; I am doing functional or system testing. Also, this makes it harder to create well-aimed test cases if all these side effects need to be taken into account.
I have an API written in PHP with Slim (3, soon to be upgraded to 4). I also have an extensive specification written in OpenAPI 3 YAML format (I could convert it to another format if necessary, but I think it would be best to keep OAS3). Currently I am testing all endpoints against this specification with Dredd. This is a tool that walks through an API specification, sends example data to the real API, and checks whether the result matches the spec. That works, but not very well. For a test run I have to wipe and re-initialize the DB with PHP and then run Dredd with npm. Plus, since Dredd does not support all features of OAS3, I have to do some conversion first.
Since I have some unit tests with PHPUnit anyway, I would love to run the other tests with PHPUnit as well (and get rid of the Node stuff altogether). I found http://opensource.byjg.com/php-swagger-test/ and https://github.com/Maks3w/SwaggerAssertions. Both of them provide this very feature, but I would have to write a separate function for every endpoint, containing sample data and so on - stuff which is already in the API spec. Any idea how I could avoid this effort and just use my API spec as the test definition with PHPUnit, or at least with some PHP library?
Example from the specs:
/users:
    get:
        summary: get a list of users
        description: get info about all registered users - in whole application or in a workspace.
        parameters:
            - in: header
              name: AuthToken
              schema:
                  $ref: '#/components/schemas/auth'
              example:
                  at: 132token
            - in: query
              name: ws
              description: id of a workspace to get users from. can be omitted for all users in system
              required: false
              examples:
                  a:
                      value: 0
                  b:
                      value: 1
With byjg php-swagger-test I would have to write something like this
public function usersA()
{
    $request = new \ByJG\Swagger\SwaggerRequester();
    $request
        ->withMethod('GET')
        ->withPath("/users")
        ->withHeader(blabla)
        ->withRequestBody(['ws' => 0]);

    $this->assertRequest($request);
}

public function usersB()
{
    $request = new \ByJG\Swagger\SwaggerRequester();
    $request
        ->withMethod('GET')
        ->withPath("/users")
        ->withHeader(blabla)
        ->withRequestBody(['ws' => 1]);

    $this->assertRequest($request);
}
Two tests (functions) containing only information which is already in the spec. Is there a better tool/way to run tests over all endpoints against the spec without writing all those?
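What I would prefer is a single data-provider-driven test that derives its cases from the spec file itself, roughly like this (a sketch only: it assumes symfony/yaml is available, the spec path is made up, it reuses the requester calls from above, and it skips headers for brevity):

/** @dataProvider specCases */
public function testEndpointAgainstSpec($method, $path, array $params)
{
    $request = new \ByJG\Swagger\SwaggerRequester();
    $request
        ->withMethod($method)
        ->withPath($path)
        ->withRequestBody($params);

    $this->assertRequest($request);
}

public function specCases()
{
    $spec = \Symfony\Component\Yaml\Yaml::parseFile(__DIR__ . '/../openapi.yml');
    foreach ($spec['paths'] as $path => $operations) {
        foreach ($operations as $method => $operation) {
            foreach ($operation['parameters'] ?? [] as $parameter) {
                // One case per example listed for a parameter (e.g. ws: 0 and ws: 1).
                foreach ($parameter['examples'] ?? [] as $label => $example) {
                    yield "$method $path $label" => [
                        strtoupper($method),
                        $path,
                        [$parameter['name'] => $example['value']],
                    ];
                }
            }
        }
    }
}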
I'm (we're) creating a package that acts as a core component for our future CMS and of course that package needs some unit tests.
When the package registers, the first thing it does is set the back/frontend context like this:
class FoundationServiceProvider extends ServiceProvider
{
    // ... stuff ...

    public function register()
    {
        // Switch the context.
        // URLs containing '/admin' will get the backend context,
        // all other URLs will get the frontend context.
        $this->app['build.context'] = request()->segment(1) === 'admin'
            ? Context::BACKEND
            : Context::FRONTEND;
    }
}
So when I visit the /admin URL, the app('build.context') variable will be set to backend, otherwise it will be set to frontend.
To test this I've created the following test:
class ServiceProviderTest extends \TestCase
{
    public function test_that_we_get_the_backend_context()
    {
        $this->visit('admin');

        $this->assertEquals(Context::BACKEND, app('build.context'));
    }
}
When I run the code in the browser (navigating to /admin), the context gets picked up and calling app('build.context') returns backend, but when running this test, I always get 'frontend'.
Is there something I did not notice, or some incorrect code in how I am using PHPUnit?
Thanks in advance
Well, this is a tricky situation. As I understand it, Laravel initiates two instances of the framework when running tests: one that is running the tests and another that is being manipulated through instructions. You can see it in the tests/TestCase.php file.
So in your case you are manipulating one instance but checking the context of another (the one that did not visit /admin and is just running the tests). I don't know whether there's a way to access the manipulated instance directly - there's nothing helpful in the documentation on this issue.
One workaround would be to create a route just for testing purposes, something like /admin/test_context, which would output the current context, and then check it with
$this->visit('admin/test_context')->see(Context::BACKEND);
Not too elegant, but it should work. Otherwise, look around in Laravel; maybe you will find some undocumented feature.
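A minimal sketch of that workaround route (assuming it is acceptable to register it only in the testing environment, e.g. in routes/web.php or a test-only service provider):

if (app()->environment('testing')) {
    Route::get('admin/test_context', function () {
        // Echo whatever context the booted application resolved,
        // so the test can assert on the response body.
        return app('build.context');
    });
}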
I'm currently writing unit tests for an API written in PHP. This API implements a RateLimiting step before each request, and I want to avoid this step while I'm testing the application.
Now, if I want to run the tests locally I just have to check the local IP, which is "::1". But I'm having problems accessing the environment variables that my continuous integration server provides (I am using wercker).
If I run this from a PHPUnit test:
var_export(isset($_SERVER["CI"]) || isset($_SERVER["wercker"]));
I get true, but if I do something similar before applying the rate limiting:
if (isset($_SERVER["CI"]) || $request->getIp() === "::1") {
    return;
} else { //...
the Wercker tests keep failing because the rate-limiting logic is never skipped. Note that the first piece of code is run from a PHPUnit test, while the second one is part of the server application.
What am I doing wrong with the environment variables?
Please let me know if I must provide more information or documentation.
I was able to make it work by using PHP's getenv() function, which reads the process environment directly instead of relying on the variable being copied into $_SERVER:
if (getenv("CI") || $request->getIp() === "::1") {
    return;
} else { //...
To display all environment variables on the Wercker server, add this step (e.g. in the build section):
build:
    steps:
        - script:
            name: show env vars
            code: env
I have several integration tests with PHPUnit, and during these tests some log lines are written to files on the system.
I would like to check whether a given line was written during a test. Is that possible?
example:
/** @test */
function action_that_writes_to_log() {
    $this->call('GET', 'path/to/action', [], [], $requestXml);

    // I want this:
    $this->assertFileHas('the log line written', '/log/file/path.log');
}
The obvious way:
Implementing a custom assertion method, like the one you propose: assertFileHas. It's quite easy: just check whether the string appears in the file. The problem you can run into is that the line may already exist from another test, or from an earlier run of the same test. A possible solution is deleting the logs' contents before each test or test class, depending on your needs. You would need a method that deletes the logs, and call it from setUp() or setUpBeforeClass().
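A minimal sketch of both pieces (the cleanup plus the custom assertion), assuming the test knows the log file's path ($this->logPath is a placeholder):

protected function setUp()
{
    parent::setUp();
    // Start each test with an empty log so earlier lines cannot cause false positives.
    file_put_contents($this->logPath, '');
}

protected function assertFileHas($line, $file)
{
    $this->assertTrue(
        strpos(file_get_contents($file), $line) !== false,
        "Log file {$file} does not contain: {$line}"
    );
}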
I would go with another approach: mocking the logging component and checking that the right call is being made:
$logger_mock->expects($this->once())
    ->method('log')
    ->with($this->equalTo('the log line written'));
This makes it easy to test that the components are logging the right messages, but you also need to implement a test that verifies that the logger is actually capable of writing to the file. Still, it's easier to implement that test once and then just check that each component calls the logging method.
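That one-off test for the logger itself could be as small as this (a sketch; FileLogger stands in for whatever concrete logger class is used):

public function test_logger_writes_to_file()
{
    $file = tempnam(sys_get_temp_dir(), 'log');
    $logger = new FileLogger($file);   // placeholder for your logger class
    $logger->log('a line written by the logger test');

    $this->assertTrue(
        strpos(file_get_contents($file), 'a line written by the logger test') !== false
    );
}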