Unit tests (PHPUnit): create separate project just for testing? - php

I'm writing some unit tests (PHPUnit) for a project that will be used by another project. The project that I am testing has a class that will include files that will be present only in the other project.
Now how do I write a unit test for this class without having the files that it needs to include available? Would you recommend setting up a complete testing project with stub files (the file in question is a file that contains some settings) and running all the unit tests there? Or should I create directories and files using for example the setUp() method?
Edit:
To be more specific, I have a base project A, which is a website. I have a project B, which contains a class that generates a form. The form class will be installed in project A using Composer. In project A, the form class will check for the existence of a directory with a settings file; if it exists, it will include the file and load the settings from it. To test the form class, do you think I should create a project C (just for testing) which installs project B and in which I set up the directory with the settings file for testing? Or do you think it's better to create the directory with the settings file in project B itself? The latter seems a bit odd to me, as I don't want all this unit-testing material available in project A when I composer-install project B into it.

Yes!! To everything:
Now how do I write a unit test for this class without having the files that it needs to include available?
You can create test fixtures. What is the specification you're programming your code to? As you develop your code, are you using a test file? Reading documentation? Working from a specification given by the client? You could create an input file that fulfills the spec and provide it to your function.
Would you recommend setting up a complete testing project with stub files (the file in question is a file that contains some settings) and running all the unit tests there?
Yes, but write only the smallest number of tests necessary to programmatically ensure that the functionality you promise your client is actually delivered! If the function is given a file path, parses the file, and then loads the settings, there should be at least a couple of test cases that ensure a file can be read from the operating system. Having a fixture file that your test loads to verify the file-opening logic is correct should be a pretty reliable test. I think the tricky part is minimizing the number of these tests.
For example, if you need to test your settings-parsing logic, it may seem easy to create a settings file and have your test load and parse it. For a couple of tests this will be plenty fast and reliable, but as your test suite grows it becomes orders of magnitude slower than in-memory tests. Instead of going through the filesystem to test the settings-parsing logic, you could exercise a settings-parsing function directly by providing it with the string contents of the file. This way you build the settings string in memory, inside your test function, and pass it into the parsing function, avoiding any filesystem reads. If the file is too large, and the code expects a file-like object so that it can incrementally read data from the filesystem, you could create an in-memory stub object instead.
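As a sketch of that idea, here is a hypothetical settings parser that takes the raw file contents as a string, so the test never touches the filesystem (the function name and settings format are invented for illustration):

```php
<?php
// Hypothetical parser: accepts the raw contents of a "key = value"
// settings file as a string, so tests can build input in memory.
function parseSettings(string $contents): array
{
    $settings = [];
    foreach (explode("\n", trim($contents)) as $line) {
        if ($line === '' || $line[0] === '#') {
            continue; // skip blank lines and comments
        }
        [$key, $value] = array_map('trim', explode('=', $line, 2));
        $settings[$key] = $value;
    }
    return $settings;
}

// In a test you build the "file" as a string and call the parser directly:
$settings = parseSettings("debug = true\n# a comment\ncache_dir = /tmp/cache\n");
assert($settings === ['debug' => 'true', 'cache_dir' => '/tmp/cache']);
```

The file-opening code then shrinks to a thin wrapper around this function, and only that thin wrapper needs a real fixture file.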
PHP supports this directly with in-memory streams: fopen('php://memory', 'r+') gives you a stream you can write your fake settings data to and then rewind, so code that reads line by line with fgets() (or similar) works without ever touching the filesystem.
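A minimal sketch of that technique, using PHP's built-in php://memory stream as a fake file:

```php
<?php
// Build a fake "file" entirely in RAM using PHP's php://memory stream.
// Code under test that reads from a stream resource (fgets, fread, ...)
// works on this exactly as it would on a real file handle.
$fake = fopen('php://memory', 'r+');
fwrite($fake, "host = localhost\nport = 8080\n");
rewind($fake); // reset the pointer so reads start at the beginning

$lines = [];
while (($line = fgets($fake)) !== false) {
    $lines[] = rtrim($line, "\n");
}
fclose($fake);

assert($lines === ['host = localhost', 'port = 8080']);
```

php://temp behaves the same way but spills to a temporary file past a size threshold, which is useful for the "file too large" case mentioned above.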
Or should I create directories and files using for example the setUp() method?
What is the benefit of this over having a static file? In my experience, minimizing test complexity and test logic is huge for test suite maintenance and performance.

You could create/delete the files and directories during setup and tear down, but be aware that this can make your tests flaky: sometimes files cannot be deleted, e.g. due to lock issues, or tear down is never called because a test failed.
A safer way to interact with "your local filesystem" is vfsStream, which never actually writes to disk. The documentation contains a nice basic example of both using the filesystem directly and using vfsStream to mock the filesystem access: https://github.com/mikey179/vfsStream/wiki/Example
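A sketch along the lines of the linked wiki example (this assumes the mikey179/vfsstream package is installed as a dev dependency via Composer; the directory structure and test names are invented):

```php
<?php
use org\bovigo\vfs\vfsStream;
use PHPUnit\Framework\TestCase;

class SettingsLoaderTest extends TestCase
{
    public function testReadsSettingsFile(): void
    {
        // Create a virtual filesystem; nothing is written to disk,
        // so there is nothing to clean up and nothing to leak between tests.
        vfsStream::setup('project', null, [
            'config' => ['settings.ini' => "debug = 1\n"],
        ]);

        // Code using the normal file functions works transparently
        // on vfs:// URLs.
        $contents = file_get_contents(vfsStream::url('project/config/settings.ini'));
        $this->assertSame("debug = 1\n", $contents);
    }
}
```

Because the virtual filesystem is rebuilt in each test, the lock and leftover-file problems described above simply cannot occur.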

Related

Preserve unit test generated content?

I am doing some fairly complex unit tests using PHPUnit. In these tests, some files are generated in temp dirs. After a test is finished, all of this gets wiped. Is there a way to tell the framework to leave the generated content untouched?
There are two ways you could achieve this. Without knowing what exactly clears those files, my best bet is to subclass PHPUnit\Framework\TestCase and implement tearDown or tearDownAfterClass there (and have the relevant test cases subclass that instead), or alternatively to use register_shutdown_function in your bootstrap script.
The tearDown/shutdown method could simply rename the temp dir and mkdir a new one, so there'd be nothing to clear, but it's still best not to have those files cleared in the first place. If that cleanup code sits inside your vendor/ directory, it's still possible to modify those files.
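The rename-and-recreate idea can be sketched as a plain function you would call from tearDown() or tearDownAfterClass() before any cleanup code runs (function and suffix names are invented for illustration):

```php
<?php
// Move the temp dir aside under a unique name, then recreate an empty
// one, so the generated files survive while the cleanup code that runs
// afterwards finds nothing to wipe.
function preserveAndReset(string $tempDir): string
{
    $saved = $tempDir . '.' . uniqid('', true) . '.saved';
    rename($tempDir, $saved);   // keep the generated content
    mkdir($tempDir);            // fresh, empty dir for the cleanup code
    return $saved;
}

// Demo: create a temp dir with a generated file, then preserve it.
$dir = sys_get_temp_dir() . '/phpunit_demo_' . getmypid();
mkdir($dir);
file_put_contents($dir . '/generated.txt', 'output');

$saved = preserveAndReset($dir);
assert(is_file($saved . '/generated.txt')); // content survived
assert(is_dir($dir));                       // empty replacement exists
```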

How to handle config files in PHP library

I wrote a small library (https://github.com/dborsatto/php-giantbomb) that acts as a wrapper for an API. In order to work, the library needs 2 basic configuration options:
An API key
A file that describes the API options
What is the suggested way to handle these two files?
The API key is of course personal, so it can't be added to the repository. But that means I can't write a functional test for the library as a whole, limiting myself to unit tests of the individual parts. Do I give up functional tests altogether, or is there a way to make this work?
About the other config: basically it's a YAML file (https://github.com/dborsatto/php-giantbomb/blob/master/api_config.yml) that includes the base API endpoint and the configuration for each data repository. At the moment this is handled by a Config class, decoupled in such a way that the user must write glue code and inject data into the Config. This makes it easier to test, but as a whole I feel it creates a bigger disadvantage than just letting the Config class load the file, parse it, and behave accordingly. What is the best way to do this? At the moment there are no tests in the repository, but I'm working on that (along with some code refactoring).
I would suggest leaving the configuration outside your library. I've done something similar for the Mandrill mailing service, where I left the management of the configuration to the developer (I was working in a Symfony 2 project). For me there is no Config class, just a service constructor that accepts the API key and an (optional) array of options:
public function __construct($api, $options = array())
{
// My code
}
When I need to use my service inside the Symfony 2 application, I take the needed parameters and configuration from a place external to my service (Symfony's config files); this way I can decouple the library from the configuration. Of course, the service constructor throws an exception if the mandatory parameters are missing.
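A fleshed-out sketch of that constructor (the class name, option names, and defaults are invented for illustration):

```php
<?php
// Constructor injection: the caller supplies the API key and options,
// so the library carries no Config class of its own.
class MailService
{
    private string $apiKey;
    private array $options;

    public function __construct(string $apiKey, array $options = [])
    {
        if ($apiKey === '') {
            // Fail fast when a mandatory parameter is missing.
            throw new \InvalidArgumentException('An API key is required');
        }
        $this->apiKey = $apiKey;
        // Merge caller-supplied options over sensible defaults.
        $this->options = array_merge(['timeout' => 10], $options);
    }

    public function getOption(string $name)
    {
        return $this->options[$name] ?? null;
    }
}

$service = new MailService('secret-key', ['timeout' => 30]);
assert($service->getOption('timeout') === 30);

try {
    new MailService('');
    assert(false); // should not be reached
} catch (\InvalidArgumentException $e) {
    assert($e->getMessage() === 'An API key is required');
}
```

Tests can now construct the service with dummy values directly, with no config file on disk at all.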

Uploading testsuite to Subversion repository

I want to upload my test suite to a Subversion repository, and I was wondering where in the repo the test suite should be placed. At the moment we have a root folder (containing all the source code) and a documentation folder. Should we create a testing folder within the root? We also have an automatic build system that makes a new build every minute. How can we get the tests to run automatically in parallel?
Also, once the suite is in Subversion, how will I get to know whether the tests failed or passed?
First, the decision on how to structure your code has to be discussed in your team. But I would always suggest removing the source from the root and moving it into a separate directory, and likewise for docs, externals, etc.
However, why do you wish to upload results into a repository?
When using a CI server like Jenkins or TFS, the results should be displayed on the server so everyone can review them, not stored in svn; test results are generated (often binary) data and usually not checked in for your own project.
Additionally, why does the build system make a build every minute rather than polling the source repo for changes?
The literal answer to your question would be to check the result output of a test run into the repository, which indicates whether the test worked or not. But I would not recommend that.

How do you manage the unit test files in projects? do you add them in git?

How do you manage your PHPUnit files in your projects?
Do you add it to your git repository or do you ignore them?
Do you use the @assert tag in your PHPDoc comments?
Setup
I'm not using PHP currently, but I'm working with Python unit testing and Sphinx documentation in git. We add our tests to git and even have certain test-passing requirements for pushing to the remote devel and master branches (stricter on master than on devel). This assures a bit of code quality (test coverage should also be evaluated, but that's not implemented yet :)).
We keep the test files in a separate directory next to the top-level source directory, mirroring the directories they belong to, prefixed with test_ so that the unit testing framework finds them automagically.
For documentation it's similar: we just put the Sphinx doc files into their own subdirectory (docs), which in our case is an independent git submodule, though that might change in the future.
Rationale
We want to be able to track changes in the tests, as changes should be rare; frequent changes indicate immature code.
Other team members need access to the tests, otherwise they're useless. If they change code in some places, they must be able to verify it doesn't break anything.
Documentation belongs to the code. In case of python, the code directly contains the documentation. So we have to keep it both together, as the docs are generated from the code.
Having the tests and the docs in the repository allows for automated testing and doc building on the remote server, which gives us instantaneously updated documentation and testing feedback. The implementation of "code quality" restrictions based on test results also works that way (it's actually more a reminder for people to run tests, as code quality cannot be checked with tests alone without also looking at test coverage). Refs are rejected by the git server if tests do not pass.
For example, we require that on master all tests have to pass or be skipped (sadly we need "skipped", as some tests require OpenGL, which is not available on headless machines), while on devel it's okay if tests just "behave as expected" (i.e. pass, skip, or expected failure; no unexpected success, error, or failure).
Yes, to keeping them in git. Other conventions I picked up by looking at projects, including phpunit itself. (A look at the doctrine2 example shows it seems to follow the same convention.)
I keep tests in a top-level tests directory. Under that I have meaningfully named subdirectories, usually following the main project directory structure. I have a functional subdirectory for tests that test multiple components together (where applicable).
I create phpunit.xml.dist telling it where to find the tests (and also immediately telling anyone looking at the source code that we use phpunit, and by looking at the xml file they can understand the convention too).
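A minimal phpunit.xml.dist along those lines might look like this (the bootstrap path and suite names are illustrative; adjust them to your layout):

```xml
<!-- phpunit.xml.dist: tells PHPUnit where the tests live and doubles
     as documentation of the project's testing convention. -->
<phpunit bootstrap="vendor/autoload.php" colors="true">
    <testsuites>
        <testsuite name="unit">
            <directory>tests</directory>
        </testsuite>
        <testsuite name="functional">
            <directory>tests/functional</directory>
        </testsuite>
    </testsuites>
</phpunit>
```

With this in place, running phpunit from the project root with no arguments picks up the whole suite.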
I don't use @assert or the skeleton generator. It feels like a toy feature: you do some typing in one place (your source file) to save some typing in another place (your unit test file). But then you'll expand on the tests in the unit test files (see my next paragraph), maybe even deleting some of the original asserts, and now the @assert entries in the original source file are out of date and misleading to anyone looking at just that code.
You have also lost a lot of power that you end up needing for real-world testing of real-world classes (simplistic BankAccount example, I'm looking at you). No setUp()/tearDown(). No instance variables. No support for all the other built-in assertion functions, let alone custom ones. No @depends and @dataProvider.
One more reason against @assert, and for maintaining a separate tests directory tree: I like different people to write the tests and the actual code, where possible. When tests fail, it sometimes points to a misunderstanding of the original project specs by either your coder or your tester. When code and tests live close together, it is tempting to change them at the same time. Especially late on a Friday afternoon when you have a date.
We store our tests right with the code files, so developers see the tests to execute, and ensure they change the tests as required. We simply add an extension of .test to the file. This way, we can simply include the original file automatically in each test file, which may then be created with a template. When we release the code, the build process deletes the .test files from all directories.
/application/src/
    Foo.php
    Foo.php.test
/application/src/CLASS/
    FOO_BAR.class
    FOO_BAR.class.test
Each .test file begins by including the file it tests:
require_once(substr(__FILE__, 0, -5)); // strip the '.test' extension

Unit Testing and External Resources

I'm kind of new to unit testing, but I've recently seen how useful it can be. I've noticed that most unit tests are self-running; in fact, most unit testing frameworks provide a way of running several tests at once (such as unit testing a whole system).
I wonder, though: how do you deal with external resources in self-running unit tests? I like the idea of testing a whole system and seeing which classes failed, but a class may, for example, create thumbnails from an uploaded image. How would that test be self-running when it relies on an uploaded image? Would I keep a directory of images and "pretend" to upload one of them in the test?
Any thoughts on the matter would be greatly appreciated.
If you are planning on testing against real external resources, then it is integration testing. In pure unit testing, to test code that uses an external resource you have to mock the resource. So in this case you create an IDirectory interface, implement a FakeDirectory class, and use the FakeDirectory to "upload" the image; when actually running the application, you pass in a real directory implementation.
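The interface-plus-fake idea can be sketched like this (all names here are invented for illustration; the real code would use whatever abstraction fits your upload handling):

```php
<?php
// The code under test depends only on this interface, not on the
// real filesystem.
interface ImageStorage
{
    public function read(string $name): string;
}

// Production code would use an implementation backed by the real
// filesystem; tests use this in-memory fake instead.
class FakeImageStorage implements ImageStorage
{
    /** @var array<string, string> filename => raw bytes */
    private array $files;

    public function __construct(array $files)
    {
        $this->files = $files;
    }

    public function read(string $name): string
    {
        if (!isset($this->files[$name])) {
            throw new \RuntimeException("No such file: $name");
        }
        return $this->files[$name];
    }
}

// A thumbnailer that takes an ImageStorage can now be tested without
// any real uploads:
$storage = new FakeImageStorage(['upload.png' => 'fake-image-bytes']);
assert($storage->read('upload.png') === 'fake-image-bytes');
```

The same fake also makes failure paths (missing file, unreadable file) trivial to exercise, which is awkward to set up with a real directory.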
In integration testing, you could have a setup class which would do all the work to set up and then you would test.
I have come across this same situation while unit testing my PHP classes. There are functions which can be tested without any other resources (unit testing), but many functions perform file read/write operations or require database access (integration testing). To test these functions, I've combined unit testing with integration testing: in my setUp and tearDown methods, the test class may load a database schema or fetch test data from a local test_data/ directory required by the class functions.
If you need to test what happens with user input, you do indeed need some sample data at hand. A directory with images, text files, PDFs, or whatever else is needed should live alongside your unit tests. Or you can generate random data programmatically in your tests.
Yes, ideally a class that creates thumbnails can use a placeholder image that you provide as a fixture in your unit test directory. You should be able to test the class in isolation, with as little dependency on the rest of your application as possible. That's what people mean when they recommend designing your code to be "testable."
Mock external dependencies. I have no real experience of mocking in PHP, but a quick search for "mock" and "PHP" turns up plenty of resources showing that it's commonly done.