I wrote a small library (https://github.com/dborsatto/php-giantbomb) that acts as a wrapper for an API. In order to work, the library needs two basic configuration options:
An API key
A file that describes the API options
What is the suggested way to handle these two files?
The API key is of course personal, so it can't be added to the repository. But without it, I can't write a functional test for the library as a whole, which limits me to unit tests for the individual parts. Do I give up functional tests altogether, or is there a way to make this work?
As for the other config, it's basically a YAML file (https://github.com/dborsatto/php-giantbomb/blob/master/api_config.yml) that includes the base API endpoint and the configuration for each data repository. At the moment this is handled by a Config class that is decoupled in such a way that the user must write glue code and inject the data into the Config. This makes testing easier, but as a whole I feel it creates a bigger disadvantage than simply letting the Config class load the file, parse it, and behave accordingly. What is the best way to do this? There are no tests in the repository at the moment, but I'm working on them (along with some code refactoring).
I would suggest leaving the configuration outside your library; I've done something similar for the Mandrill mailing service, where I left the configuration management to the developer (I was working in a Symfony 2 project). For me there is no Config class, just the service constructor, which accepts the API key and an (optional) array of options:
public function __construct($api, $options = array())
{
    // The API key is mandatory; fail early if it is missing.
    if (empty($api)) {
        throw new \InvalidArgumentException('An API key is required.');
    }
    // My code
}
When I need to use my service inside the Symfony 2 application, I take the needed parameters and configuration from a place external to my service (Symfony's config files); this way I can decouple the library from the configuration. Of course, the service constructor throws an exception if the mandatory parameters are missing.
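As a minimal sketch (the parameter name, class name and option key below are made up for illustration, not taken from an actual library), the consuming Symfony 2 application could do something like this wherever the container is available:

// The key is defined in the application's own config (e.g. parameters.yml),
// so it never lives inside the library or its repository.
$apiKey  = $container->getParameter('giantbomb_api_key');
$options = array('cache_ttl' => 3600); // optional, library-specific settings

$service = new MyVendor\ApiClient($apiKey, $options);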
Related
I'm writing some unit tests (PHPUnit) for a project that will be used by another project. The project that I am testing has a class that will include files that will be present only in the other project.
Now how do I write a unit test for this class without having the files that it needs to include available? Would you recommend setting up a complete testing project with stub files (the file in question is a file that contains some settings) and running all the unit tests there? Or should I create directories and files using for example the setUp() method?
Edit:
To be more specific: I have a base project A, which is a website, and a project B, which contains a class that generates a form. The form class will be installed in project A using Composer. In project A, the form class will check for the existence of a directory with a settings file; if it exists, it will include it and load the settings from it. To test the form class, should I create a project C (just for testing) that installs project B, and set up the directory with the settings file for testing there? Or would it be better to create the directory with the settings file in project B itself? The latter seems a bit odd to me, as I don't want all this unit-testing material to end up in project A when I composer-install project B into it.
Yes!! To everything:
Now how do I write a unit test for this class without having the files that it needs to include available?
You can create test fixtures. What is the specification you're programming your code to? As you develop your code, are you using a test file? Reading documentation? Working from a specification given by the client? You could create an input file that fulfills the spec and provide it to your function.
Would you recommend setting up a complete testing project with stub files (the file in question is a file that contains some settings) and running all the unit tests there?
Yes, but only the absolute smallest number of tests necessary to programmatically ensure that the functionality you say you're providing to your client is actually delivered! If the function is given a file path, parses the file, and then loads the settings, I feel there need to be at least a couple of test cases that ensure a file can be read from the operating system. Having a fixture file that your test loads to verify that the file-opening logic is correct should be a pretty reliable test. I think the tricky part is minimizing the number of these tests.
For example, if you need to test your settings-parsing logic, it may seem easy to create a settings file and have your test load and parse it. For a couple of tests this will be plenty fast and reliable, but as your test suite grows it becomes orders of magnitude slower than in-memory tests. Instead of going through the file system to test the settings-parsing logic, you could exercise a settings-parsing function directly by providing it with the string contents of the file. This way you could build a settings string in memory, inside your test function, and pass it into the parsing function, avoiding any file system reads. If the file is too large and the code expects a file-like object so that it can incrementally read data from the file system, you could create an in-memory stub object to use instead.
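For instance, a PHPUnit test along these lines never touches the disk (SettingsParser and parseString() are hypothetical names standing in for your own parsing code, and a PHPUnit 6+ style base class is assumed):

use PHPUnit\Framework\TestCase;

class SettingsParserTest extends TestCase
{
    public function testParsesSettingsFromString()
    {
        // Build the "file" contents in memory instead of reading from disk.
        $contents = "cache_dir=/tmp\ndebug=1\n";

        $parser   = new SettingsParser();            // hypothetical class under test
        $settings = $parser->parseString($contents); // hypothetical string-based entry point

        $this->assertSame('/tmp', $settings['cache_dir']);
    }
}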
I'm not sure of the exact PHP API for this, but if there is something like a readline method, you could create a fake file object, provide it to the PHP file API, and build your fake settings file in memory during the test, again avoiding the file system.
Or should I create directories and files using for example the setUp() method?
What is the benefit of this over having a static file? In my experience, minimizing test complexity and test logic is huge for test suite maintenance and performance.
You could create and delete the files and directories during setup and teardown, but be aware that this can make your tests flaky, as sometimes files cannot be deleted, e.g. due to lock issues or when a test fails and teardown is not called.
A safer way to interact with "your local filesystem" is vfsStream, which never actually writes to disk. The documentation contains a nice basic example, both for using the filesystem directly and for using vfsStream to mock filesystem access: https://github.com/mikey179/vfsStream/wiki/Example
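A rough sketch of a test using it could look like this (SettingsLoader is a hypothetical class under test; the vfsStream calls follow the library's documented API):

use org\bovigo\vfs\vfsStream;
use PHPUnit\Framework\TestCase;

class SettingsLoaderTest extends TestCase
{
    public function testLoadsSettingsFromVirtualFile()
    {
        // Create a virtual directory tree; nothing is written to the real disk.
        vfsStream::setup('project', null, array(
            'settings' => array(
                'settings.php' => '<?php return array("debug" => true);',
            ),
        ));

        $loader   = new SettingsLoader(); // hypothetical class under test
        $settings = $loader->load(vfsStream::url('project/settings/settings.php'));

        $this->assertTrue($settings['debug']);
    }
}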
OK, so I am working on creating a custom standalone library that I intend to use in a Drupal 8 site. Drupal 8 runs on Symfony 2.8.x. I want this code to be usable outside Drupal, so I have focused on making it more Symfony-oriented than Drupal-oriented.
What I have found so far, with all my searching, is that Symfony requires you to write a bunch of config declarations in DependencyInjection/Configuration.php, as well as service declarations in a MyBundleExtension.php file.
What I have NOT found is a simple way to say "Hey, I want this config parameter in this standalone (not at all a controller) class". So I wrote the class you see below.
Is there a better way to handle this?
Code: http://pastebin.com/pdp53kxe
Also, will this create any problems with loading services?
At some point I have to deal with dependency injection and actually instantiate what we want to inject. I'm still not sure how I will work that into this standalone library while utilizing the Symfony framework, so suggestions as to how to have Symfony wire that up for me would be great.
My basic question here is about using Symfony in a library setting, where you would not expect to need these variables only within a controller context.
Like you said, if you want to import configuration you need to use your DependencyInjection/MyBundleExtension.php class to load (and maybe even parse) the config yourself.
Another way is to use compiler passes to manipulate the container directly, but that looks like overkill for your case.
The main reason is that the dependency injection container (which contains all your service definitions and config parameters) is compiled.
So you have to inject your extra config before the compilation.
Helpful links:
http://symfony.com/doc/current/service_container/import.html
http://symfony.com/doc/current/service_container/compiler_passes.html
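As a rough sketch (the bundle, parameter and file names are illustrative, not taken from your project), the extension class could look something like this, loading the bundle configuration and exposing a value as a container parameter before compilation:

namespace MyVendor\MyBundle\DependencyInjection;

use Symfony\Component\Config\FileLocator;
use Symfony\Component\DependencyInjection\ContainerBuilder;
use Symfony\Component\DependencyInjection\Loader\YamlFileLoader;
use Symfony\Component\HttpKernel\DependencyInjection\Extension;

class MyBundleExtension extends Extension
{
    public function load(array $configs, ContainerBuilder $container)
    {
        // Merge and validate the bundle configuration (see Configuration.php).
        $config = $this->processConfiguration(new Configuration(), $configs);

        // Expose a config value to any service as %my_bundle.some_option%.
        $container->setParameter('my_bundle.some_option', $config['some_option']);

        // Load the service definitions shipped with the bundle.
        $loader = new YamlFileLoader($container, new FileLocator(__DIR__.'/../Resources/config'));
        $loader->load('services.yml');
    }
}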
One simple service offers an API to use some of its features. I want to create a Composer package that will consume this API, and I want it to be compatible with other PHP projects. I read about the topic and came up with the idea to use GuzzleHttp to make the requests (I saw it in a few other libraries). However, I'm confused about the structure of an API-consuming library. (It's a REST API.)
The API gives access to two resources: Customers and Products.
The Products resource has the following methods:
List all available products - GET
Add products - POST
Delete product - DELETE
The Customers resource has the same methods.
What I've done so far is the following structure (I'm following PSR-4, as suggested):
src/
--MyName/
----PackageName/
------Resources/
------Containers/
------Exceptions/
------Client.php
src/MyName/PackageName is the structure I read about in a tutorial on creating a Composer package. MyName\PackageName will be my namespace throughout.
The file Client.php is a class which loads some configuration for authorization (Basic Auth) and creates a new instance of GuzzleHttp\Client. I also have two methods for building a request (setting the HTTP method, URL & additional parameters).
I also have a __call() method which instantiates a new object from the Resources folder; the first element of the array passed as the second argument is the method which should be called (see the sketch below).
The Resources folder contains the two files Products.php and Customers.php, which are classes handling all methods of the two resources mentioned above. Each class extends Client.php.
The Containers folder contains files for processing the response data from each resource.
The Exceptions folder contains classes for custom exceptions that might be thrown in the process.
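A rough sketch of that __call() dispatch (all names are illustrative and the error handling is simplified) looks like this:

namespace MyName\PackageName;

class Client
{
    // e.g. $client->products('getAll') instantiates Resources\Products and calls getAll().
    public function __call($resource, array $arguments)
    {
        $class = __NAMESPACE__ . '\\Resources\\' . ucfirst($resource);
        if (!class_exists($class)) {
            throw new \BadMethodCallException(sprintf('Unknown resource "%s".', $resource));
        }

        // The first element of the arguments array names the method to call.
        $method = array_shift($arguments);

        return call_user_func_array(array(new $class(), $method), $arguments);
    }
}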
Is that a good approach for an easily maintainable library, or am I missing some of the concepts here?
How to structure a composer package
To cut a long story short: stick to PSR-4, decide on a folder layout, and expose this layout in the autoload section of your composer.json file. The rest is: pick clear and understandable class and method names to expose the API.
Your question is a mix of different things, and some of them overlap.
When talking about the structure of your project, we have to distinguish between the source code (object-oriented design), the folder layout (relevant for autoloading), and the Composer integration (with its autoload description).
Let's go through this in order...
a) source-code
However I'm confused about the structure of an API consuming library.
Is that a good approach to a easily maintainable library or I'm missing some of the concepts here?
The question is: is my code clear and precise enough (for myself and for others, e.g. to be consumed in another project)?
The advice to give here is:
Picking clear class and method names makes use easier, like Company\PhotoApi\Client.php
The namespace and class name could expose the vendor (e.g. the person or company producing the source), then the API name, and finally the class name.
Follow some basic OO principles
How you fetch the data from the API is a matter of taste. Going with Guzzle is fine, while a lighter solution, like file_get_contents or cURL, would probably work too. It depends.
In regard to maintainability:
The lower the number of files and the less complex your code is, the better the maintainability. Do you need an Exceptions folder? How many files are there? Why not simply stick to PHP's default exceptions?
You might also consider that your lib has to change if the API changes, right? And then, if your lib changes, all the code in projects using your lib has to change, right? If the API exposes only a small set of endpoints, it might be better to use just one object and provide accessor methods for them, instead of using multiple objects as accessors, which might be too fine-grained. This means that projects using your API will (possibly) have fewer lines to change when upgrading. Anyway: your users will tell you if your API lib is too difficult to use.
While you have something like:
$PhotoApiProducts = new Company\PhotoApi\Products;
$products = $PhotoApiProducts->get();
This is also possible:
$api = new Company\PhotoApi\Client;
$products = $api->getProducts();
$consumers = $api->getConsumers();
b) folder structure
I'm following psr-4 as suggested
In regard to the folder layout, my suggestion is to stick with PSR-4.
But you have to decide on the exact folder layout yourself. You might take a look at the examples section of the PSR-4 standard to see different folder layouts that respect PSR-4: http://www.php-fig.org/psr/psr-4/
c) Composer integration
And then, finally, you add a composer.json file describing your project.
The autoload section is the most important part, because this is where you expose the structure of your project to Composer.
When you have decided to use PSR-4 for your project, simply say so in the autoload section and add the mapping from your namespace to the source directory, like:
"autoload": {
"psr-4": {
"Foo\\Bar\\": "src/Foo/Bar/"
}
}
Now, a user of your Composer package just has to load the Composer autoloader during bootstrap; after that they can start using your lib by referencing any namespaced class name from it, and the autoloader will do its work.
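In the consuming project that might look like this (assuming the Foo\Bar mapping above and a made-up Client class and method):

// bootstrap of the consuming application
require __DIR__ . '/vendor/autoload.php'; // Composer's autoloader

// The class is found through the PSR-4 mapping declared in composer.json.
$client   = new \Foo\Bar\Client('my-api-key');
$products = $client->getProducts(); // hypothetical method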
Before using Symfony2, I used to have a common lib with a lot of simple but useful functions (for instance, a function which takes "Azè_rtï" as an argument and returns "aze-rti").
So, well, I created a Bundle: CommonLibsBundle.
But... I have only one or two PHP files. It does not make sense to me to use a controller / view / model in this kind of situation.
What should I do? Should I erase all the folders in my new bundle (Controller, DependencyInjection, Resources, Tests... + CommonLibsBundle.php) and just put my lib.php in it?
Many thanks,
Bliss
Unless you need to tap into the Symfony framework itself (for configuration or to define services), it doesn't need to be a bundle; it's just a library. Give it a reasonable namespace and call it as required, as you would any other component or library.
Even if you wanted to add Symfony-specific services that you could call, there is something to be said for still having an external, simple library (usable anywhere) which is then wrapped by a very thin bundle that only adds the Symfony-specific (or Laravel, or ZF, or whatever) services and configuration as required.
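As a sketch, the library itself can stay completely framework-agnostic (the namespace, class and method are made up; the transliteration is simplified and locale-dependent):

namespace Acme\TextUtils;

class Slugger
{
    // Turns a string like "Azè_rtï" into "aze-rti".
    public function slugify($text)
    {
        $text = iconv('UTF-8', 'ASCII//TRANSLIT//IGNORE', $text);
        $text = strtolower(preg_replace('/[^A-Za-z0-9]+/', '-', $text));

        return trim($text, '-');
    }
}

Any framework-specific bundle would then only register this class as a service.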
I have a problem: I cannot access configuration and path information outside a controller context. I am in an Assetic filter class that has no methods to help me, and I need to know the kernel path along with some configuration. What is the Symfony 2 equivalent of the Symfony 1 sfContext::getInstance() call?
If you are writing an Assetic filter, you are writing a service. In the service definition you can pass parameters from the DIC. For example, you can pass the AppKernel's absolute path by writing:
<argument>%kernel.root_dir%</argument>
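A minimal sketch of the receiving side, with the path injected through the constructor (the filter class name is made up; the two interface methods are Assetic's):

use Assetic\Asset\AssetInterface;
use Assetic\Filter\FilterInterface;

class MyFilter implements FilterInterface
{
    private $kernelRootDir;

    public function __construct($kernelRootDir)
    {
        // Injected via the service definition, e.g. %kernel.root_dir%.
        $this->kernelRootDir = $kernelRootDir;
    }

    public function filterLoad(AssetInterface $asset)
    {
        // Use $this->kernelRootDir here instead of reaching for a global context.
    }

    public function filterDump(AssetInterface $asset)
    {
    }
}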
If you want a semantic configuration for your filter (and for any service in general), it should reside in a DIC extension. By convention, "MyNamespaceMyBundle" will register the "MyNamespaceMyExtension" extension class inside the DependencyInjection subpackage, and this extension will handle configuration from the "my_namespace_my" top-level configuration key, creating services or setting DIC parameters.
Moreover, you will want a Configuration class that handles validation, normalization, and merging of your configuration. Sadly, most of this is more or less undocumented, so the best way to achieve your goal is to look at some other bundle (e.g. I learned a lot by reading FOSUserBundle).
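For reference, a bare-bones Configuration class in Symfony 2.x style looks roughly like this (the root key and option name are illustrative):

namespace MyNamespace\MyBundle\DependencyInjection;

use Symfony\Component\Config\Definition\Builder\TreeBuilder;
use Symfony\Component\Config\Definition\ConfigurationInterface;

class Configuration implements ConfigurationInterface
{
    public function getConfigTreeBuilder()
    {
        $treeBuilder = new TreeBuilder();
        // The root name matches the bundle's top-level configuration key.
        $rootNode = $treeBuilder->root('my_namespace_my');

        $rootNode
            ->children()
                ->scalarNode('some_option')->defaultValue('default')->end()
            ->end();

        return $treeBuilder;
    }
}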
You don't. You must use dependency injection somehow. See here why it might have been removed.