Does anyone know how to pass an instance of a custom service between two actions? As the Zend Framework 2 documentation says, a service will preserve its instance as long as the 'shared' option for it inside the Module.php class is not set to false. However, getting the service via $manager = $this->getServiceLocator()->get('MyServiceManager'); in different actions returns different instances of the MyServiceManager class. What I want to achieve is this: I make an API call to a third-party service, which returns a response with various data. If the user then heads to another action/page where the same data returned by the previous API request is needed, it would be nice to save it as a MyServiceManager property and access it from the class instance whenever it's set, rather than sending another API request every time.
If this is possible, I would be more than happy to listen and learn!
The request life cycle (using mod_php, fastcgi/fpm) usually prevents sharing resources. Unlike languages running on application servers, PHP has no built-in way of sharing object instances.
For each incoming request, the web server creates a fresh instance of the PHP runtime, executes the request, and cleans up after itself.
You could use APC to store serialized object instances, or memcache to temporarily store results between requests.
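For example, a minimal sketch using APC, where MyApiClient and fetchData() are hypothetical stand-ins for your own API wrapper:

function getApiData($userId)
{
    $cacheKey = 'api_response_' . $userId;
    $data = apc_fetch($cacheKey, $success);
    if (!$success) {
        $client = new MyApiClient();          // hypothetical API wrapper
        $data = $client->fetchData($userId);  // the expensive third-party call
        apc_store($cacheKey, $data, 300);     // keep the result for 5 minutes
    }
    return $data;
}

Any action, on any subsequent request, that calls getApiData() will then hit the cache instead of the remote API until the entry expires.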
In order for my application to work, I need to regularly synchronize data from an outside service (it could be an API or a simple text file, but for now it is an API).
Since this requires creating/updating many entities at once, I need to create a Domain Service. However, I also need to create some DTOs that will contain the response of the remote API, right?
Where should this logic go ? Should I have the following directory structure:
Domain -
    Model - // my set of entities and repository interfaces are here
        ....
    Synchronization -
        RunSynchronizationService.php // domain service
Application -
    Synchronization -
        SynchronizeData.php // application service
        SynchronizationDataSourceInterface.php // used by application service
        MySpecificRemoteApiDataSource.php // this implements the interface above
        SynchronizationDataSourceResponse.php // this would be returned by each call of a SynchronizationDataSourceInterface method, and would contain data normalized, but not validated
Infrastructure -
    MyConcreteImplementationOfModelEntityRepository.php
And when I want to synchronize the data, I simply call Application\Synchronization\SynchronizeData's sync method, which will take a concrete implementation of SynchronizationDataSourceInterface, call its methods, and validate the returned SynchronizationDataSourceResponse objects before transferring them to Domain\Model\Synchronization\RunSynchronizationService?
Or should I remove RunSynchronizationService (the Domain Service) and let the Application Service (SynchronizeData.php) create / update the domain entities at each step of the synchronization process ?
Generally, when presented with the question of where an external service interface should live, I try to think of it as just another repository. The level of abstraction I choose here depends heavily on who will use the service (just the domain/app layers of this project? other projects?), whether there are different versions/vendors of the service, and how the choice of which service to use is made.
For your example, I'm going to assume that this application is the only one using the sync service directly. Shared services require common end points (such as the interface and output objects) to be isolated even further to avoid spilling unnecessary objects into other projects.
But in this case, to treat the service as another repository, I'd place the interface for it and standard output the domain expects and uses in the domain.
What is meant by "validation" is a little vague in your description, so I'll try to attack the different views of that here. One or all may apply.
If validation requires you to compare against domain data, it should probably reside in the RunSynchronizationService.php module. That module appears to be responsible for taking the sync data and applying it to your domain.
If validation is on the data coming back from the sync service call, and doesn't require direct access to the domain object graph, I would put that validation on the service implementation of the interface, and expose that validation call on the service interface. To handle the situation of that validation being the same across several versions of the sync service (such as VersionA, VersionB, etc), you can use inheritance and overrides, or common helper functions to the sync service, etc. In my example below, I used inheritance.
It could be you need to do both kinds of validation. First check for sync data problems agnostic to the domain (on the implemented classes), and then against business rules in the domain (in RunSynchronizationService). But likely both of those calls would occur in the RunSynchronizationService as you'd expose the sync data validation call on the interface.
The application layer should be responsible for creating the instance of the service (MySpecificRemoteApiDataSource), and passing it into the RunSynchronizationService.php module as SynchronizationDataSourceInterface. If there are multiple versions, the application layer would likely be responsible for choosing which one (from a configuration, perhaps), and using a factory.
But this again highly depends on the scope of that sync service. If you have external projects relying on it, you might want that factory part of the service layer itself so each of the other projects operate with the same choosing method.
Domain -
    Model - // my set of entities and repository interfaces are here
        ....
    Synchronization -
        RunSynchronizationService.php // domain service
        SynchronizationDataSourceInterface.php // used to define the contract associated with a sync service
        SynchronizationDataSourceResponse.php // this would be returned by each call of a SynchronizationDataSourceInterface method, and would contain data normalized, but not validated
Application -
    Synchronization -
        SynchronizeData.php // application service - uses a factory or some means of determining which version to use, and introduces the domain to the data point
Infrastructure -
    MyConcreteImplementationOfModelEntityRepository.php
    Synchronization -
        VersionA -
            MySpecificRemoteApiDataSource.php // implements SynchronizationDataSourceInterface.php, inherits from SyncApiDataSourceBase.php
            SyncApiDataSourceBase.php // common logic for sync goes here, such as validation
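As a rough sketch of how the application layer might pick the concrete version and hand it to the domain service (the method names sync() and run(), and the config key, are assumptions, not a fixed design):

namespace Application\Synchronization;

use Domain\Synchronization\RunSynchronizationService;
use Infrastructure\Synchronization\VersionA\MySpecificRemoteApiDataSource;

class SynchronizeData
{
    public function sync(array $config)
    {
        // The application layer decides which concrete version to use,
        // e.g. from configuration; a real factory could replace this switch.
        switch ($config['sync_version']) {
            case 'A':
            default:
                $dataSource = new MySpecificRemoteApiDataSource();
        }

        // The domain service only ever sees the interface.
        $service = new RunSynchronizationService();
        $service->run($dataSource);
    }
}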
TLDR: Writing a service (in the model layer). It talks to ffmpeg. Where should validation go? Should I create a service response object so it is testable? How should it be structured?
Background: I'm designing some classes to retrieve data from an external service. It could be an API, but in fact it's calls to the ffmpeg CLI, which in effect is an API to the conversion tools themselves.
When talking to an external service, where the data retrieved may not always be the same, what is the best way to maintain at least a consistent application state on your end, so that your application doesn't depend on the external service in order to work?
I have already separated out the classes thus far, trying to maintain SRP within them:
class CommandDispatcher { }
The Command Dispatcher's sole job is to make a request for data (to ffmpeg) and retrieve the response for that data back.
class Converter { }
The converter's sole responsibility is to take user requests (like "convert format 1 to format 2") and send the basics to the command dispatcher, which handles the exec() calls.
Here are my questions:
When talking to an external API / service, should I be creating an APIRequest and an APIResponse object (in this case an FFmpegResponse object)?
I have seen examples of this for OAuth, where there is an OAuth response object. However, that's simple enough because calls to this are done over the HTTP protocol which tends to give back at minimum an error code and a message. Calls to something like ffmpeg don't guarantee a similar response (ffmpeg may not be installed, for example). Is this object merely a domain object (i.e. an entity: some class members and setters and getters)?
Validation. If I am creating an FFmpegResponse object, whose job is it to put the data into the right members of the Response object?
Imagine ffmpeg isn't installed and the CommandDispatcher gets the response back empty. Is it up to the CommandDispatcher to then populate the FFmpegResponse object with an "ffmpeg not installed" error? Should I have a validation object do this?
Remember, I'm trying to stick to the Single Responsibility Principle here, so I'm thinking that the CommandDispatcher shouldn't care about whether the data is valid or not - it should merely ask for data and retrieve it. Where does my validation fit within the model layer for this service?
This isn't only for FFmpeg but will help with future external service calls. What is the best-practice way to structure your code and classes to maintain SRP, yet keep a consistent application state regardless of whether or not the external service responds in an expected way?
In your current structure, CommandDispatcher should be either an interface or an abstract class (depending on the necessity of abstract code). You would then create a concrete implementation: FFMpegCommandDispatcher which would encapsulate the understanding of ffmpeg specific responses.
Response objects will then take on a similar structure: CommandResponse would be an abstraction with the concrete implementation FFMpegCommandResponse.
It would be best to create a set of common error conditions (serviceNotAvailable, serviceNotInstalled, serviceDiedAHorribleAndBloodyDeath, etc.). Your dispatcher implementation can then set a common error code on the response object and provide implementation-specific details ('Error 1984: FFMpeg is watching you').
If you're concerned (and I would be) about validating input as well, you could create a CommandRequest abstraction and an FFMpegRequest implementation that takes user input and makes sure it's okay to be sent to the command line.
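A rough sketch of that shape (the method names, error constants, and response fields are illustrative assumptions, not a fixed design):

abstract class CommandResponse
{
    protected $errorCode;
    protected $errorDetail;
    protected $output;
    protected $exitStatus;

    public function setError($code, $detail)
    {
        $this->errorCode = $code;
        $this->errorDetail = $detail;
    }

    public function setOutput(array $output, $exitStatus)
    {
        $this->output = $output;
        $this->exitStatus = $exitStatus;
    }
}

class FFMpegCommandResponse extends CommandResponse { }

abstract class CommandDispatcher
{
    // Common error conditions shared by every dispatcher implementation.
    const ERROR_NOT_INSTALLED = 'serviceNotInstalled';
    const ERROR_NOT_AVAILABLE = 'serviceNotAvailable';

    /** @return CommandResponse */
    abstract public function dispatch(array $arguments);
}

class FFMpegCommandDispatcher extends CommandDispatcher
{
    public function dispatch(array $arguments)
    {
        $response = new FFMpegCommandResponse();

        // Translate an ffmpeg-specific failure mode into a common error code.
        exec('command -v ffmpeg', $out, $exitCode);
        if ($exitCode !== 0) {
            $response->setError(self::ERROR_NOT_INSTALLED, 'ffmpeg binary not found');
            return $response;
        }

        $args = implode(' ', array_map('escapeshellarg', $arguments));
        exec('ffmpeg ' . $args . ' 2>&1', $output, $status);
        $response->setOutput($output, $status);
        return $response;
    }
}

The Converter never sees any of this detail; it hands a request to the dispatcher and gets a CommandResponse back, which keeps interpretation of the raw response out of both the Converter and the caller.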
The Backbone tutorials I have read implement some type of mini-framework (e.g. Slim) with a RESTful architecture performing CRUD on a server DB, like this. The Backbone docs state that you need a RESTful API, which I believe is due to the Backbone Route and Sync functionality that keeps models up to date, a huge aspect of why I chose Backbone.
For example, the line below maps a faux url (route) to the 'addWine' function (within a Slim api):
$app->post('/wines', 'addWine');
Assumption 1: If I have a (PHP) CMS backend (and not a mini-framework), I assume I could simply replace the 2nd parameter (addWine) with my own CMS class method call and return a JSON object.
Assumption 2: But I would not be able to directly call that same class method from a link in the HTML without causing Backbone to lose state, and thus its ability to sync the model data (and remember the browser's history).
Assumption 3: In that case, I will need to use the Slim api and route Backbone urls through (Slim) RESTful CRUD calls in order to access my CMS database and keep Backbone happy.
If those assumptions are correct, then it would seem backbone is intercepting those HTTP calls - which leaves me wondering how the whole RESTful + Backbone relationship works. Can you explain some of it?
If my assumptions are incorrect, then I need more help than I thought. Can you help with that?
Thanks
I can't speak intimately to your three assumptions, but as for your final question -- Backbone does not "intercept" HTTP calls -- it constructs them, just as any other javascript library would to create an AJAX request.
Backbone is relatively agnostic to your server side language/framework. Here is what Backbone expects any time "sync" is called:
Backbone's sync function uses a different HTTP request type depending on which CRUD method was called:
POST (create)
GET (read)
PUT (update)
DELETE (delete)
Your framework needs to support all of the above to support the "out of the box" functionality of Backbone. This means that you must specify all of the above routes within your application in order to work with Backbone.
One other thing to note: the "create" and "update" calls do not carry form-encoded POST data -- instead they send a JSON digest of the model in the request body and expect the server side to properly parse that JSON object and deal with it appropriately.
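For instance, a minimal sketch using Slim (2.x-era API), where the handler names other than addWine are placeholders for your own CMS methods:

$app = new \Slim\Slim();

$app->get('/wines', 'getWines');          // read (fetch the collection)
$app->get('/wines/:id', 'getWine');       // read (fetch one model)
$app->post('/wines', 'addWine');          // create
$app->put('/wines/:id', 'updateWine');    // update
$app->delete('/wines/:id', 'deleteWine'); // delete

function addWine()
{
    // Backbone sends a JSON body, not form-encoded POST fields.
    $body = \Slim\Slim::getInstance()->request()->getBody();
    $wine = json_decode($body, true);
    // ...hand $wine to your CMS class, then echo the saved record as JSON...
    echo json_encode($wine);
}

$app->run();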
I say yes to all three assumptions and also agree with #Andy Baird.
Also, the only remaining problem for your project is how to notify Backbone that you have updated the database so it can update itself on the front end. I can see only two solutions:
1) Using JavaScript's setInterval() - if you do not need the front end to be updated immediately on a DB update, you can check for changes every minute. Backbone knows to only update what has changed and to add new items, but of course this is not healthy for the server if you have 1k active users making repeated requests every minute.
2) Using Socket.IO or a similar service - this way you can push from the server to Backbone either the entire list of modifications to your DB or a simple 'Please refresh, new stuff waiting'. Check this discussion.
We have a need to access a DB that only allows one connection at a time. This screams "singleton" to me. The catch, of course, is that the singleton connection will be exposed (either directly or indirectly) via a web service (most probably a SOAP-based web service, located on a separate server from the calling app(s)), which means that there may be more than one app/instance attempting to connect to the singleton class.
In PHP, what is the best way to create a global singleton or a web-service singleton?
TIA
This screams "use a DB SERVER" to me. ;-), but...
You could create a SoapServer and use a semaphore to allow only one connection at a time:
$s1 = sem_get(123, 1); // get a semaphore keyed by 123, max one holder at a time
sem_acquire($s1);      // blocks until the semaphore is free
// soapserver code here
sem_release($s1);      // let the next request through
In PHP, there is no such thing as a "global" object that resides across all requests. In a Java web server, this would be called an "application level data store". In PHP, the extent of the "global" scope (using the global keyword) is a single request. Now, there is also a cross-session data store accessible via $_SESSION, but I'm trying to highlight that no variable in PHP is truly "global". Individual values can emulate being global by being stored to a local file or to a database, but for something like a resource, you are stuck creating it on each request.
Now, at the request level, you can create a singleton that will return an initialized resource no matter which scope within the request you call it from, but again, that resource will not persist between requests. I know it is a shortcoming of PHP, but on the other hand, the speed and stability of the individual requests help make up for it.
Edit:
After reading over your question again, I realized you may not be asking for a singleton database-access class, but rather for something that can resource-lock your database. Based on what you said, it sounds like the database may do the locking for you anyway; in other words, it won't allow you to connect if there is already another connection. If that is the case, it seems like you have 2 options:
1) just let all your pages contend for the resource, and fail if they don't get it.
2) Create a queue service that can accept queries, run them, then cache the results for you for later retrieval.
I have a web-app-database 3 tier server setup. Web requests data from app, and app gets its data from the db then processes and returns it to web for display. Standard.
Currently all the data I serialize from app to web gets put into custom defined data structs that the web side then parses for display. In other words, say I have an order that I want to retrieve and send to web. I don't serialize the whole object, but rather build an array with the appropriate data elements while on the app server, then serialize that array over to the web servers.
Now, I am looking into serializing the entire order object over to the web server.
What is the best method you guys have found to serialize objects while maintaining separation of appserver and webserver code? I don't want my webservers having any code which accesses the DB, but I want them to reuse the classes that encapsulate my order and other data as much as possible.
To further refine my question (thanks to Glenn's response): I don't want my webservers to have any of the business logic around, say, the order class either; that is something only the appserver needs to know. The objects already use separate serialization to/from the database and to/from the webservers.
Using the order example, on the appserver an order should be able to cancel itself, i.e.:
$order->cancel();
but on the webserver that method should not even be found. It shouldn't simply proxy back (directly) to the appserver order's cancel method, because user action requests should flow through the application's authorization and permissions checking layer.
So how do I get an order object on my webserver that has some (but not all) of the methods and properties of the object on my appserver, with little to no code duplication? One thing I've been thinking of is to create a base class with a limited set of properties and methods. It would use an inner class to hold its properties, and I would require all data access to pass through getters and setters, which would in turn call the getters and setters on the inner class.
The web and app servers could then independently extend the base class, and use a custom inner class to hold the properties. Then on the app side for instance, the inner class could be an ORM class extention that can persist data to the DB, and on the web side the inner class could be a simple properties holder class.
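A rough sketch of that idea, with all names hypothetical:

abstract class OrderBase
{
    protected $props; // the inner property holder, supplied per tier

    public function __construct($props)
    {
        $this->props = $props;
    }

    // All data access passes through getters/setters, which delegate
    // to the inner class.
    public function getStatus()
    {
        return $this->props->get('status');
    }
}

// On the appserver the inner class is an ORM-backed holder that can
// persist to the DB, and the subclass adds business behaviour.
class AppOrder extends OrderBase
{
    public function cancel() { /* authorization checks, then persist */ }
}

// On the webserver the inner class is a plain key/value holder, and
// cancel() simply does not exist.
class WebOrder extends OrderBase
{
}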
But the whole inner class thing seems a little clunky, so I'm still looking for better solutions.
Factor out format-specific serialization code into separate classes using the Adapter pattern. Your problem domain classes become backing-store neutral.
Use relational database specific adapter classes on the app tier to serialize objects to and from the data tier.
Use HTML specific adapter classes on the web tier to serialize objects to and from the web browser.
Use XML (or whatever wire protocol friendly format you deem most appropriate) specific adapter classes on both the web and app tiers to serialize the objects between the web tier and the app tier.
You get extra points if you are clever enough to figure out how to make these adapter classes generic enough such that you don't need a different set of adapter classes per problem domain class.
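For illustration, a sketch of the wire-format adapter (OrderXmlAdapter, the getter names, and the tier-specific Order variant are assumptions; the point is that the domain class knows nothing about XML):

interface SerializationAdapter
{
    public function serialize($object);
    public function deserialize($data);
}

// Used on both tiers to move orders across the wire as XML.
class OrderXmlAdapter implements SerializationAdapter
{
    public function serialize($order)
    {
        $xml = new SimpleXMLElement('<order/>');
        $xml->addChild('id', $order->getId());
        $xml->addChild('status', $order->getStatus());
        return $xml->asXML();
    }

    public function deserialize($data)
    {
        $xml = new SimpleXMLElement($data);
        // Each tier re-hydrates into its own Order variant, so the web
        // tier's class can omit appserver-only behaviour like cancel().
        return new WebOrder((string) $xml->id, (string) $xml->status);
    }
}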
If I understand your question correctly, you want to serialize the data of your objects, but when they are re-hydrated, they should become a different type of object (with limited and/or different functionality)?
You can do this in different ways, depending on what protocol you prefer. For example, you could use SOAP and hydrate the objects into a different class on the client side than on the server side. You could also use PHP's native serialization and either a) have a different code base on the client (which could get confusing) or b) fiddle a bit with the serialized string (e.g. replace the class name). A crude example can be found in the comments for the PHP manual.
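A crude sketch of the class-name swap, assuming Order's serializable properties are public and a compatible ClientOrder class exists on the client (private/protected properties embed the class name in their keys, which breaks this trick):

// server side
$payload = serialize($order); // e.g. 'O:5:"Order":1:{s:2:"id";i:42;}'

// client side: re-hydrate as ClientOrder instead of Order
$payload = preg_replace(
    '/^O:\d+:"Order":/',
    'O:' . strlen('ClientOrder') . ':"ClientOrder":',
    $payload
);
$clientOrder = unserialize($payload); // instance of ClientOrder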