I have a DB table (Doctrine entity) that I use to store some editable settings for my app, like the page title, maintenance mode (on/off), and some other things.
I can load the settings normally using the entity manager and repositories, but I don't think that's the best solution.
My questions are:
- Can I load the settings only once at some kernel event and then access them the same way I access any other setting saved in YAML config files?
- How can I cache the database settings, so that only one DB query is done and future page requests use the cached values instead of querying the database on every request? (Of course, every time I change something in the settings, I would need to purge that cache so the new settings take effect.)
LiipDoctrineCacheBundle provides a service wrapper around Doctrine's common Cache (documentation) that allows you to use several cache drivers like filesystem, apc, memcache, ...
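As a rough sketch of that read-through pattern (the SettingsProvider class below is an assumption, not part of the bundle), any Doctrine\Common\Cache\Cache implementation exposes fetch(), save() and delete():

use Doctrine\Common\Cache\Cache;

class SettingsProvider
{
    const CACHE_KEY = 'app.settings';

    private $cache;      // any Doctrine\Common\Cache\Cache driver (filesystem, apc, memcache, ...)
    private $repository; // your Doctrine repository for the settings entity

    public function __construct(Cache $cache, $repository)
    {
        $this->cache = $cache;
        $this->repository = $repository;
    }

    public function all()
    {
        $settings = $this->cache->fetch(self::CACHE_KEY);

        if (false === $settings) {
            // only one DB query; afterwards every request is served from the cache
            $settings = $this->repository->findAll();
            $this->cache->save(self::CACHE_KEY, $settings);
        }

        return $settings;
    }
}

Purging the cache after a settings change is then just a matter of calling $cache->delete('app.settings').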
I would recommend loading your generic container parameters/settings (like maintenance mode, ...) from the database in a bundle extension or a compiler pass.
Route-specific settings (like the page title, ...) could be loaded in a kernel event listener. You can find a list of kernel events here.
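A minimal kernel.request listener sketch along those lines (the SettingsProvider service from the previous snippet is an assumption):

use Symfony\Component\HttpKernel\Event\GetResponseEvent;

class SettingsListener
{
    private $settingsProvider;

    public function __construct($settingsProvider)
    {
        $this->settingsProvider = $settingsProvider;
    }

    public function onKernelRequest(GetResponseEvent $event)
    {
        // expose the cached settings to the rest of the request, e.g. as a request attribute
        $event->getRequest()->attributes->set('app_settings', $this->settingsProvider->all());
    }
}

Register it as a service tagged with kernel.event_listener for the kernel.request event.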
Update/invalidate their cache using a Doctrine postUpdate/postPersist/postRemove listener.
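A hedged sketch of such an invalidation listener (the Setting entity class and the cache key are assumptions and must match whatever you cache the settings under):

use Doctrine\ORM\Event\LifecycleEventArgs;

class SettingsCacheInvalidator
{
    private $cache; // the same Doctrine\Common\Cache\Cache instance used to store the settings

    public function __construct($cache)
    {
        $this->cache = $cache;
    }

    public function postUpdate(LifecycleEventArgs $args)
    {
        // purge the cached settings only when a Setting entity changed
        if ($args->getEntity() instanceof Setting) {
            $this->cache->delete('app.settings');
        }
    }
}

Tag the service with doctrine.event_listener for postUpdate (and likewise for postPersist/postRemove) so the cache is purged after every change.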
Related
I'm currently managing PHP "events" in a single instance. This is working well and is correctly implemented in my system using something similar to Laravel's events provider.
My question now concerns a system where I need to dispatch events across different instances/users.
For example, I have an account composed of multiple users. Each user caches the account settings in the session after the initial loading of the application.
Now, if a user modifies the account settings, I'd like to send an event to my other users so they update their settings.
For the time being, I'm thinking about these solutions:
1. Storing the events in a database table, with each user regularly checking the values, but this would add SQL load and make the caching system obsolete.
2. Storing a flag in Redis. Each user can regularly check the value of the flag and reload the settings if required (see the sketch after this list). It's similar to the SQL solution above but would be much more efficient with Redis. However, the implementation would be more complex, and it might end up custom-built for this specific event.
3. I also started to look at ways of sharing data between PHP instances and found this question, which suggests using shared memory. I'm not very familiar with this concept and I'm still looking into it, but I suppose it may be possible to build a cross-instance event system with it.
4. Using a memcached server in PHP. I'm not familiar with that and am still evaluating the possibility of building an event dispatcher system around it.
5. Using a message queue server. Still evaluating the possibility, and also checking whether existing event-based systems in PHP are built with it.
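As a minimal sketch of the Redis flag from proposition 2 (the key names and the settings loader are assumptions), using the phpredis extension and a simple version counter:

$redis = new \Redis();
$redis->connect('127.0.0.1', 6379);

// whenever a user changes the account settings, bump the version for that account
$redis->incr('account:42:settings_version');

// on each request, every other user compares the current version with the one stored in session
$currentVersion = (int) $redis->get('account:42:settings_version');
$knownVersion   = isset($_SESSION['settings_version']) ? (int) $_SESSION['settings_version'] : -1;

if ($currentVersion !== $knownVersion) {
    // the settings changed elsewhere: reload them and refresh the session cache
    $_SESSION['account_settings'] = loadAccountSettingsFromDatabase(42); // hypothetical helper
    $_SESSION['settings_version'] = $currentVersion;
}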
Are there any other solutions I could use to dispatch such events between instances?
Edit:
Proposition 3 has been rejected, as shared memory only works within a single server; I'm working with server clustering on the application side.
I need to load database and entity configuration in Symfony2 from tables in the database at runtime.
By default, the Symfony database config is stored in config.yml. Table names for entities are defined in @ORM annotations.
But some of my entities can be stored dynamically in any database with any table name, and this is not defined in advance (only the table schema is), so I can't store that database configuration in the config.
I want to set the default database config in config.yml. This database will store three tables:
- A table with connections to databases (database_connection_id, host, port, user, password, dbname)
- A table with the relation (entity_name, table_name)
- A table with the relation (table_name, database_connection_id)
I need to load this configuration dynamically during the web request, before making requests to entities through the EntityManager or an EntityRepository. In other words, I want to process this configuration and set the table_name and connection for each entity before the web request is handled in the controller, and then work with the entities transparently as usual.
As I understand it, I need to implement something like a Symfony config loader, but there is no database connection available while the configuration is being processed. The table name for an entity can be set using the class metadata, but I am not sure that is the right decision.
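For reference, setting the table name through the class metadata would look roughly like this, via a Doctrine loadClassMetadata listener (the listener, the table-name map and how it is resolved from the config database are assumptions, not an endorsement of the approach):

use Doctrine\ORM\Event\LoadClassMetadataEventArgs;

class DynamicTableNameListener
{
    private $tableNames; // e.g. array('AppBundle\Entity\Report' => 'report_2014_10'), resolved from the config database

    public function __construct(array $tableNames)
    {
        $this->tableNames = $tableNames;
    }

    public function loadClassMetadata(LoadClassMetadataEventArgs $eventArgs)
    {
        $metadata = $eventArgs->getClassMetadata();

        if (isset($this->tableNames[$metadata->getName()])) {
            // override the table name declared in the @ORM\Table annotation
            $metadata->setPrimaryTable(array('name' => $this->tableNames[$metadata->getName()]));
        }
    }
}

The service would be tagged with doctrine.event_listener for loadClassMetadata; switching the connection itself is a separate problem.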
A possible way is to generate Symfony's config.yml for the connections and src/*Bundle/Resources/config/doctrine/*.orm.yml for the table names (instead of @ORM annotations) from the database on each web request (or once, whenever the database config changes in the default database), clear Symfony's Doctrine cache and then boot the Symfony kernel. But this way looks ugly.
Furthermore, background tasks may need to work with different tables for an entity than web requests do. Each entity can have more than one table: a background task can generate a new table while web requests still use the previous version of the table.
Can this be implemented using standard, flexible Symfony components?
I do not think that this is a viable idea. It would come at a great performance cost, since Symfony caches config files for production and you want to create your configuration at runtime on each request.
If you want to do something similar, create a console command that creates your mappings and config files, and let Symfony clear and recreate its cache with php app/console cache:clear --env=prod or something similar.
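A rough sketch of what such a command could look like in Symfony 2 (the command name, the SQL, the file path and the mapping layout are all assumptions; real mappings would be generated from your configuration tables):

use Symfony\Bundle\FrameworkBundle\Command\ContainerAwareCommand;
use Symfony\Component\Console\Input\InputInterface;
use Symfony\Component\Console\Output\OutputInterface;
use Symfony\Component\Yaml\Yaml;

class GenerateMappingsCommand extends ContainerAwareCommand
{
    protected function configure()
    {
        $this->setName('app:generate-mappings')
             ->setDescription('Regenerates Doctrine mapping files from the configuration database');
    }

    protected function execute(InputInterface $input, OutputInterface $output)
    {
        // read the (entity_name, table_name) pairs from the default database
        $rows = $this->getContainer()->get('database_connection')
            ->fetchAll('SELECT entity_name, table_name FROM entity_table_map');

        foreach ($rows as $row) {
            $shortName = basename(str_replace('\\', '/', $row['entity_name']));
            $mapping   = array($row['entity_name'] => array('type' => 'entity', 'table' => $row['table_name']));

            file_put_contents(
                sprintf('src/AppBundle/Resources/config/doctrine/%s.orm.yml', $shortName),
                Yaml::dump($mapping, 4)
            );
        }

        $output->writeln('Mappings regenerated; now run: php app/console cache:clear --env=prod');
    }
}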
I'm developing a site with Symfony 2.4 which stores some "semi-static" info in the database (such as addresses, telephones, social media URLs, etc.) to allow the client to modify that data through the backend.
The site works fine, but I think there should be some way to reduce the database accesses needed to retrieve that data on every request (because it is printed on all pages).
Is there any way to cache that data? Is it good practice to store it in the user session the first time they enter the site?
Thanks.
You could use APC, for instance.
Assuming you have the php-apc extension installed and enabled (you can check it in phpinfo), this is all you need to do:
In your config_prod.yml (you don't want results to be cached in the dev environment):
doctrine:
    orm:
        metadata_cache_driver: apc
        result_cache_driver: apc
        query_cache_driver: apc
and then in your query:
$queryBuilder
    (...)
    ->getQuery() // useQueryCache()/useResultCache() live on the Query object, not on the QueryBuilder
    ->useQueryCache(true)
    ->useResultCache(true)
    (...)
The first time you make this query, it will fetch the data from the database. The next time, it will fetch the data from the cache. You can also set the lifetime of the cache: example
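As a brief illustration (the cache id is just an example), Doctrine 2's useResultCache() accepts an optional lifetime in seconds and a cache id as its second and third arguments:

$query->useResultCache(true, 3600, 'semi_static_contact_info'); // keep the result cached for one hour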
EDIT: The above link points to symfony 1.x documentation; however, the usage of useQueryCache and useResultCache is the same in symfony 1.x and Symfony 2.x.
For more detailed documentation on Symfony2 Doctrine configuration, check this link, as @Francesco Casula mentioned.
I'm very new to the concept of "caching", so excuse me if my question is too simple.
So, I'm using CodeIgniter (a PHP framework) and it supports page caching, simply by doing this: $this->output->cache(n) // n: number of minutes to remain cached
(I think) CodeIgniter's caching will store any requested page in a cache file and serve that file immediately when the page is requested again.
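For context, a minimal CodeIgniter controller using that call might look like this (the controller and view names are made up):

class Page extends CI_Controller
{
    public function index()
    {
        // cache the rendered output of this page for 10 minutes
        $this->output->cache(10);

        $this->load->view('home');
    }
}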
There's also a third-party application called Varnish Cache; it sits between Apache and the client, caches requested pages and re-sends them when needed. Isn't that the same thing CodeIgniter does, or is it different?
Wouldn't it be a waste to cache each page twice, by both CodeIgniter and Varnish?
Assuming they do the exact same thing (cache pages and re-send them to the user), which one is more efficient for dynamic (database-driven) websites?
On the surface, they do the same thing; however, there are appropriate uses for different levels of cache.
A cache like Varnish, that sits between the web server and the application, provides very high performance. You use it for static content like CSS, static pages and dynamic content that changes very rarely.
An application cache provides a less performant but far more flexible option. Usually you can cache by time, but also by application/request variables like "current user". This allows you to provide a state-dependent cache with much finer control. For example, you could cache an object's detail page by its last modified time in the database.
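A small sketch of that last idea (the $cache object with fetch()/save() stands in for whatever application cache you use, and the rendering helper is hypothetical):

// the key changes whenever the row's last modified time changes, so stale entries simply stop being hit
$cacheKey = sprintf('product_detail_%d_%s', $product->getId(), $product->getUpdatedAt()->format('YmdHis'));

$html = $cache->fetch($cacheKey);
if (false === $html) {
    $html = renderProductDetailPage($product); // hypothetical rendering helper
    $cache->save($cacheKey, $html);
}

echo $html;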
Hey! First post for me, but I'm a long-time reader of Stack Overflow.
Anyway, I've got a tricky problem which is getting on my nerves. It's a question about how configurable a DIC (Dependency Injection Container) should be. I'm working on a session handler for the framework we are setting up. The session handler depends on a storage service and some simple configuration parameters, and it is the one that generates a session id with a salt.
The session handler is loaded by our container, which takes all the settings either as a configurator object or as an array of parameters. The container checks the settings for which storage service is called for, loads that service and injects it into the session handler. The storage service does not take any constructor settings at the moment.
The session handler in its turn generates the session id from the settings it got injected, and from there injects the session id and other settings relevant to that storage service.
My questions are:
Is it proper to let the session handler inject settings into the storage service? It's like a two-stage rocket.
Should I inject the session id and other parameters from the start, when I load the storage service in the container? That in the end leaves me with the problem of generating the session id and having to rely on the storage service to do it.
What problems would arise from either of the above proposals?
Am I breaking anything "holy" :)
I am a bit confused by this part:
"The session handler in its turn generates the session id from the settings it got injected, and from there injects the session id and other settings relevant to that storage service."
You already said that the session handler is dependent on the storage service. It would be a bad idea for the storage service to also have some dependency on the session handler, because then your initialization process is way more complex and error-prone than it needs to be.
So here's the question: what settings does the storage service need from the session handler? Can you remove any or all of these dependencies from the storage service? If so, you should do so.
As a practical example: let's say the storage service supports storage of data as named blobs (which may be implemented as files, or as rows in a database table, or whatever else). You use a name dependent on the session id to store the session data. What you need to do is not tell the storage service what the session id is, but have the session handler remember it instead and provide that information to the storage service only whenever it uses storage functionality.
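A small sketch of that idea (the interfaces and method names are assumptions, not part of any particular framework):

interface BlobStorage
{
    public function write($name, $data);
    public function read($name);
}

class SessionHandler
{
    private $storage;
    private $sessionId;

    public function __construct(BlobStorage $storage, array $settings)
    {
        $this->storage = $storage;
        // the handler owns the id; the storage service never needs to know about sessions
        $this->sessionId = sha1($settings['salt'] . uniqid('', true));
    }

    public function save(array $data)
    {
        // the session id only shows up as part of the blob name at the moment of use
        $this->storage->write('session_' . $this->sessionId, serialize($data));
    }

    public function load()
    {
        $raw = $this->storage->read('session_' . $this->sessionId);

        return false === $raw ? array() : unserialize($raw);
    }
}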
In other words, try to keep your data grouped logically. Is there a specific problem that prevents you from doing so?