Use redis to cache duplicate requests in symfony2 - php

This is my current setup:
snc_redis:
    clients:
        default:
            type: predis
            alias: cache
            dsn: "redis://127.0.0.1"
    doctrine:
        metadata_cache:
            client: cache
            entity_manager: default
            document_manager: default
        result_cache:
            client: cache
            entity_manager: [bo, aff, fs]
        query_cache:
            client: cache
            entity_manager: default
I have an API which gets multiple duplicate requests (usually in quick succession), can I use this setup to send back a cached response on duplicate request? Also is it possible to set cache expiry?

From the config sample you provided I'm guessing you want to cache the Doctrine results rather than the full HTTP responses (although the latter is possible, see below).
If so, the easiest way is to tell each Doctrine query, as you create it, to use the result cache you've configured above to use Redis.
$qb = $em->createQueryBuilder();
// do query things
$query = $qb->getQuery();
$query->useResultCache(true, 3600, 'my_cache_id');
This will cache the results for that query for an hour under your cache ID. Clearing the cache is a bit of a faff:
$cache = $em->getConfiguration()->getResultCacheImpl();
$cache->delete('my_cache_id');
If you want to cache full responses - i.e. you do some processing in-app which takes a long time - then there are numerous ways of doing that. Serializing and popping it into redis is possible:
$myResults = $service->getLongRunningResults();
$serialized = serialize($myResults);
$redisClient = $container->get('snc_redis.default');
$redisClient->setex('my_id', 3600, $serialized); // setex(key, seconds, value)
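On a duplicate request you would then check Redis before doing the work. A sketch, assuming the same predis client and key as above, and that predis' get() returns null on a miss:

```php
// Sketch: short-circuit duplicate requests with the cached payload.
// Assumes the 'snc_redis.default' predis client and 'my_id' key from above.
$redisClient = $container->get('snc_redis.default');

$cached = $redisClient->get('my_id');
if ($cached !== null) {
    // Cache hit: skip the long-running work entirely
    return unserialize($cached);
}

// Cache miss: do the work, then store it with a one-hour expiry
$myResults = $service->getLongRunningResults();
$redisClient->setex('my_id', 3600, serialize($myResults));

return $myResults;
```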
Alternatively look into dedicated HTTP caching solutions like varnish or see the Symfony documentation on HTTP caching.
Edit: The SncRedisBundle provides its own version of Doctrine's CacheProvider. So whereas in your answer you create your own class, you could also do:
my_cache_service:
    class: Snc\RedisBundle\Doctrine\Cache\RedisCache
    calls:
        - [ setRedis, [ "@snc_redis.default" ] ]
This will do almost exactly what your class is doing. So instead of $app_cache->get('id') you do $app_cache->fetch('id'). This way you can switch out the backend for your cache without changing your app class, just the service description.
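Usage then goes through Doctrine's CacheProvider API. A sketch, where 'my_cache_service' is the service id from the snippet above and computeExpensiveResult() is a hypothetical helper:

```php
// Sketch: using the bundle-provided cache service via Doctrine's CacheProvider API.
$cache = $container->get('my_cache_service');

$data = $cache->fetch('id');          // fetch() returns false on a cache miss
if ($data === false) {
    $data = computeExpensiveResult(); // hypothetical expensive work
    $cache->save('id', $data, 3600);  // third argument is the TTL in seconds
}
```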

In the end I created a cache manager and registered it as a service called app_cache.
use Predis;

class CacheManager
{
    protected $client;

    public function __construct()
    {
        $this->client = new Predis\Client();
    }

    /**
     * @return Predis\Client
     */
    public function getInstance()
    {
        return $this->client;
    }
}
In the controller I can then md5 the request_uri
$id = md5($request->getRequestUri());
Check whether it exists; if it does, return the $result:
if ($result = $app_cache->get($id)) {
    return $result;
}
If it doesn't, do whatever is needed and save the response for next time:
$app_cache->set($id,$response);
To set the expiry, use the third and fourth parameters: 'ex' for seconds, 'px' for milliseconds.
$app_cache->set($id, $response, 'ex', 3600);
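Put together, the controller flow might look like this (a sketch; doExpensiveWork() is a hypothetical method, and since a Response object isn't a plain string, caching its content is safer than caching the object itself):

```php
// Sketch of the full duplicate-request flow, using the CacheManager service above.
$client = $this->get('app_cache')->getInstance();

$id = md5($request->getRequestUri());

// Duplicate request: replay the cached body
if ($cached = $client->get($id)) {
    return new Response($cached);
}

// First request: build the response, then cache its body for an hour
$response = $this->doExpensiveWork(); // hypothetical expensive handler
$client->set($id, $response->getContent(), 'ex', 3600);

return $response;
```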

Related

Laravel cache returns corrupt data (redis driver)

I have an API written in Laravel. There is the following code in it:
public function getData($cacheKey)
{
    if (Cache::has($cacheKey)) {
        return Cache::get($cacheKey);
    }

    // if cache is empty for the key, get data from external service
    $dataFromService = $this->makeRequest($cacheKey);
    $dataMapped = array_map([$this->transformer, 'transformData'], $dataFromService);
    Cache::put($cacheKey, $dataMapped);

    return $dataMapped;
}
In getData(), if the cache contains the necessary key, the data is returned from the cache.
If the cache does not have the key, the data is fetched from the external API, processed, placed into the cache, and then returned.
The problem is: when there are many concurrent requests to the method, data is corrupted. I guess, data is written to cache incorrectly because of race conditions.
You seem to be experiencing a critical-section problem. But here's the thing: Redis operations are atomic; however, Laravel does its own checks before calling Redis.
The major issue here is that all concurrent requests will trigger the external request, and then all of them will write the results to the cache (which is definitely not good). I would suggest implementing a simple mutual-exclusion lock around your code.
Replace your current method body with the following:
public function getData($cacheKey)
{
    $mutexKey = "getDataMutex";
    if (!Redis::setnx($mutexKey, true)) {
        // Already running: either busy-wait until the cache key is ready,
        // or fail this request and assume another one will succeed.
        // Definitely don't trust what the cache says at this point.
    }

    $value = Cache::rememberForever($cacheKey, function () use ($cacheKey) { // convenience method; it doesn't change anything
        $dataFromService = $this->makeRequest($cacheKey);
        $dataMapped = array_map([$this->transformer, 'transformData'], $dataFromService);

        return $dataMapped;
    });

    Redis::del($mutexKey);

    return $value;
}
setnx is a native Redis command that sets a value only if it doesn't already exist. It does this atomically, so it can be used to implement a simple locking mechanism, but (as mentioned in the manual) it will not work if you're using a Redis cluster. For that case the Redis manual describes a method to implement distributed locks.
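One caveat with the sketch above: if the process dies between setnx and del, the lock is never released. Giving the lock key its own expiry avoids that. A sketch using the atomic SET with NX and EX options, assuming the predis client under Laravel's Redis facade; the 30-second lock lifetime is an assumption:

```php
use Illuminate\Support\Facades\Redis;

// Acquire the lock only if it doesn't already exist (NX), and let Redis
// expire it after 30 seconds (EX) so a crashed worker can't hold it forever.
$acquired = Redis::set('getDataMutex', 1, 'EX', 30, 'NX');

if (!$acquired) {
    // Another request holds the lock: wait for the cache, or fail fast.
}
```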
In the end I came to the following solution: I use the retry() helper from Laravel 5.5 to re-read the cache value at one-second intervals until it has been written properly.
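That retry()-based fallback can be sketched like this (retry(times, callback, sleepMilliseconds) is the Laravel 5.5 helper; the five attempts and the exception message are assumptions):

```php
// Sketch: poll the cache until the concurrent request has written the value.
// retry() re-runs the callback whenever it throws, sleeping 1000 ms in between.
$value = retry(5, function () use ($cacheKey) {
    if (!Cache::has($cacheKey)) {
        throw new \RuntimeException('cache not ready yet');
    }

    return Cache::get($cacheKey);
}, 1000);
```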

Doctrine Result caching with Memcached

I've just installed Memcached and I'm trying to use it to cache the results of various queries done with Doctrine ORM (Doctrine 2.4.8+, Symfony 2.8+).
My app/config/config_prod.yml has this:
doctrine:
    orm:
        metadata_cache_driver: memcached
        result_cache_driver: memcached
        query_cache_driver: memcached
And I tried useResultCache() on two queries like this (I just replaced the cache id here for the example): return $query->useResultCache(true, 300, "my_cache_id")->getArrayResult();. These are native (SQL) queries due to their complexity, but the method is available for any query (class AbstractQuery), so I assume it should work.
Unfortunately, it doesn't. Every time I refresh the page, if I've just made a change in the database, the change is displayed. I've checked the Memcached stats and there are still some cache hits, but I don't really see how, given the above.
Does anyone have an idea why the cache doesn't seem to be used here to fetch the supposedly cached results? Did I misunderstand something, or is the TTL somehow being ignored?
There's no error generated, and the memcached log is empty.
As requested by @nifr, here is the layout of the code that creates the two native queries I put a Memcached test on:
$rsm = new ResultSetMapping;
$rsm->addEntityResult('my_entity_user', 'u');
// some $rsm->addFieldResult('u', 'column', 'field');
// some $rsm->addScalarResult('column', 'alias');

$sqlQuery = 'SELECT
    ...
    FROM ...
    INNER JOIN ... ON ...
    INNER JOIN ... ON ...
    -- some more joins
    WHERE condition1
    AND condition2';

// some conditions added depending on params passed to this function

$sqlQuery .= '
    AND (fieldX = (subrequest1))
    AND (fieldY = (subrequest2))
    AND condition3
    AND condition4
    GROUP BY ...
    ORDER BY ...
    LIMIT :nbPerPage
    OFFSET :offset
';

$query = $this->_em->createNativeQuery($sqlQuery, $rsm)
    // some ->setParameter('param', value)
;

return $query->useResultCache(true, 300, "my_cache_id")->getArrayResult();
So, it seems that for some reason Doctrine doesn't manage to get the ResultCacheDriver. I tried to set it before calling useResultCache(), but I got an exception: Error: Call to a member function get() on null.
I've decided to call Memcached directly instead. I guess I'll do this kind of thing in controllers and repositories, depending on my needs. After some tests it works perfectly.
Here is basically what I do:
$cacheHit = false;
$cacheId = md5("my_cache_id"); // generate a hash for your cache id

// Check that the Memcached class exists (it may not be installed in a dev environment, for instance)
if (class_exists('Memcached')) {
    $cache = new \Memcached();
    $cache->addServer('localhost', 11211);
    $cacheContent = $cache->get($cacheId);

    // Check whether the content is already cached
    if ($cacheContent != false) {
        // Content cached: that's a cache hit
        $content = $cacheContent;
        $cacheHit = true;
    }
}

// No cache hit? Do our stuff and set the cache content for future requests
if ($cacheHit == false) {
    // Do the stuff you want to cache here and put it in a variable, $content for instance
    if (class_exists('Memcached')) {
        $cache->set($cacheId, $content, time() + 600); // here the cache will expire in 600 seconds
    }
}
I'll probably put this in a service. Not sure yet what the "best practice" is for this kind of thing.
Edit: I made it a service. But it only works with native SQL, so the problem remains unsolved.
Edit²: I've found a working solution to the null issue (meaning Doctrine couldn't find Memcached). The bit of code:
$memcached = new \Memcached();
$memcached->addServer('localhost', 11211);

$doctrineMemcached = new \Doctrine\Common\Cache\MemcachedCache();
$doctrineMemcached->setMemcached($memcached);

$query
    ->setResultCacheDriver($doctrineMemcached)
    ->useResultCache(true, 300);
Now I would like to know what to put in config_prod.yml so that I can just use the useResultCache() function.
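For what it's worth, DoctrineBundle can build the Memcached connection itself if you give it a host and port instead of the bare driver name. An untested sketch of what config_prod.yml might look like under that assumption:

```yaml
doctrine:
    orm:
        result_cache_driver:
            type: memcached
            host: localhost
            port: 11211
```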

Laravel: Use Memcache instead of Filesystem

Whenever I load a page, I can see Laravel reading a great amount of data from the /storage folder.
Generally speaking, dynamic reading and writing to our filesystem is a bottleneck. We are using Google App Engine, and our storage is Google Cloud Storage, which means that every write or read is a "remote" API request. Google Cloud Storage is fast, but it feels slow when Laravel makes up to 10-20 Cloud Storage calls per request.
Is it possible to store the data in Memcache instead of in the /storage directory? I believe this would give our systems much better performance.
NB. Both Session and Cache use Memcache, but compiled views and meta are stored on the filesystem.
In order to store compiled views in Memcache you'd need to replace the storage that Blade compiler uses.
First of all, you'll need a new storage class that extends Illuminate\Filesystem\Filesystem. The methods that BladeCompiler uses are listed below - you'll need to make them use Memcache.
exists
lastModified
get
put
A draft of this class is below, you might want to make it more sophisticated:
class MemcacheStorage extends Illuminate\Filesystem\Filesystem
{
    protected $memcached;

    public function __construct()
    {
        $this->memcached = new Memcached();
        $this->memcached->addServer(Config::get('view.memcached_host'), Config::get('view.memcached_port'));
    }

    public function exists($key)
    {
        return !empty($this->get($key));
    }

    public function get($key)
    {
        $value = $this->memcached->get($key);

        return $value ? $value['content'] : null;
    }

    public function put($key, $value)
    {
        return $this->memcached->set($key, ['content' => $value, 'modified' => time()]);
    }

    public function lastModified($key)
    {
        $value = $this->memcached->get($key);

        return $value ? $value['modified'] : null;
    }
}
Second, add the Memcache config to your config/view.php:
'memcached_host' => 'localhost',
'memcached_port' => 11211,
The last thing you'll need to do is overwrite the blade.compiler service in one of your service providers, so that it uses your brand-new Memcached storage:
$app->singleton('blade.compiler', function ($app) {
    $cache = $app['config']['view.compiled'];
    $storage = $app->make(MemcacheStorage::class);

    return new BladeCompiler($storage, $cache);
});
That should do the trick.
Please let me know if you spot any typos or errors; I haven't had a chance to run it.

Access Profiler without a request on a PHPUnit Symfony2 project

How can I access the profiler in a unit-test context where there is no request?
In my case, I'm writing tests for a data access layer that uses a DBAL (Doctrine 2 here), and I want to assert that only a certain number of queries are made, for performance reasons. I don't want to involve HTTP requests in this scenario.
If I just run the code below, the collectors don't collect anything, and on the last line an exception is thrown because getQueryCount() tries to access a data member that is null:
public function testQueryCount()
{
    /* @var $profile Profiler */
    $em = $this->getContainer()->get('doctrine.orm.entity_manager');
    $profile = $this->getContainer()->get('profiler');
    $profile->enable();

    $eventDataAccess = new EventDataAccess($em);
    $e = $eventDataAccess->DoSomething(1);

    $a = $profile->get('db');
    $numMysqlQueries = $a->getQueryCount();

    $this->assertEquals(1, $numMysqlQueries);
}
I've got the profiler configured in config_test.yml with:
doctrine:
    dbal:
        connections:
            default:
                logging: true

framework:
    profiler:
        only_exceptions: false
        collect: true
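An alternative that sidesteps the profiler entirely (it normally only collects during an HTTP request) is Doctrine's DebugStack SQL logger, attached straight to the connection. A sketch reusing the names from the test above; EventDataAccess and DoSomething are the question's own classes:

```php
use Doctrine\DBAL\Logging\DebugStack;

public function testQueryCount()
{
    $em = $this->getContainer()->get('doctrine.orm.entity_manager');

    // Record every SQL statement executed through this connection
    $logger = new DebugStack();
    $em->getConnection()->getConfiguration()->setSQLLogger($logger);

    $eventDataAccess = new EventDataAccess($em);
    $eventDataAccess->DoSomething(1);

    // Each executed query appears as an entry in $logger->queries
    $this->assertCount(1, $logger->queries);
}
```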

Can I give a Symfony 2 service the ability to read and write cookies?

I have a service that should be able to read and write cookies. To do that in a Symfony-like manner, the service must have access to the request and the response. I can imagine that it's possible to pass the request to the service through the service configuration, but I don't know how. I'm not sure how I'm going to give the service the ability to write cookies though. Any suggestions on how to do this would be appreciated.
Note: I really don't want to have to manually pass variables to the service every time I use it.
I think you have a couple of options; it really depends on what you are trying to store in the cookie and at what point in the process you need to do the work.
Your first option is to create a service that has access to the request and creates a response, which it returns.
Define your service in services.yml:
services:
    a_service:
        class: Acme\DemoBundle\RequestServiceClass
        arguments: ["@request"]
        scope: request
Your class:
// Acme\DemoBundle\RequestServiceClass.php
use Symfony\Component\HttpFoundation\Cookie;
use Symfony\Component\HttpFoundation\Request;
use Symfony\Component\HttpFoundation\Response;

class RequestServiceClass
{
    private $request;

    public function __construct(Request $request)
    {
        $this->request = $request;
    }

    public function doSomething()
    {
        // get cookie
        $value = $this->request->cookies->get('cookie');

        // create cookie
        $cookie = new Cookie('cookie', 'value', time() + 3600 * 24 * 7);

        // create response
        $response = new Response();

        // set cookie in response
        $response->headers->setCookie($cookie);

        return $response;
    }
}
Then to use your service you do something like this:
public function myAction()
{
    $response = $this->get('a_service')->doSomething();

    return $response;
}
The other way of doing it is to create a kernel.response listener. It's done like this:
Add a service to services.yml:
services:
    a_listener:
        class: Acme\DemoBundle\MyListener
        tags:
            - { name: kernel.event_listener, event: kernel.response, method: onKernelResponse }
Your listener class looks like this:
// Acme\DemoBundle\MyListener.php
use Symfony\Component\HttpFoundation\Cookie;
use Symfony\Component\HttpKernel\Event\FilterResponseEvent;

class MyListener
{
    public function onKernelResponse(FilterResponseEvent $event)
    {
        $response = $event->getResponse();
        $request = $event->getRequest();

        // get cookie
        $value = $request->cookies->get('cookie');

        // create cookie
        $cookie = new Cookie('cookie', 'value', time() + 3600 * 24 * 7);

        // set cookie in response
        $response->headers->setCookie($cookie);
    }
}
The difference between the two methods is what information is available at processing time: the service has access to everything you pass it, while the response listener has access to everything in the request and the response. You could, for example, check whether the response is as expected (i.e. format or content) and then set a cookie accordingly.
Some links to useful documentation:
the kernel.response event
the HttpKernel component
service scopes
HTTP stops at the controllers and the listeners of HTTP-related events. A service (which is not a controller or an HTTP event listener) should not set or get cookies. Instead, the controller should handle the cookie: it reads the value from the cookie before calling the service's methods, which accept it as an argument and return its new value (or use a by-reference argument). This way your service is decoupled from HTTP and can easily be re-used and tested.
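A sketch of that decoupled approach (the cookie name, service id, and process() method are all illustrative): the controller reads the cookie, hands the plain value to the service, and writes the returned value back into the response:

```php
use Symfony\Component\HttpFoundation\Cookie;
use Symfony\Component\HttpFoundation\Request;
use Symfony\Component\HttpFoundation\Response;

public function myAction(Request $request)
{
    // The controller owns all HTTP concerns
    $value = $request->cookies->get('my_cookie');

    // The service sees only plain values, never the Request/Response
    $newValue = $this->get('my_service')->process($value); // hypothetical service

    $response = new Response('ok');
    $response->headers->setCookie(
        new Cookie('my_cookie', $newValue, time() + 3600 * 24 * 7)
    );

    return $response;
}
```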
