Question
How can I programmatically tell a Guzzle client to delete or refresh a cache folder?
Or even better, how can I target a specific folder X or Y?
The reason is not important, but here is an example for better understanding.
Let's say we have a book API (which is mine) that another service calls with a Guzzle client, caching the book prices for 1 hour. The cache is in the folder
framework/cache/data/GuzzleFileCache/public/books/prices
curl -X GET https://example.com/books/prices
Now, someone updates the book prices.
If my main service calls books/prices now, it will get the old prices because the hour has not passed yet.
So I have a system where, when a book price is updated, I can fire an event to my main service and tell it whatever I want. Unsurprisingly, what I want my main service to do is invalidate the cache at framework/cache/data/GuzzleFileCache/public/books/prices when that event is fired.
How can I do that?
Code
First of all, note that I'm actually using Lumen, but that shouldn't be a problem since it works the same as Laravel here.
So let's say I have a specific folder at framework/cache/data/GuzzleFileCache/public/blogs and at a certain point I need to invalidate this cache (for whatever reason).
I know you can just delete the folder manually, and therefore probably do it programmatically too, but is there a better way?
This is my current code to create and use the cache with a Guzzle client:
use GuzzleHttp\HandlerStack;
use Kevinrob\GuzzleCache\CacheMiddleware;
use Kevinrob\GuzzleCache\Storage\Psr6CacheStorage;
use Kevinrob\GuzzleCache\Strategy\GreedyCacheStrategy;
use Symfony\Component\Cache\Adapter\FilesystemAdapter;

// Cache lifetime in seconds (1 hour)
$ttl = 3600;
// Create a HandlerStack
$stack = HandlerStack::create();
// Namespace for the cache items (first argument of FilesystemAdapter)
$requestCacheFolderName = 'framework/cache/data';
// Directory the cache files are written to (third argument of FilesystemAdapter)
$cacheFolderPath = 'GuzzleFileCache/public/blogs';
// Instantiate the cache storage: a PSR-6 file system cache
$cache_storage = new Psr6CacheStorage(
    new FilesystemAdapter(
        $requestCacheFolderName, // namespace
        $ttl,                    // default lifetime in seconds
        $cacheFolderPath         // cache directory
    )
);
// Push the cache middleware onto the stack
$stack->push(
    new CacheMiddleware(
        new GreedyCacheStrategy(
            $cache_storage,
            $ttl // the TTL in seconds
        )
    ),
    'greedy-cache'
);
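For completeness, here is how that stack would typically be attached to a Guzzle client (a minimal sketch; the base URI is a placeholder):
use GuzzleHttp\Client;

// Client whose requests go through the caching handler stack above
$client = new Client([
    'handler'  => $stack,
    'base_uri' => 'https://example.com',
]);

// Identical GET requests within the TTL are served from the file cache
$response = $client->get('/books/prices');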
So one very ugly way would be to just delete that folder; here is some pseudocode (note that PHP's rmdir() only removes empty directories, so a real implementation would have to delete the contents recursively):
rmdir('framework/cache/data/GuzzleFileCache/public/blogs');
But I'm looking for something like this instead:
use Illuminate\Support\Facades\Http; // <--- laravel guzzle client
Http::cacheInvalidate('framework/cache/data/GuzzleFileCache/public/blogs');
Dependencies
"php": "^8.1.3",
"flipbox/lumen-generator": "^9.1",
"guzzlehttp/guzzle": "^7.4",
"kevinrob/guzzle-cache-middleware": "^4.0",
"laravel/lumen-framework": "^9.0",
"symfony/cache": "^6.1"
References
HTTP Client
Kevinrob/guzzle-cache-middleware
Well, I could not invalidate the cache by tags, but I could at least clear all of it together, which is already something. Here is some code in case it helps someone.
use Symfony\Component\Cache\Adapter\FilesystemAdapter;
use Symfony\Component\Cache\Adapter\TagAwareAdapter;

// Instantiate the cache storage: a tag-aware wrapper around the same
// PSR-6 file system cache used by the Guzzle middleware
$cache = new TagAwareAdapter(
    // Adapter for cached items ($requestCacheFolderName and $cacheFolderPath
    // are the same values as in the question)
    new FilesystemAdapter(
        $requestCacheFolderName,
        1, // default lifetime in seconds
        $cacheFolderPath
    )
);
// Wipe the whole pool
$cache->clear();
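Since every cached folder corresponds to a namespace/directory pair of its own FilesystemAdapter, a more targeted variant is to point a fresh adapter at the same pair and call clear() on just that pool. A minimal sketch, assuming the same values as in the question (the helper name is made up):
use Symfony\Component\Cache\Adapter\FilesystemAdapter;

// Hypothetical helper: wipes only the pool backing one cached folder
function invalidateGuzzleCache(string $namespace, string $directory): void
{
    $pool = new FilesystemAdapter($namespace, 0, $directory);
    $pool->clear(); // removes every item stored under this namespace/directory
}

invalidateGuzzleCache('framework/cache/data', 'GuzzleFileCache/public/blogs');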
Related
(Laravel 8, PHP 8)
Hi. I have a bunch of data in the PHP APC cache that I can access across my Laravel application with the apcu commands.
I decided I should fire an async job to process some of that data for the user during a session and throw the results in the database.
So I made a middleware that fires (correctly) when the user accesses the page, and (correctly) dispatches a job called "MemoryProvider".
The dispatch command promptly instantiates the MemoryProvider class, running its constructor, and then queues the job for execution.
About a second later, the queue is processed and the handle method in MemoryProvider is run.
I check the content of the PHP cache with apcu_cache_info() and apcu_exists() in the middleware, in the MemoryProvider constructor, and in its handle method.
The problem:
The PHP cache appears populated throughout my Laravel app.
The PHP cache appears populated in the middleware.
The PHP cache appears populated in the job's constructor.
The PHP cache appears EMPTY in the job's handle method.
Here's the middleware:
public function handle($request, Closure $next)
{
    $a = apcu_cache_info();      // 250,000 entries
    $b = apcu_exists('the:2:0'); // true

    MemoryProvider::dispatch($request);

    return $next($request);
}
Here's the job's (MemoryProvider) constructor:
public function __construct(Request $request)
{
    $this->request = $request->all();

    $a = apcu_cache_info();      // 250,000 entries
    $b = apcu_exists('the:2:0'); // true
}
And here's the job's (MemoryProvider) handle method:
public function handle()
{
    $a = apcu_cache_info();      // 0 entries
    $b = apcu_exists('the:2:0'); // false
}
Question: is this a PHP limitation or a Laravel problem? And how can I access the content of my PHP cache in an async class?
P.S. I have apc.enable_cli=1 in php.ini.
I found the answer. Apparently, it's a PHP limitation.
According to a good explanation given by gview back in 2017, a CLI process doesn't share state or memory with other CLI processes, so the APC memory space will never be shared this way.
I did find a workaround for my specific case: instead of running an async process to handle the heavy work in the background, I can get the same effect by simply issuing an AJAX request. The request is handled independently by PHP, with full access to the APC cache, and I can populate my database and let the user know when it's all done (or gradually done, as is the case).
I wish I had thought of this sooner.
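For illustration, the AJAX workaround boils down to something like the sketch below (the route, cache key, and table name are placeholders, not the actual application code):
use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\Route;

// Hypothetical endpoint called via AJAX instead of dispatching a queued job.
// It runs inside a normal web request, so it sees the same APCu memory space.
Route::post('/process-memory', function () {
    if (apcu_exists('the:2:0')) {
        DB::table('results')->insert([
            'payload' => json_encode(apcu_fetch('the:2:0')),
        ]);
    }

    return response()->json(['done' => true]);
});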
I'm having an issue in a Laravel Zero project. I'm working on a command that handles direct file transfers between two disks: one SFTP and one local.
I have configured both correctly and tested that I'm able to transfer files between them using the Storage code below. My issue pops up when I try to do this using the spatie/async package to create a pool of concurrent transfers (or maybe just the way I'm trying to do it).
$pool = Pool::create()
->concurrency($limit);
$progress = $this->output->createProgressBar(count($file_list)); // expects the number of steps
if(!Storage::disk('local')->exists($local_folder_path)) {
Storage::disk('local')->makeDirectory($local_folder_path);
}
foreach($file_list as $filename => $remote_path) {
$pool->add(function() use ($remote_path, $filename, $local_folder_path) {
Storage::disk('local')
->writeStream(
"{$local_folder_path}/{$filename}",
Storage::disk('remote')->readStream($remote_path)
);
return $filename;
})->then(function($filename) use (&$progress) {
$this->info("{$filename} downloaded");
$progress->advance();
})->catch(function($exception) {
$this->error($exception);
});
}
$pool->wait();
$progress->finish();
By the way, the error, RuntimeException: a facade root has not been set, is being printed to my console via the catch() handler for the first item in the pool. I did discover that much.
I've searched for answers, but none of the articles and other SO/Stack Exchange posts I've come across seemed even similar to whatever's causing my issue.
Thanks in advance for any help.
The problem is that your callback (child process) is running without any setup.
A Task is useful in situations where you need more setup work in the child process. Because a child process is always bootstrapped from nothing, chances are you'll want to initialise eg. the dependency container before executing the task.
The facades are set up by the kernel, which runs \LaravelZero\Framework\Bootstrap\RegisterFacades::class.
You can create an instance of the kernel and run its bootstrap method to have your facades set up properly:
$pool->add(function() use ($remote_path, $filename, $local_folder_path) {
    // Bootstrap the app inside the child process so the facades have a root
    $app = require __DIR__.'/../../bootstrap/app.php';
    $kernel = $app->make(\Illuminate\Contracts\Console\Kernel::class);
    $kernel->bootstrap();

    Storage::disk('local')
        ->writeStream(
            "{$local_folder_path}/{$filename}",
            Storage::disk('remote')->readStream($remote_path)
        );

    return $filename;
})
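Alternatively, the setup can live in a dedicated spatie/async Task, as the quoted docs hint. A rough sketch (the class name and property layout are made up):
use Spatie\Async\Task;

// Hypothetical task that bootstraps the app before each transfer
class TransferTask extends Task
{
    protected $remotePath;
    protected $localPath;

    public function __construct(string $remotePath, string $localPath)
    {
        $this->remotePath = $remotePath;
        $this->localPath = $localPath;
    }

    public function configure()
    {
        // Runs in the child process before run(): bootstrap the kernel
        // here so the Storage facade has a root
        $app = require __DIR__.'/../../bootstrap/app.php';
        $app->make(\Illuminate\Contracts\Console\Kernel::class)->bootstrap();
    }

    public function run()
    {
        \Illuminate\Support\Facades\Storage::disk('local')->writeStream(
            $this->localPath,
            \Illuminate\Support\Facades\Storage::disk('remote')->readStream($this->remotePath)
        );

        return $this->localPath;
    }
}

// Usage: $pool->add(new TransferTask($remote_path, "{$local_folder_path}/{$filename}"));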
In my app I'm using server-sent events and have the following situation (pseudo code):
$response = new StreamedResponse();
$response->setCallback(function () {
while(true) {
// 1. $data = fetchData();
// 2. echo "data: $data";
// 3. sleep(x);
}
});
$response->send();
My SSE response class accepts a callback to gather the data (step 1), which actually performs a database query. Now to my problem: as I am trying to avoid polling the database every X seconds, I want to use Doctrine's onFlush event to set a flag that the corresponding entity has actually changed, which would then be checked within the fetchData callback. Normally I would do this by setting a flag on the current user session, but as the streaming loop constantly writes data, the session cannot be accessed within the callback. Does anybody have an idea how to resolve this problem?
BTW: I'm using Symfony 3.3 and Doctrine 2.5 - thanks for any help!
I know that this question is from a long time ago, but here's a suggestion:
Use shared memory (the PHP shm_*() functions). That way your flag isn't tied to a specific session.
Be sure to lock and unlock around access to the shared memory (I usually use a semaphore).
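A minimal sketch of that idea, assuming the sysvsem/sysvshm extensions are available (the integer keys are arbitrary placeholders):
// Shared flag between the Doctrine listener and the SSE loop
$sem = sem_get(0x1234);          // semaphore guarding the shared memory
$shm = shm_attach(0x5678, 1024); // small shared-memory segment

// Writer side (e.g. in the onFlush listener): raise the flag
sem_acquire($sem);
shm_put_var($shm, 1, true);
sem_release($sem);

// Reader side (inside the fetchData callback): check and reset the flag
sem_acquire($sem);
$changed = shm_has_var($shm, 1) && shm_get_var($shm, 1);
if ($changed) {
    shm_put_var($shm, 1, false);
}
sem_release($sem);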
I asked this question yesterday as well, but this one includes code.
Issue
My application has multiple modules and two types of user accounts. Some modules are always loaded because they are present in application.config.php; others are conditional, i.e. some are loaded for user type A and some for user type B.
After going through the documentation and questions on Stack Overflow, I understand some of the ModuleManager functionality and started implementing the logic that I thought might work.
Somehow I figured out a way to load the modules that are not present in application.config.php [SUCCESS], but their configuration is not working [THE ISSUE]. That is, if I get the ModuleManager service in the onBootstrap method and call getLoadedModules(), I get the list of all the modules correctly loaded. But if I then try to get some service from a dynamically loaded module, it throws an exception:
Zend\ServiceManager\ServiceManager::get was unable to fetch or create an instance for jobs_mapper
Please note that the factories and everything else are perfectly fine, because if I load the module from application.config.php everything works.
Similarly, when I try to access any route from a dynamically loaded module, it throws a 404 Not Found, which makes it clear that the configuration from module.config.php of these modules is not loading even though the module is loaded by the ModuleManager.
Code
In Module.php of my Application module I implemented InitProviderInterface and added a method init(ModuleManager $moduleManager), where I attach to the ModuleManager's loadModules.post event:
public function init(\Zend\ModuleManager\ModuleManagerInterface $moduleManager)
{
$eventManager = $moduleManager->getEventManager();
$eventManager->attach(\Zend\ModuleManager\ModuleEvent::EVENT_LOAD_MODULES_POST, [$this, 'onLoadModulesPost']);
}
Then in the same class I declare the method onLoadModulesPost and start loading my dynamic modules:
public function onLoadModulesPost(\Zend\ModuleManager\ModuleEvent $event)
{
/* @var $serviceManager \Zend\ServiceManager\ServiceManager */
$serviceManager = $event->getParam('ServiceManager');
$configListener = $event->getConfigListener();
$authentication = $serviceManager->get('zfcuser_auth_service');
if ($authentication->getIdentity())
{
$moduleManager = $event->getTarget();
...
...
$loadedModules = $moduleManager->getModules();
$configListener = $event->getConfigListener();
$configuration = $configListener->getMergedConfig(false);
$modules = $modulesMapper->findAll(['is_agency' => 1, 'is_active' => 1]);
foreach ($modules as $module)
{
if (!array_key_exists($module['module_name'], $loadedModules))
{
$loadedModule = $moduleManager->loadModule($module['module_name']);
//Add modules to the modules array from ModuleManager.php
$loadedModules[] = $module['module_name'];
//Get the loaded module
$module = $moduleManager->getModule($module['module_name']);
//If module is loaded succesfully, merge the configs
if (($loadedModule instanceof ConfigProviderInterface) || (is_callable([$loadedModule, 'getConfig'])))
{
$moduleConfig = $module->getConfig();
$configuration = ArrayUtils::merge($configuration, $moduleConfig);
}
}
}
$moduleManager->setModules($loadedModules);
$configListener->setMergedConfig($configuration);
$event->setConfigListener($configListener);
}
}
Questions
Is it possible to achieve what I am trying to do?
If so, what is the best way?
What am I missing in my code?
I think there is some fundamental mistake in what you are trying to do here: you are trying to load modules based on merged configuration, and therefore creating a cyclic dependency between modules and merged configuration.
I would advise against this.
Instead, if you have logic that defines which part of an application is to be loaded, put it in config/application.config.php, which is responsible for retrieving the list of modules.
At this stage though, it is too early to depend on any service, as service definition depends on the merged configuration too.
Another thing to clarify is that you are trying to take these decisions depending on whether the authenticated user (request information, rather than environment information) matches a certain criteria, and then modifying the entire application based on that.
Don't do that: instead, move the decision into the component that is to be enabled/disabled conditionally, by putting a guard in front of it.
What you're asking can be done, but that doesn't mean you should.
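For what it's worth, such a guard can be as small as a route listener that short-circuits requests for the wrong user type. A rough sketch in ZF conventions (the controller namespace and the identity check are placeholders):
use Zend\Mvc\MvcEvent;

// In Module::onBootstrap() of the module being guarded
public function onBootstrap(MvcEvent $e)
{
    $events = $e->getApplication()->getEventManager();

    $events->attach(MvcEvent::EVENT_ROUTE, function (MvcEvent $e) {
        $match = $e->getRouteMatch();

        // Only guard controllers belonging to this module (placeholder namespace)
        if (!$match || strpos($match->getParam('controller'), 'Jobs\\') !== 0) {
            return;
        }

        $auth = $e->getApplication()->getServiceManager()->get('zfcuser_auth_service');
        $identity = $auth->getIdentity();

        // Placeholder check: block the module for non-agency users
        if (!$identity || !$identity->getIsAgency()) {
            $response = $e->getResponse();
            $response->setStatusCode(403);

            return $response; // returning a response short-circuits dispatch
        }
    }, -100); // run after the route has been matched
}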
Suggesting an appropriate solution without knowing the complexity of the application you're building is difficult.
Using guards will certainly help decouple your code; however, guards alone don't address scalability and maintainability, if those are concerns.
I'd suggest using stateless token-based authentication. Instead of maintaining the validation logic in every application, write it in one common place so that every request can make use of it irrespective of the application. Choosing a reverse proxy server (Nginx) to hold the validation logic (with the help of Lua) gives you the flexibility to develop your application in any language.
More to the point, validating the credentials at the load balancer level essentially eliminates the need for the session state, you can have many separate servers, running on multiple platforms and domains, reusing the same token for authenticating the user.
Identifying the user, account type and loading different modules then becomes a trivial task. By simply passing the token information via an environment variable, it can be read within your config/application.config.php file, without needing to access the database, cache or other services beforehand.
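As a rough illustration of that last point, the module list in config/application.config.php could branch on an environment variable set by the proxy (the variable and module names below are placeholders):
// config/application.config.php (sketch)
$modules = [
    'Zend\Router',
    'Application',
];

// Hypothetical variable injected by the reverse proxy after token validation
if (getenv('ACCOUNT_TYPE') === 'agency') {
    $modules[] = 'Jobs';
    $modules[] = 'Agency';
}

return [
    'modules' => $modules,
    // ...
];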
I have several apps based on CakePHP, and this basically applies to all of them. When my debug mode is set to 0 (live mode), every time I update the database structure (new tables, new fields), as soon as my app uses those I get the default "An Internal Error Has Occurred" message. It is solved if I set debug to 1 and then use the new fields. Is there a better way to do this? I don't want to enable debugging and do a test write every time I update my database. Also, the /tmp/cache subfolders are empty, so I don't know where the cache is stored.
Here's a function I wrote to do exactly that.
function clear_cache() {
$cachePaths = array('js', 'css', 'menus', 'views', 'persistent','models');
foreach($cachePaths as $config) {
clearCache(null, $config);
}
}
It uses the clearCache function in Cake.
You need to clear all cache configs
bin/cake cache clear_all
For CakePHP 2.x, you can delete the cache directory like this:
rm -rf app/tmp/cache/
For CakePHP 2.x, place this line of code anywhere in your application to clear the model cache:
Cache::clear(false, '_cake_model_');
This is decoupled from the low-level cache engine (File, Memcache, Redis, etc), so it should work as-is.
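If you want to wipe every configured cache rather than just the model cache, you can also iterate over the configured engines; a small sketch using the CakePHP 2.x Cache API:
// Clear every cache config known to the application (CakePHP 2.x)
foreach (Cache::configured() as $configName) {
    Cache::clear(false, $configName);
}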
CakePHP 2.x Docs: Caching