I'm trying to use an external cache engine, memcached, to power my CakePHP app.
I have an AWS EC2 instance running the app and an AWS ElastiCache cluster with one node using memcached. The memcache and memcached PHP extensions are also installed and enabled.
The configuration in the app.php file is as follows:
'Cache' => [
    'default' => [
        'className' => 'File',
        'path' => CACHE,
    ],
    'elastic' => [
        'className' => 'Cake\Cache\Engine\MemcachedEngine',
        'compress' => false,
        'duration' => '+2 minutes',
        'groups' => [],
        'host' => 'yyy.euw1.cache.amazonaws.com:11211',
        'username' => null,
        'password' => null,
        'persistent' => false,
        'prefix' => 'cake_',
        'probability' => 100,
        'serialize' => 'php',
        'servers' => ['yyy.euw1.cache.amazonaws.com:11211'],
        'options' => [],
        'lock' => true,
    ],
],
To select whether or not to query the database, this condition is used:
if (($car = Cache::read('car', 'elastic')) === false) {
    $car = $this->Cars->get();
    Cache::write('car', $car, 'elastic');
}
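As an aside, the same read-or-compute pattern can be expressed with CakePHP's Cache::remember(), which wraps the read, the fallback callable, and the write in a single call. A sketch, assuming this runs in a controller; the find('all') query stands in for whatever data you actually cache:

```php
use Cake\Cache\Cache;

// Read 'car' from the 'elastic' cache config; on a miss, run the
// callable and store its return value under the same key.
$car = Cache::remember('car', function () {
    return $this->Cars->find('all')->toArray();
}, 'elastic');
```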
Unfortunately, after a long page load I get this error:
"elastic cache was unable to write to DebugKit\Cache\Engine\DebugEngine cache"
Does anyone know the origin of this error? Can someone guide me through configuring memcached for CakePHP with an external cache engine?
Thanks in advance!
Thank you for your reply.
This issue is now closed. We had to allow IP access between the EC2 instance and the ElastiCache cluster.
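For anyone hitting the same wall: the fix amounts to letting the EC2 instance's security group reach the ElastiCache node on the memcached port. A sketch with the AWS CLI (both security-group IDs below are placeholders):

```shell
# Allow inbound memcached traffic (TCP 11211) to the ElastiCache
# cluster's security group from the EC2 instance's security group.
aws ec2 authorize-security-group-ingress \
    --group-id sg-cacheclustergroup \
    --protocol tcp \
    --port 11211 \
    --source-group sg-ec2appgroup
```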
I am using Laravel Horizon (https://laravel.com/docs/6.x/horizon) to manage the queue in my application (using the Redis backend).
The jobs on the queue are ephemeral - each queued job is supposed to make an api call (to a third party) and write the response to db.
Today I noticed that lots of jobs are stuck or processing very slowly.
I am running 10 queue workers with sufficient memory on the host node. This is my config/horizon.php:
return [
    'domain' => null,
    'path' => 'horizon',
    'use' => 'default',
    'prefix' => env('HORIZON_PREFIX', 'horizon:'),
    'middleware' => ['web', 'basic.api.credential.auth'],
    'waits' => [
        'redis:default' => 60,
    ],
    'trim' => [
        'recent' => 60,
        'completed' => 60,
        'recent_failed' => 40320,
        'failed' => 40320,
        'monitored' => 40320,
    ],
    'fast_termination' => false,
    'memory_limit' => 512,
    'environments' => [
        'production' => [
            'queue-1' => [
                'connection' => 'redis',
                'queue' => ['default'],
                'balance' => 'simple',
                'processes' => 10,
                'tries' => 2,
                'timeout' => 1800, // 30 minutes
                'memory' => 512,
            ],
        ],
        // ...
    ],
];
I checked the remote API server to see if the bottleneck is there, but it's responding quite fast (less than 1 sec per API call). Checking our server, I can't see any load on it and the overall memory/CPU utilisation is quite low.
Laravel v6.18.37
PHP v7.3.21
Horizon v3.7.2
Any ideas what's causing such a huge slowdown? How do I debug this to find out what's going on?
I even tried restarting the server and it did not help. I am not seeing any timeouts / errors in the logs either.
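A couple of low-level checks that can narrow a backlog like this down (assuming the default Redis connection and the queue name above; note the horizon: prefix applies only to Horizon's own keys, not the queue itself):

```shell
# Raw backlog of the "default" queue - Laravel's Redis driver
# stores pending jobs in a list under queues:<name>.
redis-cli llen queues:default

# Jobs that a worker has reserved but not yet finished (a sorted set).
redis-cli zcard queues:default:reserved

# Force Horizon to record a metrics snapshot for its dashboard graphs.
php artisan horizon:snapshot
```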
The problem turned out to be in the database: a few tables that we use for lookups were not properly indexed. After adding indexes, the speed improved tremendously.
Sorry, false alarm.
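For context, the kind of fix described above can be sketched as a Laravel migration; the table name (lookups) and column name (external_id) here are made up for illustration:

```php
use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

class AddLookupIndexes extends Migration
{
    public function up()
    {
        Schema::table('lookups', function (Blueprint $table) {
            // Index the column the jobs filter on, so each
            // API-response write no longer triggers a full table scan.
            $table->index('external_id');
        });
    }

    public function down()
    {
        Schema::table('lookups', function (Blueprint $table) {
            $table->dropIndex(['external_id']);
        });
    }
}
```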
So I'm using beyondcode/laravel-websockets to set up a WS server, and I want to work with multiple apps, so I did this in config/websockets.php:
'apps' => [
    [
        'id' => env('A_APP_ID'),
        'name' => env('A_APP_NAME'),
        'key' => env('A_APP_KEY'),
        'secret' => env('A_APP_SECRET'),
        'path' => env('A_APP_PATH'),
        'capacity' => null,
        'enable_client_messages' => false,
        'enable_statistics' => true,
    ],
    [
        'id' => env('B_APP_ID'),
        'name' => env('B_APP_NAME'),
        'key' => env('B_APP_KEY'),
        'secret' => env('B_APP_SECRET'),
        'path' => env('B_APP_PATH'),
        'capacity' => null,
        'enable_client_messages' => false,
        'enable_statistics' => true,
    ],
],
However, I want to implement a custom handler for each app, so I've been trying this in routes/web.php:
WebSocketsRouter::webSocket('app/{appKey}/bapp', \App\WebSockets\BAppWebSocketHandler::class);

// Also tried this:
WebSocketsRouter::webSocket('app/{appKey}', \App\WebSockets\AAppWebSocketHandler::class);
// ...and created `AAppWebSocketHandler`, which does nothing but call the parent (WebSocketHandler) methods.
The problem is that it always uses one handler for all apps, despite the different routes.
Any ideas?
Thanks!
You don't have to define routes when defining multiple apps in the config. Instead, configure Echo to work with each app's separate key and secret. If you want to use a custom handler with your own logic, then remove the apps from the config. Also note that you won't get any channel or Pusher client library support, and you will have to implement your own authentication as well.
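If you do go the custom-handler route, a minimal sketch looks like the following (assuming the package's own WebSocketHandler as the base class, as in v1.x of beyondcode/laravel-websockets):

```php
namespace App\WebSockets;

use BeyondCode\LaravelWebSockets\WebSockets\WebSocketHandler;
use Ratchet\ConnectionInterface;

class BAppWebSocketHandler extends WebSocketHandler
{
    public function onOpen(ConnectionInterface $connection)
    {
        // Keep the package's connection bookkeeping...
        parent::onOpen($connection);

        // ...then add app-specific logic here (logging, auth, etc.).
    }
}
```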
I have two memcached servers. I set up the following config, expecting that my PHP app could use the surviving memcached server even if one of the two went down. But it did not work: I get a "No Memcached servers added" error when I execute Memcached's get() method.
'memcached' => [
    'driver' => 'memcached',
    'options' => [
        Memcached::OPT_CONNECT_TIMEOUT => 10,
        Memcached::OPT_DISTRIBUTION => Memcached::DISTRIBUTION_CONSISTENT,
        Memcached::OPT_SERVER_FAILURE_LIMIT => 2,
        Memcached::OPT_REMOVE_FAILED_SERVERS => true,
        Memcached::OPT_RETRY_TIMEOUT => 1,
    ],
    'servers' => [
        [
            'host' => 'xxx.0.0.1', 'port' => 11211, 'weight' => 100,
        ],
        [
            'host' => 'xxx.0.0.2', 'port' => 11211, 'weight' => 100,
        ],
    ],
],
I'm using the latest versions of the memcached server and client:
memcached 1.4.25-2ubuntu1
php-memcached version 3.0.0b1
libmemcached version 1.0.18
Do you have any ideas?
Edit 1
The "No Memcached servers added" error comes from here:
https://github.com/illuminate/cache/blob/master/MemcachedConnector.php
Edit 2
I found that the Memcached::XXX option constants are integer values, so the option values were not being passed the way I expected. I changed them as below, but the result did not change.
'options' => array('10', '1', '2', true, '1')
Edit 3
Laravel's example cache settings are mentioned here:
https://github.com/laravel/laravel/blob/master/config/cache.php#L60
Edit 4
I tried plain PHP without Laravel and found that getVersion() returns null when one of the two servers is dead.
<?php
$m = new Memcached();
$m->addServer('127.0.0.1', 11211);
$m->addServer('127.0.0.1', 11212);

$status = $m->getVersion();
if ($status == null) {
    echo "null";
} else {
    echo "not null";
}
* When both servers are alive:
  // not null
  // $status = array('127.0.0.1:11211' => '1.4.25', '127.0.0.1:11212' => '1.4.14')
* When one of the two servers is dead:
  // null
  // $status = null
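Given that behaviour, one workaround (plain PHP, not a Laravel feature) is to probe each candidate server with a short socket timeout and only add the ones that answer, so a dead node never enters the pool:

```php
$candidates = [
    ['127.0.0.1', 11211],
    ['127.0.0.1', 11212],
];

$m = new Memcached();
foreach ($candidates as list($host, $port)) {
    // One-second TCP probe; skip servers that refuse connections.
    $sock = @fsockopen($host, $port, $errno, $errstr, 1);
    if ($sock !== false) {
        fclose($sock);
        $m->addServer($host, $port);
    }
}
```

This only detects servers that are down at connect time; it does not help with servers that die mid-request.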
I was struggling with this for a couple of days, and I finally found a solution for it: https://packagist.org/packages/fingo/laravel-cache-fallback
It is quite easy to install:
composer require fingo/laravel-cache-fallback
Add the provider Fingo\LaravelCacheFallback\CacheFallbackServiceProvider::class into config/app.php.
The default fallback order is: redis, memcached, database, cookie, file, array. If you need to change the fallback order, publish the vendor config: php artisan vendor:publish --provider="Fingo\LaravelCacheFallback\CacheFallbackServiceProvider"
I'm using Laravel 5.2 with PHP 7; the requirements for the library are php: ~5.6|~7.0 and illuminate/cache: ~5.1.
Later edit: I've implemented a different way to fail over from one Memcached server to the other; if both of them are down, it falls back to the 'file' driver.
config/cache.php:
'memcached' => [
    'driver' => 'memcached',
    // To use the options we need to install the PHP driver: sudo yum install php-memcached
    'options' => [
        Memcached::OPT_CONNECT_TIMEOUT => 3,
        Memcached::OPT_DISTRIBUTION => Memcached::DISTRIBUTION_CONSISTENT,
        Memcached::OPT_SERVER_FAILURE_LIMIT => 2,
        Memcached::OPT_REMOVE_FAILED_SERVERS => true,
        Memcached::OPT_RETRY_TIMEOUT => 1,
    ],
    'servers' => [
        [
            'host' => 'xxx.xxx.xxx.xxx',
            'port' => 11211,
            'weight' => 90,
        ],
    ],
],
'memcached249' => [
    'driver' => 'memcached',
    // To use the options we need to install the PHP driver: sudo yum install php-memcached
    'options' => [
        Memcached::OPT_CONNECT_TIMEOUT => 3,
        Memcached::OPT_DISTRIBUTION => Memcached::DISTRIBUTION_CONSISTENT,
        Memcached::OPT_SERVER_FAILURE_LIMIT => 2,
        Memcached::OPT_REMOVE_FAILED_SERVERS => true,
        Memcached::OPT_RETRY_TIMEOUT => 1,
    ],
    'servers' => [
        [
            'host' => 'xxx.xxx.xxx.xxx',
            'port' => 11211,
            'weight' => 100,
        ],
    ],
],
config/cache_fallback.php:
return [
    'fallback_order' => [
        'memcached',
        'memcached249',
        'file',
    ],
];
The only problem is that if the first memcached server is down, it slows down the entire system (but at least it doesn't break the main server). Hope this helps :)
I am at my wits' end trying to solve this problem. Our ultimate goal is to deploy a custom Docker container of Mautic. I have no problem doing this from their website interface; I've solved all my config problems and it works great. But I need to do this automatically from an API: customers are going to sign up for our service, and we want to deploy Mautic for them instantly (or as instantly as AWS can manage).
I'm new to Elastic Beanstalk and AWS. My understanding is that I need to create an environment and deploy my Dockerrun.aws.json file to it, but I cannot find anywhere in the API to specify a file to deploy, or even an S3 bucket to use (as you can from the interface). I had hoped that saving a template and using that would work, but I just get an empty Docker instance with no container launched.
Here's an example of my PHP API call:
$eb = new ElasticBeanstalkClient(array(
    'version' => 'latest',
    'region' => 'us-east-1',
    'credentials' => array(
        'key' => '...',
        'secret' => '...'
    )
));

$newEnvironment = $eb->createEnvironment(array(
    'ApplicationName' => 'test',
    'TemplateName' => 'foo2',
    'EnvironmentName' => '...',
    'EnvironmentTier' => array(
        'Type' => 'Standard',
        'Name' => 'WebServer'
    ),
    'OptionSettings' => array(
        [
            'Namespace' => 'aws:autoscaling:launchconfiguration',
            'OptionName' => 'EC2KeyName',
            'Value' => '...'
        ],
        [
            'Namespace' => 'aws:rds:dbinstance',
            'OptionName' => 'DBUser',
            'Value' => '...'
        ],
        [
            'Namespace' => 'aws:rds:dbinstance',
            'OptionName' => 'DBPassword',
            'Value' => '...'
        ]
    )
));
The template foo2 was saved from an environment that has a fully running Mautic docker container.
The problem is, this creates an environment and the RDS resource I need, but does not run my docker container.
Is what I want possible? Or do I have to find another avenue?
Thanks
Figured it out. What I was looking for was $eb->createApplicationVersion(...), which lets me specify an S3 bucket with my Dockerrun.aws.json file. Documentation
Then I can specify that VersionLabel in my createEnvironment() call, like so:
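The createApplicationVersion call itself can be sketched like this (the bucket name and key below are placeholders; the file must already be uploaded to S3):

```php
$eb->createApplicationVersion(array(
    'ApplicationName' => 'test',
    'VersionLabel' => 'fooVersion',
    // Points at the Dockerrun.aws.json uploaded to S3 beforehand.
    'SourceBundle' => array(
        'S3Bucket' => 'my-deploy-bucket',
        'S3Key' => 'Dockerrun.aws.json',
    ),
));
```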
$newEnvironment = $eb->createEnvironment(array(
    'ApplicationName' => 'test',
    'TemplateName' => 'foo2',
    // Right here
    'VersionLabel' => 'fooVersion',
    'EnvironmentName' => '...',
    'EnvironmentTier' => array(
        'Type' => 'Standard',
        'Name' => 'WebServer'
    ),
    'OptionSettings' => array(
        [
            'Namespace' => 'aws:autoscaling:launchconfiguration',
            'OptionName' => 'EC2KeyName',
            'Value' => '...'
        ],
        [
            'Namespace' => 'aws:rds:dbinstance',
            'OptionName' => 'DBUser',
            'Value' => '...'
        ],
        [
            'Namespace' => 'aws:rds:dbinstance',
            'OptionName' => 'DBPassword',
            'Value' => '...'
        ]
    )
));
Or I can just create a version through the Dashboard. Documentation
I'm using ZF2 Apigility in my web application. In production mode, if config_cache_enabled is true in config/application.config.php, I get this error message when requesting an access_token:
The storage configuration for OAuth2 is missing
If I set it to false, I get my access token.
So my problem is to get a successful access-token request in production mode while keeping config_cache_enabled set to true, since cached configuration gives the best performance. How can I do that?
This is my zf-mvc-auth configuration:
'zf-mvc-auth' => array(
    'authentication' => array(
        'adapters' => array(
            'CustomStorage' => array(
                'adapter' => 'ZF\\MvcAuth\\Authentication\\OAuth2Adapter',
                'storage' => array(
                    'storage' => 'Application\\Adapter\\OAuth\\CustomPdoAdapter',
                    'route' => '/oauth',
                ),
            ),
        ),
    ),
),
This is my oauth2.local.php:
'zf-oauth2' => array(
    'db' => array(
        'dsn' => 'mysql:dbname=mydatabase;host=localhost',
        'username' => 'root',
        'password' => '',
    ),
    'allow_implicit' => true,
    'access_lifetime' => 3600,
    'enforce_state' => true,
    'storage' => 'Application\Adapter\OAuth\CustomPdoAdapter',
    'storage_settings' => array(
        'user_table' => 'users',
    ),
    'options' => array(
        'always_issue_new_refresh_token' => true,
    ),
),
I think it is well configured.
Did you set up your zf-mvc-auth correctly? In module.config.php you can read that you have to define a storage key. It also explains how to do this:
To specify the storage instance, you may use one of two approaches:
- Specify a "storage" subkey pointing to a named service or an array of named services to use.
- Specify an "adapter" subkey with the value "pdo" or "mongo", and include additional subkeys for configuring a ZF\OAuth2\Adapter\PdoAdapter or ZF\OAuth2\Adapter\MongoAdapter, accordingly. See the zf-oauth2 documentation for details.
If you are in production mode and config_cache_enabled is true, you need to delete the files in the data/cache folder.
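Concretely, something like this works from the application root (the path and filename pattern follow the ZF2 skeleton's defaults; check your cache_dir setting if you changed it):

```shell
# Delete the merged-config cache files so the next request
# rebuilds the configuration, picking up your *.local.php changes.
rm -f data/cache/module-config-cache.*.php
```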