We are using Predis to connect to a Redis instance hosted on AWS ElastiCache. We are experiencing performance issues and, after trying other scaling-related solutions, we would like to experiment with adding read replicas to our cluster (with cluster mode disabled: no sharding, just read replicas). ElastiCache offers this feature out of the box, but the Predis documentation is not very clear on how to use separate write/read endpoints.
We currently initialize RedisClient this way:
$redisClient = new RedisClient(['host' => 'the primary endpoint']);
How can we add a read replica endpoint in the constructor?
The Predis documentation was a bit vague (or outdated). This is how we managed to make it work, in case someone else is facing the same issue:
$parameters = [
    ['host' => $primaryEndpoint, 'role' => 'master', 'alias' => 'master'],
    ['host' => $replicaEndpoint, 'role' => 'slave', 'alias' => 'slave'],
];
$this->redis = new RedisClient(
    $parameters,
    ['replication' => true, 'throw_error' => true, 'async' => true]
);
The role and alias properties are important.
According to the Predis docs, the replication option should be set as 'replication' => 'predis', but this did not work. Using 'replication' => true did.
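With this setup, Predis routes commands transparently: write commands always go to the master, while read-only commands can be served by the replica. A minimal sketch (the key name is just an example):

$this->redis->set('foo', 'bar');   // write commands are sent to the master endpoint
$value = $this->redis->get('foo'); // read-only commands may be routed to the replica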
On my new server, I have to use a proxy to make API calls to other services.
I'm using Mailjet, making the calls with the official PHP wrapper (without Composer: https://github.com/mailjet/mailjet-apiv3-php-no-composer).
I tried to configure the proxy like this:
$mj = new \Mailjet\Client(
    APIKEY,
    APIKEY2,
    true,
    [
        'version' => 'v3.1',
        'connect_timeout' => 4,
        'proxy' => [
            'http' => someurl,
            'https' => someurl,
        ],
    ]
);
As you can see, I'm also trying to set the "connect_timeout" option (I tried the option names in uppercase too, with the same result).
Unfortunately, throughout all my tests I observed that only the version and url options are taken into account. Every other option, whether random or supposedly supported, is not set and keeps its default value.
I guess I'm not configuring this as I should, and perhaps those options belong elsewhere in the call, but even Mailjet support couldn't tell me...
In the meantime, I edited the /Mailjet/src/Mailjet/Client.php file from
private $requestOptions = [
    self::TIMEOUT => 15,
    self::CONNECT_TIMEOUT => 2,
];
to
private $requestOptions = [
    self::TIMEOUT => 15,
    self::CONNECT_TIMEOUT => 4,
    self::PROXY => someurl,
];
It works, but I would rather not edit this file and instead pass the options the way I'm supposed to.
That class has a public method called setConnectionTimeout which you can call on your Client instance (e.g. $mj->setConnectionTimeout(4)).
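For instance, a minimal sketch reusing the placeholders from the question:

$mj = new \Mailjet\Client(APIKEY, APIKEY2, true, ['version' => 'v3.1']);

// Override the default connect timeout after construction,
// instead of patching Client.php directly.
$mj->setConnectionTimeout(4);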
I have been struggling with this issue for some time.
I am using the SFTP adapter to connect to another server, where I read/write files a lot.
For thumbnail creation, I use background jobs with Laravel Horizon to retrieve PDF contents from the remote SFTP server, then generate a JPG and place it in the local filesystem.
For the initial setup I need to generate around 150k thumbnails.
When I use a lot of processes in Horizon, the remote server can't handle that number of connections.
At the moment I must limit it to a maximum of 2 processes (~10 secs * ~150k), which is not optimal.
I want to cache the connection because I know it is possible and it would probably solve my problem, but I can't get it to work. :(
The only references/tutorials/examples/docs I could find are:
https://medium.com/@poweredlocal/caching-s3-metadata-requests-in-laravel-bb2b651f18f3
https://flysystem.thephpleague.com/docs/advanced/caching/
When I use the code from the example like this:
Storage::extend('sftp-cached', function ($app, $config) {
    $adapter = $app['filesystem']->createSftpAdapter($config);
    $store = new Memory();

    return new Filesystem(new CachedAdapter($adapter->getDriver()->getAdapter(), $store));
});
I get the error: Driver [] is not supported.
Is there anyone here who can help me a bit further with this?
It appears necessary to adjust your configuration:
In your config/filesystems.php file, add a 'cache' key to your disk:
'default' => [
    'driver' => 'sftp-cached',
    // ...
    'cache' => [
        'store' => 'apc',
        'expire' => 600,
        'prefix' => 'laravel',
    ],
],
This example is based on the official documentation (https://laravel.com/docs/5.6/filesystem#caching), but it does not describe well how the 'store' key is used (memcached is the example there); for that you would need to change your driver implementation to new Memcached($memcached); (with an instance to inject) instead.
In your case, since the sftp-cached driver implements $store = new Memory();, the cache config must reflect this with 'store' => 'apc' (which is a RAM-based cache). The available 'store' drivers are found in config/cache.php.
(If you use APC and get the error message Call to undefined function Illuminate\Cache\apc_fetch(), the APCu PHP extension must be installed; see e.g. http://php.net/manual/en/apcu.installation.php)
Finally, I believe the 'prefix' key in config/filesystems.php must be set to the same value as the cache key prefix in config/cache.php (which is 'prefix' => 'cache' by default).
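Putting it together, the driver closure could build the store from that 'cache' config instead of hard-coding new Memory(). This is a sketch assuming Laravel 5.6, where Illuminate\Filesystem\Cache wraps a framework cache repository for Flysystem:

use Illuminate\Filesystem\Cache as FilesystemCache;
use League\Flysystem\Filesystem;
use League\Flysystem\Cached\CachedAdapter;

Storage::extend('sftp-cached', function ($app, $config) {
    $adapter = $app['filesystem']->createSftpAdapter($config);

    // Build the cache store from the 'cache' key in config/filesystems.php.
    $cacheConfig = $config['cache'];
    $store = new FilesystemCache(
        $app['cache']->store($cacheConfig['store']),
        $cacheConfig['prefix'] ?? 'flysystem',
        $cacheConfig['expire'] ?? null
    );

    return new Filesystem(new CachedAdapter($adapter->getDriver()->getAdapter(), $store));
});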
I am maintaining a system with many databases: one "central" db, and many other "client" dbs. Each time a client registers, we create a client db using an SQL file. The system uses Propel + PHP + MySQL.
Now the problem is that there are schema changes each time we release a new version. It's possible to use Propel migrations for the central db, but there are MANY client dbs, and in propel.yaml / propel-config.php we only have ONE connection configuration for clients, like this:
$manager = new \Propel\Runtime\Connection\ConnectionManagerSingle();
$manager->setConfiguration(array(
    'classname' => 'Propel\\Runtime\\Connection\\ConnectionWrapper',
    'dsn' => 'mysql:host=127.0.0.1;dbname=' . $shopDbName . ';charset=UTF8',
    'user' => 'dba',
    'password' => '******',
    'attributes' => array(
        'ATTR_EMULATE_PREPARES' => false,
    ),
));
where $shopDbName is a global variable identified by a string sent from client devices.
So, how can I automate the migration process for the client dbs in this case?
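One possible direction (a sketch, not a tested solution): keep the list of client database names in the central db, then loop over it and run the Propel migration command once per database with an overridden connection. Here $allClientDbNames, the CLI path, and the --connection override syntax are all assumptions that depend on your schema and Propel version:

// Hypothetical loop: run pending migrations against every client database.
// $allClientDbNames would be fetched from the central db beforehand.
foreach ($allClientDbNames as $shopDbName) {
    $dsn = 'default=mysql:host=127.0.0.1;dbname=' . $shopDbName . ';user=dba;password=******';
    passthru('vendor/bin/propel migration:migrate --connection=' . escapeshellarg($dsn));
}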
I've been using the filesystem adapter for caching data.
E.g.:
$cache = StorageFactory::factory(array(
    'adapter' => array(
        'name' => 'filesystem',
        'options' => array('ttl' => 1800, 'cache_dir' => './data/cache'),
    ),
));
But when using the getItem() function AFTER the TTL has rolled over, it reports failure via the success flag, which it should... However, I've noticed that the file remains on the system. Is there a way of forcing the use of the cached file?
The scenario being: my cache is outdated, and when I run some expensive functions they return nothing or time out... so I'd like to use the stale cache instead!
Just wondering if that's possible?
Thanks!
Here is a useful link to the official ZF2 documentation for the specific StorageAdapter that you are using (filesystem).
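As a possible workaround for reading a stale entry (a sketch, assuming the expired file is still on disk): in ZF2 a TTL of 0 means items never expire, so you could temporarily drop the TTL check, re-read the item, then restore the original TTL. The 'report' key and runExpensiveFunctions() are placeholders:

$data = $cache->getItem('report', $success);

if (!$success) {
    try {
        $data = runExpensiveFunctions(); // placeholder for your expensive call
        $cache->setItem('report', $data);
    } catch (\Exception $e) {
        // Fall back to the stale file: with ttl = 0 the adapter treats
        // items as never expiring, so the leftover file is readable again.
        $cache->getOptions()->setTtl(0);
        $data = $cache->getItem('report', $success);
        $cache->getOptions()->setTtl(1800);
    }
}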
I have created an EC2 client using the method mentioned in the AWS docs. I am using the aws.phar file for the SDK. The EC2 client is created properly, because when I var_dump the client it returns the Ec2Client object. But when I attempt to call describeInstanceStatus on the EC2 client, it throws a "You are not authorized to perform this operation." exception. This is my code:
use Aws\Ec2\Ec2Client;

require 'aws.phar';

$ec2Client = Ec2Client::factory(array(
    'key'    => '<aws access key>',
    'secret' => '<aws secret key>',
    'region' => 'us-east-1',
));

try {
    $ec2Client->describeInstanceStatus(array(
        'DryRun' => false,
        'InstanceIds' => array('InstanceId'),
        'Filters' => array(
            array(
                'Name' => 'availability-zone',
                'Values' => array('us-east-1'),
            ),
        ),
        'MaxResults' => 10,
        'IncludeAllInstances' => false,
    ));
} catch (Exception $e) {
    echo $e->getMessage();
}
Please tell me where I am getting this wrong. I've tried googling it and looked in the AWS forums, but to no result. Thank you.
The error comes from the access you have been granted/denied via AWS IAM.
The user whose access/secret keys you are using in the code does not have the privilege to describe instances. This privilege is configured in the IAM policy applied to that user.
There is nothing wrong with your code. You need to look into the IAM policy to see which privileges are granted/denied to this user.
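For reference, a minimal policy statement that would allow the failing call could look like the following (most EC2 Describe* actions do not support resource-level restrictions, hence the wildcard resource):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ec2:DescribeInstanceStatus",
            "Resource": "*"
        }
    ]
}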