I have been struggling with this issue for some time.
I am using the SFTP adapter to connect to another server, where I read and write files a lot.
For thumbnail creation I use background jobs with Laravel Horizon to retrieve PDF contents from the remote SFTP server, then generate a JPG and place it in the local filesystem.
For the initial setup I need to generate around 150k thumbnails.
When I run many Horizon processes, the remote server can't handle that number of connections.
At the moment I must limit it to a maximum of 2 processes (~10 seconds per thumbnail × ~150k thumbnails), which is not optimal.
I want to cache the connection, because I know it is possible and it would probably solve my problem, but I can't get it to work :(
The only references/tutorials/examples/docs I could find are:
https://medium.com/@poweredlocal/caching-s3-metadata-requests-in-laravel-bb2b651f18f3
https://flysystem.thephpleague.com/docs/advanced/caching/
When i use the code from the example like this:
Storage::extend('sftp-cached', function ($app, $config) {
    $adapter = $app['filesystem']->createSftpAdapter($config);
    $store = new Memory();

    return new Filesystem(new CachedAdapter($adapter->getDriver()->getAdapter(), $store));
});
I get the error: Driver [] is not supported.
Is there anyone here who can help me a bit further on this?
It appears necessary to adjust your configuration:
In your config/filesystems.php file, add a 'cache' key to your disk configuration:
'default' => [
    'driver' => 'sftp-cached',
    // ...
    'cache' => [
        'store' => 'apc',
        'expire' => 600,
        'prefix' => 'laravel',
    ],
],
This example is based on the official documentation (https://laravel.com/docs/5.6/filesystem#caching), but it does not describe well how the 'store' key is used (memcached is the example there); for memcached you would also need to change your driver implementation to new Memcached($memcached); (injecting an instance) instead.
In your case, since the sftp-cached driver uses $store = new Memory();, the cache config must reflect this with 'store' => 'apc' (a RAM-based cache). The available 'store' drivers are listed in config/cache.php.
(If you use APC and get the error message Call to undefined function Illuminate\Cache\apc_fetch(), the PHP extension must be installed; see e.g. http://php.net/manual/en/apcu.installation.php.)
Finally, I believe the 'prefix' key in config/filesystems.php must match the cache key prefix in config/cache.php (which is 'prefix' => 'cache' by default).
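For completeness, here is a hedged sketch of the memcached variant mentioned above, since a persistent store (unlike Memory()) would survive between queue jobs. The class names come from league/flysystem-cached-adapter v1; the server address, cache key, and TTL are placeholder assumptions:

```php
<?php
// Sketch only: assumes Laravel 5.x, Flysystem v1, ext-memcached,
// and the league/flysystem-cached-adapter package.
use Illuminate\Support\Facades\Storage;
use League\Flysystem\Filesystem;
use League\Flysystem\Cached\CachedAdapter;
use League\Flysystem\Cached\Storage\Memcached as MemcachedStore;

Storage::extend('sftp-cached', function ($app, $config) {
    $adapter = $app['filesystem']->createSftpAdapter($config);

    // Persistent metadata cache shared across worker processes.
    $memcached = new \Memcached();
    $memcached->addServer('localhost', 11211); // placeholder server

    // 'storageKey' and the 300s expiry are illustrative values.
    $store = new MemcachedStore($memcached, 'storageKey', 300);

    return new Filesystem(
        new CachedAdapter($adapter->getDriver()->getAdapter(), $store)
    );
});
```

Note that this caches filesystem metadata (listings, existence checks), which reduces round-trips to the SFTP server; it is not a literal cache of the SFTP connection itself.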
We are using Predis to connect to a Redis instance hosted on AWS (ElastiCache). We are experiencing performance issues, and after having tried other scaling-related solutions, we would like to experiment with adding read replicas to our cluster (with cluster mode disabled, no sharding, just read replicas). ElastiCache offers this feature out of the box, but the Predis documentation is not very clear on how to use different write/read endpoints.
We currently initialize RedisClient this way:
$redisClient = new RedisClient(['host' => 'the primary endpoint']);
How can we add a read replica endpoint in constructor?
The Predis documentation was a bit vague (or outdated). This is how we managed to make it work, in case someone is facing the same issue:
$parameters = [
    ['host' => $primaryEndpoint, 'role' => 'master', 'alias' => 'master'],
    ['host' => $replicaEndpoint, 'role' => 'slave', 'alias' => 'slave'],
];

$this->redis = new RedisClient(
    $parameters,
    ['replication' => true, 'throw_error' => true, 'async' => true]
);
The role and alias properties are important.
According to the Predis docs, the replication option should be set as 'replication' => 'predis', but this did not work; using 'replication' => true did.
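As a usage sketch (the endpoint hostnames are placeholders): assuming Predis v1's master/slave replication behavior, read-only commands become eligible to be served by the replica, while write commands always go to the master:

```php
<?php
use Predis\Client as RedisClient;

$parameters = [
    ['host' => 'primary.example.cache.amazonaws.com', 'role' => 'master', 'alias' => 'master'],
    ['host' => 'replica.example.cache.amazonaws.com', 'role' => 'slave',  'alias' => 'slave'],
];

$redis = new RedisClient($parameters, ['replication' => true]);

$redis->set('greeting', 'hello'); // write: routed to the master
$redis->get('greeting');          // read: may be served by the replica
```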
What are the best-practices concerning the usage of docker secrets within a PHP script?
Use case: I've got a Docker stack which consists of (1) a web service based on an image that couples PHP with an Apache server, and (2) a db service based on the latest mysql image.
Within /var/www/html on my web service, I've got a config.php which defines a number of variables representing database connection parameters (username, password, etc.). This config.php file is included wherever database connections are established throughout the codebase. I have docker secrets defined corresponding to each of the parameters that I want to define in config.php - what is the best way to use those secrets in the definitions in config.php?
By default, each docker secret is mounted to the file /run/secrets/<secret name>.
My naive solution was simply to use fopen() and fgets() as follows:
function getSecret($secret) {
    // fopen() requires a mode argument ('r' for read)
    $secret_file = fopen("/run/secrets/{$secret}", 'r');
    $value = fgets($secret_file);
    fclose($secret_file);

    return $value;
}
config.php:
return [
    'database' => [
        'host' => getSecret('db_host'),
        'user' => getSecret('db_user'),
        'password' => getSecret('db_password'),
        // ...
    ]
];
Does this look like a sensible approach?
I ended up going with file_get_contents("/run/secrets/...");
An undocumented 'feature' of this is that, by default, the secret file ends with a newline (0x0A), so that needs to be trimmed.
The end result is:
$dbpasswd = rtrim(file_get_contents("/run/secrets/mysql_password"));
The advantage is that you don't need to deal with opening and closing the file.
The need for trimming is an annoyance though.
http://php.net/manual/en/function.file-get-contents.php
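As a self-contained sketch of that approach: the secret directory is parameterized here only so the helper can be exercised outside a container (in production it would default to /run/secrets), and the fake secret written to the temp directory is purely illustrative:

```php
<?php
// Read a Docker secret by name; rtrim() drops the trailing newline (0x0A)
// present at the end of the mounted secret file.
function getSecret(string $name, string $dir = '/run/secrets'): string
{
    return rtrim(file_get_contents("{$dir}/{$name}"), "\r\n");
}

// Demonstration with a fake secret file in the system temp directory:
$dir = sys_get_temp_dir();
file_put_contents("{$dir}/db_password", "s3cret\n");
echo getSecret('db_password', $dir); // prints s3cret
```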
I've been using the filesystem adapter for caching data.
E.g.:
$cache = StorageFactory::factory(array(
    'adapter' => array(
        'name' => 'filesystem',
        'options' => array('ttl' => 1800, 'cache_dir' => './data/cache'),
    ),
));
But when using the getItem() function after the TTL has expired, it returns false (via the $success flag, etc.), which it should... However, I've noticed that the cache file remains on the filesystem. Is there a way of forcing the use of the cached file?
The scenario being: my cache is outdated, and when the expensive functions that rebuild it run, they return nothing or time out. So I'd like to use the stale cache instead!
Just wondering if that's possible?
Thanks!
Here is a useful link to the official ZF2 documentation for the specific StorageAdapter that you are using (filesystem).
foo_constants.php or fooConstants.php?
It seems Laravel does some name conversion when you use Config::get('...'); which one do you use?
foo.php
Why specify constants at all? The convention I've generally seen is single-word filenames. In general, most 'config'-type settings will be constant within an environment, even if they vary between environments.
Take a look at the aws/aws-sdk-php-laravel composer package as an example. That file is named config.php in the package, but gets published to aws.php.
rydurham/Sentinel is another popular package. It also only has a single-word filename.
Update
In the situation you describe in your comment, I would do something like this:
<?php // File: foo.php

return [
    'sheep' => [
        'clothing' => 'wool',
        'chews_on' => 'cud',
    ],
    'wolf' => [
        'clothing' => 'fur',
        'chews_on' => 'sheep',
    ],
];
And you can access both of those via Config::get('foo.sheep') and Config::get('foo.wolf'), respectively. When they're defined on the server, they're still 'on the server' so to speak. If you wish to release the values stored in foo.sheep to the public you can, and you can do so without also exposing foo.wolf.
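For instance, assuming the foo.php file above lives in the config directory, nested keys are reachable with dot notation:

```php
<?php
// Sketch only: requires a Laravel application with config/foo.php as above.
$sheep = Config::get('foo.sheep');           // the whole 'sheep' array
$wool  = Config::get('foo.sheep.clothing');  // 'wool'
$wolf  = Config::get('foo.wolf.chews_on');   // 'sheep'
```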
I'm using Kohana 3 and I have an issue when logging in a user.
I use this line to log in:
$success = Auth::instance()->login($_POST['login_user'], $_POST['login_password'], $remember);
And I got this error message:
Session_Exception [ 1 ]: Error reading session data. ~ SYSPATH/classes/kohana/session.php [ 326 ]
I have the sessions table created with the following SQL:
CREATE TABLE `sessions` (
    `session_id` varchar(24) NOT NULL,
    `last_active` int(10) unsigned DEFAULT NULL,
    `contents` text,
    PRIMARY KEY (`session_id`),
    KEY `sessions_fk1` (`last_active`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
And here is session.php inside the config folder:
<?php defined('SYSPATH') or die('No direct script access.');

return array(
    'database' => array(
        /**
         * Database settings for session storage.
         *
         * string   group   configuration group name
         * string   table   session table name
         * integer  gc      number of requests before gc is invoked
         * columns  array   custom column names
         */
        'group' => 'default',
        'table' => 'sessions',
        'gc' => 500,
        'columns' => array(
            /**
             * session_id:  session identifier
             * last_active: timestamp of the last activity
             * contents:    serialized session data
             */
            'session_id' => 'session_id',
            'last_active' => 'last_active',
            'contents' => 'contents'
        ),
    ),
);
What might be the problem here?
Thanks!
I don't know if this will help, but I had a similar problem.
The cause was that one library (the Facebook SDK) initialized the session on its own, and its session handling was done using the $_SESSION variable. I noticed that there were two cookies: session (Kohana's session id) and PHPSESSID. This was probably the cause of the problem.
I modified the library so that it doesn't start the session on its own, and the problem was solved.
So, you should probably check whether the session isn't started elsewhere.
Session_Exception [ 1 ]: Error reading session data. ~ SYSPATH/classes/kohana/session.php [ 326 ]
It depends on what version you're running, but this is caused by an exception being thrown when session data is unserialized in read(). You can see the bug report about it here: Session read errors not properly ignored. The solution would be to upgrade to the latest version if you haven't already.
Something else you need to look at is your session data itself. You'll have to see why your data is corrupt and can't be read properly. This could be an error generated from code in __sleep().
A workaround (or solution) for me was setting this in php.ini:
session.auto_start = 0
And of course, restart your web server afterwards.
Not sure if you figured this out, but I had the same issue and it was related to my PHP config. I'm using NGINX and php-fpm. By default my session files were trying to get saved to a directory that didn't exist, so I changed session.save_path to a valid path and that fixed it.
One way to solve this is to instantiate a session instance before you create a Facebook SDK instance. For example:
$this->_session = Session::instance('native');

$this->_facebook = new Facebook(array(
    'appId'  => 'app_id',
    'secret' => 'app_secret',
));
If you take a look at the code inside the constructor of the Facebook class you'll see it checks if a session has been started:
public function __construct($config) {
    if (!session_id()) {
        session_start();
    }

    parent::__construct($config);

    if (!empty($config['sharedSession'])) {
        $this->initSharedSession();
    }
}
So if you create the session first it'll skip that block of code.
I have had such problems when switching to a production server (more than once :(), so it seems worth putting down some clear guidance.
Recommendations:
§) If you are using Database session adapter:
Session::$default = 'database';
i.- Check that your DB credentials are correct.
ii.- Check that the table assigned to session data has the correct column types and sizes.
§) If you are using Encryption for your sessions data (config/session.php or config/.../session.php):
return array(
    'cookie' => array(
        // ...
        'encrypted' => TRUE,
        // ...
    ),
    'database' => array(
        // ...
        'encrypted' => TRUE,
        // ...
    ),
);
i- Check that you have mcrypt installed:
$ php -m|grep mcrypt
mcrypt // installed
ii- Check that you are using the same key that was used to encrypt the data (config/encrypt.php or config/.../encrypt.php):
return array(
    'default' => array(
        'key' => 'yabadabadoo!',
        'cipher' => MCRYPT_RIJNDAEL_128,
        'mode' => MCRYPT_MODE_NOFB,
    ),
);
Workarounds
§) If possible, delete all session data and try again.
i) For the native adapter: delete all session files (or just the ones corresponding to your app) located in...
Stores session data in the default location for your web server. The
storage location is defined by session.save_path in php.ini or defined
by ini_set.
ii) For the cookie adapter: manually delete the session cookies in the affected browsers, or do it programmatically (in case many users are affected): (PHP) How to destroy the session cookie correctly?
iii) For the database adapter: TRUNCATE TABLE sessions (deletes all records in the sessions table).