I am a newbie to PHP and CakePHP. I was recently assigned the job of implementing memcache in my app so that its performance can be improved. Can anyone suggest some documentation on this topic?
Thanks.
This might be a bit late ... but the Cake core has support for Memcached built in (at least in the latest versions, 2.0.x and 2.1).
Have a look at Config/core.php in your app, and you should see these lines (commented):
Cache::config('default', array(
    'engine' => 'Memcache', // [required]
    'duration' => 3600, // [optional]
    'probability' => 100, // [optional]
    'prefix' => Inflector::slug(APP_DIR) . '_', // [optional] prefix every cache file with this string
    'servers' => array(
        '127.0.0.1:11211' // localhost, default port 11211
    ), // [optional]
    'persistent' => true, // [optional] set this to false for non-persistent connections
    'compress' => false, // [optional] compress data in Memcache (slower, but uses less memory)
));
You can uncomment these lines and test it out with a Memcached install. Make sure you have Memcached installed somewhere (localhost or elsewhere) and you are pointing to it.
Memcache is one of the cache engines supported by the built-in Cache class. The Cache class is a wrapper for interacting with your cache, and you can read everything about it here: http://book.cakephp.org/2.0/en/core-libraries/caching.html
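For example, once that config is active, any expensive result can be cached through the same API regardless of engine (a minimal sketch; the Post model and key name are just placeholders):

// Try the cache first; Cache::read() returns false on a miss.
$posts = Cache::read('recent_posts', 'default');
if ($posts === false) {
    $posts = $this->Post->find('all', array('limit' => 10));
    Cache::write('recent_posts', $posts, 'default');
}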
Here is a more specific write-up on Memcache and CakePHP that may help with your bottlenecks:
Send your database on vacation by using CakePHP + Memcached
http://nuts-and-bolts-of-cakephp.com/2009/06/17/send-your-database-on-vacation-by-using-cakephp-memcached/
Related
How to upload an image to Scaleway storage by Laravel or PHP methods?
Laravel uses Flysystem under the hood to abstract file storage. It provides several drivers out of the box, including S3, Rackspace, and FTP.
If you want to support Scaleway, you would need to write a custom driver, which you can read more about here.
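For reference, a custom driver is registered through Storage::extend in a service provider's boot() method (a sketch only; ScalewayAdapter is a hypothetical class you would write against Flysystem's AdapterInterface):

use Illuminate\Support\Facades\Storage;
use League\Flysystem\Filesystem;

// ScalewayAdapter is hypothetical: you would implement
// League\Flysystem\AdapterInterface yourself.
Storage::extend('scaleway', function ($app, $config) {
    return new Filesystem(new ScalewayAdapter($config));
});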
Edit: It seems from Scaleway's documentation that it supports AWS-compatible clients, which means it should be quite easy to support through Flysystem's existing S3 driver. I tried the following and it worked.
I added a new driver in config/filesystems.php as follows:
'scaleway' => [
    'driver' => 's3',
    'key' => '####',
    'secret' => '#####',
    'region' => 'nl-ams',
    'bucket' => 'test-bucket-name',
    'endpoint' => 'https://s3.nl-ams.scw.cloud',
],
and then, to use the disk, I did the following:
\Storage::disk('scaleway')->put('file.txt', 'Contents');
My file was uploaded.
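To confirm the round trip, you can read the file back through the same disk (same assumed disk and file name as above):

$exists   = \Storage::disk('scaleway')->exists('file.txt'); // true
$contents = \Storage::disk('scaleway')->get('file.txt');    // "Contents"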
EDIT: I also made a PR to get Scaleway accepted in the list of adapters for League's Flysystem. It got merged. You can see it live here.
I am trying to use Redis in my application, but I am not sure whether my app is using the redis or the file driver, as I can't create tags but I can create normal keys fine.
I have set CACHE_DRIVER=redis and also in my cache.php I have:
'default' => env('CACHE_DRIVER', 'redis'),
also in my database.php there is:
'redis' => [
    'client' => 'predis',
    'default' => [
        'host' => env('REDIS_HOST', '127.0.0.1'),
        'password' => env('REDIS_PASSWORD', null),
        'port' => env('REDIS_PORT', 6379),
        'database' => 0,
    ],
],
The reasons for my suspicion are that I cannot create tags, and that running redis-cli flushall under Homestead (via SSH) does not seem to get rid of the cache; I had to use Cache::flush() in Laravel instead.
So how can I effectively find out which cache driver my application is using?
It's pretty simple: you can use the Redis CLI monitor command to check whether the GET/SET calls are happening or not:
redis-cli monitor
Then try to run the application. If you are able to see the keys, then the Redis cache is running.
You can also check the Redis keys with the following command:
redis-cli
Then enter the following:
keys *
I hope this is useful.
You should simply query your Redis DB like so:
redis-cli
Then, once on the Redis console:
SELECT <DB-NUMBER>
KEYS *
If you see some keys like
1) PREFIX:tag:TAG:key
2) PREFIX:tag:DIFFERENT-TAG:key
it is quite a hint that Laravel is using Redis as its cache backend. Otherwise, take a look at
<YOUR-APP-DIR>/storage/framework/cache
If you find some files/sub-folders in there, it is quite a hint that Laravel is using file-based caching.
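A more direct check is to ask Laravel itself, e.g. from a php artisan tinker session (a small diagnostic sketch; the class names are the stock Laravel 5 cache stores):

// Which driver is configured as the default?
echo config('cache.default'); // "redis" or "file"

// Which store class actually backs the cache right now?
echo get_class(\Cache::getStore()); // Illuminate\Cache\RedisStore vs. FileStore

// The file driver throws a BadMethodCallException here; Redis does not:
\Cache::tags(['probe'])->put('demo', 'value', 1);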
In my Laravel 5 setup, I included the Flysystem package and configured a disk in config/filesystems.php (FTP):
'ftp' => [
    'driver' => 'ftp',
    'host' => env('FTP_HOST'),
    'username' => env('FTP_USER'),
    'password' => env('FTP_PASS'),
    'passive' => true,
],
Then I'm able to perform FTP upload and download with the following instructions:
Storage::disk('ftp')->put($filePath, $contents);
Storage::disk('ftp')->get($filePath);
Up to here everything is fine.
Problems start when I'm uploading big files, above 200MB: the PHP memory limit is reached and execution stops with a fatal error.
In fact, when Storage::put is called, memory usage increases dramatically.
I've read somewhere that a solution might be to use streams to perform reads and writes on my "virtual" disk, but I'm still missing how to implement that in order to optimize memory usage during these operations.
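From what I gather, the stream-based approach would look something like this (a sketch only, assuming Laravel 5's bundled Flysystem 1.x; $localPath and $filePath are placeholders):

// Upload: hand Flysystem a stream resource instead of the full contents,
// so only a small buffer is held in memory at a time.
$stream = fopen($localPath, 'r');
Storage::disk('ftp')->getDriver()->putStream($filePath, $stream);
if (is_resource($stream)) {
    fclose($stream); // some adapters close the stream themselves
}

// Download: copy the remote stream to a local file in chunks.
$remote = Storage::disk('ftp')->getDriver()->readStream($filePath);
$local  = fopen($localPath, 'w');
stream_copy_to_stream($remote, $local);
fclose($local);
fclose($remote);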
Both CDbHttpSession and CHttpSession seem to ignore the timeout value and garbage-collect data after a fairly short time (less than 12 hours). What could be the problem?
'session' => array(
    'class' => 'CDbHttpSession',
    'autoCreateSessionTable' => true,
    'autoStart' => true,
    'timeout' => 1209600,
    'cookieMode' => 'only',
    'sessionName' => 'ssession',
),
Maybe this is what you are looking for:
Setting the timeout for CHttpSession just sets the session.gc_maxlifetime PHP setting. If you run your application on Debian or Ubuntu, their default PHP has the garbage collector disabled and runs a cron job to clean sessions up.
In my apps I set the session dir somewhere in protected/runtime to separate my sessions from other apps. This is important on shared hosting sites and it's a good habit. The downside is that I have to remember to set up a cron job to clean the files in that folder.
Anyway, you should also set a duration when calling CWebUser.login to log in a user.
from a Yii forum post
Check the duration parameter in CWebUser.login.
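For example (a sketch of a login action; UserIdentity is the usual CUserIdentity subclass from the Yii scaffolding, and 1209600 seconds matches the timeout above):

$identity = new UserIdentity($username, $password);
if ($identity->authenticate()) {
    // Second argument: cookie-based login duration in seconds (14 days).
    Yii::app()->user->login($identity, 1209600);
}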
We're using DynamoDB to synchronize sessions between more than one EC2 machine behind ELBs.
We noticed that this method slows the scripts down a lot.
Specifically, I made a JS test that calls 3 different PHP scripts on the server 10 times each:
1) The first one is just an echo of a timestamp and takes about 50ms round trip.
2) The second one connects through mysqli to the RDS MySQL instance and takes about the same time (50-60ms).
3) The third one uses the DynamoDB session-keeping method described in the official AWS documentation and takes about 150ms (3 times slower!).
I'm cleaning the garbage every night (as the documentation says) and the DynamoDB metrics seem OK.
The code I use is this:
use Aws\DynamoDb\DynamoDbClient;
use Aws\DynamoDb\Session\SessionHandler;

ini_set("session.entropy_file", "/dev/urandom");
ini_set("session.entropy_length", "512");
ini_set('session.gc_probability', 0);

require 'aws.phar';

$dynamoDb = DynamoDbClient::factory(array(
    'key' => 'XXXXXX',
    'secret' => 'YYYYYY',
    'region' => 'eu-west-1'
));

$sessionHandler = SessionHandler::factory(array(
    'dynamodb_client' => $dynamoDb,
    'table_name' => 'sessions',
    'session_lifetime' => 259200,
    'consistent_read' => true,
    'locking_strategy' => null,
    'automatic_gc' => 0,
    'gc_batch_size' => 25,
    'max_lock_wait_time' => 15,
    'min_lock_retry_microtime' => 5000,
    'max_lock_retry_microtime' => 50000,
));

$sessionHandler->register();
session_start();
Am I doing something wrong, or is all that time to retrieve the session normal?
Thanks.
Copying correspondence from an AWS engineer on the AWS forums: https://forums.aws.amazon.com/thread.jspa?messageID=597493
Here are a couple of things to check:
Are you running your application on EC2 in the same region as your DynamoDB table?
Have you enabled OPcode caching to ensure that the classes used by the SDK do not need to be loaded from disk and parsed each time your script is run?
Using a web server like Apache and connecting to a DynamoDB session will require a new SSL connection to be established on each request. This is because PHP doesn't (currently) allow you to reuse cURL connection handles between requests. Some database drivers do allow for persistent connections between requests, which could account for the performance difference.
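To verify the OPcode caching point above, a quick diagnostic like this can help (assuming PHP 5.5+, where the OPcache extension ships by default):

// Dumps whether OPcache is enabled for this SAPI; false or a missing
// extension means the SDK's classes are re-parsed from disk on every request.
if (function_exists('opcache_get_status')) {
    $status = opcache_get_status(false);
    var_dump($status['opcache_enabled']);
} else {
    echo "OPcache extension not loaded\n";
}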
If you follow up on the AWS forums thread, an AWS engineer should be able to help you with your issue. This thread is also monitored if you want to keep it open.