After reading quite a few posts (like this one from Zechariah Campbell) about Laravel Horizon with a Redis queue, and after trying to configure and customize it, I couldn't find out whether it's possible to cap the total number of processes run by Laravel Horizon.
Using the "simple" strategy with one process per queue (which is what I want) might cause CPU or memory issues when there are, say, 1000 queues, since that would spawn 1000 processes (every queue gets its own process by default).
Is there any way to cap the total number of processes spawned by Horizon, regardless of the number of queues? So 10 processes for 20 queues, where a worker picks another queue when one queue is empty, or, even better, after each job it processes?
My current configuration
horizon.php
'local' => [
    'supervisor-1' => [
        'connection' => 'redis',
        'balance' => 'simple',
        'processes' => 1,
        'tries' => 2,
    ],
],
queue.php
'redis' => [
    'driver' => 'redis',
    'connection' => 'queue',
    'queue' => "1000,2000,3000",
    'retry_after' => 90,
    'block_for' => null,
],
Laravel 5.8 and Horizon 3.0.
Currently I use Beanstalkd but want to migrate to a Redis queue with Horizon because of the lack of maintenance of Beanstalkd and some nasty bugs.
UPDATE 2019-05-17:
I tried the maxProcesses config option:
'local' => [
    'supervisor-1' => [
        'connection' => 'redis',
        'balance' => 'simple',
        'maxProcesses' => 1,
        'processes' => 1,
        'tries' => 2,
    ],
],
But even then three processes are created, one for each queue.
As of Horizon 4 (and previous versions), it will always assign at least one worker to each queue; there's no way around it.
So if your minProcesses is 1 but you have 3 queues, you'll have at least 3 processes (workers). I guess the minProcesses setting is more of a minProcessesPerQueue thing.
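If the goal is to cap the total while still serving many queues, the auto balancing strategy may get closer. A rough sketch, assuming your Horizon version supports the minProcesses/maxProcesses options for the auto strategy (per the Horizon docs, minProcesses is a per-queue minimum while maxProcesses caps the supervisor's total pool):

'local' => [
    'supervisor-1' => [
        'connection' => 'redis',
        'queue' => ['1000', '2000', '3000'],
        'balance' => 'auto',   // shift workers between queues based on load
        'minProcesses' => 1,   // minimum per queue, so still 3 workers here
        'maxProcesses' => 10,  // cap on this supervisor's total worker pool
        'tries' => 2,
    ],
],

Even then, the per-queue minimum means the floor is still one worker per queue.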
Related
I'm running two Laravel apps in a high availability setup, with a load balancer directing traffic between them.
Both apps use the same Redis DB (AWS ElastiCache) for queues, and both are set up with Laravel Horizon.
Both apps have the same configuration and three workers each: "high", "medium" and "low".
Most jobs run fine, but one job takes longer than the others and is causing an issue.
The failing job runs on the 'low' worker.
The job is picked up by one Horizon instance. It starts processing, and after 1 minute and 30 seconds the second Laravel Horizon instance also picks up the job and starts processing it. Since this job can't run in parallel, it fails.
It looks like the lock system isn't working properly, since both Laravel Horizon instances pick up the job.
Does Horizon have a lock system, or do I have to implement my own?
I also have no idea why the second Horizon takes the job 90 seconds after the first one.
config/horizon.php
'environments' => [
    'production' => [
        'supervisor-1' => [
            'connection' => 'redis',
            'queue' => ['high', 'default', 'low'],
            'balance' => 'auto',
            'processes' => 1,
            'tries' => 1,
            'timeout' => 1250,
            'memory' => 2048,
        ],
    ],
],
config/queue.php
'connections' => [
    'redis' => [
        'driver' => 'redis',
        'connection' => 'default',
        'queue' => 'default',
        'retry_after' => 1260,
        'block_for' => null,
    ],
],
The WithoutOverlapping middleware should help you with that:
https://laravel.com/docs/10.x/queues#preventing-job-overlaps
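A minimal sketch of how that might look on the job class; the lock is shared through the cache, so both app servers must use the same Redis, and $this->reportId is a made-up key identifying the resource that must not be processed in parallel:

use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Queue\Middleware\WithoutOverlapping;

class GenerateReport implements ShouldQueue
{
    // ...

    // Middleware the job passes through before handle() runs.
    public function middleware(): array
    {
        return [
            (new WithoutOverlapping($this->reportId))
                ->expireAfter(180), // free the lock if the job dies mid-run
        ];
    }
}

As a side note on the 90 seconds: 1 minute 30 seconds is exactly Laravel's default retry_after of 90, so it may be worth verifying that the connection the job is actually dispatched on is the one configured with retry_after => 1260.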
I am trying to use Redis in my application, but I am not sure whether my app is using the redis or the file driver, as I can't create tags but I can create normal keys fine.
I have set CACHE_DRIVER=redis and also in my cache.php I have:
'default' => env('CACHE_DRIVER', 'redis'),
Also, in my database.php there is:
'redis' => [
    'client' => 'predis',
    'default' => [
        'host' => env('REDIS_HOST', '127.0.0.1'),
        'password' => env('REDIS_PASSWORD', null),
        'port' => env('REDIS_PORT', 6379),
        'database' => 0,
    ],
],
The reasons for my suspicion: I cannot create tags, and running redis-cli flushall under Homestead (SSH) does not seem to get rid of the cache; I had to use Cache::flush() in Laravel instead.
So how can I reliably find out which cache driver my application is using?
It's pretty simple: you can use the redis-cli monitor command to watch whether the GETs/SETs are actually happening:
redis-cli monitor
Then try using the application. If you see keys going by, the Redis cache is in use.
You can also inspect the Redis keys with the following commands:
redis-cli
Then enter:
keys *
I hope this is useful.
You should simply query your Redis DB like so:
redis-cli
Then, being on redis console:
SELECT <DB-NUMBER>
KEYS *
If you see some keys like
1) PREFIX:tag:TAG:key
2) PREFIX:tag:DIFFERENT-TAG:key
it is quite a hint that Laravel is using Redis as its cache backend. Otherwise, take a look at
<YOUR-APP-DIR>/storage/framework/cache
If you find some files/sub-folders in there, it is quite a hint that Laravel is using file-based caching.
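You can also ask Laravel directly which store it resolved; a quick check from tinker (standard facades, nothing app-specific assumed):

php artisan tinker
>>> config('cache.default')
>>> get_class(Cache::store()->getStore())

The second expression returns something like Illuminate\Cache\RedisStore or Illuminate\Cache\FileStore, which settles the question.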
In my Laravel 5 setup, I included the Flysystem package and configured an FTP disk in config/filesystems.php:
'ftp' => [
    'driver' => 'ftp',
    'host' => env('FTP_HOST'),
    'username' => env('FTP_USER'),
    'password' => env('FTP_PASS'),
    'passive' => true,
],
Then I'm able to perform FTP upload and download with the following instructions:
Storage::disk('ftp')->put($filePath, $contents);
Storage::disk('ftp')->get($filePath);
Until here everything is fine.
Problems start when I'm uploading big files, above 200 MB.
The PHP memory limit is reached and execution stops with a fatal error.
In fact, when Storage::put is called, memory usage increases dramatically.
I've read somewhere that a solution might be to use streams to read from and write to my "virtual" disk.
However, I'm still missing how to implement this so that memory usage stays low during these operations.
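A minimal sketch of the stream-based approach; $localPath and $localTarget are made-up local paths, and on older Laravel 5 releases readStream() may only be reachable via getDriver():

// Upload: hand put() a stream resource instead of a string, so the file
// is copied to FTP in chunks rather than loaded into memory first.
$stream = fopen($localPath, 'r');
Storage::disk('ftp')->put($filePath, $stream);
if (is_resource($stream)) {
    fclose($stream); // some adapters close the stream themselves
}

// Download: copy the remote stream straight into a local file.
$remote = Storage::disk('ftp')->readStream($filePath);
$local  = fopen($localTarget, 'w');
stream_copy_to_stream($remote, $local);
fclose($remote);
fclose($local);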
Both CDbHttpSession and CHttpSession seem to be ignoring the timeout value and garbage-collect session data after a fairly short time (less than 12 hours). What could be the problem?
'session' => array(
    'class' => 'CDbHttpSession',
    'autoCreateSessionTable' => true,
    'autoStart' => true,
    'timeout' => 1209600,
    'cookieMode' => 'only',
    'sessionName' => 'ssession',
),
Maybe this is what you are looking for:
Setting the timeout for CHttpSession just sets the session.gc_maxlifetime PHP setting. If you run your application on Debian or Ubuntu, their default PHP has the garbage collector disabled and runs a cron job to clean sessions up.
In my apps I set the session dir somewhere in protected/runtime to separate my sessions from other apps. This is important on shared hosting and it's a good habit. The downside is that I have to remember to set up a cron job to clean the files in that folder.
Anyway, you should also set a timeout when calling CWebUser.login to log a user in.
(from a Yii forum post)
Check the duration parameter of CWebUser.login.
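In Yii 1.x that is the second argument to login(); a minimal sketch (assumes $identity is an already-authenticated CUserIdentity and allowAutoLogin is enabled on the user component):

// Duration in seconds: keeps the user logged in via an auto-login cookie
// even after the server-side session has been garbage-collected.
Yii::app()->user->login($identity, 3600 * 24 * 14); // two weeks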
I am a newbie to PHP and CakePHP. I was recently assigned the job of implementing Memcache in my app to improve its performance. Can anyone suggest some documentation on this topic?
Thanks.
This might be a bit late... but the Cake core has support for Memcached built in (at least in the latest versions, 2.0.x and 2.1).
Have a look at Config/core.php in your app, and you should see these lines (commented out):
Cache::config('default', array(
    'engine' => 'Memcache', // [required]
    'duration' => 3600, // [optional]
    'probability' => 100, // [optional]
    'prefix' => Inflector::slug(APP_DIR) . '_', // [optional] prefix every cache file with this string
    'servers' => array(
        '127.0.0.1:11211' // localhost, default port 11211
    ), // [optional]
    'persistent' => true, // [optional] set this to false for non-persistent connections
    'compress' => false, // [optional] compress data in Memcache (slower, but uses less memory)
));
You can uncomment these lines and test with a Memcached install. Make sure you have Memcached running somewhere (localhost or elsewhere) and that you are pointing to it.
Memcache is one of the cache engines supported by the built-in Cache class. The Cache class is a wrapper for interacting with your cache, and you can read everything about it here: http://book.cakephp.org/2.0/en/core-libraries/caching.html
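Once an engine is configured, reads and writes go through the same Cache API no matter which backend sits behind it, for example:

// Write to and read from the 'default' config (backed by Memcache above).
Cache::write('recent_posts', $posts, 'default');
$posts = Cache::read('recent_posts', 'default');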
Here is a more specific write-up on Memcache and CakePHP that may help with your bottlenecks:
Send your database on vacation by using CakePHP + Memcached
http://nuts-and-bolts-of-cakephp.com/2009/06/17/send-your-database-on-vacation-by-using-cakephp-memcached/