I'm running two Laravel apps in a high availability setup, with a load balancer directing traffic between them.
Both apps use the same Redis database (AWS ElastiCache) for queues, and both are set up with Laravel Horizon.
Both apps have the same configuration and three workers each: "high", "medium" and "low".
Most jobs are running fine, but there's one job that takes longer than others and is causing an issue.
The failing job is running on the 'low' worker.
The job is picked up by one Horizon instance and starts processing. After 1 minute and 30 seconds, the second Horizon instance also picks up the same job and starts processing it. Since this job can't run in parallel, it fails.
It looks like the lock system isn't working properly, since both Laravel Horizon instances are picking up the job.
Does Horizon have a lock system, or do I have to implement my own?
I also have no idea why, 90 seconds after the job is picked up by one Horizon, the second Horizon picks it up as well.
config/horizon.php
'environments' => [
    'production' => [
        'supervisor-1' => [
            'connection' => 'redis',
            'queue' => ['high', 'default', 'low'],
            'balance' => 'auto',
            'processes' => 1,
            'tries' => 1,
            'timeout' => 1250,
            'memory' => 2048,
        ],
    ],
],
config/queue.php
'connections' => [
    'redis' => [
        'driver' => 'redis',
        'connection' => 'default',
        'queue' => 'default',
        'retry_after' => 1260,
        'block_for' => null,
    ],
],
The WithoutOverlapping middleware should help you with that:
https://laravel.com/docs/10.x/queues#preventing-job-overlaps
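For illustration, a minimal sketch of attaching that middleware to the long-running job (ProcessReport is a hypothetical class name; the lock key and timings are just examples):

<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\Middleware\WithoutOverlapping;

class ProcessReport implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable;

    public function middleware(): array
    {
        // One shared lock key for every instance of this job, so only
        // one worker (on either server) may run it at a time.
        return [
            (new WithoutOverlapping('process-report'))
                ->releaseAfter(60)    // put overlapping attempts back on the queue after 60s
                ->expireAfter(1300),  // let the lock expire a bit above the job timeout
        ];
    }

    public function handle(): void
    {
        // long-running work...
    }
}

Note that the middleware relies on an atomic cache lock, so both apps must point at the same cache store (e.g. the shared Redis), not the file or array driver. Also, the 90-second gap you describe matches Laravel's default retry_after of 90 seconds, so it may be worth confirming that the deployed queue.php really contains the value shown above.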
Related
After reading quite a few posts (like this one from Zechariah Campbell) about Laravel Horizon with a Redis queue, and after trying to configure and customize it, I couldn't figure out whether it's possible to cap the total number of processes run by Laravel Horizon.
Using the "simple" strategy with one process per queue (which is what I want) might cause CPU or memory issues when there are, say, 1000 queues, since that would spawn 1000 processes (every queue gets its own process by default).
Is there any way to cap the total number of processes spawned by Horizon, regardless of the number of queues? So 10 processes and 20 queues, and when one queue is empty (or, even better, after one job of a queue has been processed), the worker picks another queue?
My current configuration
horizon.php
'local' => [
    'supervisor-1' => [
        'connection' => 'redis',
        'balance' => 'simple',
        'processes' => 1,
        'tries' => 2,
    ],
],
queue.php
'redis' => [
    'driver' => 'redis',
    'connection' => 'queue',
    'queue' => '1000,2000,3000',
    'retry_after' => 90,
    'block_for' => null,
],
Laravel 5.8 and Horizon 3.0.
Currently I use Beanstalkd, but I want to migrate to a Redis queue with Horizon because of Beanstalkd's lack of maintenance and some nasty bugs.
UPDATE 2019-05-17:
I tried the maxProcesses config option:
'local' => [
    'supervisor-1' => [
        'connection' => 'redis',
        'balance' => 'simple',
        'maxProcesses' => 1,
        'processes' => 1,
        'tries' => 2,
    ],
],
But even then three processes are created, one for each queue.
As of Horizon 4 (and previous versions), Horizon will always assign at least one worker to each queue. There's no way around it.
So, if your minProcesses is 1 but you have 3 queues, then you'll have at least 3 processes (workers). I guess the minProcesses setting is more of a minProcessesPerQueue thing.
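That said, with the 'auto' balancing strategy more recent Horizon versions treat maxProcesses as a cap for the whole supervisor while minProcesses stays a per-queue floor, so a sketch along these lines (values purely illustrative) gets closer to "a few processes shared across many queues":

'local' => [
    'supervisor-1' => [
        'connection' => 'redis',
        'queue' => ['1000', '2000', '3000'],
        'balance' => 'auto',     // shift workers towards the busiest queues
        'minProcesses' => 1,     // lower bound per queue
        'maxProcesses' => 10,    // upper bound for the supervisor as a whole
        'tries' => 2,
    ],
],

With only three queues you still end up with at least three workers, as explained above, but the total never grows past maxProcesses.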
I'm working with a custom file storage disk and it works as expected on my local machine, but on my Forge server the generated asset URLs don't work.
My filesystem config looks like this:
'disks' => [
    'local' => [
        'driver' => 'local',
        'root' => storage_path('app'),
    ],
    'public' => [
        'driver' => 'local',
        'root' => storage_path('app/public'),
        'url' => env('APP_URL').'/storage',
        'visibility' => 'public',
    ],
    'assets' => [
        'driver' => 'local',
        'root' => storage_path('app/public/assets'),
        'url' => env('APP_URL').'/assets',
        'visibility' => 'public',
    ],
],
I have a file at {myapp root}/storage/app/public/assets/file.jpg and I'm generating URLs with Storage::disk('assets')->url('file.jpg'). This works fine locally: it gives me the expected URL, myapp.test/assets/file.jpg. In production I get the correct URL as well, myapp.com/assets/file.jpg, but that link 404s. If I insert "storage" into the URL, so that it's myapp.com/storage/assets/file.jpg, it works.
I've run php artisan storage:link in both environments, and everything else is exactly the same, but on the real server the URLs all 404.
UPDATE SEP 2020
I have a much simpler filesystem setup now, but IIRC the solution here in the end was just to symlink public/assets to storage/app/public/assets on the Forge server. At the time this was ugly to do automatically, but in Laravel 7+ (maybe earlier too) there's a new links key in config/filesystems.php that makes this very easy.
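For reference, a minimal sketch of that links key, adapted to this question's layout (the second entry reproduces the nested setup described here and is only illustrative):

// config/filesystems.php (Laravel 7+)
'links' => [
    public_path('storage') => storage_path('app/public'),
    public_path('assets')  => storage_path('app/public/assets'),
],

Running php artisan storage:link then creates every symlink listed in that array.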
In general, I wouldn't recommend nesting public disks/symlinks like this though. I should have just put things in storage/app/assets and kept it flatter overall.
It's still a mystery to me why this all appeared to work fine locally...
I am trying to use Redis in my application, but I am not sure whether my app is using the redis or the file cache driver, as I can't create tags, although I can create normal keys fine.
I have set CACHE_DRIVER=redis and also in my cache.php I have:
'default' => env('CACHE_DRIVER', 'redis'),
Also, in my database.php there is:
'redis' => [
    'client' => 'predis',
    'default' => [
        'host' => env('REDIS_HOST', '127.0.0.1'),
        'password' => env('REDIS_PASSWORD', null),
        'port' => env('REDIS_PORT', 6379),
        'database' => 0,
    ],
],
The reasons for my suspicion are that I cannot create tags, and that running redis-cli flushall under Homestead (over SSH) does not seem to get rid of the cache; I had to use Cache::flush() in Laravel instead.
So how can I effectively find out which cache driver my application is using?
It's pretty simple: you can use the redis-cli monitor command to check whether the GET/SET commands are actually happening:
redis-cli monitor
Then try using the application. If you can see keys being written, the Redis cache is in use.
You can also check the Redis keys with the following command:
redis-cli
Then enter the following:
keys *
I hope it's useful.
You should simply query your Redis DB like so:
redis-cli
Then, being on redis console:
SELECT <DB-NUMBER>
KEYS *
If you see some keys like
1) PREFIX:tag:TAG:key
2) PREFIX:tag:DIFFERENT-TAG:key
it is quite a hint that Laravel is using Redis as its cache backend. Otherwise, take a look at
<YOUR-APP-DIR>/storage/framework/cache
If you find files/sub-folders in there, it is quite a hint that Laravel is using file-based caching.
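You can also check from inside Laravel itself, for example in php artisan tinker; a small sketch, assuming a stock cache configuration:

use Illuminate\Support\Facades\Cache;

// The configured default cache store name ('redis', 'file', ...).
dump(config('cache.default'));

// The store class actually resolved at runtime,
// e.g. Illuminate\Cache\RedisStore or Illuminate\Cache\FileStore.
dump(get_class(Cache::getStore()));

// Tags only work on stores that support them (Redis, Memcached);
// on the file driver this line throws a BadMethodCallException.
Cache::tags(['people'])->put('john', 'value', 60);

That tags() behaviour is also why you could create plain keys but not tagged ones while the file driver was active.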
In my Laravel 5 setup, I included the Flysystem package and configured an FTP disk in config/filesystems.php:
'ftp' => [
    'driver' => 'ftp',
    'host' => env('FTP_HOST'),
    'username' => env('FTP_USER'),
    'password' => env('FTP_PASS'),
    'passive' => true,
],
Then I'm able to perform FTP uploads and downloads with the following calls:
Storage::disk('ftp')->put($filePath, $contents);
Storage::disk('ftp')->get($filePath);
Up to here everything is fine.
Problems start when I upload big files, above 200 MB.
The PHP memory limit is reached and execution stops with a fatal error.
In fact, when Storage::put is called, my machine's memory usage increases dramatically.
I've read somewhere that a solution might be to use streams to read from and write to my "virtual" disk.
However, I'm still not sure how to implement this in order to optimize memory usage during these operations.
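In case it helps, a rough sketch of the stream-based approach, assuming a Laravel version where the Storage facade exposes writeStream/readStream (Laravel 5.4 and later; the file names are hypothetical):

use Illuminate\Support\Facades\Storage;

// Upload: hand Flysystem a stream so the file is never loaded into memory at once.
$local = fopen(storage_path('app/big-export.zip'), 'r');
Storage::disk('ftp')->writeStream('backups/big-export.zip', $local);
if (is_resource($local)) {
    fclose($local);   // some Flysystem versions close the source stream themselves
}

// Download: copy the remote stream straight into a local file handle.
$remote = Storage::disk('ftp')->readStream('backups/big-export.zip');
$target = fopen(storage_path('app/big-export-copy.zip'), 'w');
stream_copy_to_stream($remote, $target);
fclose($remote);
fclose($target);

Because only small chunks are buffered at a time, memory usage stays flat regardless of file size.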
Both CDbHttpSession and CHttpSession seem to be ignoring the timeout value and garbage-collect the session data after a fairly short time (less than 12 hours). What could be the problem?
'session' => array(
    'class' => 'CDbHttpSession',
    'autoCreateSessionTable' => true,
    'autoStart' => true,
    'timeout' => 1209600,
    'cookieMode' => 'only',
    'sessionName' => 'ssession',
),
Maybe this is what you are looking for:
Setting the timeout for CHttpSession just sets the session.gc_maxlifetime PHP setting. If you run your application on Debian or Ubuntu, their default PHP has the garbage collector disabled and runs a cron job to clean it up.
In my apps I set the session dir somewhere in protected/runtime to separate my session from other apps. This is important on shared hosting sites and it's a good habit. The downside is that I have to remember to set up a cron job to clean the files in that folder.
Anyway, you should also set a timeout when calling CWebUser.login to log in a user.
from Yii Forum Post
Check the duration parameter of CWebUser.login.
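A short sketch of that duration parameter in Yii 1.x (UserIdentity and the surrounding login flow are the usual skeleton-app setup, shown only for context):

// In your login action, after the identity has been validated.
$identity = new UserIdentity($username, $password);
if ($identity->authenticate()) {
    // Keep the login valid for 14 days (1209600 seconds, matching the
    // session timeout above); 0 would mean "until the browser session ends",
    // which is exactly what gets cut short by PHP's garbage collection.
    Yii::app()->user->login($identity, 14 * 24 * 3600);
}

Note that a non-zero duration only works if the user component has 'allowAutoLogin' => true, since it relies on an identity cookie.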