CloudControl: Restart workers on deployment - php

Is there any way to restart a worker when deploying? If the worker is not running, it should be started during the deploy.
The workers are registered in the Procfile, but I always have to start them manually with an API request.

You can use cctrlapp APP_NAME deploy --restart-workers; this will stop all running workers and start them again with the newly deployed version.
But this doesn't start workers that aren't already running. This is tricky to automate, because not all workers in the Procfile are long-running workers, and you could also have workers that are started multiple times. For illustration, see the Procfile sketch below.
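A made-up Procfile showing why a blanket "restart everything" is hard to automate (names and scripts are hypothetical): mailworker is a long-running worker that is safe to restart and may be scaled to several instances, while report is a short-lived task that should not be started on every deploy:
web: php -S 0.0.0.0:$PORT -t public/
mailworker: php worker.php mail
report: php generate_report.php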

Related

WebSockets problem while deploying Symfony application

I am using Ratchet WebSockets in my Symfony app. The WebSocket notification handler is built on the Symfony Process component: when a particular message arrives on the socket, it starts Symfony commands through the Process component.
The socket server itself is managed by Supervisor. After every code deployment I run the Supervisor restart command to apply the new code to the socket server.
And here comes the problem: if I deploy code changes while any of the Symfony commands is running, the socket disconnects, and the only way to bring it back up is to kill the Supervisor process and start Supervisor again. Of course, this way all the running processes die.
My goal would be for the running Symfony processes to keep going even after the Supervisor restart; once they finish, the next ones would run on the newly deployed code. I wonder what the standard approach to continuous delivery with WebSockets is.
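No answer is recorded here, but a sketch of one partial mitigation, assuming the socket server is a Symfony console command (the program name, command, and timeout values below are hypothetical): Supervisor can be told to send the stop signal only to the server process itself and to wait before killing it, so child processes spawned through the Process component are left to finish on their own:
[program:websocket-server]
command=php /var/www/app/bin/console app:socket:serve
autostart=true
autorestart=true
; send TERM only to the server itself, not to the whole process group,
; so commands started via the Process component keep running
stopasgroup=false
killasgroup=false
; give the server up to 5 minutes to shut down cleanly before SIGKILL
stopsignal=TERM
stopwaitsecs=300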

Laravel Horizon not executing pending jobs - Kubernetes and Docker environment

We have two different pods in Kubernetes for our Laravel app:
one running Apache, serving on port 80 (CMD /usr/sbin/apache2ctl -D FOREGROUND),
and another running the worker (Laravel Horizon) (CMD php /var/www/artisan horizon).
The issue is that when I check the Horizon dashboard, it says 'Active', and I can see the jobs in the 'Pending Jobs' section, but they never actually execute; they just sit there idle.
However, when I SSH into the pod running Apache and manually run the command 'php artisan horizon', it executes all the pending jobs.
I have already ensured the following:
Both pods are connected to the same Redis database service
The Horizon prefix is the same for both pods
Double-check that your APP_ENV matches one of the environments in the horizon.php config; otherwise Horizon will not start any queue workers.
By default, only the local and production environments are provided:
https://laravel.com/docs/8.x/horizon#environments
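For reference, the relevant section of config/horizon.php looks roughly like this (queue names and process counts are illustrative); the keys under environments are what APP_ENV is matched against:
'environments' => [
    'production' => [
        'supervisor-1' => [
            'connection' => 'redis',
            'queue' => ['default'],
            'balance' => 'simple',
            'processes' => 3,
            'tries' => 3,
        ],
    ],
    'local' => [
        'supervisor-1' => [
            'connection' => 'redis',
            'queue' => ['default'],
            'balance' => 'simple',
            'processes' => 1,
            'tries' => 3,
        ],
    ],
],
With APP_ENV set to anything that isn't a key in this array, Horizon reports 'Active' but starts no workers, which matches the symptoms above.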
After struggling for days, I found the answer to this problem.
When using Redis as a cache, queue, or broadcast broker in a Docker environment, you need to make sure that the following environment variables are defined properly and are identical across all the pods:
CACHE_PREFIX
REDIS_PREFIX
REDIS_CACHE_DB
HORIZON_PREFIX
Hope this helps others trying to deploy Laravel apps using Kubernetes and Docker.
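As an illustration of the point above (the values here are made up; what matters is that every pod gets identical ones):
CACHE_PREFIX=myapp_cache
REDIS_PREFIX=myapp_database_
REDIS_CACHE_DB=1
HORIZON_PREFIX=myapp_horizon:
If these differ between the Apache pod and the Horizon pod, jobs get pushed under one key prefix while the workers poll another, so the dashboard shows pending jobs that are never picked up.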
In my case, I needed to change my app environment from prod to production:
APP_ENV=production
In my case I had dispatched the jobs onto the "emails" queue, but the horizon.php config file didn't specify this queue name for supervisor-1 :-)
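That is, the queue a job is dispatched to must be listed for one of the supervisors in config/horizon.php, roughly like this:
'supervisor-1' => [
    'connection' => 'redis',
    // without 'emails' here, jobs dispatched to that queue
    // stay in "Pending Jobs" and are never picked up
    'queue' => ['default', 'emails'],
],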
I just restarted the Redis server:
/etc/init.d/redis-server restart
If jobs or listeners send requests to external services and cannot reach the destination hosts, and the connection timeout (or the job timeout) is set to a very large value, the jobs can stay in the pending state long enough that it looks like Horizon is not executing them.
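A sketch of guarding against that on the job side (the class name, URL, and values are illustrative, and Guzzle is assumed as the HTTP client): a queued job can declare its own timeout so a hung outbound call fails fast instead of lingering as pending:
<?php

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Queue\InteractsWithQueue;

class NotifyExternalService implements ShouldQueue
{
    use InteractsWithQueue, Queueable;

    // fail the job after 30 seconds instead of letting an
    // unreachable host keep it (and a worker) hanging
    public $timeout = 30;

    public function handle()
    {
        // short connect/request timeouts on the outbound call as well
        $client = new \GuzzleHttp\Client(['connect_timeout' => 5, 'timeout' => 20]);
        $client->post('https://api.example.com/notify', ['json' => ['event' => 'ping']]);
    }
}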

Laravel 5.6. Stop a worker AFTER job execution with supervisor

Is it possible to send a stop signal to the worker in such a way that it stops only AFTER processing the current job?
Currently I have a job that takes some time AND can't be interrupted, because I have only one try/attempt.
Sometimes I need to stop the workers to redeploy my code. Is there a way to stop Laravel's worker only after it finishes the current job and before it starts a new one?
I'm using Supervisor for restarting the queue workers.
Because currently, on each deploy, I'm losing one job and my client loses money :(
P.S.
This is NOT a duplicate of Laravel Artisan CLI safely stop daemon queue workers, because that question is about the Artisan CLI and I'm using Supervisor.
autorestart=true in supervisor + php artisan queue:restart solves the issue.
There is a built-in feature for this:
php artisan queue:restart
This command will instruct all queue workers to gracefully "die" after they finish processing their current job so that no existing jobs are lost. Since the queue workers will die when the queue:restart command is executed, you should be running a process manager such as Supervisor to automatically restart the queue workers.
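The Supervisor side then looks roughly like the example in the Laravel docs (the paths and process count here are placeholders); autorestart=true is what brings the workers back up after queue:restart tells them to die:
[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
; --tries=1 matches the "only one attempt" constraint above
command=php /var/www/app/artisan queue:work --sleep=3 --tries=1
autostart=true
autorestart=true
numprocs=4
redirect_stderr=true
stdout_logfile=/var/www/app/storage/logs/worker.log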
Supervisord has an XML-RPC API which you could use from your PHP code. I suggest you use Zend's XML-RPC client.
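A minimal sketch, assuming Supervisor's [inet_http_server] is enabled on port 9001 and the worker runs as a program named laravel-worker (both are placeholders):
<?php
// talk to supervisord over its XML-RPC interface
$client = new Zend\XmlRpc\Client('http://127.0.0.1:9001/RPC2');

// stop the worker, then start it again on the new code;
// stopProcess sends the program's configured stop signal and
// blocks until the process has actually stopped
$client->call('supervisor.stopProcess', ['laravel-worker']);
$client->call('supervisor.startProcess', ['laravel-worker']);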

Laravel 5.5 Worker Memory Issue

I have a Laravel worker set up on AWS Elastic Beanstalk. It is a t2.micro instance.
I am noticing that whenever the worker picks up work from AWS SQS, memory consumption on the EC2 instance spikes to 99% and then comes back down.
This does not happen on any other instance, just this specific worker instance.
Does anyone have an idea why this might be happening?
Are you certain that only one worker is running? Chances are you are executing php artisan queue:work multiple times as a cron job, whereas it should only be executed once as a daemon and monitored with Supervisor.
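A quick, generic way to check on the instance (just shell, nothing Laravel-specific):
# count running queue workers; more than the expected number
# suggests overlapping cron launches ([q] keeps grep out of the match)
ps aux | grep "[q]ueue:work" | wc -l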

Issue with Redis managing Laravel Queues

I am using Laravel Forge, with Redis as the queue driver.
I have updated my application's code for sending push notifications a few times over, but the notifications are still sent as in the old code.
Changing the queue driver to database sends the notifications as per the latest updates. However, when I switch it back to Redis, it still sends the old version of the notification.
I have run FLUSHALL via redis-cli, but it didn't fix it.
I also use Laravel Horizon to manage the queues.
How can I fix this? Thanks in advance.
Edit: Another thing I noticed was that all code-driven dispatches were queued on Redis. I have listed the solution in an answer, in the hope that it helps someone else.
What I received from Forge support:
Hello,
There might be a worker that's stuck. You can try to run php artisan horizon:purge, which should kill all rogue worker processes, and then restart the daemon. It's advised to run the purge command in your deployment script to make sure all stale processes are killed.
-- Mohamed Said, forge@laravel.com
However, this is how I got it sorted:
php artisan horizon:terminate
php artisan queue:restart
And then the code was working properly.
Stop Redis and stop the Horizon workers; then start Redis and start the Horizon workers again. But before all of this, clear the cache.
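Spelled out as commands (a sketch; service names vary by distribution, and Horizon is terminated before Redis is bounced because horizon:terminate finds the worker processes through Redis):
php artisan cache:clear            # clear the cache first, as noted above
php artisan horizon:terminate      # let workers finish their current jobs and exit
sudo service redis-server restart  # bounce Redis
php artisan horizon                # start Horizon again (or let Supervisor restart it)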
I had a similar problem, and in my case it was just a matter of restarting all the services.
