How to restart supervisor for a Laravel deployment?

I currently use a cron job to call php artisan queue:work --once every minute to work on my jobs queue in production.
I would like to use supervisor instead to handle my queues.
In the docs in the section of supervisor-configuration it states:
Since queue workers are long-lived processes, they will not pick up changes to your code without being restarted. So, the simplest way to deploy an application using queue workers is to restart the workers during your deployment process. You may gracefully restart all of the workers by issuing the queue:restart command:
php artisan queue:restart
This command will instruct all queue workers to gracefully "die" after they finish processing their current job so that no existing jobs are lost. Since the queue workers will die when the queue:restart command is executed, you should be running a process manager such as Supervisor to automatically restart the queue workers.
I don't understand the last sentence. So let's say I have installed and configured Supervisor as described here, and I manually logged into the server through SSH and started Supervisor:
sudo supervisorctl start laravel-worker:*
Do I need to call php artisan queue:restart on deployment? If so, that will only kill the current workers, so how do I tell Supervisor to start the queue workers again? Do I need to call sudo supervisorctl restart laravel-worker:* in the deployment after php artisan queue:restart?

I struggled with this for quite some time. It is a little confusing, and the docs tend to point to each other rather than explain how the whole system works.
The whole point of installing Supervisor on your server is to automate the queue process for any Laravel apps running on the machine. If you look at the example file on the help page you linked to, all it is doing is going into a specific Laravel instance and starting the queue.
When Supervisor starts, it is the equivalent of running:
php artisan queue:work
within your Laravel folder. Supervisor is running the process. But you still have control to restart the queue, either through sudo supervisorctl restart laravel-worker:* or php artisan queue:restart within a single Laravel folder. You do not need to restart Supervisor if you are manually restarting the queue with the artisan command; that would be redundant. You can test this by restarting the queue and checking that your code changes are picked up, or by watching the queue itself to see that all workers restart.
The 'gotcha' with all of this is that if you introduce new code and deploy it, you must remember to restart the queue for that instance.
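A minimal sketch of a deploy step that handles this, assuming the Supervisor program is named laravel-worker as in the docs and the app lives at /var/www/app (the path and the pull command are placeholders, not from the question):
cd /var/www/app
git pull origin master                            # or however you ship the new release
composer install --no-dev --optimize-autoloader
php artisan migrate --force
php artisan queue:restart                         # workers finish their current job and exit; Supervisor respawns them on the new code
Because the documented Supervisor config sets autorestart=true, there is no need for a sudo supervisorctl restart laravel-worker:* here; queue:restart alone is enough unless the .conf file itself changed.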
To make things more complicated, but eventually simpler, take a look at Laravel Horizon, which takes over much of Supervisor's role in managing queue workers (you typically still keep Supervisor around just to keep the horizon process itself running) and is a bit easier to maintain.

Related

Laravel Restart specific queue

Is there a way to restart a specific queue in Laravel? I know you can restart all the queues by running php artisan queue:restart, but how do I run something like php artisan queue:restart --queue='MyEmailsQueue'? Better yet, restart the queue by hitting an endpoint, something like Artisan::queue('send-emails') restart.

Laravel + Beanstalkd: How to run "queue:listen" as a service

I'm using Beanstalkd as a work queue in my project.
Now my project is complete and I have to deploy it to a VPS (production server).
There is something that's confusing me: should I SSH into the production server and manually type php artisan queue:listen? (That seems like a bad idea.)
Is there any way to run queue:listen as a service?
You should use something like Supervisor to run the queue in production. This will allow you to run the process in the background, specify the number of workers you want processing queued jobs and restart the queue should the process fail.
As for the queue you choose to use, that's up to you. In the past I've used Beanstalkd installed locally on an instance, and Amazon SQS. The local instance was fine for basic email sending and other async tasks; SQS was great when the message volume was massive and needed to scale. There are other SaaS products too, such as IronMQ, but the usual reason people run into issues in production is that they're not using Supervisor.
You can install Supervisor with apt-get. The following configuration is a good place to start:
[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /home/forge/app.com/artisan queue:work --sleep=3 --tries=3
autostart=true
autorestart=true
numprocs=8
stdout_logfile=/home/user/app.com/worker.log
This will do the following:
Give the queue worker a unique name
Run the php artisan queue:work command
Automatically start the queue worker on system restart and automatically restart the queue workers should they fail
Run the queue worker across eight processes (this can be increased or reduced depending on your needs)
Log any output to /home/user/app.com/worker.log
To start the queue workers, you'd run the following (after telling Supervisor to re-read and apply the configuration, as sketched below):
sudo supervisorctl start laravel-worker:*
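Putting it together, the install-and-start sequence on a fresh server might look like this (a sketch; /etc/supervisor/conf.d/ is Supervisor's usual config directory and the program name matches the config above):
sudo apt-get install supervisor
# save the configuration above as /etc/supervisor/conf.d/laravel-worker.conf
sudo supervisorctl reread                         # detect the new config file
sudo supervisorctl update                         # apply it; with autostart=true this already launches the workers
sudo supervisorctl start laravel-worker:*         # harmless if they are already running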
The documentation gives you some more in-depth information about using Supervisor to run Laravel's queue processes.

How to automate the deployment of new versions of a PHP script running in the background without downtime?

I have an AMQP consumer (a RabbitMQ consumer) written in PHP that is always running in the background. This script runs on multiple nodes, 12 times per node: 12 Unix background processes started with:
php -f consumer.php &
If a new version of the code must be deployed, at the moment I always have to kill ALL of these processes manually and launch them again one by one, on each node.
Is there a way to automate the deployment of these background scripts, i.e. put them in a deployment pipeline and have them reloaded, similar to using https://deployer.org?
Is there a way to avoid downtime?
Any way ReactPHP would help in this case?
I found the answer in the Laravel docs (the solution works for any always-running background process, not just PHP and Laravel): Supervisor!
Configuring Supervisor
Supervisor configuration files are typically stored in the /etc/supervisor/conf.d directory. Within this directory, you may create any number of configuration files that instruct supervisor how your processes should be monitored. For example, let's create a laravel-worker.conf file that starts and monitors a queue:work process:
Starting Supervisor
Once the configuration file has been created, you may update the Supervisor configuration and start the processes using the following commands:
sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl start laravel-worker:*
It will even let me start as many processes as I want with a single configuration file and a single command. Again, from the Laravel docs:
[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /home/forge/app.com/artisan queue:work sqs --sleep=3 --tries=3
autostart=true
autorestart=true
user=forge
numprocs=8
redirect_stderr=true
stdout_logfile=/home/forge/app.com/worker.log
By calling sudo supervisorctl start laravel-worker:*, 8 background processes will run, and they will be restarted automatically if they fail.
If I just want to restart the workers after releasing a new version, I call the restart command directly:
supervisorctl restart laravel-worker:*
I'll just integrate this as a Deployer task in my CI/CD pipeline.
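A rough sketch of what that task would run on each node once the new release is in place (the laravel-worker program name comes from the config above; passwordless sudo for the deploy user is an assumption):
sudo supervisorctl reread                         # only needed if the .conf file itself changed
sudo supervisorctl update
sudo supervisorctl restart laravel-worker:*       # stops the 8 consumers and starts them again on the new code
Note that restart is stop-then-start, so there is a brief gap; if even that is too much, splitting the consumers into two Supervisor program groups and restarting them one group at a time keeps some of them alive throughout.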

Laravel jobs pushed to Amazon SQS but not processing

I'm running Laravel 5.3. I'm attempting to test a queue job, and I have my queue configured to use Amazon SQS. My app is able to push a job onto the queue, and I can see the job in SQS. But it stays there, never being processed. I've tried running php artisan queue:work, queue:listen, queue:work sqs... None of them are popping the job off the queue. I'm testing this locally with Homestead. Is there a trick to processing jobs from SQS?
I faced the same problem. I was using supervisor. This worked for me:
I specified the queue driver (sqs) in the command:
command=php /var/www/html/artisan queue:work sqs --tries=3
Then I ran these commands:
sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl restart all
Posting this just in case it helps anyone.
The instructions in the following post worked for me: https://blog.wplauncher.com/sqs-queue-in-laravel-not-working/. In essence, make sure you do the following:
create your standard queue in SQS
update your config/queue.php file to use your SQS credentials (I would suggest adding more env vars to your .env file and referencing them in this file)
update your QUEUE_DRIVER in your .env, so it's set to QUEUE_DRIVER=sqs
update your supervisor configuration file (typically /etc/supervisor/conf.d/laravel-worker.conf)
update and restart supervisor (see the 3 commands mentioned by @Dijkstra)
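If it helps, a quick sanity check after those steps might look like this (the artisan path is taken from the answer above; adjust it to your install, and config:clear only matters if you cache your configuration):
php /var/www/html/artisan config:clear            # make sure a cached config isn't hiding the new SQS settings
php /var/www/html/artisan queue:work sqs --once   # pull and process a single job to confirm the SQS connection
sudo supervisorctl restart laravel-worker:*       # then restart the long-running workers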

How does Laravel queue work and what if php artisan queue:listen stops

I have installed beanstalkd and it's working fine with Laravel. The point where I am puzzled is that we have to do
php artisan queue:listen
to start listening to the queue. Right now, I am using it on an Amazon EC2 instance remotely through PuTTY, but what if I close the terminal? Will the jobs created through the code still work? Do I have to manually call php artisan queue:listen or php artisan queue:work all the time? That does not seem right.
Once php artisan queue:listen is started, will it keep running even if we close the terminal?
Actually, I don't know.
You need to install Supervisor as well. Here is a tutorial on using beanstalkd with Laravel:
http://fideloper.com/ubuntu-beanstalkd-and-laravel4
Here are details on installing Supervisor:
http://supervisord.org/installing.html
I personally use a Redis instance and run my queue with Supervisor from there.
I find it's a bit more memory efficient than beanstalkd personally, but to each their own.
Supervisor will execute the queue:listen command from artisan and this will run your jobs; if you have multiple Supervisor processes configured, you can run multiple workers side by side.
Depending on what you are doing, I would also look into Python and multithreading, as I have used that for a few things I used to run through a queue and it has provided even better results.
Example config file for Supervisor:
[program:myqueue]
command=php artisan queue:listen --env=your_environment
directory=/path/to/laravel
stdout_logfile=/path/to/laravel/app/storage/logs/myqueue_supervisord.log
redirect_stderr=true
autostart=true
autorestart=true
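After saving that file (e.g. as /etc/supervisor/conf.d/myqueue.conf, a guess at the usual path), you would load and start it with:
sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl start myqueue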
You can also make use of Laravel's Task Scheduler, i.e. add the php artisan queue:listen command to the scheduler and set its frequency to whatever you want (a rough sketch follows below).
That will make sure the queue listening process is kicked off automatically.
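A sketch of what that could look like, assuming the standard scheduler cron entry (the project path is a placeholder, and this uses queue:work --once rather than queue:listen, since a long-lived listener does not fit a per-minute schedule):
# crontab entry that triggers Laravel's scheduler every minute
* * * * * php /path/to/laravel/artisan schedule:run >> /dev/null 2>&1
# in app/Console/Kernel.php you would then register something like:
#   $schedule->command('queue:work --once')->everyMinute();
# so one job is processed each minute without keeping a worker running permanently.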
Hope it makes sense.
