Laravel jobs pushed to Amazon SQS but not processing - php

I'm running Laravel 5.3. I'm attempting to test a queue job, and I have my queue configured to use Amazon SQS. My app is able to push a job onto the queue, and I can see the job in SQS. But it stays there, never being processed. I've tried running php artisan queue:work, queue:listen, queue:work sqs... None of them are popping the job off the queue. I'm testing this locally with Homestead. Is there a trick to processing jobs from SQS?

I faced the same problem while using Supervisor. What worked for me was specifying the queue connection (sqs) in the Supervisor command:
command=php /var/www/html/artisan queue:work sqs --tries=3
Then I ran these commands:
sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl restart all
Posting this just in case it helps anyone.

The instructions in the following post worked for me: https://blog.wplauncher.com/sqs-queue-in-laravel-not-working/. In essence, make sure you do the following:
create your standard queue in SQS
update your config/queue.php file to use your SQS credentials (I would suggest adding more env vars to your .env file and referencing them in this file)
update the QUEUE_DRIVER in your .env so it's set to QUEUE_DRIVER=sqs
update your supervisor configuration file (typically /etc/supervisor/conf.d/laravel-worker.conf)
update and restart Supervisor (see the three commands mentioned by Dijkstra above)
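As a concrete sketch of the config/queue.php step: the sqs connection in Laravel 5.x reads its credentials from env vars. Every value below is a hypothetical placeholder; check the key names against your own stock config/queue.php:

```php
// config/queue.php, inside the 'connections' array (Laravel 5.x layout)
'sqs' => [
    'driver' => 'sqs',
    'key'    => env('SQS_KEY'),     // AWS access key id, set in .env
    'secret' => env('SQS_SECRET'),  // AWS secret access key, set in .env
    // The prefix is the queue URL minus the queue name:
    'prefix' => env('SQS_PREFIX'),  // e.g. https://sqs.us-east-1.amazonaws.com/123456789012
    'queue'  => env('SQS_QUEUE', 'default'),  // the queue *name*, not the full URL
    'region' => env('SQS_REGION', 'us-east-1'),
],
```

With QUEUE_DRIVER=sqs in .env, queue:work will then pull from that queue.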

How to restart supervisor for a Laravel deployment?

I currently use a cron job to call php artisan queue:work --once every minute to work on my jobs queue in production.
I would like to use supervisor instead to handle my queues.
In the docs in the section of supervisor-configuration it states:
Since queue workers are long-lived processes, they will not pick up changes to your code without being restarted. So, the simplest way to deploy an application using queue workers is to restart the workers during your deployment process. You may gracefully restart all of the workers by issuing the queue:restart command:
php artisan queue:restart
This command will instruct all queue workers to gracefully "die" after they finish processing their current job so that no existing jobs are lost. Since the queue workers will die when the queue:restart command is executed, you should be running a process manager such as Supervisor to automatically restart the queue workers.
I don't understand the last sentence. So let's say I have installed and configured Supervisor as described here, and I manually logged into the server through SSH and started Supervisor:
sudo supervisorctl start laravel-worker:*
Do I need to call php artisan queue:restart on deployment? If so, that will only kill the current workers; how do I tell Supervisor to restart the queue workers? Do I need to call sudo supervisorctl restart laravel-worker:* in the deployment after php artisan queue:restart?
I struggled with this for quite some time. It is a little confusing, and the docs tend to point to each other rather than explain how the whole system works.
The whole point of installing supervisor on your server is to automate the queue process within any Laravel apps running on the machine. If you look at the example file on the help page you linked to, all it is doing is going into a specific Laravel instance and starting the queue.
When Supervisor starts, it is the equivalent of running:
php artisan queue:work
within your Laravel folder (there is no queue:start command; Supervisor simply keeps queue:work running for you). Supervisor owns the process, but you still have control to restart the queue, either through sudo supervisorctl restart laravel-worker:* or php artisan queue:restart within a single Laravel folder. You do not need to restart Supervisor if you are manually restarting the queue with the artisan command; that would be redundant. You can test this by restarting the queue and checking that code changes are picked up, or by watching the queue itself to see that all workers restart.
The 'gotcha' with all of this is that if you introduce new code and deploy it, you must remember to restart the queue for that instance.
To make things more complicated, but to eventually make it simple, take a look at Laravel Horizon, which basically takes the place of the supervisor in a way that is a bit easier to maintain.
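Putting the answer above together: a deployment hook only needs to signal the workers, because with autorestart=true Supervisor brings up fresh processes running the new code. A sketch, with hypothetical paths:

```shell
# Run after the new code is in place on the server
cd /var/www/html  # hypothetical app path

# Ask every worker to finish its current job and exit gracefully;
# Supervisor (autorestart=true) respawns them against the new code.
php artisan queue:restart

# Only needed when the Supervisor config file itself changed:
# sudo supervisorctl reread && sudo supervisorctl update
```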

How to launch multiple queues at the same time to achieve multi-threading in Laravel?

I want to fetch data from the server concurrently, using something like multithreading.
The reason behind this is to reduce the load on the server and its resources.
So I found Laravel queues, and since I am new to Laravel I didn't know much about them, but after some R&D I developed a job that dispatches queued jobs one after another. However, I want the queued jobs to start at the same time.
You can install and use Supervisor, then configure it with the numprocs=8 property to run, for example, 8 queue worker processes.
Here is the Laravel documentation about installing and configuring Supervisor: https://laravel.com/docs/master/queues#supervisor-configuration
You can also categorize your queues and dispatch your job to a particular queue (here is the docs) then use Supervisor with a config file for each queue.
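For the per-queue approach, each Supervisor program can drain one named queue via the --queue flag. Everything below (file name, path, queue name, process count) is illustrative:

```ini
; /etc/supervisor/conf.d/laravel-emails-worker.conf
[program:laravel-emails-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/html/artisan queue:work --queue=emails --sleep=3 --tries=3
numprocs=4            ; four concurrent workers just for the "emails" queue
autostart=true
autorestart=true
```

Jobs dispatched with ->onQueue('emails') would then be handled by this pool, in parallel with workers attached to other queues.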
Installing Supervisor
Supervisor is a process monitor for the Linux operating system, and will automatically restart your queue:work process if it fails. To install Supervisor on Ubuntu, you may use the following command:
sudo apt-get install supervisor
Configuring Supervisor
Supervisor configuration files are typically stored in the /etc/supervisor/conf.d directory. Within this directory, you may create any number of configuration files that instruct Supervisor how your processes should be monitored. For example, let's create a laravel-worker.conf file that starts and monitors a queue:work process:
[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /home/forge/app.com/artisan queue:work sqs --sleep=3 --tries=3
autostart=true
autorestart=true
user=forge
numprocs=8
redirect_stderr=true
stdout_logfile=/home/forge/app.com/worker.log
stopwaitsecs=3600
In this example, the numprocs directive will instruct Supervisor to run 8 queue:work processes and monitor all of them, automatically restarting them if they fail. You should change the queue:work sqs portion of the command directive to reflect your desired queue connection.
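For instance, if you later move from SQS to Redis, only the connection argument in the command directive changes (path as in the example above):

```ini
command=php /home/forge/app.com/artisan queue:work redis --sleep=3 --tries=3
```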

Laravel + Beanstalkd: How to run "queue:listen" as a service

I'm using Beanstalkd as a work queue in my project.
Now, my project is completed and I have to deploy it on VPS(production server).
There is something that's confusing me: should I SSH into the production server and manually type php artisan queue:listen? (That seems wrong.)
Is there any way to run queue:listen as a service?
You should use something like Supervisor to run the queue in production. This will allow you to run the process in the background, specify the number of workers you want processing queued jobs and restart the queue should the process fail.
As for the queue you choose to use, that's up to you. In the past I've used Beanstalkd installed locally on an instance, and Amazon SQS. The local instance was fine for basic email sending and other async tasks; SQS was great when the message volume was massive and needed to scale. There are other SaaS products too, such as IronMQ, but the usual reason people run into issues in production is that they're not using Supervisor.
You can install Supervisor with apt-get. The following configuration is a good place to start:
[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /home/forge/app.com/artisan queue:work --sleep=3 --tries=3
autostart=true
autorestart=true
numprocs=8
stdout_logfile=/home/user/app.com/worker.log
This will do the following:
Give the queue worker a unique name
Run the php artisan queue:work command
Automatically start the queue worker on system restart and automatically restart the queue workers should they fail
Run the queue worker across eight processes (this can be increased or reduced depending on your needs)
Log any output to /home/user/app.com/worker.log
To start the workers (after a reread and update of the Supervisor configuration), you'd run:
sudo supervisorctl start laravel-worker:*
The documentation gives you some more in-depth information about using Supervisor to run Laravel's queue processes.

How to automate the deployment of new versions of a PHP script running in the background without downtime?

I have an AMQP consumer (a RabbitMQ consumer) written in PHP, always running in the background. This script runs on multiple nodes, 12 times per node: 12 Unix background processes started with:
php -f consumer.php &
If a new version of the code must be deployed, at the moment I always have to kill ALL these processes manually and launch them again one by one, on each node.
Is there a way to automate the deployment of background scripts? I.e., put it in a deployment pipeline and have the processes reloaded, similar to using https://deployer.org.
Is there a way to avoid downtime?
Would ReactPHP help in this case?
Found the answer in the Laravel docs (the solution works for any always-running background process, not just PHP and Laravel): Supervisor!
Configuring Supervisor
Supervisor configuration files are typically stored in the /etc/supervisor/conf.d directory. Within this directory, you may create any number of configuration files that instruct supervisor how your processes should be monitored. For example, let's create a laravel-worker.conf file that starts and monitors a queue:work process:
Starting Supervisor
Once the configuration file has been created, you may update the Supervisor configuration and start the processes using the following commands:
sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl start laravel-worker:*
It will even let me start as many processes as I want with a single configuration file and a single command. Again, from the Laravel docs:
[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /home/forge/app.com/artisan queue:work sqs --sleep=3 --tries=3
autostart=true
autorestart=true
user=forge
numprocs=8
redirect_stderr=true
stdout_logfile=/home/forge/app.com/worker.log
By calling sudo supervisorctl start laravel-worker:*, 8 background processes will run, and they are restarted automatically in case of error.
If I just want to restart with a new released version, I call the restart command directly:
supervisorctl restart laravel-worker:*
I'll just integrate this as a Deployer task in my CI/CD pipeline.
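Adapted to the consumer.php from the question (paths hypothetical), numprocs=12 stands in for the 12 manually launched processes on each node:

```ini
; /etc/supervisor/conf.d/amqp-consumer.conf
[program:amqp-consumer]
process_name=%(program_name)s_%(process_num)02d
command=php -f /var/www/consumer.php
numprocs=12          ; the 12 background processes per node
autostart=true
autorestart=true
stopwaitsecs=60      ; grace period for a consumer to finish its current message
```

A supervisorctl restart amqp-consumer:* in the pipeline then replaces all 12 at once.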

Redis queues in Laravel with Homestead

I'm trying to use Redis for my queues.
Currently I'm on Homestead and I run php artisan queue:work --daemon --tries=3 in my virtual machine.
To test the queues I write something to the log. When I use the sync driver, the logger writes; when I use the redis driver, it does not.
I also checked the running processes and redis-server is running. What's wrong?
Run redis-cli monitor and see if it shows anything being added when you push to the queue.
If nothing shows up, it means the queue isn't actually talking to redis.
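If it doesn't, the first things to check on Homestead are the .env settings the worker was started with. A minimal sketch; the values shown are the usual Homestead defaults, so verify yours:

```ini
QUEUE_DRIVER=redis
# Inside the VM, Redis listens locally:
REDIS_HOST=127.0.0.1
REDIS_PORT=6379
```

Also note that queue:work --daemon boots the framework once, so after any .env change the worker must be restarted before it sees the new values.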
