We have two different pods in Kubernetes for our Laravel app,
one running Apache serving on port 80 (CMD /usr/sbin/apache2ctl -D FOREGROUND)
and another running the queue worker (Laravel Horizon) (CMD php /var/www/artisan horizon)
The issue: when I check the Horizon dashboard, it says 'Active', and I can see the jobs in the 'Pending Jobs' section, but they never actually execute. They just sit there idle.
Now, when I SSH into the pod running Apache and manually run 'php artisan horizon', it actually executes all the pending jobs.
I have already ensured the following:
Both pods are connected to the same Redis service
The Horizon prefix is the same for both pods
Double-check that your APP_ENV matches one of the environments in the horizon.php config; otherwise Horizon will not start any queue workers.
By default only local and production environments are provided:
https://laravel.com/docs/8.x/horizon#environments
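For illustration, this is roughly the shape of the stock config (a sketch, not the full default file); if APP_ENV is set to anything else, e.g. staging, Horizon starts no workers unless a matching key is added:

// config/horizon.php -- Horizon only boots supervisors for the key matching APP_ENV
'environments' => [
    'production' => [
        'supervisor-1' => ['connection' => 'redis', 'queue' => ['default']],
    ],
    'local' => [
        'supervisor-1' => ['connection' => 'redis', 'queue' => ['default']],
    ],
],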
After struggling for days, I got the answer to this problem.
While using Redis as a cache, queue, or broadcast broker in a Docker environment, make sure the following environment variables are defined properly and are the same across all the pods:
CACHE_PREFIX
REDIS_PREFIX
REDIS_CACHE_DB
HORIZON_PREFIX
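For illustration, a sketch of what consistent values could look like in each pod's .env (the variable names come from the list above; the values are just placeholders):

# .env -- identical in the apache pod and the horizon worker pod (example values)
CACHE_PREFIX=myapp_cache
REDIS_PREFIX=myapp_database_
REDIS_CACHE_DB=1
HORIZON_PREFIX=horizon: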
Hope this will help others trying to deploy Laravel apps using Kubernetes and Docker.
In my case, I needed to change my app environment from prod to production:
APP_ENV=production
In my case I dispatched the jobs onto the "emails" queue, but the horizon.php config file didn't list this queue name for supervisor-1 :-)
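A hedged sketch of that fix (the supervisor name and numbers are illustrative): the queue a job is dispatched to has to be listed for a supervisor, otherwise Horizon never consumes it:

// config/horizon.php -- add the "emails" queue so supervisor-1 actually processes it
'supervisor-1' => [
    'connection' => 'redis',
    'queue' => ['default', 'emails'], // jobs dispatched with ->onQueue('emails') land here
    'balance' => 'auto',
    'processes' => 3,
    'tries' => 3,
],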
I just restarted the Redis server:
/etc/init.d/redis-server restart
If jobs or listeners send requests to external services and cannot reach the destination hosts, and the connection timeout or the job timeout is set to a very large value, then the jobs may stay in a pending state long enough that it seems Horizon is not executing them.
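For example (a hypothetical job, sketched only to illustrate the point), very generous timeouts keep a worker occupied for a long time while a host is unreachable, so jobs queued behind it appear stuck:

// Hypothetical job: with large timeouts, work queued behind it can look stuck in "Pending".
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Support\Facades\Http;

class NotifyExternalService implements ShouldQueue
{
    use Queueable;

    public $timeout = 900; // job-level timeout in seconds -- large values delay failure

    public function handle()
    {
        // A large HTTP timeout here keeps the worker blocked on this single job.
        Http::timeout(300)->post('https://unreachable.example.com/webhook', ['ping' => true]);
    }
}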
I am using Laravel Forge with Redis as the queue driver.
I have updated my application's push notification code a few times, but the notifications that get sent still match the old code.
Changing the queue driver to database sends the notifications per the latest updates. However, when I switch back to Redis, it still sends the old version of the notification.
I have done "FLUSHALL" via redis-cli, but it didn't fix it.
Also I use Laravel Horizon to manage queues.
How can I fix this? Thanks in advance.
Edit: Another thing I noticed was that all code-driven dispatches were queued on Redis. I have listed the solution in an answer in the hope it helps someone else.
What I received from Forge support:
Hello,
There might be a worker that's stuck. You can try and run php artisan horizon:purge, which should kill all rogue worker processes, and then restart the daemon. It's advised to run the purge command in your deployment script to make sure all stale processes are killed.
-- Mohamed Said, forge@laravel.com
However, this is how I got it sorted:
php artisan horizon:terminate
php artisan queue:restart
And then the code was working properly.
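The underlying cause is that queue workers are long-lived PHP processes, so they keep running the code that was loaded when they started. A sketch of what this can look like in a deploy script (these are standard artisan commands; the exact placement is an assumption, and Horizon needs a supervisor so it comes back after terminating):

# deploy script (e.g. Forge) -- restart workers so they pick up the new code
php artisan horizon:purge
php artisan horizon:terminate
php artisan queue:restart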
Stop Redis and stop the Horizon workers, then start Redis and start the Horizon workers again. But before all of this, clear the cache.
I had a similar problem, and in my case it was just a matter of restarting all the services.
Is there any way to restart a worker when deploying? If the worker is not running, start it while deploying.
The workers are registered in the Procfile, but I always have to start them manually with an API request.
You can use cctrlapp APP_NAME deploy --restart-workers; this will stop all running workers and start them again with the newly deployed version.
But this doesn't start workers if they aren't running yet. This is tricky to automate because not all workers in the Procfile are long-running workers, and you could also have workers which are started multiple times.
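For reference, a hypothetical Procfile along those lines (the entry names and commands are assumptions, not taken from the question):

web: bash boot.sh
worker: php artisan queue:work --daemon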
I now have a stable Beanstalkd and Laravel 4 Queue setup running on one machine. My question is, how can I install the Laravel 4 workers on a second machine and make them listen to my Beanstalkd? Maybe a very obvious question to some but I can't figure it out. I noticed there was a connection field in the php artisan queue:listen command. Do I have to use that?
how can I install the Laravel 4 workers on a second machine and make them listen to my Beanstalkd?
You'll need to have a working instance of your Laravel application on the same server as the listener/workers.
This means deploying your application both to the web server and to the server that is listening for jobs.
Then, on the listening server, you can call php artisan queue:listen in order to listen for new jobs and create a worker to handle the job.
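As a sketch (the host value is hypothetical), the worker machine's queue config just needs to point at the machine running Beanstalkd:

// app/config/queue.php on the listening server (Laravel 4)
'default' => 'beanstalkd',

'connections' => [
    'beanstalkd' => [
        'driver' => 'beanstalkd',
        'host'   => '10.0.0.5', // address of the server running Beanstalkd
        'queue'  => 'default',
        'ttr'    => 60,
    ],
],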
I noticed there was a connection field in the php artisan queue:listen command. Do I have to use that?
In addition to the above, and as with most artisan commands, you will likely also need to define which environment the queue:listen command should use:
$ php artisan queue:listen --env=production
In this way, your Laravel app that is used to handle the workers (the app on the listening server) will know what configuration to use, including which database credentials to use. This also likely means that both the web server and your job/listening server need access to your database.
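As for the connection argument: you only need it if the connection you want to listen on is not your default one; the name has to match a key under connections in queue.php. For example:

php artisan queue:listen beanstalkd --env=production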
Lastly, you could also create 2 separate Laravel applications - one for your web application and one purely to handle processing jobs. Then they could each have their own configuration, and you'll have 2 (probably smaller?) code bases. But still, you'll have 2 code bases instead of 1.
In that regard, do whatever works best for your situation.
Is there a way to easily run a PHP application from the command line on Windows Azure?
I have a standard Web Application (on Azure) and I want to communicate using WebSockets.
So I need to have a WebSocket Server running all the time on Azure.
I use the Wrench project, which needs to run "all the time" to listen on a port and handle WebSocket messages sent from JavaScript.
So again: how can I easily run a "persistent" PHP application on Azure?
Thank you in advance.
Sandrino's answer is fine, but I prefer ProgramEntryPoint for doing this sort of thing. The trouble with a background task is that (unless you build something on your own) nothing is monitoring it. Using ProgramEntryPoint, Windows Azure will monitor the process, and if it exits for any reason, the role instance will be restarted.
EDIT:
Sandrino points out that the PHP program isn't the only thing running. (There's also a website.) In that case, I'd recommend launching php.exe in Run() in WebRole.cs. Process.Start it and then do a .WaitForExit() on it. That way, if the process exits, the role itself will exit from Run(), causing the role instance to restart. See http://blog.smarx.com/posts/using-other-web-servers-on-windows-azure for an example.
In order to run your PHP script as a command line application you should use the PHP CLI (command line interface).
php.exe -f "yourWebSocketServce.php" -- -arg1 -arg2 -arg3
Now, in order to run this in Windows Azure you'll need to define a startup task that runs this command. You'll see that the default task type is simple, which means that the startup of your role will block until the task finishes. But in your case the WebSocket server in PHP is a blocking process, which is why you should change the type to background (this will make sure the instance continues starting up while your WebSocket server is running).
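For illustration, the startup task would be declared along these lines in ServiceDefinition.csdef (the wrapper script name is an assumption; it would contain the php.exe command shown above):

<!-- ServiceDefinition.csdef: run the WebSocket server as a background startup task -->
<Startup>
  <Task commandLine="start-websocket-server.cmd" executionContext="elevated" taskType="background" />
</Startup>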
Here is a WebSockets service on Azure. - Live XSockets.NET
Have a look at http://live.xsockets.net, an easy way of getting started, but it depends on what you are about to do on the "server side". The service I mention can be used as a "message" dispatcher, to notify "clients" about changes etc. In other words, it is a way of boosting "regular" web apps.
In my application (Laravel 5.1) I have different commands which are fairly simple:
Get an IP from RabbitMQ
Attempt to establish a connection to that IP
Update the DB entry
Since it can take a while to connect to the IP (up to 30 seconds), I am forced to launch multiple instances of that script. For that I've created console commands which are launched by systemd.
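For context, a hypothetical systemd template unit for launching several instances (the unit name and the artisan command are made up for illustration):

# /etc/systemd/system/connect-worker@.service
[Unit]
Description=Connection worker %i

[Service]
ExecStart=/usr/bin/php /var/www/artisan connect:ip
Restart=always

[Install]
WantedBy=multi-user.target

Instances would then be started with something like systemctl start connect-worker@{1..50}.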
Once I went to production I was quite surprised by the amount of memory these scripts were consuming. Each script (as reported by memory_get_usage) was using about 21 MB on startup. Considering that ideally I would need to run about 50-70 of those scripts at the same time, it's quite a big issue for me.
Just as a test I installed a clean Laravel 5.1 project and launched its default artisan inspire command; PHP reported 19 MB.
Is it normal for laravel or am I missing something crucial here?