I now have a stable Beanstalkd and Laravel 4 Queue setup running on one machine. My question is, how can I install the Laravel 4 workers on a second machine and make them listen to my Beanstalkd? Maybe a very obvious question to some but I can't figure it out. I noticed there was a connection field in the php artisan queue:listen command. Do I have to use that?
how can I install the Laravel 4 workers on a second machine and make them listen to my Beanstalkd?
You'll need a working instance of your Laravel application on the same server as the listener/workers.
This means deploying your application both to the web server and to the server that is listening for jobs.
Then, on the listening server, you can call php artisan queue:listen to listen for new jobs and spawn a worker to handle each one.
I noticed there was a connection field in the php artisan queue:listen command. Do I have to use that?
On top of the above, and as with most artisan commands, you will likely also need to define which environment the queue:listen command should use:
$ php artisan queue:listen --env=production
In this way, the Laravel app that handles the workers (the app on the listening server) will know which configuration to use, including which database credentials. This also likely means that both the web server and your job/listening server need access to your database.
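For example (a sketch; the host value is a placeholder), both installs can point the beanstalkd connection in app/config/queue.php at the machine running Beanstalkd:

// app/config/queue.php (Laravel 4) — shared by the web server and the listening server
'connections' => array(
    'beanstalkd' => array(
        'driver' => 'beanstalkd',
        'host'   => '10.0.0.5', // placeholder: address of the Beanstalkd machine
        'queue'  => 'default',
        'ttr'    => 60,
    ),
),

The connection argument you noticed then lets the listening server name that connection explicitly:

$ php artisan queue:listen beanstalkd --env=production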
Lastly, you could also create two separate Laravel applications: one for your web application and one purely to handle job processing. Each could then have its own configuration, at the cost of maintaining two (probably smaller) code bases instead of one.
In that regard, do whatever works best for your situation.
We have two different pods in Kubernetes for our Laravel app,
one running Apache serving on port 80 (CMD /usr/sbin/apache2ctl -D FOREGROUND)
and another running the worker (Laravel Horizon) (CMD php /var/www/artisan horizon)
The issue is that when I check the Horizon dashboard, it says 'Active', and I can see the jobs in the 'Pending Jobs' section, but they never actually execute. They are just sitting there idle.
Now, when I SSH into the pod running Apache and manually run the command 'php artisan horizon', it actually executes all pending jobs.
I have already ensured the following:
Both the pods are connected with the same Redis database service
Horizon Prefix is the same for both the pods
Double check that your APP_ENV matches one of the environments in the horizon.php config; otherwise Horizon will not start any queue workers.
By default only local and production environments are provided:
https://laravel.com/docs/8.x/horizon#environments
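For example, if your pods run with APP_ENV=staging, config/horizon.php needs a matching key (a sketch; the worker settings are illustrative):

'environments' => [
    'production' => [
        'supervisor-1' => [
            'connection' => 'redis',
            'queue' => ['default'],
            'processes' => 10,
            'tries' => 3,
        ],
    ],
    // Without this key, APP_ENV=staging starts no workers at all
    'staging' => [
        'supervisor-1' => [
            'connection' => 'redis',
            'queue' => ['default'],
            'processes' => 3,
            'tries' => 3,
        ],
    ],
],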
After struggling for days, I got the answer to this problem.
When using Redis as a cache, queue, or broadcast broker in a Docker environment, you need to make sure that the following environment variables are defined properly and are identical across all the pods (see the example .env fragment after this list):
CACHE_PREFIX
REDIS_PREFIX
REDIS_CACHE_DB
HORIZON_PREFIX
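For instance, a .env fragment that the apache pod and the worker pod must share (the values are placeholders):

CACHE_PREFIX=myapp_cache
REDIS_PREFIX=myapp_database_
REDIS_CACHE_DB=1
HORIZON_PREFIX=horizon: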
Hope this helps others trying to deploy Laravel apps using Kubernetes and Docker.
In my case, I needed to change my app environment from prod to production:
APP_ENV=production
In my case I had pushed the jobs onto the "emails" queue, but the horizon.php config file didn't list that queue name for supervisor-1. :-)
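In other words, every queue you push to has to appear in the supervisor's queue list in config/horizon.php (a sketch):

'supervisor-1' => [
    'connection' => 'redis',
    // Jobs pushed to a queue missing from this list sit in 'Pending Jobs' forever
    'queue' => ['default', 'emails'],
],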
I just restarted the Redis server:
/etc/init.d/redis-server restart
If jobs or listeners send requests to external services and cannot reach the destination hosts, and the connection timeout or the job timeout is set to a very large value, then jobs can sit in a pending state long enough that it looks like Horizon is not executing them.
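As a sketch of keeping both values sane (the class name, URL, and numbers are illustrative), you can cap the HTTP client's timeout inside the job and the job's own lifetime via the $timeout property:

use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Support\Facades\Http;

class PingExternalService implements ShouldQueue
{
    // The worker fails the job after 120 seconds instead of letting it hang
    public $timeout = 120;

    public function handle()
    {
        // Fail fast on an unreachable host rather than waiting indefinitely
        $response = Http::timeout(5)->get('https://api.example.com/status');
    }
}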
I feel a little silly for asking this question, but I can't seem to find an answer on the internet for this problem. After searching for several hours I figured out that on a Linux server you use Supervisor to run "php artisan queue:listen" (with or without the daemon flag) continuously on your website to handle jobs pushed onto the queue. This is all well and good, but what if I want to do this on a Windows Azure web app? After searching around, the solutions I found were:
Make a cron job to run "php artisan queue:listen" every minute (or every X minutes); I really dislike this solution and wanted to avoid it, especially if the site gets more traffic;
Add a WebJob that runs "php artisan queue:listen" continuously (the problem here is I don't know how to write the script for the WebJob...);
I want to ask for your help in deciding which of these is the correct solution, whether there is a better one, and, if the WebJob is the best option, how to write the script for it? Thanks in advance.
In short, Supervisor is a modern alternative to nohup (no hang up) with a few other bits and pieces tacked on. There are other tools that can keep a task running in the background (as a daemon), and the solution I use for Windows-based projects (very few, to be honest) is Forever, which I discovered via: https://stackoverflow.com/a/18226392/5912664
C:\myprojectroot > forever -c php artisan queue:listen --queue=some_nice_queue --tries=3
How?
Install Node for Windows, then install Forever globally with npm:
C:\myprojectroot > npm install -g forever
If you're stuck getting Node running on Windows, I recommend the Windows package manager, Chocolatey:
https://chocolatey.org/packages?q=node
Be sure to check for any log files that Forever creates, as I once left one long enough to consume 30Gb of disk space!
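One way to keep that in check (using Forever's documented log flags; the paths are placeholders) is to choose the log files yourself so you know exactly what to monitor and rotate:

C:\myprojectroot > forever -a -l C:\logs\worker.log -o C:\logs\out.log -e C:\logs\err.log -c php artisan queue:listen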
For Azure you can add a new WebJob to your web app and upload a .cmd file including a command like this:
php %HOME%\site\wwwroot\artisan queue:work --daemon
and define it as a triggered WebJob with a 0 * * * * * cron frequency (i.e. every minute).
That worked for me.
Best.
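For reference, the schedule for a triggered WebJob can also be deployed in a settings.job file next to the .cmd; Azure reads the six-field cron expression from it (the one below means every minute):

{
  "schedule": "0 * * * * *"
}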
First of all you cannot use a WebJob with Laravel on Azure. The Azure PHP Web App is hosted on Linux. WebJobs do not work with Linux at this moment.
The best way to do cron jobs in Laravel on Azure is to create an Azure Logic App. You use the Recurrence trigger and then an HTTP action to send a POST request to your Laravel Web App. You use this periodic heartbeat to run whatever actions you need to run. Be sure to add authentication to your POST request.
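A minimal sketch of such an endpoint, assuming a shared-secret header and a hypothetical HEARTBEAT_TOKEN environment variable:

// routes/api.php — hypothetical heartbeat endpoint the Logic App POSTs to
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Artisan;
use Illuminate\Support\Facades\Route;

Route::post('/heartbeat', function (Request $request) {
    // Reject calls that lack the shared secret (HEARTBEAT_TOKEN is an assumed env var)
    $token = (string) env('HEARTBEAT_TOKEN');
    abort_unless($token !== '' && hash_equals($token, (string) $request->header('X-Heartbeat-Token')), 403);

    // Do the periodic work here; keep it well within the time limit discussed below
    Artisan::call('schedule:run');

    return response()->json(['status' => 'ok']);
});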
The next problem you will have is that POST will be synchronous so the work you are doing cannot be extensive or your HTTP request will time out or you will reach the time limit on PHP scripts (60 seconds).
The solution is not Laravel Jobs because here again you need something running in the background to process the queues.
The solution is also not PHP threads. The standard Azure PHP Web App does not support PHP Threads. You can of course build your own Web App and enable PHP threads, but this is really swimming upstream.
You simply have to live with synchronous logic. So the work you are doing with the heartbeat should take no more than about 60 seconds.
If you need more extensive processing then you really need to offload it to another place: another Web App, an Azure Function, etc.
But why not do that in the first place? The reason is cost and complexity. If you have something simple...like a daily report...you simply connect the report to the heartbeat and all the facilities for producing the report are right there in Laravel. To separate the daily report into its own container would require setup and the Web App it runs in would incur costs...not worth it in my view for something simple.
I have an application which uses queues in order to send emails.
In a production environment, should I run the queue:listen command on the same server where the application resides, or should I outsource it?
So far, I've been in a dev environment working with two command lines: one for the php artisan serve command to get the application running, and the other for the php artisan queue:listen command. If outsourcing is better for a production environment, would I have to modify my code so I can work with Beanstalkd, Amazon SQS or another?
Reference link:
http://laravelcoding.com/blog/laravel-5-beauty-sending-mail-and-using-queues#14-about-queues
It helped me with my approach for sending notification messages to GCM and to registered mobile devices.
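On the last part of the question: Laravel's Queue API is driver-agnostic, so moving to Beanstalkd or Amazon SQS normally means changing configuration rather than code. A sketch for Laravel 5 (the host is a placeholder):

# .env on the production server
QUEUE_DRIVER=beanstalkd

// config/queue.php — the connection that driver points at
'beanstalkd' => [
    'driver' => 'beanstalkd',
    'host'   => 'queue.internal.example', // placeholder: your Beanstalkd host
    'queue'  => 'default',
    'ttr'    => 60,
],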
So what exactly is php artisan serve doing? I currently have a site up and running on Apache and I am trying to get a websocket framework up for real-time chat. The websocket server is a PHP daemon that runs in the background and listens for events; see the package here.
So I am using the command
php artisan serve brainsocket:start --port=8080
to start the server, and everything works great. However, this only works while I have the terminal open, and I have read in 3-4 SO posts that artisan serve is NOT to be used in production. So how can I run the Laravel package's start function on port 8080 without php artisan serve, so that it persists after I close the terminal?
I'm surprised this hasn't been answered yet.
In production you want to run a real web server like Apache or Nginx.
With Nginx you would use php-fpm as your runtime and you would proxy requests to it.
Here's an example from Nginx's website.
https://www.nginx.com/resources/wiki/start/topics/examples/phpfcgi/
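A minimal sketch of such a server block (the paths and socket location are assumptions; adjust them to your setup):

server {
    listen 80;
    server_name example.com;
    root /var/www/myapp/public;   # Laravel's public directory
    index index.php;

    location / {
        # Send anything that is not a real file to Laravel's front controller
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php/php-fpm.sock;  # or 127.0.0.1:9000, depending on php-fpm
    }
}

The brainsocket:start daemon itself is then kept alive separately (e.g. with Supervisor, as discussed above) rather than by the web server.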
In my application (Laravel 5.1) I have different commands which are fairly simple:
Get ip from RabbitMQ
Attempt to establish a connection to that ip
Update DB entry
Since it can take a while to connect to the IP (up to 30 sec), I am forced to launch multiple instances of that script. For that I've created console commands which are launched by systemd.
Once I went into production I was quite surprised by the amount of memory these scripts were consuming. Each script (as reported by memory_get_usage) was using about 21 MB on startup. Considering that ideally I would need to run about 50-70 of those scripts at the same time, it's quite a big issue for me.
Just as a test I installed a clean Laravel 5.1 project and launched its default artisan inspire command; PHP reported 19 MB.
Is this normal for Laravel, or am I missing something crucial here?