At the moment I have a shared hosting server.
In my app I want to use Laravel's queue system, but I can't keep the command php artisan queue:work running because I can't install Supervisor.
With a bit of effort I could move my app to a VPS, but I don't have much experience with servers and I'm a little scared that my app will be offline for a long time.
Given my lack of server-side experience, I have these questions:
Is it OK to use Laravel queues with cron jobs? Can it break in any way?
Should I upgrade to a VPS just for this problem, or should I stay on this shared hosting server (I have SSH access here)?
Quick answer: you should not use Laravel queues without a process monitor such as Supervisor.
It all depends on what you want to achieve, but an alternative to queues would be the Laravel scheduler: you can trigger the scheduler with a cron task (every minute, for example) and dispatch jobs easily.
And if you really want to use the queues, a solution could be to add your jobs to the queue and process them with a cron task that runs php artisan queue:work every minute, as sketched below. But I would recommend the previous solution.
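For illustration, a cron entry for that second approach might look like this (the path and flags are assumptions; --stop-when-empty exists on newer Laravel versions, while on older ones --once processes a single job and exits):

* * * * * cd /path-to-your-project && php artisan queue:work --stop-when-empty >> /dev/null 2>&1

This way each run drains the queue and exits instead of leaving a long-running worker behind.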
I am using Laravel Forge with Redis as the queue driver.
I have updated my application's push notification code a few times over, but the notifications are still sent as in the old code.
Changing the queue driver to database sends the notifications as per the latest updates. However, when I switched back to Redis, it still sent the old version of the notification.
I have done "FLUSHALL" via redis-cli, but it didn't fix it.
Also I use Laravel Horizon to manage queues.
How can I fix this? Thanks in advance.
Edit: Another thing I noticed was that all code-driven dispatches were queued on Redis. I have listed the solution in the answer below in the hope that it helps someone else.
What I received from Forge support:
Hello,
There might be a worker that's stuck. You can try to run php artisan horizon:purge, which should kill all rogue worker processes, and then restart the daemon. It's advised to run the purge command in your deployment script to make sure all stale processes are killed.
-- Mohamed Said forge#laravel.com
However, this is how I got it sorted:
php artisan horizon:terminate
php artisan queue:restart
And then the code was working properly.
Stop Redis and stop the Horizon workers. Start Redis and then start the Horizon workers again.
But before all of this, clear the cache.
I had a similar problem, and in my case it was just a matter of restarting all the services.
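For what it's worth, a sketch of that sequence on a systemd-based server (the Redis service name and the way Horizon is started may differ on your setup):

php artisan cache:clear        # clear the cache first
php artisan horizon:terminate  # stop the Horizon workers gracefully
sudo systemctl restart redis   # restart Redis
php artisan horizon            # start the workers again (or let Supervisor restart them)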
I currently have a couple of sites set up on a server, and they all use beanstalkd for their queues. Some of these sites are staging sites. While I know it would be ideal to have the staging sites on another server, it doesn't make financial sense to spin up another server for them in this situation.
I recently ran into a very confusing issue while deploying a staging site, which included reseeding the database. I have an observer set up on some model saves that triggers a job to be queued, which usually ends up sending out an email. The staging site does not actually have any queue workers set up to run them, but the production site (on the same server) does have queue workers running.
What appeared to be happening is that the staging site was generating the queue jobs, and the production site was running them! This caused random users of mine to be spammed with email, since the job would serialize the model from staging, and when it was unserialized while the job ran, it matched up with an actual production user.
It seems like it would be very common to have multiple sites on one server running queues, so I'm curious whether there is a way to avoid this issue. Elasticsearch has the concept of a 'cluster', so you can run multiple search 'clusters' on one server. I'm curious whether beanstalkd or Redis or any other queue provider has this ability, so we don't have crosstalk between completely separate websites.
Thanks!
Beanstalkd has the concept of tubes:
Tubes are job queues.
A common use case of tubes would be to have completely different sets of producers and consumers running through a single beanstalk instance such that a given consumer will not know what to do with jobs produced by some of the producers. Producer1 can enqueue jobs into Tube1 and Consumer1 can pick up the jobs from there completely independently of what Producer2 and Consumer2 are doing with Tube2, for example.
For example, if you're using pheanstalk, the producer will call useTube():
$pheanstalk = new Pheanstalk('127.0.0.1'); // connect to the beanstalkd host
$pheanstalk->useTube('foo')->put(...);
And the worker will call watch():
$pheanstalk = new Pheanstalk('127.0.0.1');
$job = $pheanstalk->watch('foo')->ignore('default')->reserve(); // block until a job arrives on 'foo'
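In a Laravel app the tube is simply the queue name on the beanstalkd connection, so a sketch of keeping two sites apart (the names and env var here are illustrative) is to give each site its own queue in config/queue.php:

'beanstalkd' => [
    'driver' => 'beanstalkd',
    'host' => 'localhost',
    'queue' => env('QUEUE_NAME', 'mysite_production'), // a distinct tube per site
],

Each site's worker then watches only its own tube: php artisan queue:work beanstalkd --queue=mysite_production.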
This is an old question, but have you tried running multiple beanstalkd daemons? Simply bind each one to a different port.
Example:
beanstalkd -p 11301 &
Use & to fork it into the background.
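Each site then points its client at its own daemon; with pheanstalk, for example (the host and ports shown are assumptions):

$production = new Pheanstalk('127.0.0.1');        // default port 11300
$staging    = new Pheanstalk('127.0.0.1', 11301); // the second daemon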
I am currently working on a simple reminder Laravel app for Heroku. I've written my API and everything works well. Now I want to run a scheduled task on Heroku.
Basically I'm looking for something to execute the following command on a separate dyno:
php EmailScheduler.php
or
php artisan schedule:emails
This command should be a long-running process where, for example, each minute the same task is executed over and over.
In this task I want to query my database, get all reminders that are due, and send them as emails, so I have to be able to access my application's business logic, like Eloquent models.
I know about the Scheduler add-on and certain "cron" solutions, but I would like a PHP-only solution: just a long-running script or task that might sleep most of the time and wake up each minute or so.
In Ruby (Rails), for example, I could require the environment, thereby gaining access to my business logic, and then fire up a timer task using the concurrent-ruby library.
I already tried to require the bootstrap/autoload.php and bootstrap/app.php files, but it seems like I have to do more than just this, like booting the application.
How can I solve this using the Laravel framework and possibly other third-party libraries?
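For what it's worth, "booting the application" at that point means roughly the following in Laravel 5.x (a sketch, assuming the stock bootstrap files; App\Reminder is a hypothetical model):

<?php
// worker.php, in the project root
require __DIR__.'/bootstrap/autoload.php';
$app = require_once __DIR__.'/bootstrap/app.php';

// Bootstrapping the console kernel brings up the full framework,
// so Eloquent models, Mail, etc. become usable.
$kernel = $app->make(Illuminate\Contracts\Console\Kernel::class);
$kernel->bootstrap();

while (true) {
    // e.g. query App\Reminder for due reminders and mail them here
    sleep(60);
}

A Procfile line such as worker: php worker.php would then run this on a separate dyno.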
Task scheduling in Laravel is done by making a cron job. By defining a cron job, one can schedule a task to be executed at a defined period.
Follow the official Laravel documentation for scheduling: https://laravel.com/docs/5.3/scheduling
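The crontab entry from those docs, which fires the scheduler every minute, looks like this (adjust the project path to your setup):

* * * * * cd /path-to-your-project && php artisan schedule:run >> /dev/null 2>&1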
In 2017, you indeed couldn't use cron on Heroku. As of 2021, you can use Cron To Go, a Heroku add-on, to set up a cron job that runs on one-off dynos in Heroku. Simply follow the guide here to run Laravel's scheduler once a minute.
I feel a little silly for asking this question, but I can't seem to find an answer on the internet for this problem. After searching for several hours I figured out that on a Linux server you use Supervisor to run php artisan queue:listen (either with or without daemon) continuously on your website to handle jobs pushed onto the queue. This is all well and good, but what if I want to do this on a Windows Azure web app? After searching around, the solutions I found were:
Make a cron job to run php artisan queue:listen every minute (or every X minutes); I really dislike this solution and wanted to avoid it, especially if the site gets more traffic;
Add a WebJob that runs php artisan queue:listen continuously (the problem here is I don't know how to write the script for the WebJob...);
I want to ask you guys for help in knowing which of these is the correct solution, whether there is a better one, and, if the WebJob is the best one, how do I write the script for it? Thanks in advance.
In short, Supervisor is a modern alternative to nohup (no hang up) with a few other bits and pieces tacked on. There are other tools that can keep a task running in the background (as a daemon), and the solution I use for Windows-based projects (very few, to be honest) is Forever, which I discovered via: https://stackoverflow.com/a/18226392/5912664
C:\myprojectroot > forever -c php artisan queue:listen --queue=some_nice_queue --tries=3
How?
Install Node for Windows, then install Forever with npm:
C:\myprojectroot > npm install -g forever
If you're stuck getting Node running on Windows, I recommend the Windows package manager Chocolatey:
https://chocolatey.org/packages?q=node
Be sure to check for any logfiles that Forever creates, as I had left one long enough to consume 30Gb of disk space!
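If you want those logs somewhere predictable, Forever accepts explicit log paths (the paths here are illustrative):

C:\myprojectroot > forever start -a -l C:\logs\forever.log -o C:\logs\out.log -e C:\logs\err.log -c php artisan queue:listen --queue=some_nice_queue --tries=3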
For Azure you can add a new WebJob to your web app and upload a .cmd file containing a command like this:
php %HOME%\site\wwwroot\artisan queue:work --daemon
and define it as a triggered WebJob with a 0 * * * * * cron frequency.
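Alternatively, the schedule can be declared next to the script in a settings.job file rather than through the portal; the same every-minute schedule would be:

{ "schedule": "0 * * * * *" }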
That worked for me.
Best.
First of all, you cannot use a WebJob with Laravel on Azure. The Azure PHP Web App is hosted on Linux, and WebJobs do not work with Linux at this moment.
The best way to do cron jobs in Laravel on Azure is to create an Azure Logic App. You use the Recurrence trigger and then an HTTP action to send a POST request to your Laravel web app. You use this periodic heartbeat to run whatever actions you need to run. Be sure to add authentication to your POST request.
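A sketch of the receiving end in Laravel (the route path, header name, and HEARTBEAT_TOKEN variable are assumptions for illustration):

// routes/api.php
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Artisan;
use Illuminate\Support\Facades\Route;

Route::post('/heartbeat', function (Request $request) {
    // Reject requests that lack the shared secret sent by the Logic App
    abort_unless(hash_equals((string) env('HEARTBEAT_TOKEN'), (string) $request->header('X-Heartbeat-Token')), 403);

    Artisan::call('schedule:run'); // do the periodic work synchronously
    return response('ok');
});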
The next problem you will have is that the POST will be synchronous, so the work you are doing cannot be extensive, or your HTTP request will time out or you will hit the time limit on PHP scripts (60 seconds).
The solution is not Laravel jobs, because here again you need something running in the background to process the queues.
The solution is also not PHP threads. The standard Azure PHP Web App does not support PHP threads. You can of course build your own web app and enable PHP threads, but this is really swimming upstream.
You simply have to live with synchronous logic. So the work you are doing with the heartbeat should take no more than about 60 seconds.
If you need more extensive processing then you really need to off load it to another place: another Web App, an Azure Function, etc.
But why not do that in the first place? The reason is cost and complexity. If you have something simple...like a daily report...you simply connect the report to the heartbeat, and all the facilities for producing the report are right there in Laravel. Separating the daily report into its own container would require setup, and the Web App it runs in would incur costs...not worth it in my view for something simple.
I have a PHP project (Symfony2) that uses RabbitMQ. I use it as a simple message queue to delay some jobs (sending mails, importing data from APIs). The consumers run on the webserver and their code is part of the webserver repo; they are deployed at the same time as the web code.
The questions are:
How do I start the consumers as daemons and make sure they always run?
When deploying the app, how do I shut down consumers "gracefully" so that they stop consuming but finish processing the message they started?
If it's of any importance, I use Capifony for deployment.
Thank you!
It may be worth looking at something like supervisord, which is written in Python. I've used it before for running workers for Gearman, which is a job queue that fulfils a similar role to the way you're using RabbitMQ.
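For example, a supervisord program entry for one consumer might look like this (the program name, paths, and the rabbitmq:consumer command from the common RabbitMqBundle are assumptions):

[program:mailer_consumer]
command=php /var/www/app/app/console rabbitmq:consumer -m 100 mailer
process_name=%(program_name)s_%(process_num)02d
numprocs=2
autostart=true
autorestart=true
stopsignal=TERM
stopwaitsecs=60
user=www-data

On deploy, supervisorctl restart mailer_consumer sends SIGTERM and waits up to stopwaitsecs before killing the process, and the -m 100 flag makes each consumer exit cleanly after 100 messages so restarted workers pick up fresh code; together these cover the graceful-shutdown requirement.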