So basically I couldn't find my answer elsewhere. I'm having an issue with Laravel 5.x where I have models called 'Bumpers' which can have a certain cron-time value (say 'every 25 minutes').
Many bumpers can have the same cron-time, meaning that at a certain time many of these bumpers may be executed by the Laravel Scheduler. However, I need to run these bumpers 10 seconds apart from each other. So if 10 bumpers are triggered at 8:00:00 PM, I need them to run at 10-second intervals until all bumpers have been executed.
I've tried adding sleep() to the scheduled call: $schedule->call(function () { executeStuff(); sleep(10); });. That worked for a bit, but unfortunately it only works for queues not exceeding 60 seconds, because after that the Scheduler is run again:
// This function is called on "artisan schedule:run"
protected function schedule(Schedule $schedule)
{
    // Get only the active bumpers.
    $bumpers = Bumper::where('status', 1)->get();

    // Register all bumpers through their occurrence value.
    foreach ($bumpers as $bumper) {
        // Check for member/admin. All other bumpers are neglected.
        if ($bumper->user->canBump()) {
            $schedule->call(function () use ($bumper) {
                sleep(10);
                $bumper->post();
            })->cron($bumper->occurrence);
        }
    }
}
Any solutions?
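One way around that 60-second scheduler window is to stop sleeping inside the closure and instead queue one delayed job per bumper. Below is a minimal sketch, assuming Laravel 5.5+, a running queue worker, and a hypothetical PostBumperJob whose handle() simply calls $bumper->post():

protected function schedule(Schedule $schedule)
{
    // Group active bumpers by cron expression so bumpers sharing a
    // schedule can be staggered relative to each other.
    $groups = Bumper::where('status', 1)->get()->groupBy('occurrence');

    foreach ($groups as $occurrence => $group) {
        $schedule->call(function () use ($group) {
            // Keep only bumpers whose owner may bump, then re-index.
            $eligible = $group->filter(function ($bumper) {
                return $bumper->user->canBump();
            })->values();

            foreach ($eligible as $i => $bumper) {
                // PostBumperJob is hypothetical; the growing delay staggers
                // execution by 0s, 10s, 20s, ... no matter how long the
                // whole batch takes.
                dispatch(new PostBumperJob($bumper))
                    ->delay(now()->addSeconds($i * 10));
            }
        })->cron($occurrence);
    }
}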
(Note: I am a beginner in using AWS SQS for queues.) I have a function that inserts tens of thousands of records into an Excel file, saves the Excel file to AWS S3 and displays it in a frontend datatable. This function executes through an AWS SQS queue with Supervisor as the worker, in a Laravel 9 webapp.
The error that I am getting is:
Job\SomeJob has been attempted too many times or run too long. The
job may have previously timed out. {"exception":"[object]
(Illuminate\Queue\MaxAttemptsExceededException(code: 0)
(Symfony\Component\ErrorHandler\Error\FatalError(code: 0): Maximum
execution time of 60 seconds exceeded at
/var/app/current/vendor/laravel/framework/src/Illuminate/Collections/Arr.php:314)
I have no clue why I am getting this error, because the job is actually successful. The error shows up in the failed_jobs table, and I have a function where, if there are any failed_jobs, a script emails the manager; I believe you guys know what happens after that.
What I have tried is to Log::info() before and after each step of the process to find out which one is causing the error.
My Supervisor setting for SQS:
[program:sqs-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/app/current/artisan queue:work sqs --sleep=3 --tries=1 --timeout=1800
autostart=true
autorestart=true
user=webapp
numprocs=1
redirect_stderr=true
stdout_logfile=/var/www/html/worker.log
How I dispatch the job:
class SomeOtherController extends Controller
{
    public function show()
    {
        dispatch(new SomeJob($id));
        return 'job run';
    }
}
The job content is:
class SomeJob implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public $timeout = 1800;
    public $id;

    public function __construct($id)
    {
        $this->id = $id;
    }

    public function handle()
    {
        Log::info('start job');
        $apps = Application::where('client_id', $this->id)->get(); // 15000+ records

        Log::info('start foreach');
        $count = 0; // count so that every 100 records we Log::info()
        foreach ($apps as $key => $app) {
            if ($count == 100) {
                Log::info('This is the '.$key.' record');
                $count = 0;
            }
            // The actual job is much more lengthy and complicated.
            $list = new ApplicationTable();
            $list->client_id = $app->client_id;
            $list->name = $app->name;
            $list->booking = $app->booking->name;
            $list->price = $app->price + $app->discount + $app->gst;
            $list->save();
            $count++;
        }
        Log::info('end foreach');

        // Some process to generate and store the Excel file to S3.
        $lists = ApplicationTable::where('client_id', '=', $this->id)->get();
        (new ReportExport($lists))->store('application_report');
        $s3 = Storage::disk('s3');
        $s3_path = 'ApplicationReport';
        $s3->put($s3_path, file_get_contents('application_report'));

        // Remove the local copy of the generated file.
        unlink('application_report');

        $user_email = $apps->first()->user->email;
        if (isset($user_email)) {
            \Mail::to($user_email)->send(new ApplicationReportMail($this->id));
        }
        Log::info('end job');
        return true;
    }
}
What I am expecting is that the log will show all processes and end with 'end job' without any error. But what I am getting is:
[20XX-XX-XX 12:56:34] start job
[20XX-XX-XX 12:56:36] start foreach
[20XX-XX-XX 12:56:41] This is the 100 record
[20XX-XX-XX 12:56:47] This is the 200 record
[20XX-XX-XX 12:56:52] This is the 300 record
[20XX-XX-XX 12:56:57] This is the 400 record
[20XX-XX-XX 12:57:04] local.ERROR: App\Jobs\SomeJob has been attempted too many times or run too long. The job may have previously timed out. {"exception":"[object] (Illuminate\\Queue\\MaxAttemptsExceededException(code: 0): App\\Jobs\\SomeJob has been attempted too many times or run too long. The job may have previously timed out. at /var/app/current/vendor/laravel/framework/src/Illuminate/Queue/Worker.php:746)"
[20XX-XX-XX 12:57:06] This is the 500 record
[20XX-XX-XX 12:57:10] This is the 600 record
...
[20XX-XX-XX 13:09:46] This is the 11400 record
[20XX-XX-XX 13:09:52] This is the 11500 record
[20XX-XX-XX 13:09:53] Maximum execution time of 60 seconds exceeded {"userId":144,"exception":"[object] (Symfony\\Component\\ErrorHandler\\Error\\FatalError(code: 0): Maximum execution time of 60 seconds exceeded at /var/app/current/vendor/laravel/framework/src/Illuminate/Collections/Arr.php:314)"
[20XX-XX-XX 13:16:20] local.INFO: end foreach
[20XX-XX-XX 13:16:23] local.INFO: end job
As you can see from the logs, the job was running and after roughly 30-60 seconds Laravel throws the MaxAttemptsExceededException. Then at 13:09:53 it gets another FatalError exception saying the 60-second execution time was exceeded, and the log stops. It continues after 13:16:20 to finish the process...
For anyone curious, this is the queue config for the job recorded inside the failed_jobs table:
...,"maxTries":null,"maxExceptions":null,"failOnTimeout":false,"backoff":null,"timeout":1800,"retryUntil":null,...
I really appreciate any input and clarification on this matter. I have searched for a solution but with no success.
(Symfony\Component\ErrorHandler\Error\FatalError(code: 0): Maximum
execution time of 60 seconds exceeded at
/var/app/current/vendor/laravel/framework/src/Illuminate/Collections/Arr.php:314)
Firstly, regarding the above error, you need to increase your 'maximum execution time' PHP configuration.
Fatal error: Maximum execution time of 30 seconds exceeded
That can be resolved by adding set_time_limit(1800); at the beginning of your job's SomeJob::handle(...) method body.
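A minimal sketch of that change (1800 mirrors the worker's --timeout; the rest of handle() stays as it was):

public function handle()
{
    // Lift PHP's execution time limit for this job's process; this is
    // the limit behind the 'Maximum execution time of 60 seconds
    // exceeded' fatal error.
    set_time_limit(1800);

    Log::info('start job');
    // ... the rest of the job, unchanged
}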
Job\SomeJob has been attempted too many times or run too long. The
job may have previously timed out. {"exception":"[object]
(Illuminate\Queue\MaxAttemptsExceededException(code: 0)
Secondly, regarding the above error: based on Job Expirations & Timeouts and Job has been attempted too many times or run too long, you need to increase the retry_after value to the maximum number of seconds your jobs should reasonably take to complete processing.
Amazon SQS will retry the job based on the Default Visibility Timeout which is managed within the AWS console. The default visibility timeout for a message is 30 seconds. The minimum is 0 seconds. The maximum is 12 hours.
Increasing Amazon SQS's default visibility timeout to a value a couple of seconds more than your queue timeout configuration should resolve that issue (for example, with --timeout=1800, a visibility timeout of at least 1830 seconds). If your --timeout option is longer than your retry_after or SQS default visibility timeout value, your jobs may be processed twice.
Refer to: Configuring queue parameters (console) for information about configuring 'Amazon SQS's visibility timeout' for a queue using the console.
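For context, the SQS connection in config/queue.php looks roughly like this (values illustrative); note that, unlike the database or Redis drivers, it has no retry_after key, since the SQS visibility timeout plays that role:

'sqs' => [
    'driver' => 'sqs',
    'key' => env('AWS_ACCESS_KEY_ID'),
    'secret' => env('AWS_SECRET_ACCESS_KEY'),
    'prefix' => env('SQS_PREFIX', 'https://sqs.us-east-1.amazonaws.com/your-account-id'),
    'queue' => env('SQS_QUEUE', 'default'),
    'region' => env('AWS_DEFAULT_REGION', 'us-east-1'),
],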
Lastly, ensure that you're actually using the right queue connection by checking if QUEUE_CONNECTION=sqs is set in your .env file.
I have used Carbon, but it's not working according to my needs.
What I want is to apply a check on time in minutes: after every 15 minutes, do some work, then reset the minutes to zero.
Like:
if (time > 15) {
    do this...
    reset to zero
} else {
    do this
}
You need the task scheduler or an Artisan command.
The simplest form is to call your controller function every 15 minutes; go to app/Console/Kernel.php:
protected function schedule(Schedule $schedule)
{
    $schedule->call('yourcontroller@yourfunction')->everyFifteenMinutes();
}
And you run this: php artisan schedule:run
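In production the scheduler is normally driven by a single cron entry that invokes it every minute (as shown in the Laravel docs):

* * * * * cd /path-to-your-project && php artisan schedule:run >> /dev/null 2>&1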
You can also make a command, but this is more complex.
Here is the official documentation: https://laravel.com/docs/7.x/scheduling
I have a Laravel cron issue. In the console Kernel I have defined a job which hits the Rollovercron.php file every 10 minutes, and every time it hits it passes one country. At least 100 countries are defined in an array and are passed one by one to Rollovercron.php by a foreach loop. Rollovercron.php takes a minimum of 2 hours to run for a single country.
I have multiple issues with this cron job:
The 100 elements in the array are not fetched one by one: I can see the country 'GH' (Ghana) has run 5 times in a row while many of the countries are skipped.
Whenever I hit the country-missing issue I run composer update and clear the cache.
I want my cron to run smoothly and fetch all countries; not even a single country should be missed, and I should not need to run composer update for this all the time.
Please help me with this; I have been struggling with it for many months.
Below is the Kernel.php file:
<?php

namespace App\Console;

use Illuminate\Console\Scheduling\Schedule;
use Illuminate\Foundation\Console\Kernel as ConsoleKernel;
use DB;

class Kernel extends ConsoleKernel
{
    /**
     * The Artisan commands provided by your application.
     *
     * @var array
     */
    protected $commands = [
        \App\Console\Commands\preAlert::class,
        \App\Console\Commands\blCron::class,
        \App\Console\Commands\mainRollover::class,
        \App\Console\Commands\refilingSync::class,
        \App\Console\Commands\TestCommand::class,
        \App\Console\Commands\rollOverCron::class,
        \App\Console\Commands\FrontPageRedis::class,
        \App\Console\Commands\filingStatusRejectionQueue::class,
        \App\Console\Commands\VesselDashboardRedis::class,
        \App\Console\Commands\Bookingcountupdate::class,
        // \App\Console\Commands\Voyagetwovisit::class,
    ];

    /**
     * Define the application's command schedule.
     *
     * @param \Illuminate\Console\Scheduling\Schedule $schedule
     * @return void
     */
    protected function schedule(Schedule $schedule)
    {
        $countrylist = array('NL','AR','CL','EC','DE','PH','ID','TT','JM','KR','BE','VN','US','BR','CM','MG','ZA','MU','RU','DO','GT','HN','SV','PR','SN','TN','SI','CI','CR','GM','GN','GY','HR','LC','LR','MR','UY','KH','BD','TH','JP','MM','AT','IE','CH','LB','PY','KE','YT','TZ','MZ','NA','GQ','ME');

        foreach ($countrylist as $country) {
            $schedule->command('rollOverCron:send ' . $country)
                ->everyTenMinutes()
                ->withoutOverlapping();
        }

        foreach ($countrylist as $country) {
            $schedule->command('mainRollover:send ' . $country)
                ->daily()
                ->withoutOverlapping();
        }

        $schedule->command('filingStatusRejectionQueue')
            ->hourly()
            ->withoutOverlapping();

        $schedule->command('Bookingcountupdate')
            ->everyTenMinutes()
            ->withoutOverlapping();

        $schedule->command('preAlert')
            ->hourly()
            ->withoutOverlapping();
    }

    /**
     * Register the Closure based commands for the application.
     *
     * @return void
     */
    protected function commands()
    {
        require base_path('routes/console.php');
    }
}
Knowing how Laravel scheduling works helps, so you can debug it when it doesn't work as expected. This does involve diving into the source.
You invoke command() on the scheduler; this returns an event.
Let's check how Laravel decides what defines overlapping: we see the lock expires after 1440 minutes, aka 24 hours.
So after one day, if the scheduled items have not run, these scheduled items just stop being scheduled.
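For reference, newer Laravel versions let you override that lock lifetime per task; the argument to withoutOverlapping() is in minutes, 1440 being the default (a sketch):

$schedule->command('rollOverCron:send ' . $country)
    ->everyTenMinutes()
    ->withoutOverlapping(180); // lock expires after 3 hours instead of 24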
We see that a mutex is being used here. Let's see where it comes from; it seems it's provided in the constructor.
So let's see which mutex is being provided. In the exec and the call functions, the mutex defined in the Scheduler constructor is used.
The mutex used there is an interface, probably used as a Facade, and the real implementation is most likely in CacheSchedulingMutex, which creates a mutex id using the mutexName from the event and the current time in hours and minutes.
Looking at the mutexName, we see that the id consists of the cron expression and the command combined.
To summarise: all events called in one Scheduler function share the same mutex that is used for checking that method calls don't overlap, but the mutex generates a unique identifier for each command, including differing parameters, and based on the time.
Your scheduled jobs will expire after 24 hours, which means that with jobs that take 2 hours to complete, you'll get about 12 jobs completed in a day; more if the jobs are small, fewer if the jobs take longer. This is because PHP is a single-threaded process by default.
First task 1, then task 2, then task 3, etc. This means that if each task takes 2 hours, then after 12 tasks the queued jobs expire, because the schedule has been running for 1440 minutes; then the new jobs are scheduled and it starts again from the top.
Luckily there is a way to make sure they run simultaneously.
I suggest you add ->runInBackground() to your scheduling calls.
$schedule->command('rollOverCron:send ' . $country)
    ->everyTenMinutes()
    ->withoutOverlapping()
    ->runInBackground()
    ->emailOutputTo(['ext.amourya@cma-cgm.com', 'EXT.KKURANKAR@cma-cgm.com']);
I want to create a queue (Amazon SQS) that only runs jobs every X seconds. So if suddenly 50 jobs are submitted, they end up in the queue. The queue listener then pulls a job, does something and waits X seconds. After that, the next job is pulled. Another X-second pause. And so on.
For the queue listener, the sleep option only determines how long the worker will "sleep" if there are no new jobs available. So it will only sleep if there is nothing in the queue.
Or should I just put in a pause(x) in my PHP code?
[edit] I just tested the sleep method with a FIFO and a standard AWS SQS queue, and this messes up the whole queue. Suddenly jobs are (successfully) resubmitted 3 times, after which they go into the failed state. Moreover, the delay given in my code (3-4 min) was ignored; instead a one-minute delay was applied.
<?php

namespace App\Jobs;

use App\City;

class RetrieveStations extends Job
{
    protected $cities;

    /**
     * Create a new job instance.
     *
     * @return void
     */
    public function __construct($cities)
    {
        $this->cities = $cities;
    }

    /**
     * Execute the job.
     *
     * @return void
     */
    public function handle()
    {
        // code here
        doSomething();
        sleep(X);
    }
}
I have the exact same problem to solve. I'm using Laravel 5.8 and I don't see how I can get the queue worker to wait a fixed period between jobs.
I'm now thinking of using a scheduled task to handle this. I can schedule a task to run, say, every 5 minutes and run the following artisan command:
$schedule->command('queue:work --queue=emails --once')->everyFiveMinutes();
This will take one job from the queue and run it. Unfortunately, there's not much more granular control over how often a job is processed.
Exactly, you need to make your PHP code sleep; there is no other way.
See PHP's sleep(): https://www.php.net/manual/en/function.sleep.php
I have a simple web application that is written with the Laravel 4.2 framework. I have configured the Laravel queue component to add new queue items to a locally running beanstalkd server.
Essentially, there is a POST route that will add an item to the beanstalkd tube.
I then have supervisord set up to run artisan queue:listen as three separate processes. The issue that I am seeing is that the different queue:listen processes end up spawning anywhere from one to three queue:worker processes for just one inserted job.
The end result being that one job inserted into the queue is sometimes being processed by multiple workers at the same time, something I am obviously trying to avoid.
The job code is relatively simple:
<?php

use App\Repositories\URLRepository;

class ProcessDataJob {

    private $urls;

    public function __construct(URLRepository $urls)
    {
        $this->urls = $urls;
    }

    public function fire($job, $data)
    {
        if (!isset($data['post']) || !is_array($data['post'])) {
            Log::error('[Job #'.$job->getJobId().'] $input was empty inside CreateAuditJob. Deleting job. Quitting. Bye!');
            $job->delete();
            return false;
        }
        $input = $data['post'];

        //
        // ... code that will take a few hours to run.
        //

        $job->delete();
        Log::info('[Job #'.$job->getJobId().'] ProcessDataJob was successful, deleting the job!');
        return true;
    }
}
The fun part is that most of the (duplicated) queue workers fail when deleting the job, with this left in the error log:
exception 'Pheanstalk_Exception_ServerException' with message 'Job 3248 NOT_FOUND: does not exist or is not reserved by client'
The ttr (Time to Run) is set to 172,800 seconds (or 48 hours), which is much larger than the time it would take for the job to complete.
What's the job's time_to_run when queued? If running the job takes longer than time_to_run seconds, the job is automatically re-queued and becomes eligible to be run by the next worker.
A reserved job has time_to_run seconds to be deleted, released or touched. Calling touch() restarts the timeout timer, so workers can use it to give themselves more time to finish. Or use a large enough value when queueing.
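A sketch of the touch() route under Laravel 4.2's beanstalkd driver; $job is the BeanstalkdJob handed to fire(), and the chunking helpers are hypothetical:

public function fire($job, $data)
{
    // The underlying Pheanstalk connection is reachable through the
    // queue connection when using the beanstalkd driver.
    $pheanstalk = Queue::connection('beanstalkd')->getPheanstalk();

    foreach ($this->workUnits($data) as $unit) { // hypothetical chunks
        $this->process($unit);                   // hypothetical unit of work
        // Restart the job's TTR timer so beanstalkd does not hand the
        // job to another worker mid-run.
        $pheanstalk->touch($job->getPheanstalkJob());
    }

    $job->delete();
}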
I've found the beanstalkd protocol document helpful:
https://github.com/kr/beanstalkd/blob/master/doc/protocol.md
Since you are running with Laravel, verify your queue.php configuration file.
Change the ttr value from 60 (default) to something else that is better for you.
'beanstalkd' => array(
    'driver' => 'beanstalkd',
    'host'   => 'localhost',
    'queue'  => 'default',
    'ttr'    => 600, // Example
),