Our jobs collect data from external APIs. If one of the jobs errors out because we unexpectedly reach the API's daily limit (i.e. HTTP status 429), it is pointless to retry the job, or even to process any similar jobs, until the next day.
Is there a way to prevent the current job from being attempted again after a specific event occurs? Ideally I would be able, for example, to set a flag in the failed job so I can check it on the next attempt (as suggested here).
Edit: I incorrectly referred to the jobs I didn't want to retry as "failed"; what I meant was jobs in which an error (exception) occurs during the API call. I have edited the question.
It turns out the solution I was looking for is obvious: just fail() the job:
public function handle()
{
    try {
        // execute job stuff and throw a custom
        // exception if a certain event occurs
    } catch (MyCustomException $e) {
        $this->fail($e); // it won't be put back in the queue
    }
}
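The question also mentions skipping similar jobs until the next day by checking a flag, which fail() alone does not cover. A minimal sketch of one way to do that, assuming a cache key name of my own choosing ('api-daily-limit-reached') and Laravel's Cache facade:

// at the top of the job class: use Illuminate\Support\Facades\Cache;
public function handle()
{
    // Skip early if an earlier job already hit the daily limit today.
    if (Cache::has('api-daily-limit-reached')) {
        $this->delete(); // drop the job without marking it as failed
        return;
    }

    try {
        // execute job stuff and throw a custom
        // exception if the API returns HTTP 429
    } catch (MyCustomException $e) {
        // Remember the limit until midnight so similar jobs can skip themselves.
        Cache::put('api-daily-limit-reached', true, now()->endOfDay());

        $this->fail($e); // it won't be put back in the queue
    }
}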
When running a custom artisan command which dispatches a job, if any exception occurs in the job it should reach Handler.php, but it doesn't get called.
I need to send an email for each type of exception caught in the job.
Edit:
I see, so you want the jobs to retry a certain number of times, and if they fail you want to send an email with the specific reason.
In this case you should not catch the exceptions inside your job with try-catch blocks. Instead, create the failed_jobs table using php artisan queue:failed-table (followed by php artisan migrate) and handle the exceptions inside a failed() method, which you can add to the job like this:
public function failed(Throwable $exception)
{
// Send user notification of failure, etc...
}
For further information, see the Laravel docs:
https://laravel.com/docs/8.x/queues#dealing-with-failed-jobs
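For the "email per exception type" requirement mentioned above, the failed() method is a natural place to branch on the exception class. A minimal sketch, where the exception and Mailable class names are made up for illustration:

// at the top of the job class: use Illuminate\Support\Facades\Mail;
public function failed(Throwable $exception)
{
    // Hypothetical Mailables, one per exception type you care about.
    if ($exception instanceof ApiLimitReachedException) {
        $mailable = new ApiLimitMail($exception);
    } elseif ($exception instanceof SaveResponseException) {
        $mailable = new SaveFailureMail($exception);
    } else {
        $mailable = new GenericJobFailureMail($exception);
    }

    Mail::to('admin@example.com')->send($mailable);
}

Since failed() only runs once the job has permanently failed (attempts exhausted or fail() called), this sends one email per job rather than one per retry.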
In order for Google Cloud Tasks to automatically re-queue a task, I need my Laravel 7 app to return a 500 error, but everything short of a call to abort() seems to want to return a 200. I know that this ought to work:
return response('No dice son, you gotta work-a late.', 500);
...but no, the task still receives 200 and thus it's deleted from the queue as though it had succeeded.
The reason I'd prefer to avoid abort('Fall down go boom') is that it also raises an exception in Stackdriver, and that's unnecessary since I'm returning this particular error when a third party's API fails to provide data. In the event of errors on my side I'm killing the job outright.
The flow of my code is: I raise a custom exception when the third-party API returns null, then catch that exception, and in the handler I call another method that does the work of cleaning up the job, which is where I intended to return the 500... though right now I'm calling abort('No worky').
Is there some minutia in the docs that I managed to overlook?
In order to send an HTTP 500, this usually would be:
return abort(500, 'Internal Server Error');
And the method usually doesn't matter, as it is just a helper which returns a Response.
Changing the error message does not change the fact that it's an internal server error, and when returning a 500 it is probably normal to have it logged as an error.
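One possible explanation for response('...', 500) appearing not to work, given that the 500 was supposed to come from a nested cleanup method: a Response object only takes effect if it is returned all the way up from the controller action, whereas abort() works from anywhere because it throws. A minimal sketch, where handleTask(), fetchFromThirdParty(), cleanUpJob() and ThirdPartyEmptyResponseException are hypothetical names standing in for the code described above:

public function handleTask()
{
    try {
        $data = $this->fetchFromThirdParty();
        // ... process $data ...
    } catch (ThirdPartyEmptyResponseException $e) {
        $this->cleanUpJob();

        // The Response has to be returned from the controller action itself.
        // Building it inside cleanUpJob() and discarding the return value
        // leaves Laravel free to send the default 200; abort() only "works"
        // from anywhere because it throws instead of returning.
        return response('Third-party API returned no data.', 500);
    }

    return response('OK', 200);
}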
I call an API to send an SMS and save its response, using Redis::throttle to limit the rate to 5 calls every 60s with:
Redis::throttle('throttle:sms')->allow(5)->every(60)->then(function () {
    // -> API call
    // -> save response
}, function ($error) { // could not obtain lock
    return $this->release(60); // put back in the queue in 60s
});
I didn't specify any $tries, because if the lock cannot be obtained it counts as a try, and if I process a long queue and the lock cannot be obtained many times the job will fail without any "real" error.
But I don't want the job to be processed forever: if there is a real error (like the response not being saved) it should fail without a retry, especially if the error happens after the API call, as a retry would send another SMS (which I definitely don't want).
What I've tried:
Redis::throttle('throttle')->allow(5)->every(60)->then(function () {
    try {
        $response = MyAPICall();
        $test = 8 / 0;
        saveResponse($response);
    } catch (\LimiterTimeoutException $e) {
        throw $e;
    } catch (Exception $e) {
        Log::error($e);
        $this->fail($exception = null);
        //$this->delete();
    }
}, function ($error) { // could not obtain lock
    Log::info($error);
    return $this->release(60); // put back in the queue in 60s
});
If there is an exception because the lock cannot be obtained, I throw it back to let the queue handle it, but if it's another exception, I log the error and fail or delete the job.
But it's not working with either delete() or fail(): the job is always retried.
How can I remove the job if there is an exception other than the LimiterTimeoutException?
I was missing a "\" before Exception in my catch. Here is the fixed code:
// requires: use Illuminate\Contracts\Redis\LimiterTimeoutException; at the top of the file
Redis::throttle('throttle:sms')->allow(5)->every(60)->then(function () {
    $response = myAPICall();
    try {
        $test = 8 / 0;
        SaveResponse($response);
    } catch (LimiterTimeoutException $exception) {
        throw $exception; // throw the exception back to let Redis handle it below
    } catch (\Exception $exception) {
        Log::error($exception);
        $this->fail($exception);
    }
}, function ($error) { // could not obtain lock
    return $this->release(60); // put back in the queue in 60s
});
I added $this->fail($exception) to make the job show up as "failed" in Horizon.
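For reference, the missing backslash matters because of PHP's name resolution: inside a namespaced job class, an unqualified Exception in a catch clause refers to a class in that namespace, so the catch never matches and the exception bubbles up, which is why the job kept being retried. A minimal sketch (the namespace and class names are just for illustration):

namespace App\Jobs; // assumed namespace

class SendSmsJob
{
    public function handle()
    {
        try {
            throw new \RuntimeException('saving the response failed');
        } catch (Exception $e) {
            // Unqualified "Exception" resolves to App\Jobs\Exception here,
            // which never matches, so this block is skipped.
        } catch (\Exception $e) {
            // "\Exception" is the global base class and does match.
        }
    }
}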
Using the transaction below, I fire an event, which runs the two listeners listed under it. The issue is that when the first listener, AddQuestionToQuestionsTable, fails for any reason, the DB data gets rolled back correctly because of DB::rollback, but the Laravel notification email I have set up and fired in the second listener, QuestionAddedNotificationSend, gets sent out regardless of whether there was an error.
If there was an error in the transaction, we would not want to send the email. Note: I may add additional listeners that also insert into the DB, so I need to know how to fire off the emails only when the transaction is successful.
DB::beginTransaction();
try {
    event(new LaravelQuestionPosted($question, $user));
    // Listeners: AddQuestionToQuestionsTable
    // Listeners: QuestionAddedNotificationSend
} catch (\Exception $e) {
    DB::rollback();
    // something went wrong
}
DB::commit();
Anyone know how to make it work as intended?
You can pass true as the third parameter when dispatching the event with event() to have listener handling halted when any of the listeners returns a non-null value or throws an exception.
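A minimal sketch of that suggestion applied to the code from the question, assuming AddQuestionToQuestionsTable throws on failure and returns null on success (the commit is moved inside the try so it only runs when no listener failed):

DB::beginTransaction();
try {
    // Third argument ($halt) set to true: the dispatcher stops calling
    // listeners as soon as one returns a non-null value, and an exception
    // thrown by AddQuestionToQuestionsTable propagates out of event(),
    // so QuestionAddedNotificationSend is never reached.
    event(new LaravelQuestionPosted($question, $user), [], true);

    DB::commit();
} catch (\Exception $e) {
    DB::rollback();
    // something went wrong; no notification email was sent
}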
Say, for example, I am trying to initialize a calendar with recurring events.
try {
    $api->createCalendar();
    $api->createCalendarSettings();
    $api->createRecurringEvent();
} catch (Exception $exception) {
    // how do I undo whatever already succeeded at this point?
}
I want these 3 calls to act as a unit. If any one of them fails, previously successful actions must be undone. Is there a smart way of doing this other than calling the reverse action for every action in the catch block? The problem with doing that is that I don't know which actions have already succeeded, and the reverse actions might also blow up.
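One way to handle the bookkeeping described above, sketched here rather than offered as the definitive answer: record a compensating closure after each call that succeeds, then run them in reverse order in the catch block. The delete* methods below are assumed reverse calls and may not exist in the real API:

$undo = [];

try {
    $calendar = $api->createCalendar();
    $undo[] = function () use ($api, $calendar) {
        $api->deleteCalendar($calendar); // assumed reverse call
    };

    $settings = $api->createCalendarSettings();
    $undo[] = function () use ($api, $settings) {
        $api->deleteCalendarSettings($settings); // assumed reverse call
    };

    $api->createRecurringEvent();
} catch (\Exception $exception) {
    // Only the steps that actually succeeded have an undo entry;
    // run them in reverse order and keep going even if one of them fails.
    foreach (array_reverse($undo) as $rollback) {
        try {
            $rollback();
        } catch (\Exception $rollbackFailure) {
            Log::error($rollbackFailure); // compensation is best-effort
        }
    }

    throw $exception;
}

This is essentially a small-scale compensation (saga-like) pattern; the inner try/catch keeps one failed undo from blocking the remaining ones.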