There is an API called sendMessage; its URI is chat/sendMessage and the request body is:
{
    "Message": "Hello",
    "targetUser": "SomeOne"
}
After this URI is called, the code goes to MessageController, uses the Send() function, and sends the message to the targetUser. What I want to do is turn this into a scheduled chat, meaning I want to change the body of the request to this:
{
    "Message": "Hello",
    "targetUser": "SomeOne",
    "Date": "4 5 30 3 6" // a Laravel cron expression: minute 4, hour 5 (i.e. 05:04), day 30, month 3 (March), weekday 6 (Saturday)
}
so that calling the URI does the job for me on that date.
So I created the ChatCron.php file in the app/Console/Commands directory; now I have to implement the logic in its handle() method and register the command in Kernel.php.
The problem is: I can't figure out how to connect ChatCron.php to MessageController. I don't know how to tell Laravel: after the URI is called, go to ChatCron.php and set a timer for calling the Send() function in MessageController.
What I know right now is that I can schedule the handle() function of ChatCron.php in Kernel.php for a specific date with:
$schedule->command('cron:chat')
    ->cron('4 5 30 3 6')
    ->runInBackground();
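As an aside, parsing that cron-style "Date" field into a concrete point in time is straightforward. The sketch below is only an illustration (the helper name and the explicit year parameter are my own assumptions, not part of the original API); the resulting DateTime is the kind of value a delayed dispatch could use.

```php
<?php

// Hypothetical helper: convert the "minute hour day month weekday" string
// from the request body into a DateTime. The weekday field is ignored here.
function cronFieldsToDateTime(string $cron, int $year): DateTime
{
    [$minute, $hour, $day, $month] = array_map('intval', preg_split('/\s+/', trim($cron)));

    return new DateTime(sprintf(
        '%04d-%02d-%02d %02d:%02d:00',
        $year, $month, $day, $hour, $minute
    ));
}
```

For "4 5 30 3 6" this yields 05:04 on 30 March of the given year.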
When I create a job, I need to run a function before the job starts, so I updated the DatabaseJob class:
<?php

namespace App\Queue\Jobs;

use App\Models\Tenancy;
use App\Http\DatabaseHelper;
use App\Http\Helper;

class DatabaseJob extends \Illuminate\Queue\Jobs\DatabaseJob
{
    public function fire()
    {
        // Log the job before delegating to the parent implementation
        Helper::createJobLog($this->job->id);
        parent::fire();
    }
}
but it seems createJobLog is fired only when the job starts; I need it to run when the job is created, not when it starts.
In a service provider you can listen for the Illuminate\Queue\Events\JobQueued event. Something similar to:
Event::listen(JobQueued::class, function ($event) {
    // Of course, if you need to log only the database jobs then you can check the job type
    if (!$event->job instanceof DatabaseJob) {
        return;
    }
    Helper::createJobLog($event->job->getJobId());
});
You could also call createJobLog() at the point where the job is dispatched. Jobs can be given a delay timestamp if you don't want them to start immediately after being dispatched.
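A minimal sketch of that delay arithmetic (the helper name is mine, for illustration): compute the number of seconds between now and the desired start time, which is the kind of value you would pass along when dispatching the job with a delay.

```php
<?php

// Seconds from $now until $target, clamped at zero so a timestamp in the
// past means "start immediately".
function secondsUntil(DateTimeInterface $target, DateTimeInterface $now): int
{
    return max(0, $target->getTimestamp() - $now->getTimestamp());
}
```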
I am trying to debug some bizarre behaviour of my PHP application. It is running Laravel 6 + AWS SQS. The program downloads call recordings from a VoIP provider's API using a job. The API has a heavy rate limit of 10 req/minute, so I'm throttling the requests on my side.
The job is configured to try to complete within 24 hours using the retryUntil method. However, the job disappears from the queue after 4 tries. It doesn't fail: the job's failed method never gets executed (I've put logging and Sentry::capture in there), and it's not in the failed_jobs table. The last log line says "Cannot complete job, retrying in ... seconds", which is written right before the release call. After that, the job simply disappears from the queue and never gets executed again.
I am logging the number of attempts, max tries, timeoutAt, etc. Everything seems to be configured properly. Here's (the essence of) my code:
public function handle()
{
    /** @var Track $track */
    $track = Track::idOrUuId($this->trackId);

    $this->logger->info('Downloading track', [
        'trackId' => $track->getId(),
        'attempt' => $this->attempts(),
        'retryUntil' => $this->job->timeoutAt(),
        'maxTries' => $this->job->maxTries(),
    ]);

    $throttleKey = sprintf('track.download.%s', $track->getUser()->getTeamId());
    if (!$this->rateLimiter->tooManyAttempts($throttleKey, self::MAX_ALLOWED_JOBS)) {
        $this->downloadTrack($track);
        $this->rateLimiter->hit($throttleKey, 60);
    } else {
        $delay = random_int(10, 100) + $this->rateLimiter->availableIn($throttleKey);
        $this->logger->info('Throttling track download.', [
            'trackId' => $track->getId(),
            'delay' => $delay,
        ]);
        $this->release($delay);
    }
}
public function retryUntil(): DateTimeInterface
{
    return now()->addHours(24);
}

public function failed(Exception $exception)
{
    $this->logger->info('Job failed', ['exception' => $exception->getMessage()]);
    Sentry::captureException($exception);
}
I found the problem, and I'm posting it here for anyone who might struggle with this in the future. It all came down to a simple configuration. In AWS SQS, the queue I am working with has a DLQ (dead-letter queue) configured, with Maximum receives set to 4. According to the SQS docs:
The Maximum receives value determines when a message will be sent to the DLQ. If the ReceiveCount for a message exceeds the maximum receive count for the queue, Amazon SQS moves the message to the associated DLQ (with its original message ID).
Since this is an infrastructure-level configuration, it overrides any Laravel parameters you might pass to the job. And because the message is simply removed from the queue, the processing job does not actually fail, so the failed method is never executed.
The problem
I'm dispatching a job to execute an action that needs a resource to be ready in order to run correctly, so if it fails it needs to be retried after some time.
But what really happens is that if it fails, it's never executed again.
I'm using Supervisor to manage the queue with the database driver, and I have changed nothing in my default queue.php config file.
Using Laravel 5.8.
What I've tried
I've already tried manually setting the number of tries inside the job class:
public $tries = 5;
and the retry delay with:
public $retryAfter = 60;
My code
I'm implementing this job based on the default template generated by make:job; my constructor and handle methods look like this:
public function __construct($event, $data)
{
    $this->event = $event;
    $this->data = $data;
}

public function handle()
{
    Log::info('Job started | ' . $this->event . ' | Attempt: ' . $this->attempts());
    // Executes some logic and throws an Exception if it fails
    Log::info('Job succeeded | ' . $this->event);
}
It never reaches the "Job succeeded" log and never logs any attempt other than 1.
Is there some concept I'm missing or is this code wrong somehow?
It was actually a very silly problem, but I'll leave the solution here in case anyone else runs into it.
In my queue.php the default driver was set to sync, which just runs the job's handle method inline without queueing it (as I said, I hadn't changed anything). Setting the driver to database fixed it.
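For reference, the driver is usually set through the environment file rather than by editing queue.php directly (the variable name depends on the release: QUEUE_CONNECTION from Laravel 5.7 onwards, QUEUE_DRIVER on older versions):

```
QUEUE_CONNECTION=database
```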
Our emails are failing to send using Laravel with a Redis Queue.
The code that triggers the error is this (note the ->onQueue('emails') call):
$job = (new SendNewEmail($sender, $recipients))->onQueue('emails');
$job_result = $this->dispatch($job);
In combination with this in the job:
use InteractsWithQueue;
Our error message is:
Feb 09 17:15:57 laravel: message repeated 7947 times: [ production.ERROR: exception 'Swift_TransportException' with message 'Expected response code 354 but got code "550", with message "550 5.7.0 Requested action not taken: too many emails per second "' in /home/laravel/app/vendor/swiftmailer/swiftmailer/lib/classes/Swift/Transport/AbstractSmtpTransport.php:383 Stack trace: #0 /home/laravel/app/vendor/swiftmailer/swiftmailer/lib/classes/Swift/Transport/AbstractSmtpTransport.php(281):
Our error only happens using Sendgrid and not Mailtrap, which fakes email sending. I've talked with Sendgrid: the emails never touched their servers, and their service was fully active when the error occurred. So the error appears to be on my end.
Any thoughts?
It seems like only Mailtrap returns this error, so either open another account or upgrade to a paid plan.
I finally figured out how to set up the entire Laravel app to throttle mail based on a config.
In the boot() function of AppServiceProvider:
$throttleRate = config('mail.throttleToMessagesPerMin');
if ($throttleRate) {
    $throttlerPlugin = new \Swift_Plugins_ThrottlerPlugin($throttleRate, \Swift_Plugins_ThrottlerPlugin::MESSAGES_PER_MINUTE);
    Mail::getSwiftMailer()->registerPlugin($throttlerPlugin);
}
In config/mail.php, add this line:
'throttleToMessagesPerMin' => env('MAIL_THROTTLE_TO_MESSAGES_PER_MIN', null), //https://mailtrap.io has a rate limit of 2 emails/sec per inbox, but consider being even more conservative.
In your .env files, add a line like:
MAIL_THROTTLE_TO_MESSAGES_PER_MIN=50
The only problem is that it doesn't seem to affect mail sent via the later() function if QUEUE_DRIVER=sync.
For debugging only!
If you don't expect more than 5 emails and don't have the option to change Mailtrap, try:
foreach ($emails as $email) {
    ...
    Mail::send(... $email);
    if (env('MAIL_HOST', false) == 'smtp.mailtrap.io') {
        sleep(1); // use usleep(500000) for half a second or less
    }
}
Using sleep() is really bad practice; in theory this code should only execute in a test environment or in debug mode.
Maybe you should make sure it was really sent via Sendgrid and not Mailtrap. Sendgrid's hard rate limit currently seems to be 3k requests per second, against 3 requests per second for Mailtrap on the free plan :)
I used sleep(5) to wait five seconds before using mailtrap again.
foreach ($this->suscriptores as $suscriptor) {
    \Mail::to($suscriptor->email)
        ->send(new BoletinMail($suscriptor, $sermones, $entradas));
    sleep(5);
}
The foreach traverses all the emails stored in the database, and the sleep(5) halts the loop for five seconds before continuing with the next email.
I achieved this on Laravel 5.8 by setting the authentication routes manually. The routes live in routes\web.php; below are the routes that need to be added to that file:
Auth::routes();

Route::get('email/verify', 'Auth\VerificationController@show')->name('verification.notice');
Route::get('email/verify/{id}', 'Auth\VerificationController@verify')->name('verification.verify');

Route::group(['middleware' => 'throttle:1,1'], function () {
    Route::get('email/resend', 'Auth\VerificationController@resend')->name('verification.resend');
});
Explanation:
Do not pass any parameters to Auth::routes(); this lets you configure the authentication routes manually.
Wrap the email/resend route inside a Route::group with the throttle:1,1 middleware (the two numbers represent the maximum number of attempts and the decay time in minutes for those attempts).
I also removed a line of code in the file app\Http\Controllers\Auth\VerificationController.php in the __construct function.
I removed this:
$this->middleware('throttle:6,1')->only('verify', 'resend');
You need to rate limit the emails queue.
The "official" way is to set up the Redis queue driver, but that's hard and time-consuming.
So I wrote a custom queue worker, mxl/laravel-queue-rate-limit, that uses Illuminate\Cache\RateLimiter to rate limit job execution (the same class Laravel uses internally to rate limit HTTP requests).
In config/queue.php, specify the rate limit for the emails queue (for example, 2 emails per second):
'rateLimit' => [
    'emails' => [
        'allows' => 2,
        'every' => 1
    ]
]
And run worker for this queue:
$ php artisan queue:work --queue emails
I had this problem when working with Mailtrap. I was sending 10 mails in one second and it was treated as spam, so I had to add a delay between jobs. Here is the solution (it's Laravel; I have a system_jobs table and use the database queue driver).
Make a static function that checks the last job's time and adds self::DELAY_IN_SECONDS, however many seconds you want between jobs:
public static function addSecondsToQueue() {
    $job = SystemJobs::orderBy('available_at', 'desc')->first();
    if ($job) {
        $now = Carbon::now()->timestamp;
        $jobTimestamp = $job->available_at + self::DELAY_IN_SECONDS;
        $result = $jobTimestamp - $now;
        return $result;
    } else {
        return 0;
    }
}
Then use it to send messages with a delay, taking into consideration the last job in the queue:
Mail::to($mail)->later(SystemJobs::addSecondsToQueue(), new SendMailable($params));
I'm trying to put in a check for Phirehose to stop running after 10 seconds or 100 tweets; basically, I want to be able to stop the script.
I was told I could customize the statusUpdate() function or the heartBeat() function, but I'm uncertain how to do that. Right now, I'm just testing with the filter-track.php example.
How do I customize the functions, and where should I call them in the class?
class FilterTrackConsumer extends OauthPhirehose
{
    /**
     * Enqueue each status
     *
     * @param string $status
     */
    public function enqueueStatus($status)
    {
        /*
         * In this simple example, we will just display to STDOUT rather than enqueue.
         * NOTE: You should NOT be processing tweets at this point in a real application; instead they should be
         * enqueued and processed asynchronously from the collection process.
         */
        $data = json_decode($status, true);
        if (is_array($data) && isset($data['user']['screen_name'])) {
            print $data['user']['screen_name'] . ': ' . urldecode($data['text']) . "\n";
        }
    }

    public function statusUpdate()
    {
        return "asdf";
    }
}
// The OAuth credentials you received when registering your app at Twitter
define("TWITTER_CONSUMER_KEY", "");
define("TWITTER_CONSUMER_SECRET", "");
// The OAuth data for the twitter account
define("OAUTH_TOKEN", "");
define("OAUTH_SECRET", "");
// Start streaming
$sc = new FilterTrackConsumer(OAUTH_TOKEN, OAUTH_SECRET, Phirehose::METHOD_FILTER);
$sc->setLang('en');
$sc->setTrack(array('love'));
$sc->consume();
To stop after 100 tweets, keep a counter in the function receiving the tweets and call exit when done:
class FilterTrackConsumer extends OauthPhirehose
{
    private $tweetCount = 0;

    public function enqueueStatus($status)
    {
        // Process $status here
        if (++$this->tweetCount >= 100) {
            exit;
        }
    }
...
(Instead of exit you could throw an exception, and put a try/catch around your $sc->consume(); line.)
For shutdown "after 10 seconds", this is easy if it can be roughly 10 seconds (i.e. put a time check in enqueueStatus(), and exit if it has been more than 10 seconds since the program started), but hard if you want it to be exactly 10 seconds. This is because enqueueStatus() is only called when a tweet comes in. So, as an extreme example, if you get 200 tweets in the first 9 seconds, but then it goes quiet and the 201st tweet does not arrive for 80 more seconds, your program would not exit until the program has been running 89 seconds.
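The stop condition described above can be reduced to a small predicate (the names here are mine, for illustration): on each incoming tweet, check whether either the tweet count or the elapsed time has hit its limit, and exit (or throw) when it has.

```php
<?php

// Sketch of the combined stop condition: true once 100 tweets have been
// processed or roughly $maxSeconds have elapsed since the stream started.
// In Phirehose this would be checked at the top of enqueueStatus().
function shouldStop(int $tweetCount, float $elapsedSeconds, int $maxTweets = 100, float $maxSeconds = 10.0): bool
{
    return $tweetCount >= $maxTweets || $elapsedSeconds >= $maxSeconds;
}
```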
Taking a step back, wanting to stop Phirehose is normally a sign it is the wrong tool for the job. If you just want to poll 100 recent tweets, every now and again, then the REST API, doing a simple search, is better. The streaming API is more for applications that intend to run 24/7, and want to react to tweets as soon as they are, well, tweeted. (More critically, Twitter will rate-limit, or close, your account if you connect too frequently.)