Laravel: Is re-queueing a job a bad idea? - php

I would like to know whether re-queueing a Laravel job is a bad idea or not. I have a scenario where I need to pull users' posts from Facebook once they have connected their Facebook account to my application, and I want to pull {x} days of historic data. The Facebook API, like any other API, limits requests per minute. I keep track of the request headers, and once the rate limit is reached I save that information in the database; on each re-queue I check whether I am eligible to make another call to the Facebook API.
Here is a code snippet for better visualization:
<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;

class FacebookData implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    /**
     * The number of seconds the job can run before timing out.
     *
     * @var int
     */
    public $timeout = 120;

    public $userid;

    public function __construct($id)
    {
        $this->userid = $id;
    }

    // $fbhelper is a rate-limit helper resolved from the container
    // (FacebookHelper is only a placeholder name for it).
    public function handle(FacebookHelper $fbhelper)
    {
        if ($fbhelper->canPullData()) {
            $res = $fbhelper->getData($this->userid);

            // Rate limit hit: remember the Retry-After data and re-queue.
            if ($res['code'] == 429) {
                $fbhelper->storeRetryAfter($res);
                self::dispatch($this->userid);
            }
        }
    }
}
The above snippet is a rough idea. Is this a good idea? The reason I post this question is that self::dispatch($this->userid); looks like recursion, and it will keep retrying until $fbhelper->canPullData() returns true, which will probably take about 6 minutes. I am worried about any impact this could have on my application. Thanks in advance.

Retrying a job is not a bad idea; it is already built into the design of jobs. Laravel has retries for exactly this reason, so that jobs can perform unreliable operations.
As an example, in a project I have been working on, an external API we work with returns 1-5 HTTP 500 errors per 100 requests we send. This is handled by Laravel's built-in retry functionality.
As of Laravel 5.4 you can set it on the class like so. This will do exactly what you want without you having to define the logic yourself. For the retry delay, you can define a method called retryAfter(), which specifies how many seconds to wait before the job should be retried.
class FacebookData
{
    public $tries = 5;

    public function retryAfter()
    {
        // wait 6 minutes
        return 360;
    }
}
If you want to keep your logic where you only retry on 429 errors, I would use the inverse of that to delete the job if it is anything other than a 429:
if ($res['code'] !== 429) {
    $this->delete();
}
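As a further variation, if you want to honour Facebook's Retry-After information directly rather than a fixed retryAfter() delay, here is a minimal sketch, assuming the job uses Laravel's InteractsWithQueue trait and that the number of seconds to wait is available in the response array under a hypothetical 'retry_after' key:

public function handle(FacebookHelper $fbhelper) // FacebookHelper is an assumed helper class
{
    $res = $fbhelper->getData($this->userid);

    if ($res['code'] === 429) {
        // Push the job back onto the queue, delayed by the number of
        // seconds the API asked us to wait ('retry_after' is hypothetical).
        $this->release($res['retry_after'] ?? 360);
        return;
    }

    if ($res['code'] !== 200) {
        // Anything else is treated as a hard failure: drop the job.
        $this->delete();
    }
}

release() and delete() are both provided by the InteractsWithQueue trait, so no extra plumbing is needed beyond what the job class above already uses.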

Related

Laravel application: connection refused when concurrent client connections exceed 200

I am developing a web application using the Laravel framework. As part of a load test, we hit the application with parallel connections through a tool and found that the application does not accept more than 200 concurrent connections at a time; beyond 200 concurrent connections, the HTTP connection is refused.
I have configured more than 200 (up to 1000) as the throttle count in api.php, but it doesn't resolve the issue. If I configure less than 200, it works perfectly up to the throttle count, but beyond 200 the parallel connections always fail on the client.
Your App\Providers\RouteServiceProvider class is generally where you would add global rate limiting, using the configureRateLimiting method, so it should look something like this for your case:
use Illuminate\Cache\RateLimiting\Limit;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\RateLimiter;

/**
 * Configure the rate limiters for the application.
 *
 * @return void
 */
protected function configureRateLimiting()
{
    RateLimiter::for('global', function (Request $request) {
        return Limit::perMinute(1000); // add any number of calls per minute that fits your criteria
    });
}
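Note that a named limiter only takes effect on routes that pass through the throttle middleware with that name. A minimal sketch of wiring it up (UserController and the /users route are placeholders for illustration):

// routes/api.php -- apply the 'global' limiter to a route group
use App\Http\Controllers\UserController; // placeholder controller
use Illuminate\Support\Facades\Route;

Route::middleware(['throttle:global'])->group(function () {
    Route::get('/users', [UserController::class, 'index']);
});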

Turn PDU device on with Laravel Scheduler and Laravel Redirect

I have a Power Delivery Unit device connected to my local network that is controlled by a simple web page, and I'm trying to use the Laravel Scheduler to turn the device on and off at specific times of the day. So I've created a PDUController with the following methods (both approaches work fine):
public function on()
{
    return redirect()->away(
        "http://192.168.0.100/control_outlet.htm?outlet6=1&outlet7=1&op=0&submit=Apply"
    );
}

public function off()
{
    return Redirect::to(
        "http://192.168.0.100/control_outlet.htm?outlet6=1&outlet7=1&op=1&submit=Apply"
    );
}
If you visit the routes:
Route::get('/pdu/on', [PDUController::class, 'on'])->name('pdu.on');
Route::get('/pdu/off', [PDUController::class, 'off'])->name('pdu.off');
Everything works as expected and you can turn the device on or off accordingly. Now, when I use the Scheduler in Kernel.php like this:
protected function schedule(Schedule $schedule)
{
    $schedule->call('App\Http\Controllers\PDUController@on')->everyMinute();
}
It doesn't work.
I know the flow is right, because I can Log::debug() every step of the process correctly, but the device is not receiving the signal and thus is not turning on.
Could you help, please? Thanks!
The Redirect functions return a redirect response to the browser. Since the scheduler runs through Artisan, there is no browser to redirect, so no connection is made. Instead, use the HTTP Client to create a connection to the page.
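For example, a minimal sketch using Laravel's HTTP client (available since Laravel 7) inside the scheduler, reusing the device URL from the question, might look like this:

// app/Console/Kernel.php
use Illuminate\Support\Facades\Http;

protected function schedule(Schedule $schedule)
{
    $schedule->call(function () {
        // Perform the request directly; no browser redirect is involved here.
        Http::get('http://192.168.0.100/control_outlet.htm', [
            'outlet6' => 1,
            'outlet7' => 1,
            'op' => 0,
            'submit' => 'Apply',
        ]);
    })->everyMinute();
}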

Rate limiting PHP function

I have a PHP function which gets called when someone visits POST www.example.com/webhook. However, the external service, which I cannot control, sometimes calls this URL twice in rapid succession, which messes with my logic, since the webhook persists data in the database and that takes a few milliseconds to complete.
In other words, when the second request comes in (which cannot be ignored), the first request has likely not completed yet; however, I need these requests to be completed in the order they came in.
So I've created a little hack in Laravel which should "throttle" the execution with 5 seconds in between. It seems to work most of the time, but due to an error in my code or some other oversight, this solution does not work every time.
function myWebhook() {
    // Check the cached value (defaults to 0) and compare it with the current time.
    while (Cache::get('g2a_webhook_timestamp', 0) + 5 > Carbon::now()->timestamp) {
        // Postpone execution.
        sleep(1);
    }

    // Store the current timestamp in the cache (file storage).
    Cache::put('g2a_webhook_timestamp', Carbon::now()->timestamp, 1);

    // Execute rest of code ...
}
Does anyone have a watertight solution for this issue?
You have essentially designed your own simplified queue system, which is the right approach, but you can make use of the native Laravel queue for a more robust solution to your problem.
Define a job, e.g. ProcessWebhook.
When a POST request is received at /webhook, queue the job.
The Laravel queue worker will process one job at a time[1], in the order they're received, ensuring that no matter how many requests come in, they'll be processed one by one and in order.
The implementation of this would look something like this:
Create a new job, e.g. php artisan make:job ProcessWebhook.
Move your webhook processing code into the handle method of the job, e.g.:
public $data;

public function __construct($data)
{
    $this->data = $data;
}

public function handle()
{
    Model::where('x', 'y')->update([
        'field' => $this->data->newValue
    ]);
}
Modify your Webhook controller to dispatch a new job when a POST request is received, e.g:
public function webhook(Request $request)
{
    $data = $request->getContent();
    ProcessWebhook::dispatch($data);
}
Start your queue worker, php artisan queue:work, which will run in the background processing jobs in the order they arrive, one at a time.
That's it, a maintainable solution to processing webhooks in order, one-by-one. You can read the Queue documentation to find out more about the functionality available, including retrying failed jobs which can be very useful.
[1] Laravel will process one job at a time per worker. You can add more workers to improve queue throughput for other use cases but in this situation you'd just want to use one worker.
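As a hedged sketch of the retry behaviour mentioned above, the job class itself can declare how often and how quickly it should be retried (the numbers below are only examples; $backoff is the Laravel 8+ property name, older versions use $retryAfter):

use Illuminate\Contracts\Queue\ShouldQueue;

class ProcessWebhook implements ShouldQueue
{
    // Maximum number of attempts before the job is considered failed.
    public $tries = 3;

    // Seconds to wait between attempts ($retryAfter on Laravel before 8).
    public $backoff = 10;
}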

How to handle deadlock in Doctrine?

I have a mobile application and a server based on Symfony which provides the API for the mobile app.
I have a situation where users can like a Post. When a user likes a Post, I add an entry in a ManyToMany table recording that this particular user liked this particular Post (step 1). Then in the Post table I increase the likesCounter (step 2). Then in the User table I increase the gamification points for the user (because he liked the Post) (step 3).
So there is a situation where many users like a particular Post at the same time and a deadlock occurs (on the Post table or on the User table).
How do I handle this? In the Doctrine docs I can see a solution like this:
<?php

try {
    // process stuff
} catch (\Doctrine\DBAL\Exception\RetryableException $e) {
    // retry the processing
}
but what should I do in the catch part? Retry the whole process of liking (steps 1 to 3), for instance 3 times, and if it still fails return a BadRequest to the mobile application? Or something else?
I don't know if this is a good example, because maybe I could rebuild the process so that the deadlock doesn't happen, but I would like to know what I should do when deadlocks actually happen.
I disagree with Stefan: deadlocks are normal, as the MySQL documentation says:
"Normally, you must write your applications so that they are always prepared to re-issue a transaction if it gets rolled back because of a deadlock."
See: MySQL documentation
However, the loop suggested by Stefan is the right solution, except that it lacks an important point: after Doctrine has thrown an exception, the EntityManager becomes unusable and you must create a new one in the catch clause with resetManager() from the ManagerRegistry instance.
When I had exactly the same concern as you, I searched the web but couldn't find a completely satisfactory answer, so I got my hands dirty and came back with an article where you'll find an implementation example of what I said above:
Thread-safe business logic with Doctrine
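A minimal sketch of that reset (assuming a ManagerRegistry instance, here $doctrine, has been injected, and with the actual like/flush logic elided) could look like this:

use Doctrine\DBAL\Exception\RetryableException;

try {
    // ... persist the like, bump the counters, then $em->flush() ...
} catch (RetryableException $e) {
    // The EntityManager is closed after the exception: obtain a fresh one.
    $em = $doctrine->resetManager();
    // ... re-issue the whole transaction (steps 1 to 3) with the new EntityManager ...
}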
What I'd do is post all likes to a queue and consume them using a batch consumer, so that you can group the updates on a single post.
If you insist on keeping your current implementation, you could go down the road you yourself suggested, like this:
<?php

for ($i = 0; $i < $retryCount; $i++) {
    try {
        // try updating
        break;
    } catch (\Doctrine\DBAL\Exception\RetryableException $e) {
        // you could also add a delay here
        continue;
    }
}

if ($i === $retryCount) {
    // throw BadRequest
}
This is an ugly solution and I wouldn't suggest it. Deadlocks shouldn't be "avoided" by retrying or using delays. Also have a look at named locks and use the same retry system, but don't wait for the deadlock to happen.
The problem is that after the Symfony Entity Manager fails, it closes the DB connection and you can't continue working with the database, even if you catch the ORMException.
The first good solution is to process your 'likes' asynchronously, with RabbitMQ or another queue implementation.
Step by step:
Create a message like {type: 'like', user: 123, post: 456}
Publish it to the queue
Consume it and update the 'likes' count.
You can have several consumers that try to obtain a lock based on the postId. If two consumers try to update the same post, one of them will fail to obtain the lock, but that's OK; you can consume the failed message later.
The second solution is to have a special table, e.g. post_likes (userId, postId, timestamp). Your endpoint could create new rows in this table synchronously, and you can count the 'likes' on a post from this table. Or you can write a cron script which updates the post's likes count from this table.
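For the second solution, a rough sketch of counting likes from such a table through the DBAL connection (table and column names are hypothetical, and fetchOne() assumes a recent DBAL version):

// $connection is a Doctrine\DBAL\Connection instance (e.g. $em->getConnection()).
$likesCount = (int) $connection->fetchOne(
    'SELECT COUNT(*) FROM post_likes WHERE postId = :postId',
    ['postId' => $postId]
);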
I've made a special class to retry on deadlock (I'm on Symfony 4.4).
Here it is :
use Doctrine\DBAL\Exception\RetryableException;
use Doctrine\ORM\EntityManagerInterface;

class AntiDeadlockService
{
    /**
     * @var EntityManagerInterface
     */
    private $em;

    public function __construct(EntityManagerInterface $em)
    {
        $this->em = $em;
    }

    public function safePush(): void
    {
        // to retry on deadlocks or other retryable exceptions
        $connection = $this->em->getConnection();

        $retry = 0;
        $maxRetries = 3;

        while ($retry < $maxRetries) {
            try {
                // recreate a usable EntityManager if the previous one was closed
                if (!$this->em->isOpen()) {
                    $this->em = $this->em->create(
                        $connection = $this->em->getConnection(),
                        $this->em->getConfiguration()
                    );
                }

                $connection->beginTransaction(); // suspend auto-commit
                $this->em->flush();
                $connection->commit();

                break;
            } catch (RetryableException $exception) {
                $connection->rollBack();

                $retry++;
                if ($retry === $maxRetries) {
                    throw $exception;
                }
            }
        }
    }
}
Use this safePush() method instead of calling $entityManager->flush() directly ;)

How do I customize and use Phirehose functions?

I'm trying to put in a check for Phirehose to stop running after 10 seconds or 100 tweets...basically, I want to be able to stop the script.
I was told I could customize the statusUpdate() function or the heartBeat() function, but I'm uncertain how to do that. Right now, I'm just testing with the filter-track.php example.
How do I customize the functions, and where should I call them in the class?
class FilterTrackConsumer extends OauthPhirehose
{
    /**
     * Enqueue each status
     *
     * @param string $status
     */
    public function enqueueStatus($status)
    {
        /*
         * In this simple example, we will just display to STDOUT rather than enqueue.
         * NOTE: You should NOT be processing tweets at this point in a real application; instead they should be
         * enqueued and processed asynchronously from the collection process.
         */
        $data = json_decode($status, true);
        if (is_array($data) && isset($data['user']['screen_name'])) {
            print $data['user']['screen_name'] . ': ' . urldecode($data['text']) . "\n";
        }
    }

    public function statusUpdate()
    {
        return "asdf";
    }
}
// The OAuth credentials you received when registering your app at Twitter
define("TWITTER_CONSUMER_KEY", "");
define("TWITTER_CONSUMER_SECRET", "");
// The OAuth data for the twitter account
define("OAUTH_TOKEN", "");
define("OAUTH_SECRET", "");
// Start streaming
$sc = new FilterTrackConsumer(OAUTH_TOKEN, OAUTH_SECRET, Phirehose::METHOD_FILTER);
$sc->setLang('en');
$sc->setTrack(array('love'));
$sc->consume();
To stop after 100 tweets, have a counter in that function receiving the tweets, and call exit when done:
class FilterTrackConsumer extends OauthPhirehose
{
    private $tweetCount = 0;

    public function enqueueStatus($status)
    {
        // Process $status here
        if (++$this->tweetCount >= 100) exit;
    }
...
(Instead of exit you could throw an exception, and put a try/catch around your $sc->consume(); line.)
For a shutdown "after 10 seconds", this is easy if it can be roughly 10 seconds (i.e. put a time check in enqueueStatus() and exit if it has been more than 10 seconds since the program started), but hard if you want it to be exactly 10 seconds, because enqueueStatus() is only called when a tweet comes in. As an extreme example, if you get 200 tweets in the first 9 seconds, but then it goes quiet and the 201st tweet does not arrive for another 80 seconds, your program would not exit until it has been running for 89 seconds.
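A rough sketch of that time check (assuming the start time is recorded in the constructor, with the same constructor arguments as the example above) could look like this:

class FilterTrackConsumer extends OauthPhirehose
{
    private $startTime;

    public function __construct($token, $secret, $method)
    {
        parent::__construct($token, $secret, $method);
        $this->startTime = time();
    }

    public function enqueueStatus($status)
    {
        // Process $status here ...

        // Roughly 10 seconds: this is only checked when a tweet arrives.
        if (time() - $this->startTime >= 10) {
            exit;
        }
    }
}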
Taking a step back, wanting to stop Phirehose is normally a sign it is the wrong tool for the job. If you just want to poll 100 recent tweets, every now and again, then the REST API, doing a simple search, is better. The streaming API is more for applications that intend to run 24/7, and want to react to tweets as soon as they are, well, tweeted. (More critically, Twitter will rate-limit, or close, your account if you connect too frequently.)
