I am new to Redis and I am currently using PHP-Resque. How can I define a job in PHP-Resque?
This has changed in the newest version of PHP-Resque (1.2, released 13 October 2012). According to the changelog, the update will "Wrap job arguments in an array to improve compatibility with ruby resque",
which means that if you've upgraded to PHP-Resque 1.2 you will access your job arguments via $args[0].
Queueing Jobs
Jobs are queued as follows:
require_once 'lib/Resque.php';
// Required if redis is located elsewhere
Resque::setBackend('localhost:6379');
$args = array(
'name' => 'Chris'
);
Resque::enqueue('default', 'My_Job', $args);
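If you also need to check on a queued job later, the php-resque README describes optional status tracking; roughly (verify the exact class name against your installed version), passing true as a fourth argument returns a token you can poll:
// Enqueue with status tracking enabled via the fourth argument
$token = Resque::enqueue('default', 'My_Job', $args, true);
// Later, poll the job's status with the returned token
$status = new Resque_Job_Status($token);
echo $status->get(); // e.g. Resque_Job_Status::STATUS_WAITING or STATUS_COMPLETE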
Defining Jobs
Each job should be in its own class, and include a perform method.
class My_Job
{
public function perform()
{
// Work work work
echo $this->args['name']; // on PHP-Resque 1.2+, arguments are wrapped, so use $this->args[0]['name']
}
}
When the job is run, the class is instantiated and any arguments are set as an array on the instance, accessible via $this->args.
Any exception thrown by a job will result in the job failing, so be careful here and make sure you catch the exceptions that shouldn't cause the job to fail.
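For example, here is one way to keep recoverable errors from failing the job; the helper method and exception class are hypothetical placeholders:
class My_Job
{
    public function perform()
    {
        try {
            $this->doTheWork(); // hypothetical helper doing the real work
        } catch (RecoverableException $e) { // hypothetical exception type you consider non-fatal
            // Swallowing the exception here keeps the job from being marked as failed
            error_log('Recoverable problem, skipping: ' . $e->getMessage());
        }
        // Anything that escapes perform() marks the job as failed
    }
}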
(Laravel 8, PHP 8)
Hi. I have a bunch of data in the PHP APC cache that I can access across my Laravel application with the apcu commands.
I decided I should fire an async job to process some of that data for the user during a session and throw the results in the database.
So I made a middleware that fires (correctly) when the user accesses the page, and (correctly) dispatches a job called "MemoryProvider".
The dispatch command promptly instantiates the MemoryProvider class, running its constructor, and then queues the job for execution.
About a second later, the queue is processed and the handle method in MemoryProvider is run.
I check the content of the php cache with "apcu_cache_info()" and "apcu_exists()" in the middleware and both in the MemoryProvider constructor and in its handle method.
The problem:
The PHP cache appears populated throughout my Laravel app.
The PHP cache appears populated in the middleware.
The PHP cache appears populated in the job's constructor.
The PHP cache appears EMPTY in the job's handle method.
Here's the middleware:
public function handle($request, Closure $next)
{
$a = apcu_cache_info(); // 250,000 entries
$b = apcu_exists('the:2:0'); // true
MemoryProvider::dispatch($request);
return $next($request);
}
Here's the job's (MemoryProvider) constructor:
public function __construct($request)
{
$this->request = $request->all();
$a = apcu_cache_info(); // 250,000 entries
$b = apcu_exists('the:2:0'); // true
}
And here's the job's (MemoryProvider) handle method:
public function handle()
{
$a = apcu_cache_info(); // 0 entries
$b = apcu_exists('the:2:0'); // false
}
Question: is this a PHP limitation or a bad Laravel problem? And how can I access the content of my PHP cache in an async class?
p.s. I have apc.enable_cli=1 in php.ini
I found the answer. Apparently, it's a PHP limitation.
According to a good explanation given by gview back in 2017, a CLI process doesn't share state or memory with other CLI processes, so the APC memory space will never be shared this way.
I did find a workaround for my specific case: instead of running an async process to handle the heavy work in the background, I can get the same effect by simply issuing an AJAX request. The request is handled independently by PHP, with full access to the APC cache, and I can populate my database and let the user know when it's all done (or gradually done, as is the case).
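A rough sketch of that workaround, using hypothetical route and controller names (not part of the original setup):
// routes/web.php - hypothetical endpoint the page calls via AJAX after it loads
Route::post('/process-memory', [MemoryController::class, 'process']);
// app/Http/Controllers/MemoryController.php - hypothetical controller method
public function process(Request $request)
{
    // This runs inside a normal web request, so the APCu data is visible here
    $exists = apcu_exists('the:2:0'); // true, just like in the middleware
    // ... process the cached data and persist the results to the database ...
    return response()->json(['done' => true]);
}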
I wish I had thought of this sooner.
I'm adding functionality to a pre-existing app, using Laravel 5.8.38 and the SQS queue driver.
I'm looking for a way to log the receipt handle of queue messages as they're processed, so that we can manually delete messages from the queue for jobs that have gone horribly wrong (without the receipt ID, we'd have to wait for the visibility timeout to be reached).
I'm not super familiar with Laravel and am trying to figure things out as I go. We have two types of queued jobs:
a custom class implementing Illuminate\Contracts\Queue\ShouldQueue, that also uses the Illuminate\Queue\InteractsWithQueue, Illuminate\Foundation\Bus\Dispatchable and Illuminate\Bus\Queueable traits (our class gets queued directly)
a custom command, extending Illuminate\Console\Command, that runs via Illuminate\Foundation\Console\QueuedCommand
For the custom class, browsing through the source for InteractsWithQueue and Illuminate/Queue/Jobs/SqsJob I discovered I could access the receipt handle directly:
$sqsJob = $this->job->getSqsJob();
\Log::info("Processing SQS job {$sqsJob["MessageId"]} with handle {$sqsJob["ReceiptHandle"]}");
This works great! However, I can't figure out how to do a similar thing from the console command.
Laravel's QueuedCommand implements ShouldQueue as well as Illuminate\Bus\Queueable, so my current guess is that I'll need to extend this, use InteractsWithQueue, and retrieve and log the receipt handle from there. However if I do that, I can't figure out how I would modify Artisan::queue('app:command', $commandOptions); to queue my custom QueuedCommand class instead.
Am I almost there? If so, how can I queue my custom QueuedCommand class instead of the Laravel one? Or, is there a better way to do this?
OK, I had just posted this question when I realised that a suggestion a colleague offered provided a way forward.
So, here's my solution in case it helps anyone else!
Laravel fires an Illuminate\Queue\Events\JobProcessing event when processing any new queue job. I just needed to register a listener in app/Providers/EventServiceProvider.php:
protected $listen = [
'Illuminate\Queue\Events\JobProcessing' => [
'App\Listeners\LogSQSJobDetails',
],
];
and then provide the listener to handle it:
namespace App\Listeners;
use Illuminate\Queue\Events\JobProcessing;
class LogSQSJobDetails
{
public function __construct()
{
}
public function handle(JobProcessing $event)
{
$sqsJob = $event->job->getSqsJob();
\Log::info("Processing SQS job {$sqsJob["MessageId"]} with handle {$sqsJob["ReceiptHandle"]}");
}
}
This works great - and means I can also now remove the addition to my custom class from earlier.
I have an artisan command that fires a job called PasswordResetJob, which loops over orgs and calls a method forcePasswordReset on a repository class OrgRepository; that method updates the users table. The whole process works fine.
Now I'm trying to write a Laravel test that mocks the OrgRepository class and asserts that the forcePasswordReset method is called at least once, which should be the case given the conditions I set up in the test. In the test, I call the artisan command to fire the job (I'm using the sync queue for testing); this works fine, as the job runs and the users table gets updated, which I can see directly in my database.
However, the test fails with the error: Mockery\Exception\InvalidCountException : Method forcePasswordReset() from Mockery_2_Repositories_OrgRepository should be called
at least 1 times but called 0 times.
The artisan call within the test is:
Artisan::call('shisiah:implement-org-password-reset');
I have tried making the artisan call both before and after this mock initialization, but I still get the same errors. Here is the mock initialization within the test:
$this->spy(OrgRepository::class, function ($mock) {
$mock->shouldHaveReceived('forcePasswordReset');
});
What am I missing? I have gone through the documentation and searched Google for hours. Please let me know if you need any additional information to help. I'm using Laravel 6.0.
Edit:
I pass the OrgRepository class into the handle method of the job class, like this:
public function handle(OrgRepository $repository)
{
//get orgs
$orgs = Org::where('status', true)->get();
foreach ($orgs as $org){
$repository->forcePasswordReset($org);
}
}
The problem is that you are initializing your spy after your job has already run, which means during the job it will use the real class instead of the spy.
You have to do something like this in your test:
$spy = $this->spy(OrgRepository::class);
// run your job
$spy->shouldHaveReceived('forcePasswordReset');
We tell Laravel to use the spy instead of the repository, run the job and then assert that the method was called.
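Applied to the question's setup, the test would look roughly like this (the test method name is illustrative):
public function test_force_password_reset_is_called_for_active_orgs()
{
    // Swap the real repository for a spy before the job runs
    $spy = $this->spy(OrgRepository::class);
    // Run the command; with the sync queue driver the job executes immediately
    Artisan::call('shisiah:implement-org-password-reset');
    // Assert the interaction after the fact
    $spy->shouldHaveReceived('forcePasswordReset');
}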
Jeffrey Way explains it pretty well in this screencast.
I have a php function which gets called when someone visits POST www.example.com/webhook. However, the external service which I cannot control, sometimes calls this url twice in rapid succession, messing with my logic since the webhook persists stuff in the database which takes a few ms to complete.
In other words, when the second request comes in (which cannot be ignored), the first request has likely not completed yet; however, I need these requests to be processed in the order they came in.
So I've created a little hack in Laravel which should "throttle" execution with 5 seconds in between. It seems to work most of the time, but due to an error in my code or some other oversight, it doesn't work every time.
function myWebhook() {
// Compare the cached timestamp (defaults to 0) with the current time.
while(Cache::get('g2a_webhook_timestamp', 0) + 5 > Carbon::now()->timestamp) {
// Postpone execution.
sleep(1);
}
// Create a cache value (file storage) that stores the current timestamp
Cache::put('g2a_webhook_timestamp', Carbon::now()->timestamp, 1);
// Execute rest of code ...
}
Anyone perhaps got a watertight solution for this issue?
You have essentially designed your own simplified queue system, which is the right approach, but you can make use of the native Laravel queue for a more robust solution to your problem.
Define a job, e.g: ProcessWebhook
When a POST request is received at /webhook, queue the job
The Laravel queue worker will process one job at a time[1] in the order they're received, ensuring that no matter how many requests come in, they'll be processed one by one and in order.
The implementation of this would look something like this:
Create a new Job, e.g: php artisan make:job ProcessWebhook
Move your webhook processing code into the handle method of the job, e.g:
public function __construct($data)
{
$this->data = $data;
}
public function handle()
{
Model::where('x', 'y')->update([
'field' => $this->data->newValue
]);
}
Modify your Webhook controller to dispatch a new job when a POST request is received, e.g:
public function webhook(Request $request)
{
$data = $request->getContent();
ProcessWebhook::dispatch($data);
}
Start your queue worker, php artisan queue:work, which will run in the background processing jobs in the order they arrive, one at a time.
That's it, a maintainable solution to processing webhooks in order, one-by-one. You can read the Queue documentation to find out more about the functionality available, including retrying failed jobs which can be very useful.
[1] Laravel will process one job at a time per worker. You can add more workers to improve queue throughput for other use cases but in this situation you'd just want to use one worker.
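For the retry functionality mentioned above, the built-in artisan commands cover the basics: php artisan queue:failed lists failed jobs, and php artisan queue:retry {id} (or php artisan queue:retry all) pushes them back onto the queue.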
Using Laravel 5.5, I have a listener called JobEventSubscriber that is using a database queue. It has a method called uploadFileToPartner that is triggered whenever a JobFilesUploaded event is fired.
Here is the code of my subscribe method:
public function subscribe($events){
$events->listen(JobSaved::class, JobEventSubscriber::class . '#syncToCrm');
$events->listen(JobFilesUploaded::class, JobEventSubscriber::class . '#uploadFileToPartner');
}
Whenever either of these events fires, the queued listener (running on the database queue) fails with the following error:
ErrorException: call_user_func_array() expects parameter 1 to be a valid callback,
class 'App\Listeners\JobEventSubscriber' does not have a method 'uploadFileToPartner'
in /var/www/html/vendor/laravel/framework/src/Illuminate/Events/CallQueuedListener.php:79
When I change my queue_driver to sync it works. I also went into Tinker and typed:
>>> use App\Listeners\JobEventSubscriber
>>> $eventSubscriber = app(JobEventSubscriber::class);
=> App\Listeners\JobEventSubscriber {#879
+connection: "database",
}
>>> method_exists($eventSubscriber, 'uploadFileToPartner');
=> true
What is wrong here that it cannot find methods that are definitely there?
It may be relevant to mention that I recently updated this app from Laravel 5.4.
Upon reading the docs, I found that if you change your code you need to restart your queue process.
Specifically it says:
Remember, queue workers are long-lived processes and store the booted application state in memory. As a result, they will not notice changes in your code base after they have been started. So, during your deployment process, be sure to restart your queue workers.
I changed the names of the methods but didn't restart the queue worker, so it was receiving events with the new names while still executing the old code. As a result, the method names weren't recognized.
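For anyone hitting the same thing: after deploying code changes, run php artisan queue:restart (or stop and restart php artisan queue:work) so the workers pick up the new code.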