RabbitMQ and PHP - Process multiple queues with one worker (broker)

I have 1000 queues with specific names, and I want to process them all with one broker. Is that possible?
The queue names are stored in a MySQL database, so I would fetch them and run the broker for each one. It should of course run asynchronously and be able to pass a queued item to an idle broker. Is this possible, or do I have to create 1000 files, one broker per queue name?
Update:
Here is a picture of my queues. The queues should run in parallel, not serially. The users are the producers and the worker is the consumer that runs the send_message() method.

I can show you how to do it with the enqueue library. I must warn you: there is no way to consume messages asynchronously in a single process. You can, however, run a few processes that each serve a set of queues, divided into groups by queue importance.
Install the AMQP transport and consumption library:
composer require enqueue/amqp-ext enqueue/enqueue
Create a consumption script. I assume you have already fetched the queue names from the DB into a $queueNames array (a sketch of fetching them from MySQL follows the example). The example binds the same processor to all queues, but you can of course set different ones.
<?php
use Enqueue\AmqpExt\AmqpConnectionFactory;
use Enqueue\Consumption\QueueConsumer;
use Enqueue\Psr\PsrMessage;
use Enqueue\Psr\PsrProcessor;

// here's the list of queue names which you fetched from the DB
$queueNames = ['foo_queue', 'bar_queue', 'baz_queue'];

$factory = new AmqpConnectionFactory('amqp://');
$context = $factory->createContext();

// declare the queues on the RabbitMQ side; you can remove this if you do not need it
foreach ($queueNames as $queueName) {
    $queue = $context->createQueue($queueName);
    $queue->addFlag(AMQP_DURABLE);
    $context->declareQueue($queue);
}

$consumer = new QueueConsumer($context);

foreach ($queueNames as $queueName) {
    $consumer->bind($queueName, function (PsrMessage $psrMessage) use ($queueName) {
        echo 'Consume the message from queue: '.$queueName;

        // your processing logic.

        return PsrProcessor::ACK;
    });
}

$consumer->consume();
More in the enqueue docs.
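For completeness, here is a minimal sketch of how $queueNames could be fetched from MySQL with PDO, since the question keeps the names in a database. The table and column names (queues, name), the DSN, and the group size of 100 are assumptions for illustration only; array_chunk() simply shows one way to split the names into groups so that each consumer process serves its own subset.
<?php
// Hypothetical schema: a `queues` table with a `name` column.
$pdo = new PDO('mysql:host=127.0.0.1;dbname=app', 'user', 'secret');

// Fetch all queue names into a flat array.
$queueNames = $pdo->query('SELECT name FROM queues')->fetchAll(PDO::FETCH_COLUMN);

// Optionally split them into groups (e.g. by importance, or simply in chunks of 100),
// so each group can be served by its own consumer process.
$groups = array_chunk($queueNames, 100);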

Related

PHP apcu not persistent in Laravel queued/dispatched jobs

(Laravel 8, PHP 8)
Hi. I have a bunch of data in the PHP APC cache that I can access across my Laravel application with the apcu_* functions.
I decided I should fire an async job to process some of that data for the user during a session and throw the results in the database.
So I made a middleware that fires (correctly) when the user accesses the page, and (correctly) dispatches a job called "MemoryProvider".
The dispatch command promptly instantiates the MemoryProvider class, running its constructor, and then queues the job for execution.
About a second later, the queue is processed and the handle method in MemoryProvider is run.
I check the content of the PHP cache with "apcu_cache_info()" and "apcu_exists()" in the middleware, in the MemoryProvider constructor, and in its handle method.
The problem:
The PHP cache appears populated throughout my Laravel app.
The PHP cache appears populated in the middleware.
The PHP cache appears populated in the job's constructor.
The PHP cache appears EMPTY in the job's handle method.
Here's the middleware:
public function handle($request, Closure $next)
{
    $a = apcu_cache_info();      // 250,000 entries
    $b = apcu_exists('the:2:0'); // true

    MemoryProvider::dispatch($request);

    return $next($request);
}
Here's the job's (MemoryProvider) constructor:
public function __construct($request)
{
    $this->request = $request->all();

    $a = apcu_cache_info();      // 250,000 entries
    $b = apcu_exists('the:2:0'); // true
}
And here's the job's (MemoryProvider) handle method:
public function handle()
{
    $a = apcu_cache_info();      // 0 entries
    $b = apcu_exists('the:2:0'); // false
}
Question: is this a PHP limitation or a bad Laravel problem? And how can I access the content of my PHP cache in an async class?
p.s. I have apc.enable_cli=1 in php.ini
I found the answer. Apparently, it's a PHP limitation.
According to a good explanation given by gview back in 2017, a CLI process doesn't share state or memory with other CLI processes, so the APC memory space will never be shared this way.
I did find a workaround for my specific case: instead of running an async process to handle the heavy work in the background, I can get the same effect by simply issuing an AJAX request. The request is handled independently by PHP, with full access to the APC cache, and I can populate my database and let the user know when it's all done (or gradually done, as is the case).
I wish I had thought of this sooner.
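As a rough illustration of that workaround (the route, controller, and model names below are hypothetical, not from the original post), a regular HTTP route can do the heavy work synchronously in the web process, where the APCu data populated by the application is visible:
// routes/web.php -- hypothetical route triggered by the AJAX call
Route::post('/process-memory', [MemoryController::class, 'process']);

// app/Http/Controllers/MemoryController.php -- hypothetical controller
class MemoryController extends Controller
{
    public function process(Request $request)
    {
        // Runs in the same PHP-FPM pool as the rest of the app,
        // so the APCu entries written elsewhere are visible here.
        if (! apcu_exists('the:2:0')) {
            return response()->json(['status' => 'cache-miss'], 404);
        }

        $value = apcu_fetch('the:2:0');

        // ... heavy processing, then persist the results (Result is a hypothetical model) ...
        Result::create(['payload' => $value]);

        return response()->json(['status' => 'done']);
    }
}
The browser fires the request in the background (e.g. with fetch()) and the user can be notified when the response arrives.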

Rate limiting PHP function

I have a PHP function which gets called when someone sends a POST request to www.example.com/webhook. However, the external service, which I cannot control, sometimes calls this URL twice in rapid succession, messing with my logic, since the webhook persists data in the database, which takes a few ms to complete.
In other words, when the second request comes in (and it cannot be ignored), the first request has likely not completed yet, but I need the requests to be processed in the order they came in.
So I've created a little hack in Laravel which should "throttle" the execution with 5 seconds in between. It seems to work most of the time, but due to an error in my code or some other oversight, this solution does not work every time.
function myWebhook() {
    // Check the cached timestamp (defaults to 0) and compare it with the current time.
    while (Cache::get('g2a_webhook_timestamp', 0) + 5 > Carbon::now()->timestamp) {
        // Postpone execution.
        sleep(1);
    }

    // Create a cache value (file storage) that stores the current timestamp.
    Cache::put('g2a_webhook_timestamp', Carbon::now()->timestamp, 1);

    // Execute rest of code ...
}
Does anyone perhaps have a watertight solution for this issue?
You have essentially designed your own simplified queue system, which is the right approach, but you can make use of the native Laravel queue for a more robust solution to your problem.
Define a job, e.g: ProcessWebhook
When a POST request is received at /webhook, queue the job
The Laravel queue worker will process one job at a time [1], in the order they're received, ensuring that no matter how many requests come in, they'll be processed one by one and in order.
The implementation of this would look something like this:
Create a new Job, e.g: php artisan make:job ProcessWebhook
Move your webhook processing code into the handle method of the job, e.g:
public function __construct($data)
{
    $this->data = $data;
}

public function handle()
{
    Model::where('x', 'y')->update([
        'field' => $this->data->newValue,
    ]);
}
Modify your Webhook controller to dispatch a new job when a POST request is received, e.g:
public function webhook(Request $request)
{
    $data = $request->getContent();
    ProcessWebhook::dispatch($data);
}
Start your queue worker, php artisan queue:work, which will run in the background processing jobs in the order they arrive, one at a time.
That's it, a maintainable solution to processing webhooks in order, one-by-one. You can read the Queue documentation to find out more about the functionality available, including retrying failed jobs which can be very useful.
[1] Laravel will process one job at a time per worker. You can add more workers to improve queue throughput for other use cases but in this situation you'd just want to use one worker.
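As a small usage note on retries (the value of 3 below is just an example, not something from the original answer): a queued job can declare how many attempts it gets, and failed jobs can be retried from the command line.
class ProcessWebhook implements ShouldQueue
{
    // Attempt the job up to 3 times before it is marked as failed.
    public $tries = 3;

    // ...
}

// Retry everything that ended up in the failed_jobs table:
//   php artisan queue:retry all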

Use server-sent events in combination with doctrine events to detect changes?

In my app I'm using server-sent events and have the following situation (pseudo code):
$response = new StreamedResponse();
$response->setCallback(function () {
    while (true) {
        // 1. $data = fetchData();
        // 2. echo "data: $data";
        // 3. sleep(x);
    }
});
$response->send();
My SSE response class accepts a callback to gather the data (step 1), which actually performs a database query. Now to my problem: since I am trying to avoid polling the database every X seconds, I want to use Doctrine's onFlush event to set a flag indicating that the corresponding entity has actually changed, which would then be checked inside the fetchData callback. Normally I would do this by setting a flag on the current user session, but because the streaming loop constantly writes data, the session cannot be accessed within the callback. Does anybody have an idea how to resolve this problem?
BTW: I'm using Symfony 3.3 and Doctrine 2.5 - thanks for any help!
I know that this question is from a long time ago, but here's a suggestion:
Use shared memory (the PHP shm_*() functions). That way your flag isn't tied to a specific session.
Be sure to lock and unlock around access to the shared memory (I usually use a semaphore).
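A minimal sketch of that idea, assuming the sysvsem/sysvshm extensions are available; the IPC key, segment size, and helper names setChangedFlag()/readChangedFlag() are made up for illustration. The Doctrine onFlush listener would call setChangedFlag(true), and fetchData() inside the streaming loop would check readChangedFlag() before hitting the database.
const SHM_KEY  = 0x53534531; // arbitrary System V IPC key
const FLAG_VAR = 1;          // slot holding the "entity changed" flag

function setChangedFlag($changed)
{
    $sem = sem_get(SHM_KEY); // one semaphore guards the segment
    sem_acquire($sem);

    $shm = shm_attach(SHM_KEY, 1024);
    shm_put_var($shm, FLAG_VAR, (bool) $changed);
    shm_detach($shm);

    sem_release($sem);
}

function readChangedFlag()
{
    $sem = sem_get(SHM_KEY);
    sem_acquire($sem);

    $shm = shm_attach(SHM_KEY, 1024);
    $changed = shm_has_var($shm, FLAG_VAR) ? (bool) shm_get_var($shm, FLAG_VAR) : false;
    shm_detach($shm);

    sem_release($sem);

    return $changed;
}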

Laravel 5 concurrent request and scheduled job

I am using Laravel 5 with PHP 5.5.9-1ubuntu4.14 and MySQL 5.5.47.
I have a multi-tenant environment with each tenant having a separate DB. So for every HTTP request, I set the appropriate DB connection in the BeforeMiddleware like so:
public function handle($request, Closure $next)
{
    ...
    // Set the client DB name and fire it up as the new default DB connection
    Config::set('database.connections.mysql_client.host', $db_host);
    Config::set('database.connections.mysql_client.database', $db_database);
    Config::set('database.connections.mysql_client.username', $db_username);
    Config::set('database.connections.mysql_client.password', $db_password);
    DB::setDefaultConnection('mysql_client');
    ...
}
This works correctly even for concurrent HTTP requests.
But now I want to add scheduled jobs to my application. These jobs will send notifications to users belonging to all tenants, so I again need to connect to the respective tenant databases. But this connection won't be made through an HTTP request. Let's say I have a CommunicationController and a function inside it for sending notifications.
public function sendNotification()
{
    ...
    DB::purge('mysql_client'); // IMP
    // Set the client DB name and fire it up as the new default DB connection
    Config::set('database.connections.mysql_client.host', $db_host);
    Config::set('database.connections.mysql_client.database', $db_database);
    Config::set('database.connections.mysql_client.username', $db_username);
    Config::set('database.connections.mysql_client.password', $db_password);
    DB::setDefaultConnection('mysql_client');
    ...
}
This is where I set the Config values. This function will be executed every minute through Laravel Scheduler.
My questions are:
What will happen when an HTTP request comes in? Will there be any concurrency issues, since Config parameters are being set in both the HTTP request and the scheduled job?
Will the HTTP request connect to some incorrect DB if it's running simultaneously with the scheduled job?
According to the Laravel docs:
Configuration values that are set at run-time are only set for the current request, and will not be carried over to subsequent requests.
Configuration values are set at run-time when you specifically call the Config::set method, so the value is only valid for that current request, and you'll never have any concurrency issues because of that.
The scenario in your second question is impossible as well. Even if your previous task was running a query on the database, the second/subsequent tasks will wait for any database locks to be released. You should just make sure you're not performing queries that hold locks on your DB for a long period, by using multi-inserts, sqldump (if you're performing large queries), etc.
Looking at Illuminate\Config\Repository, which is the implementation behind the Config facade, and specifically at its set method:
public function set($key, $value = null)
{
    if (is_array($key)) {
        foreach ($key as $innerKey => $innerValue) {
            Arr::set($this->items, $innerKey, $innerValue);
        }
    } else {
        Arr::set($this->items, $key, $value);
    }
}
you can be assured that the set is not an I/O write (i.e. to the filesystem) but an in-memory set. So the CLI scheduler command that runs every minute runs in its own context, just like every other HTTP request does.
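To make the scheduled side concrete, here is a rough sketch of iterating over all tenants inside the scheduled command; the Tenant model and its column names are assumptions for illustration, not taken from the question.
// Hypothetical: one Tenant row per client, holding its DB credentials.
foreach (Tenant::all() as $tenant) {
    // Drop the previous tenant's connection so the new config is picked up.
    DB::purge('mysql_client');

    Config::set('database.connections.mysql_client.host', $tenant->db_host);
    Config::set('database.connections.mysql_client.database', $tenant->db_database);
    Config::set('database.connections.mysql_client.username', $tenant->db_username);
    Config::set('database.connections.mysql_client.password', $tenant->db_password);

    DB::setDefaultConnection('mysql_client');
    DB::reconnect('mysql_client');

    // ... send notifications for this tenant ...
}
Since this runs in its own CLI process, none of it affects the config seen by concurrent HTTP requests.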

Breaking out of a Gearman loop

I have a php application that gets requests for part numbers from our server. At that moment, we reach out to a third party API to gather pricing information to make sure we have the latest pricing for that particular request. Sometimes the third party API is slow or it might be down, so we have a database that stores the latest pricing requests for each particular part number that we can use as a fallback. I'd like to run the request to the third party API and the database in parallel using Gearman. Here is the idea:
Receive request
Through gearman, create two jobs:
Request to third party API
MySQL database lookup
Wait in a loop and return the results based on the following conditions:
If the third-party API has completed, return that result immediately
If a set amount of time has elapsed (e.g. 2 seconds) and the third-party API hasn't responded, return the MySQL lookup data
Using Gearman, my thought was to either run the two tasks in the foreground and break out of runTasks() from within the setCompleteCallback() call, or to run them in the background and poll the two tasks in a separate loop using jobStatus().
Unfortunately, I can't get either route to work for me while still getting access to the resulting data. Is there a better way, or are there some existing examples of how someone has made this work?
I think you've described a single blocking problem, namely waiting on the result of a third-party API lookup. From my point of view there are two ways you can handle this: either abort the attempt altogether once you decide you've run out of time, or report back to the client that you ran out of time but continue the lookup anyway, just to update your local cache in case the API happens to respond slower than you would like. I'll describe how I would go about the former, because it is easier.
From the client side:
$request = array(
    'productId' => 5,
);

$client = new GearmanClient();
$client->addServer('127.0.0.1', 4730);

$results = json_decode($client->doNormal('apiPriceLookup', json_encode($request)));

if ($results && property_exists($results, 'success') && $results->success) {
    // Use the fresh data returned by the API
} else {
    // Fall back to the local (MySQL) data
}
This will create a job on the job server with a function name of 'apiPriceLookup' and pass it the workload data containing a product id of 5. It will wait for the results to come back, and check for a success property. If it exists and is true, then the api lookup was successful.
The idea is then to enforce the timeout condition in the worker task, which depends entirely on how you're implementing the API lookup. If you're using cURL (or some wrapper around cURL), you can see the answer on how to detect a timeout here.
From the worker side:
$worker = new GearmanWorker();
$worker->addServer();
$worker->addFunction("apiPriceLookup", "apiPriceLookup");

while ($worker->work());

function apiPriceLookup($job) {
    $payload = json_decode($job->workload());

    try {
        $results = [
            'data' => PerformApiLookupForProductId($payload->productId),
            'success' => true,
        ];
    } catch (Exception $e) {
        $results = ['success' => false];
    }

    return json_encode($results);
}
This just creates a GearmanWorker object and subscribes it to the function apiPriceLookup. The worker will call apiPriceLookup whenever a client submits a task to the job server. That function calls out to another function, PerformApiLookupForProductId, which should be written so that it throws an exception whenever a timeout condition occurs.
I don't think this would be considered using exceptions to control logic flow; timeout conditions generally are (or should be) exceptional events. For instance, Guzzle will throw a GuzzleHttp\Exception\RequestException when it decides to time out.
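For illustration, here is one way PerformApiLookupForProductId could be sketched with plain cURL; the endpoint URL and the 2-second timeout are assumptions, not part of the original answer.
function PerformApiLookupForProductId($productId) {
    // Hypothetical pricing endpoint.
    $ch = curl_init('https://api.example.com/prices/' . urlencode($productId));

    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_TIMEOUT, 2); // give up after 2 seconds

    $body = curl_exec($ch);

    // A timeout (or any other transport failure) becomes an exception,
    // which the worker above turns into ['success' => false].
    if ($body === false) {
        $error = curl_error($ch);
        curl_close($ch);
        throw new Exception('API lookup failed: ' . $error);
    }

    curl_close($ch);

    return json_decode($body);
}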
