I need to trigger a Laravel job within a transaction.
Since the jobs are asynchronous, sometimes they complete before the transaction commits. In that case, the job cannot find the relevant row by its id, because the transaction has not yet been committed and its changes are not visible from outside.
Please suggest a way to solve this other than moving this part outside of the transaction.
DB::beginTransaction();
...
$process = DB::table("trn_users")->insertGetId([
    "first_name" => $first_name,
    "last_name"  => $last_name,
]);

$job = (new SendEmailJob([
    'Table' => 'trn_users',
    'Id'    => $process,
]))->onQueue('email_send_job');

$this->dispatch($job);
...
DB::commit();
For this purpose I've published a package: http://github.com/therezor/laravel-transactional-jobs
The other option is to use events:
DB::beginTransaction();
...
$process = DB::table("trn_users")->insertGetId([
    "first_name" => $first_name,
    "last_name"  => $last_name,
]);

$job = (new SendEmailJob([
    'Table' => 'trn_users',
    'Id'    => $process,
]))->onQueue('email_send_job');

Event::listen(\Illuminate\Database\Events\TransactionCommitted::class, function () use ($job) {
    $this->dispatch($job);
});
...
DB::commit();
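Worth noting: Laravel 8+ later shipped this behaviour as a built-in (it did not exist when the package and event approaches above were written). A dispatched job can be told to wait for the surrounding transaction; a sketch, assuming the same `SendEmailJob` as above:

```php
// Laravel 8+ only: afterCommit() defers the dispatch until the surrounding
// transaction commits, and drops the job if the transaction rolls back.
// (There is also an 'after_commit' => true option per queue connection.)
DB::transaction(function () use ($first_name, $last_name) {
    $process = DB::table('trn_users')->insertGetId([
        'first_name' => $first_name,
        'last_name'  => $last_name,
    ]);

    SendEmailJob::dispatch(['Table' => 'trn_users', 'Id' => $process])
        ->onQueue('email_send_job')
        ->afterCommit();
});
```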
I recently solved this problem in a project.
I simply defined a "buffer" facade singleton with a dispatch() method which, instead of dispatching right away, buffers jobs in memory until the transaction commits.
When the buffer class is constructed, it registers event listeners for the commit and rollback events, and either dispatches or forgets the buffered jobs depending on which event fires.
It also does some clever work around the actual transaction level, working out whether it needs to buffer or dispatch immediately.
Hopefully you get the idea, but let me know if you want me to go into more detail.
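A framework-free sketch of that idea (all names here are illustrative; in the real project the begin/commit/rollback hooks would come from Laravel's transaction events rather than manual calls):

```php
<?php
// Hypothetical "buffer" singleton: jobs are held in memory while a
// transaction is open, flushed on commit, and forgotten on rollback.
class JobBuffer
{
    /** @var callable[] Jobs buffered until commit. */
    private array $buffered = [];

    /** @var bool Whether a transaction is currently open. */
    private bool $inTransaction = false;

    public function begin(): void
    {
        $this->inTransaction = true;
    }

    // Buffer the job if a transaction is open; otherwise run it immediately.
    public function dispatch(callable $job): void
    {
        if ($this->inTransaction) {
            $this->buffered[] = $job;
        } else {
            $job();
        }
    }

    // On commit, dispatch everything that was buffered.
    public function commit(): void
    {
        $this->inTransaction = false;
        foreach ($this->buffered as $job) {
            $job();
        }
        $this->buffered = [];
    }

    // On rollback, forget the buffered jobs.
    public function rollback(): void
    {
        $this->inTransaction = false;
        $this->buffered = [];
    }
}
```

In the Laravel case, each `callable` would wrap a real queue dispatch, and `commit()`/`rollback()` would be wired to the `TransactionCommitted` / `TransactionRolledBack` events.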
I am currently working on a PHP project using the AWS PHP SDK. I have a data import process that uses AWS Batch. The PHP application needs to be able to check AWS for jobs that are not complete before letting the user start a new job.
I am currently using the listJobs() call on the BatchClient like so, following an example given in the documentation:
<?php
$client = new Aws\Batch\BatchClient([
    ...
]);

$jobs = $client->listJobs([
    'jobQueue'  => '...',
    'jobStatus' => 'RUNNING',
]);
However, I would like to get jobs matching the statuses SUBMITTED, PENDING, RUNNABLE, and STARTING, as well as RUNNING.
The docs make it seem like I could submit the following value as a pipe-delimited list, but this syntax caused the request to fail:
<?php
$jobs = $client->listJobs([
    'jobQueue'  => '...',
    'jobStatus' => 'SUBMITTED|PENDING|RUNNABLE|STARTING|RUNNING',
]);
Error:
Error executing request, Exception : Invalid job status SUBMITTED|PENDING|RUNNABLE|STARTING|RUNNING. Valid statuses are [SUBMITTED, PENDING, RUNNABLE, STARTING, RUNNING, SUCCEEDED, FAILED]
Is there some kind of way that I can submit multiple values under the 'jobStatus' input?
If not, is there some other way I can do this utilizing the AWS PHP SDK?
Note:
It looks like there is a 'filters' feature listed under the "Parameter Details" and "Parameter Syntax" sections of the documentation example from before. This seems to suggest that something like this should work:
<?php
$jobs = $client->listJobs([
    'jobQueue' => '...',
    'filters'  => [
        'name'   => 'jobStatus',
        'values' => ['SUBMITTED', 'PENDING', 'RUNNABLE', 'STARTING', 'RUNNING'],
    ],
]);
"You can filter the results by job status with the jobStatus parameter. If you don't specify a status, only RUNNING jobs are returned."
"The job status used to filter jobs in the specified queue. If the filters parameter is specified, the jobStatus parameter is ignored and jobs with any status are returned. If you don't specify a status, only RUNNING jobs are returned."
However this seems to return blank result sets.
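For what it's worth: as far as I can tell, the `filters` parameter only accepts names such as `JOB_NAME`, `JOB_DEFINITION`, `BEFORE_CREATED_AT`, and `AFTER_CREATED_AT` (not `jobStatus`), and it expects a *list* of filter maps, which would explain the empty results. One workaround is simply to call listJobs() once per status and merge the results. The helper below is hypothetical (not part of the SDK), taking a listJobs-style callable so the merging logic stands alone:

```php
<?php
// Hypothetical helper: invokes a listJobs-style callable once per status
// and merges the 'jobSummaryList' entries into one array.
function listJobsForStatuses(callable $listJobs, array $statuses): array
{
    $summaries = [];
    foreach ($statuses as $status) {
        $result = $listJobs(['jobStatus' => $status]);
        $summaries = array_merge($summaries, $result['jobSummaryList']);
    }
    return $summaries;
}
```

With the real client this could be wired up as `$listJobs = fn (array $args) => $client->listJobs($args + ['jobQueue' => '...'])->toArray();` (pagination via `nextToken` is left out of the sketch).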
I'm working on a process where I have a Queue, and I start with a known unit of work. As I process the unit of work, it will result in zero-or-more (unknown) units of work that gets added to the Queue. I continue to process the queue until there's no more work to perform.
I'm working on a proof-of-concept using Guzzle where I accept a first URL to seed the queue, then process the body of the response which may result in more URLs that need to be processed. My goal is to add them to the queue and have Guzzle continue processing them until there's nothing left in the queue.
In other cases, I can define a variable as the queue, and pass it by-reference into a function so that it gets updated with new work. But in the case of Guzzle Async Pools (which I think is the most efficient way to handle this), there doesn't seem to be a clear way to update the queue in-process and have the Pool execute the requests.
Does Guzzle provide a built-in approach for updating the list of Pool requests from inside a fulfilled Promise callback?
use ArrayIterator;
use GuzzleHttp\Promise\EachPromise;
use GuzzleHttp\TransferStats;
use Psr\Http\Message\ResponseInterface;

// Re-usable callback which prints the URL being requested
function onStats(TransferStats $stats) {
    echo sprintf(
        '%s (%s)' . PHP_EOL,
        $stats->getEffectiveUri(),
        $stats->getTransferTime()
    );
}

// The queue of work to be performed
$requests = new ArrayIterator([
    $client->get('http://httpbin.org/anything', [
        'on_stats' => 'onStats',
    ]),
]);

// Process the queue, which results in more work to be performed
$p = (new EachPromise($requests, [
    'concurrency' => 50,
    'fulfilled' => function (ResponseInterface $response) use ($client, &$requests) {
        $hash = bin2hex(random_bytes(10));
        $requests[] = $client->get(sprintf('http://httpbin.org/anything/%s', $hash), [
            'on_stats' => 'onStats',
        ]);
    },
    'rejected' => function ($reason) {
        echo $reason . PHP_EOL;
    },
]))->promise();

// Wait for everything to finish
$p->wait(true);
My question appears to be similar to Incrementally add requests to a Guzzle 5.0 Pool (Rolling Requests), but is different in that these refer to different major versions of Guzzle.
After posting this, I was able to do more searching and found some more SO threads and GitHub issues for Guzzle. I found this library, which appears to address the problem:
https://github.com/alexeyshockov/guzzle-dynamic-pool
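As an aside, the reason the ArrayIterator approach in the snippet above can work at all is that EachPromise pulls requests from the iterator lazily, and ArrayIterator lets you append while iteration is in progress. A framework-free illustration of that append-while-iterating behaviour:

```php
<?php
// ArrayIterator allows appending while iterating, so a consumer that pulls
// items lazily (as Guzzle's EachPromise does) sees work added mid-flight.
$queue = new ArrayIterator(['seed']);
$seen = [];
foreach ($queue as $item) {
    $seen[] = $item;
    if ($item === 'seed') {
        // Processing one item enqueues more work, as in the crawler above.
        $queue[] = 'child-1';
        $queue[] = 'child-2';
    }
}
// $seen now contains the seed followed by both children.
```

The caveat is that once the iterator has been exhausted (EachPromise has seen `valid()` return false), later appends are not picked up, which is what the linked library addresses.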
I'm trying to learn MongoDB transactions using the php-mongodb library v1.5, but I've run into a problem.
I've tried to start, commit, and abort a transaction using the provided methods, but abortTransaction is not working for me:
$session = self::$instance->startSession();
$this->db = self::$instance->{"mydb"};

$session->startTransaction();

$this->db->users->deleteOne([
    '_id' => new MongoDB\BSON\ObjectId('5c88e197df815495df201a38')
]);

$session->abortTransaction();
$session->endSession();
The transaction is always committed, even after the abort action!
What am I missing here? Please save my day :(
the transaction is always committed even after the abort action
This is because the delete operation doesn't use the session object that you have instantiated. You need to pass the session in the $options parameter of MongoDB\Collection::deleteOne(); otherwise it will execute outside of the transaction. For example:
$session->startTransaction();

$this->db->users->deleteOne(
    ['_id' => new MongoDB\BSON\ObjectId('5c88e197df815495df201a38')],
    ['session' => $session]
);
See also MongoDB Transactions for more information.
I am trying to create a job in Google BigQuery that returns a JobId instantly while the job continues to run, without making the user wait.
From reading the documentation, runQuery suggests this should be possible. maxRetries has been set, as well as a very small timeoutMs.
The idea being that the user will get a JobId and an alert notifying them the Job is being processed and they will receive a further notification when it's complete.
Installed via Composer Version: google/cloud: ^0.53.0
Sample code included below.
runQuery
Runs a BigQuery SQL query in a synchronous fashion.
Unless $options.maxRetries is specified, this method will block until the query completes, at which time the result set will be returned.
http://googlecloudplatform.github.io/google-cloud-php/#/docs/google-cloud/v0.53.0/bigquery/bigqueryclient?method=runQuery
$client = new BigQueryClient([
    'projectId' => 'XXXX',
]);

$dataset = $client->dataset('XXXX');
if (!$dataset->exists()) {
    throw new \Exception('Dataset does not exist');
}

$options = [
    'timeoutMs'  => 1000,
    'maxRetries' => 2,
];

$queryJob = $client->queryConfig($sql, $options); // tried options here
$queryResult = $client->runQuery($queryJob, $options); // and tried options here, together and individually

echo $queryResult->job()->id();
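One way to get a job id back immediately is to use the client's asynchronous entry point rather than runQuery(). A hedged sketch follows; the method name depends on the installed google-cloud-php version, so check the API docs for your release:

```php
// Sketch only: older releases (around the version quoted above) expose
// runQueryAsJob(), while newer releases expose query() + startQuery().
// Both insert the job and return without waiting for the query to finish.

// Older API:
$job = $client->runQueryAsJob($sql);

// Newer API equivalent:
// $job = $client->startQuery($client->query($sql));

echo $job->id(); // available right away, before the query completes

// Later (e.g. from a poller or a follow-up request):
$job->reload();
if ($job->isComplete()) {
    // notify the user, fetch results, etc.
}
```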
Hi, I am new to GAE task queues. I created a queue named anchorextractor, and it shows up in the queues list.
Then I created a task with the URL '/worker/extractor/1'. After creating it, when I echo the name of the task, it shows a name (task3). But when I check the queue on the Task Queue page, the task count under this queue is 0, even though 3 tasks were created. I have tried every possibility I can think of. Please, can anyone help? (I am updating the question with code for reference; the code follows:)
require_once 'google/appengine/api/taskqueue/PushTask.php';
require_once 'google/appengine/api/taskqueue/PushQueue.php';

use google\appengine\api\taskqueue\PushTask;
use google\appengine\api\taskqueue\PushQueue;

$queue = new PushQueue('tagextractor');
$task = new PushTask('/worker/anchorextractor/1', ['content_id' => 'aa', 'content_type' => 'aa']);
echo "Task Name = " . ($task_name = $task->add());
$queue->addTasks([$task]);
Try this syntax instead; it will log the new task's name to the App Engine logs as proof that the task was created:
require_once 'google/appengine/api/taskqueue/PushTask.php';

use \google\appengine\api\taskqueue\PushTask;

$task_name = (new PushTask('/worker/anchorextractor/1', array(
    'content_id'   => 'aa',
    'content_type' => 'aa',
)))->add("tagextractor");

syslog(LOG_INFO, "new task=" . $task_name);
Tasks do get processed very quickly, so it is sometimes difficult to "see" them in the queue. You can, however, go to the queue in the admin console and pause it; the tasks will then build up until you either run them manually or resume the queue.