We have a large database in which we track user activity. A service exposes this activity data, and a request to it consists of three stages:
Create the data request. POST /createList
Check whether the data has been processed. GET /checkList/listId
Retrieve the data once processed. GET /getList/listId
Processing usually takes about 1 minute, but it can take up to 30 minutes. As soon as the processing finishes, I need to return the result to my own user. Although a "do while" loop seems to work for now, I don't believe it is a very sound solution. How can I do this with a Laravel job?
Do I need to trigger the queue continuously with a cron job?
my code:
$listService = new ListService();
$activityList = null;
// Request generation of the list; returns a listId
$res = $listService->createList('USER_ACTIVITY_LIST');
$listId = $res->getListId();
do {
// Check the status of the list.
$checkList = $listService->checkList($listId);
// If completed, create an ActivityList record
if ($checkList->getStatus() === 'complete') {
$data = $listService->getList($listId)->getData();
$activityList = ActivityList::create($data);
} else {
sleep(10);
}
} while ($checkList->getStatus() === 'pending'); // not completed yet: try again
return $activityList;
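A common approach is a queued job that polls and re-dispatches itself with a delay until the list is ready. Below is a minimal sketch; the App\Services\ListService and App\Models\ActivityList namespaces, the job name, and the 30-second delay are assumptions for illustration, not Laravel requirements:
<?php
namespace App\Jobs;

use App\Models\ActivityList;
use App\Services\ListService;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;

class CheckActivityList implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable;

    public function __construct(private string $listId) {}

    public function handle(ListService $listService): void
    {
        $checkList = $listService->checkList($this->listId);

        if ($checkList->getStatus() === 'complete') {
            $data = $listService->getList($this->listId)->getData();
            ActivityList::create($data);
            // notify your user here (event, notification, broadcast, ...)
        } elseif ($checkList->getStatus() === 'pending') {
            // not ready yet: re-queue this job and check again in 30 seconds
            self::dispatch($this->listId)->delay(now()->addSeconds(30));
        }
    }
}
Your controller then only dispatches the first check (CheckActivityList::dispatch($res->getListId());) and returns immediately. You do not need a cron per poll: a single long-running php artisan queue:work process (kept alive by Supervisor, for example) picks up the delayed jobs as they come due.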
OK, so let's start slow...
I have a pthreads script running and working for me, tested and working 100% of the time when I run it manually from the command line via SSH. The script is as follows, with the main thread process code adjusted to simulate random process run times.
class ProcessingPool extends Worker {
public function run(){}
}
class LongRunningProcess extends Threaded implements Collectable {
public function __construct($id,$data) {
$this->id = $id;
$this->data = $data;
}
public function run() {
$data = $this->data;
$this->garbage = true;
$this->result = 'START TIME:'.time().PHP_EOL;
// Here is our actual logic which will be handled within a single thread (obviously simulated here instead of the real functionality)
sleep(rand(1,100));
$this->result .= 'ID:'.$this->id.' RESULT: '.print_r($this->data,true).PHP_EOL;
$this->result .= 'END TIME:'.time().PHP_EOL;
$this->finished = time();
}
public function __destruct () {
$Finished = 'EXITED WITHOUT FINISHING';
if($this->finished > 0) {
$Finished = 'FINISHED';
}
if ($this->id === null) {
print_r("nullified thread $Finished!");
} else {
print_r("Thread w/ ID {$this->id} $Finished!");
}
}
public function isGarbage() : bool { return $this->garbage; }
public function getData() {
return $this->data;
}
public function getResult() {
return $this->result;
}
protected $id;
protected $data;
protected $result;
private $garbage = false;
private $finished = 0;
}
$StartTime = microtime(true); // needed below for the minimum-run-time check
$LoopDelay = 500000; // microseconds
$MinimumRunTime = 300; // seconds (5 minutes)
// So we setup our pthreads pool which will hold our collection of threads
$pool = new Pool(4, ProcessingPool::class, []);
$Count = 0;
$StillCollecting = true;
$CountCollection = 0;
do {
// Grab all items from the conversion_queue which have not been processed
$result = $DB->prepare("SELECT * FROM `processing_queue` WHERE `processed` = 0 ORDER BY `queue_id` ASC");
$result->execute();
$rows = $result->fetchAll(PDO::FETCH_ASSOC);
if(!empty($rows)) {
// for each row returned from the queue, mark it as processing and hand it to the pool
foreach($rows as $id => $row) {
$update = $DB->prepare("UPDATE `processing_queue` SET `processed` = 1 WHERE `queue_id` = ?");
$update->execute([$row['queue_id']]);
$pool->submit(new LongRunningProcess($row['queue_id'],$row));
$Count++;
}
} else {
// 0 Rows To Add To Pool From The Queue, Do Nothing...
}
// Before we allow the loop to move on to the next part, lets try and collect anything that finished
$pool->collect(function ($Processed) use(&$CountCollection) {
global $DB;
$data = $Processed->getData();
$result = $Processed->getResult();
$update = $DB->prepare("UPDATE `processing_queue` SET `processed` = 2 WHERE `queue_id` = ?");
$update->execute([$data['queue_id']]);
$CountCollection++;
return $Processed->isGarbage();
});
print_r('Collecting Loop...'.$CountCollection.'/'.$Count);
// If we have collected the same total amount as we have processed then we can consider ourselves done collecting everything that has been added to the database during the time this script started and was running
if($CountCollection == $Count) {
$StillCollecting = false;
print_r('Done Collecting Everything...');
}
// If we have not reached the full MinimumRunTime that this cron should run for, then lets continue to loop
$EndTime = microtime(true);
$TimeElapsed = ($EndTime - $StartTime);
if($TimeElapsed < $MinimumRunTime) { // same comparison as before; the $LoopDelay divisor cancels out on both sides
$StillCollecting = true;
print_r('Ended Too Early, Lets Force Another Loop...');
}
usleep($LoopDelay);
} while($StillCollecting);
$pool->shutdown();
So while the above script runs fine from the command line (it has been reduced to a basic example, with the detailed processing code simulated), the command below gives a different result when run from a cron set up to fire every 5 minutes...
/opt/php7zts/bin/php -q /home/account/cron-entry.php file=every-5-minutes/processing-queue.php
The above script, when run with the above command line call, loops over and over for its run time, collecting any new items from the DB queue and inserting them into the pool, which allows 4 processes at a time to run and finish; finished items are then collected and the queue is updated before another loop happens, pulling any new items from the DB. The script runs until all processes in the queue have been processed and collected during the execution of the script. If the script has not yet run for the full expected 5 minutes, the loop is forced to continue checking the queue; if it has run past the 5 minute mark, it lets any current threads finish and be collected before closing. Note that the full code also includes a code-based "flock" guard (sketched below) which makes future runs of this cron idle and exit, or start once the lock has been lifted, ensuring that the queue and threads are not bumping into each other. Again, ALL OF THIS WORKS FROM THE COMMAND LINE VIA SSH.
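For reference, that kind of code-based "flock" guard is usually just a few lines. A minimal sketch (the lock-file path is made up):
<?php
$lock = fopen('/tmp/processing-queue.lock', 'c');
if ($lock === false || !flock($lock, LOCK_EX | LOCK_NB)) {
    exit(0); // another instance of the cron already holds the lock
}
// ... the queue/collect loop shown above runs here ...
flock($lock, LOCK_UN);
fclose($lock);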
Once I take the above command and put it into a cron that runs every 5 minutes, essentially giving me a never-ending loop while keeping memory in check, I get a different result...
That result is described as follows... The script starts, checks the flock, and continues if the lock is not there; it creates the lock and runs the above script. The items are taken from the queue in the DB and inserted into the pool, and the pool fires off 4 threads at a time as expected. But the unexpected result is that the run() method does not seem to be executed; instead the __destruct function runs, and a "Thread w/ ID 2 FINISHED!" type of message is returned to the output. This in turn means that the collection side of things does not collect anything, and the initiating script (the cron script itself, /home/account/cron-entry.php file=every-5-minutes/processing-queue.php) finishes after everything has been put into the pool and destructed. That prematurely "finishes" the cron job, since there is nothing else to do but loop and pull nothing new from the queue, because items are considered "being processed" once processed == 1 in the queue.
The question then finally becomes... How do I make the cron's script aware of the threads that were spawned, and run() them, without closing the pool out before they can do anything?
(note... if you copy / paste the provided script, note that I did not test it after removing the detailed logic, so it may need some simple fixes... please do not nit-pick said code, as the key here is that pthreads works if the script is executed FROM the Command Line, but fails to properly run when the script is executed FROM a CRON. If you plan on commenting with non-constructive criticism, please go use your fingers to do something else!)
Joe Watkins! I Need Your Brilliance! Thanks In Advance!
After all of that, it seems that the issue was with regard to user permissions. I was setting this specific cron up inside of cPanel, and when running the command manually I was logged in as root.
After setting this command up in root's crontab, I was able to get it to successfully run the threads from the pool. The only issue I have now is that some threads never finish, and sometimes I am unable to close the pool. But this is a different issue, so I will open another question elsewhere.
For those running into this issue, make sure you know who the owner of the cron is, as it matters with PHP's pthreads.
I'm trying to get this principle working:
a producer that sends one message and waits for an ack, which contains some result (the JSON result of an operation, actually)
a consumer that checks all pending messages every 5 seconds, handles all of them in one batch, acknowledges all of them in one batch, then waits 5 seconds again (infinite loop).
Here are the 30 lines of my stompproducer.php:
<?php
function msg($txt)
{
echo date('H:i:s > ').$txt."\n";
}
$queue = '/aaaa';
$msg = 'bar';
if (count($argv)<3) {
echo $argv[0]." [msg] [nb to send]\n";
exit(1);
}
$msg = (string)$argv[1];
$to_send = intval($argv[2]);
try {
$stomp = new Stomp('tcp://localhost:61613');
while ($to_send--) { // post-decrement, so we send exactly the requested count
msg("Sending...");
$result = $stomp->send(
$queue,
$msg." ". date("Y-m-d H:i:s"),
array('receipt' => 'message-123')
);
echo 'result='.var_export($result,true)."\n";
msg("Done.");
}
} catch(StompException $e) {
die('Connection failed: ' . $e->getMessage());
}
Here are the 30 lines of my stompconsumer.php:
<?php
$queue = '/aaaa';
$_waitTimer=5000000; // 5 seconds, in microseconds
$_timeLastAsk = microtime(true);
function msg($txt)
{
echo date('H:i:s > ').$txt."\n";
}
try {
$stomp = new Stomp('tcp://localhost:61613');
$stomp->subscribe($queue, array('activemq.prefetchSize' => 40));
$stomp->setReadTimeout(0, 10000);
while (true) {
$frames_read=array();
while ($stomp->hasFrame()) {
$frame = $stomp->readFrame();
if ($frame != null) {
array_push($frames_read, $frame);
}
if (count($frames_read)==40) {
break;
}
}
msg("Nombre de frames lues : ".count($frames_read));
msg("Pause...");
$e=$_waitTimer-((microtime(true)-$_timeLastAsk)*1000000); // convert elapsed seconds to microseconds before comparing
if ($e>0) {
usleep($e);
}
if (count($frames_read)>0) {
msg("Ack now...");
foreach ($frames_read as $frame) {
$stomp->ack($frame);
}
}
$_timeLastAsk = microtime(true);
}
} catch(StompException $e) {
die('Connection failed: ' . $e->getMessage());
}
I can't manage to make a synchronous producer, i.e. a producer that waits for the consumer's ack. If you run the samples above, you'll see that the producer instantly sends all its messages, then quits, with $stomp->send() returning true ("ok") every time.
I still haven't found good examples, nor good documentation with a simple blocking sample.
What shall I do to make my producer block until the consumer sends its ack?
NB: I've read all the documentation here and the stomp PHP questions on Stack Overflow here and here.
First thing to pop to my mind: take a look at this stomp plugin:
http://activemq.apache.org/message-redelivery-and-dlq-handling.html
Another workaround I can think of is:
On producer side:
1. Change your producer to send persistent messages
On your consumer side:
Use a timer.
1. Read messages/frames until empty or the max cap is reached.
2. Create a cURL request and empty the packed list of messages.
3. Sleep your server for 5 secs.
You definitely need to test this further, but it should work. Once the process wakes up, you should be able to read all queued messages.
Things to consider:
- Persistent messages will need an expiration time.
- You'll need ACKs on your consumer side to make sure the status of already-handled messages gets updated. Use ack=client so that one ACK covers all messages received so far.
- It's easier if you don't have to wait for your cURL to respond.
- Out of the box, it's not supported to send an ACK from the consumer (server side) back to the producer.
Best of luck
From the question it sounds like you are looking for a request/response type messaging pattern. This is something you must implement yourself: the STOMP ack you reference only acks the message to the message broker on behalf of the consumer; the producer has no knowledge of it. Request/response involves setting a reply-to address on the outbound message and then waiting to receive a response on that address before sending the next message. There are a great many articles out there that document this sort of thing, such as this one.
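To make the reply-to pattern concrete, here is a minimal producer-side sketch using the PHP Stomp extension; it assumes an ActiveMQ broker with temporary-queue support, and the destination names are only examples:
<?php
// send the request, then block until a reply arrives (or the timeout hits)
$stomp = new Stomp('tcp://localhost:61613');
$replyTo = '/temp-queue/response'; // hypothetical temp destination
$stomp->send('/aaaa', 'request payload', array('reply-to' => $replyTo));
$stomp->subscribe($replyTo);
$stomp->setReadTimeout(30, 0); // give the consumer up to 30 s to answer
$frame = $stomp->readFrame(); // blocks until a reply frame or the timeout
if ($frame !== false) {
    echo 'Consumer replied: ' . $frame->body . "\n";
}
The consumer reads the reply-to header from each frame it processes and sends its JSON result to that destination, which is what unblocks the producer.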
Or if you only need to know if the broker has received the message from the client and persisted it then you can use STOMP's built in receipt mechanism to have the broker send you a receipt indicating that it has processed your sent message. This however does not guarantee that a consumer has processed the message yet.
I just remembered, you can try the reactphp/stomp library.
It's an event-driven library that might help you. Especially take a look at the core function addPeriodicTimer:
https://github.com/reactphp/stomp
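For example, the consumer's 5-second polling loop could be expressed with a periodic timer. A sketch assuming react/event-loop is installed (the STOMP reading itself is elided):
<?php
require 'vendor/autoload.php';

$loop = React\EventLoop\Factory::create();
$loop->addPeriodicTimer(5.0, function () {
    // read and ack any pending frames here, as in stompconsumer.php above
    echo date('H:i:s > ') . "polling...\n";
});
$loop->run();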
Cheers
I'm writing a function in PHP that loops through an array, and then performs an asynchronous call on it (using a Promise).
The problem is that the only way I can make this loop happen is by letting a function call itself asynchronously. I run into the 100-nested-functions problem really quickly, and I would basically like to change it so it does not recurse.
function myloop($data, $index = 0) {
if (!isset($data[$index])) {
return;
}
$currentItem = $data[$index];
$currentItem()->then(function() use ($data, $index) {
myloop($data, $index + 1);
});
}
For those that want to answer this from a practical perspective (e.g.: rewrite to not be asynchronous), I'm experimenting with functional and asynchronous patterns and I want to know if it is possible to do this with PHP.
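It is possible. One way to avoid the self-call entirely is to build the promise chain iteratively. A minimal sketch, assuming react/promise-style promises where each $item() returns a promise (whether this avoids deep stacks still depends on how your promise library resolves chains):
<?php
require 'vendor/autoload.php';

use function React\Promise\resolve;

function myloop(array $data)
{
    $chain = resolve(null);
    foreach ($data as $item) {
        // each then() runs the next item only after the previous one resolved
        $chain = $chain->then(fn () => $item());
    }
    return $chain; // resolves once every item has run
}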
I've written a possible solution in pseudo-code. The idea is to limit the number of items running asynchronously at once by using a database queue. myloop() is no longer directly recursive, instead being called whenever an item finishes running. In the sample data, I've limited it to 4 concurrent items (an arbitrary value).
Basically, it's still recursively calling itself, but in a roundabout way, avoiding the situation you mentioned of many nested calls.
Execution Flow:
myloop() ---> queue
^ v
| |
'<-processor <-'
<?php
//----------
// database
//table: config
//columns: setting, value
//items: ACTIVE_COUNT, 0
// ITEM_CONCURRENT_MAX, 4
//table: queue
//columns: id, item, data, index, pid, status(waiting, running, finished), locked
// --- end pseudo-schema ---
// ---------------
// itemloop.php
// ---------------
//sends an item and associated data produced by myloop() into a database queue,
//to be processed (run asynchronous, but limited to how many can run at once)
function send_item_to_processor($item, $data, $index, $counter) {
//INSERT $item to a queue table, along with $data, $index (if needed), $counter, locked = 0
//status == waiting
}
//original code, slightly modified to remove direct recursion and implement
//the queue.
function myloop($data, $index = 0, $counter = 0) {
if (!isset($data[$index])) {
return;
}
$currentItem = $data[$index];
$currentItem()->then(function() use ($currentItem, $data, $index, $counter) {
//instead of directly calling `myloop()`, push item to
//database and let the processor worry about it. see below.
//*if you wanted currentItem to call a specific function after finishing,
//you could create an array of numbered functions and pass the function
//number along with the other data.*
send_item_to_processor($currentItem, $data, $index + 1, $counter + 1);
});
}
// ---------------
// processor.php
// ---------------
//handles the actual running of items. looks for a "waiting" item and
//executes it, updating various statuses along the way.
//*called from `process_queue()`*
function process_new_items() {
//select ACTIVE_COUNT, ITEM_CONCURRENT_MAX
//ITEM_COUNT = total records in the queue. this is done to
//short-circuit the execution of `process_queue()` whenever possible
//(which is called frequently).
if ($ITEM_COUNT == 0 || $ACTIVE_COUNT >= $ITEM_CONCURRENT_MAX)
return FALSE;
//select item from queue where status = waiting AND locked = 0 limit 1;
//update item set status = running, pid = programPID
//update config ACTIVE_COUNT = +1
//**** asynchronous run item here ****//
return TRUE;
}
//main processor for the queue. first processes new/waiting items
//if it can (if too many items aren't already running), then processes
//dead/completed items. Upon an item.status == finished, `myloop()` is
//called from this function. Still technically a recursive call, but
//avoids out-of-control situations due to the asynchronous nature.
//this function could be called on a timer of some sort, such as a cronjob
function process_queue() {
if (!process_new_items())
return FALSE; //too many instances running, no need to process
//check queue table for items with status == finished or is_pid_valid(pid) == FALSE
$numComplete = count($rows);
//update all rows to locked = 1, in case process_queue() gets called again before
//we finish, resulting in an item potentially being processed as dead twice.
foreach($rows as $item) {
if (!is_pid_valid($pid) || $status == 'finished') {
//and here is the call back to myloop(), avoiding a strictly recursive
//function call.
//*Not sure what to do with `$item` here -- might be passed back to `myloop()`?.*
//delete item(s) from queue
myloop(data, index, counter - 1);
//decrease config.ACTIVE_COUNT by $numComplete
}
}
}
Very simply, I have a program that needs to perform a large process (anywhere from 5 seconds to several minutes) and I don't want to make my page wait for the process to finish to load.
I understand that I need to run this gearman job as a background process but I'm struggling to identify the proper solution to get real-time status updates as to when the worker actually finishes the process. I've used the following code snippet from the PHP examples:
$done = false;
do {
sleep(3);
$stat = $gmclient->jobStatus($job_handle);
if (!$stat[0]) // the job is no longer known to the server, so it is done
$done = true;
echo "Running: " . ($stat[1] ? "true" : "false") . ", numerator: " . $stat[2] . ", denominator: " . $stat[3] . "\n";
} while(!$done);
echo "done!\n";
and this works; however, it appears that it just returns data to the client when the worker has finished telling the job what to do. Instead, I want to know when the literal process run by the job has finished.
My real-life example:
Pull several data feeds from an API (some feeds take longer than others)
Load a couple of the ones that always load fast, place a "Waiting/Loading" animation on the section that was sent off to a worker queue
When the work is done and the results have been completely retrieved, replace the animation with the results
This is a bit late, but I stumbled across this question looking for the same answer. I was able to get a solution together, so maybe it will help someone else.
For starters, refer to the documentation on GearmanClient::jobStatus. This will be called from the client, and the function accepts a single argument: $job_handle. You retrieve this handle when you dispatch the request:
$client = new GearmanClient( );
$client->addServer( '127.0.0.1', 4730 );
$handle = $client->doBackground( 'serviceRequest', $data );
Later on, you can retrieve the status by calling the jobStatus function on the same $client object:
$status = $client->jobStatus( $handle );
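From there, a small polling sketch (the sleep interval is arbitrary):
$done = false;
do {
    sleep( 1 );
    $status = $client->jobStatus( $handle );
    if ( !$status[0] ) { // the server no longer knows the job: it finished
        $done = true;
    } else {
        echo "progress: {$status[2]}/{$status[3]}\n";
    }
} while ( !$done );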
This is only meaningful, though, if you actually change the status from within your worker with the sendStatus method:
$worker = new GearmanWorker( );
$worker->addFunction( 'serviceRequest', function( $job ) {
$max = 10;
// Set initial status - numerator / denominator
$job->sendStatus( 0, $max );
for( $i = 1; $i <= $max; $i++ ) {
sleep( 2 ); // Simulate a long running task
$job->sendStatus( $i, $max );
}
return GEARMAN_SUCCESS;
} );
while( $worker->work( ) ) {
$worker->wait( );
}
In versions of Gearman prior to 0.5, you would use the GearmanJob::status method to set the status of a job. Versions 0.6 to current (1.1) use the methods above.
See also this question: Problem With Gearman Job Status
I searched for this, but most of the related questions are about APIs for other services.
I'm building an API that allows game developers to send and retrieve user info from my database.
I was finally able to put together the API, but now I need to call the API.
First, when the game initiates, it sends us the game developer's key, their developer ID, and the game ID.
//Game loads, get developer key, send token and current high score
// == [ FIRST FILTER - FILTER GET REQUEST ] == //
$_GET = array_map('_INPUT', $_GET); // filter all input
// ====================================== //
// ============[ ACTION MENU ]=========== //
// ====================================== //
if(!empty($_GET['action']) && !empty($_GET['user']) && !empty($_GET['key']) && !empty($_GET['email']) && !empty($_GET['password'])): // if key data exists
switch($_GET['action']):
//authenticate game developer, return the high score
case 'authenticate':
$db = new PDO('mysql:host=localhost;dbname=xxxx', 'xxxx', 'xxxx');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_WARNING);
$st = $db->prepare("SELECT * FROM `game_developers_games` WHERE `id` = :gameid AND `developer_id`=:user AND `key`= :key AND `developer_active` = '1'"); // need to filter for next auction
$st->bindParam(':user', $_GET['user']); // filter
$st->bindParam(':key', $_GET['key']); // filter
$st->execute();
$r = $st->fetch(PDO::FETCH_ASSOC);
if($st->rowCount() == 0):
$return = array('DBA_id'=>'0000');
echo json_encode($return);
else:
$token = initToken($_GET['key'],$_GET['user']);
if($token == $r['API_Token']):
$return = array(
'DBA_id'=>$token,
'DBA_servertime'=>time(),
'DBA_highscore'=>$r['score'],
);
echo json_encode($return);
endif;
endif;
break;
Here's the script the game developer will have to add to their game to fetch the data when the game loads. I found this on another Stack Overflow question, but it's not working.
$.getJSON("https://www.gamerholic.com/gamerholic_api/db_api_v1.php? user=1&key=6054abe3517a4da6db255e7fa27f4ba001083311&gameid=1&action=authenticate", function () {
alert("aaa");
});
Try adding &callback=? to the end of the URL you are constructing. This will enable JSONP, which works around the cross-origin restriction.
$.getJSON("https://www.gamerholic.com/gamerholic_api/db_api_v1.php?user=1&key=6054abe3517a4da6db255e7fa27f4ba001083311&gameid=1&action=authenticate&callback=?", function () {
alert("aaa");
});
As per the cross-domain origin policy, you cannot access a cross-domain URL using jQuery's getJSON function.
A callback is required to manage cross-domain requests using JSON, and it needs to be handled on the server as well as on the client end.
Also make sure to check the response using Firebug or a similar tool, because as of now it is returning response code 200.
Here are two threads which can guide you the right way:
Jquery getJSON cross domain problems
http://www.fbloggs.com/2010/07/09/how-to-access-cross-domain-data-with-ajax-using-jsonp-jquery-and-php/
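On the PHP side, JSONP support can be as small as wrapping the JSON output when a callback parameter is present. A minimal sketch ("callback" matches jQuery's default parameter name, and the regex is just a sanity check on the function name):
<?php
$payload = json_encode($return);
if (!empty($_GET['callback']) && preg_match('/^[a-zA-Z_$][\w$]*$/', $_GET['callback'])) {
    // JSONP: wrap the JSON in the requested callback function
    header('Content-Type: application/javascript');
    echo $_GET['callback'] . '(' . $payload . ');';
} else {
    // plain JSON for same-origin callers
    header('Content-Type: application/json');
    echo $payload;
}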