pthreads stopping already running thread once a condition has been met - php

I'm still relatively new to PHP and trying to use pthreads to solve an issue. I have 20 threads running processes that end at varying times; most finish in under 10 seconds or so. I don't need all 20 to succeed, just 10. Once I get to 10, I would like to kill the remaining threads, or at least move on to the next step.
I have tried calling set_time_limit with about 20 seconds inside each of the threads, but they ignore it and keep running. I loop over the jobs calling join() because I didn't want the rest of the program to run early, but that means I'm stuck until the slowest thread has finished. While pthreads has cut the total time from around a minute to about 30 seconds, I could shave off even more, since the first 10 finish in about 3 seconds.
Thanks for any help and here is my code:
$count = 0;
foreach ($array as $i) {
    $imgName = $this->smsId."_$count.jpg";
    $name = "LocalCDN/".$imgName;
    $stack[] = new AsyncImageModify($i['largePic'], $name);
    $count++;
}

// Run the threads
foreach ($stack as $t) {
    $t->start();
}

// Check if the threads have finished; push the coordinates into an array
foreach ($stack as $t) {
    if ($t->join()) {
        array_push($this->imgArray, $t->data);
    }
}
class AsyncImageModify extends \Thread {
    public $data;
    public $arg;
    public $name;

    public function __construct($arg, $name) {
        $this->arg = $arg;
        $this->name = $name;
    }

    public function run() {
        // tried putting the set_time_limit() here, didn't work
        if ($this->arg) {
            // Get the image
            $didWeGetTheImage = Image::getImage($this->arg, $this->name);
            if ($didWeGetTheImage) {
                $timestamp1 = microtime(true);
                print_r("Starting face detection $this->arg" . "\n");
                print_r(" ");
                $j = Image::process1($this->name);
                if ($j) {
                    // lets go ahead and do our image manipulation at this point
                    $userPic = Image::process2($this->name, $this->name, 200, 200, false, $this->name, $j);
                    if ($userPic) {
                        $this->data = $userPic;
                        print_r("Back from process2; the image returned is $userPic");
                    }
                }
                $endTime = microtime(true);
                $td = $endTime - $timestamp1;
                print_r("Finished face detection $this->arg in $td seconds" . "\n");
                print_r($j);
            }
        }
    }
}

It is difficult to guess the functionality of the Image::* methods, so I can't really answer in any detail.
What I can say is that there are very few machines I can think of that are suitable for running 20 concurrent threads in any case. A more suitable setup would be the worker/stackable model: a Worker thread is a reusable context that can execute task after task, implemented as Stackables. Execution in a multi-threaded environment should always use the fewest threads needed to get the most work done.
Please see the pooling example and the other examples distributed with pthreads on GitHub; additionally, much information regarding usage is contained in past bug reports, if you are still struggling after that ...
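For illustration only, here is a rough sketch of that model against pthreads v3, reusing the question's Image::getImage() helper; everything else (task class, pool size, file names) is made up for the example. It submits all the tasks to a small pool and stops polling once 10 results are in, instead of joining every thread:

// Sketch only: a small pool of reusable workers, polled until 10 tasks are done.
// Assumes pthreads v3 (Pool/Worker/Threaded) and the question's Image::getImage().
class ImageTask extends Threaded {
    public $url;
    public $name;
    public $result;
    public $done = false;

    public function __construct($url, $name) {
        $this->url  = $url;
        $this->name = $name;
    }

    public function run() {
        // Replace with the real download / face-detection work.
        $this->result = Image::getImage($this->url, $this->name) ? $this->name : null;
        $this->done   = true;
    }
}

$pool  = new Pool(4, Worker::class);   // 4 reusable contexts instead of 20 threads
$tasks = [];
foreach ($array as $count => $i) {
    $tasks[] = $task = new ImageTask($i['largePic'], "LocalCDN/{$count}.jpg");
    $pool->submit($task);
}

$results = [];
while (count($results) < 10 && $tasks) {
    foreach ($tasks as $k => $task) {
        if ($task->done) {
            if ($task->result !== null) {
                $results[] = $task->result;
            }
            unset($tasks[$k]);          // never count the same task twice
        }
    }
    usleep(100000);                     // 0.1 s between polls
}
$pool->shutdown();

Note that shutdown() still waits for tasks already submitted; pthreads does not really support killing work in flight, so "stopping early" here means stop waiting, not stop the threads.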

Related

usleep() in while loop causing issues - whole process freezes

I am using PHP, Laravel, Redis, and SQL on an Ubuntu localhost server. I have made a bunch of methods that return results from API searches after some processing. I am calling 5 of these methods, which would be very slow if done synchronously, so I've been experimenting with async approaches (which I know PHP isn't optimised for). After a few attempts I have found some success with pcntl_fork(), but I'm running into some nasty problems.
Edit: After some messing around I have found that if I remove the while loop, the code afterward executes properly. I have since removed the while loop and placed it in the second 'search' method; however, it still causes the whole thing to freeze. This makes no sense, as there shouldn't be an infinite loop: if I manually query the Redis db, all 5 results are there.
This is my code: (I have a few custom classes for making and processing the API calls, fyi these methods work flawlessly)
// this caches the individual api results to a Redis list
public static function cacheAsyncApiSearch(string $searchQuery, int $maxResults = 20)
{
    $key = "search:".$searchQuery; // for Redis
    if(!Redis::client()->exists($key)) {
        for ($i = 0; $i < 5; $i++) {
            // Create a child process
            $pid = pcntl_fork();
            if ($pid == -1) {
                // Fork failed
                exit(1);
            } elseif ($pid) {
                // This is the parent process
                // I have tried many versions of pcntl_wait, none work! They all still don't allow code to be ran
                // afterwards (even within this elseif block), and the best it does is cache the 1st api case (YouTube)
                // while (!pcntl_wait($status, WNOHANG)) {
                //     $exitStatus = pcntl_wexitstatus($status);
                //     // Do something with the exit status of the child process
                // }
                // dd($pid);
                // pcntl_waitpid($pid, $status, WUNTRACED);
            } else {
                // child processes
                switch ($i) {
                    case 0:
                        $results = YouTube::search($searchQuery, $maxResults)['results'];
                        Redis::client()->rPush($key, SearchResultDTO::jsonEncodeArray($results));
                        SearchResultDTO::convertResultDTOToModels($results);
                        break;
                    case 1:
                        $results = Dailymotion::search($searchQuery, $maxResults)['results'];
                        Redis::client()->rPush($key, SearchResultDTO::jsonEncodeArray($results));
                        SearchResultDTO::convertResultDTOToModels($results);
                        break;
                    case 2:
                        $results = Vimeo::search($searchQuery, $maxResults)['results'];
                        Redis::client()->rPush($key, SearchResultDTO::jsonEncodeArray($results));
                        SearchResultDTO::convertResultDTOToModels($results);
                        break;
                    case 3:
                        $results = Twitch::search($searchQuery, 2)['results'];
                        Redis::client()->rPush($key, SearchResultDTO::jsonEncodeArray($results));
                        SearchResultDTO::convertResultDTOToModels($results);
                        break;
                    case 4:
                        $results = Podcasts::getPodcastsFromItunesResults(Podcasts::search($searchQuery, 2)["response"]->results);
                        Redis::client()->rPush($key, SearchResultDTO::jsonEncodeArray($results));
                        SearchResultDTO::convertResultDTOToModels($results);
                        break;
                }
                $i = 10000;
                exit(0);
            }
        }
        // for noting the process id of the given process that gets to this point
        Redis::client()->lPush("search_pid:".$searchQuery, $pid);
        // sets a time out for the redis cache
        Redis::client()->expire($key, 60*60*4);
        while (is_numeric(Redis::client()->lLen($key)) && Redis::client()->lLen($key) < 5) {
            usleep(500000); // 0.5 seconds
            // pcntl_waitpid(-1, $status); // does this even do anything? not for me
        }
        return false; // not already cached
    }
    return true; // already cached
}
This code somewhat works: it performs the API calls and caches the results in Redis perfectly. However, when the method is run, no code after it will execute (unless Redis has found a cached version and the process was never forked).
This made me think that all processes are being exited (possibly true? If so, I don't know why), so I tried writing a version without the exit(0) line. That works, and I can then run code after the method call, but I noticed (when hitting SQL race conditions) that all 6 processes (5 children, 1 parent) continued to run their own copy of the code after this method (e.g. some database writes).
public static function search(string $searchQuery, int $maxResults = 20): array
{
    $key = "search:".$searchQuery;
    $results = [];
    // the quoted method above
    self::cacheAsyncApiSearch($searchQuery, $maxResults);
    foreach (Redis::client()->lRange($key, 0, -1) as $result){
        $results = array_merge($results, SearchResultDTO::jsonDecodeArray($result));
    }
    $creatorDTOs = [];
    $videoDTOs = [];
    $streamDTOs = [];
    $playlistDTOs = [];
    $podcastDTOs = [];
    /** @var SearchResultDTO $result */
    foreach ($results as $result) {
        match ($result->kind) {
            Kind::Creator => $creatorDTOs[] = $result,
            Kind::Video => $videoDTOs[] = $result,
            Kind::Stream => $streamDTOs[] = $result,
            Kind::Playlist => $playlistDTOs[] = $result,
            Kind::Podcast => $podcastDTOs[] = $result,
        };
    }
    // did this to test how many times the code was being ran (the list has 6 1's in it)
    Redis::client()->lPush("here", '1');
    // I know this code isn't completely efficient since I already called these conversion methods before,
    // however I am just trying to get the forking stuff to work right now.
    return [
        "creators" => SearchResultDTO::convertResultDTOToModels($creatorDTOs),
        "videos" => SearchResultDTO::convertResultDTOToModels($videoDTOs),
        "streams" => SearchResultDTO::convertResultDTOToModels($streamDTOs),
        "playlists" => SearchResultDTO::convertResultDTOToModels($playlistDTOs),
        "podcasts" => SearchResultDTO::convertResultDTOToModels($podcastDTOs)
    ];
}
These DTO's (Data Transfer Objects) are being used to populate a UI. So for example, when I make a search (that isn't cached), the page is blank forever. But if I refresh the page (after the search is cached) then the results show just fine.
This is the most bizarre problem I have ever run into and I really appreciate any help.
Edit please read:
After some messing around I have found that if I remove the while loop, the code afterward executes properly. I have removed the while loop and placed it in the second 'search' method; however, it still causes the system to freeze. This makes no sense, as there shouldn't be an infinite loop: if I manually query the Redis db, all 5 results are there. And the dd("two") can never be executed unless the usleep() is removed. Hopefully this narrows the problem down.
Edit 2 please read:
I have figured out that I can get the dd("two") to work when usleep() is reduced from 0.5 seconds to 0.05 seconds, but it still doesn't seem to run long enough for it to work.
if(!self::cacheAsyncApiSearch($searchQuery, $maxResults))
{
    // make sure Redis is properly returning a number not object
    $len = Redis::client()->lLen($key);
    while(!is_numeric($len)){
        usleep(500000); // 0.5 seconds
        $len = Redis::client()->lLen($key);
    }
    // dd($len); // this dd() works
    while ($len < 5) {
        dd("one"); // this dd() works
        usleep(500000); // 0.5 seconds
        dd("two"); // this does not work, why?
        $len = Redis::client()->lLen($key);
    }
}
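For reference, the conventional shape of this fork/wait pattern keeps the waiting in the parent with pcntl_waitpid() over the collected child PIDs, and has every child exit() so it never falls through into the parent's code path. A minimal, self-contained sketch of that pattern (the worker body below is just a placeholder, not the question's code):

// Minimal fork/reap sketch; the worker body is a placeholder.
function doSearchAndCache(int $taskId): void
{
    // placeholder: call the real API and rPush the result into Redis here
    sleep(1);
}

$tasks = range(0, 4);              // stand-ins for the 5 API searches
$childPids = [];

foreach ($tasks as $taskId) {
    $pid = pcntl_fork();
    if ($pid === -1) {
        exit(1);                   // fork failed
    } elseif ($pid === 0) {
        // Child: do exactly one unit of work, then exit so it never
        // continues into the parent's code path.
        doSearchAndCache($taskId);
        exit(0);
    }
    // Parent: remember the child and keep forking.
    $childPids[] = $pid;
}

// Parent: block until every child has exited (this also reaps the zombies).
foreach ($childPids as $pid) {
    pcntl_waitpid($pid, $status);
}

// Only the parent reaches this point, and all children are done by now.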

PHP pthreads async with threads number gets stuck

I'm trying to achieve a script that starts threads asynchronously with a limit (5) and waits if all threads are busy. If the threads are busy my script has to wait until one is free, and then it should start another one, but for some reason it gets stuck. Yesterday I discovered pthreads and after a few hours today this is all I came up with:
EDIT: There are examples with 500 threads, for example, but in my case I need to parse large files (a few MBs) of targets and I need to make it work asynchronously.
<?php
class My extends Thread {
    public function run() {
        echo $this->getThreadId() . "\n";
        sleep(rand(1, 3)); // Simulate some work...
        global $active_threads;
        $active_threads = $active_threads - 1; // It should decrease the number of active threads. Here could be the problem?
    }
}

$active_threads = 0;
$maximum_threads = 5;

while(true) {
    if($active_threads == $maximum_threads) {
        echo 'Too many threads, just wait for a moment untill some threads are free.' . "\n";
        sleep(3);
        continue;
    }
    $threads = array();
    for($i = $active_threads; $i < $maximum_threads; $i++) {
        $active_threads = $active_threads + 1;
        $threads[] = new My();
    }
    foreach($threads as $thread) {
        $thread->start();
    }
}
?>
Output:
[root@localhost tests]# zts-php test
140517320455936
140517237061376
140517228668672
140517220275968
140517207701248
Too many threads, just wait for a moment.
Too many threads, just wait for a moment.
Too many threads, just wait for a moment.
Too many threads, just wait for a moment.
I know it's pretty primitive, but I'm totally new to pthreads and all the examples are without an async thread-number limit. There is for sure something wrong in my code but I have no idea where.
The problem, as stated in another post, is that global memory is not shared between threads. So your $active_threads will not decrement even if you use global.
See post: How to share global variable across thread in php?
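To expand on that: state that both sides need to see has to live in a Threaded object (or you can let a fixed-size Pool cap concurrency for you instead of counting by hand). Below is a rough sketch of the shared-counter variant, assuming pthreads v3; the job count and names are illustrative:

// Sketch: share a counter through a Threaded object instead of a PHP global.
class Counter extends Threaded {
    public $value = 0;

    public function increment() {
        return $this->synchronized(function () {
            return ++$this->value;
        });
    }

    public function decrement() {
        return $this->synchronized(function () {
            return --$this->value;
        });
    }

    public function get() {
        return $this->synchronized(function () {
            return $this->value;
        });
    }
}

class My extends Thread {
    private $active;

    public function __construct(Counter $active) {
        $this->active = $active;
    }

    public function run() {
        echo $this->getThreadId() . "\n";
        sleep(rand(1, 3));          // Simulate some work...
        $this->active->decrement(); // Visible to the parent, unlike a global
    }
}

$active  = new Counter();
$maximum = 5;
$total   = 20;                      // pretend there are 20 jobs to run
$threads = [];

for ($started = 0; $started < $total;) {
    if ($active->get() >= $maximum) {
        usleep(100000);             // all slots busy, wait a moment
        continue;
    }
    $active->increment();
    $threads[] = $t = new My($active);
    $t->start();
    $started++;
}

foreach ($threads as $t) {
    $t->join();
}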

PHP pthreads failing when run from cron

Ok, so let's start slow...
I have a pthreads script running and working for me, tested and working 100% of the time when I run it manually from the command line via SSH. The script is as follows, with the main thread-process code adjusted to simulate random process run times.
class ProcessingPool extends Worker {
    public function run(){}
}

class LongRunningProcess extends Threaded implements Collectable {
    public function __construct($id, $data) {
        $this->id = $id;
        $this->data = $data;
    }
    public function run() {
        $data = $this->data;
        $this->garbage = true;
        $this->result = 'START TIME:'.time().PHP_EOL;
        // Here is our actual logic which will be handled within a single thread
        // (obviously simulated here instead of the real functionality)
        sleep(rand(1,100));
        $this->result .= 'ID:'.$this->id.' RESULT: '.print_r($this->data,true).PHP_EOL;
        $this->result .= 'END TIME:'.time().PHP_EOL;
        $this->finished = time();
    }
    public function __destruct () {
        $Finished = 'EXITED WITHOUT FINISHING';
        if($this->finished > 0) {
            $Finished = 'FINISHED';
        }
        if ($this->id === null) {
            print_r("nullified thread $Finished!");
        } else {
            print_r("Thread w/ ID {$this->id} $Finished!");
        }
    }
    public function isGarbage() : bool { return $this->garbage; }
    public function getData() {
        return $this->data;
    }
    public function getResult() {
        return $this->result;
    }
    protected $id;
    protected $data;
    protected $result;
    private $garbage = false;
    private $finished = 0;
}
$LoopDelay = 500000; // microseconds
$MinimumRunTime = 300; // seconds (5 minutes)
$StartTime = microtime(true); // when the script started, used for the MinimumRunTime check below

// So we setup our pthreads pool which will hold our collection of threads
$pool = new Pool(4, ProcessingPool::class, []);

$Count = 0;
$StillCollecting = true;
$CountCollection = 0;
do {
    // Grab all items from the conversion_queue which have not been processed
    $result = $DB->prepare("SELECT * FROM `processing_queue` WHERE `processed` = 0 ORDER BY `queue_id` ASC");
    $result->execute();
    $rows = $result->fetchAll(PDO::FETCH_ASSOC);

    if(!empty($rows)) {
        // for each of the rows returned from the queue, and allow the workers to run and return
        foreach($rows as $id => $row) {
            $update = $DB->prepare("UPDATE `processing_queue` SET `processed` = 1 WHERE `queue_id` = ?");
            $update->execute([$row['queue_id']]);
            $pool->submit(new LongRunningProcess($row['fqueue_id'],$row));
            $Count++;
        }
    } else {
        // 0 Rows To Add To Pool From The Queue, Do Nothing...
    }

    // Before we allow the loop to move on to the next part, lets try and collect anything that finished
    $pool->collect(function ($Processed) use(&$CountCollection) {
        global $DB;
        $data = $Processed->getData();
        $result = $Processed->getResult();
        $update = $DB->prepare("UPDATE `processing_queue` SET `processed` = 2 WHERE `queue_id` = ?");
        $update->execute([$data['queue_id']]);
        $CountCollection++;
        return $Processed->isGarbage();
    });
    print_r('Collecting Loop...'.$CountCollection.'/'.$Count);

    // If we have collected the same total amount as we have processed then we can consider ourselves done collecting
    // everything that has been added to the database during the time this script started and was running
    if($CountCollection == $Count) {
        $StillCollecting = false;
        print_r('Done Collecting Everything...');
    }

    // If we have not reached the full MinimumRunTime that this cron should run for, then lets continue to loop
    $EndTime = microtime(true);
    $TimeElapsed = ($EndTime - $StartTime);
    if(($TimeElapsed/($LoopDelay/1000000)) < ($MinimumRunTime/($LoopDelay/1000000))) {
        $StillCollecting = true;
        print_r('Ended To Early, Lets Force Another Loop...');
    }
    usleep($LoopDelay);
} while($StillCollecting);

$pool->shutdown();
So while the above script runs fine from the command line (it has been cut down to a basic example here, with the detailed processing code simulated), the command below gives a different result when run from a cron set up for every 5 minutes...
/opt/php7zts/bin/php -q /home/account/cron-entry.php file=every-5-minutes/processing-queue.php
The above script, when called with the above command line, will loop over and over for the run time of the script, collect any new items from the DB queue, and insert them into the pool, which allows 4 processes at a time to run and finish; they are then collected and the queue is updated before another loop happens, pulling any new items from the DB. The script runs until we have processed and collected everything added to the queue during its execution. If the script has not yet run for the full expected 5 minutes, the loop is forced to keep checking the queue; if the script has run past the 5 minute mark, it lets any current threads finish and be collected before closing. Note that the code also includes a code-based "flock" functionality which makes future runs of this cron idle in a loop and exit, or start once the lock has lifted, ensuring that the queue and threads are not bumping into each other. Again, ALL OF THIS WORKS FROM THE COMMAND LINE VIA SSH.
Once I take the above command, and put it into a cron to run for every 5 minutes, essentially giving me a never ending loop, while maintaining memory, I get a different result...
That result is described as follows... The script starts, checks the flock, and continues if the lock is not there; it creates the lock and runs the above script. The items are taken from the queue in the DB and inserted into the pool, and the pool fires off 4 threads at a time as expected. But the unexpected result is that the run() method does not seem to be executed; instead the __destruct function runs, and a "Thread w/ ID 2 FINISHED!" type of message is returned in the output. This in turn means that the collection side of things collects nothing, and the initiating script (the cron script itself, /home/account/cron-entry.php file=every-5-minutes/processing-queue.php) finishes after everything has been put into the pool and destructed. Which prematurely "finishes" the cron job, since there is nothing else to do but loop and pull nothing new from the queue, because the rows are considered "being processed" once processed == 1 in the queue.
The question then finally becomes... How do I make the cron's script aware of the threads that were spawned and run() them without closing the pool out before they can do anything?
(note... if you copy / paste the provided script, note that I did not test it after removing the detailed logic, so it may need some simple fixes... please do not nit-pick said code, as the key here is that pthreads works if the script is executed FROM the Command Line, but fails to properly run when the script is executed FROM a CRON. If you plan on commenting with non-constructive criticism, please go use your fingers to do something else!)
Joe Watkins! I Need Your Brilliance! Thanks In Advance!
After all of that, it seems that the issue was with regard to user permissions. I was setting this specific cron up inside of cPanel, and when running the command manually I was logged in as root.
After setting this command up in root's crontab, I was able to get it to successfully run the threads from the pool. The only issue I have now is that some threads never finish, and sometimes I am unable to close the pool. But this is a different issue, so I will open another question elsewhere.
For those running into this issue, make sure you know who the owner of the cron is, as it matters with PHP's pthreads.
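Unrelated to the permissions fix, but since the question leans on a code-based "flock" guard to keep overlapping cron runs apart, here is a minimal sketch of that pattern (the lock file path is just an example):

// Minimal single-instance guard for a cron entry point; the lock file path is illustrative.
$fp = fopen('/tmp/processing-queue.lock', 'c');
if ($fp === false || !flock($fp, LOCK_EX | LOCK_NB)) {
    // A previous run still holds the lock; exit quietly and let the next cron try again.
    exit(0);
}

try {
    // ... the pool / queue loop from above runs here ...
} finally {
    flock($fp, LOCK_UN);   // the lock is also released automatically if the process dies
    fclose($fp);
}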

determining proper gearman task function to retrieve real-time job status

Very simply, I have a program that needs to perform a large process (anywhere from 5 seconds to several minutes) and I don't want to make my page wait for the process to finish to load.
I understand that I need to run this gearman job as a background process but I'm struggling to identify the proper solution to get real-time status updates as to when the worker actually finishes the process. I've used the following code snippet from the PHP examples:
$done = false;
do {
    sleep(3);
    $stat = $gmclient->jobStatus($job_handle);
    if (!$stat[0]) { // the server no longer knows the job, so it is done
        $done = true;
    }
    echo "Running: " . ($stat[1] ? "true" : "false") . ", numerator: " . $stat[2] . ", denominator: " . $stat[3] . "\n";
} while (!$done);
echo "done!\n";
and this works; however, it appears that it just returns data to the client when the worker has finished telling the job what to do. Instead, I want to know when the literal process of the job has finished.
My real-life example:
Pull several data feeds from an API (some feeds take longer than others)
Load a couple of the ones that always load fast, place a "Waiting/Loading" animation on the section that was sent off to a worker queue
When the work is done and the results have been completely retrieved, replace the animation with the results
This is a bit late, but I stumbled across this question looking for the same answer. I was able to get a solution together, so maybe it will help someone else.
For starters, refer to the documentation on GearmanClient::jobStatus. This will be called from the client, and the function accepts a single argument: $job_handle. You retrieve this handle when you dispatch the request:
$client = new GearmanClient( );
$client->addServer( '127.0.0.1', 4730 );
$handle = $client->doBackground( 'serviceRequest', $data );
Later on, you can retrieve the status by calling the jobStatus function on the same $client object:
$status = $client->jobStatus( $handle );
This is only meaningful, though, if you actually change the status from within your worker with the sendStatus method:
$worker = new GearmanWorker( );
$worker->addFunction( 'serviceRequest', function( $job ) {
    $max = 10;
    // Set initial status - numerator / denominator
    $job->sendStatus( 0, $max );
    for( $i = 1; $i <= $max; $i++ ) {
        sleep( 2 ); // Simulate a long running task
        $job->sendStatus( $i, $max );
    }
    return GEARMAN_SUCCESS;
} );

while( $worker->work( ) ) {
    $worker->wait( );
}
In versions of Gearman prior to 0.5, you would use the GearmanJob::status method to set the status of a job. Versions 0.6 to current (1.1) use the methods above.
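Tying the client side together, a polling loop against that handle might look roughly like this (the server address, workload, and the 3-second interval are just examples):

// Client-side sketch: dispatch a background job, then poll its status until it disappears.
$client = new GearmanClient( );
$client->addServer( '127.0.0.1', 4730 );

$handle = $client->doBackground( 'serviceRequest', 'some-workload' );

do {
    sleep( 3 );
    // jobStatus() returns [known, running, numerator, denominator]
    list( $known, $running, $num, $den ) = $client->jobStatus( $handle );
    if ( $known && $den > 0 ) {
        printf( "progress: %d/%d\n", $num, $den );
    }
} while ( $known ); // once the server no longer knows the handle, the job is finished

echo "done!\n";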
See also this question: Problem With Gearman Job Status

Improving HTML scraper efficiency with pcntl_fork()

With the help of two previous questions, I now have a working HTML scraper that feeds product information into a database. What I am now trying to do is improve efficiency by wrapping my brain around getting my scraper working with pcntl_fork.
If I split my php5-cli script into 10 separate chunks, I improve total runtime by a large factor, so I know I am not I/O or CPU bound but just limited by the linear nature of my scraping functions.
Using code I've cobbled together from multiple sources, I have this working test:
<?php
libxml_use_internal_errors(true);
ini_set('max_execution_time', 0);
ini_set('max_input_time', 0);
set_time_limit(0);

$hrefArray = array("http://slashdot.org", "http://slashdot.org", "http://slashdot.org", "http://slashdot.org");

function doDomStuff($singleHref, $childPid) {
    $html = new DOMDocument();
    $html->loadHtmlFile($singleHref);
    $xPath = new DOMXPath($html);

    $domQuery = '//div[@id="slogan"]/h2';
    $domReturn = $xPath->query($domQuery);

    foreach($domReturn as $return) {
        $slogan = $return->nodeValue;
        echo "Child PID #" . $childPid . " says: " . $slogan . "\n";
    }
}

$pids = array();
foreach ($hrefArray as $singleHref) {
    $pid = pcntl_fork();

    if ($pid == -1) {
        die("Couldn't fork, error!");
    } elseif ($pid > 0) {
        // We are the parent
        $pids[] = $pid;
    } else {
        // We are the child
        $childPid = posix_getpid();
        doDomStuff($singleHref, $childPid);
        exit(0);
    }
}

foreach ($pids as $pid) {
    pcntl_waitpid($pid, $status);
}

// Clear the libxml buffer so it doesn't fill up
libxml_clear_errors();
Which raises the following questions:
1) Given my hrefArray contains 4 URLs - if the array were to contain, say, 1,000 product URLs, this code would spawn 1,000 child processes? If so, what is the best way to limit the number of processes to, say, 10, and again with 1,000 URLs as an example, split the child workload to 100 products per child (10 x 100)?
2) I've learned that pcntl_fork creates a copy of the process and all variables, classes, etc. What I would like to do is replace my hrefArray variable with a DOMDocument query that builds the list of products to scrape, and then feed them off to child processes to do the processing - so spreading the load across 10 child workers.
My brain is telling me I need to do something like the following (obviously this doesn't work, so don't run it):
<?php
libxml_use_internal_errors(true);
ini_set('max_execution_time', 0);
ini_set('max_input_time', 0);
set_time_limit(0);

$maxChildWorkers = 10;

$html = new DOMDocument();
$html->loadHtmlFile('http://xxxx');
$xPath = new DOMXPath($html);

$domQuery = '//div[@id=productDetail]/a';
$domReturn = $xPath->query($domQuery);

$hrefsArray[] = $domReturn->getAttribute('href');

function doDomStuff($singleHref) {
    // Do stuff here with each product
}

// To figure out: Split href array into $maxChildWorkers # of workArray1, workArray2 ... workArray10.
$pids = array();
foreach ($workArray(1,2,3 ... 10) as $singleHref) {
    $pid = pcntl_fork();

    if ($pid == -1) {
        die("Couldn't fork, error!");
    } elseif ($pid > 0) {
        // We are the parent
        $pids[] = $pid;
    } else {
        // We are the child
        $childPid = posix_getpid();
        doDomStuff($singleHref);
        exit(0);
    }
}

foreach ($pids as $pid) {
    pcntl_waitpid($pid, $status);
}

// Clear the libxml buffer so it doesn't fill up
libxml_clear_errors();
But what I can't figure out is how to build my hrefsArray[] in the master/parent process only and feed it off to the child process. Currently everything I've tried causes loops in the child processes. I.e. my hrefsArray gets built in the master, and in each subsequent child process.
I am sure I am going about this all totally wrong, so would greatly appreciate just general nudge in the right direction.
Introduction
pcntl_fork() is not the only way to improve the performance of an HTML scraper. While it might be a good idea to use a message queue as Charles suggested, you still need a fast, effective way to pull those requests in your workers.
Solution 1
Use curl_multi_init ... cURL is actually faster, and using multi cURL gives you parallel processing.
From the PHP docs:
curl_multi_init - Allows the processing of multiple cURL handles in parallel.
So instead of using $html->loadHtmlFile('http://xxxx'); to load the file several times, you can use curl_multi_init to load multiple URLs at the same time.
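A rough sketch of what that looks like (the URLs and cURL options are placeholders); each response can then be handed to DOMDocument::loadHTML() instead of loadHtmlFile():

// Sketch: fetch several pages in parallel with curl_multi, then parse each one.
$urls = array('http://example.com/page1', 'http://example.com/page2'); // placeholders

$mh = curl_multi_init();
$handles = array();
foreach ($urls as $url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
    curl_multi_add_handle($mh, $ch);
    $handles[$url] = $ch;
}

// Drive all transfers until they are finished.
do {
    $status = curl_multi_exec($mh, $running);
    if ($running) {
        curl_multi_select($mh);          // wait for activity instead of spinning
    }
} while ($running && $status === CURLM_OK);

foreach ($handles as $url => $ch) {
    $html = curl_multi_getcontent($ch);
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);

    $doc = new DOMDocument();
    @$doc->loadHTML($html);              // parse in memory instead of loadHtmlFile()
    // ... run the same XPath queries as before ...
}
curl_multi_close($mh);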
Here are some Interesting Implementations
php - Fastest way to check presence of text in many domains (above 1000)
php get all the images from url which width and height >=200 more quicker
How to prevent server from overloading during Curl requests in PHP
Solution 2
You can use pthreads for multi-threading in PHP.
Example
// Number of threads you want
$threads = 10;

// Thread storage
$ts = array();

// Your list of URLs // range just for demo
$urls = range(1, 50);

// Group URLs
$urlsGroup = array_chunk($urls, floor(count($urls) / $threads));

printf("%s:PROCESS #load\n", date("g:i:s"));

$name = range("A", "Z");
$i = 0;
foreach ( $urlsGroup as $group ) {
    $ts[] = new AsyncScraper($group, $name[$i ++]);
}

printf("%s:PROCESS #join\n", date("g:i:s"));

// wait for all Threads to complete
foreach ( $ts as $t ) {
    $t->join();
}

printf("%s:PROCESS #finish\n", date("g:i:s"));
Output
9:18:00:PROCESS #load
9:18:00:START #5592 A
9:18:00:START #9620 B
9:18:00:START #11684 C
9:18:00:START #11156 D
9:18:00:START #11216 E
9:18:00:START #11568 F
9:18:00:START #2920 G
9:18:00:START #10296 H
9:18:00:START #11696 I
9:18:00:PROCESS #join
9:18:00:START #6692 J
9:18:01:END #9620 B
9:18:01:END #11216 E
9:18:01:END #10296 H
9:18:02:END #2920 G
9:18:02:END #11696 I
9:18:04:END #5592 A
9:18:04:END #11568 F
9:18:04:END #6692 J
9:18:05:END #11684 C
9:18:05:END #11156 D
9:18:05:PROCESS #finish
Class Used
class AsyncScraper extends Thread {

    public function __construct(array $urls, $name) {
        $this->urls = $urls;
        $this->name = $name;
        $this->start();
    }

    public function run() {
        printf("%s:START #%lu \t %s \n", date("g:i:s"), $this->getThreadId(), $this->name);
        if ($this->urls) {
            // Load with CURL
            // Parse with DOM
            // Do some work
            sleep(mt_rand(1, 5));
        }
        printf("%s:END #%lu \t %s \n", date("g:i:s"), $this->getThreadId(), $this->name);
    }
}
It seems like I suggest this daily, but have you looked at Gearman? There's even a well documented PECL class.
Gearman is a work queue system. You'd create workers that connect and listen for jobs, and clients that connect and send jobs. The client can either wait for the requested job to be completed, or it can fire it and forget. At your option, workers can even send back status updates, and how far through the process they are.
In other words, you get the benefits of multiple processes or threads, without having to worry about processes and threads. The clients and workers can even be on different machines.
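To give a feel for how little code that takes, a minimal sketch of such a pair might look like this (the function name, server address, and the idea of passing one product URL per job are all just illustrative):

// worker.php - sketch: processes one product URL per job (names are placeholders).
$worker = new GearmanWorker();
$worker->addServer('127.0.0.1', 4730);
$worker->addFunction('scrapeProduct', function (GearmanJob $job) {
    $url = $job->workload();
    // ... load the page, run the XPath queries, write the product to the database ...
    // nothing to return for a fire-and-forget job
});
while ($worker->work());

// client.php - sketch: queues one background job per product URL and moves on.
$client = new GearmanClient();
$client->addServer('127.0.0.1', 4730);
foreach ($hrefsArray as $href) {
    $client->doBackground('scrapeProduct', $href);   // fire and forget
}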
