I'm trying to get this principle working:
a producer that sends one message and waits for an ack which contains some result (the JSON result of an operation, actually);
a consumer that checks all pending messages every 5 seconds, handles all of them in one batch, acknowledges all of them in one batch, then waits 5 seconds again (infinite loop).
Here are the 30 lines of my stompproducer.php:
<?php
function msg($txt)
{
    echo date('H:i:s > ').$txt."\n";
}

$queue = '/aaaa';
$msg = 'bar';

if (count($argv) < 3) {
    echo $argv[0]." [msg] [nb to send]\n";
    exit(1);
}

$msg = (string)$argv[1];
$to_send = intval($argv[2]);

try {
    $stomp = new Stomp('tcp://localhost:61613');
    while ($to_send--) { // was --$to_send, which skipped one message
        msg("Sending...");
        $result = $stomp->send(
            $queue,
            $msg." ".date("Y-m-d H:i:s"),
            array('receipt' => 'message-123') // note: the same receipt id is reused for every message
        );
        echo 'result='.var_export($result, true)."\n";
        msg("Done.");
    }
} catch (StompException $e) {
    die('Connection failed: '.$e->getMessage());
}
Here are the 30 lines of my stompconsumer.php:
<?php
$queue = '/aaaa';
$_waitTimer = 5000000; // pause between polls, in microseconds
$_timeLastAsk = microtime(true);

function msg($txt)
{
    echo date('H:i:s > ').$txt."\n";
}

try {
    $stomp = new Stomp('tcp://localhost:61613');
    $stomp->subscribe($queue, array('activemq.prefetchSize' => 40));
    $stomp->setReadTimeout(0, 10000);
    while (true) {
        $frames_read = array();
        while ($stomp->hasFrame()) {
            $frame = $stomp->readFrame();
            if ($frame != null) {
                array_push($frames_read, $frame);
            }
            if (count($frames_read) == 40) {
                break;
            }
        }
        msg("Number of frames read: ".count($frames_read));
        msg("Pause...");
        // microtime() is in seconds, $_waitTimer in microseconds
        $remaining = $_waitTimer - (microtime(true) - $_timeLastAsk) * 1000000;
        if ($remaining > 0) {
            usleep((int)$remaining);
        }
        if (count($frames_read) > 0) {
            msg("Ack now...");
            foreach ($frames_read as $frame) {
                $stomp->ack($frame);
            }
        }
        $_timeLastAsk = microtime(true);
    }
} catch (StompException $e) {
    die('Connection failed: '.$e->getMessage());
}
I can't manage to make the producer synchronous, i.e. a producer that waits for the consumer's ack. If you run the samples above, you'll see that the producer instantly sends all its messages and then quits, with every call to $stomp->send() returning true as if everything were "ok".
I still haven't found good examples, nor good documentation with a simple blocking sample.
What shall I do to make my producer block until the consumer sends its ack?
NB: I've read all the documentation here and the stomp PHP questions on Stack Overflow here and here.
First thing to pop into my mind: take a look at ActiveMQ's message redelivery and DLQ handling:
http://activemq.apache.org/message-redelivery-and-dlq-handling.html
Another workaround I can think of is:
On the producer side:
1. Change your producer to send persistent messages
On your consumer side:
Use a timer.
1. Read messages/frames until the queue is empty or a max cap is reached.
2. Create a cURL request and empty the packed list of messages
3. Sleep your server for 5 secs
You definitely need to test this further, but it should work. Once the process wakes up, you should be able to read all queued messages.
Things to consider:
- persistent messages will need an expiration time
- You'll need ACKs on your consumer side to make sure the status of already-handled messages gets updated. Use ack=client mode so you can acknowledge all received messages at once
- It's easier if you don't have to wait for your cURL call to respond.
- Out of the box, sending an ACK from the consumer back to the producer is not supported.
Best of luck
From the question it sounds like you are looking for a request/response type messaging pattern. This is something you must implement yourself: the STOMP ack you reference only acks the message to the message broker on behalf of the consumer; the producer has no knowledge of it. Request/response involves setting a reply-to address on the outbound message and then waiting to receive a response on that address before sending the next message. There are a great many articles out there that document this sort of thing, such as this one.
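For illustration, here is a minimal sketch of the producer side of that pattern, assuming ActiveMQ and the PECL stomp extension ('/queue/service' and '/temp-queue/reply' are placeholder destination names):

<?php
$stomp = new Stomp('tcp://localhost:61613');

// Listen on the reply destination before sending the request.
$stomp->subscribe('/temp-queue/reply');

// Tell the consumer where to publish its JSON result.
$stomp->send('/queue/service', 'payload', array('reply-to' => '/temp-queue/reply'));

// Block until the consumer replies (or the read times out).
$stomp->setReadTimeout(10);
$frame = $stomp->readFrame();
if ($frame !== false) {
    echo 'consumer replied: '.$frame->body."\n";
}

The consumer, for its part, reads the reply-to header from each frame it handles and sends its result to that destination.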
Or, if you only need to know that the broker has received the message from the client and persisted it, then you can use STOMP's built-in receipt mechanism to have the broker send you a receipt indicating that it has processed your sent message. This, however, does not guarantee that a consumer has processed the message yet.
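A sketch of that receipt-based send, again assuming the PECL stomp extension (where passing a receipt header makes send() wait for the broker's RECEIPT frame before returning; the destination name is a placeholder):

<?php
$stomp = new Stomp('tcp://localhost:61613');
$ok = $stomp->send(
    '/queue/service',
    'payload',
    array(
        'receipt'    => uniqid('msg-'), // ask the broker for a RECEIPT frame; make the id unique per message
        'persistent' => 'true',         // survive a broker restart
    )
);
// $ok is true once the broker has confirmed the frame -- but no consumer
// has necessarily processed it yet.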
I just remembered: you can try the reactphp/stomp library.
It's an event-driven library that might help you. In particular, take a look at the core functionality addPeriodicTimer.
https://github.com/reactphp/stomp
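As a sketch of just the timer part (assuming react/event-loop, which the library builds on, installed via Composer), the consumer's 5-second poll could be scheduled like this:

<?php
require 'vendor/autoload.php';

use React\EventLoop\Loop;

// Fires every 5 seconds without blocking the rest of the event loop;
// the STOMP reading/acking from the question would go inside the callback.
Loop::addPeriodicTimer(5.0, function () {
    echo date('H:i:s > ')."polling for pending frames...\n";
});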
Cheers
Related
I'm currently working on a project with my friends, so let me explain:
We have a MySQL database filled with English postcodes from London: one table with universities and one with hosts. What we want is to calculate the public-transport travel time between every host and every university, and save it into another table of the database, with the host postcode, the university postcode and the travel time between the two on one line, and so on.
For that we make HTTP requests to the TfL API, which returns a JSON with all the travel details (and of course the travel time), which we then decode, keeping only what we want (the travel time).
The problem is that we have a rather big database, with almost 250 hosts and 800 universities, which gives us around 200,000 requests and a processing time far too long to be usable (with the API response time and the PHP treatment, around 19 hours).
We tried to see if we could use cURL to split the work across multiple loops, so that we could divide the processing time by the number of cURL handles, but we can't figure out how to do that.
The final goal is to make a small local app where selecting a university gives us the 10 nearest hosts by public transport.
Does anyone have any experience with this kind of thing and can help us?
Here is what we have right now:
//postCodeUni list contains all the university objects
foreach ($postCodeUni as $uniPostCode) {
    //here we take the postcode from the university object
    $uni = $uniPostCode['Postcode'];
    //postCodeHost list contains all the host objects
    foreach ($postCodeHost as $hostPostCode) {
        //here we take the postcode from the host object
        $host = $hostPostCode['Postcode'];
        //here we make an HTTP request to the TfL API, which returns a journey between the two postcodes (a JSON with all the journey details)
        $data = json_decode(file_get_contents('https://api.tfl.gov.uk/journey/journeyresults/' . $uni . '/to/' . $host . '?app_key=a59c7dbb0d51419d8d3f9dfbf09bd5cc'), true);
        //here we save the various duration times (because there are different ways to travel between two points with public transport)
        $duration = $data['journeys'];
        $tableTemp = [];
        foreach ($duration as $durations) {
            $durationns = $durations['duration'];
            array_push($tableTemp, $durationns);
        }
        //We then take the shortest one
        $min = min($tableTemp);
        echo "Shortest travel time: " . $min . " between " . $uni . " and " . $host . ". <br>";
        echo "<br>";
        //We then save this time in a table that will contain the travel times of all the journeys, for comparison
        array_push($tableAllRequest, array($uni . " and " . $host => $min));
    }
}
There are many ways to achieve this; the easiest, imo, is Guzzle's async support (the cURL multi interface under the hood). Take a look at this answer: Guzzle async requests not really async? An example is below,
<?php
use GuzzleHttp\Promise;
use GuzzleHttp\Client;

$client = new Client(['base_uri' => 'http://httpbin.org/']);

// Initiate each request but do not block
$promises = [
    'image' => $client->getAsync('/image'),
    'png'   => $client->getAsync('/image/png'),
    'jpeg'  => $client->getAsync('/image/jpeg'),
    'webp'  => $client->getAsync('/image/webp')
];

// Wait on all of the requests to complete. Throws a ConnectException
// if any of the requests fail
$results = Promise\unwrap($promises);

// Or: wait for the requests to complete, even if some of them fail
$results = Promise\settle($promises)->wait();

// Loop through each response in the results and fetch data etc
foreach ($results as $promiseKey => $result) {
    // Data response
    $dataOfResponse = $result['value']->getBody()->getContents();

    // Status
    echo $promiseKey . ':' . $result['value']->getStatusCode() . "\r\n";
}
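For the question's scale (around 200,000 URL pairs), building every promise up front would exhaust memory, so a concurrency-limited pool is a better fit. A sketch using GuzzleHttp\Pool; the TfL URL pattern and the $postCodeUni/$postCodeHost arrays come from the question, while the concurrency value is an assumption to tune:

<?php
use GuzzleHttp\Client;
use GuzzleHttp\Pool;
use GuzzleHttp\Psr7\Request;

$client = new Client();

// Lazily yield one request per university/host pair
$requests = function () use ($postCodeUni, $postCodeHost) {
    foreach ($postCodeUni as $uniPostCode) {
        foreach ($postCodeHost as $hostPostCode) {
            yield new Request('GET', 'https://api.tfl.gov.uk/journey/journeyresults/'
                . $uniPostCode['Postcode'] . '/to/' . $hostPostCode['Postcode']);
        }
    }
};

$pool = new Pool($client, $requests(), [
    'concurrency' => 10, // number of requests in flight at once
    'fulfilled' => function ($response, $index) {
        $data = json_decode($response->getBody(), true);
        // extract the minimum journey duration and store it, as in the question
    },
    'rejected' => function ($reason, $index) {
        // log the failed pair so it can be retried
    },
]);

// Run the pool until every request has completed
$pool->promise()->wait();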
We have a large database where we track users' activities. A service presents the activity data. A request to the service consists of 3 stages:
Create the data request. POST /createList
Check whether the data has been processed. GET /checkList/listId
Retrieve the data once processed. GET /getList/listId
Processing the data in the service usually takes 1 minute, but it may take up to 30 minutes. As soon as this process is finished, I need to return the answer to my own user. Although the "do while" below seems to work for now, I don't believe it is a very sound solution. How can I do this with a Laravel job?
Do I need to call the queue with a cron job continuously?
my code:
$listService = new ListService();
$activityList = null;

// Request to generate the list; returns a listId
$res = $listService->createList('USER_ACTIVITY_LIST');
$listId = $res->getListId();

do {
    // Check the status of the list
    $checkList = $listService->checkList($listId);

    // If completed, create an ActivityList record
    if ($checkList->getStatus() === 'complete') {
        $data = $listService->getList($listId)->getData();
        $activityList = ActivityList::create($data);
    } else {
        sleep(10);
    }
} while ($checkList->getStatus() === 'pending'); // if not completed, try again

return $activityList;
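A minimal sketch of the queued-job version of this polling, assuming Laravel's queue system (ListService and ActivityList are the classes from the code above; the job name, namespaces and retry numbers are made up): instead of blocking in a do-while, the job checks the status once and re-queues itself until the list is ready.

<?php

namespace App\Jobs;

use App\Models\ActivityList;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;

class FetchActivityList implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable;

    public $tries = 200; // roughly 30 minutes of retries at 10s intervals

    private $listId;

    public function __construct($listId)
    {
        $this->listId = $listId;
    }

    public function handle()
    {
        $listService = new \ListService(); // from the question

        if ($listService->checkList($this->listId)->getStatus() === 'complete') {
            ActivityList::create($listService->getList($this->listId)->getData());
            return; // done; notify the user here (event, broadcast, mail, ...)
        }

        // Not ready yet: release the job back onto the queue, retry in 10s
        $this->release(10);
    }
}

// Dispatching, e.g. from the controller:
// FetchActivityList::dispatch($listService->createList('USER_ACTIVITY_LIST')->getListId());

A running queue worker (php artisan queue:work) picks the job up each time it is released, so no continuous cron polling is required.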
This question already has answers here:
curl: (6) Could not resolve host: google.com; Name or service not known
(7 answers)
Closed 9 months ago.
Ok, so I am a little stuck with this issue. I have a foreach loop (usually 50 results) that queries an API using Guzzle via Laravel's Http facade, and I am getting really inconsistent results.
I monitor the inserts in the database as they come in, and sometimes the process seems slow, while other times it will fail with the following after x number of returned results:
cURL error 6: Could not resolve host: api.coingecko.com
The following is the actual code I'm using to fetch the results.
foreach ($json_result as $account) {
    var_dump($account['name']);

    $name = $account['name'];
    $coingecko_id = $account['id'];
    $identifier = strtoupper($account['symbol']);
    $description = null; // reset so a failed lookup doesn't reuse the previous coin's values
    $twitter_screen_name = null;

    $response_2 = Http::get('https://api.coingecko.com/api/v3/coins/'.urlencode($coingecko_id).'?localization=false');

    if ($response_2->successful()) {
        $json_result_extra_details = $response_2->json();

        if (isset($json_result_extra_details['description']['en'])) {
            $description = $json_result_extra_details['description']['en'];
        }
        if (isset($json_result_extra_details['links']['twitter_screen_name'])) {
            $twitter_screen_name = $json_result_extra_details['links']['twitter_screen_name'];
        }
    } else {
        // Throw an exception if a client or server error occurred...
        $response_2->throw();
    }

    $crypto_account = CryptoAccount::updateOrCreate(
        [
            'identifier' => $identifier
        ],
        [
            'name' => $name,
            'identifier' => $identifier,
            'type' => "cryptocurrency",
            'coingecko_id' => $coingecko_id,
            'description' => $description,
        ]
    );

    //sleep(1);
}
Now I know I am within the API rate limit of 100 calls a minute, so I don't think that is the issue. I am wondering if this is a server/API issue, which I don't really have any control over, or if it is related to my code and how Guzzle is implemented.
When I do single queries I don't seem to have a problem; the issue only appears inside the foreach loop.
Any advice would be great. Thanks
EDIT
Ok, to update the question: I am now wondering if this is Guzzle/Laravel related. I changed the API to point to the Twitter API and I get the same error after 80 synchronous requests.
I think it's better to use asynchronous requests directly with Guzzle.
$client = new \GuzzleHttp\Client();
$request = new \GuzzleHttp\Psr7\Request('GET', 'https://api.coingecko.com/api/v3/coins?localization=false');

// Queue all requests without blocking; waiting inside the loop
// would make them sequential again
$promises = [];
for ($i = 0; $i < 50; $i++) {
    $promises[] = $client->sendAsync($request)
        ->then(function ($response) {
            echo 'I completed! ' . $response->getBody();
        });
}

// Wait for every request to settle, even if some fail
\GuzzleHttp\Promise\settle($promises)->wait();
more information on Async requests: Doc
I had a similar problem to yours.
I was doing the HTTP requests in a loop, and the first 80 requests were okay,
but the 81st started throwing this "Could not resolve host" exception.
It was very strange to me, because the domain resolves perfectly fine on my machine.
So I started digging into the code.
I ended up finding that Laravel's Http facade keeps creating new clients.
And I guess this eventually triggers the DNS resolver's rate limit?
So I have the following workaround:
// not working:
// this way Laravel keeps getting a new HTTP client from Guzzle for every request.
foreach ($rows as $row) {
    $response = Http::post();
}

// workaround: reuse a single Guzzle client
$client = new GuzzleHttp\Client();
foreach ($rows as $row) {
    $response = $client->post();
    // don't forget to use $response->getBody();
}
I believe it's because $client caches the DNS resolution result, which reduces the calls to the DNS resolver and avoids triggering the rate limit?
I'm not sure whether that's right, BUT it's working for me.
I have been trying to implement multi-threading in PHP to achieve multiple uploads, using the pthreads extension.
From my understanding of multi-threading, this is how I envisioned it working:
I would upload a file; the file would start uploading in the background; even if that upload had not finished, another instance (thread) would be created to upload another file. I would make multiple upload requests using AJAX, multiple files would start uploading, I would get the response of each request individually, and I could update the upload status accordingly on my site.
But this is not how it is working. This is code that I got from one of the pthreads questions on SO, but I do not have the link (sorry!!).
I tested this code to see if it really worked like I envisioned. This is the code I tested; I changed it a little.
<?php
error_reporting(E_ALL);

class AsyncWebRequest extends Thread {
    public $url;
    public $data;

    public function __construct($url) {
        $this->url = $url;
    }

    public function run() {
        if (($url = $this->url)) {
            /*
             * If a large amount of data is being requested, you might want to
             * fsockopen and read using usleep in between reads
             */
            $this->data = file_get_contents($url);
            echo $this->getThreadId();
        } else {
            printf("Thread #%lu was not provided a URL\n", $this->getThreadId());
        }
    }
}

$t = microtime(true);
foreach (["http://www.google.com/?q=".rand() * 10, 'http://localhost', 'https://facebook.com'] as $url) {
    $g = new AsyncWebRequest($url);
    /* starting synchronized */
    if ($g->start()) {
        printf($url." took %f seconds to start ", microtime(true) - $t);
        while ($g->isRunning()) {
            echo ".";
            usleep(100);
        }
        if ($g->join()) {
            printf(" and %f seconds to finish receiving %d bytes\n", microtime(true) - $t, strlen($g->data));
        } else {
            printf(" and %f seconds to finish, request failed\n", microtime(true) - $t);
        }
    }
    echo "<hr/>";
}
What I expected from this code was that it would hit google.com, localhost and facebook.com simultaneously, running their individual threads. But every request waits for the previous one to complete.
It is clearly waiting for the first response to complete before making another request, because the times at which the requests are sent show each one starting only after the previous request has finished.
So this is clearly not the way to achieve what I am trying to achieve. How do I do this?
You might want to look at the curl_multi functions for multiple external requests like these. pthreads is more suited to internal processing.
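For reference, a minimal sketch of the curl_multi approach (URLs borrowed from the question):

<?php
$urls = ['http://www.google.com/', 'http://localhost', 'https://facebook.com'];

$mh = curl_multi_init();
$handles = [];
foreach ($urls as $url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
    curl_multi_add_handle($mh, $ch);
    $handles[$url] = $ch;
}

// Drive all transfers concurrently until none are still running
do {
    curl_multi_exec($mh, $running);
    curl_multi_select($mh); // wait for activity instead of busy-looping
} while ($running > 0);

foreach ($handles as $url => $ch) {
    printf("%s received %d bytes\n", $url, strlen(curl_multi_getcontent($ch)));
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);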
Just for further reference: you are starting the threads one by one and waiting for each to finish.
This code: while ($g->isRunning()) doesn't stop until the thread is finished. It's like having a while (true) inside a for; the loop executes one step at a time.
You need to start the threads, add them to an array, and in another loop check each thread to see whether it has stopped, removing finished ones from the array. A sketch of this follows.
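A minimal sketch of that restructuring, reusing the AsyncWebRequest class from the question: start every thread first, then join them, so the requests actually overlap.

<?php
$threads = [];
foreach (["http://www.google.com/", "http://localhost", "https://facebook.com"] as $url) {
    $t = new AsyncWebRequest($url);
    $t->start();     // kick off the request...
    $threads[] = $t; // ...but don't wait on it yet
}

// Only now block, collecting each result as its thread finishes
foreach ($threads as $t) {
    $t->join();
    printf("%s received %d bytes\n", $t->url, strlen($t->data));
}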
Very simply, I have a program that needs to perform a long process (anywhere from 5 seconds to several minutes), and I don't want to make my page wait for the process to finish before it loads.
I understand that I need to run this Gearman job as a background process, but I'm struggling to identify the proper solution for getting real-time status updates on when the worker actually finishes the process. I've used the following code snippet from the PHP examples:
$done = false;
do {
    sleep(3);
    $stat = $gmclient->jobStatus($job_handle);
    if (!$stat[0]) { // the job is no longer known, so it is done
        $done = true;
    }
    echo "Running: " . ($stat[1] ? "true" : "false") . ", numerator: " . $stat[2] . ", denominator: " . $stat[3] . "\n";
} while (!$done);
echo "done!\n";
and this works; however, it appears that it just returns data to the client when the worker has finished telling the job what to do. Instead, I want to know when the literal process of the job has finished.
My real-life example:
Pull several data feeds from an API (some feeds take longer than others)
Load the couple of feeds that always load fast, and place a "Waiting/Loading" animation on the section that was sent off to a worker queue
When the work is done and the results have been completely retrieved, replace the animation with the results
This is a bit late, but I stumbled across this question looking for the same answer. I was able to get a solution together, so maybe it will help someone else.
For starters, refer to the documentation on GearmanClient::jobStatus. This will be called from the client, and the function accepts a single argument: $job_handle. You retrieve this handle when you dispatch the request:
$client = new GearmanClient( );
$client->addServer( '127.0.0.1', 4730 );
$handle = $client->doBackground( 'serviceRequest', $data );
Later on, you can retrieve the status by calling the jobStatus function on the same $client object:
$status = $client->jobStatus( $handle );
This is only meaningful, though, if you actually change the status from within your worker with the sendStatus method:
$worker = new GearmanWorker( );
$worker->addFunction( 'serviceRequest', function( $job ) {
    $max = 10;

    // Set initial status - numerator / denominator
    $job->sendStatus( 0, $max );

    for( $i = 1; $i <= $max; $i++ ) {
        sleep( 2 ); // Simulate a long running task
        $job->sendStatus( $i, $max );
    }

    return GEARMAN_SUCCESS;
} );

while( $worker->work( ) ) {
    $worker->wait( );
}
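Putting the pieces together on the client side, a minimal polling loop could look like the following sketch (it assumes the worker above is running and connected to the same server):

$client = new GearmanClient( );
$client->addServer( '127.0.0.1', 4730 );
$handle = $client->doBackground( 'serviceRequest', 'payload' );

do {
    sleep( 1 );
    // jobStatus returns [known, running, numerator, denominator]
    list( $known, $running, $num, $den ) = $client->jobStatus( $handle );
    if ( $known ) {
        printf( "progress: %d/%d\n", $num, $den );
    }
} while ( $known && $running );

echo "done\n";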
In versions of Gearman prior to 0.5, you would use the GearmanJob::status method to set the status of a job. Versions 0.6 to current (1.1) use the methods above.
See also this question: Problem With Gearman Job Status