I'm trying to enable cookie persistence for a set of pecl_http HttpRequest objects, sent through the same HttpRequestPool object (if that matters); unfortunately the documentation is quite scarce, and despite all my attempts I can't get it to work properly.
I have tried both HttpRequestDataShare (where documentation is scarcer still) and the 'cookiestore' request option pointing to a file. I still do not see cookies sent back to the server(s) on subsequent requests.
To be clear, by "cookie persistence" I mean that cookies set by the server are automatically stored and re-sent by pecl_http on subsequent requests, without me having to handle that manually (I will if it comes to it, but I'm hoping I don't have to).
Can anyone point me to a working code sample or application that sends multiple HttpRequest objects to the same server and utilizes pecl_http's cookie persistence?
Thanks!
Mind that the request pool tries to send all requests in parallel, so they cannot know about cookies that have not been received yet. For example:
<?php
$url = "http://dev.iworks.at/ext-http/.cookie.php";
function cc($a) { return array_map("current", array_map("current", $a)); }
$single_req = new HttpRequest($url);
printf("1st single request cookies:\n");
$single_req->send();
print_r(cc($single_req->getResponseCookies()));
printf("waiting 1 second...\n");
sleep(1);
printf("2nd single request cookies:\n");
$single_req->send();
print_r(cc($single_req->getResponseCookies()));
printf("1st pooled request cookies:\n");
$pooled_req = new HttpRequestPool(new HttpRequest($url), new HttpRequest($url));
$pooled_req->send();
foreach ($pooled_req as $req) {
    print_r(cc($req->getResponseCookies()));
}
printf("waiting 1 second...\n");
sleep(1);
printf("2nd pooled request cookies:\n");
$pooled_req = new HttpRequestPool(new HttpRequest($url), new HttpRequest($url));
$pooled_req->send();
foreach ($pooled_req as $req) {
    print_r(cc($req->getResponseCookies()));
}
printf("waiting 1 second...\n");
sleep(1);
printf("now creating a request datashare\n");
$pooled_req = new HttpRequestPool(new HttpRequest($url), new HttpRequest($url));
$s = new HttpRequestDataShare();
$s->cookie = true;
foreach ($pooled_req as $req) {
    $s->attach($req);
}
printf("1st pooled request cookies:\n");
$pooled_req->send();
foreach ($pooled_req as $req) {
    print_r(cc($req->getResponseCookies()));
}
printf("waiting 1 second...\n");
sleep(1);
printf("2nd pooled request cookies:\n");
$pooled_req->send();
foreach ($pooled_req as $req) {
    print_r(cc($req->getResponseCookies()));
}
Disclaimer: This is my first time working with ReactPHP and "php promises", so the solution might just be staring me in the face 🙄
I'm currently working on a little project where I need to create a Slack bot. I decided to use the Botman package and utilize its Slack RTM driver. This driver uses ReactPHP's promises to communicate with the RTM API.
My problem:
When I make the bot reply to a command, I want to retrieve the response from the RTM API, so I can cache the ID of the posted message.
The problem is that the response is returned inside one of these React\Promise\Promise objects, and I simply can't figure out how to retrieve the data.
What I'm doing:
So when a command is triggered, the bot sends a reply to Slack:
$response = $bot->reply($response)->then(function (Payload $item) {
    return $this->reply = $item;
});
But then $response consists of an (empty?) React\Promise\Promise:
React\Promise\Promise^ {#887
    -canceller: null
    -result: null
    -handlers: []
    -progressHandlers: & []
    -requiredCancelRequests: 0
    -cancelRequests: 0
}
I've also tried using done() instead of then(), which (as far as I can tell) is what the official ReactPHP docs suggest you use to retrieve data from a promise:
$response = $bot->reply($response)->done(function (Payload $item) {
    return $this->reply = $item;
});
But then $response returns as null.
The funny thing is, during debugging, I tried to do a var_dump($item) inside the then() but had forgotten to remove a (non-existent) method call on the promise. And then the var_dump actually returned the data 🤯
$response = $bot->reply($response)->then(function (Payload $item) {
    var_dump($item);
    return $this->reply = $item;
})->resolve();
So from what I can fathom, it's like I somehow need to "execute" the promise again, even though it has been resolved before being returned.
Inside the Bot's reply method, this is what's going on and how the ReactPHP promise is being generated:
public function apiCall($method, array $args = [], $multipart = false, $callDeferred = true)
{
    // create the request url
    $requestUrl = self::BASE_URL . $method;

    // set the api token
    $args['token'] = $this->token;

    // send a post request with all arguments
    $requestType = $multipart ? 'multipart' : 'form_params';
    $requestData = $multipart ? $this->convertToMultipartArray($args) : $args;

    $promise = $this->httpClient->postAsync($requestUrl, [
        $requestType => $requestData,
    ]);

    // Add requests to the event loop to be handled at a later date.
    if ($callDeferred) {
        $this->loop->futureTick(function () use ($promise) {
            $promise->wait();
        });
    } else {
        $promise->wait();
    }

    // When the response has arrived, parse it and resolve. Note that our
    // promises aren't pretty; Guzzle promises are not compatible with React
    // promises, so the only Guzzle promises ever used die in here and it is
    // React from here on out.
    $deferred = new Deferred();

    $promise->then(function (ResponseInterface $response) use ($deferred) {
        // get the response as a json object
        $payload = Payload::fromJson((string) $response->getBody());

        // check if there was an error
        if (isset($payload['ok']) && $payload['ok'] === true) {
            $deferred->resolve($payload);
        } else {
            // make a nice-looking error message and throw an exception
            $niceMessage = ucfirst(str_replace('_', ' ', $payload['error']));
            $deferred->reject(new ApiException($niceMessage));
        }
    });

    return $deferred->promise();
}
You can see the full source of it here.
Please just point me in some kind of direction. I feel like I tried everything, but obviously I'm missing something or doing something wrong.
ReactPHP core team member here. There are a few options and things going on here.
First off, then will never return the value from a promise; it returns a new promise, so you can create a promise chain. As a result, you start a new async operation in each then, which takes in the result of the previous one.
Secondly, done never returns a result value either; it works much like then, but will throw any uncaught exceptions from the previous promises in the chain.
The thing with both then and done is that they are your resolution methods. A promise merely represents the result of an operation that isn't done yet. It will call the callable you hand to then/done once the operation is ready and the promise resolves. So ultimately, all your operations happen inside a callable, one way or another and in the broadest sense. (The callable can also be an __invoke method on a class, depending on how you set it up. This is also why I'm so excited about short closures coming in PHP 7.4.)
You have two options here:
Run all your operations inside callables
Use RecoilPHP
The former means a lot more mind-mapping and learning how async works and how to wrap your head around that. The latter makes it easier, but requires you to run each path in a coroutine (a callable with some cool magic).
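To make the first point concrete, here is a deliberately simplified, synchronous stand-in for a promise (not React's actual implementation, which is asynchronous) showing why then() hands back a new promise rather than the value:

```php
<?php
// Minimal, synchronous stand-in for a promise, only to illustrate the
// chaining semantics; the real React\Promise\Promise resolves later.
class TinyPromise
{
    private $value;

    public function __construct($value)
    {
        $this->value = $value;
    }

    // then() does NOT return $this->value; it wraps the callback's
    // return value in a brand-new promise, which is what enables chains.
    public function then(callable $onFulfilled): TinyPromise
    {
        return new TinyPromise($onFulfilled($this->value));
    }

    // Only for demonstration; real promises expose no getter like this.
    public function peek()
    {
        return $this->value;
    }
}

$result = (new TinyPromise(2))
    ->then(function ($v) { return $v * 10; })  // receives 2, next promise holds 20
    ->then(function ($v) { return $v + 1; });  // receives 20, next promise holds 21

var_dump($result instanceof TinyPromise); // bool(true): still a promise, not a value
var_dump($result->peek());                // int(21)
```

In the real library the callback simply fires later, once the underlying operation completes, which is why there is no peek(): the only way at the value is through the callable.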
I have a poll route on an API on a Laravel 5.7 server, where the API user can request any information added since their last poll.
The easy part is responding immediately to a valid request when there is new information: return $this->prepareResult($newData);
If there is no new data, I store a poll request in the database, and a cron utility can then check once a minute for all poll requests and respond to any polls where data has been updated. Alternatively, I can create an event listener for data updates and fire off a response to the poll when the data is updated.
I'm stuck on how to restore each session to match the device waiting for the update. I can store or pass the session ID, but how do I make sure the cron task / event processor can respond to the correct IP address just as if it were responding to the original request? Can PHP even do this?
I am trying to avoid websockets, as I will have lots of devices but limited updates / interactions.
Clients poll for updates; APIs do not push updates.
REST APIs are supposed to be stateless, so having the backend keep track goes against REST.
To answer your question specifically: if you do not want to use websockets, the client app will have to keep polling the endpoint until data is available.
Long polling is a valid technique, but I think it is a bad idea to run the poll with a session, since a session belongs only to the original user. You can run your long poll with the PHP CLI instead, and check in your middleware that the poll route is allowed for CLI only. You can use pthreads.
To run your long poll, use pthreads via the CLI (note that pthreads v3 is designed so that it can only be used safely and sensibly from the CLI anyway). You can use cron to trigger your thread every hour. In your controller, store $time = time(); to mark the start of execution, then create a do-while loop for the poll process; the while condition can be something like (time() < $time + 3600). Inside the loop, check whether a poll request exists and, if so, run it. At the bottom of the loop, sleep for a few seconds, for example two.
In your background.php (this file is executed by cron):
<?php
error_reporting(-1);
ini_set('display_errors', 1);

class Atomic extends Threaded {

    public function __construct($data = NULL) {
        $this->data = $data;
    }

    private $data;
    private $method;
    private $class;
    private $config;
}

class Task extends Thread {

    public function __construct(Atomic $atomic) {
        $this->atomic = $atomic;
    }

    public function run() {
        $this->atomic->synchronized(function ($atomic) {
            chdir($atomic->config['root']);
            $exec_statement = array(
                "php7.2.7",
                $atomic->config['index'],
                $atomic->class,
                $atomic->method
            );
            echo "Running Command".PHP_EOL.implode(" ", $exec_statement)." at: ".date("Y-m-d H:i:s").PHP_EOL;
            $data = shell_exec(implode(" ", $exec_statement));
            echo $data.PHP_EOL;
        }, $this->atomic);
    }

    private $atomic;
}

$config = array(
    "root" => "/var/www/api.example.com/api/v1.1",
    "index" => "index.php",
    "interval_execution_time" => 200
);

chdir($config['root']);

$threads = array();
$list_threads = array(
    array(
        "class" => "Background_workers",
        "method" => "send_email",
        "total_thread" => 2
    ),
    array(
        "class" => "Background_workers",
        "method" => "updating_data_user",
        "total_thread" => 2
    ),
    array(
        "class" => "Background_workers",
        "method" => "sending_fcm_broadcast",
        "total_thread" => 2
    )
);

for ($i = 0; $i < count($list_threads); $i++) {
    $total_thread = $list_threads[$i]['total_thread'];
    for ($j = 0; $j < $total_thread; $j++) {
        $atomic = new Atomic();
        $atomic->class = $list_threads[$i]['class'];
        $atomic->method = $list_threads[$i]['method'];
        $atomic->thread_number = $j;
        $atomic->config = $config;

        $threads[] = new Task($atomic);
    }
}

foreach ($threads as $thread) {
    $thread->start();
    usleep(200);
}

foreach ($threads as $thread) {
    $thread->join();
}
?>
And this in your controller:
<?php
defined('BASEPATH') OR exit('No direct script access allowed');

class Background_workers extends MX_Controller {

    public function __construct()
    {
        parent::__construct();
        $this->load->database();
        $this->output->enable_profiler(FALSE);
        $this->configuration = $this->config->item("configuration_background_worker_module");
    }

    public function sending_fcm_broadcast() {
        $time_run = time();
        $time_stop = strtotime("+1 hour");
        do {
            $time_run = time();
            modules::run("Background_worker_module/sending_fcm_broadcast", $this->configuration["fcm_broadcast"]["limit"]);
            sleep(2);
        } while ($time_run < $time_stop);
    }
}
This is sample running code from a CodeIgniter controller.
Long polling requires holding the connection open. That can only happen through an infinite loop that checks whether the data exists yet, with a sleep between checks.
There is no need to revive the session, as the response is fired only on a successful data hit.
Note that this method is very CPU- and memory-intensive, as the connection and the FPM worker remain open until a successful data hit. WebSockets are a much better solution regardless of the number of devices and frequency of updates.
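The loop described above can be sketched as follows; the data source is stubbed out, and the timeout and polling interval are illustrative, not prescriptive:

```php
<?php
// Hypothetical data source: pretend new data appears on the third check.
// In a real endpoint this would query the database for updates.
function fetchNewData(int $attempt): ?array
{
    return $attempt >= 3 ? ['message' => 'hello'] : null;
}

// Long-poll loop: hold the request open, checking periodically until
// data arrives or a timeout is reached, then respond.
function longPoll(int $timeoutSeconds = 30, int $intervalMicros = 100000): ?array
{
    $deadline = microtime(true) + $timeoutSeconds;
    $attempt = 0;

    while (microtime(true) < $deadline) {
        $attempt++;
        $data = fetchNewData($attempt);
        if ($data !== null) {
            return $data;        // success: respond with the new data
        }
        usleep($intervalMicros); // blocking sleep keeps the FPM worker busy
    }

    return null;                 // timeout: respond with "no new content"
}

var_dump(longPoll(5));
```

This makes the cost visible: the worker does nothing but sleep and re-check for the whole wait, which is exactly why this approach ties up a connection and a worker per polling client.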
You can use notifications: browser notifications for web clients, and FCM/APNs notifications for mobile clients.
Another option is SSE (Server-Sent Events). It's a socket-like connection, but over HTTP: the client sends a normal request, and the server can respond to that client multiple times, at any time, for as long as the client stays connected, all within that same request.
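A minimal sketch of what such an SSE response looks like from the server side (the update values here are placeholders; in production the client would be a browser EventSource, and you would flush after each event as it happens):

```php
<?php
// Server-Sent Events sketch: one HTTP response that the server keeps
// writing to. Under a web SAPI these headers tell the browser's
// EventSource to keep the connection open; in the CLI, header() is a no-op.
header('Content-Type: text/event-stream');
header('Cache-Control: no-cache');

// Hypothetical event source: three updates, for demonstration only.
$updates = ['first', 'second', 'third'];

$stream = '';
foreach ($updates as $i => $update) {
    // Each message is "data: ...\n\n"; the optional "id:" field lets the
    // client resume from the last event it saw after a reconnect.
    $stream .= "id: {$i}\n";
    $stream .= "data: {$update}\n\n";
}

echo $stream;
flush(); // in a real endpoint, flush after every individual event
```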
I have created a soft phone for use with Twilio, using Twilio.js (1.4) and the Twilio REST API.
On the connection callback, I have a need to fetch the childSid for a call. To accommodate this I created a route in my Laravel app to use the Calls list resource and get it into the browser using jQuery.get() on the connection callback.
For some reason the API does not respond at all if I don't first wait about 12 seconds after the initial connection. After using sleep(12) in my PHP function I can successfully read the calls and filter for the ParentSid with no issues.
Is there a reason the API will not respond if invoked too soon after a connection is made via Twilio.js? It seems to only do this when I'm using $client->calls->read(). I have no problem retrieving a parentCallSid immediately using $client->calls($callSid)->fetch().
Here is the original code:
public function showChildCallSid(Request $request, Client $client) {
    $callSid = $request->input('CallSid');
    sleep(12); // only works after waiting about 12 seconds
    $call = $client->calls->read(['ParentCallSid' => $callSid])[0];
    return $call->sid;
}
I believe the issue was ultimately one of timing: the child call simply doesn't exist yet right after the connection. I revised the code as shown below and it works very well now (usually needing only a 1-second pause):
public function showChildCallSid(Request $request, Client $client) {
    $callSid = $request->input('CallSid');
    $attempt = 1;
    $maxAttempts = 15;
    do {
        $calls = $client->calls->read(['ParentCallSid' => $callSid]);
        if (sizeof($calls) > 0) {
            break;
        }
        sleep(1);
        $attempt++;
    } while ($attempt < $maxAttempts);
    $childSid = $calls[0]->sid;
    return $childSid;
}
I am attempting to use Guzzle promises to make some HTTP calls. To illustrate what I have, I made this simple example where a fake HTTP request takes 5 seconds:
$then = microtime(true);

$promise = new Promise(
    function () use (&$promise) {
        // Make a request to an http server
        $httpResponse = 200;
        sleep(5);
        $promise->resolve($httpResponse);
    });

$promise2 = new Promise(
    function () use (&$promise2) {
        // Make a request to an http server
        $httpResponse = 200;
        sleep(5);
        $promise2->resolve($httpResponse);
    });

echo 'PROMISE_1 ' . $promise->wait();
echo 'PROMISE_2 ' . $promise2->wait();

echo 'Took: ' . (microtime(true) - $then);
Now what I want is to start both of them, and then have both echos await the response. What actually happens is that promise 1 fires and waits 5 seconds, then promise 2 fires and waits another 5 seconds.
From my understanding I should maybe be using the promise's ->resolve() function to make it start, but I don't know how to pass resolve a function in which I would make an HTTP call.
By using wait() you're forcing the promise to be resolved synchronously: https://github.com/guzzle/promises#synchronous-wait
According to the Guzzle FAQ, you should use requestAsync() for your RESTful calls:
Can Guzzle send asynchronous requests?
Yes. You can use the requestAsync, sendAsync, getAsync, headAsync,
putAsync, postAsync, deleteAsync, and patchAsync methods of a client
to send an asynchronous request. The client will return a
GuzzleHttp\Promise\PromiseInterface object. You can chain then
functions off of the promise.
$promise = $client->requestAsync('GET', 'http://httpbin.org/get');
$promise->then(function ($response) {
    echo 'Got a response! ' . $response->getStatusCode();
});
You can force an asynchronous response to complete using the wait()
method of the returned promise.
$promise = $client->requestAsync('GET', 'http://httpbin.org/get');
$response = $promise->wait();
This question is a little old but I see no answer, so I'll give it a shot, maybe someone will find it helpful.
You can use the function all($promises).
I can't find documentation for this function, but you can find its implementation here.
The comment above this function starts like this:
Given an array of promises, return a promise that is fulfilled when all the items in the array are fulfilled.
Sounds like what you are looking for, so you can do something like this:
$then = microtime(true);

$promises = [];

$promises[] = $promise1 = new Promise(
    function () use (&$promise1) {
        // Make a request to an http server
        $httpResponse = 200;
        sleep(5);
        $promise1->resolve($httpResponse);
    });

$promises[] = $promise2 = new Promise(
    function () use (&$promise2) {
        // Make a request to an http server
        $httpResponse = 200;
        sleep(5);
        $promise2->resolve($httpResponse);
    });

all($promises)->wait();

echo 'Took: ' . (microtime(true) - $then);
If this function isn't the one that helps you solve your problem, there are other interesting functions in that file like some($count, $promises), any($promises) or settle($promises).
You can use the function Utils::all($promises)->wait();
Here is a code example for "guzzlehttp/promises": "^1.4":
$promises = [];
$key = 0;

foreach (something...) {
    $key++;
    $promises[$key] = new Promise(
        function () use (&$promises, $key) {
            // here you can call some sort of async operation
            // ...
            // at the end, call the ->resolve method
            $promises[$key]->resolve('bingo');
        }
    );
}

$res = Utils::all($promises)->wait();
It is important that the operation inside your promise be non-blocking if you want a concurrent workflow. For example, sleep(1) is a blocking operation, so 10 promises with sleep(1) will together still take 10 seconds.
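That blocking behaviour is easy to verify in plain PHP, with no Guzzle involved (timings are approximate):

```php
<?php
// sleep() blocks the single PHP thread, so two "tasks" that each sleep
// for one second always take about two seconds back to back; nothing in
// plain PHP runs them concurrently for you.
$start = microtime(true);

$tasks = [
    function () { sleep(1); return 'first'; },
    function () { sleep(1); return 'second'; },
];

$results = array_map(function (callable $t) { return $t(); }, $tasks);

$elapsed = microtime(true) - $start;
echo implode(', ', $results), "\n"; // first, second
printf("elapsed: ~%.1f seconds\n", $elapsed);
```

This is why wrapping blocking calls in promises buys nothing: concurrency has to come from a non-blocking transport (such as curl_multi, which Guzzle uses under the hood for its async requests).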
My web app requires making 7 different SOAP (WSDL) API requests to complete one task (I need the users to wait for the result of all the requests). The average response time is 500 ms to 1.7 seconds for each request, so I need to run all these requests in parallel to speed up the process.
What's the best way to do that:
pthreads, or
Gearman workers, or
fork process, or
curl multi (I would have to build the XML SOAP body)?
Well, the first thing to say is that it's never really a good idea to create threads in direct response to a web request; think about how far that will actually scale.
If you create 7 threads for everyone who comes along and 100 people turn up, you'll be asking your hardware to execute 700 threads concurrently, which is quite a lot to ask of anything really...
However, scalability is not something I can usefully help you with, so I'll just answer the question.
<?php
/* the first service I could find that worked without authorization */
define("WSDL", "http://www.webservicex.net/uklocation.asmx?WSDL");

class CountyData {
    /* this works around simplexmlelements being unsafe (and shit) */
    public function __construct(SimpleXMLElement $element) {
        $this->town = (string) $element->Town;
        $this->code = (string) $element->PostCode;
    }

    public function run() {}

    protected $town;
    protected $code;
}

class GetCountyData extends Thread {
    public function __construct($county) {
        $this->county = $county;
    }

    public function run() {
        $soap = new SoapClient(WSDL);

        $result = $soap->getUkLocationByCounty(array(
            "County" => $this->county
        ));

        foreach (simplexml_load_string(
                     $result->GetUKLocationByCountyResult) as $element) {
            $this[] = new CountyData($element);
        }
    }

    protected $county;
}

$threads = [];
$thread = 0;
$threaded = true; # change to false to test without threading

$counties = [ # will create as many threads as there are counties
    "Buckinghamshire",
    "Berkshire",
    "Yorkshire",
    "London",
    "Kent",
    "Sussex",
    "Essex"
];

while ($thread < count($counties)) {
    $threads[$thread] =
        new GetCountyData($counties[$thread]);
    if ($threaded) {
        $threads[$thread]->start();
    } else $threads[$thread]->run();
    $thread++;
}

if ($threaded)
    foreach ($threads as $thread)
        $thread->join();

foreach ($threads as $county => $data) {
    printf(
        "Data for %s %d\n", $counties[$county], count($data));
}
?>
Note that the SoapClient instance is not, and cannot be, shared across threads; this may well slow you down. You might want to enable caching of WSDLs...
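For completeness, since the question lists curl multi among the options: a sketch of fanning the requests out concurrently without threads, assuming ext-curl is available. The URLs and the SOAP envelope here are placeholders, not a real service, so the transfers will simply fail fast in this demo:

```php
<?php
// curl_multi sketch: fan out several HTTP POSTs (e.g. hand-built SOAP
// envelopes) in parallel from a single PHP process, no threads needed.
$urls = [
    'http://example.invalid/soap-endpoint-1', // placeholder endpoint
    'http://example.invalid/soap-endpoint-2', // placeholder endpoint
];
$envelope = '<?xml version="1.0"?><soap:Envelope/>'; // placeholder body

$mh = curl_multi_init();
$handles = [];

foreach ($urls as $url) {
    $ch = curl_init($url);
    curl_setopt_array($ch, [
        CURLOPT_POST           => true,
        CURLOPT_POSTFIELDS     => $envelope,
        CURLOPT_HTTPHEADER     => ['Content-Type: text/xml; charset=utf-8'],
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_TIMEOUT        => 2, // keep the demo short on dead hosts
    ]);
    curl_multi_add_handle($mh, $ch);
    $handles[] = $ch;
}

// Drive all transfers in parallel until every one has finished (or failed).
do {
    $status = curl_multi_exec($mh, $running);
    if ($running) {
        curl_multi_select($mh); // wait for activity instead of busy-looping
    }
} while ($running && $status === CURLM_OK);

foreach ($handles as $ch) {
    // Each response body (or per-handle error) is available here.
    $body = curl_multi_getcontent($ch);
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);

echo "Completed " . count($handles) . " transfers\n";
```

The trade-off versus pthreads is that you give up SoapClient and build the envelopes yourself, but the fan-out scales with libcurl's event handling rather than with OS threads per request.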