I'm trying to understand what the GEARMAN_CLIENT_NON_BLOCKING option does.
Using this example:
<?php
$client = new GearmanClient();
$client->setOptions(GEARMAN_CLIENT_NON_BLOCKING);
$client->addServer('127.0.0.1', 4730);
var_dump("before");
$client->doNormal('queue', 'data');
var_dump("after");
The script never prints "after" if no worker is listening on the "queue" function.
According to the documentation (http://gearman.info/libgearman/gearman_client_options.html), this option should allow the request to be performed in "non-blocking" mode.
I know that if I want to send a job without waiting for the worker's response, I should use the "doBackground" method.
So what does "non-blocking" mode mean on the client side?
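For what it's worth, my understanding (a sketch, not verified against every libgearman version) is that non-blocking mode changes how the client does socket I/O: calls like doNormal() return early with GEARMAN_IO_WAIT instead of sleeping inside the library, and the caller is expected to drive the loop itself, roughly like this:

```php
<?php
// Sketch only: assumes the gearman extension; the loop structure is an
// illustration of the GEARMAN_IO_WAIT protocol, not verified code.
$client = new GearmanClient();
$client->addServer('127.0.0.1', 4730);
$client->setOptions(GEARMAN_CLIENT_NON_BLOCKING);

do {
    $result = $client->doNormal('queue', 'data');
    $code = $client->returnCode();

    if ($code == GEARMAN_IO_WAIT) {
        // Nothing to read yet: this is where "non-blocking" pays off,
        // since you can do other work here before waiting on the socket.
        $client->wait();
    }
} while ($code == GEARMAN_IO_WAIT);

if ($code == GEARMAN_SUCCESS) {
    var_dump($result);
}
```

So, if this reading is right, the option does not make doNormal() fire-and-forget (that is what doBackground is for); it only stops the library from blocking on I/O internally.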
References:
http://docs.guzzlephp.org/en/stable/request-options.html#proxy
Set proxy in Guzzle
Environment:
GuzzleHttp/6.2.1
curl/7.47.0
PHP/7.1.3-3+deb.sury.org~xenial+1
I am trying to use proxy server with async Guzzle calls.
I have discovered that when I set the proxy when creating the client, it works.
e.g.
new Client(['proxy' => 'tcp://64.140.159.209:80'])
However, when I create a client with no options and then set the proxy on the Request, the proxy is not used at all, and Guzzle makes a direct connection from the client machine to the server machine. That is confirmed by hitting http://httpbin.org/ip and inspecting the Origin returned by httpbin.
I need the ability to set proxy on each request.
Here is relevant code:
$client = new Client();
$request = new Request(
    'GET',
    'http://httpbin.org/ip',
    ['proxy' => 'tcp://64.140.159.209:80']
);
$client->sendAsync($request)
    ->then(
        // ...closure here
        // process here
    );
Hope this helps someone.
The documentation (http://docs.guzzlephp.org/en/stable/request-options.html#proxy) only shows request options being passed when making a request through a client.
That means I understood the usage wrong. I was creating a new Request directly and passing the proxy info as the third parameter, expecting it to change per request within a single client; however, the third parameter of the Request constructor is for headers, not request options, so the proxy was never applied. In practice the proxy ended up applying on a per-client basis, even when making asynchronous calls.
So I had to modify my application to use a new client per async request.
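One detail that may be worth re-testing before committing to one client per request: sendAsync() also accepts a request-options array as its second argument, which is where per-request options such as proxy are normally passed (the Request constructor's third parameter is for headers). A sketch:

```php
<?php
use GuzzleHttp\Client;
use GuzzleHttp\Psr7\Request;

$client = new Client();
$request = new Request('GET', 'http://httpbin.org/ip');

// Per-request options go in the second argument of sendAsync(),
// not in the Request constructor (whose third parameter is headers).
$promise = $client->sendAsync($request, [
    'proxy' => 'tcp://64.140.159.209:80',
]);

$promise->then(function ($response) {
    echo $response->getBody(); // httpbin echoes the origin IP here
});

$promise->wait();
```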
I'm trying to fetch statistical data from a web service. Each request has a response time of 1-2 seconds, and I have to submit a request for thousands of IDs, one at a time. Because of the server's response time, all the requests together would take a few hours.
I want to parallelize as many requests as possible (the server can handle it). I've installed PHP 7 and pthreads (CLI only), but the maximum number of threads is limited (20 in the Windows PHP CLI), so I have to start multiple processes.
Is there any simple PHP-based framework/library for multi-process/pthread and job-queue handling? I don't need a large framework like Symfony or Laravel.
Workers
You could look into using php-resque, which doesn't require pthreads.
You will have to run a local Redis server though (it could also be remote). I believe you can run Redis on Windows, according to this SO
Concurrent Requests
You may also want to look into sending concurrent requests using something like GuzzleHttp; you can find examples of how to use that here
From the Docs:
You can send multiple requests concurrently using promises and asynchronous requests.
use GuzzleHttp\Client;
use GuzzleHttp\Promise;
$client = new Client(['base_uri' => 'http://httpbin.org/']);
// Initiate each request but do not block
$promises = [
    'image' => $client->getAsync('/image'),
    'png'   => $client->getAsync('/image/png'),
    'jpeg'  => $client->getAsync('/image/jpeg'),
    'webp'  => $client->getAsync('/image/webp')
];

// Wait on all of the requests to complete. Throws a ConnectException
// if any of the requests fail
$results = Promise\unwrap($promises);

// You can access each result using the key provided to the unwrap
// function. Note that getHeader() returns an array of values.
echo $results['image']->getHeader('Content-Length')[0];
echo $results['png']->getHeader('Content-Length')[0];

// Alternatively, wait for the requests to complete even if some of
// them fail. settle() resolves to ['state' => ..., 'value' => ...]
// entries rather than bare responses.
$results = Promise\settle($promises)->wait();
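For the thousands-of-IDs case in the question, the same Guzzle documentation also describes GuzzleHttp\Pool, which caps the number of requests in flight. A rough sketch; the base URI, URL pattern, ID list and concurrency value are all placeholders:

```php
<?php
use GuzzleHttp\Client;
use GuzzleHttp\Pool;
use GuzzleHttp\Psr7\Request;

$client = new Client(['base_uri' => 'http://example.com/']);

$ids = range(1, 5000); // placeholder ID list

// A generator yields one request at a time, so thousands of Request
// objects are not built up front.
$requests = function () use ($ids) {
    foreach ($ids as $id) {
        yield new Request('GET', "/stats/{$id}");
    }
};

$pool = new Pool($client, $requests(), [
    'concurrency' => 20, // tune to what the server can handle
    'fulfilled' => function ($response, $index) {
        // handle a successful response here
    },
    'rejected' => function ($reason, $index) {
        // handle a failed request here
    },
]);

// Initiate the transfers and block until they are all done
$pool->promise()->wait();
```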
I have replaced cURL calls to an API with RabbitMQ RPC messages. Everything works fine with the rabbitmq example.
Still, it looks like the implementation is wrong, as every request opens a connection, opens a channel, sends the message, waits for the response, gets the response, closes the channel and closes the connection.
How can I implement RabbitMQ RPC calls so that every request reuses the same connection, using PHP?
I am using the https://github.com/videlalvaro/php-amqplib library.
My implementation looks like this: https://gist.github.com/fordnox/fa41e1233a207ec5416c
Using it like this:
$rpc = new RabbitRpc([/* config array */]);
$result = $rpc->callOnServer(1, ["foo" => "bar"]);
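One common way to restructure this (modelled on the RabbitMQ RPC tutorial for php-amqplib; the class internals, queue name and config keys below are assumptions, not taken from the actual gist) is to open the connection, channel and reply queue once in the constructor and reuse them for every call:

```php
<?php
use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Message\AMQPMessage;

class RabbitRpc
{
    private $connection;
    private $channel;
    private $callbackQueue;
    private $response;
    private $corrId;

    public function __construct(array $config)
    {
        // Opened once, reused by every callOnServer() invocation
        $this->connection = new AMQPStreamConnection(
            $config['host'], $config['port'],
            $config['user'], $config['password']
        );
        $this->channel = $this->connection->channel();

        // Exclusive, auto-named reply queue, also declared only once
        list($this->callbackQueue, ,) = $this->channel->queue_declare(
            '', false, false, true, false
        );
        $this->channel->basic_consume(
            $this->callbackQueue, '', false, true, false, false,
            function ($msg) {
                if ($msg->get('correlation_id') === $this->corrId) {
                    $this->response = $msg->body;
                }
            }
        );
    }

    public function callOnServer($method, array $params)
    {
        $this->response = null;
        $this->corrId = uniqid();

        $msg = new AMQPMessage(
            json_encode(['method' => $method, 'params' => $params]),
            [
                'correlation_id' => $this->corrId,
                'reply_to'       => $this->callbackQueue,
            ]
        );
        // 'rpc_queue' is a placeholder for whatever queue the server consumes
        $this->channel->basic_publish($msg, '', 'rpc_queue');

        // Block on this channel until our reply arrives
        while ($this->response === null) {
            $this->channel->wait();
        }
        return $this->response;
    }

    public function __destruct()
    {
        $this->channel->close();
        $this->connection->close();
    }
}
```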
I'm using Guzzle, which I installed via Composer, and I'm failing to do something relatively straightforward.
I might be misunderstanding the documentation, but essentially what I want to do is send a POST request to a server and continue executing code without waiting for a response. Here's what I have:
$client = new \GuzzleHttp\Client(/*baseUrl, and auth credentials here*/);
$client->post('runtime/process-instances', [
    'future' => true,
    'json'   => $data // is an array
]);
die("I'm done with the call");
Now let's say runtime/process-instances runs for about 5 minutes: I will not get the die message before those 5 minutes are up, when instead I want it right after the request is sent to the server.
Now, I don't have access to the server, so I can't have the server respond before running the execution. I just need to ignore the response.
Any help is appreciated.
Things I've tried:
$client->post(/*blabla*/)->then(function ($response) {});
It is not possible in Guzzle to send a request and immediately exit. Asynchronous requests require that you wait for them to complete; if you do not, the request will not be sent.
Also note that you are using post instead of postAsync; the former is a synchronous (blocking) request. To send a POST request asynchronously, use the latter. In your code example, changing post to postAsync makes the process exit before the request is complete, but then the target will never receive that request.
Have you tried setting a low timeout?
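To spell out the low-timeout idea (a workaround sketch, not true fire-and-forget: the server still receives the request, but a large request body may not finish uploading before the timeout fires, so test with your payload sizes):

```php
<?php
try {
    $client->post('runtime/process-instances', [
        'json'    => $data,
        'timeout' => 1, // give up on reading the response after 1 second
    ]);
} catch (\GuzzleHttp\Exception\TransferException $e) {
    // Timed out waiting for the response -- ignore and carry on.
}

die("I'm done with the call");
```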
I have a web service written in PHP to which an iPhone app connects. When the app calls the service, a series of notification messages is sent to Apple's APNs server so it can then send push notifications to other users of the app.
This process can be time-consuming in some cases, and my app has to wait a long time before getting a response. The response is totally independent of the result of the notification messages being sent to the APNs server.
Therefore, I would like the web service to send the response back to the app regardless of whether the messages to APNs have been sent.
I tried using pcntl_fork to solve the problem:
<?php
// ...

$pid = pcntl_fork();

if ($pid == -1)
{
    // Could not fork (send response anyway)
    echo "response";
}
else if ($pid)
{
    // Parent process - send response to app
    echo "response";
}
else
{
    // Child process - send messages to APNs, then die
    sendMessageAPNs($token_array);
    die();
}
?> // end of script
Unfortunately, the parent process seems to wait for the child process to end before sending the response, even though I do not use pcntl_wait in the parent process. Am I doing something wrong, or is this normal behaviour? If this is normal, is there another way to solve this problem?
Thank you!
If you're hosting the PHP process in Apache then you really shouldn't use this: see this for the section that says "Process Control should not be enabled within a web server environment and unexpected results may happen if any Process Control functions are used within a web server environment."
You should probably set up a separate daemon in your preferred language and hand the APNs communication tasks off to that. If you really must, try using ob_flush().
I think you can send the response back before doing the "long" process. Take a look at PHP's flush() function; it may help.
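A sketch of that "respond first, keep working" pattern (whether the client actually sees the response early depends on the SAPI and on any buffering in front of PHP, e.g. Apache's mod_deflate; sendMessageAPNs and $token_array are from the question):

```php
<?php
ignore_user_abort(true);   // keep running even if the client disconnects

ob_start();
echo "response";
header('Content-Length: ' . ob_get_length());
header('Connection: close');
ob_end_flush();
flush();                   // push the full response to the client now

// The app already has its response; do the slow APNs work afterwards.
sendMessageAPNs($token_array);
```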