Understanding retry logic in the AWS PHP SDK

It seems that only the DynamoDB and S3 clients have retry logic enabled.
The retries config value appears to have no effect on other services. Is there an easy way to enable retries on others (e.g. SQS), or have I misunderstood this functionality?
I've located the clientConfig.setUseThrottleRetries(true); option in the Java SDK, but have yet to find an equivalent in the PHP SDK.

You have misunderstood: SQS does have retries enabled.
To test it, try the following:
$sqsClient = new Aws\Sqs\SqsClient([
    'version' => '2012-11-05',
    'region'  => 'us-east-1', // any region works for this test
    'retries' => 2,
]);
$startTime = time();
try {
    $sqsClient->receiveMessage([
        'QueueUrl'        => $queueUrl, // URL of an existing, empty queue
        'WaitTimeSeconds' => 5,
    ]);
} catch (Exception $e) {
}
$timeTaken = time() - $startTime;
echo $timeTaken;
and do not send any messages. You'll see
10
as that is WaitTimeSeconds * retries.
If you get no messages, it's considered a failure and the client will retry to get some.
S3 and DynamoDB have special cases - which is what you noticed.
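If you want to confirm for yourself that the retry middleware is attached to an SQS client, one option is to count HTTP attempts with a tap middleware. This is only a sketch: the region and queue URL are placeholders, and it assumes an SDK v3 version where retries are attached to the 'sign' step of the handler list, so a tap appended there afterwards runs once per attempt.
use Aws\Middleware;
use Aws\Sqs\SqsClient;

$sqsClient = new SqsClient([
    'version' => '2012-11-05',
    'region'  => 'us-east-1',   // placeholder region
    'retries' => 2,
]);

// Count how many times a request is actually sent.
$attempts = 0;
$sqsClient->getHandlerList()->appendSign(
    Middleware::tap(function () use (&$attempts) {
        $attempts++;
    }),
    'count-attempts'
);

try {
    $sqsClient->receiveMessage([
        'QueueUrl'        => $queueUrl,  // URL of an existing queue
        'WaitTimeSeconds' => 5,
    ]);
} catch (Exception $e) {
}

echo "HTTP attempts: {$attempts}\n";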
Edit:
This works (hacking the AWS SDK code directly):
final class Middleware
{
    public static function retry(
        callable $decider = null,
        callable $delay = null,
        $stats = false
    ) {
        echo 'Forcing retries to false';
        $decider = function () { return false; };
        // ...
This doesn't (in my code):
$decider = function () {
    echo 'No retries';
    return false;
};
$client->getHandlerList()->appendSign(\Aws\Middleware::retry($decider, null), 'retry');

Related

How to run a computation while running a websocket server in PHP?

I have the following scenario:
I have an API built with the Slim PHP framework. I am using the PHP lib Ratchet to run a WebSocket server. Once the WebSocket server is started, I want to run a function that does some computation while the server is running.
So far, inside my API, I have a route that calls the MyMethod method of a class MyClass. Inside the class, I have the following:
use Ratchet\Http\HttpServer;
use Ratchet\Server\IoServer;
use Ratchet\WebSocket\WsServer;
use React\Promise\Deferred;

class MyClass {
    public $calculation_status;

    public function MyMethod() {
        $server = IoServer::factory(
            new HttpServer(
                new WsServer(
                    new messengerApp($this)
                )
            ),
            8080
        );
        $this->doCalculationAsynchronously()->then(
            function ($result) {
                $this->calculation_status = 'finished';
            },
            function ($reason) {
                $this->calculation_status = 'stopped';
            },
            function ($update) {
                $this->calculation_status = 'still working...';
            }
        );
        $server->run($this);
    }

    public function doCalculationAsynchronously() {
        $deferred = new Deferred();
        $this->computeSomethingAsync(function ($error = null, $result = null) use ($deferred) {
            if ($error) {
                $deferred->reject($error);
            } else {
                $deferred->resolve($result);
            }
        });
        return $deferred->promise();
    }

    public function computeSomethingAsync($callback) {
        // Simulate a long running calculation
        while (true) {} // OR sleep(1000000);
    }
}
So I'm expecting this to start running the asynchronous calculation, return a promise to MyMethod, and then run my WebSocket server.
The reason for injecting $this into the server is to access my calculation_status property and be able to send it to clients through the WebSocket.
This code is inspired by the example for Deferred in the ReactPHP documentation.
Note: If I don't have the forever while loop, it goes on and runs the server correctly (but this is synchronous behavior; my goal for the server is to send the calculation status to clients). Injecting the class into the object works fine as well.
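This is not from the original question, but one possible non-blocking approach is to break the calculation into small slices and run them on the same event loop that drives the WebSocket server. A minimal sketch for the body of MyMethod, assuming Ratchet's IoServer exposes its loop via the public $loop property and that the work can be chunked (the 100-step completion condition is hypothetical):
$server = IoServer::factory(
    new HttpServer(
        new WsServer(
            new messengerApp($this)
        )
    ),
    8080
);

// Run one slice of the calculation every 100 ms on the server's own loop.
$step = 0;
$server->loop->addPeriodicTimer(0.1, function ($timer) use (&$step, $server) {
    $step++; // do one small chunk of the real work here
    $this->calculation_status = 'still working...';
    if ($step >= 100) { // hypothetical completion condition
        $this->calculation_status = 'finished';
        $server->loop->cancelTimer($timer);
    }
});

$server->run();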

Overwriting retries / retry decider for individual calls in AWS SDK

Is there a way of overwriting retries for an individual call in AWS SDK for PHP?
The following code explains the question:
// Create client with a default of 2 retries
$sqsClient = new Aws\Sqs\SqsClient([
    'version' => '2012-11-05',
    'region'  => 'us-east-1',
    'retries' => 2,
]);

// This will retry twice to get the queue attributes (perfect)
try {
    $sqsClient->getQueueAttributes(['QueueUrl' => $queueUrl]);
} catch (Exception $e) {
}

// I want the following to NEVER retry
try {
    $sqsClient->turnOffRetryLogic(???);
    $sqsClient->receiveMessage([
        'QueueUrl'        => $queueUrl,
        'WaitTimeSeconds' => 5,
    ]);
} catch (Exception $e) {
}

// Now set the retries back to as before.
Retries are handled by the Middleware class, but as it is marked "final" I presumably need to pass in a "decider". This means hooking into one of the handlers, but none of them appear to be connected to retries.
Edit:
I have managed to prove the concept of a new "decider" by directly editing the AWS SDK as follows:
final class Middleware
{
    public static function retry(
        callable $decider = null,
        callable $delay = null,
        $stats = false
    ) {
        // ...
        $decider = function () {
            echo 'retries cancelled';
            return false;
        };
        // ...
So the question is how to do this without editing the SDK. I have tried various middleware hooks as follows, without success.
$decider = function () {
    echo 'No retries';
    return false;
};
$SqsClient->getHandlerList()->appendSign(\Aws\Middleware::retry($decider, null), 'retry');
$result = $SqsClient->receiveMessage($aParams);
(Code samples snipped to only show relevant parts)
The following code removes the retry handler:
$sqsClient->getHandlerList()->remove('retry');
The SQS client isn't going to retry after that. To restore the default behavior, you can attach the default handler back:
use Aws\Middleware;
use Aws\RetryMiddleware;

$decider = RetryMiddleware::createDefaultDecider(3);
$sqsClient->getHandlerList()->appendSign(
    Middleware::retry($decider, null, false),
    'retry'
);
Though, two separate clients, one with retries enabled and one with them disabled, sounds more transparent to me.
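For example, a minimal sketch of that two-client approach (the region and queue URL are placeholders):
use Aws\Sqs\SqsClient;

$config = [
    'version' => '2012-11-05',
    'region'  => 'us-east-1',   // placeholder region
];

// Client for calls that should use the SDK's normal retry behavior.
$sqsWithRetries = new SqsClient($config + ['retries' => 2]);

// Client for calls that must never retry ('retries' => 0 disables the retry middleware).
$sqsNoRetries = new SqsClient($config + ['retries' => 0]);

$sqsWithRetries->getQueueAttributes(['QueueUrl' => $queueUrl]); // your queue URL

// Long-polling receive that should fail fast instead of retrying.
$sqsNoRetries->receiveMessage([
    'QueueUrl'        => $queueUrl,
    'WaitTimeSeconds' => 5,
]);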

React PHP timeout always resolves

I was playing around with React and wanted to try to get a working timeout function. Following (sort of) the examples and Unit tests from https://github.com/reactphp/promise-timer#timeout I came up with:
use React\Promise\Timer;

$promise = uncertainOperation();
$loop = \React\EventLoop\Factory::create();

Timer\timeout($promise, 1, $loop)->then(
    function ($value) {
        var_dump($value);
    },
    function ($error) {
        var_dump($error);
    }
);

$loop->run();

function uncertainOperation() {
    return new React\Promise\Promise(
        function ($resolve) {
            for ($i = 0; $i < 30000000; $i++) { }
            $resolve("Done");
        }
    );
}
But this always resolves with "Done" no matter how low I set the time in Timer\timeout. What am I missing?
The issue with your code is that it blocks and resolves the promise synchronously. It never returns control to the event loop, so the loop has no chance to schedule the timeout timer.
Try changing your code to use a timer to simulate e.g. a network delay:
function uncertainOperation($loop) {
    return new React\Promise\Promise(
        function ($resolve) use ($loop) {
            $loop->addTimer(5, function () use ($resolve) {
                $resolve("Done");
            });
        }
    );
}

$loop->run();
Unfortunately, you have to pass around the loop in React.
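Putting the two pieces together, a minimal self-contained sketch (assuming react/event-loop, react/promise and react/promise-timer are installed; the 5 second timer against a 1 second timeout makes the rejection branch fire):
use React\Promise\Timer;

$loop = \React\EventLoop\Factory::create();

function uncertainOperation($loop) {
    return new React\Promise\Promise(
        function ($resolve) use ($loop) {
            // Simulate a slow operation that completes after 5 seconds.
            $loop->addTimer(5, function () use ($resolve) {
                $resolve("Done");
            });
        }
    );
}

Timer\timeout(uncertainOperation($loop), 1, $loop)->then(
    function ($value) {
        var_dump($value); // not reached: the operation is slower than the timeout
    },
    function ($error) {
        // $error is a React\Promise\Timer\TimeoutException here.
        var_dump($error->getMessage());
    }
);

$loop->run();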

voryx thruway multiple publish

I need to publish messages from a PHP script. I can publish a single message fine, but now I need to publish different messages in a loop and can't find the proper way to do it. Here is what I tried:
$counter = 0;
$closure = function (\Thruway\ClientSession $session) use ($connection, &$counter) {
    // $counter will always be 5
    $session->publish('com.example.hello', ['Hello, world from PHP!!! ' . $counter], [], ["acknowledge" => true])->then(
        function () use ($connection) {
            $connection->close(); // You must close the connection or this will hang
            echo "Publish Acknowledged!\n";
        },
        function ($error) {
            // publish failed
            echo "Publish Error {$error}\n";
        }
    );
};

while ($counter < 5) {
    $connection->on('open', $closure);
    $counter++;
}
$connection->open();
Here I want to publish the $counter value to subscribers, but the value is always 5.
1. Is there a way to open the connection before the loop and then publish messages inside the loop?
2. How do I access $session->publish() from the loop?
Thanks!
There are a couple different ways to accomplish this. Most simply:
$client = new \Thruway\Peer\Client('realm1');
$client->setAttemptRetry(false);
$client->addTransportProvider(new \Thruway\Transport\PawlTransportProvider('ws://127.0.0.1:9090'));

$client->on('open', function (\Thruway\ClientSession $clientSession) {
    for ($i = 0; $i < 5; $i++) {
        $clientSession->publish('com.example.hello', ['Hello #' . $i]);
    }
    $clientSession->close();
});

$client->start();
There is nothing wrong with making many short connections to the router. If you are running in a daemon process, though, it would probably make more sense to set up something that reuses the same client connection and then use the React event loop to manage the looping instead of while(1):
$loop = \React\EventLoop\Factory::create();

$client = new \Thruway\Peer\Client('realm1', $loop);
$client->addTransportProvider(new \Thruway\Transport\PawlTransportProvider('ws://127.0.0.1:9090'));

$loop->addPeriodicTimer(0.5, function () use ($client) {
    // The other stuff you want to do every half second goes here
    $session = $client->getSession();
    if ($session && ($session->getState() == \Thruway\ClientSession::STATE_UP)) {
        $session->publish('com.example.hello', ['Hello again']);
    }
});

$client->start();
Notice that the $loop is now being passed into the client constructor and also that I got rid of the line disabling automatic reconnect (so if there are network issues, your script will reconnect).

check if object exists in Cloud Files (PHP API)

I've just started working with the PHP API for Rackspace Cloud Files. So far so good-- but I am using it as sort of a poor man's memcache, storing key/value pairs of serialized data.
My app attempts to grab the existing cached object by its key ('name' in the API language) using something like this:
$obj = $this->container->get_object($key);
The problem is, if the object doesn't exist, the API throws a fatal error rather than simply returning false. The "right" way to do this with the API would probably be to do a
$objs = $this->container->list_objects();
and then check for my $key value in that list. However, this seems way more time/CPU intensive than just returning false from the get_object request.
Is there a way to do a "search for object" or "check if object exists" in Cloud Files?
Thanks
I sent them a pull request and hope it'll get included.
https://github.com/rackspace/php-cloudfiles/pull/35
My pull request includes an example; for you it would be similar to this:
$object = new CF_Object($this->container, 'key');
if ($object->exists() === false) {
    echo "The object '{$object->name}' does not exist.";
}
I have a more general way to check if an object exists:
try {
    $this->_container->get_object($path);
    $booExists = true;
} catch (Exception $e) {
    $booExists = false;
}
If you dump the $object, you'll see that content_length is zero. Alternatively, last_modified will be a zero-length string.
Example:
$object = new CF_Object($container, 'thisdocaintthere.pdf');
print_r($object->content_length);
There is also, deep in the dumped parent object, a 404 response status, but it's private, so you'd need to do some hacking to get at it.
To see this, do the following:
$object = new CF_Object($container, 'thisdocaintthere.pdf');
print_r($object->container->cfs_http);
You'll see inside that object a response_status that is 404
[response_status:CF_Http:private] => 404
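If you really wanted to read that private response_status without patching the library, reflection would work. A sketch based on the dump above, assuming the constructor has already issued the request that produced the 404:
$object = new CF_Object($container, 'thisdocaintthere.pdf');
$http = $object->container->cfs_http;

// Read the private response_status property shown in the dump above.
$prop = new ReflectionProperty(get_class($http), 'response_status');
$prop->setAccessible(true);

if ($prop->getValue($http) == 404) {
    echo "Object does not exist.";
}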
I know I'm a little late to the party, but hopefully this will help someone in the future: you can use the objectExists() method to test if an object is available.
public static function getObject($container, $filename, $expirationTime = false)
{
    if ($container->objectExists($filename)) {
        $object = $container->getPartialObject($filename);
        // return a private, temporary url
        if ($expirationTime) {
            return $object->getTemporaryUrl($expirationTime, 'GET');
        }
        // return a public url
        return $object->getPublicUrl();
    }
    // object does not exist
    return '';
}
Use like...
// public CDN file
$photo = self::getObject($container, 'myPublicfile.jpg');
// private file; temporary link expires after 60 seconds
$photo = self::getObject($container, 'myPrivatefile.jpg', 60);
If you do not want to import opencloud to perform this check, you can use the following:
$url = 'YOUR CDN URL';
$code = FALSE;

$options['http'] = array(
    'method'         => "HEAD",
    'ignore_errors'  => 1,
    'max_redirects'  => 0
);

$body = file_get_contents($url, NULL, stream_context_create($options));
sscanf($http_response_header[0], 'HTTP/%*d.%*d %d', $code);

if ($code != 200) {
    echo 'failed';
} else {
    echo 'exists';
}
