I'm using AWS SDK v3 for PHP. Sometimes when I make calls to AWS S3, I get 400 Bad Request errors like: RequestTimeout (client): Your socket connection to the server was not read from or written to within the timeout period, due to a network problem, even when the object I'm trying to interact with exists. What I need is a retry mechanism, and I wonder if the AWS SDK has a simple option to specify the number of times to retry after an error.
I know that I can do that with a simple try/catch and retry, but I'm thinking maybe the SDK already provides a much cleaner way to do that.
I already found the static function Middleware::retry(), but I have no idea how to use it.
You can specify the number of retries when constructing a new instance of any AWS service client class:
$client = new Aws\EC2\Ec2Client([
    'region' => 'eu-central-1',
    'retries' => 3
]);
https://docs.aws.amazon.com/aws-sdk-php/v3/guide/guide/configuration.html#retries
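Note that in newer releases of SDK v3, 'retries' also accepts a configuration array instead of a plain integer, which lets you pick a retry mode. A minimal sketch, assuming a reasonably current SDK version:
use Aws\S3\S3Client;

// 'legacy' is the default mode; 'standard' and 'adaptive' add jittered
// exponential backoff and (for 'adaptive') client-side rate limiting.
$client = new S3Client([
    'region'  => 'eu-central-1',
    'version' => 'latest',
    'retries' => [
        'mode'         => 'standard',
        'max_attempts' => 5, // total attempts, including the first request
    ],
]);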
You can use something like this:
public function get_s3_connection(){
    // $credentials is assumed to be built elsewhere in the class
    $sdk = new \Aws\Sdk([
        'region' => 'us-east-1',
        'version' => 'latest',
        'credentials' => $credentials
    ]);
    $s3_connection = $sdk->createS3([
        'retries' => 3,
        'http' => [
            'connect_timeout' => 15,
            'timeout' => 20
        ],
        'endpoint' => 'https://bucket.0123xyz.s3.us-east-1.amazonaws.com'
    ]);
    return $s3_connection;
}
In the above code, 'retries' => 3 means the SDK will retry a failed request up to 3 times and then stop trying. You can also set the timeout and connect_timeout as shown above.
You can also retry manually by wrapping the call in a do-while loop, as shown below:
public function s3_connect_with_retry($total_retry = 3){
    $retry_count = 0;
    $s3_connection = null;
    do{
        $do_retry = false;
        try{
            $s3_connection = $this->get_s3_connection();
        }catch(Exception $exception){
            $retry_count++;
            $do_retry = true;
        }
    }while($retry_count < $total_retry && $do_retry == true);
    return $s3_connection;
}
When the connection breaks and throws an exception, $do_retry becomes true and $retry_count increases. Based on the $do_retry flag, the retry happens until the $total_retry count is reached.
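If you go the manual route, it is usually worth sleeping between attempts so a struggling endpoint is not hammered. A minimal sketch with exponential backoff added to the method above (the delay values are my own assumption; tune them to taste):
public function s3_connect_with_backoff($total_retry = 3){
    $retry_count = 0;
    do{
        try{
            return $this->get_s3_connection();
        }catch(Exception $exception){
            $retry_count++;
            // Exponential backoff: wait 1s, 2s, 4s, ... before the next attempt
            sleep(pow(2, $retry_count - 1));
        }
    }while($retry_count < $total_retry);
    return null; // every attempt failed
}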
I have a list of Job IDs to check their status. So, I'm simply looping through all the Job IDs to get their status on Media Convert.
function get_aws_job_id_status($job_id)
{
    $result = [];
    $client = \App::make('aws')->createClient('MediaConvert', [
        // 'profile' => 'default',
        // 'version' => '2017-08-29',
        'region' => 'region',
        'endpoint' => "endpoint"
    ]);
    try {
        $result = $client->getJob([
            'Id' => $job_id,
        ]);
        return $result;
    } catch (AwsException $e) {
        return $result;
    }
}
I'm using the above function inside the loop to get the status.
I referred to the AWS docs and Stack Overflow, but still, when there is no record for the given Job ID, a "NotFoundException" error is thrown that does not go into the catch block and breaks the loop. Is there any way to handle that exception so I can continue the loop?
I believe you need to catch Aws\MediaConvert\Exception\MediaConvertException for MediaConvert-specific errors. I don't see any of your use statements, but I assume the code would look something like the following.
Note I am catching all MediaConvert client errors, but I believe you could single out the NotFoundException case by checking the error code on the caught exception (see the snippet after the function below).
use Aws\MediaConvert\MediaConvertClient;
use Aws\Exception\AwsException;
use Aws\MediaConvert\Exception\MediaConvertException as MediaConvertError;

function get_aws_job_id_status($job_id)
{
    $result = [];
    $client = \App::make('aws')->createClient('MediaConvert', [
        // 'profile' => 'default',
        // 'version' => '2017-08-29',
        'region' => 'region',
        'endpoint' => "endpoint"
    ]);
    try {
        $result = $client->getJob([
            'Id' => $job_id,
        ]);
        return $result;
    } catch (MediaConvertError $e) {
        /* Oh no, the job id provided cannot be found.
           Log the job id and the message and return it back up to the main application.
           Note this assumes the main application is iterating through a list of Job IDs,
           can handle this, logs that the job id was not found, and copes with the result
           not having the normal Job JSON structure.
        */
        $error = array("Id" => $job_id, "Message" => "Job Id Not found");
        $result = json_encode($error);
        return $result;
    }
}
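If you want to special-case only the missing-job situation, one option is to inspect the error code on the caught exception; getAwsErrorCode() is available on all AwsException subclasses, and the code string below is the one from the question ($client and $job_id are the same as in the function above):
use Aws\MediaConvert\Exception\MediaConvertException as MediaConvertError;

try {
    $result = $client->getJob(['Id' => $job_id]);
} catch (MediaConvertError $e) {
    // getAwsErrorCode() returns the service's error code string
    if ($e->getAwsErrorCode() === 'NotFoundException') {
        $result = json_encode(array("Id" => $job_id, "Message" => "Job Id Not found"));
    } else {
        throw $e; // let unexpected MediaConvert errors bubble up
    }
}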
Also keep in mind that if you are polling for job statuses, you may be throttled if your list grows too big. You would need to catch the TooManyRequestsException [1] and retry the poll with a backoff [2].
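A rough sketch of what that backoff could look like while polling; the error-code string and the delay progression are assumptions based on [1] and [2]:
use Aws\MediaConvert\Exception\MediaConvertException;

function get_job_with_backoff($client, $job_id, $max_attempts = 5)
{
    $attempt = 0;
    while (true) {
        try {
            return $client->getJob(['Id' => $job_id]);
        } catch (MediaConvertException $e) {
            $attempt++;
            if ($e->getAwsErrorCode() !== 'TooManyRequestsException'
                || $attempt >= $max_attempts) {
                throw $e; // not throttling, or out of attempts
            }
            sleep(pow(2, $attempt - 1)); // back off: 1s, 2s, 4s, ...
        }
    }
}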
The most scalable solution is to use CloudWatch Events and track jobs based on the STATUS_UPDATE, COMPLETE, and ERROR statuses. [3]
[1] https://docs.aws.amazon.com/aws-sdk-php/v3/api/class-Aws.MediaConvert.Exception.MediaConvertException.html
[2] https://docs.aws.amazon.com/general/latest/gr/api-retries.html
[3] https://docs.aws.amazon.com/mediaconvert/latest/ug/monitoring-overview.html
Description
I'm using Guzzle in my Laravel project. I had a memory crash when making a request to an API that returns a huge payload.
I have this at the top of my CURL.php class. I have a get() method that uses Guzzle.
use GuzzleHttp\Exception\GuzzleException;
use GuzzleHttp\Client;
class CURL {
    public static function get($url) {
        $client = new Client();
        $options = [
            'http_errors' => true,
            'force_ip_resolve' => 'v4',
            'connect_timeout' => 2,
            'read_timeout' => 2,
            'timeout' => 2,
        ];
        $result = $client->request('GET', $url, $options);
        $result = (string) $result->getBody();
        $result = json_decode($result, true);
        return $result;
    }
    ...
}
When I call it like this in my application, it requests a large payload (account 30000):
$url = 'http://site/api/account/30000';
$response = CURL::get($url)['data'];
I kept getting this error
cURL error 28: Operation timed out after 2000 milliseconds with 7276200 out of 23000995 bytes received (see http://curl.haxx.se/libcurl/c/libcurl-errors.html)
How do I avoid this?
Should I increase these settings?
'connect_timeout' => 2,
'read_timeout' => 2,
'timeout' => 2,
Yes, you need to increase read_timeout and timeout. The error is clear: you don't have enough time to get the response (the server is slow, the network, or something else; it doesn't matter).
If it's possible, increasing the timeouts is the easiest way.
If the server supports pagination, it's better to request the data part by part.
You can also use async requests in Guzzle and send something to your end user while you are waiting for the response from the API.
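For example, here is a variant of the get() method from the question with more generous limits and Guzzle's sink option, which streams the body to a temporary file instead of buffering it in memory (the timeout values are assumptions; tune them for your API):
use GuzzleHttp\Client;

class CURL {
    public static function get($url) {
        $client = new Client();
        $sink = tempnam(sys_get_temp_dir(), 'api'); // response body goes to disk
        $client->request('GET', $url, [
            'http_errors'      => true,
            'force_ip_resolve' => 'v4',
            'connect_timeout'  => 5,   // seconds to establish the connection
            'read_timeout'     => 60,  // seconds allowed between reads
            'timeout'          => 120, // overall cap for the whole transfer
            'sink'             => $sink,
        ]);
        $result = json_decode(file_get_contents($sink), true);
        unlink($sink);
        return $result;
    }
}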
Simply add this to your composer.json; it worked for me!
"require": {
    "ext-curl": "*"
},
I'm trying to use a promise with the SNS client (AWS SDK for PHP), but it does not work.
This code (the synchronous way) works correctly with the createTopic function:
require 'aws/aws-autoloader.php';

use GuzzleHttp\Promise;
use Aws\Sns\SnsClient;

$client = new SnsClient([
    'version' => 'latest',
    'region' => 'ap-northeast-1',
    'credentials' => [
        'key' => 'xxx',
        'secret' => 'xxx',
    ],
]);
$result = $client->createTopic(['Name' => "test"]);
echo $result->get('TopicArn');
But when I use the promise (the asynchronous way) via the createTopicAsync function:
$result = $client->createTopicAsync(['Name' => "test"]);
$result->then(
    function ($value) {
        echo "The promise was fulfilled with {$value}";
    },
    function ($reason) {
        echo "The promise was rejected with {$reason}";
    }
);
It does not work and nothing happens; no error is returned.
Does anyone know what could be wrong?
Try adding the following line:
// Wait for the operation to complete
$result->wait();
So the complete block should look like this:
$result = $client->createTopicAsync(['Name' => "test"]);
$result->then(
    function ($value) {
        echo "The promise was fulfilled with {$value}";
    },
    function ($reason) {
        echo "The promise was rejected with {$reason}";
    }
);
// Wait for the operation to complete
$result->wait();
UPD: Obviously, it makes little sense to use an async call this way. But to answer your question: to get any result in your case, you should synchronously force your promise to complete as described above.
UPD2: Here you can see an example of executing multiple async operations. Notice that you will have to call wait() anyway, no matter how many promises you have
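For instance, a minimal sketch of running several SNS operations concurrently and blocking once at the end (Utils::all comes from guzzlehttp/promises 1.4+; older versions expose the same thing as the \GuzzleHttp\Promise\all() function):
use GuzzleHttp\Promise\Utils;

$promises = [
    'a' => $client->createTopicAsync(['Name' => 'test-a']),
    'b' => $client->createTopicAsync(['Name' => 'test-b']),
];

// Compose the promises, then wait once until every operation has finished
$results = Utils::all($promises)->wait();

foreach ($results as $key => $result) {
    echo $key . ' => ' . $result->get('TopicArn') . "\n";
}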
I am trying to set up SQS and after receiving the message, I need to delete it from the queue.
Creating Client -
$client = Aws\Sqs\SqsClient::factory(array(
    'key' => '******',
    'secret' => '******',
    'region' => 'ap-southeast-1'
));
Sending Message
public static function SendMessage()
{
    if(!isset(self::$queueUrl))
        self::getQueueUrl();
    $command = "This is a command";
    $commandstring = json_encode($command);
    self::$client->sendMessage(array(
        'QueueUrl' => self::$queueUrl,
        'MessageBody' => $commandstring,
    ));
}
Receiving Message
public static function RecieveMessage()
{
    if(!isset(self::$queueUrl))
        self::getQueueUrl();
    $result = self::$client->receiveMessage(array(
        'QueueUrl' => self::$queueUrl,
    ));
    // echo "Message Received >> ";
    print_r($result);
    foreach ($result->getPath('Messages/*/Body') as $messageBody) {
        // Do something with the message
        echo $messageBody;
        //print_r(json_decode($messageBody));
    }
    foreach ($result->getPath('Messages/*/ReceiptHandle') as $ReceiptHandle) {
        self::$client->deleteMessage(self::$queueUrl, $ReceiptHandle);
    }
}
When I try to delete the message using the receipt handle in the receive-message code, I get this error from Guzzle:
Catchable fatal error: Argument 2 passed to Guzzle\Service\Client::getCommand() must be an array, string given,
Now after searching a lot for it, I was able to find similar questions which state that they were using the wrong SDK version. I am still not able to narrow it down, though. I am using the zip version of the latest SDK, 2.6.15.
Why don't you give this a try:
self::$client->deleteMessage(array(
    'QueueUrl' => self::$queueUrl,
    'ReceiptHandle' => $ReceiptHandle,
));
The Basic formatting example in the API docs for SqsClient::deleteMessage() (and other operations) should help. All of the methods that execute operations take exactly one parameter, which is an associative array of the operation's parameters, as the sketch below shows. You should also read through the SDK's Getting Started Guide (if you haven't already), which talks about how to perform operations in general.
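For completeness, the receive-and-delete part of the RecieveMessage() method from the question would then look something like this sketch (same SDK 2 style as above):
$result = self::$client->receiveMessage(array(
    'QueueUrl' => self::$queueUrl,
));

foreach ($result->getPath('Messages/*/ReceiptHandle') as $ReceiptHandle) {
    // deleteMessage, like every operation, takes one associative array
    self::$client->deleteMessage(array(
        'QueueUrl'      => self::$queueUrl,
        'ReceiptHandle' => $ReceiptHandle,
    ));
}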
I have an online application that sends emails through Amazon SES. Currently I have a cron job that sends the emails via SMTP with PHPMailer. At the moment I have to cap the send rate at around 300 per minute to make sure my server doesn't time out. We are seeing growth, and eventually I'd like to send 10,000 or more.
Is there a better way to send to Amazon SES, or is this what everyone else does, just with more servers running the workload?
Thanks in advance!
You can try using the AWS SDK for PHP. You can send emails through the SES API, and the SDK allows you to send multiple emails in parallel. Here is a code sample (untested and only partially complete) to get you started.
<?php

require 'vendor/autoload.php';

use Aws\Ses\SesClient;
use Guzzle\Service\Exception\CommandTransferException;

$ses = SesClient::factory(/* ...credentials... */);

$emails = array();
// #TODO SOME SORT OF LOGIC THAT POPULATES THE ABOVE ARRAY

$emailBatch = new SplQueue();
$emailBatch->setIteratorMode(SplQueue::IT_MODE_DELETE);

while ($emails || !$emailBatch->isEmpty()) {
    // Generate SendEmail commands to batch
    foreach ($emails as $email) {
        $emailCommand = $ses->getCommand('SendEmail', array(
            // GENERATE COMMAND PARAMS FROM THE $email DATA
        ));
        $emailBatch->enqueue($emailCommand);
    }
    $emails = array(); // everything is queued; let the loop drain the batch

    try {
        // Send the batch
        $successfulCommands = $ses->execute(iterator_to_array($emailBatch));
    } catch (CommandTransferException $e) {
        $successfulCommands = $e->getSuccessfulCommands();
        // Requeue failed commands
        foreach ($e->getFailedCommands() as $failedCommand) {
            $emailBatch->enqueue($failedCommand);
        }
    }

    foreach ($successfulCommands as $command) {
        echo 'Sent message: ' . $command->getResult()->get('MessageId') . "\n";
    }
}
// Also Licensed under version 2.0 of the Apache License.
You could also look into using the Guzzle BatchBuilder and friends to make it more robust.
There are a lot of things you will need to fine-tune in this code, but you may be able to achieve a higher throughput of emails.
If anyone is looking for this answer, it's outdated; you can find the new documentation here: https://docs.aws.amazon.com/aws-sdk-php/v3/guide/guide/commands.html
use Aws\S3\S3Client;
use Aws\CommandPool;

// Create the client.
$client = new S3Client([
    'region'  => 'us-east-1',
    'version' => '2006-03-01'
]);

$bucket = 'example';
$commands = [
    $client->getCommand('HeadObject', ['Bucket' => $bucket, 'Key' => 'a']),
    $client->getCommand('HeadObject', ['Bucket' => $bucket, 'Key' => 'b']),
    $client->getCommand('HeadObject', ['Bucket' => $bucket, 'Key' => 'c'])
];

$pool = new CommandPool($client, $commands);

// Initiate the pool transfers
$promise = $pool->promise();

// Force the pool to complete synchronously
$promise->wait();
The same thing can be done for SES commands, as sketched below.
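For example, a sketch of the same pool pattern with SES; the addresses, subject, and the $emails array are placeholders of mine, not part of the original answer:
use Aws\Ses\SesClient;
use Aws\CommandPool;

$ses = new SesClient([
    'region'  => 'us-east-1',
    'version' => '2010-12-01'
]);

$commands = [];
foreach ($emails as $email) {
    $commands[] = $ses->getCommand('SendEmail', [
        'Source'      => 'sender@example.com',
        'Destination' => ['ToAddresses' => [$email['to']]],
        'Message'     => [
            'Subject' => ['Data' => $email['subject']],
            'Body'    => ['Text' => ['Data' => $email['body']]],
        ],
    ]);
}

// Send up to 10 emails concurrently and wait for the pool to drain
$pool = new CommandPool($ses, $commands, ['concurrency' => 10]);
$pool->promise()->wait();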
Thank you for your answer. It was a good starting point, @Jeremy Lindblom.
My problem now is that I can't get the error handling to work.
The catch() block works fine, and inside of it $successfulCommands returns all the successful responses with status codes, but only if an error occurs, for example an "unverified address" error in sandbox mode. That is how a catch() should work. :)
The $successfulCommands inside the try block only returns:
SplQueue Object
(
    [flags:SplDoublyLinkedList:private] => 1
    [dllist:SplDoublyLinkedList:private] => Array
        (
        )
)
I can't figure out how to get the real response from Amazon with status codes etc.