Uploading an object into a transfer-accelerated S3 bucket using PHP

I am new to AWS. As I understand it, S3 Transfer Acceleration uses CloudFront edge locations for faster uploads, but I can't find proper documentation for the PHP SDK on uploading an object into a bucket with Transfer Acceleration enabled.
My code:
use Aws\S3\S3Client;

$S3_Client = new S3Client([
    'version' => 'latest',
    'region' => 'ap-south-1',
    'credentials' => [
        'key' => 'Accesskey',
        'secret' => 'Secretkey',
    ],
    'endpoint' => 'http://my_bucket_name.s3-accelerate.amazonaws.com'
]);
$bucket = 'my_bucket_name';
$key = 'EC2.pdf';
$SourceFile = '/path/to/the/file/EC2.pdf';

$put = $S3_Client->putObject([
    'Bucket' => $bucket,
    'Key' => $key,
    'SourceFile' => $SourceFile
]);
I am getting the following error:
The authorization header is malformed; the region 'ap-south-1' is wrong; expecting 'us-east-1'
But my bucket is located in us-east-1. When I change the region to us-east-1, I get the following error:
The specified bucket does not exist

Instead of 'endpoint' => ..., pass 'use_accelerate_endpoint' => true to the constructor.
There are a number of different rules that come into play when building a request to send to S3. The endpoint option provides a service endpoint rather than a bucket endpoint, and is mostly useful for non-standard configurations.
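A minimal sketch of that suggestion, reusing the values from the question (the key pair and paths are the question's placeholders):

    use Aws\S3\S3Client;

    $S3_Client = new S3Client([
        'version' => 'latest',
        'region' => 'us-east-1', // the bucket's actual region
        'credentials' => [
            'key' => 'Accesskey',
            'secret' => 'Secretkey',
        ],
        // Let the SDK derive the s3-accelerate endpoint itself
        // instead of overriding 'endpoint' manually.
        'use_accelerate_endpoint' => true,
    ]);

    $put = $S3_Client->putObject([
        'Bucket' => 'my_bucket_name',
        'Key' => 'EC2.pdf',
        'SourceFile' => '/path/to/the/file/EC2.pdf',
    ]);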

This may be related to this discussion: https://github.com/hashicorp/terraform/issues/2774
Try the following solution:
"I had the same issue; I had created the bucket previously and deleted it. I changed the name and it applied with no problem."

Related

aws s3 createBucket function always returns S3 BucketAlreadyExists

I'm using the AWS S3 PHP API to create a bucket as shown below, but it returns this error message no matter what I try:
The requested bucket name is not available. The bucket namespace is shared by all users of the system. Please select a different name and try again.
When I try to create the same bucket name on the AWS console (one I had tried before via the API), it works.
Here is my sample code:
use Aws\Exception\AwsException;

function createBucket($s3Client, $bucketName)
{
    try {
        $result = $s3Client->createBucket([
            'Bucket' => $bucketName,
        ]);
        return 'The bucket\'s location is: ' .
            $result['Location'] . '. ' .
            'The bucket\'s effective URI is: ' .
            $result['@metadata']['effectiveUri']; // '@metadata', not '#metadata'
    } catch (AwsException $e) {
        return 'Error: ' . $e->getAwsErrorMessage();
    }
}
function createTheBucket($name)
{
    define('AWS_KEY', 'AWS_KEY');
    define('AWS_SECRET_KEY', 'AWS_SECRET_KEY');
    define('REGION', 'eu-west-1');

    // Establish a connection with an S3 client.
    $s3Client = new Aws\S3\S3Client([
        'version' => '2006-03-01',
        'region' => REGION,
        'credentials' => [
            'key' => AWS_KEY,
            'secret' => AWS_SECRET_KEY,
        ]
    ]);

    echo createBucket($s3Client, $name);
}
The S3 bucket you are trying to create has already been created in the AWS namespace.
It's important to understand that S3 bucket names are unique across the entire AWS global namespace. Ensure your bucket name does not collide with anyone else's or one of your own.
An Amazon S3 bucket name is globally unique, and the namespace is shared by all AWS accounts. This means that after a bucket is created, the name of that bucket cannot be used by another AWS account in any AWS Region until the bucket is deleted. You should not depend on specific bucket naming conventions for availability or security verification purposes.
If the S3 bucket name is free, then it's possible that either a hard-coded value has overridden the $bucketName variable, or that code logic (such as looping or formatting parameters) is trying to recreate a bucket that already exists.
The best way to discover this is to validate the value of $bucketName throughout your script's execution.
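One way to do that, sketched below: log the exact name being requested, and distinguish the two "already exists" error codes S3 can return (the error_log call is just an illustration):

    use Aws\Exception\AwsException;

    error_log("Attempting to create bucket: {$bucketName}");
    try {
        $s3Client->createBucket(['Bucket' => $bucketName]);
    } catch (AwsException $e) {
        if ($e->getAwsErrorCode() === 'BucketAlreadyOwnedByYou') {
            // A bucket with this name already exists in *your* account,
            // e.g. created by an earlier run of the same script.
        } elseif ($e->getAwsErrorCode() === 'BucketAlreadyExists') {
            // The name is taken by some other AWS account.
        }
        throw $e;
    }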

Copy a folder from one to another in AWS S3 in PHP

I am trying to copy a folder to another folder in AWS S3, as below:
$s3 = S3Client::factory(
    array(
        'credentials' => array(
            'key' => 'testbucket',
            'secret' => BUCKET_SECRET // Global constant
        ),
        'version' => BUCKET_VERSION, // Global constant
        'region' => BUCKET_REGION    // Global constant
    )
);
$sourceBucket = 'testbucket';
$sourceKeyname = 'admin/collections/Athena'; // Object key
$targetBucket = 'testbucket';
$targetKeyname = 'admin/collections/Athena-New';

// Copy an object.
$s3->copyObject(array(
    'Bucket' => $targetBucket,
    'Key' => $targetKeyname,
    'CopySource' => "{$sourceBucket}/{$sourceKeyname}",
));
It is throwing this error:
Fatal error: Uncaught exception 'Aws\S3\Exception\S3Exception' with message 'Error executing "CopyObject" on "https://testbucket.s3.us-east-2.amazonaws.com/admin/collections/Athena-New"; AWS HTTP error: Client error: PUT https://testbucket.s3.us-east-2.amazonaws.com/admin/collections/Athena-New resulted in a 404 Not Found response:
<Error>
    <Code>NoSuchKey</Code>
    <Message>The specified key does not exist.</Message>
    <Key>admin/collections/Athena</Key>
    <RequestId>29EA131A5AD9CB83</RequestId>
    <HostId>6OjDNLgbdLPLMd0t7MuNi4JH6AU5pKfRmhCcWigGAaTuRlqoX8X5aMicWTui56rTH1BLRpJJtmc=</HostId>
</Error>'
I can't figure out why it is building the wrong bucket URL:
https://testbucket.s3.us-east-2.amazonaws.com/admin/collections/Athena-New
while the right AWS bucket URL is:
https://s3.us-east-2.amazonaws.com/testbucket/admin/collections/Athena-New
Why is it putting the bucket name before "s3" in the URL?
In simple words, I want to copy the contents of
https://s3.us-east-2.amazonaws.com/testbucket/admin/collections/Athena
to
https://s3.us-east-2.amazonaws.com/testbucket/admin/collections/Athena-New
It is not possible to "copy a folder" in Amazon S3, because folders do not actually exist.
Instead, the full path of an object is stored in the object's Key (filename).
So, an object might be called:
admin/collections/Athena/foo.txt
If you wish to copy all objects from one "folder" to another "folder", you will need to (see the sketch below):
Obtain a listing of the bucket for the given Prefix (effectively, the full path to the folder)
Loop through each object returned, and copy the objects one at a time to the new name (which effectively puts them in the new folder)
So it would copy admin/collections/Athena/foo.txt to admin/collections/Athena-New/foo.txt
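A minimal sketch of that list-and-copy loop with SDK v3, reusing the bucket variables from the question (the trailing slashes on the prefixes matter):

    // List every key under the source prefix; a paginator handles
    // result sets larger than 1,000 keys.
    $sourcePrefix = 'admin/collections/Athena/';
    $targetPrefix = 'admin/collections/Athena-New/';

    $pages = $s3->getPaginator('ListObjectsV2', array(
        'Bucket' => $sourceBucket,
        'Prefix' => $sourcePrefix,
    ));

    foreach ($pages->search('Contents[].Key') as $key) {
        // Rewrite the prefix, keeping the rest of the key intact.
        $newKey = $targetPrefix . substr($key, strlen($sourcePrefix));
        $s3->copyObject(array(
            'Bucket' => $targetBucket,
            'Key' => $newKey,
            'CopySource' => "{$sourceBucket}/{$key}",
        ));
    }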

How can I programmatically verify that Amazon keys are correct?

I'm connecting to Amazon SES with this PHP code:
$ses = new SesClient([
    'credentials' => [
        'key' => KEY,
        'secret' => SECRET_KEY,
    ],
    'region' => REGION,
    'version' => SES_VERSION,
]);
How can I recognize here whether the constants KEY and SECRET_KEY are valid or invalid (wrong, entered with typos, and so on)?
Is there a method in the AWS SDK to verify this?
I use the Python call get_user(). With no arguments, this call returns the user name based on the access key ID, which validates that the credentials are correct. The technique is not bulletproof, but it does provide a simple, quick check. You can test the concept with the CLI: aws iam get-user.
Python IAM get_user()
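Since the question is about PHP, here is a rough equivalent of that check using the SDK's IAM client (the region value is an assumption; the call also needs iam:GetUser permission):

    use Aws\Iam\IamClient;
    use Aws\Exception\AwsException;

    $iam = new IamClient([
        'version' => 'latest',
        'region' => 'us-east-1',
        'credentials' => [
            'key' => KEY,
            'secret' => SECRET_KEY,
        ],
    ]);

    try {
        // Fails fast (InvalidClientTokenId / SignatureDoesNotMatch)
        // if the key pair is wrong.
        $user = $iam->getUser();
        echo 'Credentials belong to: ' . $user['User']['Arn'];
    } catch (AwsException $e) {
        echo 'Credential check failed: ' . $e->getAwsErrorCode();
    }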

Can't pass my credentials to AWS PHP SDK

I installed the AWS PHP SDK and am trying to use SES. My problem is that it is (apparently) trying to read ~/.aws/credentials no matter what I do. I currently have this code:
$S3_AK = getenv('S3_AK');
$S3_PK = getenv('S3_PK');

$profile = 'default';
$path = '/home/franco/public/site/default.ini';
$provider = CredentialProvider::ini($profile, $path);
$provider = CredentialProvider::memoize($provider);

$client = SesClient::factory(array(
    'profile' => 'default',
    'region' => 'us-east-1',
    'version' => "2010-12-01",
    'credentials' => [
        'key' => $S3_AK,
        'secret' => $S3_PK,
    ]
));
And I am still getting the "Cannot read credentials from ~/.aws/credentials" error (after quite a while).
I tried 'credentials' => $provider, of course; that was the idea, but as it wasn't working I reverted to hard-coded credentials. I've dumped $S3_AK and $S3_PK and they're fine; I'm actually using them successfully for S3, but there I have Zend's wrapper. I've tried ~/.aws/credentials (no ".ini") with the same result, both files having 777 permissions.
Curious detail: I had to set the memory limit to -1 so it would be able to var_dump the exception; the HTML of the exception is around 200 MB.
I'd prefer to use the environment variables, although the credentials file is fine. I just don't understand why it appears to be trying to read the file even though I've hard-coded the credentials.
EDIT: A friend showed me this. I removed the profile and also modified the try/catch, and noticed the client seems to be created properly; the error comes from trying to actually send an email.
The trick is to simply remove 'profile' => 'default' from the factory params; if it is defined, you can't use a custom credentials file or environment variables. It is not documented, but it just works.
I'm using SNS and SDK v3.
<?php
use Aws\Credentials\CredentialProvider;

$profile = 'sns-reminders';
$path = '../private/credentials';
$provider = CredentialProvider::ini($profile, $path);
$provider = CredentialProvider::memoize($provider);

$sdk = new Aws\Sdk(['credentials' => $provider]);
$sns = $sdk->createSns([
    // 'profile' => $profile,
    'region' => 'us-east-1',
    'version' => 'latest',
]);
This solution will probably only work if you're using version 3 of the SDK. I use something similar to this:

$provider = CredentialProvider::memoize(CredentialProvider::ini($profile, $path));

$client = new SesClient([
    'version' => 'latest',
    'region' => 'us-east-1',
    'credentials' => $provider,
]);

I use this for S3Client, DynamoDbClient, and a few other clients, so I am assuming that the SesClient constructor supports the same arguments.
OK, I managed to fix it.
I never got it to read the credentials file, but that turned out not to be the real problem. The client was actually being created successfully; the try/catch also had the sendEmail call inside it, and that was what was failing.
About creating the client with explicit credentials: if you specify region, it will try to read a credentials file.
About the SendEmail: this is the syntax that worked for me. I'd found another one on the AWS docs site as well, and that one failed; it must have been for an older SDK.
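For reference (not necessarily the exact code the author used), the SendEmail parameter shape for SDK v3 with the 2010-12-01 SES API looks roughly like this; the addresses are placeholders:

    $result = $client->sendEmail([
        'Source' => 'sender@example.com', // must be SES-verified
        'Destination' => [
            'ToAddresses' => ['recipient@example.com'],
        ],
        'Message' => [
            'Subject' => ['Data' => 'Test subject'],
            'Body' => [
                'Text' => ['Data' => 'Plain-text body'],
            ],
        ],
    ]);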

Update AWS S3 item ACL using new PHP SDK

How can an item in S3 be updated to 'public-read' using the new AWS S3 PHP SDK? It would seem it is only possible to GET and PUT: http://docs.aws.amazon.com/aws-sdk-php/latest/class-Aws.S3.S3Client.html
The iterator returns an array, not a class. GetObject returns a class, but there are no obvious methods for updating. CopyObject seems a bit of a hack:
$s3->copyObject(array(
    'Bucket' => 'media',
    'Key' => $k,
    'CopySource' => 'media' . '/' . $k,
    'ACL' => 'public-read',
));
returns:
PHP Fatal error: Uncaught Aws\S3\Exception\InvalidRequestException: AWS Error Code: InvalidRequest, Status Code: 400, AWS Request ID: FC630F89A049823A, AWS Error Type: client, AWS Error Message: This copy request is illegal because it is trying to copy an object to itself without changing the object's metadata, storage class, website redirect location or encryption attributes., User-Agent: aws-sdk-php2/2.5.3 Guzzle/3.8.1 curl/7.35.0 PHP/5.5.9-1ubuntu4.4 thrown in /.../vendor/aws/aws-sdk-php/src/Aws/Common/Exception/NamespaceExceptionFactory.php on line 91
Better late than never.
$s3Client->putObjectAcl(array(
    'Bucket' => 'yourbucket',
    'Key' => 'yourkey',
    'ACL' => 'public-read'
));
