I am trying to copy a folder to another location in Amazon S3, as below:
$s3 = S3Client::factory(
    array(
        'credentials' => array(
            'key' => 'testbucket',
            'secret' => BUCKET_SECRET // Global constant
        ),
        'version' => BUCKET_VERSION, // Global constant
        'region' => BUCKET_REGION // Global constant
    )
);

$sourceBucket = 'testbucket';
$sourceKeyname = 'admin/collections/Athena'; // Object key
$targetBucket = 'testbucket';
$targetKeyname = 'admin/collections/Athena-New';

// Copy an object.
$s3->copyObject(array(
    'Bucket' => $targetBucket,
    'Key' => $targetKeyname,
    'CopySource' => "{$sourceBucket}/{$sourceKeyname}",
));
It is throwing error as
Fatal error: Uncaught exception 'Aws\S3\Exception\S3Exception' with
message 'Error executing "CopyObject" on
"https://testbucket.s3.us-east-2.amazonaws.com/admin/collections/Athena-New";
AWS HTTP error: Client error: PUT
https://testbucket.s3.us-east-2.amazonaws.com/admin/collections/Athena-New
resulted in a 404 Not Found response:
<Error><Code>NoSuchKey</Code><Message>The specified key does not exist.</Message><Key>admin/collections/Athena</Key></Error>
NoSuchKey (client): The specified key does not exist. - <Error><Code>NoSuchKey</Code><Message>The specified key does not exist.</Message><Key>admin/collections/Athena</Key><RequestId>29EA131A5AD9CB83</RequestId><HostId>6OjDNLgbdLPLMd0t7MuNi4JH6AU5pKfRmhCcWigGAaTuRlqoX8X5aMicWTui56rTH1BLRpJJtmc=</HostId></Error>'
I can't figure out why it is building the bucket URL as
https://testbucket.s3.us-east-2.amazonaws.com/admin/collections/Athena-New
while the bucket URL I expect is
https://s3.us-east-2.amazonaws.com/testbucket/admin/collections/Athena-New
Why is it putting the bucket name before "s3" in the URL?
In simple words, I want to copy the contents of
https://s3.us-east-2.amazonaws.com/testbucket/admin/collections/Athena
to
https://s3.us-east-2.amazonaws.com/testbucket/admin/collections/Athena-New
It is not possible to "copy a folder" in Amazon S3 because folders do not actually exist.
Instead, the full path of an object is stored in the object's Key (filename).
(As for the URL: https://testbucket.s3.us-east-2.amazonaws.com/... is simply the virtual-hosted-style address, equivalent to the path-style https://s3.us-east-2.amazonaws.com/testbucket/...; the URL is not the problem. The 404 occurs because no object exists with the exact key admin/collections/Athena.)
So, an object might be called:
admin/collections/Athena/foo.txt
If you wish to copy all objects from one "folder" to another "folder", you will need to:
1. Obtain a listing of the bucket for the given Prefix (effectively, the full path to the folder)
2. Loop through each object returned, and copy the objects one at a time to the new name (which effectively puts them in a new folder)
So, it would copy admin/collections/Athena/foo.txt to admin/collections/Athena-New/foo.txt
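Those two steps can be sketched with SDK v3 as follows. This is a minimal sketch: the bucket, prefixes, and region are taken from the question, and credentials are assumed to come from the environment.

```php
<?php
// Sketch only: copies every object under one prefix to another prefix.
// Assumes SDK v3 and that credentials are resolved from the environment.
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3 = new S3Client([
    'version' => '2006-03-01',
    'region'  => 'us-east-2',
]);

$bucket       = 'testbucket';
$sourcePrefix = 'admin/collections/Athena/';
$targetPrefix = 'admin/collections/Athena-New/';

// Step 1: list every object under the source prefix (pagination is handled for you).
$pages = $s3->getPaginator('ListObjectsV2', [
    'Bucket' => $bucket,
    'Prefix' => $sourcePrefix,
]);

// Step 2: copy each object one at a time under the target prefix.
foreach ($pages->search('Contents[]') as $object) {
    $newKey = $targetPrefix . substr($object['Key'], strlen($sourcePrefix));
    $s3->copyObject([
        'Bucket'     => $bucket,
        'Key'        => $newKey,
        'CopySource' => "{$bucket}/{$object['Key']}",
    ]);
}
```

Note that the source objects are left in place; if you want a "move" rather than a "copy", delete each source object after its copy succeeds.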
Related
I'm using the AWS S3 PHP API to create a bucket as shown below, but whatever I try it returns this error message:
The requested bucket name is not available. The bucket namespace is
shared by all users of the system. Please select a different name and
try again.
When I try to create a bucket with the same name on the AWS console (the name I had tried through the API), it works.
here is my sample code
function createBucket($s3Client, $bucketName)
{
    try {
        $result = $s3Client->createBucket([
            'Bucket' => $bucketName,
        ]);
        return 'The bucket\'s location is: ' .
            $result['Location'] . '. ' .
            'The bucket\'s effective URI is: ' .
            $result['@metadata']['effectiveUri'];
    } catch (AwsException $e) {
        return 'Error: ' . $e->getAwsErrorMessage();
    }
}

function createTheBucket($name)
{
    define('AWS_KEY', 'AWS_KEY');
    define('AWS_SECRET_KEY', 'AWS_SECRET_KEY');
    define('REGION', 'eu-west-1');

    // Establish a connection to S3 with an S3 client.
    $s3Client = new Aws\S3\S3Client([
        'version' => '2006-03-01',
        'region' => REGION,
        'credentials' => [
            'key' => AWS_KEY,
            'secret' => AWS_SECRET_KEY,
        ]
    ]);

    echo createBucket($s3Client, $name);
}
The S3 bucket you are trying to create has already been created in the AWS namespace.
It's important to understand that S3 bucket names are unique across the entire AWS global namespace.
Ensure your bucket name does not collide with anyone else's, or with one of your own.
An Amazon S3 bucket name is globally unique, and the namespace is shared by all AWS accounts. This means that after a bucket is created, the name of that bucket cannot be used by another AWS account in any AWS Region until the bucket is deleted. You should not depend on specific bucket naming conventions for availability or security verification purposes.
If the S3 bucket name is free, then it's possible that either a hard-coded value has overridden the $bucketName variable, or that code logic (such as looping or formatting parameters) is trying to recreate a bucket that already exists.
The best way to find out is to validate the value of $bucketName throughout your script's execution.
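One way to do both checks at once, sketched with SDK v3's doesBucketExist() helper (the bucket name below is hypothetical; substitute the value your code computes):

```php
<?php
// Sketch only: log the bucket name and check availability before creating.
// Assumes SDK v3 and that credentials are resolved from the environment.
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3Client = new S3Client([
    'version' => '2006-03-01',
    'region'  => 'eu-west-1',
]);

$bucketName = 'my-candidate-bucket'; // hypothetical; log the real value here
error_log("About to create bucket: '{$bucketName}'");

// doesBucketExist() issues a HeadBucket request; it also returns true when the
// bucket exists but is owned by another account (403 Forbidden).
if ($s3Client->doesBucketExist($bucketName)) {
    echo "Name '{$bucketName}' is already taken.\n";
} else {
    $s3Client->createBucket(['Bucket' => $bucketName]);
}
```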
I am new to AWS. As I understand it, S3 Transfer Acceleration uses the CloudFront edge network for faster uploads, but I can't find proper documentation in the PHP API for uploading an object to a transfer-acceleration-enabled bucket.
My code :
use Aws\S3\S3Client;

$S3_Client = new S3Client([
    'version' => 'latest',
    'region' => 'ap-south-1',
    'credentials' => [
        'key' => 'Accesskey',
        'secret' => 'Secretkey',
    ],
    'endpoint' => 'http://my_bucket_name.s3-accelerate.amazonaws.com'
]);

$bucket = 'my_bucket_name';
$key = 'EC2.pdf';
$SourceFile = '/path/to/the/file/EC2.pdf';

$put = $S3_Client->putObject([
    'Bucket' => $bucket,
    'Key' => $key,
    'SourceFile' => $SourceFile
]);
I am getting the following error
The authorization header is malformed;
the region 'ap-south-1' is wrong; expecting 'us-east-1'
but my bucket is located in us-east-1. When I change the region to us-east-1, I get the following error:
The specified bucket does not exist
Instead of 'endpoint' => ..., pass 'use_accelerate_endpoint' => true to the constructor.
There are a number of different rules that come into play when building a request to send to S3. The endpoint option provides a service endpoint, rather than a bucket endpoint, and is mostly useful for non-standard configurations.
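A minimal sketch of that client, reusing the file details from the question (note: Transfer Acceleration requires a DNS-compliant bucket name without dots, and must be enabled on the bucket; credentials are assumed to come from the environment):

```php
<?php
// Sketch only: upload through the S3 Transfer Acceleration endpoint.
// Assumes SDK v3 and that credentials are resolved from the environment.
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3Client = new S3Client([
    'version' => 'latest',
    'region'  => 'us-east-1',          // the bucket's actual region
    'use_accelerate_endpoint' => true, // requests go to <bucket>.s3-accelerate.amazonaws.com
]);

$put = $s3Client->putObject([
    'Bucket'     => 'my-bucket-name',  // hypothetical; acceleration must be enabled on it
    'Key'        => 'EC2.pdf',
    'SourceFile' => '/path/to/the/file/EC2.pdf',
]);
```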
This may be related to this discussion: https://github.com/hashicorp/terraform/issues/2774
Try the following solution:
"I had same issue, i had created the bucket previously and deleted it. I changed the name and it applied no problem."
I have used Composer to install the AWS SDK for PHP per the getting started instructions found here. I installed it in my html root. I created an IAM user called "ImageUser" with the sole permission of "AmazonS3FullAccess" and captured its keys.
Per the instructions here, I created a file called "credentials" as follows:
[default]
aws_access_key_id = YOUR_AWS_ACCESS_KEY_ID
aws_secret_access_key = YOUR_AWS_SECRET_ACCESS_KEY
Yes, I replaced those upper case words with the appropriate keys. The file resides in the hidden subdirectory ".aws" in the html root. The file's UNIX permissions are 664.
I created this simple file (called "test.php" in a subdirectory of my html root called "t") to test uploading a file to S3:
<?php
// Include the AWS SDK using the Composer autoloader.
require '../vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;

$bucket = 'testbucket';
$keyname = 'test.txt';

// Instantiate the client.
$s3 = S3Client::factory();

try {
    // Upload data.
    $result = $s3->putObject(array(
        'Bucket' => $bucket,
        'Key' => $keyname,
        'Body' => 'Hello, world!',
        'ACL' => 'public-read'
    ));
    // Print the URL to the object.
    echo $result['ObjectURL'] . "\n";
} catch (S3Exception $e) {
    echo $e->getMessage() . "\n";
}
?>
Unfortunately, it throws an HTTP 500 error at the line:
$s3 = S3Client::factory();
Yes, the autoloader directory is correct. Yes, the bucket exists. No, the file "test.txt" does not already exist.
According to the page noted above, "If no credentials or profiles were explicitly provided to the SDK and no credentials were defined in environment variables, but a credentials file is defined, the SDK will use the 'default' profile." Even so, I also tried explicitly specifying the profile "default" in the factory statement only to get the same results.
What am I doing wrong?
tl;dr: You have a mix of AWS SDK versions.
Per the link you provided in your message (link), you have installed the PHP SDK v3.
Per your examples, you are using the PHP SDK v2.
v3 does not have the S3Client::factory method, which is why it throws the error. You can keep reading the link you posted for the usage: https://docs.aws.amazon.com/aws-sdk-php/v3/guide/getting-started/basic-usage.html. There are a few ways to get the S3 client.
create a client - simple method
<?php
// Include the SDK using the Composer autoloader
require 'vendor/autoload.php';

$s3 = new Aws\S3\S3Client([
    'version' => 'latest',
    'region' => 'us-east-1'
]);

create a client - using sdk class

// Use the us-west-2 region and latest version of each client.
$sharedConfig = [
    'region' => 'us-west-2',
    'version' => 'latest'
];

// Create an SDK class used to share configuration across clients.
$sdk = new Aws\Sdk($sharedConfig);

// Create an Amazon S3 client using the shared configuration data.
$s3 = $sdk->createS3();
Once you have your client, you can use your existing code (yes, that part is already v3-compatible) to put a new object on S3, so you'll get something like:
<?php
// Include the AWS SDK using the Composer autoloader.
require '../vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;

$bucket = 'testbucket';
$keyname = 'test.txt';

// Instantiate the client using method 1 or 2 above, for example:
$s3 = new S3Client([
    'version' => 'latest',
    'region' => 'us-east-1'
]);

try {
    // Upload data.
    $result = $s3->putObject(array(
        'Bucket' => $bucket,
        'Key' => $keyname,
        'Body' => 'Hello, world!',
        'ACL' => 'public-read'
    ));
    // Print the URL to the object.
    echo $result['ObjectURL'] . "\n";
} catch (S3Exception $e) {
    echo $e->getMessage() . "\n";
}
I'm trying to create a folder within an Amazon S3 bucket, but I'm finding it difficult to locate documentation that adequately explains what is needed. I have the following code/pseudocode for creating a folder. Can anyone explain or provide a sample of the arguments I need to place within the code?
use Aws\S3\S3Client;

$bucket_url = 'https://***.amazonaws.com/***/';
$folder_name = $username . '/';

$s3Client = new S3Client([
    'version' => AWS_VERSION,
    'region' => AWS_REGION,
    'credentials' => [
        'key' => AWS_KEY,
        'secret' => AWS_SECRET,
    ],
]);

$s3Client->putObject(array(
    'Bucket' => AWS_BUCKET, // Name of the bucket
    'Key' => AWS_PATH . $folder_name, // "Folder" name
    'Body' => "",
));
S3 doesn't have folders beyond the bucket, but objects (files) can have /s (forward slashes) in their names, and there are methods to retrieve objects based on a prefix, which lets you emulate a directory listing. This means, though, that you can't create an empty folder.
So a workaround is to put an empty test.txt file into the "folder" and delete it afterwards (bear in mind that once the last object under a prefix is deleted, the "folder" disappears with it).
/* upload an empty test.txt file to the subfolder /folder/ in S3 bucket bucketname */
$s3->putObjectFile('test.txt', 'bucketname', '/folder/test.txt', S3::ACL_PUBLIC_READ);

/* delete the empty test.txt file from the subfolder /folder/ in S3 bucket bucketname */
$s3->deleteObject('bucketname', '/folder/test.txt');
Amazon S3 does not have a concept of folders. For S3, all objects are simply a key name with data.
Folders are a human concept which use the '/' character to separate the folders. But S3 does not care.
When you use many third-party tools (and even the AWS Management Console), the tools will often look at the object keys under your prefix, and when they see a '/' they will interpret it as a folder.
But there's no way to "create a folder".
If you simply PutObject an object with a key with your desired full path (for example, "my/desired/folder/structure/file.txt"), Amazon S3 will put it there. It's not like many filesystems where the folder must exist before a file can be created.
The closest thing to "creating a folder" you could do is to create a 0-byte object with a '/' at the end of its key, for example "my/desired/folder/structure/". But it will just be another object in the bucket. It won't have any effect on the creation or operation of the bucket or any other objects in the bucket.
Amazon S3 doesn't really have directories:
In Amazon S3, buckets and objects are the primary resources, where objects are stored in buckets. Amazon S3 has a flat structure with no hierarchy like you would see in a typical file system. However, for the sake of organizational simplicity, the Amazon S3 console supports the folder concept as a means of grouping objects. Amazon S3 does this by using key name prefixes for objects.
Instead, it fakes it based on the name of an object's key. Just upload an object using a key like some/directory/file.txt and many tools, including the S3 interface in the AWS console, will act as if you have an object called file.txt in a directory called directory in a directory called some.
See also Amazon S3 boto - how to create a folder?
$client->putObject([
    'Bucket' => 'bucket',
    'Key' => 'folder/',
]);

(For 'version' => '2006-03-01'.)
How can an item in S3 be updated to 'public-read' using the new AWS S3 PHP SDK? It would seem it is only possible to GET and PUT: http://docs.aws.amazon.com/aws-sdk-php/latest/class-Aws.S3.S3Client.html
The iterator returns an array, not a class. GetObject returns a class, but there are no obvious methods to update the ACL. CopyObject seems a bit of a hack?
$s3->copyObject(array(
    'Bucket' => 'media',
    'Key' => $k,
    'CopySource' => 'media' . '/' . $k,
    'ACL' => 'public-read',
));
returns:
PHP Fatal error: Uncaught Aws\S3\Exception\InvalidRequestException: AWS Error Code: InvalidRequest, Status Code: 400, AWS Request ID: FC630F89A049823A, AWS Error Type: client, AWS Error Message: This copy request is illegal because it is trying to copy an object to itself without changing the object's metadata, storage class, website redirect location or encryption attributes., User-Agent: aws-sdk-php2/2.5.3 Guzzle/3.8.1 curl/7.35.0 PHP/5.5.9-1ubuntu4.4 thrown in /.../vendor/aws/aws-sdk-php/src/Aws/Common/Exception/NamespaceExceptionFactory.php on line 91
Better late than never.
$s3Client->putObjectAcl(array(
    'Bucket' => 'yourbucket',
    'Key' => 'yourkey',
    'ACL' => 'public-read'
));