How to create a folder within an S3 bucket using PHP

I'm trying to create a folder within an Amazon S3 bucket, but I'm finding it difficult to locate documentation that adequately explains what is needed. I have the following code / pseudocode for creating a folder. Can anyone explain or provide a sample of the arguments I need to place within the code?
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$bucket_url = 'https://***.amazonaws.com/***/';
$folder_name = $username . '/';

$s3Client = new S3Client([
    'version' => AWS_VERSION,
    'region' => AWS_REGION,
    'credentials' => [
        'key' => AWS_KEY,
        'secret' => AWS_SECRET,
    ],
]);

$s3Client->putObject([
    'Bucket' => AWS_BUCKET, // name of the bucket
    'Key' => AWS_PATH . $folder_name, // "folder" name (a key ending in '/')
    'Body' => "",
]);

S3 doesn't have folders beyond the bucket; objects (files) can have forward slashes ('/') in their names, and there are methods to retrieve objects by prefix, which lets you emulate a directory listing. This means, though, that you can't create an empty folder as such.
So a workaround is to put an empty .txt file under the desired prefix and delete it afterwards; the folder structure will stay.
/* upload an empty test.txt file to the subfolder /folder/ on bucket 'bucketname' */
$s3->putObjectFile('test.txt', 'bucketname', '/folder/test.txt', S3::ACL_PUBLIC_READ);
/* delete the empty test.txt file from the subfolder /folder/ on bucket 'bucketname' */
$s3->deleteObject('bucketname', '/folder/test.txt');
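The snippet above uses the older standalone S3 PHP class rather than the AWS SDK. A minimal sketch of the same workaround with the AWS SDK for PHP v3 (the region below is an assumption; credentials are resolved from the SDK's default provider chain) would be:
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3Client = new S3Client([
    'version' => '2006-03-01',
    'region'  => 'us-east-1', // assumed region
]);

// upload a placeholder object so the "folder" prefix appears
$s3Client->putObject([
    'Bucket' => 'bucketname', // bucket name from the example above
    'Key'    => 'folder/test.txt',
    'Body'   => '',
]);

// remove the placeholder file afterwards
$s3Client->deleteObject([
    'Bucket' => 'bucketname',
    'Key'    => 'folder/test.txt',
]);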

Amazon S3 does not have a concept of folders. For S3, all objects are simply a key name with data.
Folders are a human concept that uses the '/' character to separate path segments, but S3 does not care.
When you use many third-party tools (and even the AWS Management Console), the tools often will look at the object keys under your prefix and when it sees a '/' in it, it will interpret it as a folder.
But there's no way to "create a folder".
If you simply PutObject an object with your desired full path as its key (for example, "my/desired/folder/structure/file.txt"), Amazon S3 will put it there. It's not like many filesystems, where the folder must exist before a file can be created.
The closest thing to "creating a folder" you could do is to create a 0-byte object with a '/' at the end of its key, for example "my/desired/folder/structure/". But it will just be another object in the bucket. It won't have any effect on the creation or operation of the bucket or any other objects in the bucket.
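For illustration, a minimal sketch of creating such a 0-byte "folder" object with the AWS SDK for PHP v3 (the bucket name and region here are placeholders, not from the original question):
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3Client = new S3Client([
    'version' => '2006-03-01',
    'region'  => 'us-east-1', // assumed region
]);

// a zero-byte object whose key ends in '/'; the console will render it as a folder
$s3Client->putObject([
    'Bucket' => 'my-example-bucket', // placeholder bucket name
    'Key'    => 'my/desired/folder/structure/',
    'Body'   => '',
]);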

Amazon S3 doesn't really have directories:
In Amazon S3, buckets and objects are the primary resources, where objects are stored in buckets. Amazon S3 has a flat structure with no hierarchy like you would see in a typical file system. However, for the sake of organizational simplicity, the Amazon S3 console supports the folder concept as a means of grouping objects. Amazon S3 does this by using key name prefixes for objects.
Instead, it fakes it based on the name of an object's key. Just upload an object using a key like some/directory/file.txt and many tools, including the S3 interface in the AWS console, will act as if you have an object called file.txt in a directory called directory in a directory called some.
See also Amazon S3 boto - how to create a folder?

With 'version' => '2006-03-01' (the S3 API version used by the SDK v3 client), creating the empty "folder" object is simply:
$client->putObject([
    'Bucket' => 'bucket',
    'Key' => 'folder/',
]);

Related

aws s3 createBucket function always returns S3 BucketAlreadyExists

I'm using the AWS S3 PHP API to create a bucket as shown below, but it returns this error message whatever I try:
The requested bucket name is not available. The bucket namespace is
shared by all users of the system. Please select a different name and
try again.
When I try to create the bucket in the AWS console (the same name I had tried through the API), it works.
Here is my sample code:
require 'vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\Exception\AwsException;

function createBucket($s3Client, $bucketName)
{
    try {
        $result = $s3Client->createBucket([
            'Bucket' => $bucketName,
        ]);
        return 'The bucket\'s location is: ' .
            $result['Location'] . '. ' .
            'The bucket\'s effective URI is: ' .
            $result['@metadata']['effectiveUri'];
    } catch (AwsException $e) {
        return 'Error: ' . $e->getAwsErrorMessage();
    }
}

function createTheBucket($name)
{
    define('AWS_KEY', 'AWS_KEY');
    define('AWS_SECRET_KEY', 'AWS_SECRET_KEY');
    define('REGION', 'eu-west-1');

    // Establish connection with an S3 client.
    $s3Client = new S3Client([
        'version' => '2006-03-01',
        'region' => REGION,
        'credentials' => [
            'key' => AWS_KEY,
            'secret' => AWS_SECRET_KEY,
        ]
    ]);

    echo createBucket($s3Client, $name);
}
The S3 bucket you are trying to create has already been created in the AWS namespace.
It's important to understand that S3 bucket names are unique across the entire AWS global namespace.
Ensure your bucket name does not collide with anyone else's, or with one of your own.
An Amazon S3 bucket name is globally unique, and the namespace is shared by all AWS accounts. This means that after a bucket is created, the name of that bucket cannot be used by another AWS account in any AWS Region until the bucket is deleted. You should not depend on specific bucket naming conventions for availability or security verification purposes.
If the S3 bucket name is free, then it's possible that either a hard-coded value has overridden the $bucketName variable, or that code logic (such as looping or formatting parameters) is trying to recreate a bucket that already exists.
The best way to find out is to validate the value of $bucketName throughout your script execution.
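As a quick sanity check, one option is to log the name that is actually being sent and test whether the bucket already exists before calling createBucket(). This is only a sketch, assuming the SDK v3 $s3Client and createBucket() from above; the bucket name is a placeholder, and doesBucketExist() is the SDK v3 helper that performs a HeadBucket:
function createBucketIfAvailable($s3Client, $bucketName)
{
    // log the exact value being used, in case it was overridden somewhere
    error_log("Attempting to create bucket: '{$bucketName}'");

    if ($s3Client->doesBucketExist($bucketName)) {
        return "Bucket '{$bucketName}' already exists (yours or another account's).";
    }

    return createBucket($s3Client, $bucketName);
}

echo createBucketIfAvailable($s3Client, 'my-unique-bucket-name-123'); // placeholder name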

Get publicly accessible URL from Google Cloud after upload PHP

The problem: after I upload an object to my publicly accessible Google Cloud bucket, I want to use the created URL immediately for another service. However, I don't see a way to get the mediaUrl that I could then use. All of the properties on the object returned by the following call that would give me that are private:
$bucket->upload(
    fopen($_FILES['file']['tmp_name'], 'r'),
    array('name' => $name)
);
I've already tried var_dump-ing the return value of the above call to see if any of the public properties would give me the created URL, but it doesn't even have any public properties.
Here's the code I'm using to upload the data:
$storage = new StorageClient([
    'keyFilePath' => 'keyfile_json.json'
]);
$bucket = $storage->bucket('bucket');
$name = 'some/name/path/'.$_POST['name'];

$bucket->upload(
    fopen($_FILES['file']['tmp_name'], 'r'),
    array('name' => $name)
);
The file is uploading, I just can't get the URL of the actual resource that I can then go use in a different API call to a different service.
How can I get the URL of the resource after it is uploaded?
You have two ways to achieve this:
Construct the URL for public objects yourself using the following syntax: https://storage.googleapis.com/[BucketName]/[ObjectName]
Where:
[BucketName] = your bucket
[ObjectName] = the name of your uploaded object
If you are using the App Engine Standard Environment, there is a method in the App Engine PHP API: getPublicUrl(string $gs_filename, boolean $use_https) : string
Where:
$gs_filename, string, The Google Cloud Storage filename, in the format: gs://bucket_name/object_name.
$use_https, boolean, If true, return an HTTPS URL. Note that the development server ignores this argument and returns only HTTP URLs.
See the API documentation here.
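A short sketch of that App Engine helper (assuming the App Engine PHP SDK is available; the gs:// path below is a placeholder):
use google\appengine\api\cloud_storage\CloudStorageTools;

// placeholder gs:// path; replace with your bucket and object name
$gsUri = 'gs://my-bucket/some/name/path/file.jpg';

$publicUrl = CloudStorageTools::getPublicUrl($gsUri, true); // true = return an HTTPS URL
echo $publicUrl;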
You need to build the public link URL yourself for public objects.
The format is simply https://storage.cloud.google.com/BucketName/ObjectName.
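Tying that back to the question's upload code, a minimal sketch (the bucket and key file names are the question's placeholders; the storage.googleapis.com form from the previous answer is used here, and special characters in the object name would still need URL-encoding):
use Google\Cloud\Storage\StorageClient;

$storage = new StorageClient([
    'keyFilePath' => 'keyfile_json.json'
]);
$bucket = $storage->bucket('bucket');
$name = 'some/name/path/'.$_POST['name'];

// upload() returns a StorageObject, so the object name is available immediately
$object = $bucket->upload(
    fopen($_FILES['file']['tmp_name'], 'r'),
    ['name' => $name]
);

// build the public URL for the freshly uploaded object
$publicUrl = sprintf(
    'https://storage.googleapis.com/%s/%s',
    $bucket->name(),
    $object->name()
);

echo $publicUrl;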

aws s3 rename directory (object)

I am trying to rename a directory in an Amazon AWS S3 bucket. I know that there is no such thing as a directory in S3; everything is an object.
I have a directory structure like:
abc/
aaa
bbb
And now I am trying to rename it with:
$s3->copyObject(array(
    'Bucket' => $bucket,
    'Key' => $newName,
    'CopySource' => "{$bucket}/{$currentObj}",
));
and then deleting the existing object. It does create a new object with the new name; the problem is that
when I rename abc to something else like demo, it just creates a new object named demo which is empty.
I am also aware of why demo is empty: because there were three different objects:
abc/
abc/aaa
abc/bbb
Now, to rename them all with one request, is there something like copyMatchingObjects? I mean, we have deleteMatchingObjects.
No, you cannot rename them all in one API call. The best you can do is probably:
copy abc/aaa to demo/aaa
copy abc/bbb to demo/bbb
delete abc/aaa
delete abc/bbb
delete abc/ (if it actually exists)
In particular, there is typically no need to create demo/.
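A sketch of that copy-then-delete sequence with the AWS SDK for PHP v3, using ListObjectsV2 to find every key under the old prefix. The bucket name and region are placeholders; the abc/ and demo/ prefixes are the question's examples, and deleteMatchingObjects is the SDK helper the question mentions:
use Aws\S3\S3Client;

$s3 = new S3Client([
    'version' => '2006-03-01',
    'region'  => 'us-east-1', // assumed region
]);

$bucket    = 'my-bucket';     // placeholder bucket name
$oldPrefix = 'abc/';
$newPrefix = 'demo/';

// list every key under the old prefix and copy it to the new prefix
$paginator = $s3->getPaginator('ListObjectsV2', [
    'Bucket' => $bucket,
    'Prefix' => $oldPrefix,
]);

foreach ($paginator as $page) {
    foreach ($page['Contents'] ?? [] as $object) {
        $oldKey = $object['Key'];
        $newKey = $newPrefix . substr($oldKey, strlen($oldPrefix));

        $s3->copyObject([
            'Bucket'     => $bucket,
            'Key'        => $newKey,
            'CopySource' => "{$bucket}/{$oldKey}",
        ]);
    }
}

// once everything is copied, remove the originals under the old prefix
$s3->deleteMatchingObjects($bucket, $oldPrefix);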

Listing objects filtered by prefix in AWS with PHP SDK

I recently got the task of managing data stored on Amazon Web Services.
According to the Amazon documentation, I tried the following code to list all objects within a bucket, and it works fine:
$aws = Aws::factory('/path/to/my/config.php');
$s3 = $aws->get('s3');

$it = $s3->getIterator('ListObjects', array(
    'Bucket' => 'myBucket',
));

foreach ($it as $o) {
    echo $o['Key']."<br />";
}
But I need to list only the objects with a certain prefix. To achieve this, I added the following line to the parameter array in the code above:
'prefix' => 'myPrefix/',
(The actual key of the file I want to access follows this scheme:
myPrefix/subPrefix/subPrefix2/file.txt)
But the code keeps returning all objects in the bucket.
I haven't found any helpful hints in the Amazon documentation for my question.
Can anyone tell me the correct syntax to list all objects with a given prefix in PHP?
Thank you in advance for any help.
According to the following thread ...
List objects in a specific folder on Amazon S3
... you need to capitalize the index values of the array that is passed as the second argument to the getIterator function:
'Prefix' => 'myPrefix/',
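Put together with the question's code, the call would look like this (bucket name and prefix are the question's placeholders):
$it = $s3->getIterator('ListObjects', array(
    'Bucket' => 'myBucket',
    'Prefix' => 'myPrefix/', // note the capital 'P'
));

foreach ($it as $o) {
    echo $o['Key']."<br />";
}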

List all Files in a S3 Directory with Zend Framework

How can I list all files in an Amazon S3 directory of a bucket in PHP (maybe with a helper from Zend Framework)?
See example #5:
http://framework.zend.com/manual/en/zend.service.amazon.s3.html
getObjectsByBucket($bucket) returns the list of object keys contained in the bucket.
$s3 = new Zend_Service_Amazon_S3($my_aws_key, $my_aws_secret_key);
$list = $s3->getObjectsByBucket("my-own-bucket");

foreach ($list as $name) {
    echo "I have $name key:\n";
    $data = $s3->getObject("my-own-bucket/$name");
    echo "with data: $data\n";
}
Update:
"Folders" in amazon s3 are prefixes, you can set a param:
prefix - Limits the response to keys which begin with the indicated prefix. You can use prefixes to separate a bucket into different sets of keys in a way similar to how a file system uses folders.
See line #293 of S3.php
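For example, a sketch of passing that prefix parameter (assuming getObjectsByBucket() accepts a parameter array, as the referenced S3.php source suggests; the bucket and prefix names are placeholders):
$s3 = new Zend_Service_Amazon_S3($my_aws_key, $my_aws_secret_key);

// only keys beginning with 'somefolder/' are returned
$list = $s3->getObjectsByBucket("my-own-bucket", array('prefix' => 'somefolder/'));

foreach ($list as $name) {
    echo "I have $name key:\n";
}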
