I'm reading: http://docs.aws.amazon.com/aws-sdk-php/latest/class-Aws.S3.S3Client.html#_getBucketCors
I have a partial key, e.g. "/myfolder/myinnerfolder/".
However, there are actually many objects (files) inside myinnerfolder.
I believe that I can call something like this:
$result = $client->getObject(array(
    'Bucket' => $bucket,
    'Key' => $key
));
return $result;
That works if I have the full key. How can I call something like the above but have it return all of the objects and/or their names? In Python you can simply request by the front of a key, but I don't see an option to do that here. Any ideas?
You need to use the listObjects() method with the 'Prefix' parameter.
$result = $client->listObjects(array(
    'Bucket' => $bucket,
    'Prefix' => 'myfolder/myinnerfolder/',
));
$objects = $result['Contents'];
To make this even easier, especially if you have more than 1000 objects with that prefix (which would normally require multiple requests), you can use the Iterators feature of the SDK.
$objects = $client->getIterator('ListObjects', array(
    'Bucket' => $bucket,
    'Prefix' => 'myfolder/myinnerfolder/',
));

foreach ($objects as $object) {
    // Each listing entry exposes its key under 'Key' (there is no 'Name' field).
    echo $object['Key'] . PHP_EOL;
}
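Each $object yielded by the loop is that listing entry's data, so besides 'Key' you also get fields like 'Size', 'LastModified', and 'ETag' without making any extra requests.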
I have a little problem with the AWS S3 service. I'm trying to delete a whole bucket, and I would like to use deleteBucketAsync().
My code:
$result = $this->s3->listObjects(array(
    'Bucket' => $bucket_name,
    'Prefix' => ''
));

foreach ($result['Contents'] as $file) {
    $this->s3->deleteObjectAsync(array(
        'Bucket' => $bucket_name,
        'Key' => $file['Key']
    ));
}

$result = $this->s3->deleteBucketAsync(
    [
        'Bucket' => $bucket_name,
    ]
);
Sometimes this code works and deletes the whole bucket in seconds, but sometimes it doesn't.
Can someone please explain to me how exactly the S3 async functions work?
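A likely explanation: the *Async methods in SDK v3 return Guzzle promises immediately, before the HTTP requests have finished, so deleteBucketAsync() can run while some of the deletes are still in flight, and a non-empty bucket cannot be deleted. A minimal sketch of one way to sequence this, assuming SDK v3 with the guzzlehttp/promises package:
$promises = [];
foreach ($result['Contents'] as $file) {
    // Each *Async call returns a promise right away; collect them all.
    $promises[] = $this->s3->deleteObjectAsync([
        'Bucket' => $bucket_name,
        'Key' => $file['Key']
    ]);
}

// Block until every delete has settled before removing the bucket itself.
// (Newer guzzlehttp/promises versions expose this as \GuzzleHttp\Promise\Utils::all().)
\GuzzleHttp\Promise\all($promises)->wait();

$this->s3->deleteBucket(['Bucket' => $bucket_name]);
Note also that listObjects returns at most 1000 keys per call, so a larger bucket would need pagination (or deleteMatchingObjects) before the bucket delete can succeed.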
I'm trying to remove objects, and when all objects are removed I want to remove the main folder.
Right now it works fine: when I remove an object key in a folder and the folder becomes empty, the folder is removed as well. But when I get to the last main folder, like (folder1/folder2/), where folder1 is empty after folder2 is removed, I can't remove that folder.
My PHP code looks like this:
$s3 = new S3Client([
    'version' => 'latest',
    'region' => AMAZON_S3_REGION,
    'credentials' => [
        'key' => AMAZON_KEY,
        'secret' => AMAZON_SECRET
    ]
]);

$response = $s3->listObjects([
    'Bucket' => AMAZON_S3_BUCKET,
    'Prefix' => $prefix_dir
]);

$keys = [];
foreach ($response['Contents'] as $val) {
    $keys[]['Key'] = $val['Key'];
}

$result = $s3->deleteObjects([
    'Bucket' => AMAZON_S3_BUCKET,
    'Objects' => $keys
]);
When I try to remove the folder path alone after that, I get a status code 204. Why?
Instead of deleteObjects, try using deleteMatchingObjects and passing an empty prefix, or a string matching some folder.
$s3->deleteMatchingObjects($bucket);
// or just the contents of one folder
$s3->deleteMatchingObjects($bucket, 'folder1/');
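deleteMatchingObjects lists everything under the given prefix and issues the deletes in batches for you, so it also avoids the 1000-key ceiling of a single listObjects call. As for the 204: that is S3's normal success status for a DELETE request, not an error.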
I am successfully uploading folders to S3 using ->uploadDirectory(). Several hundred folders have hundreds or thousands of images in them, so using putObject() for each file hardly seemed to make sense. The upload works and all goes well, but the ACL, StorageClass, and Metadata are not being applied.
According to the docs at http://docs.aws.amazon.com/aws-sdk-php/v2/guide/service-s3.html#uploading-a-directory-to-a-bucket , the following code should accomplish this. It is further documented with the putObject() function that is also cited.
I can find no examples of this function using anything but a directory and bucket, so I fail to see what might be wrong with it. Any ideas why the data in $options is being ignored?
$aws = Aws::factory('config.php');
$s3 = $aws->get('S3');

$dir = 'c:\myfolder\myfiles';
$bucket = 'mybucket';
$keyPrefix = "ABC/myfiles/";

$options = array(
    'ACL' => 'public-read',
    'StorageClass' => 'REDUCED_REDUNDANCY',
    'Metadata' => array(
        'MyVal1' => 'Something',
        'MyVal2' => 'Something else'
    )
);

$result = $s3->uploadDirectory($dir, $bucket, $keyPrefix, $options);
Parameters to provide to putObject or createMultipartUpload should be in the params option, not provided as top-level values in the options array. Try declaring your options as follows:
$options = array(
    'params' => array(
        'ACL' => 'public-read',
        'StorageClass' => 'REDUCED_REDUNDANCY',
        'Metadata' => array(
            'MyVal1' => 'Something',
            'MyVal2' => 'Something else',
        ),
    ),
);
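With the parameters nested under 'params', the call from the question stays exactly the same:
$result = $s3->uploadDirectory($dir, $bucket, $keyPrefix, $options);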
I use this code, and it returns no more than 1000 parts of the upload. How can I get more than 1000 parts of an Amazon S3 multipart upload using the aws-sdk-php listParts() method? Thanks.
$parts = $s3->listParts(array(
    'Bucket' => $bucket,
    'Key' => $keyName,
    'UploadId' => $uploadId,
));
If there are more than 1000 parts, then the result will contain the NextPartNumberMarker field. Take the value from NextPartNumberMarker and make another request, exactly like your original, but add a PartNumberMarker parameter with the value from NextPartNumberMarker, like this:
$result2 = $s3->listParts(array(
    'Bucket' => $bucket,
    'Key' => $keyName,
    'UploadId' => $uploadId,
    'PartNumberMarker' => $parts['NextPartNumberMarker'],
));
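To walk through every page rather than just the second one, you can loop until the response is no longer truncated. A minimal sketch:
$allParts = array();
$args = array(
    'Bucket' => $bucket,
    'Key' => $keyName,
    'UploadId' => $uploadId,
);

do {
    $result = $s3->listParts($args);

    // Collect this page of up to 1000 parts.
    foreach ($result['Parts'] as $part) {
        $allParts[] = $part;
    }

    // Continue from where this page left off, if there is more.
    $more = $result['IsTruncated'];
    if ($more) {
        $args['PartNumberMarker'] = $result['NextPartNumberMarker'];
    }
} while ($more);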
These parameters are documented in the API documentation for the ListParts operation.
Doing that is kind of a pain though, so the easiest way is to use the "Iterators" feature of the SDK. Iterators enumerate one part at a time, and automatically make subsequent requests in the background for more as needed. To use the iterator for the ListParts operation, you can do something like this:
$parts = $s3->getIterator('ListParts', array(
    'Bucket' => $bucket,
    'Key' => $keyName,
    'UploadId' => $uploadId,
));

foreach ($parts as $part) {
    // Do something with the part data
    printf("%d: %s (%d bytes)\n", $part['PartNumber'], $part['ETag'], $part['Size']);
}
We have an application wherein a user can create his own webpages and host them. We are using S3 to store the pages, as they are static. Since there is a limit of 100 buckets per account, we decided to go with a folder for each user inside one bucket.
Now, if a user wants to host his website on his own domain, we ask him for the domain name (when he starts, we publish it on our subdomain), and I have to rename the folder.
S3 being a flat file system, I know there are actually no folders, just keys separated by the / delimiter, so I cannot go into a folder and check how many pages it contains. The API allows it one object at a time, but for that we have to know the object names in the bucket.
I went through the docs and came across iterators, which I have not implemented yet. These use Guzzle, with which I have no experience, so I am facing challenges implementing them.
Is there any other path I can take, or do I need to go this way?
You can create an iterator for the contents of a "folder" by doing the following:
$objects = $s3->getIterator('ListObjects', array(
    'Bucket' => 'bucket-name',
    'Prefix' => 'subfolder-name/',
    'Delimiter' => '/',
));

foreach ($objects as $object) {
    // Do things with each object
}
If you just need a count, you could do this:
echo iterator_count($s3->getIterator('ListObjects', array(
    'Bucket' => 'bucket-name',
    'Prefix' => 'subfolder-name/',
    'Delimiter' => '/',
)));
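As for the rename itself: S3 has no rename operation, so "renaming" a folder means copying every object under the old prefix to the new prefix and then deleting the original. A rough sketch ($oldPrefix and $newPrefix are illustrative, e.g. 'olddomain/' and 'newdomain/'; no Delimiter is set so nested keys are included):
$objects = $s3->getIterator('ListObjects', array(
    'Bucket' => 'bucket-name',
    'Prefix' => $oldPrefix,
));

foreach ($objects as $object) {
    $newKey = $newPrefix . substr($object['Key'], strlen($oldPrefix));

    // Copy the object into the new "folder"...
    $s3->copyObject(array(
        'Bucket' => 'bucket-name',
        'Key' => $newKey,
        'CopySource' => urlencode('bucket-name/' . $object['Key']),
    ));

    // ...then remove the original.
    $s3->deleteObject(array(
        'Bucket' => 'bucket-name',
        'Key' => $object['Key'],
    ));
}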
Bit of a learning curve with S3, eh? I spent about 2 hours and ended up with this CodeIgniter solution. I wrote a controller to loop over my known sub-folders.
function s3GetObjects($bucket) {
    $CI =& get_instance();
    $CI->load->library('aws_s3');

    // Note: the $bucket argument here is really a sub-folder name used as the
    // key prefix; the actual bucket comes from the s3_bucket config item.
    $prefix = $bucket.'/';
    $objects = $CI->aws_s3->getIterator('ListObjects', array(
        'Bucket' => $CI->config->item('s3_bucket'),
        'Prefix' => $prefix,
        'Delimiter' => '/',
    ));

    foreach ($objects as $object) {
        // Skip the placeholder entry for the "folder" itself.
        if ($object['Key'] == $prefix) continue;
        echo $object['Key'].PHP_EOL;

        if (!file_exists(FCPATH.$object['Key'])) {
            try {
                // Download the object to the matching local path.
                $r = $CI->aws_s3->getObject(array(
                    'Bucket' => $CI->config->item('s3_bucket'),
                    'Key' => $object['Key'],
                    'SaveAs' => FCPATH.$object['Key']
                ));
            } catch (Exception $e) {
                echo $e->getMessage().PHP_EOL;
                //return FALSE;
            }
            echo PHP_EOL;
        } else {
            echo ' -- file exists'.PHP_EOL;
        }
    }

    return TRUE;
}
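The calling code then just invokes this for each known sub-folder, along the lines of (folder names illustrative):
foreach (array('folder1', 'folder2') as $folder) {
    s3GetObjects($folder);
}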