aws s3 rename directory (object) - php

I am trying to rename a directory in an Amazon AWS S3 bucket. I know that there is no such thing as a directory in S3; everything is an object.
I have a directory structure like this:
abc/
    aaa
    bbb
Now I am trying to rename it with:
$s3->copyObject(array(
    'Bucket'     => $bucket,
    'Key'        => $newName,
    'CopySource' => "{$bucket}/{$currentObj}",
));
and then deleting the existing object. This does create a new object with the new name, but the problem is that when I rename abc to something else, like demo, it just creates a new object named demo which is empty. I also understand why demo is empty: there were three different objects,
abc/
abc/aaa
abc/bbb
Now, to rename them all with one request, is there something like copyMatchingObjects? I ask because we do have deleteMatchingObjects.

No, you cannot rename them all in one API call. The best you can do is probably:
copy abc/aaa to demo/aaa
copy abc/bbb to demo/bbb
delete abc/aaa
delete abc/bbb
delete abc/ (if it actually exists)
In particular, there is typically no need to create demo/.
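Putting those steps together, here is a minimal sketch of the copy-then-delete loop, assuming the AWS SDK for PHP and the hypothetical abc/ to demo/ rename from the question; the bucket name and prefixes are placeholders:

$objects = $s3->getIterator('ListObjects', array(
    'Bucket' => $bucket,
    'Prefix' => 'abc/',
));

foreach ($objects as $object) {
    $oldKey = $object['Key'];
    $newKey = 'demo/' . substr($oldKey, strlen('abc/'));

    // Copy every real object to its new key; skip the bare "abc/" placeholder
    // (if it exists) so no empty "demo/" object is created.
    if ($oldKey !== 'abc/') {
        $s3->copyObject(array(
            'Bucket'     => $bucket,
            'Key'        => $newKey,
            'CopySource' => "{$bucket}/{$oldKey}",
        ));
    }

    // Then delete the original object.
    $s3->deleteObject(array(
        'Bucket' => $bucket,
        'Key'    => $oldKey,
    ));
}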

Related

PHP - How can I delete a GCP bucket folder and all files/folders within it?

I'm trying to delete a folder in a GCP bucket using the Google Cloud Storage PHP library.
The folder structure is like so:
-Folder1
--Folder1.1
---File
---File
--Folder1.2
---File
-Folder2
--Folder2.1
---File
---File
--Folder2.3
---File
Hopefully, that makes sense. If not, I basically just need to delete a folder and all files and folders within it.
When I do
$storage->bucket($_ENV['bucket_name'])->object('folder1')->delete();
I just get a 404 "No such object" error. I can't see any additional options to use in the library to delete a folder and its contents.
You can't delete the folder directly using only the $object->delete() function. You need to list all the objects inside the folder, using a prefix on the bucket to target your folder's location, and then delete them one by one, because the API only supports deleting a single object at a time. In other words, there is no API call to delete multiple objects using wildcards or the like.
To delete all the files, including the folder itself, use the sample code below, inspired by this answer:
require __DIR__ . '/vendor/autoload.php';

use Google\Cloud\Storage\StorageClient;

function delete_Folder($bucketName, $folderName)
{
    $storage = new StorageClient();
    $bucket = $storage->bucket($bucketName);

    // List every object under the folder prefix and delete them one by one.
    $objects = $bucket->objects([
        'prefix' => rtrim($folderName, '/') . '/'
    ]);

    foreach ($objects as $object) {
        $object->delete();
        printf('Deleted object: %s' . PHP_EOL, $object->name());
    }
}

delete_Folder('mybucket', 'folder1');

Get publicly accessible URL from Google Cloud after upload PHP

The problem is that after I upload an object to my publicly accessible Google Cloud bucket, I want to use the resulting URL immediately for another service. However, I don't see a way to get the mediaUrl that I could then use. All of the properties on the object returned by the following call that would give me that are private:
$bucket->upload(
    fopen($_FILES['file']['tmp_name'], 'r'),
    array('name' => $name)
);
I've already tried var_dump-ing the return value of the above call to see if any public properties would give me the created URL, but it doesn't even have any public properties.
Here's the code I'm using to upload the data:
$storage = new StorageClient([
    'keyFilePath' => 'keyfile_json.json'
]);

$bucket = $storage->bucket('bucket');
$name = 'some/name/path/' . $_POST['name'];

$bucket->upload(
    fopen($_FILES['file']['tmp_name'], 'r'),
    array('name' => $name)
);
The file is uploading, I just can't get the URL of the actual resource that I can then go use in a different API call to a different service.
How can I get the URL of the resource after it is uploaded?
You have two ways to achieve this:
1. Construct the URL for public objects using the following syntax: https://storage.googleapis.com/[BucketName]/[ObjectName]
Where:
[BucketName] = your bucket
[ObjectName] = the name of your uploaded object
2. If you are using the App Engine Standard environment, there is a method in the App Engine PHP API: getPublicUrl(string $gs_filename, boolean $use_https) : string
Where:
$gs_filename (string): the Google Cloud Storage filename, in the format gs://bucket_name/object_name.
$use_https (boolean): if true, return an HTTPS URL. Note that the development server ignores this argument and returns only HTTP URLs.
See the API documentation.
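As a minimal sketch, assuming the legacy App Engine PHP SDK is available in your runtime (the bucket and object names are placeholders):

use google\appengine\api\cloud_storage\CloudStorageTools;

// Returns a publicly accessible URL for the given gs:// path.
$publicUrl = CloudStorageTools::getPublicUrl('gs://bucket_name/object_name', true);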
You need to build the public link URL yourself for public objects.
The format is simply https://storage.googleapis.com/BucketName/ObjectName.
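For example, here is a minimal sketch that builds the URL right after the upload in the question's code, assuming the google/cloud-storage library, where Bucket::upload() returns a StorageObject; the bucket and object names are placeholders:

$object = $bucket->upload(
    fopen($_FILES['file']['tmp_name'], 'r'),
    array('name' => $name)
);

// Assemble the public URL from the bucket and object names
// (URL-encode the object name if it contains special characters).
$publicUrl = sprintf(
    'https://storage.googleapis.com/%s/%s',
    $bucket->name(),
    $object->name()
);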

How to create a folder within S3 bucket using PHP

I'm trying to create a folder within an Amazon S3 bucket, but I'm finding it difficult to find documentation that adequately explains what is needed. I have the following code / pseudocode for creating a folder. Can anyone explain or provide a sample of the arguments I need to place within the code?
use vendor\aws\S3\S3Client;

$bucket_url = 'https://***.amazonaws.com/***/';
$folder_name = $username . '/';

$s3Client = new vendor\aws\S3\S3Client([
    'version' => AWS_VERSION,
    'region' => AWS_REGION,
    'credentials' => [
        'key' => AWS_KEY,
        'secret' => AWS_SECRET,
    ],
]);

$s3Client->putObject(array(
    'Bucket' => AWS_BUCKET, // Defines the name of the bucket
    'Key' => AWS_PATH . $folder_name, // Defines the folder name
    'Body' => "",
));
S3 doesn't have folders beyond the bucket, but objects (files) can have forward slashes in their names, and there are methods to retrieve objects by prefix, which lets you emulate a directory listing. This means, though, that you can't create an empty folder.
So a workaround is to put an empty test.txt file and delete it afterwards; the folder structure will stay.
/* upload an empty test.txt file to the subfolder /folder/ in the S3 bucket "bucketname" */
$s3->putObjectFile('test.txt', 'bucketname', '/folder/test.txt', S3::ACL_PUBLIC_READ);

/* delete the empty test.txt file from the subfolder /folder/ in the S3 bucket "bucketname" */
$s3->deleteObject('bucketname', '/folder/test.txt');
Amazon S3 does not have a concept of folders. For S3, all objects are simply a key name with data.
Folders are a human concept which use the '/' character to separate the folders. But S3 does not care.
When you use many third-party tools (and even the AWS Management Console), the tools will often look at the object keys under your prefix, and when they see a '/' in a key, they will interpret it as a folder.
But there's no way to "create a folder".
If you simply PutObject an object with a key containing your desired full path (for example, "my/desired/folder/structure/file.txt"), Amazon S3 will put it there. It's not like many filesystems where the folder must exist before a file can be created.
The closest thing to "creating a folder" you could do is to create a 0-byte object with a '/' at the end of its key, for example "my/desired/folder/structure/". But it will just be another object in the bucket. It won't have any effect on the creation or operation of the bucket or any other objects in the bucket.
Amazon S3 doesn't really have directories:
In Amazon S3, buckets and objects are the primary resources, where objects are stored in buckets. Amazon S3 has a flat structure with no hierarchy like you would see in a typical file system. However, for the sake of organizational simplicity, the Amazon S3 console supports the folder concept as a means of grouping objects. Amazon S3 does this by using key name prefixes for objects.
Instead, it fakes it based on the name of an object's key. Just upload an object using a key like some/directory/file.txt and many tools, including the S3 interface in the AWS console, will act as if you have an object called file.txt in a directory called directory in a directory called some.
See also Amazon S3 boto - how to create a folder?
$client->putObject([
    'Bucket' => 'bucket',
    'Key' => 'folder/',
]);
For 'version' => '2006-03-01',
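To tie the answers above together, here is a minimal sketch, assuming AWS SDK for PHP v3 and placeholder bucket/key names: you can upload straight to a nested key without creating any folders first, and optionally create a zero-byte trailing-slash object if you want an empty "folder" to appear in the console.

// Upload directly to a nested key; no folder has to exist beforehand.
$s3Client->putObject(array(
    'Bucket' => 'my-bucket',
    'Key'    => 'my/desired/folder/structure/file.txt',
    'Body'   => 'file contents',
));

// Optional: emulate an empty folder with a zero-byte trailing-slash object.
$s3Client->putObject(array(
    'Bucket' => 'my-bucket',
    'Key'    => 'my/desired/folder/structure/',
    'Body'   => '',
));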

Listing objects filtered by prefix in AWS with PHP SDK

I recently got the task of managing data which is stored via Amazon Web Services.
According to the Amazon documentation, I tried the following code to list all objects within a bucket, and it works fine:
$aws = Aws::factory('/path/to/my/config.php');
$s3 = $aws->get('s3');

$it = $s3->getIterator('ListObjects', array(
    'Bucket' => 'myBucket',
));

foreach ($it as $o) {
    echo $o['Key'] . "<br />";
}
But I need to list only the objects with a certain prefix. To achieve this, I added the following line to the options array passed to getIterator:
'prefix' => 'myPrefix/',
(The actual key of the file I want to access follows the scheme:
myPrefix/subPrefix/subPrefix2/file.txt)
But the code keeps returning all objects in the bucket.
I haven't found any helpful hints in the Amazon documentation for my question.
Can anyone tell me the correct syntax to list all objects with a given prefix in PHP?
Thank you in advance for any help.
According to the following thread ...
List objects in a specific folder on Amazon S3
... you need to capitalize the index values of the array that is passed as the second argument to the getIterator function:
'Prefix' => 'myPrefix/',
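Here is a minimal sketch of the corrected call, keeping the question's SDK v2 style; the bucket and prefix names are placeholders:

$it = $s3->getIterator('ListObjects', array(
    'Bucket' => 'myBucket',
    'Prefix' => 'myPrefix/', // note the capital "P"
));

foreach ($it as $o) {
    echo $o['Key'] . "<br />";
}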

How to store an image?

I don't understand Gaufrette and Symfony2.
It seems to me like it only works for text files/text content.
I can create a file but can't copy from a local source (i.e. a path).
What I would like to do is something like this:
$adapter = new LocalAdapter($realpath);
$filesystem = new Filesystem($adapter);
$filesystem->fromUploadedFile($tempPathOfUploadedFile, $idForGaufrette);
How do I store an image, and how do I handle its output when requested by the user?
Update:
How can I access the temp filename of an uploaded file in Symfony?
How can I access the existing, private attribute $path of the Symfony\Component\HttpFoundation\File\UploadedFile object in Symfony2?
The copy(x, y) method is not implemented, but if you want to store a file you can move it using the method
rename($key, $new)
defined in the Filesystem class.
To handle the output when requested by the user, all you need is a link to the image path (probably stored in the database), so you don't need the filesystem for that (you can check whether the file exists with the has($key) method).
In all cases, use the Local adapter to work locally.
If you need a stream wrapper like "ftp://mydomain/myPicture", I recently sent a PR that configures the stream wrapper in config.yml and registers the filesystems you want along with their domains.
To get the tmp filename:
$file->getPathName(); // /tmp/filename
where $file is a Symfony\Component\HttpFoundation\File\UploadedFile Object.
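As a minimal sketch of storing an uploaded image, assuming Gaufrette's Local adapter and the Filesystem::write() method (the storage path and key are placeholders, and $file is the UploadedFile from the request):

use Gaufrette\Adapter\Local as LocalAdapter;
use Gaufrette\Filesystem;

$adapter = new LocalAdapter('/path/to/image/storage');
$filesystem = new Filesystem($adapter);

// Read the uploaded file's temporary content and write it under the chosen key.
$content = file_get_contents($file->getPathname());
$filesystem->write($idForGaufrette, $content, true); // true = overwrite if the key exists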
