Set up expiration for putObject in S3 PHP SDK (PHP)

This is working code for creating a bucket and putting an object:
$bucket = uniqid("php-sdk-sample-", true);
echo "Creating bucket named {$bucket}\n";
$result = $client->createBucket(array(
    'Bucket' => $bucket
));

// Wait until the bucket is created
$client->waitUntilBucketExists(array('Bucket' => $bucket));

$key = 'hello_world.txt';
echo "Creating a new object with key {$key}\n";
$result = $client->putObject(array(
    'Bucket' => $bucket,
    'Key' => $key,
    'Body' => "Hello World!"
));
How can I set an expiration date for this newly created object? For example, I want the uploaded file to disappear 3 days later.

You need to read and learn about S3 object lifecycle management first, then you can use the putBucketLifecycle operation to configure it.
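For example, a minimal sketch in SDK v2 syntax (the rule ID is a made-up name and matching the whole bucket is my assumption; in SDK v3 the equivalent operation is putBucketLifecycleConfiguration):
$client->putBucketLifecycle(array(
    'Bucket' => $bucket,
    'Rules' => array(
        array(
            'ID' => 'expire-after-3-days',      // hypothetical rule name
            'Prefix' => '',                     // empty prefix matches every object in the bucket
            'Status' => 'Enabled',
            'Expiration' => array('Days' => 3), // delete roughly 3 days after creation
        ),
    ),
));
Note that lifecycle rules are configured per bucket and match objects by prefix, so to expire only the hello_world.txt object you could set 'Prefix' => $key. Expiration also runs at day granularity, rounded to midnight UTC, not at an exact timestamp.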

I figured it out. Although it's not available in SDK 2.6.*, this solved the problem:
create_object_expiration_config($bucket, $opt)
in SDK 1.6.2. Here is the link.

Related

Create & zip & Download an S3 folder/multiple-files with AWS SDK PHP

I want to create a PHP function, triggered in JS, that can:
Retrieve all AWS S3 bucket files from a specific folder (I can also provide the path of each file)
Create a zip containing all the S3 files
Download the zip when the trigger is hit (CTA)
I'm able to download a single file with the getObject method from this example. However, I can't find any information on how to download multiple files and zip them.
I tried the downloadBucket method, however it downloads all the files into my project directory rather than producing a zip file.
Here is my code:
<?php
// AWS Info + Connection
$IAM_KEY = 'XXXXX';
$IAM_SECRET = 'XXXXX';
$region = 'XXXXX';

require '../../vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;

$FolderToDownload = "FolderToDownload";

// Connection OK
$s3 = S3Client::factory(array(
    'credentials' => array(
        'key' => $IAM_KEY,
        'secret' => $IAM_SECRET
    ),
    'version' => 'latest',
    'region' => $region
));

try {
    $bucketName = 'XXXXX';
    $destination = 'NeedAZipFileNotAFolderPath';
    $options = array('debug' => true);
    // Adds all files into the folder: NeedAZipFileNotAFolderPath/Files1.png...Files12.png...
    $s3->downloadBucket($destination, $bucketName, $FolderToDownload, $options);
    // Looking for a zip file that can be downloaded
    // Can I use downloadBucket? Is there a better way to do it?
    // If needed I can create an array of all file paths that need to be added to the zip
} catch (S3Exception $e) {
    echo $e->getMessage() . PHP_EOL;
}
?>
If someone can help, that would be nice.
Thanks
You can use ZipArchive and list the objects from a prefix (the bucket's folder). In this case, I also use registerStreamWrapper to get the keys and add each file to the zip being created.
Something like this:
<?php
require '/path/to/sdk-or-autoloader';

$s3 = Aws\S3\S3Client::factory(/* sdk config */);
$s3->registerStreamWrapper();

$zipPath = '/path/to/zip-you-are-creating.zip';
$zip = new ZipArchive;
$zip->open($zipPath, ZipArchive::CREATE);

$bucket = 'your-bucket';
$prefix = 'your-prefix-folder'; // ex.: 'image/test/folder/'

$objects = $s3->getIterator('ListObjects', array(
    'Bucket' => $bucket,
    'Prefix' => $prefix
));

foreach ($objects as $object) {
    $contents = file_get_contents("s3://{$bucket}/{$object['Key']}"); // get file
    $zip->addFromString($object['Key'], $contents); // add file contents to zip
}
$zip->close();

// Download the zip file
header("Content-Description: File Transfer");
header("Content-Type: application/octet-stream");
header("Content-Disposition: attachment; filename=\"" . basename($zipPath) . "\"");
readfile($zipPath);
?>
You can also delete the zip file after the download, if you prefer.
This should work in most cases, but I don't want to save the file on the server, and I don't see how to let the browser download multiple objects directly from the AWS S3 bucket without saving a file on my server. If anyone knows, please share with us.
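One possibility (my own sketch, not from this thread): a streaming zip library such as maennchen/zipstream-php can write the archive straight to the HTTP response, combined with the SDK's stream wrapper. Roughly, with the v2 ZipStream API:
<?php
// Sketch assuming maennchen/zipstream-php (v2 API) is installed via Composer.
require '/path/to/vendor/autoload.php';

$s3 = Aws\S3\S3Client::factory(/* sdk config */);
$s3->registerStreamWrapper();

$options = new ZipStream\Option\Archive();
$options->setSendHttpHeaders(true); // ZipStream emits the download headers itself

$zip = new ZipStream\ZipStream('download.zip', $options);

$bucket = 'your-bucket';
$prefix = 'your-prefix-folder';

$objects = $s3->getIterator('ListObjects', array(
    'Bucket' => $bucket,
    'Prefix' => $prefix
));

foreach ($objects as $object) {
    // Stream each object from S3 straight into a zip entry: no temp file on disk
    $stream = fopen("s3://{$bucket}/{$object['Key']}", 'r');
    $zip->addFileFromStream($object['Key'], $stream);
    fclose($stream);
}

$zip->finish(); // flush the end of the archive to the browser
?>
Here the zip is never materialized on disk, and memory use stays bounded because each object is copied stream-to-stream.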

How to download the latest or most recently added file from AWS S3 using PHP (Yii2)

I have a non-versioned S3 bucket (VersionId is null for all files), files have different names.
My current code is:
$path = $this->key . '/primary/pdfs/' . $id . '/';
$result = $this->s3->listObjects(['Bucket' => $this->bucket, 'Prefix' => $path])->toArray();

// get the last object from s3
$object = end($result['Contents']);
$key = $object['Key'];

$file = $this->s3->getObject([
    'Bucket' => $this->bucket,
    'Key' => $key
]);

// download the file
header('Content-Type: application/pdf');
echo $file['Body'];
The above is incorrect: it returns the last object in the listing, which is not necessarily the most recently added file.
Do I need to use the API call below? If so, how do I use it?
$result = $this->s3->listObjectVersions(['Bucket' => $this->bucket,"Prefix" => $path])->toArray();
Since the VersionId of all files is null, there should be only one version of each file in the bucket.
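Since the bucket is not versioned, listObjectVersions shouldn't be needed. A sketch of one approach (my assumption, not from the original post): every entry in the listObjects result carries a LastModified timestamp, so sort the listing by that and take the newest key. Note that listObjects returns at most 1000 keys per call, so paginate if the prefix can hold more.
$result = $this->s3->listObjects(['Bucket' => $this->bucket, 'Prefix' => $path])->toArray();
$objects = $result['Contents'];

// Sort ascending by LastModified; cast to string in case the SDK returns
// a DateTime-like object instead of a plain string.
usort($objects, function ($a, $b) {
    return strtotime((string) $a['LastModified']) - strtotime((string) $b['LastModified']);
});
$latest = end($objects); // the most recently modified object

$file = $this->s3->getObject([
    'Bucket' => $this->bucket,
    'Key' => $latest['Key']
]);

header('Content-Type: application/pdf');
echo $file['Body'];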

How to store URL contents in a variable and upload to Amazon S3?

So, I've been trying to get this to work for the past couple of hours, but I can't figure it out. The goal is to pull the converted mp4 file from gfycat and upload it to the Amazon S3 bucket.
gfycat returns a JSON object properly, and $result->mp4Url returns a correct URL to the mp4 file. I keep getting errors such as "object expected, string given". Any ideas? Thanks.
// convert the gif to video format using gfycat
$response = file_get_contents("http://upload.gfycat.com/transcode/" . $key .
"?fetchUrl=" . urlencode($upload->getUrl('g')));
$result = json_decode($response);
$result_mp4 = file_get_contents($result->mp4Url);
// save the converted file to AWS S3 /i
$s3->putObject(array(
'Bucket' => getenv('S3_BUCKET'),
'Key' => 'i/' . $upload->id64 . '.mp4',
'SourceFile' => $result_mp4,
));
var_dump($response) yields:
string '{
"gfyId":"vigorousspeedyinexpectatumpleco",
"gfyName":"VigorousSpeedyInexpectatumpleco",
"gfyNumber":"884853904",
"userName":"anonymous",
"width":250,
"height":250,
"frameRate":11,
"numFrames":67,
"mp4Url":"http:\/\/zippy.gfycat.com\/VigorousSpeedyInexpectatumpleco.mp4",
"webmUrl":"http:\/\/zippy.gfycat.com\/VigorousSpeedyInexpectatumpleco.webm",
"gifUrl":"http:\/\/fat.gfycat.com\/VigorousSpeedyInexpectatumpleco.gif",
"gifSize":1364050,
"mp4Size":240833,
"webmSize":220389,
"createDate":"1388777040",
"views":"205",
"title":'... (length=851)
Using json_decode() on it also yields similar results.
You are mixing up the 'SourceFile' parameter (which accepts a file path) with the 'Body' parameter (which accepts raw data). See Uploading Objects in the AWS SDK for PHP User Guide for more examples.
Here are 2 options that should work:
Option 1 (Using SourceFile)
// convert the gif to video format using gfycat
$response = file_get_contents("http://upload.gfycat.com/transcode/" . $key .
    "?fetchUrl=" . urlencode($upload->getUrl('g')));
$result = json_decode($response);

// save the converted file to AWS S3 /i
$s3->putObject(array(
    'Bucket' => getenv('S3_BUCKET'),
    'Key' => 'i/' . $upload->id64 . '.mp4',
    'SourceFile' => $result->mp4Url,
));
Option 2 (Using Body)
// convert the gif to video format using gfycat
$response = file_get_contents("http://upload.gfycat.com/transcode/" . $key .
    "?fetchUrl=" . urlencode($upload->getUrl('g')));
$result = json_decode($response);
$result_mp4 = file_get_contents($result->mp4Url);

// save the converted file to AWS S3 /i
$s3->putObject(array(
    'Bucket' => getenv('S3_BUCKET'),
    'Key' => 'i/' . $upload->id64 . '.mp4',
    'Body' => $result_mp4,
));
Option 1 is better, though, because the SDK will use a file handle to the mp4 file instead of loading the entire thing into memory (as file_get_contents does).

S3 - need to download particular images from AWS and download them as a zip

I need to download images from an AWS bucket into a local directory and zip them for download.
I have tried the code below, but I can't figure out how to copy the images into my local directory.
Here is my function:
public function commomUpload($contentId, $prop_id)
{
    $client = S3Client::factory(array(
        'key' => 'my key',
        'secret' => '----secret-----',
    ));

    $documentFolderName = 'cms_photos';
    $docbucket = "propertiesphotos/$prop_id/$documentFolderName";

    $data = $this->Photo->find('all', array('conditions' => array('Photo.content_id' => $contentId)));
    //pr($data);die;

    $split_point = '/';
    foreach ($data as $row) {
        $string = $row['Photo']['aws_link'];
        $result = array_map('strrev', explode($split_point, strrev($string)));
        $imageName = $result[0];

        $result = $client->getObject(array(
            'Bucket' => $docbucket,
            'Key' => $imageName
        ));

        $uploads_dir = '/img/uploads/';
        if (!copy($result, $uploads_dir)) {
            echo "failed to copy $result...\n";
        }
        //move_uploaded_file($imageName, "$uploads_dir/$imageName");
    }
}
Don't complicate things. There is a very simple tool called s3cmd; install it on any platform. Once you download the images from S3, you can zip them with gzip or zip using a short bash script. Don't forget to configure s3cmd: you need your AWS access key and secret key.
You can use the CakeS3 plugin for CakePHP: https://github.com/fullybaked/CakeS3
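Another option that stays within the PHP SDK (a sketch under my own assumptions, reusing the question's variable names): getObject accepts a SaveAs parameter that writes the object directly to a local path, and ZipArchive can then bundle the saved files. Note that bucket names cannot contain slashes, so the folder portion of $docbucket belongs in the Key rather than the Bucket:
$uploads_dir = WWW_ROOT . 'img/uploads/'; // hypothetical writable local directory
$zip = new ZipArchive();
$zip->open($uploads_dir . 'photos.zip', ZipArchive::CREATE | ZipArchive::OVERWRITE);

foreach ($data as $row) {
    $imageName = basename($row['Photo']['aws_link']);
    $localPath = $uploads_dir . $imageName;

    $client->getObject(array(
        'Bucket' => 'propertiesphotos',                        // bucket name only
        'Key'    => "$prop_id/$documentFolderName/$imageName", // the "folders" live in the key
        'SaveAs' => $localPath,                                // download directly to disk
    ));

    $zip->addFile($localPath, $imageName);
}
$zip->close();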

Download file from Amazon S3 with Laravel

I'm a little unsure how to launch a download of a file from Amazon S3 with Laravel 4. I'm using the AWS SDK:
$result = $s3->getObject(array(
    'Bucket' => $bucket,
    'Key' => 'data.txt',
));

// temp file
$file = tempnam('../uploads', 'download_');
file_put_contents($file, $result['Body']);

$response = Response::download($file, 'test-file.txt');
//unlink($file);
return $response;
The above works, but I'm stuck with saving the file locally. How can I use the result from S3 correctly with Response::download()?
Thanks!
EDIT: I've found I can use $s3->getObjectUrl($bucket, $file, $expiration) to generate an access URL. This could work, but it still doesn't solve the problem above completely.
EDIT2:
$result = $s3->getObject(array(
    'Bucket' => $bucket,
    'Key' => 'data.txt',
));

header('Content-Type: ' . $result['ContentType']);
header('Content-Disposition: attachment; filename="' . $fileName . '"');
header('Content-Length: ' . $result['ContentLength']);
echo $result['Body'];
Still don't think it's ideal, though?
The S3Client::getObject() method allows you to specify headers that S3 should use when it sends the response. The getObjectUrl() method uses the GetObject operation to generate the URL, and can accept any valid GetObject parameters in its last argument. You should be able to do a direct S3-to-user download with your desired headers using a pre-signed URL by doing something like this:
$downloadUrl = $s3->getObjectUrl($bucket, 'data.txt', '+5 minutes', array(
    'ResponseContentDisposition' => 'attachment; filename="' . $fileName . '"',
));
If you want to stream an S3 object from your server, then you should check out the Streaming Amazon S3 Objects From a Web Server article in the AWS Developer Guide.
This question is not answered fully. The original question asked how to save a file from S3 locally on the server itself, to make use of it there.
So, you can use the SaveAs option with the getObject method. You can also specify the version ID if you are using versioning on your bucket and want to make use of it.
$result = $this->client->getObject(array(
    'Bucket' => $bucket_name,
    'Key' => $file_name,
    'SaveAs' => $to_file,
    'VersionId' => $version_id
));
The answer above is somewhat outdated with the new SDK. The following works with the v3 SDK.
$client->registerStreamWrapper();

$result = $client->headObject([
    'Bucket' => $bucket,
    'Key' => $key
]);
$headers = $result->toArray();

header('Content-Type: ' . $headers['ContentType']);
header('Content-Disposition: attachment');

// Stop output buffering
if (ob_get_level()) {
    ob_end_flush();
}
flush();

// stream the output
readfile("s3://{$bucket}/{$key}");
