I'm using the AWS PHP SDK to upload a file to S3 and then transcode it with Elastic Transcoder.
On the first pass everything works fine; the putObject command overwrites the old file (always named the same) on S3:
$s3->putObject([
    'Bucket' => Config::get('app.aws.S3.bucket'),
    'Key' => $key,
    'SourceFile' => $path,
    'Metadata' => [
        'title' => Input::get('title')
    ]
]);
However, when creating a second transcoding job, I get the error:
The specified object could not be saved in the specified bucket because an object by that name already exists
The transcoder role has full S3 access. Is there a way around this, or will I have to delete the files using the SDK every time before they're transcoded?
My create job:
$result = $transcoder->createJob([
    'PipelineId' => Config::get('app.aws.ElasticTranscoder.PipelineId'),
    'Input' => [
        'Key' => $key
    ],
    'Output' => [
        'Key' => 'videos/'.$user.'/'.$output_key,
        'ThumbnailPattern' => 'videos/'.$user.'/thumb-{count}',
        'Rotate' => '0',
        'PresetId' => Config::get('app.aws.ElasticTranscoder.PresetId')
    ],
]);
The Amazon Elastic Transcoder service documents that this is the expected behavior here: http://docs.aws.amazon.com/elastictranscoder/latest/developerguide/job-settings.html#job-settings-output-key.
If your workflow requires you to overwrite the same key, then it sounds like you should have the job output somewhere unique and then issue an S3 CopyObject operation to overwrite the older file.
If you enable versioning on the S3 bucket, then Amazon Elastic Transcoder will be happy overwriting the same key with the transcoded version.
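A minimal sketch of the copy-then-overwrite approach with SDK v3 ($unique_output_key is a made-up name for the job's unique output, and the copy should only run once the job has actually finished, e.g. on the pipeline's SNS completion notification):

// Copy the uniquely named output over the fixed key...
$s3->copyObject([
    'Bucket'     => Config::get('app.aws.S3.bucket'),
    'Key'        => 'videos/'.$user.'/'.$output_key,
    'CopySource' => Config::get('app.aws.S3.bucket').'/videos/'.$user.'/'.$unique_output_key, // URL-encode the key if it contains special characters
]);

// ...then remove the uniquely named copy
$s3->deleteObject([
    'Bucket' => Config::get('app.aws.S3.bucket'),
    'Key'    => 'videos/'.$user.'/'.$unique_output_key,
]);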
I can think of two ways to implement it:
Create two buckets: one for temporary storage (where the file is uploaded) and another where the transcoded file is placed. After transcoding, once the new file is created, you can delete the temp file.
Use a single bucket and upload the file with some suffix/prefix. Create the transcoded file in the same bucket, removing the prefix/suffix you used for the temp name (see the sketch after this list).
In both cases you can use a Lambda function with S3 notifications for automated deletion of the uploaded files.
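For illustration, a rough sketch of the second approach with SDK v3 (bucket, pipeline, and preset values are placeholders; the cleanup is left to the Lambda mentioned above):

// Upload the original under a temporary prefix...
$s3->putObject([
    'Bucket'     => 'my-bucket',
    'Key'        => 'uploads/'.$filename, // temp name
    'SourceFile' => $path,
]);

// ...and have Elastic Transcoder write the final name without the prefix
$transcoder->createJob([
    'PipelineId' => $pipelineId,
    'Input'      => ['Key' => 'uploads/'.$filename],
    'Output'     => ['Key' => $filename, 'PresetId' => $presetId],
]);

// Delete 'uploads/'.$filename only after the job completes, e.g. from a
// Lambda function triggered by the s3:ObjectCreated notification on the final key.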
I have an s3 bucket that I am using for file storage with my PHP/Laravel application.
I have a certain case where I need to manually add a bunch of PDFs to a bucket under a given folder/prefix. I then need to run a CSV import into my DB; part of the import is the PDF filenames.
For each iteration through the CSV import I want to use the known filename to link the imported item to the PDF in S3.
I have been messing with this and have the AWS S3 SDK installed but I don't see a way of getting the object back from S3 based on a filename.
I am running:
PHP 7.2
AWS SDK v3.9 via Composer
What I am currently testing is:
$results = $s3->getPaginator('ListObjects', [
    'Bucket' => env('S3BUCKET'),
    'Prefix' => 'folder-name/'
]);

foreach ($results as $result) {
    foreach ($result['Contents'] as $object) {
        $doc = $s3->getObject([
            'Bucket' => env('S3BUCKET'),
            'Key' => $object['Key']
        ]);
        dd($doc);
    }
}
The issue here is that no filename is listed so I can't run a compare of any kind.
TIA
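One thing worth noting: the Key returned by ListObjects is the object's full path within the bucket, so PHP's basename() can pull out just the file name for the compare. A sketch, assuming $csvFilename holds the name from the CSV row:

foreach ($results as $result) {
    foreach ($result['Contents'] ?? [] as $object) {
        // 'folder-name/some-doc.pdf' -> 'some-doc.pdf'
        if (basename($object['Key']) === $csvFilename) {
            // found the PDF matching the imported row
        }
    }
}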
Laravel 5.5 app. I need to retrieve images of driver's licenses from Amazon S3 (already working) and then upload them to Stripe using their API for identity verification (not working).
Stripe's documents give this example:
\Stripe\Stripe::setApiKey(PLATFORM_SECRET_KEY);

\Stripe\FileUpload::create(
    array(
        "purpose" => "identity_document",
        "file" => fopen('/path/to/a/file.jpg', 'r')
    ),
    array("stripe_account" => CONNECTED_STRIPE_ACCOUNT_ID)
);
However, I am not retrieving my files using fopen().
When I retrieve my image from Amazon S3 (using my own custom methods), I end up with an instance of Intervention\Image -- essentially, Image::make($imageFromS3) -- and I don't know how to convert this to the equivalent of the call to fopen('/path/to/a/file.jpg', 'r'). I have tried the following:
$image->stream()
$image->stream()->__toString()
$image->stream('data-url')
$image->stream('data-url')->__toString()
I have also tried skipping Intervention Image and just using Laravel's storage retrieval, for example:
$image = Storage::disk('s3')->get('path/to/file.jpg');
All of these approaches result in getting an Invalid hash exception from Stripe.
What is the proper way to get a file from S3 and convert it to the equivalent of the fopen() call?
If the file on S3 is public, you could just pass the URL to Stripe:
\Stripe\FileUpload::create(
    array(
        "purpose" => "identity_document",
        "file" => fopen(Storage::disk('s3')->url($file), 'r'),
    ),
    array("stripe_account" => CONNECTED_STRIPE_ACCOUNT_ID)
);
Note that this requires allow_url_fopen to be turned on in your php.ini file.
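That is, this line in php.ini (your PHP process may need a restart for it to take effect):

allow_url_fopen = On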
If not, then you could grab the file from S3 first, write it to a temporary file, and then use the fopen() method that the Stripe documentation speaks of:
// Retrieve file from S3...
$image = Storage::disk('s3')->get($file);

// Create temporary file with image content...
$tmp = tmpfile();
fwrite($tmp, $image);

// Reset file pointer to first byte so that we can read from it from the beginning...
fseek($tmp, 0);

// Upload temporary file to Stripe...
\Stripe\FileUpload::create(
    array(
        "purpose" => "identity_document",
        "file" => $tmp
    ),
    array("stripe_account" => CONNECTED_STRIPE_ACCOUNT_ID)
);

// Close temporary file and remove it...
fclose($tmp);
See https://secure.php.net/manual/en/function.tmpfile.php for more information.
I am using "aws-sdk-php-laravel" package in my laravel 5 project. which used amazonS3 for storage. But what is OBJECT_KEY and where to get it.
$s3 = App::make('aws')->createClient('s3');

$s3->putObject(array(
    'Bucket' => 'YOUR_BUCKET',
    'Key' => 'YOUR_OBJECT_KEY',
    'SourceFile' => '/the/path/to/the/file/you/are/uploading.ext',
));
It's the name of your file on S3.
You can check it here.
It's the name of the file on S3. Just to make things a little easier to understand, don't think of S3 as a filesystem; think of it as a key-value data store.
So you have a key (the 'file name') and the data associated with it. In this instance your 'SourceFile' value is the location of a file on your local filesystem (not S3). All content in that source file will be uploaded against the key name.
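For example (bucket and key are made up), the following uploads a local file so that it is addressable as s3://my-bucket/videos/intro.mp4:

$s3->putObject(array(
    'Bucket'     => 'my-bucket',
    'Key'        => 'videos/intro.mp4',   // the object key: the "file name" within the bucket
    'SourceFile' => '/home/me/intro.mp4', // local file whose contents get uploaded
));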
I have a website here http://www.voclr.it/acapellas/ and my files are hosted on my Amazon S3 account, but when a visitor goes to download an MP3 from my website it forces them to stream it, whereas what I actually want is for it to download to their desktop.
I have disabled S3 on the website for now, so the downloads are working fine, but I really want S3 to serve the MP3s.
Basically, you have to tell S3 to override the content-disposition header of the response. You can do that by appending the response-content-disposition query string parameter to the S3 file url and setting it to the desired content-disposition value. To force download try:
<url>&response-content-disposition="attachment; filename=somefilename"
You can find this in the S3 docs. For information on the values that the content-disposition header can assume you can look here.
As an additional information this also works with Google Cloud Storage.
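If the objects are private, the same override can be baked into a pre-signed URL; a sketch with the AWS PHP SDK v3 (bucket and key are placeholders):

// Build a GetObject command carrying the content-disposition override...
$cmd = $s3->getCommand('GetObject', [
    'Bucket'                     => 'my-bucket',
    'Key'                        => 'acapellas/song.mp3',
    'ResponseContentDisposition' => 'attachment; filename="song.mp3"',
]);

// ...and sign it; S3 only honours the response-* overrides on signed requests
$url = (string) $s3->createPresignedRequest($cmd, '+20 minutes')->getUri();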
require_once '../sdk-1.4.2.1/sdk.class.php';

// Instantiate the class
$s3 = new AmazonS3();

// Copy object over itself and modify headers
$response = $s3->copy_object(
    array( // Source
        'bucket' => 'your_bucket',
        'filename' => 'Key/To/YourFile'
    ),
    array( // Destination
        'bucket' => 'your_bucket',
        'filename' => 'Key/To/YourFile'
    ),
    array( // Optional parameters
        'headers' => array(
            'Content-Type' => 'application/octet-stream',
            'Content-Disposition' => 'attachment'
        )
    )
);

// Success?
var_dump($response->isOK());
I created a solution for doing this via CloudFront Functions (no PHP required, since it all runs at AWS, by linking to the .mp3 file on CloudFront with a ?title=TitleGoesHere query string to force a downloaded file with that filename). This is a fairly recent way of doing things (as of August 2022). I documented my function and how I set up my "S3 bucket behind a CloudFront distribution" here: https://stackoverflow.com/a/73451456/19823883
The problem I have is that I need the Content-Disposition: attachment header to be present on EVERY file that hits my bucket.
In WordPress, I can just use .htaccess to cover the filetypes in question (videos), but those rules do not extend to my S3 downloads, which browsers simply try to open instead of download.
I need an automated/default solution, since I am not the only one who uploads these files (our staff upload through WordPress, and the uploads are all stored in our S3 bucket). So using CloudBerry or other S3 browsers is not useful for this situation. I can't adjust the files on a per-file basis (the uploads are too frequent).
Is there a way to do this?
(Other information: I'm using the "Amazon S3 and Cloudfront" plugin on WordPress that is responsible for linking the two together. Unfortunately, the site is not public, so I cannot link to it.)
Unfortunately there is no way to set this for an entire bucket in S3, and CloudFront can only set cache headers.
But you can set the Content-Disposition parameter when uploading files to S3.
For existing files you must change the header, so loop through every object in the bucket and copy each one onto itself using the new headers (a sketch of that loop follows).
For now, please post the code that uploads the files to S3.
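For completeness, a sketch of that copy-in-place loop with the AWS PHP SDK v3 (the bucket name is a placeholder):

$pages = $s3->getPaginator('ListObjectsV2', ['Bucket' => 'my-bucket']);

foreach ($pages as $page) {
    foreach ($page['Contents'] ?? [] as $object) {
        $s3->copyObject([
            'Bucket'             => 'my-bucket',
            'Key'                => $object['Key'],
            'CopySource'         => 'my-bucket/'.$object['Key'], // URL-encode the key if it contains special characters
            'ContentDisposition' => 'attachment',
            'MetadataDirective'  => 'REPLACE', // required so the new header replaces the old one
        ]);
    }
}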
First, you need to locate the code that puts the object in the bucket.
You can use Notepad++ to search for "putObject" within the PHP files of whatever plugin you are using.
An example code from another WP plugin that stores files to S3 is as follows:
$this->s3->putObject( array(
    'Bucket' => $bucket,
    'Key' => $file['name'],
    'SourceFile' => $file['file'],
) );
Now simply add 'ContentDisposition' => 'attachment', like so:
$this->s3->putObject( array(
    'Bucket' => $bucket,
    'Key' => $file['name'],
    'SourceFile' => $file['file'],
    'ContentDisposition' => 'attachment',
) );
That's it :)
Yes, you can set a default Content-Disposition header for each and every upcoming file upload in your S3 bucket using Bucket Explorer's Bucket Default feature.
For existing files, you can use the Update Metadata option, which updates the metadata on every file in your bucket in batch.
You just need to:
Select Key as: Content-Disposition
Add Value as: attachment;filename={$file_name_without_path_$$}
Then update metadata on the files.
See this page to set Content-Disposition on your file.
More references:
http://www.bucketexplorer.com/documentation/amazon-s3--metadata-http-header-bucket-default-metadata.html
http://www.bucketexplorer.com/documentation/amazon-s3--how-to-manage-http-headers-for-amazon-s3-objects.html
http://www.bucketexplorer.com/documentation/amazon-s3--metadata-http-header-update-custom-metadata.html
Thanks