I'm trying to put a remote object to Amazon S3 using this code:
$s3 = Aws\S3\S3Client::factory();
$bucket = getenv('S3_BUCKET') ?: die('No "S3_BUCKET" config var found in env!');
$s3->putObject(array(
    'Bucket' => $bucket,
    'Key'    => 'myvideo.mp4',
    'Body'   => 'http://example.fr/video.mp4'
));
This code runs, but it doesn't upload the full-size file.
This doesn't do what you want:
'Body' => 'http://example.fr/video.mp4'
This sets the object body to the literal URL string, not to the content fetched from that URL.
To upload a "remote" object, you have to download it first. There is no built-in capability in S3 to fetch content from a remote URL.
I'm using the NeutrinoAPI for watermarking images (https://www.neutrinoapi.com/api/image-watermark/).
That works just fine, but now I need to upload the watermarked image to Amazon S3, for which I use this code (https://gist.github.com/keithweaver/70eb06d98b008113ce97f6148fbea83d).
Since the first API's response is an image, I don't know how to pass it to the AWS API. This is how I'm doing it, but it keeps uploading a 0kb file:
// Add it to S3
try {
    // Uploaded:
    $file = $_FILES[$json];
    $response = $s3->putObject(
        array(
            'Bucket'     => $bucketName,
            'Key'        => $keyName,
            'ACL'        => 'public-read',
            'SourceFile' => $file
        )
    );
When I use file_put_contents($filename, $json); it works and writes out the image I want, but how do I get that content into the Amazon $file?
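A possible fix, as a sketch: the SDK's 'SourceFile' option expects a file path (not a $_FILES entry), and 'Body' accepts raw bytes. Assuming $json holds the raw image bytes returned by the watermark API:

// Option 1: write to disk first, then point SourceFile at the path
file_put_contents($filename, $json);
$s3->putObject(array(
    'Bucket'     => $bucketName,
    'Key'        => $keyName,
    'ACL'        => 'public-read',
    'SourceFile' => $filename, // a file path, not a $_FILES entry
));

// Option 2: skip the temp file and pass the bytes directly as the Body
$s3->putObject(array(
    'Bucket' => $bucketName,
    'Key'    => $keyName,
    'ACL'    => 'public-read',
    'Body'   => $json,
));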
I am using the "aws-sdk-php-laravel" package in my Laravel 5 project, which uses Amazon S3 for storage. But what is OBJECT_KEY, and where do I get it?
$s3 = App::make('aws')->createClient('s3');
$s3->putObject(array(
    'Bucket'     => 'YOUR_BUCKET',
    'Key'        => 'YOUR_OBJECT_KEY',
    'SourceFile' => '/the/path/to/the/file/you/are/uploading.ext',
));
It's the name of your file on S3.
It's the name of the file on S3. Just to make things a little easier to understand, don't think of S3 as a filesystem. Think of it as a key-value data store.
So you have a key (the 'file name') and the data associated with it. In this instance the data comes from your 'SourceFile' value, which is the path to a file on your local filesystem (not S3). All the content of that source file will be uploaded under the key name.
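For example, here is a minimal sketch (bucket name and paths are made up) showing how the key becomes the object's address:

$s3->putObject(array(
    'Bucket'     => 'my-bucket',               // hypothetical bucket
    'Key'        => 'uploads/2016/report.pdf', // the "file name" on S3
    'SourceFile' => '/local/path/report.pdf',  // local data stored under that key
));
// The object is then addressable as:
// https://my-bucket.s3.amazonaws.com/uploads/2016/report.pdf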
I have a website here http://www.voclr.it/acapellas/. My files are hosted on my Amazon S3 account, but when a visitor goes to download an MP3 from my website it forces them to stream it, when what I actually want is for it to download to their desktop.
I have disabled S3 on the website for now, so the downloads are working fine, but really I want S3 to serve the MP3s.
Basically, you have to tell S3 to override the content-disposition header of the response. You can do that by appending the response-content-disposition query string parameter to the S3 file URL and setting it to the desired content-disposition value. To force a download, try:
<url>&response-content-disposition="attachment; filename=somefilename"
You can find this in the S3 docs. For information on the values the content-disposition header can take, look here.
As additional information, this also works with Google Cloud Storage.
require_once '../sdk-1.4.2.1/sdk.class.php';

// Instantiate the class
$s3 = new AmazonS3();

// Copy object over itself and modify headers
$response = $s3->copy_object(
    array( // Source
        'bucket'   => 'your_bucket',
        'filename' => 'Key/To/YourFile'
    ),
    array( // Destination
        'bucket'   => 'your_bucket',
        'filename' => 'Key/To/YourFile'
    ),
    array( // Optional parameters
        'headers' => array(
            'Content-Type'        => 'application/octet-stream',
            'Content-Disposition' => 'attachment'
        )
    )
);

// Success?
var_dump($response->isOK());
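If you'd rather not rewrite the object's stored headers, the query-string override described above can also be applied per request via a pre-signed URL. A sketch using the newer AWS SDK for PHP v3 (bucket, key, region, and expiry are assumptions):

// Build a GetObject request that overrides content-disposition,
// then pre-sign it so visitors can use the link directly.
$s3v3 = new Aws\S3\S3Client(array('version' => 'latest', 'region' => 'us-east-1'));
$cmd = $s3v3->getCommand('GetObject', array(
    'Bucket' => 'your_bucket',
    'Key'    => 'acapellas/track.mp3', // hypothetical key
    'ResponseContentDisposition' => 'attachment; filename="track.mp3"',
));
$url = (string) $s3v3->createPresignedRequest($cmd, '+10 minutes')->getUri();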
I created a solution for doing this via CloudFront Functions (no PHP required, since it all runs at AWS: you link to the .mp3 file on CloudFront with a ?title=TitleGoesHere query string to force a downloaded file with that filename). This is a fairly recent way of doing things (as of August 2022). I documented my function and how I set up my S3 bucket behind a CloudFront distribution here: https://stackoverflow.com/a/73451456/19823883
I'm using the AWS PHP SDK to upload a file to S3, then transcode it with Elastic Transcoder.
On the first pass everything works fine; the putObject command overwrites the old file (always named the same) on S3:
$s3->putObject([
    'Bucket'     => Config::get('app.aws.S3.bucket'),
    'Key'        => $key,
    'SourceFile' => $path,
    'Metadata'   => [
        'title' => Input::get('title')
    ]
]);
However, when creating a second transcoding job, I get the error:
The specified object could not be saved in the specified bucket because an object by that name already exists
The transcoder role has full S3 access. Is there a way around this, or will I have to delete the files using the SDK every time before they're transcoded?
My create job:
$result = $transcoder->createJob([
    'PipelineId' => Config::get('app.aws.ElasticTranscoder.PipelineId'),
    'Input' => [
        'Key' => $key
    ],
    'Output' => [
        'Key'              => 'videos/' . $user . '/' . $output_key,
        'ThumbnailPattern' => 'videos/' . $user . '/thumb-{count}',
        'Rotate'           => '0',
        'PresetId'         => Config::get('app.aws.ElasticTranscoder.PresetId')
    ],
]);
The Amazon Elastic Transcoder service documents that this is the expected behavior here: http://docs.aws.amazon.com/elastictranscoder/latest/developerguide/job-settings.html#job-settings-output-key.
If your workflow requires you to overwrite the same key, then it sounds like you should have the job output somewhere unique and then issue an S3 CopyObject operation to overwrite the older file.
If you enable versioning on the S3 bucket, then Amazon Elastic Transcoder will be happy overwriting the same key with the transcoded version.
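Enabling versioning is a one-time call. A sketch with the same S3 client as above (the bucket name is an assumption):

// Turn on versioning so repeated jobs can write to the same key
$s3->putBucketVersioning([
    'Bucket' => Config::get('app.aws.S3.bucket'),
    'VersioningConfiguration' => ['Status' => 'Enabled'],
]);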
I can think of two ways to implement it:
Create two buckets: one for temp file storage (where it's uploaded) and another where the transcoded file is placed. After transcoding, when the new file is created, you can delete the temp file.
Use a single bucket and upload the file with some suffix/prefix. Create the transcoded file in the same bucket, removing the prefix/suffix you used for the temp name (see the sketch below).
In both cases, for automated deletion of the uploaded files you can use a Lambda function with S3 notifications.
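A sketch of the second option, reusing the variables from the question ($path, $user, $output_key); the temp prefix is made up:

// Upload under a temp prefix that the transcoder output never uses
$tempKey  = 'uploads/tmp-' . $output_key;
$finalKey = 'videos/' . $user . '/' . $output_key;

$s3->putObject([
    'Bucket'     => Config::get('app.aws.S3.bucket'),
    'Key'        => $tempKey,
    'SourceFile' => $path,
]);

$transcoder->createJob([
    'PipelineId' => Config::get('app.aws.ElasticTranscoder.PipelineId'),
    'Input'  => ['Key' => $tempKey],
    'Output' => [
        'Key'      => $finalKey, // never collides with the temp name
        'PresetId' => Config::get('app.aws.ElasticTranscoder.PresetId'),
    ],
]);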
I'm trying to upload files to my bucket using a piece of code like this:
$s3 = new AmazonS3();
$bucket = 'host.domain.ext'; // My bucket name matches my host's CNAME

// Open a file resource
$file_resource = fopen('picture.jpg', 'r');

// Upload the file
$response = $s3->create_object($bucket, 'picture.jpg', array(
    'fileUpload' => $file_resource,
    'acl'        => AmazonS3::ACL_PUBLIC,
    'headers'    => array(
        'Cache-Control' => 'public, max-age=86400',
    ),
));
But I get the "NoSuchBucket" error. The weird thing is that when I query my S3 account to retrieve the list of buckets, I get the exact same name I'm using for uploading: host.domain.ext.
I tried creating a different bucket with no dots in the name and it works perfectly... yes, my problem is the bucket name, but I need to keep the FQDN convention in order to map it as a static file server on the Internet. Does anyone know of any escaping I can apply to the bucket name before sending it to the API to keep the dots from breaking the request? I've already tried regular expressions and got the same result.
I'd try using path-style URLs, as suggested in the comments of a related AWS forum thread...
$s3 = new AmazonS3();
$s3->path_style = true;
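In the newer AWS SDK for PHP v3 the equivalent is the use_path_style_endpoint client option. A sketch (the region is an assumption):

// Path-style addressing keeps dotted bucket names out of the TLS hostname
$s3 = new Aws\S3\S3Client(array(
    'version'                 => 'latest',
    'region'                  => 'us-east-1',
    'use_path_style_endpoint' => true,
));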