I took the sample code from the AWS documentation, and I always get this error: IncalculablePayloadException
<?php
$filepath = (isset($_SERVER['HTTPS']) && $_SERVER['HTTPS'] === 'on' ? "https" : "http") . "://$_SERVER[HTTP_HOST]";

require 'aws.phar';

use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;

$s3Key = 'mykey';
$s3Secret = 'mysecretkey';
$region = 'eu-west-3';
$s3Bucket = 'mybucket';
$bucket = $s3Bucket;
$file_Path = $filepath . '/msk.png';
$key = basename($filepath);
echo $file_Path;

try {
    // Create an S3Client
    $s3Client = new S3Client([
        'profile' => 'default',
        'region'  => $region,
        'version' => '2006-03-01',
        // 'debug' => true
    ]);
    $result = $s3Client->putObject([
        'Bucket'     => $bucket,
        'Key'        => $key,
        'SourceFile' => $file_Path,
    ]);
} catch (S3Exception $e) {
    echo $e->getMessage() . "\n";
}
I have checked that my file exists and the path is correct. I have no idea what to do; this should work, it's from their damned doc.
I have tried with other files and different code found elsewhere, but I always get the same error.
Thanks for the help.
EDIT --------- 1
I have changed $file_Path to $file_Path = 'msk.png';
And it's now 'working'. It uploads a file to S3, but not a PNG: an XML file, which ends with
<Code>AccessDenied</Code>
<Message>Access Denied</Message>
Any idea why?
EDIT --------- 2
OK, so now I know why: the file's permissions were not set to allow public read access. How can I set the ACL with putObject?
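For the record, putObject accepts an ACL parameter that takes a canned ACL such as 'public-read'; a minimal sketch (newer buckets block ACLs by default, so this assumes ACLs are enabled on your bucket):
$result = $s3Client->putObject([
    'Bucket'     => $bucket,
    'Key'        => $key,
    'SourceFile' => $file_Path,
    'ACL'        => 'public-read', // canned ACL granting public read access
]);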
The AWS PHP SDK documentation has this description of SourceFile:
The path to a file on disk to use instead of the Body parameter.
But you are not providing a file on disk; you are providing a URL.
You have two options:
Provide the on-disk location of the file, based on where it is on your server. For instance, using __DIR__ to refer to the directory of the current PHP file, it might be $file_Path = __DIR__ . '/../public/msk.png';
Download the content first, and pass it in as a string to the Body parameter instead of SourceFile, e.g. using $body = file_get_contents($file_Path);
The first option makes more sense if it's a file on your own server, but the second one might be useful if you've simplified the example.
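Putting the first option together, a minimal sketch (the __DIR__ . '/../public' location is just an assumption about where msk.png lives on your server):
$file_Path = __DIR__ . '/../public/msk.png'; // a real path on disk, not a URL
$key = basename($file_Path); // 'msk.png' (basing the key on the URL host was another bug)

$result = $s3Client->putObject([
    'Bucket'     => $bucket,
    'Key'        => $key,
    'SourceFile' => $file_Path,
]);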
Related
I want to create a PHP function, triggered from JS, that can:
Retrieve all AWS S3 bucket files from a specific folder (I can also provide the path of each file)
Create a zip containing all the S3 files
Download the zip when the trigger is hit (CTA)
I'm able to download a single file with the getObject method from this example. However, I can't find any information on how to download multiple files and zip them.
I tried the downloadBucket method; however, it downloads all the files into my project directory structure, not as a zip file.
Here is my code:
<?php
// AWS Info + Connection
$IAM_KEY = 'XXXXX';
$IAM_SECRET = 'XXXXX';
$region = 'XXXXX';

require '../../vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;

$FolderToDownload = "FolderToDownload";

// Connection OK
$s3 = S3Client::factory(array(
    'credentials' => array(
        'key'    => $IAM_KEY,
        'secret' => $IAM_SECRET
    ),
    'version' => 'latest',
    'region'  => $region
));

try {
    $bucketName = 'XXXXX';
    $destination = 'NeedAZipFileNotAFolderPath';
    $options = array('debug' => true);
    // Adds all files into the folder: NeedAZipFileNotAFolderPath/Files1.png...Files12.png...
    $s3->downloadBucket($destination, $bucketName, $FolderToDownload, $options);
    // Looking for a zip file that can be downloaded
    // Can I use downloadBucket? Is there a better way to do it?
    // If needed, I can create an array of all file paths that need to be added to the zip file & downloaded
} catch (S3Exception $e) {
    echo $e->getMessage() . PHP_EOL;
}
?>
If someone can help, that would be nice.
Thanks
You can use ZipArchive and list the objects under a prefix (the bucket's "folder"). In this case, I also use registerStreamWrapper to read each key's contents and add them to the created zip file.
Something like that:
<?php
require '/path/to/sdk-or-autoloader';

$s3 = Aws\S3\S3Client::factory(/* sdk config */);
$s3->registerStreamWrapper(); // allows reading objects via the s3:// stream wrapper

$zip = new ZipArchive;
$zip->open('/path/to/zip-you-are-creating.zip', ZipArchive::CREATE);

$bucket = 'your-bucket';
$prefix = 'your-prefix-folder'; // ex.: 'image/test/folder/'

$objects = $s3->getIterator('ListObjects', array(
    'Bucket' => $bucket,
    'Prefix' => $prefix
));

foreach ($objects as $object) {
    $contents = file_get_contents("s3://{$bucket}/{$object['Key']}"); // get file
    $zip->addFromString($object['Key'], $contents); // add file contents to zip
}

$zip->close();

// Download the zip file
header("Content-Description: File Transfer");
header("Content-Type: application/octet-stream");
// send only the file's base name, not the full server path
header("Content-Disposition: attachment; filename=\"" . basename('/path/to/zip-you-are-creating.zip') . "\"");
readfile('/path/to/zip-you-are-creating.zip');
?>
You can also delete the zip file after the download, if you prefer.
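For example, immediately after the readfile() call:
unlink('/path/to/zip-you-are-creating.zip'); // remove the temporary zip once it has been sent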
This should work in most cases, but I don't want to save the file on the server, and I don't see how I can download multiple objects directly from the AWS S3 bucket through the browser without saving a file on my server. If anyone knows, please share with us.
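One option, as a sketch: stream the zip to the browser as it is built, using the third-party maennchen/zipstream-php package (the v2-style API is assumed here; check the version you install, since the API changed between major versions):
require 'vendor/autoload.php';

use Aws\S3\S3Client;
use ZipStream\Option\Archive;
use ZipStream\ZipStream;

$s3 = S3Client::factory(/* sdk config */);
$s3->registerStreamWrapper();

$options = new Archive();
$options->setSendHttpHeaders(true); // let ZipStream emit the download headers

$zip = new ZipStream('download.zip', $options);

$objects = $s3->getIterator('ListObjects', array(
    'Bucket' => 'your-bucket',
    'Prefix' => 'your-prefix-folder'
));

foreach ($objects as $object) {
    // stream each object from S3 straight into a zip entry; nothing is written to disk
    $stream = fopen("s3://your-bucket/{$object['Key']}", 'r');
    $zip->addFileFromStream($object['Key'], $stream);
    fclose($stream);
}

$zip->finish(); // writes the central directory and ends the output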
I have a non-versioned S3 bucket (VersionId is null for all files); the files have different names.
My current code is:
$path = $this->key . '/primary/pdfs/' . $id . '/';
$result = $this->s3->listObjects(['Bucket' => $this->bucket, 'Prefix' => $path])->toArray();

// get the last object from S3
$object = end($result['Contents']);
$key = $object['Key'];

$file = $this->s3->getObject([
    'Bucket' => $this->bucket,
    'Key'    => $key
]);

// download the file
header('Content-Type: application/pdf');
echo $file['Body'];
The above is incorrect, as it gives the last file in key order, which is not the most recently modified file.
Do I need to use the API call below? If so, how do I use it?
$result = $this->s3->listObjectVersions(['Bucket' => $this->bucket, 'Prefix' => $path])->toArray();
Since the VersionId of all files is null, there should be only one version of each file in the bucket.
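For what it's worth, a sketch of one way to do this without versioning: listObjects already returns a LastModified timestamp per object, so you can sort on it (this assumes the prefix holds at most one page of results, i.e. under 1000 keys):
$result = $this->s3->listObjects(['Bucket' => $this->bucket, 'Prefix' => $path])->toArray();

$contents = $result['Contents'];
// sort ascending by modification time; LastModified is a DateTime-like object
usort($contents, function ($a, $b) {
    return $a['LastModified'] <=> $b['LastModified'];
});

$object = end($contents); // now genuinely the most recently modified object
$key = $object['Key'];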
So, I've been trying to get this to work for the past couple of hours, but I can't figure it out. The goal is to pull the converted mp4 file from gfycat and upload that file to the Amazon S3 bucket.
gfycat is returning a JSON object properly, and $result->mp4Url is returning a correct URL to the mp4 file. I keep getting errors such as "object expected, string given". Any ideas? Thanks.
// convert the gif to video format using gfycat
$response = file_get_contents("http://upload.gfycat.com/transcode/" . $key .
    "?fetchUrl=" . urlencode($upload->getUrl('g')));
$result = json_decode($response);
$result_mp4 = file_get_contents($result->mp4Url);

// save the converted file to AWS S3 /i
$s3->putObject(array(
    'Bucket'     => getenv('S3_BUCKET'),
    'Key'        => 'i/' . $upload->id64 . '.mp4',
    'SourceFile' => $result_mp4,
));
var_dump($response) yields:
string '{
"gfyId":"vigorousspeedyinexpectatumpleco",
"gfyName":"VigorousSpeedyInexpectatumpleco",
"gfyNumber":"884853904",
"userName":"anonymous",
"width":250,
"height":250,
"frameRate":11,
"numFrames":67,
"mp4Url":"http:\/\/zippy.gfycat.com\/VigorousSpeedyInexpectatumpleco.mp4",
"webmUrl":"http:\/\/zippy.gfycat.com\/VigorousSpeedyInexpectatumpleco.webm",
"gifUrl":"http:\/\/fat.gfycat.com\/VigorousSpeedyInexpectatumpleco.gif",
"gifSize":1364050,
"mp4Size":240833,
"webmSize":220389,
"createDate":"1388777040",
"views":"205",
"title":'... (length=851)
Using json_decode() on it also yields similar results.
You are mixing up the 'SourceFile' parameter (which accepts a file path) with the 'Body' parameter (which accepts raw data). See Uploading Objects in the AWS SDK for PHP User Guide for more examples.
Here are 2 options that should work:
Option 1 (Using SourceFile)
// convert the gif to video format using gfycat
$response = file_get_contents("http://upload.gfycat.com/transcode/" . $key .
    "?fetchUrl=" . urlencode($upload->getUrl('g')));
$result = json_decode($response);

// save the converted file to AWS S3 /i
$s3->putObject(array(
    'Bucket'     => getenv('S3_BUCKET'),
    'Key'        => 'i/' . $upload->id64 . '.mp4',
    'SourceFile' => $result->mp4Url,
));
Option 2 (Using Body)
// convert the gif to video format using gfycat
$response = file_get_contents("http://upload.gfycat.com/transcode/" . $key .
    "?fetchUrl=" . urlencode($upload->getUrl('g')));
$result = json_decode($response);
$result_mp4 = file_get_contents($result->mp4Url);

// save the converted file to AWS S3 /i
$s3->putObject(array(
    'Bucket' => getenv('S3_BUCKET'),
    'Key'    => 'i/' . $upload->id64 . '.mp4',
    'Body'   => $result_mp4,
));
Option 1 is better though, because the SDK will use a file handle to the mp4 file instead of loading the entire thing into memory (like file_get_contents does).
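One caveat, as a sketch: passing a URL to SourceFile relies on PHP's allow_url_fopen and on the SDK being able to size the remote stream. A variant that downloads to a temp file first sidesteps both (the tempnam() usage here is my assumption, not from the answer):
// download to a local temp file so SourceFile gets a real on-disk path
$tmp = tempnam(sys_get_temp_dir(), 'mp4_');
file_put_contents($tmp, file_get_contents($result->mp4Url));

$s3->putObject(array(
    'Bucket'     => getenv('S3_BUCKET'),
    'Key'        => 'i/' . $upload->id64 . '.mp4',
    'SourceFile' => $tmp,
));

unlink($tmp); // clean up the temp file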
I need to download images from an AWS bucket into a local directory and zip-download them.
I have tried my code, but I can't figure out how to copy the images into my local directory.
Here is my function:
public function commomUpload($contentId, $prop_id)
{
    $client = S3Client::factory(array(
        'key'    => 'my key',
        'secret' => '----secret-----',
    ));

    $documentFolderName = 'cms_photos';
    $docbucket = "propertiesphotos/$prop_id/$documentFolderName";

    $data = $this->Photo->find('all', array('conditions' => array('Photo.content_id' => $contentId)));
    //pr($data);die;

    $split_point = '/';
    foreach ($data as $row) {
        $string = $row['Photo']['aws_link'];
        // take the part after the last '/' as the image name
        $result = array_map('strrev', explode($split_point, strrev($string)));
        $imageName = $result[0];

        $result = $client->getObject(array(
            'Bucket' => $docbucket,
            'Key'    => $imageName
        ));

        $uploads_dir = '/img/uploads/';
        if (!copy($result, $uploads_dir)) {
            echo "failed to copy $result...\n";
        }
        //move_uploaded_file($imageName, "$uploads_dir/$imageName");
    }
}
Don't complicate things. There is a very simple tool called s3cmd; it installs on any platform. Click here to learn more. Once you've downloaded the images from S3, you can gzip or zip them with a simple bash script. Don't forget to configure s3cmd: you'll need your AWS access key and secret key.
You can use the Amazon S3 CakePHP plugin: https://github.com/fullybaked/CakeS3
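Alternatively, staying with the SDK: getObject accepts a SaveAs parameter that writes the object straight to a local path. A minimal sketch of the loop body (the corrected bucket/prefix split is my assumption, since a bucket name cannot contain slashes):
$result = $client->getObject(array(
    'Bucket' => 'propertiesphotos',                        // bucket name only
    'Key'    => "$prop_id/$documentFolderName/$imageName", // the folder path belongs in the key
    'SaveAs' => $uploads_dir . $imageName,                 // write directly into the local directory
));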
I'm a little unsure as to how to launch a download of a file from Amazon S3 with Laravel 4. I'm using the AWS SDK for PHP.
$result = $s3->getObject(array(
    'Bucket' => $bucket,
    'Key'    => 'data.txt',
));

// temp file
$file = tempnam('../uploads', 'download_');
file_put_contents($file, $result['Body']);

$response = Response::download($file, 'test-file.txt');
//unlink($file);
return $response;
The above works, but I'm stuck with saving the file locally first. How can I use the result from S3 directly with Response::download()?
Thanks!
EDIT: I've found I can use $s3->getObjectUrl($bucket, $file, $expiration) to generate an access URL. This could work, but it still doesn't solve the problem above completely.
EDIT2:
$result = $s3->getObject(array(
    'Bucket' => $bucket,
    'Key'    => 'data.txt',
));

header('Content-Type: ' . $result['ContentType']);
header('Content-Disposition: attachment; filename="' . $fileName . '"');
header('Content-Length: ' . $result['ContentLength']);
echo $result['Body'];
I still don't think this is ideal, though?
The S3Client::getObject() method allows you to specify headers that S3 should use when it sends the response. The getObjectUrl() method uses the GetObject operation to generate the URL, and can accept any valid GetObject parameters in its last argument. You should be able to do a direct S3-to-user download with your desired headers using a pre-signed URL by doing something like this:
$downloadUrl = $s3->getObjectUrl($bucket, 'data.txt', '+5 minutes', array(
'ResponseContentDisposition' => 'attachment; filename="' . $fileName . '"',
));
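You could then send the user straight to that URL from your controller, e.g. (Redirect::away() is assumed to be available in your Laravel 4 version):
return Redirect::away($downloadUrl); // the browser downloads directly from S3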
If you want to stream an S3 object from your server, then you should check out the Streaming Amazon S3 Objects From a Web Server article in the AWS Developer Guide.
This question is not fully answered. Initially it asked how to save a file from S3 locally on the server itself, to make use of it.
For that, you can use the SaveAs option with the getObject method. You can also specify the version ID if you are using versioning on your bucket and want to make use of it.
$result = $this->client->getObject(array(
    'Bucket'    => $bucket_name,
    'Key'       => $file_name,
    'SaveAs'    => $to_file,
    'VersionId' => $version_id
));
That answer is somewhat outdated with the new SDK. The following works with the v3 SDK.
$client->registerStreamWrapper(); // enables the s3:// stream wrapper

$result = $client->headObject([
    'Bucket' => $bucket,
    'Key'    => $key
]);
$headers = $result->toArray();

header('Content-Type: ' . $headers['ContentType']);
header('Content-Disposition: attachment');

// stop output buffering
if (ob_get_level()) {
    ob_end_flush();
}
flush();

// stream the output
readfile("s3://{$bucket}/{$key}");
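A small addition to the above: the headObject result also includes ContentLength, which you can forward before streaming so the browser can show download progress:
header('Content-Length: ' . $headers['ContentLength']); // size taken from the headObject call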