I have used the code below:
//include the S3 class
if (!class_exists('S3')) require_once('S3.php');
//AWS access info
if (!defined('awsAccessKey')) define('awsAccessKey', '****************');
if (!defined('awsSecretKey')) define('awsSecretKey', '**************************');
//instantiate the class
$s3 = new S3(awsAccessKey, awsSecretKey);
$s3->putBucket($bucket, S3::ACL_PUBLIC_READ);
Where do I put the folder name in this?
S3 has no concept of folders. What you see as a folder in the S3 console is just an illusion of a folder.
Since an object key can contain /, you can simulate a folder hierarchy (e.g. images/myphoto.jpg), but the keyspace is still flat.
The S3 console simulates the folder hierarchy for you, but this notion is key-related, so you can't use putBucket for it; use putObject with a proper key:
From the AWS documentation:
use Aws\S3\S3Client;

$bucket = '*** Your Bucket Name ***';
$keyname = 'images/photo.jpg';
// $filepath should be absolute path to a file on disk
$filepath = '*** Your File Path ***';

// Instantiate the client.
$s3 = S3Client::factory();

// Upload a file.
$result = $s3->putObject(array(
    'Bucket'       => $bucket,
    'Key'          => $keyname,
    'SourceFile'   => $filepath,
    'ContentType'  => 'text/plain',
    'ACL'          => 'public-read',
    'StorageClass' => 'REDUCED_REDUNDANCY',
    'Metadata'     => array(
        'param1' => 'value 1',
        'param2' => 'value 2'
    )
));

echo $result['ObjectURL'];
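If you want an empty "folder" to appear in the console before any file exists under it, note that the console's "Create folder" button simply uploads a zero-byte object whose key ends in a slash. A minimal sketch with the same client as above (the key name is illustrative):

// Hypothetical empty-"folder" placeholder: a zero-byte object whose key
// ends in '/' is what the console displays as a folder.
$s3->putObject(array(
    'Bucket' => $bucket,
    'Key'    => 'images/',
    'Body'   => '',
));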
My script takes files that have been passed into a function and combines and saves them as a compressed file, using ZipArchive, in a directory on my server.
Then I upload the zipped file to an AWS S3 bucket and delete the uploaded file from my server.
However, is there a way to save the ZipArchive as a variable or temporary file and upload it directly to AWS, without saving it to and then deleting it from my server?
$files = $_GET['json'];

$zipFolder = new ZipArchive;
$zipPath = "folder/compressedfile.zip";
if ($zipFolder->open($zipPath, ZipArchive::CREATE) === TRUE) {
    foreach ($files as $file) {
        $textString = $file['text'];
        $zipFolder->addFromString($file['name'] . '.txt', $textString);
    }
    $zipFolder->close(); // close only if the archive was actually opened
}
require 'vendor/autoload.php'; // the SDK's autoloader is assumed

use Aws\S3\S3Client;

$s3Client = new S3Client([
    'region' => '--region--',
    'version' => 'latest',
    'credentials' => [
        'key' => '--key--',
        'secret' => '--secret--',
    ],
]);

$bucketName = '--bucket--';
$result = $s3Client->putObject([
    'Bucket' => $bucketName,
    'Key' => 'compressedfile.zip',
    'SourceFile' => $zipPath
]);

unlink($zipPath);
Yes, you can stream data from memory to an S3 object. You don't have to upload a file from disk.
Take a look at the S3 Stream Wrapper as one option.
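As a hedged sketch of that option (assuming AWS SDK for PHP v3 and the $s3Client/$bucketName from the question): once the wrapper is registered, ordinary PHP file functions can write straight to the bucket. ZipArchive itself still insists on a real, seekable file, so a temp file in the system temp dir is about as close to "no server file" as it gets:

$s3Client->registerStreamWrapper(); // enables s3:// paths for plain PHP file functions

// ZipArchive needs a real file, so build the archive in the system temp dir;
// the upload is then a one-line stream copy, with no putObject call needed.
$tmpZip = tempnam(sys_get_temp_dir(), 'zip');
// ... build the archive at $tmpZip exactly as in the question ...
copy($tmpZip, 's3://' . $bucketName . '/compressedfile.zip');
unlink($tmpZip);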
I'm trying to delete a folder in an S3 bucket that is located inside a folder called CreativeEngine. The folder structure looks like this: CreativeEngine/8943.
I want to delete the folder called 8943, but it contains files within it. Do I need to do some kind of loop to delete the files first, or can I delete the folder directly? I tried this but it didn't work:
<?php
$itemId = $_GET['id'];

require('s3/vendor/autoload.php');

use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;

// AWS Info
$bucketName = 'mybucket';
$IAM_KEY = 'mykey';
$IAM_SECRET = 'mysecret';

// Connect to AWS
$s3 = S3Client::factory(
    array(
        'credentials' => array(
            'key' => $IAM_KEY,
            'secret' => $IAM_SECRET
        ),
        'version' => 'latest',
        'region' => 'us-east-2'
    )
);

$s3Destination = 'CreativeEngine/' . $itemId;
$keyName = $s3Destination;

try {
    $s3->deleteObject(array(
        'Bucket' => $bucketName,
        'Key' => $keyName
    ));
} catch (S3Exception $e) {
    $data['message'] = '<li>error' . $e->getMessage() . '</li>';
}
?>
This is possible via delete_all_objects($bucket, $pcre) in the older AWS SDK for PHP (the AmazonS3 class), where $pcre is an optional Perl-Compatible Regular Expression (PCRE) to filter the names against (the default is PCRE_ALL, which is "/.*/i"), e.g.:
$s3 = new AmazonS3();
$response = $s3->delete_all_objects($bucket, "#myDirectory/.*#");
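If you are on the modern SDK (v3, matching the S3Client code in the question), a comparable one-liner is deleteMatchingObjects(), which lists and deletes everything under a key prefix (the prefix here is assumed from the question):

// Deletes every object whose key starts with the prefix -- this is how an
// S3 "folder" and its contents are removed.
$s3->deleteMatchingObjects($bucketName, 'CreativeEngine/' . $itemId . '/');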
I'm trying to download a private S3 object and store it on my website's server.
Here is what I'm trying:
$s3 = new S3Client([
    'version' => 'latest',
    'region' => 'ap-south-1',
    'credentials' => array(
        'key' => '*****',
        'secret' => '*******'
    )
]);

$command = $s3->getCommand('GetObject', array(
    'Bucket' => 'bucket_name',
    'Key' => 'object_name_in_s3', // note: this comma was missing, a parse error
    'ResponseContentDisposition' => 'attachment; filename="' . $my_file_name . '"'
));

$signedUrl = $command->createPresignedUrl('+15 minutes');
echo $signedUrl;
How can I save these files on my server?
From Get an Object Using the AWS SDK for PHP:
use Aws\S3\S3Client;

$bucket = '*** Your Bucket Name ***';
$keyname = '*** Your Object Key ***';
$filepath = '*** Your File Path ***';

// Instantiate the client.
$s3 = S3Client::factory();

// Save object to a file.
$result = $s3->getObject(array(
    'Bucket' => $bucket,
    'Key' => $keyname,
    'SaveAs' => $filepath
));
If you just want to download a file from the command line (instead of an app), you can use the AWS Command-Line Interface (CLI) -- it has an aws s3 cp command, e.g. aws s3 cp s3://my-bucket/my-key my-local-file.
The pre-signed URL in your code can be used to grant time-limited access to a private object stored in an Amazon S3 bucket. Typically, your application generates the URL and includes it in a web page for users to click and download the object. There is no need to use it server-side, because the server already has credentials that are authorized to access content in Amazon S3.
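If you do want the browser-download route, here is a hedged sketch of the SDK v3 way to sign the URL (the question's createPresignedUrl() is the older v2 call; the bucket and key placeholders are assumed from the question):

$command = $s3->getCommand('GetObject', array(
    'Bucket' => 'bucket_name',
    'Key' => 'object_name_in_s3',
));

// In SDK v3 you sign a request object rather than calling the command itself.
$request = $s3->createPresignedRequest($command, '+15 minutes');
$signedUrl = (string) $request->getUri();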
I have a script which resizes and crops an image, and I would like to upload the image to my Amazon S3 bucket on the fly.
The problem is that I get an error message when I try to run my script, because I guess the source file is not recognized as a direct path on disk ($filepath). Do you have any idea how to get through this situation?
Fatal error: Uncaught exception 'Aws\Common\Exception\InvalidArgumentException' with message 'You must specify a non-null value for the Body or SourceFile parameters.' in phar:///var/www/submit/aws.phar/Aws/Common/Client/UploadBodyListener.php:...
$myResizedImage = imagecreatetruecolor($width, $height);
imagecopyresampled($myResizedImage, $myImage, 0, 0, 0, 0, $width, $height, $origineWidth, $origineHeight);

$myImageCrop = imagecreatetruecolor(612, 612);
imagecopy($myImageCrop, $myResizedImage, 0, 0, $posX, $posY, 612, 612);

//Save image on Amazon S3
require 'aws.phar';

use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;

$bucket = 'yol';
$keyname = 'image_resized';
$filepath = $myImageCrop;

// Instantiate the client.
$s3 = S3Client::factory(array(
    'key' => 'private-key',
    'secret' => 'secrete-key',
    'region' => 'eu-west-1'
));

try {
    // Upload data.
    $result = $s3->putObject(array(
        'Bucket' => $bucket,
        'Key' => $keyname,
        'SourceFile' => $filepath,
        'ACL' => 'public-read',
        'ContentType' => 'image/jpeg'
    ));

    // Print the URL to the object.
    echo $result['ObjectURL'] . "\n";
} catch (S3Exception $e) {
    echo $e->getMessage() . "\n";
}
You need to convert your image resource to an actual string containing image data. You can use this function to achieve this:
function image_data($gdimage)
{
    ob_start();
    imagejpeg($gdimage);
    return ob_get_clean();
}
You are setting the SourceFile of the upload to $filepath, which is assigned from $myImageCrop = imagecreatetruecolor(...). As such, it's not actually a path at all — it's a GD image resource. You can't upload these to S3 directly.
You'll need to either write that image out to a file (using e.g. imagejpeg() + file_put_contents()), or run the upload using data in memory (again, from imagejpeg() or similar).
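For instance, a minimal sketch of the in-memory route, reusing the image_data() helper from the answer above and the $s3, $bucket, and $keyname already set up in the question:

// Pass the JPEG bytes as 'Body' instead of pointing 'SourceFile' at a path.
$result = $s3->putObject(array(
    'Bucket' => $bucket,
    'Key' => $keyname,
    'Body' => image_data($myImageCrop),
    'ACL' => 'public-read',
    'ContentType' => 'image/jpeg'
));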
require 'aws.phar';

use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;

$bucket = 'kkkk';
$keyname = 'test';

// putObject needs a real file on disk, so write the GD image to a temp file.
$newFileName = tempnam(sys_get_temp_dir(), 'img'); // take a look at tempnam() and adjust parameters if needed
imagejpeg($myImageCrop, $newFileName, 100);
$filepath = $newFileName;

// Instantiate the client.
$s3 = S3Client::factory(array(
    'key' => 'jjj',
    'secret' => 'kkk',
    'region' => 'eu-west-1'
));

try {
    // Upload data.
    $result = $s3->putObject(array(
        'Bucket' => $bucket,
        'Key' => $keyname,
        'SourceFile' => $filepath,
        'ACL' => 'public-read',
        'ContentType' => 'image/jpeg'
    ));

    // Print the URL to the object.
    echo $result['ObjectURL'] . "\n";
} catch (S3Exception $e) {
    echo $e->getMessage() . "\n";
}
I have decided to make use of Amazon's new server-side encryption with S3; however, I have run into a problem which I am unable to resolve.
I am using the S3 PHP class found here: https://github.com/tpyo/amazon-s3-php-class
I had been using this code to put objects originally (and it was working):
S3::putObjectFile($file, $s3_bucket_name, $file_path, S3::ACL_PRIVATE,
    array(),
    array(
        "Content-Disposition" => "attachment; filename=$filename",
        "Content-Type" => "application/octet-stream"
    )
);
I then did as instructed here: http://docs.amazonwebservices.com/AmazonS3/latest/API/index.html?RESTObjectPUT.html and added the 'x-amz-server-side-encryption' request header. But now when I try to put an object, it fails without any error.
My new code is:
S3::putObjectFile($file, $s3_bucket_name, $file_path, S3::ACL_PRIVATE,
    array(),
    array(
        "Content-Disposition" => "attachment; filename=$filename",
        "Content-Type" => "application/octet-stream",
        "x-amz-server-side-encryption" => "AES256"
    )
);
Has anybody experimented with this new feature, or can anyone see an error in the code?
Cheers.
That header should be part of the $metaHeaders array, not the $requestHeaders array.
S3::putObjectFile($file, $s3_bucket_name, $file_path, S3::ACL_PRIVATE,
    array(
        "x-amz-server-side-encryption" => "AES256"
    ),
    array(
        "Content-Disposition" => "attachment; filename=$filename",
        "Content-Type" => "application/octet-stream"
    )
);
Here's the method definition from the docs:
putObject(mixed $input,
          string $bucket,
          string $uri,
          [constant $acl = S3::ACL_PRIVATE],
          [array $metaHeaders = array()],
          [array $requestHeaders = array()])
You might also consider using the official AWS SDK for PHP.
We can upload files with encryption using the following code (this uses the older SDK's create_object() method):
$s3->create_object($bucket_name, $destination, array(
    'acl' => AmazonS3::ACL_PUBLIC,
    'fileUpload' => $file_local,
    'encryption' => "AES256"
));
You can download the latest SDK from here.
With the official SDK:
use Aws\S3\S3Client;

$bucket = '*** Your Bucket Name ***';
$keyname = '*** Your Object Key ***';
// $filepath should be absolute path to a file on disk
$filepath = '*** Your File Path ***';

// Instantiate the client.
$s3 = S3Client::factory();

// Upload a file with server-side encryption.
$result = $s3->putObject(array(
    'Bucket' => $bucket,
    'Key' => $keyname,
    'SourceFile' => $filepath,
    'ServerSideEncryption' => 'AES256',
));
Changing Server-Side Encryption of an Existing Object (Copy Operation)
use Aws\S3\S3Client;

$sourceBucket = '*** Your Source Bucket Name ***';
$sourceKeyname = '*** Your Source Object Key ***';
$targetBucket = '*** Your Target Bucket Name ***';
$targetKeyname = '*** Your Target Object Key ***';

// Instantiate the client.
$s3 = S3Client::factory();

// Copy an object and add server-side encryption.
$result = $s3->copyObject(array(
    'Bucket' => $targetBucket,
    'Key' => $targetKeyname,
    'CopySource' => "{$sourceBucket}/{$sourceKeyname}",
    'ServerSideEncryption' => 'AES256',
));
Source: http://docs.aws.amazon.com/AmazonS3/latest/dev/SSEUsingPHPSDK.html
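As a hedged follow-up, you can confirm which encryption ended up on the object by reading its metadata with the same client (target names assumed from the copy example above):

// headObject() returns the object's metadata without downloading the body;
// the ServerSideEncryption field reports the algorithm in effect.
$head = $s3->headObject(array(
    'Bucket' => $targetBucket,
    'Key' => $targetKeyname,
));
echo $head['ServerSideEncryption']; // "AES256"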
With Laravel 5+ it can be done easily through the filesystems.php config; you don't need to get at the driver or the low-level client object.
's3' => [
    'driver' => 's3',
    'key' => "Your Key",
    'secret' => "Your Secret",
    'region' => "Bucket Region",
    'bucket' => "Bucket Name",
    'options' => [
        'ServerSideEncryption' => 'AES256',
    ]
],
//Code
$disk = Storage::disk('s3'); // the disk configured above
$disk->put("filename", "content", "public"); // the file will be stored with AES256