Upload image to AWS bucket from remote server - php

I have a web application in PHP up and running. I want this app to be able to upload images to an AWS S3 bucket. I've been checking the documentation at AWS, but found at least three different guides for this purpose. I'm still not clear: is it possible for my web app, hosted with a different hosting service, to upload files to AWS?
If yes, which is the best option?

You should be able to upload from outside of the AWS network.
Use the AWS PHP SDK at https://aws.amazon.com/sdk-for-php/
Then use the following code:
<?php
require 'vendor/autoload.php';

// SDK v3 namespaces (the MultipartUploadException lives under Aws\Exception in v3).
use Aws\Exception\MultipartUploadException;
use Aws\S3\MultipartUploader;
use Aws\S3\S3Client;

$bucket = '*** Your Bucket Name ***';
$keyname = '*** Your Object Key ***';

$s3 = new S3Client([
    'version' => 'latest',
    'region'  => 'us-east-1'
]);

// Prepare the upload parameters.
$uploader = new MultipartUploader($s3, '/path/to/large/file.zip', [
    'bucket' => $bucket,
    'key'    => $keyname
]);

// Perform the upload.
try {
    $result = $uploader->upload();
    echo "Upload complete: {$result['ObjectURL']}" . PHP_EOL;
} catch (MultipartUploadException $e) {
    echo $e->getMessage() . PHP_EOL;
}
Edit the bucket name, key name, region, and upload file name.
This uses the multipart upload API, so you can upload very large files.
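Since your app is hosted outside AWS, the SDK's default credential chain (an EC2 instance role, for example) won't be available, so you will most likely need to supply credentials explicitly. A minimal sketch, assuming you have created an IAM user with permission to write to the bucket; the environment variable names are the SDK's standard ones:
<?php
require 'vendor/autoload.php';

use Aws\S3\S3Client;

// Explicit credentials for a host outside AWS. The SDK also picks these
// environment variables up automatically; passing them explicitly just
// makes the dependency visible. Never hard-code real keys in source.
$s3 = new S3Client([
    'version'     => 'latest',
    'region'      => 'us-east-1',
    'credentials' => [
        'key'    => getenv('AWS_ACCESS_KEY_ID'),
        'secret' => getenv('AWS_SECRET_ACCESS_KEY'),
    ],
]);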

Related

Download File from Amazon S3. Download succeeds but File Empty

I tried to download a file from Amazon S3 to local storage. The download succeeds and the file appears in local storage, but it is empty, with no content. It looks like I missed something in the code. Need your help, friends. Thanks in advance.
Here's the code:
<?php
namespace App\Console\Library;

require 'vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;
use Storage;

class DownloadAWS
{
    public function downloadFile()
    {
        $s3_file = Storage::cloud()->url('Something.jsonl');
        $s3 = Storage::disk('local')->put('Order.jsonl', $s3_file);
    }
}
require 'vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;

$bucket = '*** Your Bucket Name ***';
$keyname = '*** Your Object Key ***';

$s3 = new S3Client([
    'version' => 'latest',
    'region'  => 'us-east-1'
]);

try {
    // Get the object.
    $result = $s3->getObject([
        'Bucket' => $bucket,
        'Key'    => $keyname
    ]);

    // Display the object in the browser.
    header("Content-Type: {$result['ContentType']}");
    echo $result['Body'];
} catch (S3Exception $e) {
    echo $e->getMessage() . PHP_EOL;
}
Currently, you are retrieving a URL to the S3 file and writing that URL string into a file. Your code, as written, creates a file Order.jsonl containing the link to the S3 file, not its contents.
What you really seem to want is to fetch the file and store it locally. You can achieve this with the following code:
public function downloadFile()
{
    $s3_file = Storage::cloud()->get('Something.jsonl');
    $s3 = Storage::disk('local')->put('Order.jsonl', $s3_file);
}
The only difference is using get() vs. url().
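One caveat: get() reads the whole object into memory, which can be a problem for very large files. A hedged variant using streams instead, assuming a stock Laravel setup where readStream() is available on the cloud disk and put() accepts a stream resource:
public function downloadFile()
{
    // Stream the object from S3 to the local disk without
    // buffering the whole file in memory.
    $stream = Storage::cloud()->readStream('Something.jsonl');
    Storage::disk('local')->put('Order.jsonl', $stream);

    if (is_resource($stream)) {
        fclose($stream);
    }
}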

Setting Storage class on Amazon s3 upload (ver 3)

I can't work out how to make this upload as 'reduced redundancy'.
I've added it in there twice, but it doesn't do anything. I think the way I have applied it is useless.
I think I need to use this line, but it seems I need to rebuild this?
setOption('StorageClass', 'REDUCED_REDUNDANCY')
require_once __DIR__ . '/vendor/autoload.php';

$options = [
    'region' => $region,
    'credentials' => [
        'key'    => $accessKeyId,
        'secret' => $secretKey
    ],
    'version' => '2006-03-01',
    'signature_version' => 'v4',
    'StorageClass' => 'REDUCED_REDUNDANCY',
];

$s3Client = new \Aws\S3\S3Client($options);

$uploader = new \Aws\S3\MultipartUploader($s3Client, $filename_dir, [
    'bucket' => $bucket,
    'key'    => $filename,
    'StorageClass' => 'REDUCED_REDUNDANCY',
]);

try {
    $result = $uploader->upload();
    echo "Upload complete: {$result['ObjectURL']}\n";
} catch (\Aws\Exception\MultipartUploadException $e) {
    echo $e->getMessage() . "\n";
}
Reduced Redundancy Storage used to be about 20% lower cost, in exchange for only storing 2 copies of the data instead of 3 copies (1 redundant copy instead of 2 redundant copies).
However, with the December 2016 pricing changes to Amazon S3, it is no longer beneficial to use Reduced Redundancy Storage.
Using pricing from US Regions:
Reduced Redundancy Storage = 2.4c/GB
Standard storage = 2.3c/GB
Standard-Infrequent Access storage = 1.25c/GB + 1c/GB retrievals
Therefore, RRS is now more expensive than Standard storage. It is now cheaper to choose Standard or Standard-Infrequent Access.
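For completeness, switching an upload to Standard-Infrequent Access is just a matter of passing a different StorageClass on the request; a sketch using a plain v3 putObject call with the same placeholder variables as the question:
$s3Client->putObject([
    'Bucket'       => $bucket,
    'Key'          => $filename,
    'SourceFile'   => $filename_dir,
    'StorageClass' => 'STANDARD_IA', // Standard-Infrequent Access
]);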
Setting "StorageClass" like this won't work.
$s3Client = new \Aws\S3\S3Client($options);
Because the StorageClass is only set when the object is uploaded, you can not default all of your requests to a specific configuration during the initialization of the SDK. Each individual PUT request must have it's own options specified for it.
To use the "SetOption" line you mentioned, you may need to update your code to follow the following example found in the AWS PHP SDK Documentation.
Using the AWS PHP SDK for Multipart Upload (High-Level API) Documentation
The following PHP code sample demonstrates how to upload a file using the high-level UploadBuilder object.
<?php
// Include the AWS SDK using the Composer autoloader.
require 'vendor/autoload.php';

use Aws\Common\Exception\MultipartUploadException;
use Aws\S3\Model\MultipartUpload\UploadBuilder;
use Aws\S3\S3Client;

$bucket = '*** Your Bucket Name ***';
$keyname = '*** Your Object Key ***';

// Instantiate the client.
$s3 = S3Client::factory();

// Prepare the upload parameters.
$uploader = UploadBuilder::newInstance()
    ->setClient($s3)
    ->setSource('/path/to/large/file.mov')
    ->setBucket($bucket)
    ->setKey($keyname)
    ->setMinPartSize(25 * 1024 * 1024)
    ->setOption('Metadata', array(
        'param1' => 'value1',
        'param2' => 'value2'
    ))
    ->setOption('ACL', 'public-read')
    ->setConcurrency(3)
    ->build();

// Perform the upload. Abort the upload if something goes wrong.
try {
    $uploader->upload();
    echo "Upload complete.\n";
} catch (MultipartUploadException $e) {
    $uploader->abort();
    echo "Upload failed.\n";
    echo $e->getMessage() . "\n";
}
So in this case you need to add 'StorageClass' as follows; the position isn't important, only the use of setOption to set it:
    ->setOption('ACL', 'public-read')
    ->setOption('StorageClass', 'REDUCED_REDUNDANCY')
    ->setConcurrency(3)
    ->build();
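If you would rather stay on version 3 of the SDK (which your new \Aws\S3\S3Client($options) suggests), there is no setOption(). One way to do it, sketched on the assumption that the v3 MultipartUploader's before_initiate callback is the right hook for adding parameters to the CreateMultipartUpload request:
$uploader = new \Aws\S3\MultipartUploader($s3Client, $filename_dir, [
    'bucket' => $bucket,
    'key'    => $filename,
    // Attach StorageClass to the initial CreateMultipartUpload request.
    'before_initiate' => function (\Aws\CommandInterface $command) {
        $command['StorageClass'] = 'REDUCED_REDUNDANCY';
    },
]);

try {
    $result = $uploader->upload();
    echo "Upload complete: {$result['ObjectURL']}\n";
} catch (\Aws\Exception\MultipartUploadException $e) {
    echo $e->getMessage() . "\n";
}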

How to use AWS sdk in PHP for file operations for s3 and ec2?

I'm stuck in a situation where I need to write files both to S3 and to the EC2 instance itself.
The code below works perfectly for writing files to S3, but I don't know how to write to EC2.
<?php
if (file_exists('aws-autoloader.php')) {
    require 'aws-autoloader.php';
} else {
    die('File does not exist');
}

define('AWS_KEY', '*****');
define('AWS_SECRET', '********');

use Aws\S3\S3Client;
use Aws\Credentials\Credentials;

$credentials = new Credentials(AWS_KEY, AWS_SECRET);

$client = new S3Client([
    'version' => 'latest',
    'region'  => 'ap-southeast-1',
    'credentials' => $credentials
]);

$client->registerStreamWrapper();
file_put_contents('s3://mybucket/abc/a.txt', 'test');

#chmod('/var/app/current', 0755);
// Not able to write the content to the EC2 instance
#file_put_contents('/var/app/current/b.txt', 'test');
#file_put_contents('file://var/app/current/b.txt', 'test');
?>
It works without any problem with the code below, as long as the application has write permission on the directory:
file_put_contents('/var/app/current/b.txt', 'test');
or
file_put_contents('file:///var/app/current/b.txt', 'test');
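A small defensive sketch, using the same /var/app/current path from the question, that checks the permission up front and reports a useful error instead of failing silently; writing to the EC2 instance's local disk is plain filesystem I/O, and the only AWS-specific part above was the s3:// stream wrapper:
$dir  = '/var/app/current';
$path = $dir . '/b.txt';

// Local writes on EC2 are ordinary PHP file operations; they fail
// only when the web server user lacks permission on the directory.
if (!is_writable($dir)) {
    die("Directory {$dir} is not writable by the web server user.");
}

if (file_put_contents($path, 'test') === false) {
    die("Failed to write {$path}.");
}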

uploading posted file to amazon s3

I'm trying to upload a file to Amazon S3 via Laravel 4.
After the user submits a form, the file is passed to a function where I need to use the Amazon PHP SDK to upload it to an Amazon S3 bucket.
But how do I upload the file straight to Amazon S3 without saving it on my own server first?
My current code looks like this:
private function uploadVideo($vid)
{
    $file = $vid;
    $filename = $file->getClientOriginalName();

    if (!class_exists('S3')) require_once('S3.php');
    if (!defined('awsAccessKey')) define('awsAccessKey', '123123123');
    if (!defined('awsSecretKey')) define('awsSecretKey', '123123123');

    $s3 = new S3(awsAccessKey, awsSecretKey);
    $s3->putBucket("mybucket", S3::ACL_PUBLIC_READ);
    $s3->putObject($vid, "mybucket", $filename, S3::ACL_PUBLIC_READ);
}
Grab the official SDK from http://docs.aws.amazon.com/aws-sdk-php/latest/index.html
This example uses the upload() helper: http://docs.aws.amazon.com/aws-sdk-php/latest/class-Aws.S3.S3Client.html#_upload
require('aws.phar');

use Aws\S3\S3Client;
use Aws\Common\Enum\Region;

// Instantiate the S3 client with your AWS credentials and desired AWS region
$client = S3Client::factory(array(
    'key'    => 'KEY HERE',
    'secret' => 'SECRET HERE',
    'region' => Region::AP_SOUTHEAST_2 // you will need to change or remove this
));

$result = $client->upload(
    'BUCKET HERE',
    'OBJECT KEY HERE',
    'STRING OF YOUR FILE HERE',
    'public-read' // public access ACL
);
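To skip saving the file on your own server, upload() also accepts a stream for the body. A sketch assuming $vid is Laravel 4's uploaded file object (a Symfony UploadedFile); PHP has already buffered the upload to a temp file, so this streams from that temp path instead of making another copy:
// $vid is the uploaded file from the form.
$stream = fopen($vid->getRealPath(), 'r');

$result = $client->upload(
    'BUCKET HERE',
    $vid->getClientOriginalName(),
    $stream,       // stream body: no extra copy written by your code
    'public-read'
);

if (is_resource($stream)) {
    fclose($stream);
}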

Delete object or bucket in Amazon S3?

I created a new Amazon bucket called "photos". The bucket URL is something like:
www.amazons3.salcaiser.com/photos
Now I upload subfolders containing files into that bucket, for example:
www.amazons3.salcaiser.com/photos/thumbs/file.jpg
My questions are: is thumbs/ treated as a new bucket or as an object?
And if I want to delete the entire thumbs/ directory, do I first need to delete all the files inside it, or can I delete it all in one go?
In the case you are describing, "photos" is the bucket. S3 does not have sub-buckets or directories. Directories are simulated by using slashes in the object key. "thumbs/file.jpg" is an object key and "thumbs/" would be considered a key prefix.
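To make that concrete, here is a short sketch (assuming the same v2-style client as the code below) that lists every key sharing the thumbs/ prefix:
// Keys are flat strings; a "directory" is just a prefix you filter on.
$result = $s3->listObjects(array(
    'Bucket' => 'photos',
    'Prefix' => 'thumbs/',
));

foreach ($result['Contents'] as $object) {
    echo $object['Key'], PHP_EOL; // e.g. "thumbs/file.jpg"
}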
Dagon's examples are good and use the older version 1.x of the AWS SDK for PHP. However, you can do this more easily with the newest 2.4.x version AWS SDK for PHP which includes a helper method for deleting multiple objects.
<?php
// Include the SDK. This line depends on your installation method.
require 'aws.phar';

use Aws\S3\S3Client;

$s3 = S3Client::factory(array(
    'key'    => 'your-aws-access-key',
    'secret' => 'your-aws-secret-key',
));

// Delete the objects in the "photos" bucket with a prefix of "thumbs/"
$s3->deleteMatchingObjects('photos', 'thumbs/');
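If you are on version 3 of the SDK instead, deleteMatchingObjects() still exists on S3Client; an equivalent sketch using the v3 BatchDelete helper, with a placeholder region and the bucket/prefix from the question:
require 'vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\S3\BatchDelete;

$s3 = new S3Client([
    'version' => 'latest',
    'region'  => 'us-east-1', // placeholder region
]);

// Queue every object under the "thumbs/" prefix and delete them in batches.
$batch = BatchDelete::fromListObjects($s3, [
    'Bucket' => 'photos',
    'Prefix' => 'thumbs/',
]);
$batch->delete();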
// Include the s3.php file first in code
if (!class_exists('S3'))
    require_once('S3.php');

// AWS access info
if (!defined('awsAccessKey'))
    define('awsAccessKey', 'awsAccessKey');
if (!defined('awsSecretKey'))
    define('awsSecretKey', 'awsSecretKey');

// Instantiate the class
$s3 = new S3(awsAccessKey, awsSecretKey);

if ($s3->deleteObject("bucketname", "filename")) {
    echo 'deleted';
} else {
    echo 'no file found';
}
Found some code snippets for 'directory' deletion; I did not write them. Both use version 1.x of the SDK, and both need $bucket brought into scope, which the originals missed:
PHP 5.3+:
$s3 = new AmazonS3();
$bucket = 'your-bucket';
$folder = 'folder/sub-folder/';

$s3->get_object_list($bucket, array(
    'prefix' => $folder
))->each(function($node, $i, $s3) use ($bucket) { // capture $bucket explicitly
    $s3->batch()->delete_object($bucket, $node);
}, array($s3));

$responses = $s3->batch()->send();
var_dump($responses->areOK());

Older PHP 5.2.x:
$s3 = new AmazonS3();
$bucket = 'your-bucket';
$folder = 'folder/sub-folder/';

$s3->get_object_list($bucket, array(
    'prefix' => $folder
))->each('construct_batch_delete', array($s3));

function construct_batch_delete($node, $i, &$s3)
{
    global $bucket; // the callback runs outside the defining scope
    $s3->batch()->delete_object($bucket, $node);
}

$responses = $s3->batch()->send();
var_dump($responses->areOK());
I have implemented this in Yii as:
$aws = Yii::$app->awssdk->getAwsSdk();
$s3 = $aws->createS3();
$s3->deleteMatchingObjects('Bucket Name', 'key prefix');
Note that the second argument is a key prefix, not a single object key, so this deletes everything under that prefix.
