I am trying to copy a 1TB file from one bucket to another. I know that this can be done easily if I log into the AWS S3 panel but I would like to do it using PHP.
I am using the following AWS S3 class from GitHub:
public static function copyObject($srcBucket, $srcUri, $bucket, $uri, $acl = self::ACL_PRIVATE, $metaHeaders = array(), $requestHeaders = array(), $storageClass = self::STORAGE_CLASS_STANDARD)
{
    $rest = new S3Request('PUT', $bucket, $uri, self::$endpoint);
    $rest->setHeader('Content-Length', 0);
    foreach ($requestHeaders as $h => $v) $rest->setHeader($h, $v);
    foreach ($metaHeaders as $h => $v) $rest->setAmzHeader('x-amz-meta-'.$h, $v);
    if ($storageClass !== self::STORAGE_CLASS_STANDARD) // Storage class
        $rest->setAmzHeader('x-amz-storage-class', $storageClass);
    $rest->setAmzHeader('x-amz-acl', $acl);
    $rest->setAmzHeader('x-amz-copy-source', sprintf('/%s/%s', $srcBucket, rawurlencode($srcUri)));
    if (sizeof($requestHeaders) > 0 || sizeof($metaHeaders) > 0)
        $rest->setAmzHeader('x-amz-metadata-directive', 'REPLACE');
    $rest = $rest->getResponse();
    if ($rest->error === false && $rest->code !== 200)
        $rest->error = array('code' => $rest->code, 'message' => 'Unexpected HTTP status');
    if ($rest->error !== false)
    {
        self::__triggerError(sprintf("S3::copyObject({$srcBucket}, {$srcUri}, {$bucket}, {$uri}): [%s] %s",
            $rest->error['code'], $rest->error['message']), __FILE__, __LINE__);
        return false;
    }
    return isset($rest->body->LastModified, $rest->body->ETag) ? array(
        'time' => strtotime((string)$rest->body->LastModified),
        'hash' => substr((string)$rest->body->ETag, 1, -1)
    ) : false;
}
I am using it in my PHP code as follows:
$s3 = new S3(AWS_ACCESS_KEY, AWS_SECRET_KEY);
$s3->copyObject($srcBucket, $srcName, $bucketName, $saveName, S3::ACL_PUBLIC_READ_WRITE);
Nothing shows up in the error log. What am I doing wrong or missing here?
At 1 TB, the object is too large to copy in a single operation. To quote from the S3 REST API documentation:
You can store individual objects of up to 5 TB in Amazon S3. You create a copy of your object up to 5 GB in size in a single atomic operation using this API. However, for copying an object greater than 5 GB, you must use the multipart upload API.
Unfortunately, it doesn't appear that the S3 class you're using supports multipart uploads, so you'll need to use something else. I'd strongly recommend that you use Amazon's AWS SDK for PHP — it's a bit bigger and more complex than the one you're using right now, but it supports the entirety of the S3 API (as well as other AWS services!), so it'll be able to handle this operation.
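For reference, here is a rough, untested sketch of a server-side multipart copy using the SDK for PHP v3's low-level createMultipartUpload / uploadPartCopy / completeMultipartUpload operations. The bucket names, keys, region, and the 500 MB part size are placeholder assumptions; with 500 MB parts, a 1 TB object needs roughly 2,100 parts, comfortably under S3's 10,000-part limit.

require 'vendor/autoload.php';

use Aws\S3\S3Client;

// Placeholder client configuration and names -- adjust for your account.
$s3 = new S3Client(['version' => '2006-03-01', 'region' => 'us-east-1']);

$srcBucket = 'source-bucket';
$srcKey    = 'huge-file.bin';
$dstBucket = 'destination-bucket';
$dstKey    = 'huge-file.bin';
$partSize  = 500 * 1024 * 1024; // 500 MB per part (UploadPartCopy allows up to 5 GB)

// Find the object size so it can be split into byte ranges.
$size = $s3->headObject(['Bucket' => $srcBucket, 'Key' => $srcKey])['ContentLength'];

// Start the multipart upload on the destination object.
$uploadId = $s3->createMultipartUpload([
    'Bucket' => $dstBucket,
    'Key'    => $dstKey,
])['UploadId'];

$parts = [];
$partNumber = 1;
for ($offset = 0; $offset < $size; $offset += $partSize, $partNumber++) {
    $end = min($offset + $partSize, $size) - 1;
    // Each part is copied by S3 server-side; no data flows through this machine.
    $result = $s3->uploadPartCopy([
        'Bucket'          => $dstBucket,
        'Key'             => $dstKey,
        'UploadId'        => $uploadId,
        'PartNumber'      => $partNumber,
        'CopySource'      => "{$srcBucket}/" . rawurlencode($srcKey),
        'CopySourceRange' => "bytes={$offset}-{$end}",
    ]);
    $parts[] = [
        'PartNumber' => $partNumber,
        'ETag'       => $result['CopyPartResult']['ETag'],
    ];
}

// Stitch the copied parts together into the final object.
$s3->completeMultipartUpload([
    'Bucket'          => $dstBucket,
    'Key'             => $dstKey,
    'UploadId'        => $uploadId,
    'MultipartUpload' => ['Parts' => $parts],
]);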
Related
I am working on a process to upload a large number of files to S3, and for my smaller files I am building a list of commands using getCommand to upload them concurrently, like this:
$commands = array();

$commands[] = $s3Client->getCommand('PutObject', array(
    'Bucket' => 'mybucket',
    'Key' => 'filename.ext',
    'Body' => fopen('filepath', 'r'),
));

$commands[] = $s3Client->getCommand('PutObject', array(
    'Bucket' => 'mybucket',
    'Key' => 'filename_2.ext',
    'Body' => fopen('filepath_2', 'r'),
));
etc.
try {
    $pool = new CommandPool($s3Client, $commands, [
        'concurrency' => 5,
        'before' => function (CommandInterface $cmd, $iterKey) {
            // Do stuff before the file starts to upload
        },
        'fulfilled' => function (ResultInterface $result, $iterKey, PromiseInterface $aggregatePromise) {
            // Do stuff after the file is finished uploading
        },
        'rejected' => function (AwsException $reason, $iterKey, PromiseInterface $aggregatePromise) {
            // Do stuff if the file fails to upload
        },
    ]);

    // Initiate the pool transfers
    $promise = $pool->promise();

    // Force the pool to complete synchronously
    $promise->wait();

    $promise->then(function() { echo "All the files have finished uploading!"; });
} catch (Exception $e) {
    echo "Exception Thrown: Failed to upload: ".$e->getMessage()."<br>\n";
}
This works fine for the smaller files, but some of my files are large enough that I'd like them to be uploaded in multiple parts automatically. So instead of using getCommand('PutObject'), which uploads an entire file, I'd like to use something like getCommand('ObjectUploader') so that the larger files can be broken up automatically as needed. However, when I try to use getCommand('ObjectUploader'), it throws an error saying that it doesn't know what to do with that. I'm guessing the command has a different name, which is why it throws the error, but it's also possible that this simply can't be done this way.
If you've worked on something like this in the past, how have you done it? Or even if you haven't worked on it, I'm open to any ideas you might have.
Thanks!
References:
https://docs.aws.amazon.com/sdk-for-php/v3/developer-guide/guide_commands.html#command-pool
https://docs.aws.amazon.com/sdk-for-php/v3/developer-guide/s3-multipart-upload.html#object-uploader
I decided to go a different direction with this, and instead of using an array of concurrent commands I am now using a set of concurrent MultipartUploader promises, as shown in an example on this page: https://500.keboola.com/parallel-multipart-uploads-to-s3-in-php-61ff03ffc043
Here are the basics:
// Create an array of your file paths
$files = ['file1', 'file2', ...];

// Create an array to hold the promises
$promises = [];

// Create MultipartUploader objects, and add them to the promises array
foreach ($files as $filePath) {
    $uploader = new \Aws\S3\MultipartUploader($s3Client, $filePath, $uploaderOptions);
    $promises[$filePath] = $uploader->promise();
}

// Process the promises once they are all complete
$results = \GuzzleHttp\Promise\unwrap($promises);
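Since MultipartUploader requires 'bucket' and 'key' in its config, $uploaderOptions generally needs to be built per file inside the loop. Here is a fuller, untested sketch (the bucket name, key derivation, and part size are placeholder assumptions), using Guzzle's settle() so one failed upload does not reject the whole batch:

$promises = [];
foreach ($files as $filePath) {
    $uploader = new \Aws\S3\MultipartUploader($s3Client, $filePath, [
        'bucket'    => 'mybucket',           // placeholder bucket
        'key'       => basename($filePath),  // hypothetical: derive the key from the path
        'part_size' => 10 * 1024 * 1024,     // 10 MB parts
    ]);
    $promises[$filePath] = $uploader->promise();
}

// settle() waits for every promise and reports each outcome individually.
$outcomes = \GuzzleHttp\Promise\settle($promises)->wait();
foreach ($outcomes as $filePath => $outcome) {
    if ($outcome['state'] === 'rejected') {
        echo "Upload failed for {$filePath}: " . $outcome['reason']->getMessage() . "\n";
    }
}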
I am developing a website that will have an audio-to-text page. I am trying to use the Google Speech API, but it seems to load indefinitely and then gives me a timeout. The Google console shows that the request has been made, so I guess the problem comes from my rendering (I am developing with Symfony).
Here's my function:
public function transcribeAction($audioFile = 'C:\Users\Poste3\Downloads\rec.flac', $languageCode = 'fr-FR', $options = ['sampleRateHertz' => 16000, 'speechContexts' => ['phrases' => ['The Google Cloud Platform', 'Speech API']]])
{
    // Create the speech client
    $speech = new SpeechClient([
        'keyFilePath' => 'C:\Users\Poste3\Downloads\Speech-74da45e82b8e.json',
        'languageCode' => $languageCode,
    ]);

    // Make the API call
    $results = $speech->recognize(
        fopen($audioFile, 'r'),
        $options
    );

    // Print the results
    foreach ($results as $result) {
        $alternative = $result->alternatives()[0];
        printf('Transcript: %s' . PHP_EOL, $alternative['transcript']);
        printf('Confidence: %s' . PHP_EOL, $alternative['confidence']);
    }

    return $this->render('OCPlatformBundle:Advert:speech.html.twig');
}
And here's the call to the function:
{{ render(controller('OCPlatformBundle:Advert:transcribe')) }}
First of all, you should dump the response you are getting from the Speech API.
Possible problems here:
The key is not correctly configured and has no permission to perform this operation.
Your file is more than 1 minute long; in that case Google Speech requires you to first upload the .flac file to Google Cloud Storage and use longrunningrecognize.
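As a rough, untested sketch of the second case, using the same legacy google/cloud SpeechClient the question already uses (the bucket name, object name, and key file path are placeholders, and beginRecognizeOperation() is the legacy client's long-running API as I understand it):

use Google\Cloud\Speech\SpeechClient;
use Google\Cloud\Storage\StorageClient;

$storage = new StorageClient(['keyFilePath' => 'path/to/key.json']);
$speech  = new SpeechClient([
    'keyFilePath'  => 'path/to/key.json',
    'languageCode' => 'fr-FR',
]);

// 1. Upload the audio to a Cloud Storage bucket (required for audio over ~1 minute).
$bucket = $storage->bucket('my-audio-bucket');
$object = $bucket->upload(fopen('C:\Users\Poste3\Downloads\rec.flac', 'r'), ['name' => 'rec.flac']);

// 2. Start a long-running recognition job instead of the synchronous recognize().
$operation = $speech->beginRecognizeOperation($object, ['sampleRateHertz' => 16000]);

// 3. Poll until the job finishes, then read the results.
while (!$operation->isComplete()) {
    sleep(1);
    $operation->reload();
}
print_r($operation->results());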
I can't work out how to make this upload use 'reduced redundancy' storage.
I've added it in there twice, but it doesn't do anything; I think the way I have applied it is useless.
I think I need to use the line below, but it seems I would need to rebuild this?
setOption('StorageClass', 'REDUCED_REDUNDANCY')
require_once __DIR__ . '/vendor/autoload.php';

$options = [
    'region' => $region,
    'credentials' => [
        'key' => $accessKeyId,
        'secret' => $secretKey
    ],
    'version' => '2006-03-01',
    'signature_version' => 'v4',
    'StorageClass' => 'REDUCED_REDUNDANCY',
];

$s3Client = new \Aws\S3\S3Client($options);

$uploader = new \Aws\S3\MultipartUploader($s3Client, $filename_dir, [
    'bucket' => $bucket,
    'key' => $filename,
    'StorageClass' => 'REDUCED_REDUNDANCY',
]);

try {
    $result = $uploader->upload();
    echo "Upload complete: {$result['ObjectURL']}\n";
} catch (\Aws\Exception\MultipartUploadException $e) {
    echo $e->getMessage() . "\n";
}
Reduced Redundancy Storage used to be about 20% lower cost, in exchange for only storing 2 copies of the data instead of 3 copies (1 redundant copy instead of 2 redundant copies).
However, with the December 2016 pricing changes to Amazon S3, it is no longer beneficial to use Reduced Redundancy Storage.
Using pricing from US Regions:
Reduced Redundancy Storage = 2.4c/GB
Standard storage = 2.3c/GB
Standard-Infrequent Access storage = 1.25c/GB + 1c/GB retrievals
Therefore, RRS is now more expensive than Standard storage. It is now cheaper to choose Standard or Standard-Infrequent Access.
Setting "StorageClass" like this won't work.
$s3Client = new \Aws\S3\S3Client($options);
Because the StorageClass is only set when the object is uploaded, you cannot default all of your requests to a specific configuration when initializing the SDK. Each individual PUT request must have its own options specified.
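For what it's worth, if you stay on the SDK v3 MultipartUploader from your snippet, one way to pass a per-upload storage class is the before_initiate callback, which lets you add parameters to the CreateMultipartUpload request. This is an untested sketch using your existing variables:

$uploader = new \Aws\S3\MultipartUploader($s3Client, $filename_dir, [
    'bucket' => $bucket,
    'key'    => $filename,
    'before_initiate' => function (\Aws\CommandInterface $command) {
        // The storage class belongs to the object being created,
        // so it is set on the upload request rather than on the client.
        $command['StorageClass'] = 'REDUCED_REDUNDANCY';
    },
]);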
To use the "SetOption" line you mentioned, you may need to update your code to follow the following example found in the AWS PHP SDK Documentation.
Using the AWS PHP SDK for Multipart Upload (High-Level API) Documentation
The following PHP code sample demonstrates how to upload a file using the high-level UploadBuilder object.
<?php

// Include the AWS SDK using the Composer autoloader.
require 'vendor/autoload.php';

use Aws\Common\Exception\MultipartUploadException;
use Aws\S3\Model\MultipartUpload\UploadBuilder;
use Aws\S3\S3Client;

$bucket = '*** Your Bucket Name ***';
$keyname = '*** Your Object Key ***';

// Instantiate the client.
$s3 = S3Client::factory();

// Prepare the upload parameters.
$uploader = UploadBuilder::newInstance()
    ->setClient($s3)
    ->setSource('/path/to/large/file.mov')
    ->setBucket($bucket)
    ->setKey($keyname)
    ->setMinPartSize(25 * 1024 * 1024)
    ->setOption('Metadata', array(
        'param1' => 'value1',
        'param2' => 'value2'
    ))
    ->setOption('ACL', 'public-read')
    ->setConcurrency(3)
    ->build();

// Perform the upload. Abort the upload if something goes wrong.
try {
    $uploader->upload();
    echo "Upload complete.\n";
} catch (MultipartUploadException $e) {
    $uploader->abort();
    echo "Upload failed.\n";
    echo $e->getMessage() . "\n";
}
So in this case you need to add 'StorageClass' as follows. The position isn't important; what matters is using setOption() to set it:
    ->setOption('ACL', 'public-read')
    ->setOption('StorageClass', 'REDUCED_REDUNDANCY')
    ->setConcurrency(3)
    ->build();
I'm trying to delete a folder created in an Amazon S3 bucket, and it gives the error:
An unexpected error has occurred. Please try again.
How can I delete the folder?
First you need to understand that there is nothing like a folder in an Amazon S3 bucket; what you see as a folder is just an object whose key ends with a slash:

one/          // this is the "folder" you see, but it is a separate object
one/abc.png
one/tow/
one/tow/a.zip

To delete the folder you need to delete every object whose key starts with one/, and you can do that with the deleteMatchingObjects() function:
$s3 = new Aws\S3\S3Client([
    'version' => 'latest',
    'region' => 'us-west-2',
    'credentials' => [
        'key' => $credentials['key'],
        'secret' => $credentials['secret'],
    ],
]);

/* this is what you need */
$s3->deleteMatchingObjects($bucket, $obj);
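For the key layout above, the call would look something like this (the bucket name is a placeholder):

// Deletes every object whose key begins with the one/ prefix,
// which removes the "folder" and everything inside it.
$s3->deleteMatchingObjects('my-bucket', 'one/');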
I have used the PHP SDK v3.
I am using the code below in the s3.php class; check it out.
/**
 * Delete an empty bucket
 *
 * @param string $bucket Bucket name
 * @return boolean
 */
public function deleteBucket($bucket = '') {
    $rest = new S3Request('DELETE', $bucket);
    $rest = $rest->getResponse();
    if ($rest->error === false && $rest->code !== 204)
        $rest->error = array('code' => $rest->code, 'message' => 'Unexpected HTTP status');
    if ($rest->error !== false) {
        trigger_error(sprintf("S3::deleteBucket({$bucket}): [%s] %s",
            $rest->error['code'], $rest->error['message']), E_USER_WARNING);
        return false;
    }
    return true;
}
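Note that deleteBucket() only removes an empty bucket. If you are on the classic s3.php class rather than the SDK and want to remove a "folder", a rough sketch (assuming the class's usual getBucket() and deleteObject() methods, with the bucket name and one/ prefix as placeholders) would be:

// List every object under the one/ prefix and delete each key in turn.
$objects = $s3->getBucket('my-bucket', 'one/');
foreach ($objects as $key => $meta) {
    $s3->deleteObject('my-bucket', $key);
}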
I have been testing some PHP scripts that use the aws-sdk-php to upload files to S3 storage. These scripts seem to work nicely when executed directly from the browser, but fail when used through an API class in Luracast Restler 3.0.
A sample script that uploads a dummy file looks like the following:
<?php

require_once (dirname(__FILE__) . "/../lib/aws/aws.phar");

use Aws\Common\Aws;
use Aws\S3\Enum\CannedAcl;
use Aws\S3\Exception\S3Exception;
use Aws\Common\Enum\Region;

function test() {
    // Instantiate an S3 client
    $s3 = Aws::factory(array(
        'key' => 'key',
        'secret' => 'secret',
        'region' => Region::EU_WEST_1
    ))->get('s3');

    try {
        $s3->putObject(array(
            'Bucket' => 'my-bucket',
            'Key' => '/test/test.txt',
            'Body' => 'example file uploaded from php!',
            'ACL' => CannedAcl::PUBLIC_READ
        ));
    } catch (S3Exception $e) {
        echo "There was an error uploading the file.\n";
    }
}
This script is placed in the folder /util/test.php, while the aws-php-sdk is at /lib/aws/aws.phar
To check that this test method is written correctly, I have created another PHP script at /test/upload.php with the following code:
<?php
require_once (dirname(__FILE__) . '/../util/test.php');
test();
So I can open http://mydomain.com/test/upload.php in the browser, and everything works as expected: the file is uploaded to S3 storage.
However when I call the function test() from an API class with the Restler framework, I have an error that says that Aws cannot be found:
Fatal error: Class 'Aws\Common\Aws' not found in /var/app/current/util/test.php on line 12
However, the code is exactly the same code that works perfectly when called from upload.php. I have been wasting hours trying to figure out what is happening here, but I cannot reach any conclusion. It is as if the aws-php-sdk autoloader does not work properly under some circumstances.
Any hints?
This held me back for quite some time.
The issue is caused by Restler's autoloader, which calls spl_autoload_unregister on other registered autoloaders,
as described here:
https://github.com/Luracast/Restler/issues/72
One way to get around the problem is to comment out the relevant lines in vendor/Luracast/Restler/AutoLoader.php:
public static function thereCanBeOnlyOne() {
    if (static::$perfectLoaders === spl_autoload_functions())
        return static::$instance;

    /*
    if (false !== $loaders = spl_autoload_functions())
        if (0 < $count = count($loaders))
            for ($i = 0, static::$rogueLoaders += $loaders;
                $i < $count && false != ($loader = $loaders[$i]);
                $i++)
                if ($loader !== static::$perfectLoaders[0])
                    spl_autoload_unregister($loader);
    */

    return static::$instance;
}