I have used Composer to install the AWS SDK for PHP per the getting started instructions found here. I installed it in my html root. I created an IAM user called "ImageUser" with the sole permission of "AmazonS3FullAccess" and captured its keys.
Per the instructions here, I created a file called "credentials" as follows:
[default]
aws_access_key_id = YOUR_AWS_ACCESS_KEY_ID
aws_secret_access_key = YOUR_AWS_SECRET_ACCESS_KEY
Yes, I replaced those upper case words with the appropriate keys. The file resides in the hidden subdirectory ".aws" in the html root. The file's UNIX permissions are 664.
I created this simple file (called "test.php" in a subdirectory of my html root called "t") to test uploading a file to S3:
<?php
// Include the AWS SDK using the Composer autoloader.
require '../vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;

$bucket = 'testbucket';
$keyname = 'test.txt';

// Instantiate the client.
$s3 = S3Client::factory();

try {
    // Upload data.
    $result = $s3->putObject(array(
        'Bucket' => $bucket,
        'Key'    => $keyname,
        'Body'   => 'Hello, world!',
        'ACL'    => 'public-read'
    ));
    // Print the URL to the object.
    echo $result['ObjectURL'] . "\n";
} catch (S3Exception $e) {
    echo $e->getMessage() . "\n";
}
?>
Unfortunately, it throws an HTTP 500 error at this line:
$s3 = S3Client::factory();
Yes, the autoloader directory is correct. Yes, the bucket exists. No, the file "test.txt" does not already exist.
According to the page noted above, "If no credentials or profiles were explicitly provided to the SDK and no credentials were defined in environment variables, but a credentials file is defined, the SDK will use the 'default' profile." Even so, I also tried explicitly specifying the profile "default" in the factory statement only to get the same results.
What am I doing wrong?
tl;dr: You have a mix of AWS SDK versions.
Per the link you provided in your message (link), you have installed the PHP SDK v3.
Per your examples, you are using PHP SDK v2 syntax.
V3 does not know about the S3Client::factory method, which is why it throws the error. You can keep reading the guide you linked to check the usage: https://docs.aws.amazon.com/aws-sdk-php/v3/guide/getting-started/basic-usage.html. There are a few ways to get the S3 client:
Create a client - simple method:
<?php
// Include the SDK using the Composer autoloader
require 'vendor/autoload.php';

$s3 = new Aws\S3\S3Client([
    'version' => 'latest',
    'region'  => 'us-east-1'
]);
Create a client - using the Sdk class:
// Use the us-west-2 region and latest version of each client.
$sharedConfig = [
    'region'  => 'us-west-2',
    'version' => 'latest'
];

// Create an SDK class used to share configuration across clients.
$sdk = new Aws\Sdk($sharedConfig);

// Create an Amazon S3 client using the shared configuration data.
$s3 = $sdk->createS3();
Once you have your client, you can use your existing code (yes, that part is already v3-compatible) to put a new object on S3, so you'll end up with something like:
<?php
// Include the AWS SDK using the Composer autoloader.
require '../vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;

$bucket = 'testbucket';
$keyname = 'test.txt';

// Instantiate the client.
-- select method 1 or 2 --

try {
    // Upload data.
    $result = $s3->putObject(array(
        'Bucket' => $bucket,
        'Key'    => $keyname,
        'Body'   => 'Hello, world!',
        'ACL'    => 'public-read'
    ));
    // Print the URL to the object.
    echo $result['ObjectURL'] . "\n";
} catch (S3Exception $e) {
    echo $e->getMessage() . "\n";
}
Related
I'm using the AWS S3 PHP API to create a bucket as shown below, but it returns this error message no matter what I try:
The requested bucket name is not available. The bucket namespace is
shared by all users of the system. Please select a different name and
try again.
When I try to create the same bucket (the one I tried via the API) in the AWS console, it works.
Here is my sample code:
function createBucket($s3Client, $bucketName)
{
    try {
        $result = $s3Client->createBucket([
            'Bucket' => $bucketName,
        ]);
        return 'The bucket\'s location is: ' .
            $result['Location'] . '. ' .
            'The bucket\'s effective URI is: ' .
            $result['@metadata']['effectiveUri'];
    } catch (AwsException $e) {
        return 'Error: ' . $e->getAwsErrorMessage();
    }
}

function createTheBucket($name)
{
    define('AWS_KEY', 'AWS_KEY');
    define('AWS_SECRET_KEY', 'AWS_SECRET_KEY');
    define('REGION', 'eu-west-1');

    // Establish connection with DreamObjects with an S3 client.
    $s3Client = new Aws\S3\S3Client([
        'version'     => '2006-03-01',
        'region'      => REGION,
        'credentials' => [
            'key'    => AWS_KEY,
            'secret' => AWS_SECRET_KEY,
        ]
    ]);

    echo createBucket($s3Client, $name);
}
The S3 bucket you are trying to create has already been created in the AWS namespace.
It's important to understand that S3 bucket names must be unique across the entire AWS global namespace.
Ensure your bucket name does not collide with anyone else's or one of your own.
An Amazon S3 bucket name is globally unique, and the namespace is shared by all AWS accounts. This means that after a bucket is created, the name of that bucket cannot be used by another AWS account in any AWS Region until the bucket is deleted. You should not depend on specific bucket naming conventions for availability or security verification purposes.
If the S3 bucket name is free, then it's possible that either a hard-coded value has overridden the $bucketName variable, or that code logic (such as looping or formatting parameters) is trying to recreate a bucket that already exists.
The best way to discover this is to validate the $bucketName value throughout your script's execution.
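To rule out a naming-rule problem before assuming a collision, you could log and validate the $bucketName value right before the createBucket call. A minimal sketch, with a hypothetical isValidBucketName helper that covers only the basic rules (3-63 characters, lowercase letters, digits, dots, and hyphens, starting and ending with a letter or digit), not every edge case:

```php
<?php
// Hypothetical helper: checks a proposed bucket name against the basic
// S3 naming rules. Passing these checks does NOT mean the name is free;
// another AWS account may already own it.
function isValidBucketName(string $name): bool
{
    $len = strlen($name);
    if ($len < 3 || $len > 63) {
        return false;
    }
    // Lowercase letters, digits, dots, and hyphens; must start and end
    // with a letter or digit.
    return preg_match('/^[a-z0-9][a-z0-9.-]*[a-z0-9]$/', $name) === 1;
}
```

If the name passes validation, a v3 S3Client's doesBucketExist($bucketName) will tell you whether the name is already taken, possibly by another account.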
I am trying to copy a folder to another folder in AWS S3 as shown below:
$s3 = S3Client::factory(
    array(
        'credentials' => array(
            'key'    => 'testbucket',
            'secret' => BUCKET_SECRET //Global constant
        ),
        'version' => BUCKET_VERSION, //Global constant
        'region'  => BUCKET_REGION //Global constant
    )
);

$sourceBucket = 'testbucket';
$sourceKeyname = 'admin/collections/Athena'; // Object key
$targetBucket = 'testbucket';
$targetKeyname = 'admin/collections/Athena-New';

// Copy an object.
$s3->copyObject(array(
    'Bucket'     => $targetBucket,
    'Key'        => $targetKeyname,
    'CopySource' => "{$sourceBucket}/{$sourceKeyname}",
));
It is throwing this error:
Fatal error: Uncaught exception 'Aws\S3\Exception\S3Exception' with
message 'Error executing "CopyObject" on
"https://testbucket.s3.us-east-2.amazonaws.com/admin/collections/Athena-New";
AWS HTTP error: Client error: PUT
https://testbucket.s3.us-east-2.amazonaws.com/admin/collections/Athena-New
resulted in a 404 Not Found response:
<Code>NoSuchKey</Code><Message>The specified key does not
exist.</Message><Key>admin/collections/Athena</Key>
<RequestId>29EA131A5AD9CB83</RequestId><HostId>6OjDNLgbdLPLMd0t7MuNi4JH6AU5pKfRmhCcWigGAaTuRlqoX8X5aMicWTui56rTH1BLRpJJtmc=</HostId>'
I can't figure out why it is building the bucket URL as
https://testbucket.s3.us-east-2.amazonaws.com/admin/collections/Athena-New
while the correct AWS bucket URL is
https://s3.us-east-2.amazonaws.com/testbucket/admin/collections/Athena-New
Why is it putting the bucket name before "s3" in the URL?
In simple words, I wanted to copy the content of
https://s3.us-east-2.amazonaws.com/testbucket/admin/collections/Athena
to
https://s3.us-east-2.amazonaws.com/testbucket/admin/collections/Athena-New
It is not possible to "copy a folder" in Amazon S3 because folders do not actually exist.
Instead, the full path of an object is stored in the object's Key (filename).
So, an object might be called:
admin/collections/Athena/foo.txt
If you wish to copy all objects from one "folder" to another "folder", then you will need to:
Obtain a listing of the bucket for the given Prefix (effectively, full path to the folder)
Loop through each object returned, and copy the objects one-at-a-time to the new name (which effectively puts it in a new folder)
So, it would copy admin/collections/Athena/foo.txt to admin/collections/Athena-New/foo.txt
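The list-then-copy steps above can be sketched roughly as follows, assuming a configured v3 S3Client in $s3; copyFolder and rewriteKey are hypothetical helper names, not SDK methods:

```php
<?php
// The key-rewriting step is a plain string operation, pulled out so it
// is easy to verify on its own: swap the source prefix for the target
// prefix, keeping the rest of the key.
function rewriteKey(string $key, string $sourcePrefix, string $targetPrefix): string
{
    return $targetPrefix . substr($key, strlen($sourcePrefix));
}

// Hypothetical driver: list every object under the source prefix and
// copy each one, one at a time, under the new prefix.
function copyFolder($s3, string $bucket, string $sourcePrefix, string $targetPrefix): void
{
    $paginator = $s3->getPaginator('ListObjectsV2', [
        'Bucket' => $bucket,
        'Prefix' => $sourcePrefix,
    ]);
    foreach ($paginator as $page) {
        foreach ($page['Contents'] ?? [] as $object) {
            $s3->copyObject([
                'Bucket'     => $bucket,
                'Key'        => rewriteKey($object['Key'], $sourcePrefix, $targetPrefix),
                'CopySource' => $bucket . '/' . $object['Key'],
            ]);
        }
    }
}
```

A call like copyFolder($s3, 'testbucket', 'admin/collections/Athena/', 'admin/collections/Athena-New/') would then copy each object under the old prefix to the new one.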
I have been given a task to maintain a CakePHP app. There is an issue with uploading files to the server. All this time we were using an AWS S3 bucket; now we want to upload to our own file server. The coding part was done by my ex-colleague.
I am trying something simple: I want to send the file name from my controller to a component called S3.
The existing code looks like this:
$this->S3->uploadFile($this->request->data('upload.tmp_name'),$fileKey);
In the S3Component file, I have written the following:
public function uploadFile($filePath, $fileKey, $metaData = [])
{
    $fileUtility = new FileUtility(1024 * 1024, array("ppt", "pdf"));
    return $fileUtility->uploadFile($_FILES[$fileKey], $filePath);
}
Now, how do I pass the values to S3->uploadFile correctly so that they reach the uploadFile function in the S3Component file?
Thanks!
If you can be bothered refactoring, you could try Flysystem, a file management package installable through Composer: https://flysystem.thephpleague.com/
The nice thing about it is that you just swap an S3 adapter for a local adapter, FTP adapter, etc. (there are a few!), and none of your code needs to change.
For instance:
use Aws\S3\S3Client;
use League\Flysystem\AwsS3v3\AwsS3Adapter;
use League\Flysystem\Filesystem;
$client = S3Client::factory([
    'credentials' => [
        'key'    => 'your-key',
        'secret' => 'your-secret',
    ],
    'region'  => 'your-region',
    'version' => 'latest|version',
]);
$adapter = new AwsS3Adapter($client, 'your-bucket-name', 'optional/path/prefix');
And to create a Local Adapter:
use League\Flysystem\Filesystem;
use League\Flysystem\Adapter\Local;
$adapter = new Local(__DIR__.'/path/to/root');
$filesystem = new Filesystem($adapter);
Then, you can make identical calls to $filesystem like these:
$filesystem->write('path/to/file.txt', 'contents');
$contents = $filesystem->read('path/to/file.txt');
$filesystem->delete('path/to/file.txt');
$filesystem->copy('filename.txt', 'duplicate.txt');
Check out the API here: https://flysystem.thephpleague.com/api/
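The idea behind this can be sketched in plain PHP (hypothetical interface and class names; Flysystem's real API is much richer): the calling code depends only on a small storage interface, so swapping S3-backed storage for local disk is just a different constructor argument.

```php
<?php
// Hypothetical minimal version of the adapter pattern Flysystem uses.
interface StorageAdapter
{
    public function write(string $path, string $contents): void;
    public function read(string $path): string;
}

// A local-disk adapter rooted at a directory.
class LocalAdapter implements StorageAdapter
{
    private $root;

    public function __construct(string $root)
    {
        $this->root = $root;
    }

    public function write(string $path, string $contents): void
    {
        $full = $this->root . '/' . $path;
        @mkdir(dirname($full), 0777, true); // create parent dirs as needed
        file_put_contents($full, $contents);
    }

    public function read(string $path): string
    {
        return file_get_contents($this->root . '/' . $path);
    }
}

// Calling code (e.g. an upload helper in the component) only sees the
// interface, so an S3-backed adapter could be dropped in without
// touching it.
function uploadReport(StorageAdapter $storage, string $contents): void
{
    $storage->write('reports/report.txt', $contents);
}
```

In the CakePHP component, the adapter could be chosen in one place (configuration) while every upload call stays the same.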
I installed AWS PHP SDK and am trying to use SES. My problem is that it's (apparently) trying to read ~/.aws/credentials no matter what I do. I currently have this code:
$S3_AK = getenv('S3_AK');
$S3_PK = getenv('S3_PK');

$profile = 'default';
$path = '/home/franco/public/site/default.ini';
$provider = CredentialProvider::ini($profile, $path);
$provider = CredentialProvider::memoize($provider);

$client = SesClient::factory(array(
    'profile' => 'default',
    'region'  => 'us-east-1',
    'version' => "2010-12-01",
    'credentials' => [
        'key'    => $S3_AK,
        'secret' => $S3_PK,
    ]
));
And I am still getting a "Cannot read credentials from ~/.aws/credentials" error (after quite a while).
I tried 'credentials' => $provider, of course; that was the idea, but as it wasn't working I reverted to hardcoded credentials. I've dumped $S3_AK and $S3_PK and they're fine; I'm actually using them correctly for S3, but there I have Zend's wrapper. I've also tried ~/.aws/credentials (no ".ini"), with the same result. Both files have 777 permissions.
Curious detail: I had to set the memory limit to -1 so it would be able to var_dump the exception. The HTML for the exception is around 200 MB.
I'd prefer to use the environment variables, although the credentials file is fine. I just don't understand why it appears to be trying to read the file even though I've hardcoded the credentials.
EDIT: A friend showed me this. I removed the profile and also modified the try/catch, and noticed the client seems to be created properly; the error actually comes from trying to send an email.
The trick is to just remove 'profile' => 'default' from the factory params; if it is defined, you can't use a custom credentials file or environment variables. It's not documented, but it just works.
I'm using SNS and SDK v3.
<?php
use Aws\Credentials\CredentialProvider;

$profile = 'sns-reminders';
$path = '../private/credentials';
$provider = CredentialProvider::ini($profile, $path);
$provider = CredentialProvider::memoize($provider);

$sdk = new Aws\Sdk(['credentials' => $provider]);
$sns = $sdk->createSns([
    // 'profile' => $profile,
    'region'  => 'us-east-1',
    'version' => 'latest',
]);
This solution will probably only work if you're using version 3 of the SDK. I use something similar to this:
$provider = CredentialProvider::memoize(CredentialProvider::ini($profile, $path));

$client = new SesClient([
    'version'     => 'latest',
    'region'      => 'us-east-1',
    'credentials' => $provider
]);
I use this for S3Client, DynamoDbClient, and a few other clients, so I am assuming that the SesClient constructor supports the same arguments.
OK, I managed to fix it.
I couldn't read the credentials file, but that wasn't exactly the problem.
What was happening was that the actual client was being created successfully, but the try/catch also wrapped the sendEmail call, and that was what was failing.
About creating the client with explicit credentials: if you specify a region, it will still try to read a credentials file.
About SendEmail: this is the syntax that worked for me; I'd found another one on the AWS docs site as well, and that one failed. It must have been for an older SDK.
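For reference, a v3 SendEmail call generally takes a parameter array shaped like the one below. The addresses and the buildSendEmailParams helper are placeholders of mine, not the poster's exact code:

```php
<?php
// Hypothetical helper that builds the parameter array for a v3 SES
// SendEmail call (API version 2010-12-01). Addresses are placeholders.
function buildSendEmailParams(string $from, string $to, string $subject, string $text): array
{
    return [
        'Source'      => $from,
        'Destination' => ['ToAddresses' => [$to]],
        'Message'     => [
            'Subject' => ['Data' => $subject, 'Charset' => 'UTF-8'],
            'Body'    => ['Text' => ['Data' => $text, 'Charset' => 'UTF-8']],
        ],
    ];
}

// With a configured SesClient this would be used as:
//   $result = $ses->sendEmail(buildSendEmailParams(...));
//   echo $result['MessageId'];
```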
I'm using the ZIP install method with aws-sdk-php, and I get an error in my PHP app when calling the Ec2Client factory (or the preceding use clauses; I'm not sure...).
require LIBS_PATH . 'aws-sdk-php/aws-autoloader.php';

use Aws\Common\Aws;
use Aws\Common\Enum\Region;
use Aws\Ec2\Ec2Client;

public function createAWSApp($app_id)
{
    // Get the user's region
    $this->app->aws_region = $this->getUserAWSRegion();

    // Set up the EC2 client
    $this->ec2client = Ec2Client::factory(array(
        //'profile' => '<profile in your aws credentials file>',
        'key'    => AWS_ACCESS_KEY_ID,
        'secret' => AWS_SECRET_ACCESS_KEY,
        'region' => $this->app->aws_region
    ));
...
The app's framework reports this error:
The file Aws\Ec2\Ec2Client.php is missing in the libs folder.
If I remove the use clauses and explicitly name the files, such as with a require_once:
require LIBS_PATH . 'aws-sdk-php/aws-autoloader.php';
use Aws\Common\Aws;
use Aws\Common\Enum\Region;
//use Aws\Ec2\Ec2Client;
require_once LIBS_PATH . 'aws-sdk-php/Aws/Ec2/Ec2client.php';
public function cr...
I get a similar message, but it looks like PHP's default cannot-find-file error:
Warning: require_once(libs/aws-sdk-php/Aws/Ec2/Ec2client.php): failed to open stream: No such file or directory
LIBS_PATH is correct and works for the other libraries I'm using; the aws-sdk-php files are definitely present, etc.
Please help...