PHP Amazon SDK, S3 Bucket Access Denied

I'm trying to use the PHP AWS SDK ("aws/aws-sdk-php": "^3.19") for the first time to access S3.
I created a bucket: 'myfirstbucket-jeremyc'
I created this policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::myfirstbucket-jeremyc/*"
            ]
        }
    ]
}
I applied the policy to a group, then created a user 's3-myfirstbucket-jeremyc' in that group.
My PHP code is:
<?php

use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;

error_reporting(E_ALL);

require(__DIR__ . '/vendor/autoload.php');

$s3Client = S3Client::factory([
    'credentials' => [
        'key'    => $_SERVER['AWS_S3_CLIENT_KEY'],
        'secret' => $_SERVER['AWS_S3_CLIENT_SECRET']
    ],
    'region'  => 'eu-west-1',
    'version' => 'latest',
    'scheme'  => 'http'
]);

$result = $s3Client->putObject(array(
    'Bucket' => 'myfirstbucket-jeremyc',
    'Key'    => 'text.txt',
    'Body'   => 'Hello, world!',
    'ACL'    => 'public-read'
));
But I get this error:
Error executing "PutObject" on
"http://s3-eu-west-1.amazonaws.com/myfirstbucket-jeremyc/text.txt";
AWS HTTP error: Client error: PUT
http://s3-eu-west-1.amazonaws.com/myfirstbucket-jeremyc/text.txt
resulted in a 403 Forbidden response
Do you know where I'm wrong?
Thanks in advance!

You're setting an ACL on the new object, but you haven't allowed s3:PutObjectAcl.
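For reference, a sketch of what the corrected policy might look like (the same statement as above, with s3:PutObjectAcl added to the action list):

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:PutObjectAcl",
                "s3:GetObject",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::myfirstbucket-jeremyc/*"
            ]
        }
    ]
}
```

Alternatively, dropping the 'ACL' => 'public-read' line from the putObject call avoids the need for the extra permission.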

Related

Amazon Athena "Error opening Hive split" Access Denied Error

I am trying to run a query in Amazon Athena from PHP code:
$client = Aws\Athena\AthenaClient::factory(array(
    'version' => 'latest',
    'region' => 'us-east-1',
    'credentials' => array(
        'key' => '<KEY>',
        'secret' => '<SECRET>'
    )
));

$result1 = $client->StartQueryExecution(array(
    'QueryExecutionContext' => array('Database' => 'default'),
    'QueryString' => "select * from logs where date between TIMESTAMP '2020-02-27 00:00:00' and TIMESTAMP '2020-02-27 23:59:59' limit 100",
    'ResultConfiguration' => array(
        'EncryptionConfiguration' => array('EncryptionOption' => 'SSE_S3'),
        'OutputLocation' => 's3://bucket_name/temp'
    )
));
and got this error:
Error opening Hive split s3:///data-mining/logs/2019/07/12/07/Log-6-2019-07-12-07-35-01-a1c6d0a9-27e5-458b-b72a-8942a6d2b261.parquet (offset=0, length=756977): com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 4A00D465F919D8AB; S3 Extended Request ID: ...), S3 Extended Request ID: ... (Path: s3://<bucket_name>/data-mining/logs/2019/07/12/07/Log-6-2019-07-12-07-35-01-a1c6d0a9-27e5-458b-b72a-8942a6d2b261.parquet
I can confirm the following:
- The same query runs without problems from the Athena console (as the root user)
- I execute the query as a user that has the AmazonAthenaFullAccess and AmazonS3FullAccess permissions
Make sure you are using an IAM policy associated with the user performing the query that allows operations on the KMS key associated with the parquet files. Even though a bucket may be using SSE_S3, the files may already have been encrypted with KMS instead.
A policy like so:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "kms:Decrypt",
                "kms:DescribeKey",
                "kms:GenerateDataKey"
            ],
            "Resource": [
                "arn:aws:kms:<region>:<account>:key/<key-id>"
            ]
        }
    ]
}

Set the CORS configuration using Google Cloud Storage PHP API not working

I'm setting up the CORS configuration of a Google Cloud Storage bucket using the PHP API, but it doesn't seem to work.
I read the documentation at: https://googleapis.github.io/google-cloud-php/#/docs/google-cloud/v0.96.0/storage/bucket
Here's my Laravel source code:
use Google\Cloud\Core\ServiceBuilder;

...

$projectId = 'myProjectId';
$bucketName = 'myBucketName';

$gcloud = new ServiceBuilder([
    'keyFilePath' => 'resources/google-credentials.json',
    'projectId' => $projectId
]);
$storage = $gcloud->storage();
$bucket = $storage->bucket($bucketName);

// change bucket configuration
$result = $bucket->update([
    'cors' => [
        'maxAgeSeconds' => 3600,
        'method' => [
            "GET", "HEAD"
        ],
        "origin" => [
            "*"
        ],
        "responseHeader" => [
            "Content-Type"
        ]
    ]
]);

// prints nothing and the bucket doesn't change
dd($bucket->info()['cors']);
After executing this code, the bucket's CORS configuration hasn't changed.
(My boss doesn't want me to use the gsutil shell command for this.)
You're very close! CORS accepts a list of rules, so you'll just need to make a slight modification:
$result = $bucket->update([
    'cors' => [
        [
            'maxAgeSeconds' => 3600,
            'method' => [
                "GET", "HEAD"
            ],
            "origin" => [
                "*"
            ],
            "responseHeader" => [
                "Content-Type"
            ]
        ]
    ]
]);
Let me know if it helps :).
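To see why the shape matters: the API expects 'cors' to be a list of rule arrays, not a single bare rule. A plain-PHP sketch (the isValidCorsShape helper is hypothetical, not part of the Google Cloud library) illustrates the difference between the two shapes:

```php
<?php

// Hypothetical helper: a valid 'cors' value is a list (integer-indexed array)
// whose elements are rule arrays with string keys like 'origin' or 'method'.
function isValidCorsShape(array $cors): bool
{
    foreach ($cors as $index => $rule) {
        // Outer keys must be integers (a list), and each entry must be an array...
        if (!is_int($index) || !is_array($rule)) {
            return false;
        }
        // ...whose own keys are strings (rule fields).
        foreach (array_keys($rule) as $key) {
            if (!is_string($key)) {
                return false;
            }
        }
    }
    return true;
}

// The original (failing) shape: a bare rule where a list of rules is expected.
$broken = ['maxAgeSeconds' => 3600, 'method' => ['GET', 'HEAD']];

// The corrected shape: the same rule wrapped in a list.
$fixed = [['maxAgeSeconds' => 3600, 'method' => ['GET', 'HEAD']]];
```

Wrapping the rule in an outer array is the entire fix in the answer above.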
The only thing I needed to change was the disk configuration in Laravel, using this code in config/filesystems.php when adding a disk for Google:
'google' => [
    'driver' => 's3',
    'key' => 'xxx',
    'secret' => 'xxx',
    'bucket' => 'qrnotesfiles',
    'base_url' => 'https://storage.googleapis.com'
]
Here is a code example. First, get the file contents from the request:
$file = $request->file('avatar');
Second, save it into storage:
Storage::disk('google')->put('avatars/', $file);

Upload to s3 via PHP fails access denied

I feel a bit stupid asking this, but is there anything special required to upload something to S3 via the current PHP SDK? I can upload via the CLI with the same credentials, but when I try the SDK it fails.
Here is the code:
<?php

require "awssdk_v3/aws-autoloader.php";

use Aws\S3\S3Client;

function s3_upload($file, $name) {
    $s3 = S3Client::factory(
        array(
            'key' => getenv('AWS_ACCESS_KEY_ID'),
            'secret' => getenv('AWS_SECRET_ACCESS_KEY'),
            'version' => "2006-03-01",
            'region' => getenv('AWS_REGION')
        )
    );
    $result = $s3->putObject(
        array(
            'Bucket' => getenv('AWS_BUCKET'),
            'Key' => $name,
            'SourceFile' => $file,
            'ContentType' => mime_content_type($file),
            'ACL' => 'public-read'
        )
    );
    return true;
}
I call it like this:
s3_upload($_FILES['avatarfile']['tmp_name'], "avatar_2.jpg");
The user I use has this policy attached:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1480066717000",
            "Effect": "Allow",
            "Action": [
                "s3:DeleteObject",
                "s3:GetObject",
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::my-bucket/*"
            ]
        },
        {
            "Sid": "Stmt1480066765000",
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::my-bucket"
            ]
        }
    ]
}
As mentioned, I was able to upload a file from the CLI using that user's credentials. The region is Frankfurt, so I specified eu-central-1; is that correct?
The error I get starts like this:
Fatal error: Uncaught exception 'Aws\S3\Exception\S3Exception' with message 'Error executing "PutObject" on "https://my-bucket.s3.eu-central-1.amazonaws.com/avatar_2.jpg"; AWS HTTP error: Client error: `PUT https://my-bucket.s3.eu-central-1.amazonaws.com/avatar_2.jpg` resulted in a `403 Forbidden` response
I found the problem thanks to this answer. I was trying to set the ACL to 'public-read', but I hadn't granted myself s3:PutObjectAcl, only s3:PutObject. Changing either one fixes the problem.
Thanks anyway for the help.
In your IAM policy, also add s3:PutObjectAcl to the allowed actions if you set an ACL during s3:PutObject.
I ran into this same error message, and it turns out my S3 buckets were created in the wrong region.

ACL not applying during AWS s3 folder upload (uploadDirectory)

For some reason, public-read is not being applied when I upload a folder to an S3 bucket (i.e., the public cannot access the files).
The files upload fine, but they are all set to private. Tried everything I can think of. Feels like I'm missing something basic.
Was using this guide:
https://blogs.aws.amazon.com/php/post/Tx2W9JAA7RXVOXA/Syncing-Data-with-Amazon-S3
Here is my code:
require '../vendor/autoload.php';

use Aws\S3\S3Client;

$client = S3Client::factory(array(
    'version' => '2006-03-01',
    'region' => 'ap-southeast-2',
    'credentials' => array(
        'key' => 'MYKEY',
        'secret' => 'MYSECRET',
    )
));

$dir = 'assets';
$bucket = 'gittestbucket';
$keyPrefix = 'assets';

$options = array(
    'params' => array('ACL' => 'public-read'),
    'concurrency' => 20,
    'debug' => true
);

$UploadAWS = $client->uploadDirectory($dir, $bucket, $keyPrefix, $options);
var_dump($UploadAWS);
My IAM user policy (the user is also in a group that allows listing all buckets):
{
    "Statement": [
        {
            "Action": "s3:*",
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::gittestbucket",
                "arn:aws:s3:::gittestbucket/*"
            ]
        }
    ]
}
Any help much appreciated. Cheers
I struggled with this a while back.
Try changing your upload statement to the one below:
$UploadAWS = $client->uploadDirectory($dir, $bucket, $keyPrefix, array(
    'concurrency' => 20,
    'debug' => true,
    'before' => function (\Aws\Command $command) {
        $command['ACL'] = strpos($command['Key'], 'CONFIDENTIAL') === false
            ? 'public-read'
            : 'private';
    }
));
AWS documentation can be shocking sometimes, as it changes so much.
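The key piece of that answer is the 'before' callback, which picks an ACL per object key. The decision logic can be isolated into a plain function (aclForKey is a hypothetical name, not part of the SDK) to make the intent clear:

```php
<?php

// Mirrors the ternary in the 'before' callback above: keys whose path
// contains 'CONFIDENTIAL' stay private, everything else is made public.
function aclForKey(string $key): string
{
    return strpos($key, 'CONFIDENTIAL') === false
        ? 'public-read'
        : 'private';
}
```

Inside uploadDirectory, the callback then simply assigns $command['ACL'] = aclForKey($command['Key']) for every file it is about to upload.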

AWS S3 access denied when getting image by url

I am working on an AWS EC2 Ubuntu machine and trying to fetch an image from AWS S3, but the following error is shown every time:
<Error>
    <Code>InvalidArgument</Code>
    <Message>
        Requests specifying Server Side Encryption with AWS KMS managed keys require AWS Signature Version 4.
    </Message>
    <ArgumentName>Authorization</ArgumentName>
    <ArgumentValue>null</ArgumentValue>
    <RequestId>7C8B4BF1CE2FDC9E</RequestId>
    <HostId>
        /L5kjuOET4XFgGter2eFHX+aRSvVm/7VVmIBqQE/oMLeQZ1ditSMZuHPOlsMaKi8hYRnGilTqZY=
    </HostId>
</Error>
Here is my bucket policy:
{
    "Version": "2012-10-17",
    "Id": "Policy1441213815928",
    "Statement": [
        {
            "Sid": "Stmt1441213813464",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::mytest.sample/*"
        }
    ]
}
Here is the code:
require 'aws-autoloader.php';

$credentials = new Aws\Credentials\Credentials('key', 'key');
$bucketName = "mytest.sample";

$s3 = new Aws\S3\S3Client([
    'signature' => 'v4',
    'version' => 'latest',
    'region' => 'ap-southeast-1',
    'credentials' => $credentials,
    'http' => [
        'verify' => '/home/ubuntu/cacert.pem'
    ],
    'Statement' => [
        'Action ' => "*",
    ],
]);

$result = $s3->getObject(array(
    'Bucket' => $bucketName,
    'Key' => 'about_us.jpg',
));
HTML:
<img src="<?php echo $result['#metadata']['effectiveUri']; ?>" />
Edit for Michael - sqlbot: here I am using the default KMS key.
try {
    $result = $this->Amazon->S3->putObject(array(
        'Bucket' => 'mytest.sample',
        'ACL' => 'authenticated-read',
        'Key' => $newfilename,
        'ServerSideEncryption' => 'aws:kms',
        'SourceFile' => $filepath,
        'ContentType' => mime_content_type($filepath),
        'debug' => [
            'logfn' => function ($msg) {
                echo $msg . "\n";
            },
            'stream_size' => 0,
            'scrub_auth' => true,
            'http' => true,
        ],
    ));
} catch (S3Exception $e) {
    echo $e->getMessage() . "\n";
}
let me know if you need more.
With PHP SDK v2:
The Credentials class lives in Aws\Common\Credentials, and you create an S3Client through a factory.
Try something like this:
use Aws\S3\S3Client;
use Aws\Common\Credentials\Credentials;

$credentials = new Credentials('YOUR_ACCESS_KEY', 'YOUR_SECRET_KEY');

// Instantiate the S3 client with your AWS credentials
$s3Client = S3Client::factory(array(
    'signature' => 'v4',
    'region' => 'ap-southeast-1',
    'credentials' => $credentials,
    // .....
));
If that does not work, you might try declaring a SignatureV4 object explicitly:
use Aws\S3\S3Client;
use Aws\Common\Credentials\Credentials;
use Aws\Common\Signature\SignatureV4;

$credentials = new Credentials('YOUR_ACCESS_KEY', 'YOUR_SECRET_KEY');

// Instantiate the S3 client with your AWS credentials
$s3Client = S3Client::factory(array(
    'signature' => new SignatureV4(),
    'region' => 'ap-southeast-1',
    'credentials' => $credentials,
    // .....
));
In case you upgrade to SDK v3:
You need to pass signature_version (instead of signature) as a parameter when you declare your S3 client. Statement does not appear to be a valid parameter (http://docs.aws.amazon.com/aws-sdk-php/v3/guide/guide/configuration.html#signature-version).
If there is still an issue, you can turn on the debug parameter to get more output.
This would look like this:
$s3 = new Aws\S3\S3Client([
    'signature_version' => 'v4',
    'version' => 'latest',
    'region' => 'ap-southeast-1',
    'credentials' => $credentials,
    'http' => [
        'verify' => '/home/ubuntu/cacert.pem'
    ],
    'debug' => true
]);
See here for the full list of available parameters.
I have also faced this issue with the aws:kms encryption key. If you want to use a KMS key, you have to create the KMS key in the IAM section of the AWS Console. I recommend AES256 server-side encryption instead: S3 then automatically encrypts your data while putting the object and decrypts it while getting it. Please go through the link below:
S3 Server Side encryption with AES256
My solution is to change the line 'ServerSideEncryption' => 'aws:kms' to 'ServerSideEncryption' => 'AES256':
try {
    $result = $this->Amazon->S3->putObject(array(
        'Bucket' => 'mytest.sample',
        'ACL' => 'authenticated-read',
        'Key' => $newfilename,
        'ServerSideEncryption' => 'AES256',
        'SourceFile' => $filepath,
        'ContentType' => mime_content_type($filepath),
        'debug' => [
            'logfn' => function ($msg) {
                echo $msg . "\n";
            },
            'stream_size' => 0,
            'scrub_auth' => true,
            'http' => true,
        ],
    ));
} catch (S3Exception $e) {
    echo $e->getMessage() . "\n";
}
Please also update your bucket policy with the JSON below; it will prevent uploading objects without AES256 encryption:
{
    "Sid": "DenyUnEncryptedObjectUploads",
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:PutObject",
    "Resource": "arn:aws:s3:::yourbucketname/*",
    "Condition": {
        "StringNotEquals": {
            "s3:x-amz-server-side-encryption": "AES256"
        }
    }
}
