I'm using amazon-s3-php-class to help me upload files to Amazon S3. After I upload a file, I noticed that anyone can download it by typing the URL https://url.com/mybucket/filename.file.
I can manually restrict access in the AWS console by turning off Open/Download for the grantee Everyone.
How do I do this programmatically with amazon-s3-php-class? The following code did not do anything:
$s3 = new S3($AZ_KEY_ID, $AZ_KEY_SECRET);
$acp = array("acl"=>array());
$acp["acl"][] = array(
"type" => "Everyone", "uri" => "https://url.com/mybucket/filename.file", "permission" => ""
);
$s3->setAccessControlPolicy("mybucket", "https://url.com/mybucket/filename.file", $acp);
What's wrong with my code?
With the new AWS SDK V3 changing ACL permissions is really easy:
$s3->putObjectAcl([
'Bucket' => 'myBucketName',
'Key' => 'myFileName',
'ACL' => 'private'
]);
The ACL can be one of these values: 'private', 'public-read', 'public-read-write', 'authenticated-read', 'aws-exec-read', 'bucket-owner-read', 'bucket-owner-full-control'.
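If you want to confirm the change took effect, the SDK also exposes getObjectAcl; a minimal sketch, assuming the same (illustrative) bucket and key names from above:

```php
<?php
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3 = new S3Client([
    'version' => 'latest',
    'region'  => 'us-east-1', // adjust to your bucket's region
]);

$s3->putObjectAcl([
    'Bucket' => 'myBucketName',
    'Key'    => 'myFileName',
    'ACL'    => 'private',
]);

// After setting 'private', only the owner's FULL_CONTROL grant should remain;
// any AllUsers (Everyone) grant should be gone.
$acl = $s3->getObjectAcl([
    'Bucket' => 'myBucketName',
    'Key'    => 'myFileName',
]);
foreach ($acl['Grants'] as $grant) {
    echo $grant['Permission'], "\n";
}
```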
$s3 = new S3($AZ_KEY_ID, $AZ_KEY_SECRET);
$acp = $s3->getAccessControlPolicy('mybucket', 'filename.file');
foreach($acp['acl'] as $key => $val) {
if(isset($val['uri']) &&
$val['uri'] == 'http://acs.amazonaws.com/groups/global/AllUsers')
unset($acp['acl'][$key]);
}
$s3->setAccessControlPolicy('mybucket', 'filename.file', $acp);
In getAccessControlPolicy and setAccessControlPolicy, the second argument is the object's path relative to the bucket. The AllUsers group is what the AWS console shows as Everyone.
Alternatively, you can set a private ACL on the object when uploading it to S3:
$s3->putObjectFile($uploadFile, 'mybucket', 'filename.file', S3::ACL_PRIVATE);
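For completeness: the grantee type 'Everyone' used in the question's code does not exist in this class; a public-read grant uses the Group type with the AllUsers URI. A sketch of the reverse operation (making the object public again), reusing the fetched policy so the owner entry stays intact:

```php
// Fetch the current policy first so $acp['owner'] is populated,
// then append a Group grant for AllUsers (shown as Everyone in the console).
$acp = $s3->getAccessControlPolicy('mybucket', 'filename.file');
$acp['acl'][] = array(
    'type'       => 'Group',
    'uri'        => 'http://acs.amazonaws.com/groups/global/AllUsers',
    'permission' => 'READ',
);
$s3->setAccessControlPolicy('mybucket', 'filename.file', $acp);
```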
I'm trying to create signed URLs for CloudFront with aws-sdk-php. I have created both distributions (Web and RTMP), and this is the code I used to do that.
This is start.php:
<?php
require 'vendor/autoload.php';
use Aws\S3\S3Client;
use Aws\CloudFront\CloudFrontClient;
$config = require('config.php');
// S3
$client = new Aws\S3\S3Client([
'version' => 'latest',
'region' => 'us-east-2',
]);
// CloudFront
$cloudfront = CloudFrontClient::factory([
'version' => 'latest',
'region' => 'us-east-2',
]);
and this is config.php
<?php
return [
's3'=>[
'key' => 'XXXXXXXXXXXXXXXXXXXXXXXXXX',
'secret' => 'XXXXXXXXXXXXXXXXXXXXXXXXXX',
'bucket' => 'hdamovies',
'region' => 'us-east-2',
],
'cloudFront' => [
'url' => 'https://d2t7o0s69hxjwd.cloudfront.net',
],
];
and this is index.php
<?php
require 'config/start.php';
$video = 'XXXXXXXXXXX.mp4';
$expiry = new DateTime( '+1 hour' );
$url = $cloudfront->getSignedUrl([
'private_key' => 'pk-XXXXXXXXXXXXXXXXXXXXX.pem',
'key_pair_id' => 'XXXXXXXXXXXXXXXXXXXXX',
'url' => "{$config['cloudFront']['url']}/{$video}",
'expires' => strtotime('+10 minutes'),
]);
echo "<a href='{$url}'>Download</a>";
When I click on the link, I get this error:
<Error>
<Code>KMS.UnrecognizedClientException</Code>
<Message>No account found for the given parameters</Message>
<RequestId>0F0A772FE67F0503</RequestId>
<HostId>juuIQZKHb1pbmiVkP7NVaKSODFYmBtj3T9AfDNZuXslhb++LcBsw9GNjpT0FG8MxgeQGqbVo+bo=</HostId></Error>
What is the problem here, and how can I solve it?
CloudFront does not support downloading objects that were stored, encrypted, in S3 using KMS Keys, apparently because the CloudFront Origin Access Identity is not an IAM user, so it's not possible to authorize it to have the necessary access to KMS.
https://forums.aws.amazon.com/thread.jspa?threadID=268390
I had this issue and resolved it after setting up the identities correctly. However, I kept hitting the error even after everything was configured properly, because I was attempting to download a file that had originally been uploaded while the bucket was KMS-encrypted; when I later switched the bucket to SSE-S3, that object still threw a KMS error.
After re-uploading the file, it worked without any issues. Hope this helps someone else.
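If re-uploading by hand is impractical, one way to refresh an object's encryption in place is an S3 copy onto itself with a new ServerSideEncryption setting (S3 allows a self-copy when something about the object changes, and switching the encryption counts). A sketch, with hypothetical bucket and key names:

```php
// Copy the object onto itself, re-encrypting it with SSE-S3 (AES256)
// instead of the old KMS key. Bucket and key names are illustrative.
$s3->copyObject([
    'Bucket'               => 'mybucket',
    'Key'                  => 'video.mp4',
    'CopySource'           => 'mybucket/video.mp4',
    'ServerSideEncryption' => 'AES256',
    'MetadataDirective'    => 'COPY',
]);
```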
Using "aws/aws-sdk-php": "^3.0#dev"
I am creating an image-sharing website but do not want people to copy my URLs to another site to steal my content/bandwidth.
I was originally storing the objects as
return $s3->putObject([
'Bucket' => $bucket,
'Key' => $key,
'Body' => $file,
'ACL' => 'public-read',
]);
But I have removed 'public-read', so a URL built like the one below no longer works:
'https://mybucket-images.s3.us-west-1.amazonaws.com/' . $key
What do I need to do to create a temporary URL that can still be client side cached to access the object?
One thing I was thinking was to change the key once a week or month, but it would require me to update all objects with a cronjob. There must be a way to create a temporary access URL?
Use your server to generate a presigned URL for the keys in the bucket.
//Creating a presigned request
$s3Client = new Aws\S3\S3Client([
'profile' => 'default',
'region' => 'us-east-2',
'version' => '2006-03-01',
]);
$cmd = $s3Client->getCommand('GetObject', [
'Bucket' => 'my-bucket',
'Key' => 'testKey'
]);
$request = $s3Client->createPresignedRequest($cmd, '+20 minutes');
$presignedUrl = (string) $request->getUri();
taken from https://docs.aws.amazon.com/sdk-for-php/v3/developer-guide/s3-presigned-url.html
But you'd have to do this every time there's a request to your page, and the link will be valid everywhere; you are only minimizing the period of its validity.
If your website is API-based and you retrieve the URL via the API, this may be relevant to you: if your website has a login function, you can run your auth logic before handing out the presigned URL. If not, you can check the HTTP Referer header (which can be spoofed) or require an API key (as in API Gateway).
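Building on that, here is a sketch of gating the presigned URL behind a login check; the session key, bucket, and key names are all illustrative assumptions:

```php
// Only issue a presigned URL to a logged-in user of our own site.
// $_SESSION['user_id'] is a hypothetical session field set at login.
session_start();
if (empty($_SESSION['user_id'])) {
    http_response_code(403);
    exit('Not logged in');
}

$cmd = $s3Client->getCommand('GetObject', [
    'Bucket' => 'my-bucket',
    'Key'    => 'testKey',
]);
$request = $s3Client->createPresignedRequest($cmd, '+20 minutes');

// Redirect the browser straight to the short-lived S3 URL.
header('Location: ' . (string) $request->getUri());
```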
You can use the following code:
// initiate connection to your S3 bucket
$client = new S3Client([
    'credentials' => ['key' => 's3 key', 'secret' => 's3 secret'],
    'region'      => 's3 region',
    'version'     => 'latest',
]);
$object = $client->getCommand('GetObject', [
'Bucket' => 's3 bucket',
'Key' => 'images/image.png' // file
]);
$presignedRequest = $client->createPresignedRequest($object, '+20 minutes');
$presignedUrl = (string)$presignedRequest->getUri();
if ($presignedUrl) {
return $presignedUrl;//presigned URL
} else {
throw new FileNotFoundException();
}
If your intent is to make your content readable only via a URL posted on your website (so that the same URL, used by the same web client from another site, would not work), I think you are likely to find that rather difficult; most of the approaches that come to mind are fairly spoofable.
I would take a look at this and see if it's good enough for you:
Restricting Access to a Specific HTTP Referrer
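The referrer restriction above is a bucket policy rather than an ACL; a sketch of setting one from PHP with putBucketPolicy, where the bucket name and site URL are illustrative (and, again, the Referer header can be forged):

```php
// Hypothetical bucket policy: allow GETs only when the Referer header
// matches our own site. This is a soft deterrent, not real security.
$policy = json_encode([
    'Version'   => '2012-10-17',
    'Statement' => [[
        'Sid'       => 'AllowGetFromMySite',
        'Effect'    => 'Allow',
        'Principal' => '*',
        'Action'    => 's3:GetObject',
        'Resource'  => 'arn:aws:s3:::mybucket-images/*',
        'Condition' => [
            'StringLike' => ['aws:Referer' => 'https://www.example.com/*'],
        ],
    ]],
]);

$s3->putBucketPolicy([
    'Bucket' => 'mybucket-images',
    'Policy' => $policy,
]);
```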
I am using the AWS SDK, and I am able to create buckets and manipulate keys. At bucket creation time I also want to enable static website hosting.
This is what I am using for creation:
$result = $s3->createBucket([
'Bucket' => $buck_name
]);
From what I found, this is how to add a website configuration:
$result = $s3->putBucketWebsite(array(
'Bucket' => $buck_name,
'IndexDocument' => array('Suffix' => 'index.html'),
'ErrorDocument' => array('Key' => 'error.html'),
));
But this is not enabling website hosting. I have also uploaded both files (index and error) just in case, but I am getting this error:
InvalidArgumentException: Found 1 error while validating the input provided for the PutBucketWebsite operation: [WebsiteConfiguration] is missing and is a required parameter in
Try this way
use Aws\S3\S3Client;
$bucket = $buck_name;
// 1. Instantiate the client.
$s3 = S3Client::factory();
// 2. Add website configuration.
$result = $s3->putBucketWebsite(array(
'Bucket' => $bucket,
'IndexDocument' => array('Suffix' => 'index.html'),
'ErrorDocument' => array('Key' => 'error.html'),
));
// 3. Retrieve website configuration.
$result = $s3->getBucketWebsite(array(
'Bucket' => $bucket,
));
echo $result->getPath('IndexDocument/Suffix');
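Note that the validation error in the question comes from SDK v3, where the document settings must be nested under a WebsiteConfiguration key (that is exactly the parameter the validator reports as missing). Under v3, the call would look like this:

```php
// SDK v3 shape: IndexDocument/ErrorDocument live inside WebsiteConfiguration.
$result = $s3->putBucketWebsite([
    'Bucket' => $buck_name,
    'WebsiteConfiguration' => [
        'IndexDocument' => ['Suffix' => 'index.html'],
        'ErrorDocument' => ['Key' => 'error.html'],
    ],
]);
```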
I have a little problem with the AWS S3 service. I'm trying to delete a whole bucket, and I would like to use deleteBucketAsync().
My code:
$result = $this->s3->listObjects(array(
'Bucket' => $bucket_name,
'Prefix' => ''
));
foreach($result['Contents'] as $file){
$this->s3->deleteObjectAsync(array(
'Bucket' => $bucket_name,
'Key' => $file['Key']
));
}
$result = $this->s3->deleteBucketAsync(
[
'Bucket' => $bucket_name,
]
);
Sometimes this code works and deletes the whole bucket in seconds, but sometimes it doesn't.
Can someone please explain how exactly the S3 async functions work?
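A likely explanation for the intermittent failures: the Async methods return promises immediately, so deleteBucketAsync can be sent while some of the deleteObjectAsync calls are still in flight, and S3 refuses to delete a non-empty bucket. A sketch that settles the delete promises first, using the Guzzle promises library bundled with the SDK:

```php
use GuzzleHttp\Promise\Utils;

// Kick off all object deletes concurrently, but keep the promises.
$promises = [];
foreach ($result['Contents'] as $file) {
    $promises[] = $this->s3->deleteObjectAsync([
        'Bucket' => $bucket_name,
        'Key'    => $file['Key'],
    ]);
}

// Block until every delete has settled, then remove the (now empty) bucket.
Utils::all($promises)->wait();
$this->s3->deleteBucket(['Bucket' => $bucket_name]);
```

Also note that listObjects returns at most 1,000 keys per call, so a larger bucket needs pagination before the bucket delete can succeed.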
I know there is no concept of folders in S3; it uses a flat file structure. However, I will use the term "folder" for the sake of simplicity.
Preconditions:
An S3 bucket called foo
The folder foo has been made public using the AWS Management Console
Apache
PHP 5
Standard AWS SDK
The problem:
It's possible to upload a folder using the AWS PHP SDK. However, the folder is then only accessible by the user that uploaded it, not publicly readable as I would like it to be.
Procedure:
$sharedConfig = [
'region' => 'us-east-1',
'version' => 'latest',
'visibility' => 'public',
'credentials' => [
'key' => 'xxxxxx',
'secret' => 'xxxxxx',
],
];
// Create an SDK class used to share configuration across clients.
$sdk = new Aws\Sdk($sharedConfig);
// Create an Amazon S3 client using the shared configuration data.
$client = $sdk->createS3();
$client->uploadDirectory("foo", "bucket", "foo", array(
'params' => array('ACL' => 'public-read'),
'concurrency' => 20,
'debug' => true
));
Success Criteria:
I should be able to access a file in the uploaded folder using a "static" link, e.g.:
https://s3.amazonaws.com/bucket/foo/001.jpg
I fixed it by defining a "before execute" callback:
$result = $client->uploadDirectory("foo", "bucket", "foo", array(
'concurrency' => 20,
'debug' => true,
'before' => function (\Aws\Command $command) {
$command['ACL'] = strpos($command['Key'], 'CONFIDENTIAL') === false
? 'public-read'
: 'private';
}
));
You can use this:
$s3->uploadDirectory('images', 'bucket', 'prefix',
['params' => array('ACL' => 'public-read')]
);