How to properly secure private data in S3 buckets [closed] - php

Closed 6 years ago.
I have an application where I am storing users' images and audio. We use S3 for storage and I want to make sure that the user data we are storing is both secure and not completely accessible by the outside world.
Here is what I have thought of so far.
Use pre-signed URLs with a limited lifespan, so that the URL visible in the browser stops working shortly after the page loads.
Configure my bucket content to be accessible only from my domain - not sure if that will interfere with a user's ability to download their content.
I had the following questions:
Does using CloudFront to serve the files act as a layer of security, since the URL in the browser is not actually from S3?
Is it bad, security-wise, if the bucket name and structure is visible in the URL (if page source is viewed in the browser)?
Is it possible to configure S3 to use an alias for the bucket so that I can access the same content via a different URL and therefore not expose the bucket name?
What would be the best way to ensure proper file security while also allowing users to have easy access to their content?
Thanks in advance!

Use pre-signed URLs with a limited lifespan, so that the URL visible in the browser stops working shortly after the page loads.
This, or something very similar, is arguably the best strategy from a standpoint of security and practicality. Your site generates the URLs when the page loads... or, alternatively, the links point back to your server. When clicked, your server verifies the session/authorization, signs a URL on demand, and sends the browser there with a temporary redirect:
HTTP/1.1 302 Found
Location: https://example-bucket.s3...
Cache-Control: private, no-cache, no-store
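For illustration, here is a minimal, self-contained sketch of generating such a SigV4 pre-signed GET URL (in Python, using only the standard library, since it is easy to translate to PHP). The bucket name, key, and credentials are placeholder assumptions; in practice you would use an AWS SDK rather than hand-rolling the signing:

```python
import datetime
import hashlib
import hmac
from urllib.parse import quote

def presign_get(bucket, key, access_key, secret_key, region="us-east-1", expires=300):
    """Sketch of a SigV4 query-string pre-signed GET URL for one S3 object."""
    host = f"{bucket}.s3.{region}.amazonaws.com"
    now = datetime.datetime.now(datetime.timezone.utc)
    amz_date = now.strftime("%Y%m%dT%H%M%SZ")
    datestamp = now.strftime("%Y%m%d")
    scope = f"{datestamp}/{region}/s3/aws4_request"
    params = {
        "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
        "X-Amz-Credential": f"{access_key}/{scope}",
        "X-Amz-Date": amz_date,
        "X-Amz-Expires": str(expires),   # lifespan in seconds
        "X-Amz-SignedHeaders": "host",
    }
    canonical_query = "&".join(
        f"{quote(k, safe='')}={quote(v, safe='')}" for k, v in sorted(params.items())
    )
    canonical_request = "\n".join([
        "GET", "/" + quote(key), canonical_query,
        f"host:{host}\n", "host", "UNSIGNED-PAYLOAD",
    ])
    string_to_sign = "\n".join([
        "AWS4-HMAC-SHA256", amz_date, scope,
        hashlib.sha256(canonical_request.encode()).hexdigest(),
    ])
    def hmac_sha256(k, msg):
        return hmac.new(k, msg.encode(), hashlib.sha256).digest()
    # Derive the signing key from the secret key, date, region, and service.
    signing_key = hmac_sha256(hmac_sha256(hmac_sha256(hmac_sha256(
        ("AWS4" + secret_key).encode(), datestamp), region), "s3"), "aws4_request")
    signature = hmac.new(signing_key, string_to_sign.encode(), hashlib.sha256).hexdigest()
    return f"https://{host}/{quote(key)}?{canonical_query}&X-Amz-Signature={signature}"

url = presign_get("example-bucket", "user_files/42/photo.jpg", "AKIDEXAMPLE", "secret")
print(url)
```

Your redirect handler would emit this URL in the `Location:` header after checking the user's session.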
Configure my bucket content to be accessible only from my domain - not sure if that will interfere with a user's ability to download their content.
That's a really primitive tactic, easily defeated. It's okay if you're trying to stop hotlinking to your content, but it doesn't qualify as a "security" measure.
Does using CloudFront to serve the files act as a layer of security since the url in the browser is not actually from s3?
Well, not really. The URL in the browser pointing directly to S3 does not really have security implications. See below.
Is it bad, security-wise, if the bucket name and structure is visible in the URL (if page source is viewed in the browser)?
Not if you secure your content properly. For example, I have a site where the user's "department_id" is clearly visible in the path to a file that is downloadable using a signed S3 URL. If the user recognizes that number and tries incrementing or decrementing it to see reports from other departments, it doesn't matter, because they are not in possession of a signed URL for that other file. Signed URLs are tamper-proof to the point of computational infeasibility -- that is, you cannot change anything in a signed URL without invalidating it, and you cannot feasibly reverse-engineer a signed URL far enough to have the information needed to sign a URL for a different object before the heat death of the universe. Of course, a structure that embeds obvious/guessable values in a URL would be a terrible practice if the content were publicly accessible.
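The tamper-proof property is just the standard property of a MAC over the signed portion of the URL. A toy illustration in Python (hypothetical path and secret, not the actual S3 signing algorithm):

```python
import hashlib
import hmac

secret = b"server-side-secret"  # never leaves the server

def sign(path):
    """Sign a path with HMAC-SHA256."""
    return hmac.new(secret, path.encode(), hashlib.sha256).hexdigest()

def verify(path, signature):
    """Constant-time check that the signature matches the path."""
    return hmac.compare_digest(sign(path), signature)

sig = sign("/reports/department/7/summary.pdf")
print(verify("/reports/department/7/summary.pdf", sig))  # True
print(verify("/reports/department/8/summary.pdf", sig))  # False: one changed digit invalidates it
```

Changing any byte of the signed input produces a completely different signature, which is why incrementing the department id in a signed URL gets the user nothing.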
Is it possible to configure s3 to use an alias for the bucket so that I can access the same content via a different URL and therefore not expose the bucket name?
Knowing the bucket name doesn't give you any particularly useful information. In a secure configuration, knowing the name of the bucket doesn't matter. In certain scenarios, S3 error messages or response headers may actually reveal the bucket's name or location, so this isn't necessarily always preventable. A CloudFront URL would hide the bucket name, but again this doesn't give you any meaningful protection, since a bucket name is not a sensitive piece of information.
What would be the best way to ensure proper file security while also allowing users to have easy access to their content?
As above -- use signed URLs.
CloudFront offers a couple of additional capabilities that differ from S3.
First, note that when a bucket is behind CloudFront, CloudFront can use its own credentials -- an origin access identity -- to sign the requests that it deems authorized, so that S3 will allow CloudFront to access the bucket on behalf of the requester and deliver the content.
You can then use CloudFront pre-signed URLs, which use a different algorithm than S3 signed URLs. Two notable differences:
Unlike an S3 signed URL, a CloudFront signed URL can allow the requester to access more than one object. For example, you could allow access to https://dxxxexample.cloudfront.net/user_files/${user_id}/* (where ${user_id} is a variable containing the user's id, which you substitute into the string before signing the URL). You might do this as an optimization: your code generates the query string portion of the signed URL once and reuses it while building a page, avoiding the CPU load of signing many URLs individually in order to render a single page.
Unlike an S3 signed URL, a CloudFront signed URL optionally allows you to include the user's IP address, making the signature usable only from that single IP address. You will need to balance this extra security against the possibility of a user's IP address changing while using your site, which is less likely on desktops and more likely on mobile (particularly when switching spontaneously from mobile data to WiFi).
CloudFront also supports the same authorization capabilities as signed URLs, but using cookies instead. If your entire site runs through CloudFront, this might be a useful option to consider. Your entire site could run through CloudFront by pointing your main hostname there, then configuring multiple origin servers -- both the S3 bucket and the app server itself -- and cache behaviors with path patterns to choose which paths are sent to which origin.
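As a sketch of what the wildcard variant looks like under the hood, here is the custom policy document that a CloudFront signed URL embeds, built in Python with only the standard library. The distribution hostname and user id are placeholders, and the actual RSA signing step with your CloudFront key pair is omitted:

```python
import base64
import json
import time

def cloudfront_custom_policy(resource, expires_epoch):
    """Build the base64-encoded custom policy a wildcard CloudFront signed URL embeds."""
    policy = json.dumps({
        "Statement": [{
            "Resource": resource,
            "Condition": {"DateLessThan": {"AWS:EpochTime": expires_epoch}},
        }],
    }, separators=(",", ":"))
    # CloudFront uses a URL-safe variant of base64: '+' -> '-', '=' -> '_', '/' -> '~'
    b64 = base64.b64encode(policy.encode()).decode()
    return b64.translate(str.maketrans({"+": "-", "=": "_", "/": "~"}))

user_id = 42  # hypothetical authenticated user's id
policy_b64 = cloudfront_custom_policy(
    f"https://dxxxexample.cloudfront.net/user_files/{user_id}/*",
    int(time.time()) + 600,  # valid for ten minutes
)
print(policy_b64)
```

The resulting `Policy=` value (plus an RSA `Signature=` and your `Key-Pair-Id=`) is what gets appended to every URL under that user's prefix, so one signing operation covers the whole page.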
Signed URLs are the key to what you are trying to accomplish.
Of course, as with any security mechanism, it's important not only to verify that it works as expected, but also that it doesn't "work" when it shouldn't -- that is, be sure you verify that your secure resources are not publicly accessible without a signed URL. If your bucket policy or CloudFront distribution is misconfigured to allow public access, or if you wrongly upload secure content to S3 with x-amz-acl: public-read then of course you have defeated your own security efforts. The services assume you know what you are doing, so these configurations are technically valid. Don't blindly follow configuration or troubleshooting advice without understanding its implications.

Does using CloudFront to serve the files act as a layer of security
since the url in the browser is not actually from s3?
It is not a security layer in itself, but it does hide the S3 bucket URL, because CloudFront serves cached copies of the S3 objects. Using signed URLs with CloudFront does add security.
Is it bad, security-wise, if the bucket name and structure is visible
in the URL (if page source is viewed in the browser)?
It is not bad to expose the S3 bucket URL if the served content is meant to be public. You just have to apply a proper bucket policy that does not leave PutObject access to the bucket open to the world.
Is it possible to configure s3 to use an alias for the bucket so that
I can access the same content via a different URL and therefore not
expose the bucket name?
Yes, this is possible. See this AWS documentation: https://docs.aws.amazon.com/AmazonS3/latest/dev/VirtualHosting.html#VirtualHostingCustomURLs
What would be the best way to ensure proper file security while also
allowing users to have easy access to their content?
Using proper IAM policies and granting users granular access to the S3 content ensures proper security.

Related

AmazonAWS securing S3 Bucket videos

I'm using the AWS S3 bucket in a very simple way.
There's a bucket, <somebucket1>
There's a folder, <somebucket1>/sitevideos
And video files in it, <somebucket1>/sitevideos/video.mp4
I use this bucket so that playback using HTML5 video (<video></video>) is more optimised and doesn't lag, compared to serving the video from the same server as the website (which is ridiculous).
The video files are encrypted, but they are set to be read-only to Public.
Now, my worry is that, because they are public, people can download them from the S3 bucket instead of playing them on the website.
The Question
Is there a way to play a video file in S3 bucket, on an HTML video from a remote website, but will refuse downloads of the file if they are accessed directly via the S3 path?
If there are tutorials for this, I'd appreciate it. If this is already on the S3 documentation, I apologise for the laziness, please show me the link. I also heard that you can set them the permission to private, but they can still play on a remote server (although I haven't made that work).
Cheers & many thanks
A Bucket Policy can be configured to Restrict Access to a Specific HTTP Referrer.
For example, if a web page includes an image on the page, then the HTTP request for that object will include a referer. (I presume this would work for a video, too.)
However, this is not very good security, since the HTTP request can be easily manipulated to include the referer (eg in a web scraper).
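For reference, such a referer-based bucket policy looks roughly like the following (the bucket name and domain are placeholders). As noted, treat it as hotlink discouragement rather than security:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "AllowGetFromMySiteOnly",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::example-bucket/sitevideos/*",
    "Condition": {
      "StringLike": {"aws:Referer": "https://www.example.com/*"}
    }
  }]
}
```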
A more secure method would be to use a Pre-Signed URL. This is a specially-constructed URL that grants time-limited access to a private Amazon S3 object.
When rendering the web page, your app would determine whether the user is permitted to access the object. If so, it would construct the pre-signed URL using AWS credentials. The URL can then be included in the standard HTML tags (eg <img src='...'>). The user will be able to access the object until the expiry time. If they shared the URL with somebody else (eg in a Tweet), other people would also be able to access the object until the expiry time.
By the way, Amazon CloudFront can also serve video content using various video protocols. It also supports pre-signed URLs (and also signed cookies).

How to use php and s3 to share private video that can't be downloaded?

I have a (php) website where teachers upload recordings of their class, and the students can log in and then play back the recording.
I want to make these videos more secure. Currently, the videos are stored on my server, and anyone with the url can download them. So, (1) I want to store them somewhere that can't be downloaded just using a url. And second, I need to stop them from right-clicking and saving the video as it is being played.
I'm trying to work this out with s3 but not getting it...
Is this possible? Does it need to use a special player? Does streaming the video help (can any video be streamed)?
I appreciate the help, I've spent many hours researching this and just getting more confused as I go along!
There are a couple of options you may wish to use.
1. Amazon CloudFront RTMP Distribution
Amazon CloudFront is a Content Distribution Network that caches content closer to users worldwide, in over 60 locations. It can also serve content over the Real-Time Messaging Protocol (RTMP). This means that your web page could present a media player (eg JW Player, Flowplayer, or Adobe Flash) and CloudFront can serve the content.
See: Working with RTMP Distributions
CloudFront Distributions can also service private content. Your application can generate a URL that provides content for a limited period of time. The content is served via a media protocol, so the entire file cannot be easily downloaded.
See: Serving Private Content through CloudFront
2. Amazon S3 Pre-Signed URLs
By default, all objects in Amazon S3 are private. You can then add permissions so that people can access your objects. This can be done via:
Access Control List permissions on individual objects
A Bucket Policy (as per yours above)
IAM Users and Groups
A Pre-Signed URL
A Pre-Signed URL can be used to grant access to S3 objects as a way of "overriding" access controls. A normally private object can be accessed via a URL by appending an expiry time and signature. This is a great way to serve private content without requiring a web server.
Similar to the above example with CloudFront, your application can generate a URL that provides access to S3 content for a limited time period. Once the period expires, the Pre-Signed URL will no longer function. However, during the active period, people would be able to download the entire file, so this might not be advisable for video content you wish to protect.

How to let client Upload and Download files from S3 without compromising credentials

I am creating a file sharing kind of web site (something like wetransfer). I was thinking of using S3 for storage and I want to use different hosting solution instead of EC2 so my web server will be in a different host outside amazon cloud. In order to reduce bandwidth consumption I will need to someway let clients to download and upload files directly from the client (browser).
I was looking at S3 documentation which explained how to directly upload file to S3 from browser client. It looks like we are pretty much exposing all details of my s3 credentials where some can easily look into details and abuse.
Is there any way I can avoid this by something doing something like allow users to upload/download files with a temporary credentials?
Would an IAM User Role work? You should be able to create a user (which will have its own UUID), give it read-only access to your S3 repository, and pass that user's credentials into your request policy, as well as content and key rules.
If you want to grant all users read/write access, you can, though allowing those users access to specific files only will be a bit more of a hassle.
I was looking at S3 documentation which explained how to directly upload file to S3 from browser client. It looks like we are pretty much exposing all details of my s3 credentials where some can easily look into details and abuse.
No, you're not. When used properly, the POST-based upload interface documented on that page only gives the user a limited-time authorization to upload one file matching various criteria (e.g, its name, size, MIME type, etc). It's quite safe to use.
Keep in mind that your S3 access key is not sensitive information. Exposing it to users is perfectly fine, and is in fact required for many common operations! Only the secret key needs to be kept private.
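To make that concrete, here is a rough Python sketch of building and signing such a browser-upload POST policy (shown with the legacy SigV2 HMAC-SHA1 scheme that older browser-upload documentation describes; bucket, prefix, and keys are placeholders). Only the signature and policy, never the secret key, reach the browser:

```python
import base64
import hashlib
import hmac
import json

def s3_post_policy(bucket, key_prefix, max_bytes, secret_key, expiration_iso):
    """Build a browser-upload POST policy and its (legacy SigV2) signature."""
    policy = {
        "expiration": expiration_iso,
        "conditions": [
            {"bucket": bucket},
            ["starts-with", "$key", key_prefix],       # user may only write under this prefix
            ["content-length-range", 0, max_bytes],    # cap the upload size
        ],
    }
    policy_b64 = base64.b64encode(json.dumps(policy).encode()).decode()
    signature = base64.b64encode(
        hmac.new(secret_key.encode(), policy_b64.encode(), hashlib.sha1).digest()
    ).decode()
    return policy_b64, signature

policy_b64, sig = s3_post_policy(
    "example-bucket", "uploads/user42/", 10 * 1024 * 1024,
    "secret", "2030-01-01T00:00:00Z",
)
print(policy_b64, sig)
```

The browser form submits the policy, the signature, and your (non-secret) access key id; S3 rejects any upload that does not match every condition in the policy.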

Can I grant permission on files on my AS3 bucket via HTTP request parameters?

I have a bucket with files in it in AS3. I have access to the PHP API and a server that can send requests to Amazon on command.
What I want to do is grant access to a file in my bucket using an HTTP GET/POST request. From what I understand using this function:
get_object_url ( $bucket, $filename, $preauth, $opt )
I can make the file publicly accessible for the $preauth amount of time at a given URL. I don't want to do that, I want the file to be privately available at a URL with required POST or GET credentials (deciding who can access the file would be based on a database containing application 'users' and their permissions). I understand the security implications of passing any kind of credentials over GET or POST on a non-HTTPS connection.
Is this possible? I could just download the file from AS3 to my server for the extent of the transaction then do all the controls on my own box, but that's an expensive solution (two file downloads instead of one, when my server shouldn't have had to do a download at all) to a seemingly easy problem.
The short answer is no.
You could look at Amazon's IAM for some more ways to secure the content, especially in conjunction with CloudFront, but essentially there is no way to provide access to content by passing along a username and password.
Of course, if you are already authenticating users on your site, then you can only supply the signed url to those users. The url only has to be valid at the time the user initiates the download and not for the entire duration of the download.
Also, if you intend to use your server as a proxy between S3 and the user you'll be removing a lot of the benefits of using S3 in the first place. But you could use EC2 as the server to remove the extra cost you mentioned - transfers between S3 and EC2 are free.

Using private ACLs with CloudFront?

I am developing a web app where video files are stored on Amazon S3 and using CloudFront is an optional feature which can be turned on and off at any time.
I have a bunch of video files set with private ACLs, and I use signed URLs to access them. This works great.
However, I want to create a CloudFront RTMP distribution on that bucket, but it would be difficult to programmatically update every single (Could be well over 300) object's ACL each time (And would take a long time for all the requests to happen since you can't do it by batch, right?).
Is there a way to either:
Set ACLs in bulk, in one call?
Set a bucket access policy so that CloudFront can read any private files in the bucket?
I have attempted creating an Origin Access Identity, and then adding this to the bucket's Access Control Policy but this doesn't appear to work.
And finally do I still need to sign the URLs when I send them to the video player?
This all needs to be done programmatically in PHP, so using CloudBerry and such won't be helpful to me, unfortunately.
This is a useful guide to get started, it tells how to set up the private distribution:
http://www.bucketexplorer.com/documentation/cloudfront--how-to-create-private-streaming-distribution.html
You can set the ACLs via the AWS API looping through your videos in a series (I don't think this can be done in bulk, even BucketExplorer does this in a queue). You only need to set the ACLs on each file once. You need to make sure you grant access to the Canonical User you have in your Origin Access Identity for the distribution. This way the distribution can access the protected file from the S3 origin. You then need to set up a key-pair and a trusted signer.
You do need to sign the URLs every time someone accesses the video. There are a number of scripts available. This is a useful guide for Ruby, but you could quite easily rewrite the code in PHP:
http://r2d6.com/posts/1301220789-setting-up-private-streaming-with-cloudfront-and-ruby-on-rails
