Been following this guide on AWS:
AWS Serving Private Content
I have set up a CloudFront web distribution and am serving private content from my S3 bucket, using an Origin Access Identity to access the resources within.
I am running into a roadblock at the cookie and signed URL authentication step (I am using WordPress). I have my CloudFront key pairs, but I am not quite sure where to go from here.
I have successfully served content via the CDN without OAI, using WP Total Cache and similar plugins, but too many things need to be made public, and the bucket policies for URL and IP address restrictions are not quite working for me.
This step in the process pretty much sums up where I am stuck:
Write your application to respond to requests from authorized users either with signed URLs or with Set-Cookie headers that set signed cookies. For more information, see Choosing Between Signed URLs and Signed Cookies.
Any next steps or suggestions would be very much appreciated.
Thanks a lot!
Sean
Step 1 - Choose between signed URLs and signed cookies. Use signed cookies if your users need access to multiple files (e.g. a restricted area in WordPress).
Step 2 - Specify the AWS accounts for the creation of signed URLs/Cookies.
Step 3 - Develop your app to set the three required cookies (CloudFront-Key-Pair-Id, CloudFront-Signature, and CloudFront-Policy or CloudFront-Expires).
Alternatively, use the AWS SDK to generate signed URLs/cookies, e.g. PHP's getSignedUrl() or getSignedCookie() methods.
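As a rough illustration of Step 3, here is a minimal sketch (in Python, for illustration only) of the three Set-Cookie headers CloudFront checks. The policy and signature values are placeholders: in practice they come from base64-encoding the policy JSON and RSA-SHA1-signing it with your CloudFront private key, which the SDK's getSignedCookie() does for you, and the domain is a stand-in for your own.

```python
def signed_cookie_headers(policy_b64, signature_b64, key_pair_id,
                          domain=".example.com"):
    """Build the three cookies CloudFront inspects on each request.
    policy_b64 and signature_b64 must already be in CloudFront's
    URL-safe base64 form; key_pair_id is your CloudFront key pair ID."""
    attrs = f"; Domain={domain}; Path=/; Secure; HttpOnly"
    return [
        f"Set-Cookie: CloudFront-Policy={policy_b64}{attrs}",
        f"Set-Cookie: CloudFront-Signature={signature_b64}{attrs}",
        f"Set-Cookie: CloudFront-Key-Pair-Id={key_pair_id}{attrs}",
    ]
```

With a canned policy you would set CloudFront-Expires instead of CloudFront-Policy; the cookie attributes shown (Domain, Secure, HttpOnly) are assumptions appropriate for a restricted WordPress area.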
Related
I have a (php) website where teachers upload recordings of their class, and the students can log in and then play back the recording.
I want to make these videos more secure. Currently, the videos are stored on my server, and anyone with the URL can download them. So, (1) I want to store them somewhere they can't be downloaded just by using a URL, and (2) I need to stop viewers from right-clicking and saving the video as it is being played.
I'm trying to work this out with S3 but not getting it...
Is this possible? Does it need to use a special player? Does streaming the video help (can any video be streamed)?
I appreciate the help, I've spent many hours researching this and just getting more confused as I go along!
There are a couple of options you may wish to use.
1. Amazon CloudFront RTMP Distribution
Amazon CloudFront is a Content Distribution Network that caches content closer to users worldwide, in over 60 locations. It can also serve content via the Real-Time Messaging Protocol (RTMP). This means that your web page could present a media player (e.g. JW Player, Flowplayer, or Adobe Flash) and CloudFront can serve the content.
See: Working with RTMP Distributions
CloudFront distributions can also serve private content. Your application can generate a URL that provides content for a limited period of time. The content is served via a media protocol, so the entire file cannot be easily downloaded.
See: Serving Private Content through CloudFront
2. Amazon S3 Pre-Signed URLs
By default, all objects in Amazon S3 are private. You can then add permissions so that people can access your objects. This can be done via:
Access Control List permissions on individual objects
A Bucket Policy (as per yours above)
IAM Users and Groups
A Pre-Signed URL
A Pre-Signed URL can be used to grant access to S3 objects as a way of "overriding" access controls. A normally private object can be accessed via a URL by appending an expiry time and signature. This is a great way to serve private content without requiring a web server.
Similar to the above example with CloudFront, your application can generate a URL that provides access to S3 content for a limited time period. Once the period expires, the Pre-Signed URL will no longer function. However, during the active period, people would be able to download the entire file, so this might not be advisable for video content you wish to protect.
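In practice you would generate such a URL with an SDK call (e.g. getSignedUrl() in the PHP SDK or generate_presigned_url() in boto3), but to show the mechanics, here is a stdlib Python sketch of a Signature Version 4 query-string presign for a GET. The bucket, key, and credentials are placeholders, and a real implementation also handles URI encoding of the key and session tokens more carefully.

```python
import hashlib
import hmac
import urllib.parse
from datetime import datetime, timezone

def presign_s3_get(bucket, key, access_key, secret_key,
                   region="us-east-1", expires=900):
    """Sketch of an AWS Signature Version 4 query-string presign
    for a GET of one object."""
    now = datetime.now(timezone.utc)
    amz_date = now.strftime("%Y%m%dT%H%M%SZ")
    datestamp = now.strftime("%Y%m%d")
    host = f"{bucket}.s3.{region}.amazonaws.com"
    scope = f"{datestamp}/{region}/s3/aws4_request"

    params = {
        "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
        "X-Amz-Credential": f"{access_key}/{scope}",
        "X-Amz-Date": amz_date,
        "X-Amz-Expires": str(expires),
        "X-Amz-SignedHeaders": "host",
    }
    query = "&".join(
        f"{urllib.parse.quote(k, safe='')}={urllib.parse.quote(v, safe='')}"
        for k, v in sorted(params.items()))

    # canonical request: method, URI, query, headers, signed headers, payload
    canonical_request = "\n".join(
        ["GET", f"/{key}", query, f"host:{host}\n", "host", "UNSIGNED-PAYLOAD"])
    string_to_sign = "\n".join(
        ["AWS4-HMAC-SHA256", amz_date, scope,
         hashlib.sha256(canonical_request.encode()).hexdigest()])

    def hsign(k, msg):
        return hmac.new(k, msg.encode(), hashlib.sha256).digest()

    # derive the signing key: date -> region -> service -> aws4_request
    k = hsign(("AWS4" + secret_key).encode(), datestamp)
    for part in (region, "s3", "aws4_request"):
        k = hsign(k, part)
    signature = hmac.new(k, string_to_sign.encode(), hashlib.sha256).hexdigest()

    return f"https://{host}/{key}?{query}&X-Amz-Signature={signature}"
```

Once the X-Amz-Expires window passes, S3 rejects the URL; nothing in the URL can be altered without invalidating the signature.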
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
I have an application where I am storing users' images and audio. We use S3 for storage and I want to make sure that the user data we are storing is both secure and not completely accessible by the outside world.
Here is what I have thought of so far.
use pre-signed URLs with a limited lifespan so that the url in the browser will not be usable once the page loads.
Configure my bucket content to only be accessible via my domain - not sure if that will interfere with a user's ability to download their content.
I had the following questions:
Does using CloudFront to serve the files act as a layer of security since the url in the browser is not actually from s3?
Is it bad, security-wise, if the bucket name and structure is visible in the URL (if page source is viewed in the browser)?
Is it possible to configure s3 to use an alias for the bucket so that I can access the same content via a different URL and therefore not expose the bucket name?
What would be the best way to ensure proper file security while also allowing users to have easy access to their content?
Thanks in advance!
use pre-signed URLs with a limited lifespan so that the url in the browser will not be usable once the page loads.
This, or something very similar, is arguably the best strategy from a standpoint of security and practicality. Your site generates the URLs when the page loads... or... the links actually point back to your server. When clicked, your server verifies the session/authorization and signs a URL on demand, sending the browser there with a temporary redirect.
HTTP/1.1 302 Found
Location: https://example-bucket.s3...
Cache-Control: private, no-cache, no-store
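The redirect-on-demand flow above can be sketched as a tiny handler (Python pseudocode for illustration; the session check and URL signer are stand-ins for your app's own logic):

```python
def download_response(session_is_valid, sign_url):
    """Return (status, headers) for a protected-file request:
    verify the caller's session first, then sign a URL on demand
    and send the browser there with a temporary redirect."""
    if not session_is_valid:
        return 403, {}
    return 302, {
        "Location": sign_url(),
        # keep the browser and intermediaries from caching the redirect
        "Cache-Control": "private, no-cache, no-store",
    }
```

Because the signed URL is minted per request, each download is gated by your own authentication, and a leaked URL goes stale quickly.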
Configure my bucket content to only be accessible via my domain - not sure if that will interfere with a user's ability to download their content.
That's a really primitive tactic, easily defeated. It's okay if you're trying to stop hotlinking to your content, but it doesn't qualify as a "security" measure.
Does using CloudFront to serve the files act as a layer of security since the url in the browser is not actually from s3?
Well, not really. The URL in the browser pointing directly to S3 does not really have security implications. See below.
Is it bad, security-wise, if the bucket name and structure is visible in the URL (if page source is viewed in the browser)?
Not if you secure your content properly. For example, I have a site where the user's "department_id" is clearly visible in the path to a file that is downloadable using a signed S3 URL. If the user recognizes that number and tries incrementing or decrementing it to see reports from other departments, it doesn't matter, because they are not in possession of a signed URL for that other file. Signed URLs are tamper-proof to the point of computational infeasibility: you cannot change anything in a signed URL without invalidating it, and you cannot feasibly reverse-engineer one to the point of having enough information to sign a URL for a different object before the heat death of the universe. Of course, a structure that embeds obvious or guessable values in a URL would be a terrible practice if the content were publicly accessible.
Is it possible to configure s3 to use an alias for the bucket so that I can access the same content via a different URL and therefore not expose the bucket name?
Knowing the bucket name doesn't give you any particularly useful information. In a secure configuration, knowing the name of the bucket doesn't matter. In certain scenarios, S3 error messages or response headers may actually reveal the bucket's name or location, so this isn't necessarily always going to be preventable. A CloudFront URL would hide the bucket name, but again this doesn't give you any meaningful protection, since a bucket name is not a sensitive piece of information.
What would be the best way to ensure proper file security while also allowing users to have easy access to their content?
As above -- use signed URLs.
CloudFront offers a couple of additional capabilities that differ from S3.
First, note that when a bucket is behind CloudFront, CloudFront can use its own credentials (an origin access identity) to sign the requests that it deems authorized, so that S3 will allow CloudFront to access the bucket on behalf of the requester and deliver the content.
You can then use CloudFront pre-signed URLs, which use a different algorithm than S3 signed URLs. Two notable differences:
unlike an S3 signed URL, a CloudFront signed URL can allow the requester to access more than one object. For example, you could allow access to https://dxxxexample.cloudfront.net/user_files/${user_id}/* (where ${user_id} is a variable containing the user's id, which you substitute into the string before signing the URL). You might do this as an optimization: generate the query-string portion of the signed URL once and reuse it while building a page, avoiding the CPU load of signing many URLs individually in order to render a single page.
unlike an S3 signed URL, a CloudFront signed URL optionally allows you to include the user's IP address, making the signature only usable from that single IP address. You will need to balance this extra security against the possibility of a user's IP address changing while using your site, since that is less likely on desktops and more likely on mobile (particularly if switching spontaneously from mobile data to WiFi).
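The first difference above (a wildcard resource in a custom policy, signed once and reused) can be sketched as follows. This is a Python illustration: the rsa_sign callback is a stand-in for SHA-1-with-RSA signing with your CloudFront private key (normally done by the SDK or openssl), and the hostname and key pair ID are placeholders.

```python
import base64
import json
import time

def cloudfront_b64(data: bytes) -> str:
    """CloudFront's URL-safe base64 variant: + -> -, = -> _, / -> ~."""
    return (base64.b64encode(data).decode()
            .replace("+", "-").replace("=", "_").replace("/", "~"))

def wildcard_query_string(resource_pattern, key_pair_id, rsa_sign, ttl=3600):
    """Build the signed query-string portion once for a wildcard
    resource, so it can be appended to many object URLs."""
    policy = json.dumps({"Statement": [{
        "Resource": resource_pattern,
        "Condition": {"DateLessThan": {"AWS:EpochTime": int(time.time()) + ttl}},
    }]}, separators=(",", ":")).encode()
    return ("Policy=" + cloudfront_b64(policy) +
            "&Signature=" + cloudfront_b64(rsa_sign(policy)) +
            "&Key-Pair-Id=" + key_pair_id)

# one query string, reused for every object the pattern covers
qs = wildcard_query_string("https://dxxxexample.cloudfront.net/user_files/42/*",
                           "APKAEXAMPLE", rsa_sign=lambda p: b"fake-signature")
urls = [f"https://dxxxexample.cloudfront.net/user_files/42/{name}?{qs}"
        for name in ("a.mp4", "b.mp4")]
```

All the per-page cost is one JSON build and one RSA signature, rather than one signature per object.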
CloudFront also supports the same authorization capabilities as signed URLs, but using cookies instead. If your entire site runs through CloudFront, this might be a useful option for you to consider. Your entire site could run through CloudFront by pointing your main hostname there, then configuring multiple origin servers (both the S3 bucket and the app server itself) and cache behaviors with path patterns to choose which paths are sent to which origin.
Signed URLs are the key to what you are trying to accomplish.
Of course, as with any security mechanism, it's important not only to verify that it works as expected, but also that it doesn't "work" when it shouldn't: be sure to verify that your secure resources are not publicly accessible without a signed URL. If your bucket policy or CloudFront distribution is misconfigured to allow public access, or if you wrongly upload secure content to S3 with x-amz-acl: public-read, then of course you have defeated your own security efforts. The services assume you know what you are doing, so these configurations are technically valid. Don't blindly follow configuration or troubleshooting advice without understanding its implications.
Does using CloudFront to serve the files act as a layer of security since the url in the browser is not actually from s3?
You cannot call it a security layer as such, but it will not expose the S3 bucket URL, because CloudFront serves cached copies of the S3 objects. If you use signed URLs for CloudFront, then it does add security.
Is it bad, security-wise, if the bucket name and structure is visible in the URL (if page source is viewed in the browser)?
It is not bad to expose the S3 bucket URL if the served content is public and open to all. You only have to take care to apply a proper policy so that put-object access to the bucket is not open to the world.
Is it possible to configure s3 to use an alias for the bucket so that I can access the same content via a different URL and therefore not expose the bucket name?
Yes, this is possible. Check this AWS documentation: https://docs.aws.amazon.com/AmazonS3/latest/dev/VirtualHosting.html#VirtualHostingCustomURLs
What would be the best way to ensure proper file security while also allowing users to have easy access to their content?
Using proper IAM policies and granular, per-user access to the S3 content ensures proper security.
So here's the problem I have.
I have some objects (files) on Amazon S3, which are not publicly accessible by everyone.
Now I want to build a web app for a service and one of the features of this service is that users can upload files which are saved onto S3 and the user can allow other users of the service (but not the public in general) to access to these files. So I need the app to be able to do the following:
1) Capture the request to the S3 object so that I can record it for analytics and metric purposes.
2) Authenticate the user account making the request to access the S3 object to see if they're authorized to download it.
Now, while I understand that Amazon has S3 IAM policies, I find that they're applied on a per Amazon User, Group and/or Bucket basis so I'm not sure I can solve my problem using S3 IAM policies.
My best guess is to create a route /request/<object-id> on my web app to accept the request, save info about the request for analytics/metric purposes, then authorize the user making the request based on the file in question, and finally either serve the file (by redirecting the request to S3) or return an error response if the user is not authorized to access the object.
Is this a good way of doing it? Are there any caveats / issues / considerations with this approach? Am I wrong about not using IAM policies? If so, am I overlooking something?
Also, I should point out that I did small proof-of-concept by testing it on PHP and to redirect the request to Amazon S3 to serve the object I use the header command like so:
if ($allowed) {
    header("Location: $S3ObjectURL");
} else {
    header($_SERVER["SERVER_PROTOCOL"] . " 404 Not Found");
}
and while it did work, I'm wondering if I'm doing it correctly. Should I be redirecting with a particular HTTP code (301? 302?) and why?
I'd appreciate any suggestions and feedback.
You don't need to rely upon IAM for each user of your service. Instead, you can generate a pre-signed URL for each object on your web server and return that to the user so they can use it to download the object. This provides a lot of added security because the URLs are temporary and need to be regenerated each time the user wants to access the object (allowing you to reauthenticate them). Check out the tutorial on how to do this here.
I have a bucket with files in it on Amazon S3. I have access to the PHP API and a server that can send requests to Amazon on command.
What I want to do is grant access to a file in my bucket using an HTTP GET/POST request. From what I understand using this function:
get_object_url ( $bucket, $filename, $preauth, $opt )
I can make the file publicly accessible for the $preauth amount of time at a given URL. I don't want to do that, I want the file to be privately available at a URL with required POST or GET credentials (deciding who can access the file would be based on a database containing application 'users' and their permissions). I understand the security implications of passing any kind of credentials over GET or POST on a non-HTTPS connection.
Is this possible? I could just download the file from S3 to my server for the extent of the transaction and then do all the access control on my own box, but that's an expensive solution (two file downloads instead of one, when my server shouldn't have had to do a download at all) to a seemingly easy problem.
The short answer is no.
You could look at Amazon's IAM for some more ways to secure the content, especially in conjunction with CloudFront, but essentially there is no way to provide access to content by passing along a username and password.
Of course, if you are already authenticating users on your site, then you can only supply the signed url to those users. The url only has to be valid at the time the user initiates the download and not for the entire duration of the download.
Also, if you intend to use your server as a proxy between S3 and the user you'll be removing a lot of the benefits of using S3 in the first place. But you could use EC2 as the server to remove the extra cost you mentioned - transfers between S3 and EC2 are free.
I am developing a web app where video files are stored on Amazon S3 and using CloudFront is an optional feature which can be turned on and off at any time.
I have a bunch of video files set with private ACLs, and I use signed URLs to access them. This works great.
However, I want to create a CloudFront RTMP distribution on that bucket, but it would be difficult to programmatically update every single (Could be well over 300) object's ACL each time (And would take a long time for all the requests to happen since you can't do it by batch, right?).
Is there a way to either:
Set ACLs in bulk, in one call?
Set a bucket access policy so that CloudFront can read any private files in the bucket?
I have attempted creating an Origin Access Identity, and then adding this to the bucket's Access Control Policy but this doesn't appear to work.
And finally do I still need to sign the URLs when I send them to the video player?
This all needs to be done programmatically in PHP, so using CloudBerry and such won't be helpful to me, unfortunately.
This is a useful guide to get started, it tells how to set up the private distribution:
http://www.bucketexplorer.com/documentation/cloudfront--how-to-create-private-streaming-distribution.html
You can set the ACLs via the AWS API by looping through your videos one at a time (I don't think this can be done in bulk; even BucketExplorer does this in a queue). You only need to set the ACLs on each file once. Make sure you grant access to the canonical user of your distribution's Origin Access Identity, so that the distribution can access the protected files from the S3 origin. You then need to set up a key pair and a trusted signer.
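As a sketch of the per-object grant, here is the access-control-policy structure you would pass to an SDK ACL call, shown as the dict boto3's put_object_acl takes for its AccessControlPolicy parameter (the PHP SDK's putObjectAcl accepts the analogous fields); the canonical user IDs are placeholders.

```python
def oai_read_grant(owner_id, oai_canonical_user_id):
    """Access control policy granting READ on one object to the
    distribution's Origin Access Identity canonical user, while the
    bucket owner keeps FULL_CONTROL; apply it once to each video."""
    return {
        "Owner": {"ID": owner_id},
        "Grants": [
            {"Grantee": {"Type": "CanonicalUser", "ID": owner_id},
             "Permission": "FULL_CONTROL"},
            {"Grantee": {"Type": "CanonicalUser", "ID": oai_canonical_user_id},
             "Permission": "READ"},
        ],
    }
```

In the loop you would call the SDK's put-object-ACL operation with this structure for each key; since ACL calls replace the whole grant list, the owner's FULL_CONTROL grant must be included each time.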
You do need to sign the URLs every time someone accesses the video. There are a number of scripts available. This is a useful guide for Ruby, but you could quite easily rewrite the code in PHP:
http://r2d6.com/posts/1301220789-setting-up-private-streaming-with-cloudfront-and-ruby-on-rails