I am developing a web app where video files are stored on Amazon S3, and CloudFront is an optional feature that can be turned on and off at any time.
I have a bunch of video files set with private ACLs, and I use signed URLs to access them. This works great.
However, I want to create a CloudFront RTMP distribution on that bucket, but it would be difficult to programmatically update every single object's ACL each time (there could be well over 300, and it would take a long time for all the requests to happen since you can't do it in batch, right?).
Is there a way to either:
Set ACLs in bulk, in one call?
Set a bucket access policy so that CloudFront can read any private files in the bucket?
I have attempted creating an Origin Access Identity and adding it to the bucket's access control policy, but this doesn't appear to work.
And finally do I still need to sign the URLs when I send them to the video player?
This does all need to be done programmatically in PHP, so using CloudBerry and such won't be helpful to me, unfortunately.
This is a useful guide to get started, it tells how to set up the private distribution:
http://www.bucketexplorer.com/documentation/cloudfront--how-to-create-private-streaming-distribution.html
You can set the ACLs via the AWS API by looping through your videos one at a time (I don't think this can be done in bulk; even BucketExplorer does this in a queue). You only need to set the ACLs on each file once. Make sure you grant access to the Canonical User from your distribution's Origin Access Identity, so the distribution can read the protected files from the S3 origin. You then need to set up a key pair and a trusted signer.
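Since there is no bulk-ACL API call, the loop can at least be parallelized. Below is a sketch of the idea in Python; the stub function stands in for the real per-object SDK call (the boto3 call shown in the comment is one way to do it, and the canonical user IDs, bucket name, and keys are all placeholders). Note that putting an ACL replaces the existing grants, so the owner's full-control grant has to be restated too:

```python
from concurrent.futures import ThreadPoolExecutor

# Placeholder for the canonical user ID of your Origin Access Identity.
OAI_CANONICAL_USER = "<oai-canonical-user-id>"
granted = []

def grant_read_to_oai(key):
    # Stand-in for the real per-object SDK call; with boto3 this would be
    # something like:
    #   s3.put_object_acl(Bucket="my-bucket", Key=key,
    #                     GrantRead=f'id="{OAI_CANONICAL_USER}"',
    #                     GrantFullControl='id="<owner-canonical-user-id>"')
    granted.append(key)  # the stub only records the call

video_keys = [f"videos/class-{i}.mp4" for i in range(300)]

# Each object needs its own request, but a small thread pool keeps
# 300+ sequential round trips from taking forever.
with ThreadPoolExecutor(max_workers=10) as pool:
    list(pool.map(grant_read_to_oai, video_keys))
```

Remember this only needs to run once per object, not on every playback.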
You do need to sign the URLs every time someone accesses the video. There are a number of scripts available. This is a useful guide for Ruby, but you could quite easily rewrite the code in PHP:
http://r2d6.com/posts/1301220789-setting-up-private-streaming-with-cloudfront-and-ruby-on-rails
I am, for the first time, implementing file uploads using S3 (in this case specifically user profile avatar images) using Flysystem. I'm currently at the point where I have created an S3 bucket, and a user can upload an image, which is then visible online in the bucket console.
I now need the ability to display those images when requested (i.e. when viewing that user's profile). I assumed that the process for this would be to generate the URL (e.g. https://s3.my-region.amazonaws.com/my-bucket/my-filename.jpeg) and use that as the src of an image tag; however, to do this, the file (or bucket) must be marked as public. This seemed reasonable to me because the files within are not really private. When updating the bucket to public status, however, you are presented with a message stating:
We highly recommend that you never grant any kind of public access to your S3 bucket.
Is there a different, or more secure, way to achieve direct image linking like this that a newcomer to AWS is not seeing?
The warning is there because many people unintentionally make information public. However, if you are happy for these particular files to be accessed by anyone on the Internet at any time, then you can certainly make the individual objects public or create an Amazon S3 bucket policy to make a particular path public.
An alternative method of granting access is to create an S3 Pre-Signed URL, which is a time-limited URL that grants access to a private object.
Your application would be responsible for verifying that the user should be given access to a particular object. It would then generate the URL, supplying a duration for the access. Your application can then insert the URL into the src field and the image would appear as normal. However, once the duration has passed, it will no longer be accessible.
This is typically used when providing access to private files -- similar to how Dropbox gives access to a private file without making the file itself public.
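The flow described above -- verify the user, generate a time-limited URL, drop it into src -- can be sketched with the standard library alone. This is a stdlib illustration of AWS Signature Version 4 query presigning, not the recommended path (an AWS SDK presign helper does all of this for you); the bucket, key, and credentials are placeholders:

```python
import datetime
import hashlib
import hmac
import urllib.parse

def presign_get(bucket, key, access_key, secret_key, region, expires=300):
    """Stdlib-only sketch of an S3 SigV4 query-string pre-signed GET URL."""
    now = datetime.datetime.utcnow()
    amz_date = now.strftime("%Y%m%dT%H%M%SZ")
    datestamp = now.strftime("%Y%m%d")
    host = f"{bucket}.s3.{region}.amazonaws.com"
    scope = f"{datestamp}/{region}/s3/aws4_request"
    params = {
        "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
        "X-Amz-Credential": f"{access_key}/{scope}",
        "X-Amz-Date": amz_date,
        "X-Amz-Expires": str(expires),
        "X-Amz-SignedHeaders": "host",
    }
    qs = "&".join(
        f"{urllib.parse.quote(k, safe='')}={urllib.parse.quote(v, safe='')}"
        for k, v in sorted(params.items()))
    canonical_request = "\n".join([
        "GET", "/" + urllib.parse.quote(key), qs,
        f"host:{host}\n", "host", "UNSIGNED-PAYLOAD"])
    string_to_sign = "\n".join([
        "AWS4-HMAC-SHA256", amz_date, scope,
        hashlib.sha256(canonical_request.encode()).hexdigest()])

    def _hmac(k, msg):
        return hmac.new(k, msg.encode(), hashlib.sha256).digest()

    # Derive the signing key from the secret, then sign the string-to-sign.
    signing_key = _hmac(("AWS4" + secret_key).encode(), datestamp)
    for part in (region, "s3", "aws4_request"):
        signing_key = _hmac(signing_key, part)
    signature = hmac.new(signing_key, string_to_sign.encode(),
                         hashlib.sha256).hexdigest()
    return f"https://{host}/{urllib.parse.quote(key)}?{qs}&X-Amz-Signature={signature}"

# Placeholder bucket, key, and credentials -- not real values.
url = presign_get("my-bucket", "avatars/user-42.jpeg",
                  "AKIAEXAMPLE", "example-secret-key", "us-east-1")
```

The resulting url can go straight into an image tag's src; once the X-Amz-Expires window passes, S3 rejects the request.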
I would recommend putting CloudFront in front (no pun intended) of the static assets. This way content is served from edge locations worldwide, not just the region you uploaded to, and I think it would cost you less because it does not use bandwidth from your S3 bucket directly.
This way you give CloudFront permission to read from your S3 bucket, and there is no need to manually set files public in your bucket. Search for how to set up IAM permissions for CloudFront and S3 to get you set up.
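For reference, the usual way to let a distribution read private objects is a bucket policy naming the origin access identity's canonical user. A sketch, with placeholder IDs and bucket name:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "AllowCloudFrontOAIRead",
    "Effect": "Allow",
    "Principal": {
      "CanonicalUser": "<canonical-user-id-of-your-origin-access-identity>"
    },
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::my-bucket/*"
  }]
}
```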
I have a (php) website where teachers upload recordings of their class, and the students can log in and then play back the recording.
I want to make these videos more secure. Currently, the videos are stored on my server, and anyone with the URL can download them. So, first, I want to store them somewhere they can't be downloaded just by using a URL. And second, I need to stop students from right-clicking and saving the video as it is being played.
I'm trying to work this out with s3 but not getting it...
Is this possible? Does it need to use a special player? Does streaming the video help (can any video be streamed)?
I appreciate the help, I've spent many hours researching this and just getting more confused as I go along!
There are a couple of options you may wish to use.
1. Amazon CloudFront RTMP Distribution
Amazon CloudFront is a Content Distribution Network that caches content closer to users worldwide, in over 60 locations. It can also serve content over the Real-Time Messaging Protocol (RTMP). This means that your web page could present a media player (e.g. JW Player, Flowplayer, or Adobe Flash) and CloudFront can serve the content.
See: Working with RTMP Distributions
CloudFront Distributions can also service private content. Your application can generate a URL that provides content for a limited period of time. The content is served via a media protocol, so the entire file cannot be easily downloaded.
See: Serving Private Content through CloudFront
2. Amazon S3 Pre-Signed URLs
By default, all objects in Amazon S3 are private. You can then add permissions so that people can access your objects. This can be done via:
Access Control List permissions on individual objects
A Bucket Policy (as per yours above)
IAM Users and Groups
A Pre-Signed URL
A Pre-Signed URL can be used to grant access to S3 objects as a way of "overriding" access controls. A normally private object can be accessed via a URL by appending an expiry time and signature. This is a great way to serve private content without requiring a web server.
Similar to the above example with CloudFront, your application can generate a URL that provides access to S3 content for a limited time period. Once the period expires, the Pre-Signed URL will no longer function. However, during the active period, people would be able to download the entire file, so this might not be advisable for video content you wish to protect.
I have an application where I am storing users' images and audio. We use S3 for storage and I want to make sure that the user data we are storing is both secure and not completely accessible by the outside world.
Here is what I have thought of so far.
use pre-signed URLs with a limited lifespan so that the url in the browser will not be usable once the page loads.
Configure my bucket content to only be accessible via my domain - not sure if that will interfere with a user's ability to download their content.
I had the following questions:
Does using CloudFront to serve the files act as a layer of security since the url in the browser is not actually from s3?
Is it bad, security-wise, if the bucket name and structure is visible in the URL (if page source is viewed in the browser)?
Is it possible to configure s3 to use an alias for the bucket so that I can access the same content via a different URL and therefore not expose the bucket name?
What would be the best way to ensure proper file security while also allowing users to have easy access to their content?
Thanks in advance!
use pre-signed URLs with a limited lifespan so that the url in the browser will not be usable once the page loads.
This, or something very similar, is arguably the best strategy from a standpoint of security and practicality. Either your site generates the signed URLs when the page loads, or the links point back to your server; when clicked, your server verifies the session/authorization, signs a URL on demand, and sends the browser there with a temporary redirect.
HTTP/1.1 302 Found
Location: https://example-bucket.s3...
Cache-Control: private, no-cache, no-store
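A runnable stdlib sketch of that redirect endpoint follows. The signed URL is a dummy placeholder; a real handler would check the session and sign a fresh URL on demand before redirecting. The client at the bottom fetches once without following the redirect, just to show the 302 surfacing:

```python
import http.server
import threading
import urllib.error
import urllib.request

# Placeholder -- a real app would generate this per request.
SIGNED_URL = "https://example-bucket.s3.amazonaws.com/file.mp4?X-Amz-Signature=placeholder"

class Redirector(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # Real code: verify the user's session/authorization here,
        # then sign a short-lived URL for the requested object.
        self.send_response(302)
        self.send_header("Location", SIGNED_URL)
        self.send_header("Cache-Control", "private, no-cache, no-store")
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the demo quiet

server = http.server.HTTPServer(("127.0.0.1", 0), Redirector)
threading.Thread(target=server.serve_forever, daemon=True).start()

class NoRedirect(urllib.request.HTTPRedirectHandler):
    def redirect_request(self, *args, **kwargs):
        return None  # refuse to follow, so the 302 surfaces as HTTPError

try:
    urllib.request.build_opener(NoRedirect).open(
        f"http://127.0.0.1:{server.server_address[1]}/download/lecture.mp4")
except urllib.error.HTTPError as err:
    status, location = err.code, err.headers["Location"]
server.shutdown()
```

The Cache-Control header matters: without it, an intermediary could cache the redirect and hand the signed URL to the wrong user.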
Configure my bucket content to only be accessible via my domain - not sure if that will interfere with a user's ability to download their content.
That's a really primitive tactic, easily defeated. It's okay if you're trying to stop hotlinking to your content, but it doesn't qualify as a "security" measure.
Does using CloudFront to serve the files act as a layer of security since the url in the browser is not actually from s3?
Well, not really. The URL in the browser pointing directly to S3 does not really have security implications. See below.
Is it bad, security-wise, if the bucket name and structure is visible in the URL (if page source is viewed in the browser)?
Not if you secure your content properly. For example, I have a site where the user's "department_id" is clearly visible in the path to a file that is downloadable using a signed S3 URL. If the user recognizes that number and tries incrementing or decrementing it to see reports from other departments, it doesn't matter, because they are not in possession of a signed URL for that other file. Signed URLs are tamper-proof to the point of computational infeasibility -- that is, you cannot change anything in a signed URL without invalidating it, and you cannot feasibly reverse-engineer one to the point of having enough information to sign a URL for a different object before the heat death of the universe. Of course, a structure that embeds obvious/guessable values in a URL would be a terrible practice if the content were publicly accessible.
Is it possible to configure s3 to use an alias for the bucket so that I can access the same content via a different URL and therefore not expose the bucket name?
Knowing the bucket name doesn't give you any particularly useful information. In a secure configuration, knowing the name of the bucket doesn't matter. In certain scenarios, S3 error messages or response headers may actually reveal the bucket's name or location, so this isn't necessarily always preventable. A CloudFront URL would hide the bucket name, but again, this doesn't give you any meaningful protection, since a bucket name is not a sensitive piece of information.
What would be the best way to ensure proper file security while also allowing users to have easy access to their content?
As above -- use signed URLs.
CloudFront offers a couple of additional capabilities that differ from S3.
First, note that when a bucket is behind CloudFront, CloudFront can use its own credentials -- an origin access identity -- to sign the requests it deems authorized, so that S3 will allow CloudFront to access the bucket on behalf of the requester and deliver the content.
You can then use CloudFront pre-signed URLs, which use a different algorithm than S3 signed URLs. Two notable differences:
unlike an S3 signed URL, a CloudFront signed URL can allow the requester to access more than a single object. For example, you could allow access to https://dxxxexample.cloudfront.net/user_files/${user_id}/* (where ${user_id} is a variable containing the user's id, which you substitute into the string before signing the URL). You might do this as an optimization: your code generates the query-string portion of the signed URL once and reuses it while building a page, avoiding the CPU load of signing many URLs individually in order to render a single page.
unlike an S3 signed URL, a CloudFront signed URL optionally allows you to include the user's IP address, making the signature only usable from that single IP address. You will need to balance this extra security against the possibility of a user's IP address changing while using your site, since that is less likely on desktops and more likely on mobile (particularly if switching spontaneously from mobile data to WiFi).
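The wildcard idea can be illustrated by building the custom policy document itself. This sketch stops short of the actual RSA-SHA1 signing step (which needs your CloudFront private key), so the signature and key-pair id are placeholders, as are the user id and the distribution domain:

```python
import base64
import json

def cloudfront_safe_b64(data: bytes) -> str:
    """CloudFront's base64 variant: '+' -> '-', '=' -> '_', '/' -> '~'."""
    return base64.b64encode(data).decode().translate(str.maketrans("+=/", "-_~"))

user_id = "1234"       # placeholder, substituted before signing
expires = 1900000000   # epoch seconds when the grant lapses

# Custom policy: one statement, wildcard resource, time limit.
policy = json.dumps({
    "Statement": [{
        "Resource": f"https://dxxxexample.cloudfront.net/user_files/{user_id}/*",
        "Condition": {"DateLessThan": {"AWS:EpochTime": expires}},
    }]
}, separators=(",", ":"))

encoded_policy = cloudfront_safe_b64(policy.encode())
# A real implementation would RSA-SHA1-sign `policy` with the CloudFront
# key pair's private key and append that plus the key-pair id:
query = (f"Policy={encoded_policy}"
         f"&Signature=<rsa-sha1-signature-goes-here>"
         f"&Key-Pair-Id=<your-key-pair-id>")
```

This query string is the reusable part: append it to any object URL under user_files/1234/ for the lifetime of the policy.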
CloudFront also supports the same authorization capabilities as signed URLs, but using cookies instead. If your entire site runs through CloudFront, this might be a useful option for you to consider. Your entire site could run through CloudFront by pointing your main hostname there, then configuring multiple origin servers -- both the S3 bucket and the app server itself -- and then configuring cache behaviors with path patterns to choose which paths are sent to which origin.
Signed URLs are the key to what you are trying to accomplish.
Of course, as with any security mechanism, it's important not only to verify that it works as expected, but also that it doesn't "work" when it shouldn't -- that is, be sure you verify that your secure resources are not publicly accessible without a signed URL. If your bucket policy or CloudFront distribution is misconfigured to allow public access, or if you wrongly upload secure content to S3 with x-amz-acl: public-read then of course you have defeated your own security efforts. The services assume you know what you are doing, so these configurations are technically valid. Don't blindly follow configuration or troubleshooting advice without understanding its implications.
Does using CloudFront to serve the files act as a layer of security since the url in the browser is not actually from s3?
You cannot call it a security layer, but it does not expose the S3 bucket URL, because CloudFront serves cached copies of the S3 objects. If you use signed URLs for CloudFront, then it adds security.
Is it bad, security-wise, if the bucket name and structure is visible in the URL (if page source is viewed in the browser)?
It is not bad to expose the S3 bucket URL if the served content is open to all. You only have to apply a proper policy so that PutObject access to the bucket is not open to the world.
Is it possible to configure s3 to use an alias for the bucket so that I can access the same content via a different URL and therefore not expose the bucket name?
Yes, this is possible. Check this AWS documentation: https://docs.aws.amazon.com/AmazonS3/latest/dev/VirtualHosting.html#VirtualHostingCustomURLs
What would be the best way to ensure proper file security while also allowing users to have easy access to their content?
Using proper IAM policies and granular per-user access to the S3 content ensures proper security.
Been following this guide on AWS:
AWS Serving Private Content
I have setup a web cloudfront distribution and what I am doing is serving private content from my s3 bucket using an Origin Access Identity to access the resources within.
I am running into a roadblock when it gets to the cookie and signed url authentication--I am using wordpress. I have my cloudfront key pairs, but I am not quite sure where to go from here.
I have successfully served content via the CDN when not using an OAI and utilizing W3 Total Cache and similar plugins, but too many things needed to be made public, and the bucket policies for URL and IP address restrictions are not quite working for me.
This step in the process pretty much sums up where I am stuck:
Write your application to respond to requests from authorized users either with signed URLs or with Set-Cookie headers that set signed cookies. For more information, see Choosing Between Signed URLs and Signed Cookies.
Any next steps or suggestions would be very much appreciated.
Thanks a lot!
Sean
Step 1 - Choose between signed URLs and Cookies. Use Signed Cookies if your users need access to multiple files (restricted area in Wordpress).
Step 2 - Specify the AWS accounts for the creation of signed URLs/Cookies.
Step 3 - Develop your app to set the three required cookies (CloudFront-Key-Pair-Id, CloudFront-Signature and CloudFront-Policy/CloudFront-Expires).
Alternatively, use the AWS SDK to generate signed URLs/cookies, e.g. PHP's getSignedUrl() or getSignedCookie() methods.
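A Python sketch of step 3, showing only the cookie plumbing. The policy, signature, and key-pair id values here are placeholders; getSignedCookie() (or its equivalent in another SDK) computes the real encoded policy and signature for you:

```python
from http.cookies import SimpleCookie

def cloudfront_cookie_headers(encoded_policy, signature, key_pair_id):
    """Build the three cookies CloudFront expects (custom-policy form)."""
    jar = SimpleCookie()
    jar["CloudFront-Policy"] = encoded_policy
    jar["CloudFront-Signature"] = signature
    jar["CloudFront-Key-Pair-Id"] = key_pair_id
    for morsel in jar.values():
        morsel["path"] = "/"     # scope to the whole distribution
        morsel["secure"] = True  # only send over HTTPS
    # Each morsel renders as one Set-Cookie header value:
    return [morsel.OutputString() for morsel in jar.values()]

# All three arguments are placeholder values for illustration.
headers = cloudfront_cookie_headers("ENCODEDPOLICY", "STUBSIGNATURE", "KEYPAIRID")
```

With a canned policy you would send CloudFront-Expires instead of CloudFront-Policy; everything else is the same.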
I am creating a file-sharing web site (something like WeTransfer). I was thinking of using S3 for storage, and I want to use a different hosting solution instead of EC2, so my web server will be hosted outside the Amazon cloud. In order to reduce bandwidth consumption, I need some way to let clients download and upload files directly from the browser.
I was looking at the S3 documentation, which explains how to upload a file to S3 directly from a browser client. It looks like we are pretty much exposing all the details of my S3 credentials, which someone could easily inspect and abuse.
Is there any way I can avoid this, for example by allowing users to upload/download files with temporary credentials?
Would an IAM user role work? You should be able to create a user (which will have its own UUID), give it read-only access to your S3 bucket, and pass that user's credentials into your request policy, along with content and key rules.
If you want to grant all users read/write access, you can, though allowing those users access to specific files only will be a bit more of a hassle.
I was looking at the S3 documentation, which explains how to upload a file to S3 directly from a browser client. It looks like we are pretty much exposing all the details of my S3 credentials, which someone could easily inspect and abuse.
No, you're not. When used properly, the POST-based upload interface documented on that page only gives the user a limited-time authorization to upload one file matching various criteria (e.g., its name, size, MIME type, etc.). It's quite safe to use.
Keep in mind that your S3 access key ID is not sensitive information. Exposing it to users is perfectly fine, and is in fact required for many common operations! Only the secret key needs to be kept private.
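To make that "limited-time authorization" concrete, here is a stdlib sketch of how the server side signs a browser-upload POST policy under Signature Version 4. The bucket, key prefix, and credentials are placeholders, and the exact field list comes from the S3 POST policy documentation:

```python
import base64
import datetime
import hashlib
import hmac
import json

SECRET_KEY = "example-secret"  # placeholder; keep this server-side only
ACCESS_KEY = "AKIAEXAMPLE"     # the access key ID is safe to expose
REGION, BUCKET = "us-east-1", "my-bucket"

now = datetime.datetime.utcnow()
amz_date = now.strftime("%Y%m%dT%H%M%SZ")
datestamp = now.strftime("%Y%m%d")
credential = f"{ACCESS_KEY}/{datestamp}/{REGION}/s3/aws4_request"

# The policy pins down exactly what the browser may upload.
policy = {
    "expiration": (now + datetime.timedelta(minutes=10)).strftime("%Y-%m-%dT%H:%M:%SZ"),
    "conditions": [
        {"bucket": BUCKET},
        ["starts-with", "$key", "uploads/"],
        ["content-length-range", 1, 10 * 1024 * 1024],  # 1 byte .. 10 MB
        {"x-amz-algorithm": "AWS4-HMAC-SHA256"},
        {"x-amz-credential": credential},
        {"x-amz-date": amz_date},
    ],
}
policy_b64 = base64.b64encode(json.dumps(policy).encode()).decode()

# SigV4: derive the signing key, then sign the base64 policy itself.
def _hmac(key, msg):
    return hmac.new(key, msg.encode(), hashlib.sha256).digest()

key = _hmac(("AWS4" + SECRET_KEY).encode(), datestamp)
for part in (REGION, "s3", "aws4_request"):
    key = _hmac(key, part)
signature = hmac.new(key, policy_b64.encode(), hashlib.sha256).hexdigest()

# These become hidden fields in the browser's multipart POST form.
form_fields = {
    "key": "uploads/${filename}",
    "policy": policy_b64,
    "x-amz-algorithm": "AWS4-HMAC-SHA256",
    "x-amz-credential": credential,
    "x-amz-date": amz_date,
    "x-amz-signature": signature,
}
```

The browser posts these fields plus the file itself to the bucket endpoint; S3 re-derives the signature from the policy and rejects any upload that falls outside the stated conditions, so the secret key never leaves your server.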