I am currently writing an application using the Yii framework in PHP that stores a large number of files uploaded by the users of the application. Since the number of files is ever increasing, I decided it would be beneficial to use Amazon S3 to store them; when a file is requested, the server can retrieve it and send it to the user. (The server is an EC2 instance in the same region.)
Since the files are all confidential, the server has to verify the identity of the user and their credentials before allowing them to receive a file. Is there a way to send the file to the user directly from S3 in this case, or do I have to pull the data to the server first and then serve it to the user?
If so, is there any way to cache the most recently uploaded files on the local server so that it does not have to go to S3 to look for them? In most cases, the most recently uploaded files will be requested repeatedly by multiple clients.
Any help would be greatly appreciated!
Authenticated clients can download files directly from S3 by signing the appropriate URLs on the server prior to displaying the page/URLs to the client.
For more information, see: http://s3.amazonaws.com/doc/s3-developer-guide/RESTAuthentication.html
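For illustration, here is a minimal sketch of generating such a pre-signed URL with the AWS SDK for PHP v3 (the region, bucket, and key below are placeholders):

    <?php
    require 'vendor/autoload.php';

    use Aws\S3\S3Client;

    // Placeholder region; on EC2 the SDK picks up instance-profile credentials.
    $s3 = new S3Client([
        'version' => 'latest',
        'region'  => 'us-east-1',
    ]);

    // Build a GetObject command for the private file (placeholder bucket/key)...
    $cmd = $s3->getCommand('GetObject', [
        'Bucket' => 'my-confidential-bucket',
        'Key'    => 'uploads/user-42/report.pdf',
    ]);

    // ...and sign it so the URL is only valid for 15 minutes.
    $request   = $s3->createPresignedRequest($cmd, '+15 minutes');
    $signedUrl = (string) $request->getUri();

    // Hand $signedUrl only to users who passed your own authentication check.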
Note that for confidential files you may also want to consider server-side/client-side encryption. Finally, for static files (such as images) you may want to set the appropriate cache headers as well.
Use AWS CloudFront to serve these static files. Rather than sending the files to the user, send them links to the files. The links need to be CloudFront links, not direct links to the S3 bucket.
This has the benefit of keeping load low on your server as well as caching files close to your users for better performance.
More details here: Serving Private Content through CloudFront
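As a rough sketch, signing a CloudFront URL with the AWS SDK for PHP v3 looks roughly like this (the distribution domain, key pair ID, and key path are placeholders, and a CloudFront key pair must already be configured):

    <?php
    require 'vendor/autoload.php';

    use Aws\CloudFront\CloudFrontClient;

    $cloudFront = new CloudFrontClient([
        'version' => 'latest',
        'region'  => 'us-east-1',
    ]);

    // All identifiers below are placeholders for your own distribution setup.
    $signedUrl = $cloudFront->getSignedUrl([
        'url'         => 'https://d1234.cloudfront.net/uploads/report.pdf',
        'expires'     => time() + 300, // link is valid for 5 minutes
        'private_key' => '/path/to/cloudfront-private-key.pem',
        'key_pair_id' => 'APKAEXAMPLEKEYID',
    ]);

    // Send $signedUrl to the authenticated user instead of a direct S3 link.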
Related
I was wondering if it's possible to manipulate or change the names of files upon deploying the distribution.
The reason for doing this is that we don't have the actual files on our own servers; they are provided by a partner. Is it somehow possible to run a PHP function upon deploy to change the name of the file on the CDN?
So e.g.
partner.example.com/image/123120913.jpg
to
1234.cloudfront.com/image/SHOE-NAME.jpg
One way is to first import all the images to local storage and change the filenames on that download, but that seems very excessive.
We can provide the image name easily, if it's possible to run a PHP function upon deploying.
Amazon CloudFront is a caching service that retrieves content from a specified origin (e.g. a web server or Amazon S3), stores it in a cache and then serves it to users.
Amazon CloudFront does not create aliases to filenames. It simply passes the request to the origin. If the origin is a web server, you could write a web app that returns any type of information given the request URL, but CloudFront cannot rename or map filenames.
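If you do put a small web app at the origin, the mapping could live there instead. A hypothetical sketch (the lookup table stands in for a database, and the partner URL is made up):

    <?php
    // Hypothetical origin script: CloudFront forwards /image/SHOE-NAME.jpg here,
    // and we map the friendly name back to the partner's real filename.
    $friendlyName = basename($_SERVER['REQUEST_URI']); // e.g. "SHOE-NAME.jpg"

    // Stand-in for a database lookup: friendly name => partner filename.
    $map = ['SHOE-NAME.jpg' => '123120913.jpg'];

    if (!isset($map[$friendlyName])) {
        http_response_code(404);
        exit;
    }

    // Stream the partner's image back; CloudFront then caches the response.
    // (Requires allow_url_fopen; a cURL fetch would work as well.)
    header('Content-Type: image/jpeg');
    header('Cache-Control: public, max-age=86400');
    readfile('https://partner.example.com/image/' . $map[$friendlyName]);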
I am creating a file-sharing kind of web site (something like WeTransfer). I was thinking of using S3 for storage, and I want to use a different hosting solution instead of EC2, so my web server will be on a different host outside the Amazon cloud. In order to reduce bandwidth consumption, I will need some way to let clients download and upload files directly from the browser.
I was looking at the S3 documentation, which explained how to upload a file to S3 directly from a browser client. It looks like we are pretty much exposing all the details of my S3 credentials, which someone could easily inspect and abuse.
Is there any way I can avoid this by doing something like allowing users to upload/download files with temporary credentials?
Would an IAM user role work? You should be able to create a user (which will have its own UUID), give it read-only access to your S3 repository, and pass that user's credentials into your request policy, along with content and key rules.
If you want to grant all users read/write access, you can, though allowing those users access only to specific files will be a bit more of a hassle.
I was looking at the S3 documentation, which explained how to upload a file to S3 directly from a browser client. It looks like we are pretty much exposing all the details of my S3 credentials, which someone could easily inspect and abuse.
No, you're not. When used properly, the POST-based upload interface documented on that page only gives the user a limited-time authorization to upload one file matching various criteria (e.g., its name, size, MIME type, etc.). It's quite safe to use.
Keep in mind that your S3 access key is not sensitive information. Exposing it to users is perfectly fine, and is in fact required for many common operations! Only the secret key needs to be kept private.
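A minimal sketch of generating such a limited-time POST policy with the AWS SDK for PHP v3 (the bucket name and key prefix are placeholders); the secret key never leaves the server, only the signed policy does:

    <?php
    require 'vendor/autoload.php';

    use Aws\S3\S3Client;
    use Aws\S3\PostObjectV4;

    $s3 = new S3Client([
        'version' => 'latest',
        'region'  => 'us-east-1',
    ]);

    // The browser may only upload under this key prefix, only up to 10 MB,
    // and only for the next 10 minutes.
    $formInputs = ['key' => 'uploads/${filename}'];
    $options    = [
        ['bucket' => 'my-upload-bucket'],              // placeholder bucket
        ['starts-with', '$key', 'uploads/'],           // restrict the key
        ['content-length-range', 1, 10 * 1024 * 1024], // size limit
    ];

    $postObject = new PostObjectV4($s3, 'my-upload-bucket', $formInputs, $options, '+10 minutes');

    // Render these into a plain HTML <form> that posts directly to S3.
    $attributes = $postObject->getFormAttributes(); // action, method, enctype
    $inputs     = $postObject->getFormInputs();     // hidden policy/signature fields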
I have a web page on a web host, and images are stored on Amazon S3. I want to be able, with PHP, to download multiple images from Amazon S3 through my web page as a zip file.
What are my options and what is the best?
As far as I know, it is not possible to compress files on S3. Can I use AWS Lambda?
Best solution I've come across.
The user selects on my website which images they want to download.
I get the file names from my database on my web host and download the images from S3 to a temporary directory on my web host.
A zip file is created in a temporary directory and a link is sent to the user.
After a certain time, I clean out the temporary directory (with a script) on my web host.
But it would be great if there were a way to create and download the zip file without going through my host.
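For what it's worth, a minimal sketch of the download-and-zip steps with the AWS SDK for PHP v3 and ZipArchive (bucket and key names are placeholders):

    <?php
    require 'vendor/autoload.php';

    use Aws\S3\S3Client;

    $s3 = new S3Client(['version' => 'latest', 'region' => 'us-east-1']);

    $keys   = ['photos/img1.jpg', 'photos/img2.jpg']; // file names from the database
    $tmpDir = sys_get_temp_dir() . '/' . uniqid('zip_', true);
    mkdir($tmpDir);

    $zip     = new ZipArchive();
    $zipPath = $tmpDir . '/images.zip';
    $zip->open($zipPath, ZipArchive::CREATE);

    foreach ($keys as $key) {
        $local = $tmpDir . '/' . basename($key);
        // Download each image from S3 into the temporary directory...
        $s3->getObject([
            'Bucket' => 'my-image-bucket', // placeholder
            'Key'    => $key,
            'SaveAs' => $local,
        ]);
        // ...and add it to the archive.
        $zip->addFile($local, basename($key));
    }
    $zip->close();

    // Send the user a link to $zipPath; a cleanup script removes $tmpDir later.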
AWS S3 is "basic building blocks", so it doesn't support a feature like zipping multiple objects together.
You've come up with a good method to do it, though you could stream the objects into a zip file rather than downloading them. EC2 instances can do this very quickly because they tend to have fast connections to S3.
Lambda doesn't work for this, as it is only triggered when an object is placed into an S3 bucket. You are doing the opposite.
I have a bucket with files in it on Amazon S3. I have access to the PHP API and a server that can send requests to Amazon on command.
What I want to do is grant access to a file in my bucket using an HTTP GET/POST request. From what I understand using this function:
get_object_url ( $bucket, $filename, $preauth, $opt )
I can make the file publicly accessible for the $preauth amount of time at a given URL. I don't want to do that, I want the file to be privately available at a URL with required POST or GET credentials (deciding who can access the file would be based on a database containing application 'users' and their permissions). I understand the security implications of passing any kind of credentials over GET or POST on a non-HTTPS connection.
Is this possible? I could just download the file from S3 to my server for the duration of the transaction and then do all the access control on my own box, but that's an expensive solution (two file downloads instead of one, when my server shouldn't have had to do a download at all) to a seemingly easy problem.
The short answer is no.
You could look at Amazon's IAM for some more ways to secure the content, especially in conjunction with CloudFront, but essentially there is no way to provide access to content by passing along a username and password.
Of course, if you are already authenticating users on your site, then you can only supply the signed url to those users. The url only has to be valid at the time the user initiates the download and not for the entire duration of the download.
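With the legacy SDK's get_object_url() from the question, that flow could look roughly like this (is_logged_in() and user_may_download() are hypothetical stand-ins for your own checks, and the bucket/key are placeholders):

    <?php
    // $s3 is an AmazonS3 instance from the legacy SDK (sdk.class.php).
    // Only hand out a signed URL after your own permission check passes.
    if (!is_logged_in() || !user_may_download($currentUser, 'reports/q3.pdf')) {
        http_response_code(403);
        exit;
    }

    // The URL stays valid for 5 minutes -- long enough to start the download.
    $url = $s3->get_object_url('my-bucket', 'reports/q3.pdf', '5 minutes');
    header('Location: ' . $url);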
Also, if you intend to use your server as a proxy between S3 and the user you'll be removing a lot of the benefits of using S3 in the first place. But you could use EC2 as the server to remove the extra cost you mentioned - transfers between S3 and EC2 are free.
I am launching a web application soon that will be serving a fair number of images, so I'd like to have a main web server and a static content server, and possibly a separate database server later on.
I'd like the user to:
login and be able to upload a photo
the photo is renamed to a random string
the photo is processed into a thumbnail
the photo and thumbnail are stored in a filesystem on the static server
the photo and thumbnail's directory and filename are stored in a MySQL database
The problem is I don't know how to have the user instantly upload an image to a separate server.
I thought about using Amazon S3, but you can't edit filenames before posting them (through POST; I'd rather not use the REST API).
I could also use PHP's FTP functions to upload to a separate server, but I'd like to dynamically create folders based on the properties of the image (so I don't have all the images in one big folder, obviously), and I don't know how this would work if I used FTP...
Or I could save them locally and use a CDN; I'm not too familiar with CDNs, so I don't know if using them this way would be appropriate or cost-effective.
What are my options here? I'd like the images to be available instantly (no cron jobs/queues)
Thanks.
You can create directories over FTP with PHP, so that should not be a showstopper.
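For example, something along these lines (the host, credentials, and paths are placeholders):

    <?php
    // Placeholder host and credentials.
    $conn = ftp_connect('static.example.com');
    ftp_login($conn, 'uploader', 'secret');

    // Build a directory path from image properties, creating each level
    // if it does not exist yet (ftp_mkdir fails on existing directories).
    $dir = 'images/2024/06';
    foreach (explode('/', $dir) as $part) {
        if (!@ftp_chdir($conn, $part)) {
            ftp_mkdir($conn, $part);
            ftp_chdir($conn, $part);
        }
    }

    // Upload the thumbnail into the directory we just ensured exists.
    ftp_put($conn, 'thumb_1234.jpg', '/tmp/thumb_1234.jpg', FTP_BINARY);
    ftp_close($conn);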
I thought about using Amazon S3, but you can't edit filenames before posting them (through POST; I'd rather not use the REST API).
If you let your PHP server do the uploading to S3 via POST, you can name the files whatever you want. You should do that anyway; letting your users upload to S3 directly, without your PHP code in between, sounds bad for security to me.
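A minimal sketch of that server-side flow with the AWS SDK for PHP v3 (the bucket name and form field are placeholders):

    <?php
    require 'vendor/autoload.php';

    use Aws\S3\S3Client;

    $s3 = new S3Client(['version' => 'latest', 'region' => 'us-east-1']);

    // The user uploads to your PHP server first, so you control the name.
    $randomName = bin2hex(random_bytes(16)) . '.jpg';

    $s3->putObject([
        'Bucket'      => 'my-image-bucket',            // placeholder bucket
        'Key'         => 'photos/' . $randomName,      // renamed before upload
        'SourceFile'  => $_FILES['photo']['tmp_name'], // the browser upload
        'ContentType' => 'image/jpeg',
    ]);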