AWS Lambda as a file verification system for S3 uploads - PHP

I need to use s3 to store content from users and govern access, in a social network type of service.
So far this is what I have thought of doing:
1. The client tells the LAMP server it wants to upload a file.
2. LAMP authenticates the user and generates a presigned URL (with an object key) for S3 where the user can upload. It also creates a signed version of that key using a server-side private key. Then it adds this key, along with the user who started the upload and the start time, to a MySQL table. (A sketch of this step follows the list.)
3. LAMP sends the key and the digital signature from step 2 to the client.
4. The client uploads the file to S3.
5. After finishing, the client tells LAMP that the upload is complete, sending back the key and the digital signature.
6. LAMP verifies that the key and the signature match. If they do, LAMP knows the client is honest about the key it was given (and has not made one up).
7. LAMP then checks S3 to make sure the file with that key exists; if it does, it deletes the row that was added in step 2.
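A rough sketch of step 2, assuming the AWS SDK for PHP (v3) and PDO; the bucket name, table name, HMAC secret, and user id are placeholders:

<?php
// Sketch of step 2: presigned PUT URL plus an HMAC "signature" over the key.
// Assumes the AWS SDK for PHP v3 and PDO; all names are placeholders.
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3 = new S3Client(['version' => 'latest', 'region' => 'us-east-1']);

$currentUserId = 42;                                   // placeholder: the authenticated user
$objectKey     = 'uploads/' . bin2hex(random_bytes(16));

// Presigned PUT URL the client will upload to.
$cmd = $s3->getCommand('PutObject', [
    'Bucket' => 'my-user-content-bucket',
    'Key'    => $objectKey,
]);
$uploadUrl = (string) $s3->createPresignedRequest($cmd, '+10 minutes')->getUri();

// The "digital signature": an HMAC of the key with a server-side secret, so the
// server can later confirm the client did not invent the key.
$secret    = getenv('UPLOAD_HMAC_SECRET') ?: 'change-me';
$signature = hash_hmac('sha256', $objectKey, $secret);

// Record the pending upload (the MySQL row from step 2).
$pdo = new PDO('mysql:host=localhost;dbname=app', 'app_user', 'app_pass');
$pdo->prepare('INSERT INTO pending_uploads (object_key, user_id, started_at) VALUES (?, ?, NOW())')
    ->execute([$objectKey, $currentUserId]);

echo json_encode(['key' => $objectKey, 'url' => $uploadUrl, 'signature' => $signature]);

In step 6 the server would recompute the HMAC over the key the client sends back and compare it with hash_equals() before running the existence check in step 7.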
After asking around, I was told that it is not possible for S3 itself to verify that a file is a 'valid' file of a certain type (I want to enforce images only).
So I decided to use AWS Lambda to verify it (if it's the wrong kind of file, just delete it). Lambda can be fired just after the file is uploaded to S3.
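For concreteness, the check itself would look roughly like the following. Lambda's built-in runtimes are not PHP, so treat this as a sketch of the logic (written with the AWS SDK for PHP, placeholder names) rather than an actual Lambda handler:

<?php
// Sketch of the post-upload check: fetch the new object, verify it really is an
// image by its magic bytes, and delete it if not. Bucket/key are placeholders;
// the real check would live inside the Lambda function.
require 'vendor/autoload.php';

use Aws\S3\S3Client;

function verifyUploadedImage(S3Client $s3, string $bucket, string $key): bool
{
    $tmp = tempnam(sys_get_temp_dir(), 's3chk');
    $s3->getObject(['Bucket' => $bucket, 'Key' => $key, 'SaveAs' => $tmp]);

    // exif_imagetype() looks at the file's content, not its extension.
    $type    = exif_imagetype($tmp);
    $isImage = in_array($type, [IMAGETYPE_JPEG, IMAGETYPE_PNG, IMAGETYPE_GIF], true);
    unlink($tmp);

    if (!$isImage) {
        $s3->deleteObject(['Bucket' => $bucket, 'Key' => $key]);   // reject the upload
    }
    return $isImage;
}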
However, it is possible that step 7 above executes BEFORE Lambda finishes checking the file. This means my server will think the file is valid.
Is there any way to make an S3 upload + Lambda execution atomic?
Any suggestions are welcome

Related

Amazon pre signed url, allow only certain file types?

I need to use S3 to store content from users and govern access, in a social-network type of service. The upload flow I have in mind is the same one described in the question above (presigned upload URL plus a signed key recorded in MySQL, with the client confirming the upload and the server verifying the signature and checking that the object exists).
My questions are:
1. Does the above data flow have any serious flaw or inefficiency?
2. How do I make sure the user is only allowed to upload valid files (png, jpg, pdf, etc.)? I believe just checking the extension is not enough, since it can easily be changed.
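For the second question, one common approach is to inspect the file's actual content; a sketch using PHP's Fileinfo extension follows (the allow-list is only an example). With direct-to-S3 uploads this check has to run after the fact, e.g. by a Lambda function or by your server once the object exists.

<?php
// Example of checking the real MIME type by inspecting file contents rather
// than trusting the extension. The allow-list is just an illustration.
function isAllowedUpload(string $path): bool
{
    $allowed = ['image/png', 'image/jpeg', 'application/pdf'];

    $finfo = finfo_open(FILEINFO_MIME_TYPE);
    $mime  = finfo_file($finfo, $path);
    finfo_close($finfo);

    return in_array($mime, $allowed, true);
}

// e.g. isAllowedUpload($_FILES['upload']['tmp_name']);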

Amazon S3 presigned url - Invalidate manually or one time upload

I am accepting direct uploads from users straight to S3, so I will be using pre-signed URLs.
After successful upload, AWS Lambda will make sure that the file upload is an image, and then the client will tell my server that he has finished uploading.
Then my server will check if that file exists in S3 (if Lambda detects an invalid image, it deletes it). If it does, then the rest of the application logic will follow.
However, there is a loophole in this mechanism. A user can use the same URL to upload a malicious file after telling my server that he has finished uploading (and initially passing a valid file).
Lambda will still delete the file, but now my server will think that a file exists whereas it actually does not.
Is there any way to generate a one-time upload pre-signed URL, or is it possible to forcefully invalidate a URL that was generated but has not yet expired?
A pre-signed URL expires at a set date/time. It is not possible to create a one-time-use pre-signed URL.
It is also not possible to invalidate a pre-signed URL directly. However, a pre-signed URL operates with the permissions of the Access Key that signed it, so if permissions are removed from the User linked to that Access Key, the pre-signed URL will stop working.
Turning this into an answer...
Once a file is uploaded, have Lambda move it (using the Copy Object API), i.e. from uploads/123.png to received/123.png or something similar.
If a malicious user attempts to re-use the signed URL, it'll go to uploads/123.png. Worst-case, Lambda checks it again and rejects the new file. Since your server's looking in received/ instead of uploads/ for files to process, we've rendered things safe.
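In AWS SDK for PHP terms the move is a copy followed by a delete (Lambda itself would use its own runtime's SDK, but the calls map one-to-one); the bucket and keys below are placeholders:

<?php
// Sketch of the "move after verification" step: copy the verified object from
// the uploads/ prefix to received/, then delete the original. Names are placeholders.
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3     = new S3Client(['version' => 'latest', 'region' => 'us-east-1']);
$bucket = 'my-user-content-bucket';

$s3->copyObject([
    'Bucket'     => $bucket,
    'Key'        => 'received/123.png',
    'CopySource' => "{$bucket}/uploads/123.png",   // source given as bucket/key
]);
$s3->deleteObject(['Bucket' => $bucket, 'Key' => 'uploads/123.png']);

Your server then checks for the object under received/ only (e.g. with doesObjectExist()), so a re-used upload URL can never make it think a file passed verification.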

How uploading to S3 through PHP script on EC2 works?

Please help me understand the process of uploading files to Amazon S3 via PHP. I have a website on EC2, which will have a PHP script for uploading a file to S3 from the client's machine. What I need to understand is whether the file will go directly to S3 from the client's machine, or whether it will first be uploaded to EC2 and then to S3. If it's the second option, how can I optimize the upload so that the file goes directly to S3 from the client's machine?
It is possible to upload a file to S3 using either of the scenarios you described.
In the first scenario, the file gets uploaded to your PHP backend on EC2 and then you upload it from PHP to S3 via a PUT request. Basically, in this scenario, all uploads pass through your EC2 server.
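A sketch of that first scenario, assuming the AWS SDK for PHP v3 (the bucket and form field names are placeholders):

<?php
// First scenario: the browser uploads to PHP on EC2, and PHP then PUTs the
// file to S3. Bucket and form field names are placeholders.
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3 = new S3Client(['version' => 'latest', 'region' => 'us-east-1']);

$s3->putObject([
    'Bucket'     => 'my-bucket',
    'Key'        => 'uploads/' . basename($_FILES['file']['name']),
    'SourceFile' => $_FILES['file']['tmp_name'],   // the temp file PHP received
]);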
The second option is to upload the file directly to S3 from the client's browser. This is done with a POST request sent directly to S3, together with a policy that you generate using your PHP logic and attach to the POST request. This policy is basically a set of rules that allow S3 to accept the upload (without it, anyone would be able to upload anything to your bucket).
In this second scenario, your PHP scripts on EC2 only need to generate a valid policy for the upload; the actual file being uploaded goes directly to S3 without passing through your EC2 server.
You can get more info on the second scenario here:
http://aws.amazon.com/articles/1434
Even though it is not PHP-specific, it explains how to generate the policy and how to form the POST request.
You can also get more information by reading through the API docs for POST requests:
http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPOST.html
EDIT: The official AWS SDK for PHP contains a helper class for doing this: http://docs.aws.amazon.com/aws-sdk-php-2/latest/class-Aws.S3.Model.PostObject.html
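That helper is from SDK v2; in SDK v3 the equivalent is Aws\S3\PostObjectV4, which builds the policy and signed form fields for you. A sketch, where the bucket, key prefix, and size limit are only examples:

<?php
// Sketch: generate the POST policy and signed form fields for a direct
// browser-to-S3 upload using SDK v3's PostObjectV4 helper.
require 'vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\S3\PostObjectV4;

$s3 = new S3Client(['version' => 'latest', 'region' => 'us-east-1']);

$formInputs = ['key' => 'uploads/${filename}'];
$conditions = [
    ['starts-with', '$key', 'uploads/'],
    ['content-length-range', 0, 10 * 1024 * 1024],   // e.g. max 10 MB
];

$postObject = new PostObjectV4($s3, 'my-bucket', $formInputs, $conditions, '+15 minutes');

$formAttributes = $postObject->getFormAttributes();  // action / method / enctype for the <form>
$hiddenFields   = $postObject->getFormInputs();      // hidden fields, incl. policy and signature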

File from client to S3 through PHP

I need a service that allows uploading large files to S3, but only for authenticated clients.
So,
1. I need to check whether the client has access (for example via an HMAC or some unique key)
2. Check the file format (extension/MIME type; it must be music only)
3. If everything is OK, upload it to S3
It looks very simple, but I don't want to store the file at the service. I want to stream it directly to S3 after all the checks, or, if the auth data is invalid, abort the request so the file is never sent.
Can you advise what technology I should use, or point me in the right direction?
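One possible shape for this, assuming the AWS SDK for PHP and that the auth and format checks have already passed at this point, is to hand the raw request body to S3 as a stream so nothing is written to local disk; the bucket and key below are placeholders:

<?php
// Sketch: after the auth and format checks pass, stream the raw request body
// straight to S3 without buffering it to a local file. Names are placeholders.
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3 = new S3Client(['version' => 'latest', 'region' => 'us-east-1']);

$body = fopen('php://input', 'r');   // the incoming upload as a stream
$s3->upload('my-music-bucket', 'tracks/' . bin2hex(random_bytes(8)) . '.mp3', $body);

upload() falls back to a multipart upload for large bodies; the other option is a pre-signed POST policy (as in the previous answer), so the file never passes through your service at all.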

Can I grant permission on files in my S3 bucket via HTTP request parameters?

I have a bucket with files in it on S3. I have access to the PHP API and a server that can send requests to Amazon on command.
What I want to do is grant access to a file in my bucket using an HTTP GET/POST request. From what I understand, using this function:
get_object_url ( $bucket, $filename, $preauth, $opt )
I can make the file publicly accessible for the $preauth amount of time at a given URL. I don't want to do that; I want the file to be privately available at a URL with required POST or GET credentials (deciding who can access the file would be based on a database of application 'users' and their permissions). I understand the security implications of passing any kind of credentials over GET or POST on a non-HTTPS connection.
Is this possible? I could just download the file from S3 to my server for the extent of the transaction and then do all the access control on my own box, but that's an expensive solution (two file downloads instead of one, when my server shouldn't have had to do a download at all) to a seemingly easy problem.
The short answer is no.
You could look at Amazon's IAM for more ways to secure the content, especially in conjunction with CloudFront, but essentially there is no way to grant access to content by passing along a username and password.
Of course, if you are already authenticating users on your site, then you can simply supply the signed URL only to those users. The URL only has to be valid at the time the user initiates the download, not for the entire duration of the download.
Also, if you intend to use your server as a proxy between S3 and the user, you'll be removing a lot of the benefits of using S3 in the first place. But you could use EC2 as the server to avoid the extra cost you mentioned: transfers between S3 and EC2 in the same region are free.
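For completeness, a sketch of that signed-URL flow with the current AWS SDK for PHP (v3), generated only after your own user/permission check passes; the bucket, key, and expiry are placeholders, and the older SDK's get_object_url() call in the question does roughly the same thing:

<?php
// Sketch: after the application has authenticated the user and checked its own
// permissions table, return a short-lived signed GET URL for the private object.
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3  = new S3Client(['version' => 'latest', 'region' => 'us-east-1']);
$cmd = $s3->getCommand('GetObject', [
    'Bucket' => 'my-private-bucket',
    'Key'    => 'files/report.pdf',
]);

// Only needs to be valid when the download starts, so a short expiry is fine.
$signedUrl = (string) $s3->createPresignedRequest($cmd, '+5 minutes')->getUri();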
