File from client to S3 through PHP

I need a service that allows uploading large files to S3, but only for authenticated clients.
So:
1. I need to check whether the client has access (for example, via HMAC or any unique key)
2. Check the file format (extension/MIME type; it must be MUSIC only)
3. If everything is OK, upload it to S3
Looks very simple. But I don't want to store the file on the service; I want to stream it directly to S3 after all the checks. And if the auth data is invalid, the request must be aborted and the file must not be sent.
Can you advise what technology I should use, or the right way to search for one?
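A minimal sketch of that flow, assuming the AWS SDK for PHP v3 and a client that sends the raw file bytes as the request body rather than multipart (the signature header, shared secret, and bucket name below are placeholders):

<?php
require 'vendor/autoload.php';

use Aws\S3\S3Client;

// 1. Auth check (hypothetical HMAC scheme): abort before the body is
// consumed, so an unauthenticated upload is rejected without the file.
$given    = $_SERVER['HTTP_X_SIGNATURE'] ?? '';
$expected = hash_hmac('sha256', $_SERVER['REQUEST_URI'], getenv('SHARED_SECRET'));
if (!hash_equals($expected, $given)) {
    http_response_code(403);
    exit('invalid signature');
}

// 2. Type check: accept only audio. The declared Content-Type can lie,
// so sniffing the first bytes of the stream would be stricter.
$type = $_SERVER['CONTENT_TYPE'] ?? '';
if (strpos($type, 'audio/') !== 0) {
    http_response_code(415);
    exit('music only');
}

// 3. Stream to S3: the SDK's uploader reads php://input chunk by chunk,
// so the whole file never sits on this server.
$s3 = new S3Client(['region' => 'us-east-1', 'version' => 'latest']);
$s3->upload(
    'my-bucket',
    'uploads/' . bin2hex(random_bytes(16)),
    fopen('php://input', 'r'),
    'private',
    ['params' => ['ContentType' => $type]]
);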

Related

Azure Blob PHP SDK - Upload directly from a custom multipart API request to Azure Storage

I am using the Azure\azure-storage-php library for PHP to upload a blob file (image or video in my case) to the Azure Blob Storage. The file is uploaded from a mobile app using a multipart API call to the server which then uploads it to the blob storage.
The problems in this scenario are:
The file takes double the time to upload, as it is first fully uploaded to the server and then from the server to Azure.
Once uploaded to the server, a success response (HTTP 200) is returned to the mobile app (from our server API), but the file is not actually available in Azure yet and, depending on its size, might take some time to be ready.
What I'm looking for is a way to 'stream' the file immediately from the multipart to azure (as a pass-through) to prevent this 'double upload' scenario.
I don't want to give the mobile app a direct link to the blob storage, to prevent abuse; in addition I need to perform extra checks (e.g. MIME-type checking), which is why I need the server in between.
Is this achievable?
For reference, here's the sample I'm using (my code is basically the same): storage-blobs-php-quickstart/blob/master/phpQS.php.
It sounds like you construct a multipart/form-data request to send the blob file from your mobile app to your server, and the server then puts the blob into its own request to Azure Blob Storage. In that process the same blob content is transferred twice, mobile to server and server to Azure Storage, which is what doubles the upload time.
However, you cannot upload the file directly to Azure Storage, bypassing your server, with a multipart/form-data request; you would have to issue a separate, standalone request for the file itself.
In that case, to put a blob from the mobile app, you can use the Azure Storage SDK for Android or iOS directly, or even the JS SDK in a browser (there is a sample for uploading a blob from the browser directly) via a WebKit widget in the mobile app.
As you suspect, it would not be secure to embed the Azure Storage account name and key in your mobile app. But you can build an API in PHP on your server that generates a blob URL with a SAS token and write permission, and call it from the mobile app to get the URL to upload the blob to.
The sample code for generating a blob URL with a SAS token and write permission looks like the function generateBlobDownloadLinkWithSAS in the official sample code; just change 'r' to 'cw'.
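A hedged sketch of such an endpoint, assuming the azure-storage-php library (the account name, key variable, container, blob name, and expiry are placeholders):

<?php
require 'vendor/autoload.php';

use MicrosoftAzure\Storage\Blob\BlobSharedAccessSignatureHelper;

$account = 'myaccount';
$helper  = new BlobSharedAccessSignatureHelper($account, getenv('AZURE_STORAGE_KEY'));

// 'b' scopes the token to a single blob; 'cw' (create + write) replaces
// the 'r' (read) that generateBlobDownloadLinkWithSAS uses.
$sas = $helper->generateBlobServiceSharedAccessSignatureToken(
    'b',
    'pictures/photo.jpg',
    'cw',
    (new DateTime('+15 minutes', new DateTimeZone('UTC')))
        ->format('Y-m-d\TH:i:s\Z')     // keep the expiry short
);

// The mobile app PUTs the file body straight to this URL.
echo "https://$account.blob.core.windows.net/pictures/photo.jpg?$sas";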
Then a simple PUT request to the SAS URL is enough to upload the file, as the REST documentation section "Example: Upload a Blob using a Container's Shared Access Signature" shows in the sample below.
PUT https://myaccount.blob.core.windows.net/pictures/photo.jpg?sv=2015-02-21&st=2015-07-01T08%3a49Z&se=2015-07-02T08%3a49Z&
sr=c&sp=w&si=YWJjZGVmZw%3d%3d&sig=Rcp6gQRfV7WDlURdVTqCa%2bqEArnfJxDgE%2bKH3TCChIs%3d HTTP/1.1
Host: myaccount.blob.core.windows.net
x-ms-blob-type: BlockBlob
Content-Length: 12

Hello World.

AWS Lambda as a file verification system for s3 uploads

I need to use S3 to store content from users and govern access, in a social-network type of service.
So far this is what I have thought of doing:
1. The client tells the LAMP server it wants to upload a file.
2. LAMP authenticates, and generates a presigned URL for S3 where the user can upload. It also creates an encrypted version of that key using a private key. Then it adds this key, along with the user who started it and the start time, to a MySQL table.
3. LAMP sends the key and the digital signature from step 2 to the client (a sketch of steps 2-3 follows this list).
4. The client uploads the file to S3.
5. After finishing, the client tells LAMP that the file is complete, sending in the key and the digital signature.
6. LAMP makes sure the key and the signature match. If they do, LAMP knows the client is honest about the key it was given (and has not randomly generated one).
7. LAMP then checks S3 to make sure the file with that key exists; if it does, it deletes the row that was added in step 2.
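A rough sketch of steps 2 and 3 under these assumptions: AWS SDK for PHP v3, the "encrypted version of the key" modeled as an HMAC, and an existing PDO connection $pdo plus $currentUserId (bucket, table, and column names are invented):

<?php
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3  = new S3Client(['region' => 'us-east-1', 'version' => 'latest']);
$key = 'uploads/' . bin2hex(random_bytes(16));   // the S3 object key

// Step 2a: a presigned PUT URL the client uploads to, valid 15 minutes.
$cmd = $s3->getCommand('PutObject', ['Bucket' => 'my-bucket', 'Key' => $key]);
$url = (string) $s3->createPresignedRequest($cmd, '+15 minutes')->getUri();

// Step 2b: the signature lets the server later verify the key really
// came from it, so a client cannot invent keys of its own.
$signature = hash_hmac('sha256', $key, getenv('SIGNING_SECRET'));

// Step 2c: bookkeeping row - who started the upload, and when.
$pdo->prepare('INSERT INTO pending_uploads (s3_key, user_id, started_at)
               VALUES (?, ?, NOW())')
    ->execute([$key, $currentUserId]);

// Step 3: hand everything to the client.
echo json_encode(['upload_url' => $url, 'key' => $key, 'sig' => $signature]);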
After asking around, I was told that it is not possible for S3 itself to verify that a file is a 'valid' file of a certain type (I want to enforce images only).
So I decided to use AWS Lambda to verify it (if it's a 'wrong' file, just delete it). Lambda can be fired just after the file is uploaded to S3.
However, it is possible that step 7 above gets executed BEFORE Lambda finishes checking the file. This means my server will think the file is valid.
Is there any way to make an S3 upload + Lambda execution atomic?
Any suggestions are welcome.
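One hedged workaround, not from the question itself: make step 7 require a marker that only the Lambda verifier writes, for example an object tag, instead of trusting bare existence (the tag name, tag value, and bucket are invented; $key is the object key from step 2):

<?php
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3 = new S3Client(['region' => 'us-east-1', 'version' => 'latest']);

// Read the tags the verifier Lambda is assumed to set after checking.
$tags = $s3->getObjectTagging(['Bucket' => 'my-bucket', 'Key' => $key]);

$verified = false;
foreach ($tags['TagSet'] as $tag) {
    if ($tag['Key'] === 'verified' && $tag['Value'] === 'image') {
        $verified = true;   // Lambda finished and approved the file
    }
}

if (!$verified) {
    // Either Lambda rejected the file or it simply has not run yet:
    // treat the upload as pending and retry later, rather than
    // confirming it the moment the object exists.
}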

Amazon pre signed url, allow only certain file types?

I need to use S3 to store content from users and govern access, in a social-network type of service.
So far this is what I have thought of doing:
1. The client tells the LAMP server it wants to upload a file.
2. LAMP authenticates, and generates a presigned URL for S3 where the user can upload. It also creates an encrypted version of that key using a private key. Then it adds this key, along with the user who started it and the start time, to a MySQL table.
3. LAMP sends the key and the digital signature from step 2 to the client.
4. The client uploads the file to S3.
5. After finishing, the client tells LAMP that the file is complete, sending in the key and the digital signature.
6. LAMP makes sure the key and the signature match. If they do, LAMP knows the client is honest about the key it was given (and has not randomly generated one).
7. LAMP then checks S3 to make sure the file with that key exists; if it does, it deletes the row that was added in step 2.
My questions are:
1. Does the above data flow have some serious flaw or inefficiency?
2. How do I make sure the user is only allowed to upload valid files (like png, jpg, pdf, etc.)? I believe just checking the extension is not enough, as it may be changed.
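For question 2, one mechanism S3 itself offers is a presigned POST policy that pins the declared Content-Type and size, so the signature breaks if the client submits anything else. A hedged sketch with the AWS SDK for PHP v3 (bucket and key are placeholders); note this constrains only what the client declares, and validating actual file contents still needs a post-upload check such as the Lambda idea in the previous question:

<?php
require 'vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\S3\PostObjectV4;

$s3 = new S3Client(['region' => 'us-east-1', 'version' => 'latest']);

$formInputs = ['key' => 'uploads/photo.png', 'Content-Type' => 'image/png'];
$conditions = [
    ['bucket' => 'my-bucket'],
    ['eq', '$key', 'uploads/photo.png'],
    ['eq', '$Content-Type', 'image/png'],          // reject other declared types
    ['content-length-range', 1, 10 * 1024 * 1024], // cap the size at 10 MB
];

$post = new PostObjectV4($s3, 'my-bucket', $formInputs, $conditions, '+15 minutes');

// The client POSTs a form with these inputs to the returned action URL.
$attributes = $post->getFormAttributes(); // action, method, enctype
$inputs     = $post->getFormInputs();     // includes the policy and signature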

AJAX file upload with secret query vars

We're creating a form that allows users to upload large files. On mobile devices and slow connections, it might take a while to upload, so it seems important for this to be handled by an AJAX call that shows the users a progress bar (or something to let them know it's still working).
Here's the problem: The endpoint for the upload is a 3rd party API which expects our secret API key as one of the parameters. Here's a link directly to the section in their documentation. This API key cannot be exposed to the users on the client side.
My first instinct is to submit the form to an intermediate PHP script on our site, which has the API key, and then uploads the file to the API. But I'm pretty sure this will mean uploading the file twice: once to our server, then again from our server to the API endpoint. Even if the form is submitted with AJAX, it's not a great result for the user to wait twice as long for it to complete.
So: What's the smoothest way to let users upload files while keeping our API key safe?
Some details that may or may not be important:
Our site is a PHP web app built on the CakePHP framework (v2.x). The files being uploaded are video files of all different formats between 1 and 5 minutes long. The API is a company called Wistia (see link to docs above). The file sizes seem to range from 3-30MB. We have no ability to change the way the 3rd party API works.
Uploading twice shouldn't be an issue - should it?
It's from your server to their API - this is what servers and APIs are meant for: exchanging data.
JavaScript is not meant for this.
There is no way to hide it on the client, so your first instinct was correct - you will need to forward the file from the server.
It should be possible to read the raw POST stream from php://input; you can get the uploaded file from there (if you can parse it :)) and start the upload to the API server right away.
But even if the communication between the mobile device and your script is slow, your script will likely upload quickly to the API server. So is it really needed?
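A sketch of that pass-through idea, assuming the browser sends the raw file bytes as the request body (so there is no multipart parsing to do) and a hypothetical endpoint standing in for the real API; Wistia's actual upload call may expect a different method or field names:

<?php
$in = fopen('php://input', 'r');   // the upload as it arrives, unbuffered

$ch = curl_init('https://upload.example.com/v1/videos'); // placeholder URL
curl_setopt_array($ch, [
    CURLOPT_PUT            => true,   // stream the body out as we read it
    CURLOPT_INFILE         => $in,    // read directly from the request
    CURLOPT_INFILESIZE     => (int) $_SERVER['CONTENT_LENGTH'],
    CURLOPT_HTTPHEADER     => ['Authorization: Bearer ' . getenv('API_KEY')],
    CURLOPT_RETURNTRANSFER => true,
]);
$response = curl_exec($ch);   // bytes flow client -> this script -> API
curl_close($ch);
fclose($in);

echo $response;   // relay the API's answer back to the AJAX caller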

Can I grant permission on files on my AS3 bucket via HTTP request parameters?

I have a bucket with files in it in AS3. I have access to the PHP API and a server that can send requests to Amazon on command.
What I want to do is grant access to a file in my bucket using an HTTP GET/POST request. From what I understand, using this function:
get_object_url ( $bucket, $filename, $preauth, $opt )
I can make the file publicly accessible for the $preauth amount of time at a given URL. I don't want to do that, I want the file to be privately available at a URL with required POST or GET credentials (deciding who can access the file would be based on a database containing application 'users' and their permissions). I understand the security implications of passing any kind of credentials over GET or POST on a non-HTTPS connection.
Is this possible? I could just download the file from AS3 to my server for the extent of the transaction then do all the controls on my own box, but that's an expensive solution (two file downloads instead of one, when my server shouldn't have had to do a download at all) to a seemingly easy problem.
The short answer is no.
You could look at Amazon's IAM for some more ways to secure the content, especially in conjunction with CloudFront, but essentially there is no way to provide access to content by passing along a username and password.
Of course, if you are already authenticating users on your site, then you can supply the signed URL only to those users. The URL only has to be valid at the time the user initiates the download, not for the entire duration of the download.
Also, if you intend to use your server as a proxy between S3 and the user you'll be removing a lot of the benefits of using S3 in the first place. But you could use EC2 as the server to remove the extra cost you mentioned - transfers between S3 and EC2 are free.
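A sketch of that last point with the current AWS SDK for PHP (v3) rather than the older get_object_url(): once your own app has authenticated the user, mint a short-lived presigned GET URL and redirect them straight to S3 (bucket and key are placeholders):

<?php
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3 = new S3Client(['region' => 'us-east-1', 'version' => 'latest']);

$cmd = $s3->getCommand('GetObject', [
    'Bucket' => 'my-bucket',
    'Key'    => 'private/file.pdf',
]);

// The URL only needs to be valid when the download starts, so a short
// expiry limits how far it can be shared.
$url = (string) $s3->createPresignedRequest($cmd, '+1 minute')->getUri();

header('Location: ' . $url);   // send the authenticated user straight to S3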
