I have the following code as part of a backend sample for generating download links which are valid for 15 minutes.
$url = "{$bucket}/{$key}";
$request = $this->s3Client->get($url);
return $this->s3Client->createPresignedUrl($request, '+15 minutes');
From what I understand, this makes a web request to Amazon, and I need to generate about 20-30 download links per pageload.
So how would I go about not abusing the Amazon API while still allowing clients to download files? The two options I could think of are:
Generate the links client-side (either on click or on page load) and store them inside cookies, so that upon refresh, no extra API calls are made.
Generate the links server-side and store them either in cookies or in the session.
What I'm interested in is:
What's the best practice to generate the links (client- vs. server-side)?
How should I cache the links (i.e. where)?
This is a non-issue, because creating a pre-signed URL with the AWS SDK for PHP does not make a request to AWS. You may also find that the S3Client::getObjectUrl() method is easier to use, since it is an abstraction of what you are doing now.
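For example, with version 2 of the SDK for PHP, a minimal sketch (the bucket, key, and credentials below are placeholders, not taken from your code) would be:

<?php
// No HTTP request is made here: the URL is signed locally,
// so generating 20-30 links per page load is cheap.
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3Client = S3Client::factory(array(
    'key'    => 'YOUR_ACCESS_KEY_ID',
    'secret' => 'YOUR_SECRET_ACCESS_KEY',
));

$url = $s3Client->getObjectUrl('your-bucket', 'path/to/file.zip', '+15 minutes');
echo $url;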
Related
I have to build an application where users can download videos from a site but cannot share them. My first solution is to save these files in a hidden location on the user's computer, since one of the requirements is that the user should be able to watch the downloaded videos offline.
How do I go about saving a file, using PHP, in a location the user cannot see?
Thanks.
One solution is to generate a token for each video request. That token would have a limited lifetime. A PHP script should serve the content instead of giving the user direct access to the resource, and it should check whether the token is still active before serving the content.
It is up to you how to pass the token; the simplest way is to include it in the URI.
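As a rough sketch (the secret, file paths, and parameter names here are my own assumptions, not part of the original suggestion), the serving script could look something like this:

<?php
// serve.php?video=intro.mp4&expires=...&token=...
// Sketch of a time-limited download token checked before serving the file.
$secret  = 'change-me';              // keep this out of the web root
$video   = basename($_GET['video']); // prevent path traversal
$expires = (int) $_GET['expires'];
$token   = $_GET['token'];

// The link generator builds: hash_hmac('sha256', "$video|$expires", $secret)
$expected = hash_hmac('sha256', $video . '|' . $expires, $secret);

if ($expires < time() || !hash_equals($expected, $token)) {
    http_response_code(403);
    exit('This download link has expired or is invalid.');
}

header('Content-Type: video/mp4');
header('Content-Length: ' . filesize('/path/outside/webroot/' . $video));
readfile('/path/outside/webroot/' . $video);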
No matter where you put your videos in your directory structure, you always send the data, even as partial requests. Once the data is acquired by the user, it can be saved and reproduced.
There are techniques, however, to protect your video from direct download through curl, wget, or other download tools, namely using a secure token and an expiration time passed as parameters. This way your video download window is limited, and generating the token manually would be pretty hard.
Chidiebere Onwunyirigbo, it's a great question. One solution for your requirement is steganography: the process of concealing your data (videos) inside other files (multimedia files such as images, audio, or video). In your case, image steganography would be preferable. It is quite an old technique but new to many, and you can find several ready-made tools and code samples for it on the internet that you can customize to your needs.
From your side, you provide for download an image that already has the video embedded inside it. Only a tool coded to retrieve and render the hidden video can play it, so all users of your site first have to download this desktop application from your site to view the videos. This keeps your videos safe on the user's computer offline, because every user who takes the video still needs the reverse-steganography tool from your site.
You can even embed secret information, such as the user's IP address, inside the stego image along with the video, and associate the user's IP with each tool download. If the IP embedded in the stego image matches the one registered with the tool, you allow playback; otherwise you redirect the application to get registered. The limitation is that users will have to download your application and will only be able to view the videos inside that desktop application, which renders the steganographed video.
You cannot hide information on the user's computer. Even if your process runs on a Windows machine as the SYSTEM user, a power user can take ownership of the files.
The only real solution is to develop or use a known DRM system, so that the video can only be played on a specific computer or under other conditions (for example, only if the program holds the authentication token of a given user).
In any case, you need to do two things for this:
- You need a custom application to play the video, if you want to enforce DRM.
- You need to re-encode or modify the video before download, embedding a code that restricts playback to the destination computer, or data used to authenticate the DRM.
When I go to the URL of my bucket file, it downloads straight away. However, I only want users that are logged into my application to have access to these files.
I have been searching for hours but cannot find out how to do this in PHP from my app. I am using Laravel, so the code may not look familiar, but essentially it just generates the URL to my bucket file and then redirects to that link, which downloads it:
$url = Storage::url('Shoots/2016/06/first video shoot/videos/high.mp4');
return redirect($url);
How can I make this file accessible only to users logged into my application?
We ran into a similar issue for an application I'm working on. The solution we ended up with is generating S3 signed URLs that have short expiration times on them. This lets us generate a new signed link with every request to the web server and pass that link to our known auth'd user, who then has access for a very limited amount of time (a few seconds). In the case of images we wanted to display in the DOM, we had our API respond with an HTTP 303 (See Other) header and the signed URL, which expired within a couple of seconds. This allowed the browser time to download the image and display it before the link expired.
A couple of risks around this solution: a user could programmatically request a signed URL and share it with another service before the expiration happens, or an unauth'd user intercepting network traffic could capture the request and make it themselves. We felt these were edge-case enough that we were comfortable with our solution.
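For what it's worth, if you are on Laravel with the S3 filesystem driver, recent versions expose this directly as temporaryUrl(); a minimal sketch (the route, expiry, and auth middleware are my assumptions) might look like:

use Carbon\Carbon;
use Illuminate\Support\Facades\Storage;

// Only authenticated users reach this route; the signed S3 URL
// they are redirected to expires after a few minutes.
Route::get('/videos/high', function () {
    $url = Storage::disk('s3')->temporaryUrl(
        'Shoots/2016/06/first video shoot/videos/high.mp4',
        Carbon::now()->addMinutes(5)
    );

    return redirect($url);
})->middleware('auth');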
I'm building a MySQL database-driven website on an AWS EC2 instance. Users can submit their data and we will automatically create a web page for them. The web page will display their submitted data, including a photo. Upon the original submission of the user's data, we store the photo in S3 using AWSSDKforPHP. When their information is approved by an administrator, a web page is created via a PHP script.
I've tried the method of generating a signed URL. This, however, requires an expiration time. Is there a way around this? It also includes your access key in the request, and I'm not sure that is the most secure way to do things. I'd like the method to be as secure as possible, so I don't want to make the bucket public, unless there is a way to make the stored images available only for viewing.
What's the best method to use in a situation like this? Thanks for your help!!
Basically, URLs to Amazon S3 objects look like s3.amazonaws.com/bucket_name/key_name. All you need to do is make sure the content in those buckets is publicly available and that the MIME type for those keys is indeed an image type (image/jpeg or image/png).
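With the AWSSDKforPHP (1.x) library mentioned in the question, one way to do that when storing the photo is, roughly (bucket, key, and file paths are placeholders):

<?php
// Sketch: upload the photo with a public-read ACL and an image content type,
// so a plain https://s3.amazonaws.com/bucket_name/key_name URL serves it.
require_once("./aws-sdk/sdk.class.php");
$s3 = new AmazonS3();

$s3->create_object('yourbucketname', 'photos/user123.jpg', array(
    'fileUpload'  => '/tmp/uploaded_photo.jpg',
    'acl'         => AmazonS3::ACL_PUBLIC,
    'contentType' => 'image/jpeg',
));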
I used this web tool to manage my site. I opened my bucket just to its IP, and now I can control the restrictions from the tool; for the basic viewer I gave read-only access.
I must say this is the first time I ask anything here, and I'm not a developer, so please be patient with my lack of knowledge. This requirement is for a website I am creating with some friends, so it's not that I'm making money with this.
This is the problem: I want to implement some kind of restriction to downloads, very much in the same way Rapidshare or any other file sharing service does:
The user should be able to download only 1 file simultaneously
The user should wait before being able to download another file, let's say 2 hours.
However, I am not trying to create a file sharing website. I am going to upload all the files to Amazon S3, and the only thing I need is to be able to restrict the downloads. I will create the links to the files. I don't care if users are registered or not, they should be able to download anyway.
The website is built in Joomla!, which uses Apache + MySQL. The files would be hosted on Amazon's servers.
My question is the following. Is there any way to implement this in a not-so-extremely-complicated way? Do you know some script or web-service that could help me get this done?
I have looked around, but the only thing I've found are Payment gateways, and we don't plan to charge for downloads.
Any help will be much appreciated.
Thanks!
UPDATE: I solved this problem using this script: http://www.vibralogix.com/linklokurl/features.php
As far as I know, there is no way to check on the current status of a download from S3. Having said that, S3 really does have plenty of bandwidth available, so I wouldn't worry too much about overloading their servers :) Just last week, Amazon announced that S3 is now serving an average of 650,000 objects / second.
If you want to implement something like @Pushpesh's solution in PHP, one option would be to use the AWS SDK for PHP and do something like this:
<?php
# Generate a pre-signed S3 URL to download an S3 object from

# Include the AWS SDK for PHP (1.x) and create an S3 client
require_once("./aws-sdk/sdk.class.php");
$s3 = new AmazonS3();

# Let S3 know which file we want to be downloading
$s3_bucket_name = "yours3bucketname";
$s3_object_path = "folder1/object1.zip";
$s3_url_lifetime = "10 minutes";
$filename = "download.zip";

# Check whether the user has already downloaded a file in the last two hours
$user_can_download = true;

if ($user_can_download) {
    # Pre-signed URL that forces the browser to download the file as $filename
    $s3_url = $s3->get_object_url($s3_bucket_name, $s3_object_path, $s3_url_lifetime, array(
        'response' => array(
            'content-type'        => 'application/force-download',
            'content-disposition' => "attachment; filename={$filename}"
        )
    ));
    header("Location: {$s3_url}");
}
else {
    echo "Sorry, you need to wait a bit longer before you can download a file again...";
}
?>
This uses the get_object_url function, which generates pre-signed URLs that allow you to let others download files you've set to private in S3 without making these files publicly available.
As you can see, the link this generates will only be valid for 10 minutes, and it's a unique link. So you can safely let people download from it without worrying about them spreading the link: by the time they do, it will have expired. The only way people can get a new, valid link is to go through your download script, which will refuse to generate one if the IP/user trying to initiate a download has already exceeded their usage limit. It's important that you set these files to private in S3, though: if you make them publicly available, this won't do much good. You probably also want to take a look at the docs for the S3 API that generates these pre-signed URLs.
Only two ways come to mind: either you copy the file under a unique hash and let Apache serve it, in which case you have no control over when the user actually starts or finishes the download (useful for big files), or you pass it through PHP. Even then, you would need to kill the download session in case the user stops the download.
If there is no plugin for that, you won't be able to do it the easy way by adding a script or copy-and-pasting "some" code.
So either hire somebody, or learn what you need to approach the task on your own; in other words, learn how to program. Your question already contains the steps you need to implement: record who is downloading what and when, and keep track of the status of the download.
I have not tried to track a download's status before, and I'm not sure whether it is possible to get the status directly from the server that is sending the file. But you can get it from the client: Download Status with PHP and JavaScript.
I'm also not sure whether this will work properly in your scenario, because the file will come from S3.
S3 itself has a so-called "query string authentication" feature:
With Query string authentication, you have the ability to share Amazon S3 objects through URLs that are valid for a predefined expiration time.
So you need to look up the S3 API to figure out how to implement this.
What you could try is to send an Ajax request to your server when the user clicks the download link, send back the Amazon S3 link your server generated as the response, and have the client-side JavaScript trigger the file download from there.
You can monitor user downloads by IP address: store the address in a database along with the time at which the user downloaded and the session ID (hashed, of course), and check this before each download request. If less than 2 hours have passed within the same session, block the request; otherwise allow the download.
Table Structure:
Id  Ip_Addr        Session_Hash  Timestamp
1   10.123.13.67   sdhiehh_09#   1978478656
2   78.86.56.11    mkdhfBH&^#    1282973867
3   54.112.192.10  _)*knbh       1445465565
This is a very basic implementation. I'm sure more robust methods exist. Hope this helps.
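A rough PHP sketch of that check (the table and column names mirror the example above and are assumptions):

<?php
session_start();

// Block the download if this IP/session already downloaded within the last 2 hours.
$pdo = new PDO('mysql:host=localhost;dbname=downloads', 'dbuser', 'dbpass');

$stmt = $pdo->prepare(
    'SELECT COUNT(*) FROM downloads
     WHERE ip_addr = :ip AND session_hash = :session AND timestamp > :cutoff'
);
$stmt->execute(array(
    ':ip'      => $_SERVER['REMOTE_ADDR'],
    ':session' => hash('sha256', session_id()),
    ':cutoff'  => time() - 2 * 3600,
));

if ($stmt->fetchColumn() > 0) {
    exit('Please wait before starting another download.');
}

// Otherwise record the new download and hand out the pre-signed S3 URL.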
I am doing some benchmark testing on my web app and have noticed that responses from Facebook's API are a lot slower than Twitter's.
** For the record, I am using the twitter-async library for Twitter API integration and Facebook's own library here.
With the Twitter library I can save an OAuth token & secret, then use these to create an instance and make calls; simple. For Facebook, unless I ask for offline permission, I must store an OAuth code and recreate an OAuth access token each time the user logs into my app.
Given the above I can:
Retrieve a Twitter user's timeline in 0.02 seconds.
Get an FB OAuth access code in 1.16 seconds, then get the user's details in 2.31 seconds, totalling 3.47 seconds to get the user's details.
These statistics are from using the functions Facebook provides in their PHP API library. I also tried implementing my own cURL functions to get this information via a request, and the results are not much better.
Is this the same kind of response times others are getting using the Facebook API?
Besides requesting offline permission and storing the permanent access token, how else can I speed up these requests? Is the problem on my end or Facebook's?
Thanks,
Chris
In my experience the Facebook API is also quite slow. I believe the Facebook PHP API does little more than wrap cURL for API calls, so it makes sense that your own cURL implementation didn't improve the speeds.
I work on a canvas page, which means that for existing users I get an access token and fb_UID as they come in. At first, I did a /me Graph call and sometimes a /me/friends call. The first takes about 0.6 seconds, the second usually a bit more. So in that case I can (to some extent) confirm your findings.
That's why I've now switched to storing important stuff locally and updating it only when needed (real time update API). Basically, I don't need any API calls during 'normal' operation.
I realize you are probably integrating FB on your own page, and perhaps use a bit more info than just name, fb-UID & friends, and that this solution is not totally answering your question. But perhaps it can still function as a small piece of the puzzle ;)
I am looking forward to other perspectives on this as well!
My application calls multiple URLs from Facebook. It does take some time :/
This is why I decided to write a function which stores the results in $_SESSION so I can use them again later, along with a timestamp to see if the data is too old.
This doesn't solve the actual problem, it just saves you having to keep fetching it.
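A rough sketch of such a session cache (the TTL and the fetchProfileFromFacebook() helper are placeholders for your own API call):

<?php
session_start();

// Return the cached Facebook data if it is fresh enough, otherwise refetch it.
function getCachedFacebookData($fbUid, $ttl = 600) {
    $key = 'fb_data_' . $fbUid;
    if (isset($_SESSION[$key]) && (time() - $_SESSION[$key]['fetched_at']) < $ttl) {
        return $_SESSION[$key]['data'];
    }
    $data = fetchProfileFromFacebook($fbUid); // your existing API call goes here
    $_SESSION[$key] = array('data' => $data, 'fetched_at' => time());
    return $data;
}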
What I like to do for the end-user experience is forward them to a page with a loading .gif, then have JavaScript request the page that actually fetches the data. That way, the user remains on a loading page with a nice GIF to stare at until the next page is ready.