AWS CloudFront - renaming files upon deploying - php

I was wondering if it's possible to manipulate or change the names of files when deploying the distribution.
The reason for doing this is that we don't have the actual files on our own servers; they are provided by a partner. Is it somehow possible to run a PHP function on deploy to change the name of the file on the CDN?
So e.g.
partner.example.com/image/123120913.jpg
to
1234.cloudfront.com/image/SHOE-NAME.jpg
One way is to import all the images to local storage first and change the filename as part of that download - but that seems very involved.
We can provide the image name easily, if it's possible to run a PHP function on deploy.

Amazon CloudFront is a caching service that retrieves content from a specified origin (e.g. a web server or Amazon S3), stores it in a cache, and then serves it to users.
Amazon CloudFront does not create aliases to filenames. It simply passes the request to the origin. If the origin is a web server, you could write a web app that returns any type of information given the request URL, but CloudFront cannot rename or map filenames.
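Since CloudFront cannot rename files itself, one workaround consistent with the above is to make the origin a small PHP script that maps the friendly filename back to the partner's file. A minimal sketch, assuming a hypothetical lookup table (only the partner host and example filenames come from the question; everything else is made up):

```php
<?php
// map_image.php - hypothetical front controller behind CloudFront.
// CloudFront forwards /image/SHOE-NAME.jpg here; we translate the
// friendly name back to the partner's numeric filename.

// Placeholder lookup; in practice this would come from a database.
const NAME_MAP = [
    'SHOE-NAME.jpg' => '123120913.jpg',
];

function partnerUrlFor(string $requestPath): ?string
{
    $name = basename($requestPath);
    if (!array_key_exists($name, NAME_MAP)) {
        return null; // unknown friendly name
    }
    // partner.example.com is the upstream host from the question
    return 'https://partner.example.com/image/' . NAME_MAP[$name];
}

// Usage (when run as the origin behind CloudFront):
// $url = partnerUrlFor($_SERVER['REQUEST_URI'] ?? '');
// if ($url === null) { http_response_code(404); exit; }
// header('Location: ' . $url, true, 302); // or proxy the bytes instead
```

Whether to redirect (302) or proxy the bytes is a trade-off: proxying keeps the partner URL hidden and lets CloudFront cache the image under the friendly name, while redirecting exposes the partner URL but avoids transfer through your server.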

Related

Is it possible to use AWS CloudFront for PHP files?

I'm trying to load PHP files such as index.php using AWS CloudFront.
The documentation states -
Create a web distribution if you want to:
Speed up distribution of static and dynamic content, for example, .html, .css, .php, and graphics files. Distribute media files using HTTP or HTTPS. Add, update, or delete objects, and submit data from web forms. Use live streaming to stream an event in real time.
However, when I upload PHP files to the relative CloudFront bucket, it ends up downloading the file and opening it. What will allow me to host PHP files?
However, when I upload PHP files to the relative CloudFront bucket
There is no such thing as a CloudFront bucket, so you are likely referring to an S3 bucket, configured behind CloudFront as an origin.
CloudFront works with dynamic content, such as might be generated with PHP, but the PHP site needs to be hosted on a server that supports it -- not S3.
You can host a static website on Amazon Simple Storage Service (Amazon S3). On a static website, individual webpages include static content. They might also contain client-side scripts. By contrast, a dynamic website relies on server-side processing, including server-side scripts such as PHP, JSP, or ASP.NET. Amazon S3 does not support server-side scripting. (emphasis added)
https://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteHosting.html
See AWS Web Site Solutions for options for hosting sites with static or dynamic content, bearing in mind that PHP requires a solution that supports server-side scripting and dynamic content, so not all of the solutions presented there (including S3) will fit your needs... but they are all compatible with CloudFront -- which is only tasked with delivering the rendered content, not with the rendering itself.
CloudFront is designed to serve content to end users, not to execute your code. Your PHP files would live on an EC2 instance running PHP and a web server (Apache, Nginx), which you could then put behind CloudFront to get the benefits. The server generates the HTML for CloudFront to serve; CloudFront itself does not handle the processing and just deals with the rendered output. When CloudFront is used with S3, it serves the stored content directly to the end user.
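When PHP does run on such an origin behind CloudFront, the script controls how long CloudFront may cache each rendered response through standard HTTP headers. A minimal sketch (the TTL value is an arbitrary example):

```php
<?php
// index.php on an EC2 (or similar) origin behind CloudFront.
// CloudFront honours Cache-Control on the origin response, so the
// script itself decides how long its rendered HTML may be cached.

function cacheControlFor(int $seconds): string
{
    // s-maxage applies to shared caches such as CloudFront;
    // max-age applies to the end user's browser.
    return sprintf('public, max-age=%d, s-maxage=%d', $seconds, $seconds);
}

header('Cache-Control: ' . cacheControlFor(300)); // cache for 5 minutes
echo '<html><body>Rendered at ' . date('c') . '</body></html>';
```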
I am not quite sure where you found that snippet, but the introduction does not seem to list .php for me.
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html
Amazon CloudFront is a web service that speeds up distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users.

How to find S3 connected to a cloudfront URL

The client has a PHP site deployed on AWS infrastructure, and I have access to the client's AWS console.
Some images on the site load from CloudFront (which I understand is mapped to an S3 bucket). I need to update these images, but I do not know which S3 bucket they are in, since the client has a lot of S3 buckets configured. How do I figure this out from the console?
First, you will need to find the CloudFront distribution that is serving the content: find the distribution whose cloudfront.net URL matches the one you're using to access the images.
Then, look at the behaviors and origins of the distribution to determine which origin serves the path in question. That tells you which Amazon S3 bucket is being used.
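The bucket name can usually be read straight off the origin's domain name. A hypothetical helper covering the common virtual-hosted S3 domain formats (any other origin type returns null):

```php
<?php
// Given the origin domain name shown in the CloudFront console (or
// in the distribution's config), extract the S3 bucket name.
// Handles the common virtual-hosted S3 formats; custom (non-S3)
// origins return null. Path-style URLs are not covered.

function bucketFromOriginDomain(string $domain): ?string
{
    // e.g. "my-bucket.s3.amazonaws.com"
    //      "my-bucket.s3.us-east-1.amazonaws.com"
    //      "my-bucket.s3-website-us-east-1.amazonaws.com"
    if (preg_match('/^(.+?)\.s3[.-][a-z0-9.-]*amazonaws\.com$/', $domain, $m)) {
        return $m[1];
    }
    return null; // not a recognizable S3 origin
}
```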
Also, to add to John's answer: after you have changed the image in the origin S3 bucket, you will have to invalidate the cache for those images in CloudFront. Otherwise, even though the origin has changed, you will keep seeing the old images.
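The invalidation can be done from the console or scripted. A sketch using CloudFront's CreateInvalidation API via the AWS SDK for PHP (the distribution ID and paths are placeholders); the batch-building helper is separated out so the request shape is visible:

```php
<?php
// Build the InvalidationBatch structure expected by CloudFront's
// CreateInvalidation API. $paths are the object paths to purge,
// e.g. ['/images/logo.png'] or ['/images/*'].

function buildInvalidationBatch(array $paths): array
{
    return [
        'Paths' => [
            'Quantity' => count($paths),
            'Items'    => array_values($paths),
        ],
        // Unique per distinct request; CloudFront treats a repeated
        // CallerReference as a retry of the same invalidation.
        'CallerReference' => 'invalidate-' . time(),
    ];
}

// Hypothetical usage with the AWS SDK for PHP v3 (requires the
// composer package aws/aws-sdk-php and valid credentials):
//
// $client = new \Aws\CloudFront\CloudFrontClient([
//     'region' => 'us-east-1', 'version' => 'latest',
// ]);
// $client->createInvalidation([
//     'DistributionId'    => 'E1234567890ABC',  // placeholder
//     'InvalidationBatch' => buildInvalidationBatch(['/images/*']),
// ]);
```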

Change Cloudfront next origin file request

I have set up CloudFront for a website.
To serve on-the-fly picture transformations, I added a custom origin: the website itself.
So, in my distribution, I have 2 origins:
- s3 bucket
- mywebsite.com/images
When I call cdn.mywebsite.com/500/picture.jpg
it calls my website as: website.com/api.php/file/500/picture.jpg
I fetch the S3 object, create the thumbnail, save it on the server, then upload it to S3.
Up to here, it all works.
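The routing into api.php described above can be sketched as follows (hypothetical; only the /file/{width}/{name} path shape comes from the question):

```php
<?php
// Parse a thumbnail request path of the form /file/{width}/{name},
// as forwarded by CloudFront to api.php.

function parseThumbRequest(string $path): ?array
{
    if (!preg_match('#/file/(\d+)/([^/]+)$#', $path, $m)) {
        return null; // not a thumbnail request
    }
    return ['width' => (int) $m[1], 'name' => $m[2]];
}

// Hypothetical flow inside api.php:
// $req = parseThumbRequest($_SERVER['REQUEST_URI'] ?? '');
// 1. fetch the original object from S3
// 2. resize it to $req['width'] (e.g. with GD or Imagick)
// 3. upload the thumbnail back to S3 under the requested key, so
//    that later requests can be answered from the S3 origin
```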
Now I would like the next request for this file to go to the file stored on S3, not to my custom website origin.
I cannot find a way to define an order of importance (weight) for multiple origins.
It seems that once CloudFront has a "route", it keeps the same one.
Any ideas?
You cannot set up multiple origins for a CloudFront distribution from the AWS side; you have to customize this using either:
Amazon CloudFront REST API
Bucket Explorer
This guide covers both approaches: http://www.bucketexplorer.com/documentation/amazon-s3--how-to-create-distributions-post-distribution-with-multiple-origin-servers.html

Amazon S3 for file Uploads and caching

I am currently writing an application in PHP, using the Yii framework, that stores a large number of files uploaded by its users. Since the number of files is ever increasing, I decided it would be beneficial to use Amazon S3 to store them; when a file is requested, the server retrieves it and sends it to the user. (The server is an EC2 instance in the same zone.)
Since the files are all confidential, the server has to verify the identity of the user and their credentials before allowing them to receive a file. Is there a way to send the file to the user directly from S3 in this case, or do I have to pull the data to the server first and then serve it to the user?
If so, is there any way to cache the most recently uploaded files on the local server so that it does not have to go to S3 to look for them? In most cases, the most recently uploaded files will be requested repeatedly by multiple clients.
Any help would be greatly appreciated!
Authenticated clients can download files directly from S3 by signing the appropriate URLs on the server prior to displaying the page/urls to the client.
For more information, see: http://s3.amazonaws.com/doc/s3-developer-guide/RESTAuthentication.html
Note that for confidential files you may also want to consider server-side/client-side encryption. Finally, for static files (such as images) you may want to set appropriate cache headers as well.
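The query-string authentication described in that (legacy) guide can be implemented in a few lines of plain PHP. Modern AWS SDKs use Signature Version 4 instead, so treat this as an illustration of the mechanism; the keys, bucket, and object names are placeholders:

```php
<?php
// Legacy S3 query-string authentication (Signature Version 2), as
// described in the linked REST authentication guide. All the
// credentials and names below are made-up placeholders.

function signedS3Url(
    string $accessKey,
    string $secret,
    string $bucket,
    string $objectKey,
    int $expires           // absolute Unix timestamp
): string {
    // String-to-sign for a plain GET with no extra headers.
    $stringToSign = "GET\n\n\n{$expires}\n/{$bucket}/{$objectKey}";
    $signature = base64_encode(
        hash_hmac('sha1', $stringToSign, $secret, true)
    );
    return sprintf(
        'https://%s.s3.amazonaws.com/%s?AWSAccessKeyId=%s&Expires=%d&Signature=%s',
        $bucket, $objectKey, $accessKey, $expires, urlencode($signature)
    );
}

// Usage: give an authenticated user a link valid for 5 minutes.
// echo signedS3Url('AKIDEXAMPLE', 'secret', 'my-bucket',
//                  'reports/q1.pdf', time() + 300);
```

As the answer below notes, the URL only needs to be valid when the download starts, so a short expiry is usually enough.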
Use AWS CloudFront to serve these static files. Rather than sending the files to the user, send them links to the files. The links need to be CloudFront links, not direct links to the S3 bucket.
This has the benefit of keeping the load on your server low, as well as caching files close to your users for better performance.
More details here: Serving Private Content through CloudFront
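CloudFront signed URLs work differently from S3 presigned URLs: a policy is signed with an RSA private key whose public half is registered with CloudFront. A rough sketch of canned-policy signing (the key-pair ID and domain are placeholders, and the key handling is simplified):

```php
<?php
// Canned-policy CloudFront signed URL (sketch). Assumes an RSA
// private key registered with CloudFront as a key pair / public
// key. All identifiers below are made up.

function cloudFrontSignedUrl(
    string $url,          // e.g. https://d111.cloudfront.net/video.mp4
    int $expires,         // absolute Unix timestamp
    string $keyPairId,    // e.g. 'APKAEXAMPLE'
    $privateKey           // OpenSSL private key handle
): string {
    // Canned policy: a single resource with an expiry time. The
    // slashes must not be escaped, since CloudFront reconstructs
    // this exact string from the URL and Expires parameter.
    $policy = json_encode([
        'Statement' => [[
            'Resource'  => $url,
            'Condition' => ['DateLessThan' => ['AWS:EpochTime' => $expires]],
        ]],
    ], JSON_UNESCAPED_SLASHES);

    openssl_sign($policy, $signature, $privateKey, OPENSSL_ALGO_SHA1);

    // CloudFront uses a URL-safe base64 variant: + -> -, = -> _, / -> ~
    $sig = strtr(base64_encode($signature), '+=/', '-_~');

    return $url . '?Expires=' . $expires
        . '&Signature=' . $sig
        . '&Key-Pair-Id=' . $keyPairId;
}
```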

Can I grant permission on files in my S3 bucket via HTTP request parameters?

I have a bucket with files in it on S3. I have access to the PHP API and a server that can send requests to Amazon on command.
What I want to do is grant access to a file in my bucket using an HTTP GET/POST request. From what I understand, using this function:
get_object_url($bucket, $filename, $preauth, $opt)
I can make the file publicly accessible at a given URL for the $preauth amount of time. I don't want to do that; I want the file to be privately available at a URL with required POST or GET credentials (deciding who can access the file would be based on a database of application 'users' and their permissions). I understand the security implications of passing any kind of credentials over GET or POST on a non-HTTPS connection.
Is this possible? I could just download the file from S3 to my server for the duration of the transaction and then do all the access control on my own box, but that's an expensive solution (two file downloads instead of one, when my server shouldn't have had to do a download at all) to a seemingly easy problem.
The short answer is no.
You could look at Amazon's IAM for more ways to secure the content, especially in conjunction with CloudFront, but essentially there is no way to provide access to content by passing along a username and password.
Of course, if you are already authenticating users on your site, then you can only supply the signed url to those users. The url only has to be valid at the time the user initiates the download and not for the entire duration of the download.
Also, if you intend to use your server as a proxy between S3 and the user you'll be removing a lot of the benefits of using S3 in the first place. But you could use EC2 as the server to remove the extra cost you mentioned - transfers between S3 and EC2 are free.
