I'm uploading images to S3, saving the absolute URLs in my database, and then displaying them on my frontend. I see two problems. The first is that I need to mark the files as public while uploading:
$path = $image->storeAs('folder', md5('file_name.jpg'), ['disk' => 's3']);
Storage::disk('s3')->setVisibility($path, 'public');
$url = Storage::disk('s3')->url($path);
The second problem is that the URLs I use on the website obviously contain the bucket name, for example:
<img src="https://bucket-name-staging.s3.eu-central-1.amazonaws.com/folder/84b4b3j4j34j12j3123h21jh321k312312.jpg">
These URLs also sometimes get blocked by ad blockers. Is this the recommended way to do it? Can I display images on the website if they are private in the S3 bucket? (I tried it, but that returns a 403 response.)
As for the first question, check the approaches described here:
Upload files Securely to AWS S3
Uploading Photos to Amazon S3 from a Browser
As for the second problem, check:
Route 53
CloudFront (needed if you would like to serve the files over HTTPS)
If you are using pure HTML to show the S3 images on your website, there is no way around flagging them as public. But in that case the files are available to everyone in the world, anyone can embed your images, and you will absolutely be charged for the traffic. The alternative is signed URLs, and since you are using PHP, you can generate them in code.
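For illustration, a minimal sketch using Laravel's temporaryUrl() (the question's code suggests Laravel's Storage facade with the S3 driver; the key and lifetime below are assumptions):

use Illuminate\Support\Facades\Storage;

// The object stays private in the bucket; the URL carries a signature
// and stops working after the chosen expiry.
$url = Storage::disk('s3')->temporaryUrl(
    'folder/84b4b3j4j34j12j3123h21jh321k312312.jpg', // hypothetical key
    now()->addMinutes(30)
);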
I hope you are all doing great. My question is:
I am using the Cantaloupe image server, which serves images according to user-specified parameters in the URL, using the IIIF Image API v2.0.
Here are the URLs:
https://iiif.blavatnikarchive.org/iiif/2/baf__be12495f1d825e832cd7b66f0ee30c8adda804cd6c19e627537107b714b95356/full/!1000,1000/0/default.jpg (1000x1000 image)
https://iiif.blavatnikarchive.org/iiif/2/baf__be12495f1d825e832cd7b66f0ee30c8adda804cd6c19e627537107b714b95356/full/!512,512/0/default.jpg (512x512 image)
The image server takes around 4 seconds to process an image for user-defined dimensions and region extraction. So what I am doing is pre-generating some thumbnails with the image server and storing them on Amazon S3; if a user requests the same thumbnail again, I serve the pre-generated one. Two benefits:
1- The image server does not compute it every time, so the load on the server stays low.
2- Serving a static, pre-generated thumbnail is faster.
The problem is that two servers are now involved:
1- The image server for dynamic content creation: https://iiif.blavatnikarchive.org
2- Amazon S3 buckets for the pre-generated static thumbnails: assets.s3.amazonaws.com/image-name
I want to serve images from one URL, so the end user is not redirected to different locations for the same image at different sizes. So I decided to serve images through my API:
https://api.blavatnikarchive.org/baf__be12495f1d825e832cd7b66f0ee30c8adda804cd6c19e627537107b714b95356/full/!512,512/0/default.jpg (Apache running PHP)
In my API I know which request is for a static thumbnail, to be fetched from the S3 bucket, and which is for a dynamic size, to be fetched from the image server. Right now the API fetches the image with file_get_contents("url of image whether its amazon s3 url or image server url"), so the image is downloaded to my API server first and then served to the client, whose browser downloads it again. That takes around 2 seconds per image, which is not acceptable; image serving should take less than a second. I want to know whether there is a way to map my API URL directly to the image server and the Amazon server (see the sketch after this question).
For example:
If a user requests https://api.blavatnikarchive.org/baf__be12495f1d825e832cd7b66f0ee30c8adda804cd6c19e627537107b714b95356/full/!1000,1000/0/default.jpg
It should map to https://iiif.blavatnikarchive.org/iiif/2/baf__be12495f1d825e832cd7b66f0ee30c8adda804cd6c19e627537107b714b95356/full/!1000,1000/0/default.jpg (1000x1000 image)
Or this one:
https://iiif.blavatnikarchive.org/iiif/2/baf__be12495f1d825e832cd7b66f0ee30c8adda804cd6c19e627537107b714b95356/full/!512,512/0/default.jpg (512x512 image)
should map directly to the static thumbnail:
https://baf-iiif-assets.s3.amazonaws.com/be12495f1d825e832cd7b66f0ee30c8adda804cd6c19e627537107b714b95356
Or you can suggest a solution for how I can gather everything under one URL. I want to keep the user on my API URL and don't want to use any redirection. How can I achieve this?
Many thanks.
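For reference, a minimal PHP sketch of the pass-through idea described above. Instead of buffering the whole image with file_get_contents(), it streams the upstream response to the client as the bytes arrive, which cuts the time to first byte; the routing rule and the S3 key scheme are placeholder assumptions:

<?php
// proxy.php — hypothetical streaming pass-through behind api.blavatnikarchive.org
$path = $_SERVER['REQUEST_URI'];

// Placeholder rule: pre-generated !512,512 thumbnails live on S3,
// everything else goes to the IIIF image server.
if (strpos($path, '!512,512') !== false) {
    $upstream = 'https://baf-iiif-assets.s3.amazonaws.com/' . sha1($path); // assumed key scheme
} else {
    $upstream = 'https://iiif.blavatnikarchive.org/iiif/2' . $path;
}

header('Content-Type: image/jpeg');
header('Cache-Control: public, max-age=86400'); // let browsers cache the result

$fp = fopen($upstream, 'rb'); // requires allow_url_fopen
if ($fp === false) {
    http_response_code(502);
    exit;
}
fpassthru($fp); // forward bytes as they arrive instead of buffering the whole file
fclose($fp);

A reverse proxy at the web-server layer (for example Apache's mod_proxy, since the API already runs on Apache) would skip PHP entirely and is usually faster still, while keeping the user on the API URL with no redirect.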
I have thousands of images from a private S3 bucket to display in the browser.
What is the best way to get these private files into the browser?
To get a private image from S3, I have found multiple solutions, listed below:
Make the files public (can't use this, per the requirement).
Generate pre-signed URLs for the files.
Pull the image from S3 via an API, cache it, and serve it.
Change the bucket policy.
Currently I am using signed URLs to get the images, but I have to generate a signed URL for every single image, which takes a lot of processing time.
My question is: what is the best way, and how do I achieve it?
Your method of using Pre-signed URLs is correct.
You should generate these URLs when serving the HTML page that contains the images. This takes only a couple of lines of code, such as the createPresignedRequest() call in the AWS SDK for PHP. (I'm not familiar with Laravel, but you tagged your question with PHP.)
Thus, the page will contain dynamic content, created for the user on-the-fly.
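For example, a minimal sketch with the AWS SDK for PHP v3 (bucket name, key, region, and expiry below are assumptions):

require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3 = new S3Client([
    'version' => 'latest',
    'region'  => 'eu-central-1', // assumed region
]);

// Build a GetObject request for the private file and sign it.
$cmd = $s3->getCommand('GetObject', [
    'Bucket' => 'my-private-bucket', // hypothetical bucket
    'Key'    => 'images/photo.jpg',  // hypothetical key
]);
$request = $s3->createPresignedRequest($cmd, '+20 minutes');

// Embed the expiring URL straight into the page markup.
echo '<img src="' . htmlspecialchars((string) $request->getUri()) . '">';

Signing happens locally in the SDK (no network round-trip), so generating many of these per page render is cheap.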
My question is about HTML and PHP.
This is my setup right now:
A website where users have accounts
An FTP server for pictures (currently empty)
Files are currently saved on the website in the "PICTURES" folder (which is accessible to anybody who knows the full URL)
So, I would like to know how I can display the images without storing them on the website (which would fix my URL problem).
My idea was to move the files to the FTP server, and when a user logs on and requests a page with those images, download them over an FTP connection, save them on the website, display them, and then remove them. That would make them accessible only during the download window. But this solution sounds REALLY bad to me.
You always need a place where your images are stored. But if you don't want users to know where they are stored, you can put a system in front that serves the images.
Think about it this way: if you want to download a file from Mega, you can't access the URL where the file is stored. Instead, the server assigns you a "key", and you can download the file only through that system using your key.
You could base64-encode the image and show it inline, or you can send the appropriate header so you can serve the image from PHP code.
For example, your image tag would look like:
<img src="processImage.php?id=01&user=10&key=123" />
So processImage.php returns the image generated by PHP code rather than the file itself, for example using the imagejpeg() function together with the Content-Type: image/jpeg header. The user never learns where the image is actually stored, but the img tag still works.
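A minimal sketch of what processImage.php could look like (the session check and file layout are assumptions):

<?php
// processImage.php — serves a private image without exposing its location.
session_start();

// Placeholder auth: make sure the requester is a logged-in user.
if (!isset($_SESSION['user_id'])) {
    http_response_code(403);
    exit;
}

// Map the id to a file stored outside the web root (hypothetical layout).
$id   = basename((string) ($_GET['id'] ?? ''));
$path = __DIR__ . '/../private_pictures/' . $id . '.jpg';

if (!is_file($path)) {
    http_response_code(404);
    exit;
}

header('Content-Type: image/jpeg');
header('Content-Length: ' . filesize($path));
readfile($path); // the real path never reaches the browser

The base64 route mentioned above also works: base64_encode() the file contents into a data: URI inside the img tag, at the cost of markup roughly a third larger.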
Happy Friday!
I was curious if the following was possible:
I'd like to be able to upload media to different locations based on file type. I really appreciate that Fine Uploader can upload directly to Amazon S3, but I don't want everything to go straight to S3. I'm using WordPress and need to generate different image sizes for uploaded media, so I'd like to upload images to my server for processing and then over to S3 (via this plugin). Any other media (audio, video, etc.) I'd like to upload to S3. What do you think? Is this possible?
Are you OK with having two separate Fine Uploader instances? That is what would be required. Essentially, you would have to set up one button tied to a traditional-endpoint Fine Uploader instance, and another tied to a Fine Uploader S3 instance. The buttons could have specific validation restrictions tied to them to prevent users from accidentally submitting an image file to the S3 uploader.
Another option is to provide your own file input element, check the submitted files, and then pass the appropriate file(s) to the appropriate Fine Uploader instance via Fine Uploader's API (addBlobs or addFiles).
Another possibility: just allow your users to upload all files to S3, and pull each image file back down to your server (temporarily) after it has reached the bucket, modify it, and send it back to S3.
Note that I am working on a feature for Fine Uploader 4.4 that will allow you to specify image sizes via options; Fine Uploader will scale the images and send each scaled image separately to whatever endpoint you choose. See issue #1061 for details/progress updates.
I have a PHP application running on Google App Engine that uses Google Cloud Storage to store images.
I'm displaying images using CloudStorageTools::getImageServingUrl, and the URL successfully points to the images. I resize the image using the =sXXX format.
e.g: http://lh3.ggpht.com/AddEfddJKeiesklEaldaooea9as9e7de=s144
The problem is that once I delete the previous image and replace it with another image under the same name, the old image is still displayed. Clearing the browser cache doesn't fix it. But when I remove the =sXXX part from the URL, it points to the new image without any problem at all. How can I overcome this?
Thanks & Regards!
Not a lot familiar with GAE PHP, but I'll help a bit.
A serving URL is persistent until (see the answer here) you:
a. call delete_serving_url, or
b. delete the underlying blob.
Now, I've searched, and the function CloudStorageTools::deleteImageServingUrl() does exist for PHP, so try calling that and then creating a new one.
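A minimal sketch, assuming the App Engine SDK's CloudStorageTools and a hypothetical bucket path:

use google\appengine\api\cloud_storage\CloudStorageTools;

$gsPath = 'gs://my-bucket/images/photo.jpg'; // hypothetical object

// Drop the serving URL tied to the old blob, then mint a fresh one
// after the replacement image has been uploaded under the same name.
CloudStorageTools::deleteImageServingUrl($gsPath);
$url = CloudStorageTools::getImageServingUrl($gsPath, ['size' => 144]);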
Add "?".microtime() to the URL generated by getImageServingUrl to force refresh.