We have developed our application in Laravel and are now planning to move it to Amazon, where we have to separate our application logic from file storage. Basically, we want to move our whole application storage to a cloud service (Amazon S3) and the application logic to an Amazon EC2 server.
In our system, we manipulate many stored files locally (resize images, merge images, make thumbnails from videos, etc.). Once we migrate to Amazon, we will no longer store any files on the application server. So our concern is: how can we manipulate the files on the cloud server?
Earlier, all files were present on the application server, so file manipulation was easy. After migrating the whole storage to the cloud, how can we manipulate files that live on the cloud server when the manipulation logic resides on the application server?
Any response will be helpful.
Thanks in advance...
To manipulate an S3 file, I think we first need to download it locally. Once we have the file locally, we can apply any operation to that particular file and upload the result back to S3. We can delete the local file afterwards.
Here is the documentation on uploading from and downloading to a local file with Amazon S3:
https://aws.amazon.com/blogs/developer/transferring-files-to-and-from-amazon-s3/
https://docs.aws.amazon.com/aws-sdk-php/v3/guide/
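Since the question uses Laravel, here is a minimal sketch of that download → manipulate → re-upload cycle with Laravel's Storage facade. It assumes an "s3" disk is configured in config/filesystems.php; resizeImage() is a hypothetical placeholder for your own image-processing code.

```php
<?php
// Download → manipulate → re-upload cycle on the EC2 instance.
// Assumes an "s3" disk in config/filesystems.php (league/flysystem-aws-s3-v3).

use Illuminate\Support\Facades\Storage;

// 1. Pull the object from S3 into a local temp file.
$localPath = tempnam(sys_get_temp_dir(), 's3_');
file_put_contents($localPath, Storage::disk('s3')->get('uploads/photo.jpg'));

// 2. Manipulate it locally (resize, merge, thumbnail, ...).
//    resizeImage() is a hypothetical placeholder for your own manipulation code.
resizeImage($localPath, 200, 200);

// 3. Push the result back to S3 and remove the local copy.
Storage::disk('s3')->put('thumbnails/photo.jpg', file_get_contents($localPath));
unlink($localPath);
```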
Thanks
Related
I have a project that needs to output hundreds of photos from one template file, compress them into a .zip file, and push it to the customer's browser. After that, the .zip file can be deleted.
Google App Engine (PHP) does not allow you to write files the way you would on a standard web server.
How can this be accomplished with GAE flexible?
As you already know, App Engine flexible does not let you rely on files written to the local system, even though it runs on a VM. The reason is that your app runs in multiple Docker containers, so you have no guarantee that a file written by one request will still be there for the next.
An alternative is to change your workflow a bit and use Cloud Storage as an intermediary. You can send the photos directly to Cloud Storage, and the users will be able to download them directly from Cloud Storage. Here you have a guide on how to achieve this from App Engine flex for PHP.
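As a rough sketch of that workflow, assuming the google/cloud-storage Composer package and a hypothetical bucket name; on flex, /tmp works as ephemeral scratch space within one request, while the zip itself is persisted to Cloud Storage:

```php
<?php
// Build the zip in scratch space, persist it to Cloud Storage, and hand the
// user a time-limited download link. Bucket and paths are examples only.
require 'vendor/autoload.php';

use Google\Cloud\Storage\StorageClient;

// Build the zip in ephemeral scratch space (not persistent on flex).
$zipPath = tempnam(sys_get_temp_dir(), 'photos');
$zip = new ZipArchive();
$zip->open($zipPath, ZipArchive::OVERWRITE);
$photoBytes = renderPhotoFromTemplate(); // hypothetical: bytes of one rendered photo
$zip->addFromString('photo-001.jpg', $photoBytes);
$zip->close();

// Persist to Cloud Storage.
$storage = new StorageClient();
$bucket = $storage->bucket('my-app-photos');
$object = $bucket->upload(fopen($zipPath, 'r'), ['name' => 'exports/photos.zip']);

// Time-limited link so the user downloads straight from Cloud Storage
// (requires credentials capable of signing, e.g. a service account).
$url = $object->signedUrl(new \DateTime('+1 hour'));
unlink($zipPath);
```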
I have a web page on shared hosting, and the images are stored on Amazon S3. I want to be able, with PHP, to download multiple images from Amazon S3 through my web page as a zip file.
What are my options and what is the best?
As far as I know, it is not possible to compress files on S3. Can I use AWS Lambda?
The best solution I've come across:

1. The user selects on my website which images they want to download.
2. I get the file names from my database on my web host and download the images from S3 to a temporary directory on my web host.
3. A zip file is created in the temporary directory and a link is sent to the user.
4. After a certain time, a script clears the temporary directory on my web host.

But it would be great if there were a way to create and download the zip file without going through my hosting.
AWS S3 is "basic building blocks", so it doesn't support a feature like zipping multiple objects together.
You've come up with a good method to do it, though you could stream the objects into a zip file rather than downloading them. EC2 instances can do this very quickly because they tend to have fast connections to S3.
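A sketch of the streaming approach, assuming the aws/aws-sdk-php and maennchen/zipstream-php Composer packages (v3 API shown); bucket and key names are examples:

```php
<?php
// Stream S3 objects straight into a zip sent to the browser, so nothing
// is staged on the web host's disk.
require 'vendor/autoload.php';

use Aws\S3\S3Client;
use ZipStream\ZipStream;

$s3 = new S3Client(['region' => 'us-east-1', 'version' => 'latest']);
$s3->registerStreamWrapper(); // lets fopen() read s3:// URLs

$zip = new ZipStream(outputName: 'images.zip'); // sends headers, streams to the client

foreach (['photos/a.jpg', 'photos/b.jpg'] as $key) { // keys from your database
    $stream = fopen("s3://my-bucket/{$key}", 'r');
    $zip->addFileFromStream(basename($key), $stream);
    fclose($stream);
}

$zip->finish();
```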
Lambda isn't a good fit here: the S3 trigger fires when an object is placed into a bucket, and you are doing the opposite.
I have a WordPress instance on Amazon Elastic Beanstalk. When I deploy a new version with the EB scripts, the whole application is replaced, including uploaded images that were attached to posts. After such an automatic deploy, posts have missing pictures. :)
I tried to solve this:
1) I logged into the Amazon machine over SFTP, but my user ec2-user has only read access to the files, so I was not able to overwrite only part of the application while retaining the uploaded files.
2) I read that I can use Amazon S3 as external storage for uploaded files. I have not tested this yet. :) Do you know if this is a good approach?
3) Any other approach to this problem? How should it be organized on Amazon: should machine backups be set up?
The Elastic Beanstalk environment is essentially stateless, meaning that any data persisted to disk will be lost when the application is updated, the server is rebuilt, or the environment scales.
The best way, in my opinion, is to use a plugin that writes all media files to AWS S3, something similar to the Amazon S3 and CloudFront plugin.
Your log files should also be shipped to a remote syslog server, which you can either build yourself or get from a third party.
Google: Loggly, Logstash, Graylog, Splunk.
If using Windows Azure and an ASP.NET web site with a PHP script to upload files, can I access those files from the server, or must I use the Data Storage facilities?
I.e., I'd like to reference the files directly from HTML/server code, etc. I think I probably should be able to.
Thank you.
How are you hosting your ASP.NET website? If it's hosted in a Web Role, then you won't have access to a persistent "standard" file system for that role (especially if you've scaled over multiple instances).
Have a look at the following tutorial on using blob storage from PHP: http://www.windowsazure.com/en-us/develop/php/how-to-guides/blob-service/
You can quite easily use blob storage and access the files with standard HTTP links from your ASP.NET site, i.e.:
http://your-storage-account.blob.core.windows.net/your-container/file.txt
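For completeness, a small sketch of the upload side using the Windows Azure SDK for PHP from that guide; the account name, container, and file name are placeholders:

```php
<?php
// Upload a file to blob storage so it is reachable at a plain HTTP URL
// (given a public container).
require_once 'vendor/autoload.php';

use WindowsAzure\Common\ServicesBuilder;

// Placeholder connection string; fill in your real account name and key.
$connectionString = 'DefaultEndpointsProtocol=https;AccountName=your-storage-account;AccountKey=...';
$blobClient = ServicesBuilder::getInstance()->createBlobService($connectionString);

// After this, the blob is addressable at
// http://your-storage-account.blob.core.windows.net/your-container/file.txt
$content = fopen('file.txt', 'r');
$blobClient->createBlockBlob('your-container', 'file.txt', $content);
```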
I am currently writing an application using the Yii framework in PHP that stores a large number of files uploaded by its users. Since the number of files is ever increasing, I decided it would be beneficial to use Amazon S3 to store them; when requested, the server could retrieve a file and send it to the user. (The server is an EC2 instance in the same zone.)
Since the files are all confidential, the server has to verify the identity of the user and their credentials before allowing the user to receive a file. In this case, is there a way to send the file to the user directly from S3, or do I have to pull the data to the server first and then serve it?
If so, is there any way to cache recently uploaded files on the local server so that it does not have to go to S3 to look for them? In most cases, the most recently uploaded files will be requested repeatedly by multiple clients.
Any help would be greatly appreciated!
Authenticated clients can download files directly from S3 by signing the appropriate URLs on the server prior to displaying the page/URLs to the client.
For more information, see: http://s3.amazonaws.com/doc/s3-developer-guide/RESTAuthentication.html
Note that for confidential files you may also want to consider server-side or client-side encryption. Finally, for static files (such as images) you may want to set the appropriate cache headers as well.
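A brief sketch of the signing step with the AWS SDK for PHP v3; bucket, key, and expiry below are examples:

```php
<?php
// Sign a time-limited GetObject URL on the server after the user's
// credentials have been verified, so the client downloads straight from S3.
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3 = new S3Client(['region' => 'us-east-1', 'version' => 'latest']);

$command = $s3->getCommand('GetObject', [
    'Bucket' => 'my-confidential-bucket',
    'Key'    => 'user-files/report.pdf',
]);

// The URL stops working after 10 minutes, so a leaked link expires quickly.
$request = $s3->createPresignedRequest($command, '+10 minutes');
echo (string) $request->getUri();
```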
Use AWS CloudFront to serve these static files. Rather than sending the files to the user, send them links to the files. The links need to be CloudFront links, not direct links to the S3 bucket.
This has the benefit of keeping the load on your server low, as well as caching files close to your users for better performance.
More details here: Serving Private Content through CloudFront
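For illustration, a sketch of generating such a link with the AWS SDK for PHP; it assumes a distribution configured for private content with a trusted key pair, and the domain, key pair ID, and paths are placeholders:

```php
<?php
// Generate a CloudFront signed URL instead of handing out a direct S3 link.
require 'vendor/autoload.php';

use Aws\CloudFront\CloudFrontClient;

$cloudFront = new CloudFrontClient(['region' => 'us-east-1', 'version' => 'latest']);

$signedUrl = $cloudFront->getSignedUrl([
    'url'         => 'https://d1234example.cloudfront.net/static/logo.png',
    'expires'     => time() + 3600,  // link valid for one hour
    'private_key' => '/path/to/cloudfront-private-key.pem',
    'key_pair_id' => 'APKAEXAMPLEKEYID',
]);

echo $signedUrl; // hand this to the client instead of an S3 URL
```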