I want users to upload large files (like HD videos and big PDFs), and I'm using this line of code to upload them:

Storage::disk('s3')->putFile('uploads', new File($request->file('file_upload')));
The problem is that even though I have a fast internet connection, the upload takes a very long time. What can I do to make file uploads faster?
There are actually two network calls involved in the process.
First, the file is uploaded from the client to your server.
Then, via a server-to-server call, the file is uploaded from your server to S3.
The only way to remove that delay is to upload the files directly from the client to S3, securely, using a client-side SDK (a server-side sketch follows the links below). That way the files are stored straight into the S3 bucket without passing through your server.
Once a file is uploaded to S3 via a client-side SDK, you can post its attributes along with the download URL to Laravel and save them to the DB.
A bonus of this approach is that it lets you show real upload progress on the client side.
This can be done with the AWS Amplify library, which provides good integration with S3: https://docs.amplify.aws/start
The other options:
JS: https://softwareontheroad.com/aws-s3-secure-direct-upload/
Android:
https://grokonez.com/android/uploaddownload-files-images-amazon-s3-android
iOS:
https://aws.amazon.com/blogs/mobile/amazon-s3-transfer-utility-for-ios/
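Whichever client SDK you pick, the server still has to hand out a pre-signed URL (or short-lived credentials). Here is a minimal sketch of a Laravel route that issues a pre-signed PUT URL with the AWS SDK for PHP; the route path, key scheme, and 15-minute expiry are illustrative assumptions, not part of the original answer.

use Aws\S3\S3Client;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Route;
use Illuminate\Support\Str;

Route::post('/uploads/presign', function (Request $request) {
    $client = new S3Client([
        'region'  => env('AWS_DEFAULT_REGION'),
        'version' => 'latest',
    ]);

    // Unique key so concurrent uploads never collide (illustrative scheme).
    $key = 'uploads/' . Str::uuid() . '/' . $request->input('filename');

    $command = $client->getCommand('PutObject', [
        'Bucket' => env('AWS_BUCKET'),
        'Key'    => $key,
    ]);

    // The client PUTs the file straight to this URL; it expires in 15 minutes.
    $url = (string) $client->createPresignedRequest($command, '+15 minutes')->getUri();

    return response()->json(['url' => $url, 'key' => $key]);
});

The client uploads directly to $url with an HTTP PUT, then posts the returned key back to your Laravel app for the DB record.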
Please use this:

$file = $request->file('file_upload');
$disk = Storage::disk('s3');
// fopen() streams the file to S3 in chunks instead of loading it all into memory.
$disk->put($filePath, fopen($file, 'r+'));

Instead of:

Storage::disk('s3')->put($filePath, file_get_contents($file));

file_get_contents() reads the entire file into memory before the upload starts, which is slow and can exhaust memory_limit for large files; the stream version avoids that.
Also increase these php.ini settings:
post_max_size, upload_max_filesize, max_execution_time, memory_limit
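For example (illustrative values only; size the limits to your largest expected upload):

; php.ini
upload_max_filesize = 2G
post_max_size = 2G          ; must be >= upload_max_filesize
max_execution_time = 600    ; seconds
memory_limit = 512M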
Related
I want to transcode video into 360p, 480p, and 720p and then upload it to Amazon S3.
Currently we are using the PHP library FFMPEG.
I have successfully transcoded video on my server, but I don't understand how to achieve the same on Amazon S3.
Do I need to upload the original video to S3 first, then fetch it, transcode it into the different formats, and send the results back to Amazon S3? Is that possible?
If there is a better way, please suggest it.
S3 is not a block file system; it is an object store. The difference is that you normally can't mount an S3 bucket like a standard Unix FS and work on files with fopen(), fwrite(), etc. Some tricks exist to treat S3 like any other FS, but I would suggest another option:
Transcode the video on a locally mounted FS (like AWS EFS, or a local file system), then push (upload) the whole transcoded video to the S3 bucket. Of course, you can improve this process in many ways (remove temp files, do work in parallel, use the Lambda service, or run tasks in containers...). You should avoid doing many uploads to and downloads from S3, because it is time- and cost-consuming. Use local storage as much as possible, then push the resulting data to S3 when it is ready. A sketch of this flow follows.
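As a rough illustration of transcode-locally-then-push, here is a sketch using the php-ffmpeg/php-ffmpeg wrapper and the Laravel Storage facade; the paths, renditions, and key names are assumptions for the example, not a definitive pipeline.

use FFMpeg\FFMpeg;
use FFMpeg\Coordinate\Dimension;
use FFMpeg\Format\Video\X264;
use Illuminate\Support\Facades\Storage;

$ffmpeg = FFMpeg::create();

// width => height pairs for each rendition (16:9, even dimensions for x264)
$renditions = ['360p' => [640, 360], '480p' => [854, 480], '720p' => [1280, 720]];

foreach ($renditions as $label => [$width, $height]) {
    // Re-open the source per rendition so resize filters don't stack up.
    $video = $ffmpeg->open('/tmp/original.mp4');
    $video->filters()->resize(new Dimension($width, $height))->synchronize();

    // 1) Transcode to local temp storage first...
    $local = "/tmp/output-{$label}.mp4";
    $video->save(new X264('aac'), $local);

    // 2) ...then push the finished file to S3 in a single upload.
    Storage::disk('s3')->put("videos/{$label}.mp4", fopen($local, 'r'));
    unlink($local); // drop the temp file
}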
AWS also has a managed service for video transcoding: https://aws.amazon.com/en/elastictranscoder/
1) I have an upload form.
2) It uploads the file to my local storage with move_uploaded_file().
3) It uses Zend's putObject function to move the file to an S3 object.
Everything works fine up to file sizes of around 30-40 MB. The problem is that when I try uploading larger files, like 80 MB or 100 MB, moving the file to S3 takes ages to complete. My code is something like this:
$orginalPath = APPLICATION_PATH . "/../storage/" . $fileName;
move_uploaded_file($data['files']['tmp_name'], $orginalPath);

$s3 = new Zend_Service_Amazon_S3($accessKey, $secretKey);
$s3->putObject(
    $path,
    file_get_contents($orginalPath),
    array(Zend_Service_Amazon_S3::S3_ACL_HEADER => Zend_Service_Amazon_S3::S3_ACL_PUBLIC_READ)
);
Can you help me move large files to S3 quickly? I tried using the stream wrapper like this:

$s3->registerStreamWrapper("s3");
file_put_contents("s3://my-bucket-name/orginal/$fileName", file_get_contents($orginalPath));

But no luck; it takes the same long time to move the file.
So, is there an efficient way to move large files quickly to an S3 bucket?
The answer is a worker process. You can start a PHP worker script via the PHP CLI on server boot, for example using the GearmanClient PHP extension with a Gearman server running on your box. You queue a background job to upload the file to S3; your main site PHP code returns success right after issuing the job, and the file happily uploads in the background while your foreground site continues on its merry way. Another way of doing this is to have a separate server handle the whole task, so your main site stays free of the load. I am doing this now, and it works well. A rough sketch follows.
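Here is a minimal sketch of what that hand-off could look like with the pecl Gearman extension; the function name upload_to_s3 and the payload shape are illustrative assumptions.

// Web request side: queue the job and return to the user immediately.
$client = new GearmanClient();
$client->addServer('127.0.0.1', 4730);
$client->doBackground('upload_to_s3', json_encode([
    'local_path' => $orginalPath,
    's3_key'     => "orginal/$fileName",
]));

// Worker side (long-running CLI script started at boot):
$worker = new GearmanWorker();
$worker->addServer('127.0.0.1', 4730);
$worker->addFunction('upload_to_s3', function (GearmanJob $job) {
    $payload = json_decode($job->workload(), true);
    $s3 = new Zend_Service_Amazon_S3(ACCESS_KEY, SECRET_KEY); // your credentials here
    $s3->putObject(
        'my-bucket-name/' . $payload['s3_key'],
        file_get_contents($payload['local_path'])
    );
    unlink($payload['local_path']); // clean up the local copy once uploaded
});
while ($worker->work());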
You could also consider the more direct browser-based POST to S3 feature. The AWS SDK for PHP has a class that helps generate the data for the form.
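For instance, in version 3 of the SDK that helper is Aws\S3\PostObjectV4; the bucket name, key prefix, and expiry below are illustrative.

use Aws\S3\S3Client;
use Aws\S3\PostObjectV4;

$client = new S3Client(['region' => 'us-east-1', 'version' => 'latest']);

// Conditions the POST policy will enforce on the browser upload.
$options = [
    ['bucket' => 'my-bucket'],
    ['starts-with', '$key', 'uploads/'],
];

$postObject = new PostObjectV4(
    $client,
    'my-bucket',
    ['key' => 'uploads/${filename}'],
    $options,
    '+1 hours'
);

$attributes = $postObject->getFormAttributes(); // action URL, method, enctype
$inputs     = $postObject->getFormInputs();     // hidden fields, incl. policy and signature

Render $attributes and $inputs into an HTML form, and the browser uploads straight to S3 without touching your server.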
I'm running PHP on Nginx, and users upload files to my server through my app.
I need to write a process that checks whether files have finished uploading and then moves them to an Amazon S3 bucket.
My questions are:
How do I check whether files have finished uploading?
Would it be faster to upload directly to S3?
It is always faster to cut out the middle-man and upload directly to S3.
I have another issue with Amazon, related to file uploads. I am using jQuery File Upload and the Amazon APIs to upload files to Amazon S3. I have succeeded in uploading, but it involves a trick:
I had to store the image on my server and then move it to S3 from there using the putObjectFile method of the S3 class. The plugin comes with great functions to crop/resize images, and I have been using them for a long time. Now that I have integrated the plugin with AWS, I am facing performance issues with uploads. Uploads take longer than before, which raises questions about using AWS S3 over the traditional way.
I had to change my UploadHandler.php file to make it work. I added a piece of AWS upload code to the file, from line 735 to 750:
$bucket = "elasticbeanstalk-2-66938761981";
$s3 = new S3(awsAccessKey, awsSecretKey);

// Upload the original file and its thumbnail, both publicly readable.
$response = $s3->putObjectFile($file_path, $bucket, $file->name, S3::ACL_PUBLIC_READ);
$thumbResponse = $s3->putObjectFile('files/thumbnail/' . $file->name, $bucket, 'images/' . $file->name, S3::ACL_PUBLIC_READ);

if ($response == 1) {
    // upload succeeded
} else {
    $file->error = "<strong>Something went wrong while uploading your file... sorry.</strong>";
}
return $file;
Here is a link to the S3 class on GitHub.
A normal upload of the same image to my current (non-Amazon) server takes 15 seconds, but on Amazon S3 it takes around 23 seconds, and I am not able to figure out a better solution. I have to store the image on my server before uploading to S3, as I am not sure whether I can process the images on the fly and upload them directly to S3. Can anyone suggest the right way to approach the problem? Is it possible to resize the images to different sizes in memory and upload them directly to S3, avoiding the overhead of saving them to our server? If yes, can anyone point me in the right direction?
Thank you for your attention.
I believe the extra ~8 seconds is the overhead of creating the differently sized versions of the image at upload time.
You can take different approaches to get rid of that resizing overhead. The basic idea is to let the upload script finish execution and return the response, and run the resizing as a separate process.
I would suggest the following approaches:
Approach 1. Don't resize during the upload! Create the resized versions on the fly, only when they are requested for the first time, and cache the generated images to serve directly on later requests. I have seen Amazon CloudFront mentioned as a solution in a few other Stack Overflow threads.
Approach 2. Invoke the code that creates the resized versions as a separate asynchronous request after the upload of the original image. There will be a delay before the scaled versions become available, so write the code needed to show placeholder images on the website until then. You will also have to figure out some way to tell whether a scaled version is available yet (for example, check whether the file exists, or set a flag in the database). Some ways of making asynchronous cURL requests are suggested here if you would like to try that; a sketch follows.
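One common fire-and-forget pattern for Approach 2 looks roughly like this; the /resize.php endpoint and the 100 ms timeout are illustrative assumptions.

// Kick off the resize script without waiting for it to finish.
$ch = curl_init('https://example.com/resize.php');
curl_setopt_array($ch, [
    CURLOPT_POST           => true,
    CURLOPT_POSTFIELDS     => http_build_query(['file' => $fileName]),
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_TIMEOUT_MS     => 100,  // give up almost immediately; we don't need the response
    CURLOPT_NOSIGNAL       => true, // needed for sub-second timeouts on some systems
]);
curl_exec($ch); // times out by design; the resize script keeps running
curl_close($ch);

For this to work, the resize script should call ignore_user_abort(true) so it keeps running after the caller disconnects.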
I think both approaches involve a similar level of complexity.
Some other approaches are suggested in the answers to this other question.
A newbie question, but I have googled a bit and can't seem to find a solution.
I want to allow users to upload files directly to S3, not via my server first. Before the upload actually hits S3, is there any way the files can be checked for a size limit and for permitted types? Preferably with JavaScript rather than Flash.
If you are talking about the security problem of people uploading huge files to your bucket: yes, you CAN restrict file size with browser-based uploads to S3.
Here is an example of the "policy" document, where "content-length-range" is the key point (the PHP string concatenation around the expiration is from the code that builds the policy):
"expiration": "'.date('Y-m-d\TG:i:s\Z', time()+10).'",
"conditions": [
{"bucket": "xxx"},
{"acl": "public-read"},
["starts-with","xxx",""],
{"success_action_redirect": "xxx"},
["starts-with", "$Content-Type", "image/jpeg"],
["content-length-range", 0, 10485760]
]
In this case, if the uploaded file is larger than 10 MB (10485760 bytes), Amazon will reject the upload request.
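For context, the policy above gets base64-encoded and signed before it goes into the upload form. A minimal sketch of that step in PHP (the legacy Signature V2 style used by this browser-based POST flow), where $policy is the JSON above and $secretKey is your AWS secret key:

$policyBase64 = base64_encode($policy);
$signature = base64_encode(hash_hmac('sha1', $policyBase64, $secretKey, true));
// Embed $policyBase64 and $signature as the hidden "policy" and
// "signature" fields of the HTML upload form.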
Of course, before starting the upload, you should also use JavaScript to check the file size on the client and show an alert if it exceeds the limit. See: getting file size in JavaScript.
AWS wrote a tutorial explaining how to create HTML POST forms that allow your web site visitors to upload files into your S3 account using a standard web browser. It uses S3 pre-signed URLs to prevent tampering and you can restrict access by file size.
To do what you want, you will need to upload through your own web service. This is probably best anyway, as giving your end users global write access to your S3 bucket is a security nightmare, not to mention there would be nothing stopping them from uploading huge files and jacking up your charges.