1) I have an upload form.
2) It uploads the file to my local storage with move_uploaded_file().
3) It uses the Zend putObject() function to move the file to an S3 object.
Everything works fine until the file size is around 30 MB to 40 MB. The problem is that when I try uploading larger files, like 80 MB or 100 MB, moving the file to S3 takes ages to complete. My code is something like this:
$orginalPath = APPLICATION_PATH . "/../storage/" . $fileName;
move_uploaded_file($data['files']['tmp_name'], $orginalPath);

// $path is the "bucket/key" object path, e.g. "my-bucket-name/orginal/$fileName"
$s3 = new Zend_Service_Amazon_S3($accessKey, $secretKey);
$s3->putObject($path, file_get_contents($orginalPath),
    array(Zend_Service_Amazon_S3::S3_ACL_HEADER => Zend_Service_Amazon_S3::S3_ACL_PUBLIC_READ));
Can you help me handle large file moves to S3 more quickly? I tried using the stream wrapper like this:
$s3->registerStreamWrapper("s3");
file_put_contents("s3://my-bucket-name/orginal/$fileName", file_get_contents($orginalPath));
But no luck; it takes just as long to move the file.
So, is there an efficient way to move large files to an S3 bucket quickly?
The answer is a worker process. You can start a PHP worker script via the PHP CLI on server boot, for example with the GearmanClient PHP extension and a Gearman server running on your box. You then queue a background job to upload the file to S3; your main site PHP code returns success as soon as the job is issued, and the file happily uploads in the background while your foreground site continues on its merry way. Another way of doing this is to make another server handle the whole task, so your main site stays free of this load. I am doing this now. It works well.
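For illustration, a minimal sketch of that pattern with the Gearman extension might look like the following; the job name, payload fields, and server address are assumptions for this example, not part of the original setup.

<?php
// queue-upload.php - runs inside the web request (hedged sketch)
$client = new GearmanClient();
$client->addServer('127.0.0.1', 4730); // assumption: gearmand on the same box
$client->doBackground('upload_to_s3', json_encode(array(
    'local_path' => $localPath,                        // file already saved locally
    's3_path'    => "my-bucket-name/orginal/$fileName" // "bucket/key" object path
)));
// return success to the browser here; the upload continues in the background

<?php
// s3-worker.php - started once via the PHP CLI, keeps pulling jobs forever
$worker = new GearmanWorker();
$worker->addServer('127.0.0.1', 4730);
$worker->addFunction('upload_to_s3', function (GearmanJob $job) {
    $payload = json_decode($job->workload(), true);
    $s3 = new Zend_Service_Amazon_S3(AWS_ACCESS_KEY, AWS_SECRET_KEY); // assumption: keys defined as constants
    $s3->putObject(
        $payload['s3_path'],
        file_get_contents($payload['local_path']),
        array(Zend_Service_Amazon_S3::S3_ACL_HEADER => Zend_Service_Amazon_S3::S3_ACL_PUBLIC_READ)
    );
});
while ($worker->work());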
You could also consider using the more direct browser POST to S3 feature. The AWS SDK for PHP has a class to help generate the data for the form.
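For example, assuming the v3 AWS SDK for PHP and its PostObjectV4 helper (the region, bucket name, key prefix, and size limit below are placeholders), a rough sketch of generating the form data:

<?php
require 'vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\S3\PostObjectV4;

$client = new S3Client(array(
    'version' => 'latest',
    'region'  => 'us-east-1', // assumption: your bucket's region
));

$bucket = 'my-bucket-name'; // assumption
$formInputs = array('acl' => 'public-read', 'key' => 'orginal/${filename}');

// Policy conditions: public-read uploads under orginal/, up to ~1 GB
$options = array(
    array('acl' => 'public-read'),
    array('bucket' => $bucket),
    array('starts-with', '$key', 'orginal/'),
    array('content-length-range', 0, 1073741824),
);

$postObject = new PostObjectV4($client, $bucket, $formInputs, $options, '+1 hours');

// Use these to build the <form> the browser submits straight to S3,
// so the large file never passes through your PHP server at all.
$attributes = $postObject->getFormAttributes(); // action, method, enctype
$inputs     = $postObject->getFormInputs();     // hidden policy/signature fields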
Related
I'm working on a project that processes some files on the server at the user's request, then uploads the resulting ZIP file as a blob to Azure for storage. A few cases involve extremely large files that take about an hour to upload. It would be helpful if I could, at any given moment, run a separate process that queries Azure for the upload progress while the main process on the local server is still busy uploading the file.
Is there a way to do this in PHP? (If it's of any help, this project is running on Phalcon.)
Is there a way to monitor the progress of uploads to an Azure cloud server via PHP?
I don't think it's possible, because uploading a file is a single task: even though internally the file is split into multiple chunks and those chunks get uploaded, the code actually waits for the entire task to finish.
I've been stuck on this specific problem for two days and can't find a solution.
So I have a Laravel 7.0 project hosted on AWS Elastic Beanstalk, which is running fine. I also have an S3 bucket used for saving videos that users upload to the server via a form.
The problem is that smaller files (< 10 MB) are uploaded without a problem. But once it comes to bigger files, Storage::disk('s3')->put('videos/lorem.mp4', fopen($request->file('file'), 'r+')); returns false and the file is not uploaded to S3. If I use the 'public' disk instead of the 's3' disk, the file is uploaded without a problem.
I also tried uploading a file manually via the AWS CLI with the same IAM user, and it was uploaded without a problem.
PHP and nginx are correctly configured to accept big files.
I know this is a very specific question, but if anyone has a hint or a solution, please do share.
This is likely a timeout issue on the upload. Increasing PHP's max_execution_time might help; I'd try that first.
Otherwise, you should look into uploading to local disk and having an entirely separate process pick the files up from disk and upload them to S3. I was amazed at how much throughput you can gain from this approach. It's also worth checking out the job queues feature in Laravel.
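As a rough sketch of the queued approach, a Laravel job along these lines could stream an already-saved file to S3 in the background (the class name, paths, and timeout here are assumptions for illustration):

<?php
// app/Jobs/UploadVideoToS3.php (hedged sketch)
namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use Illuminate\Support\Facades\Storage;

class UploadVideoToS3 implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public $timeout = 3600; // give large files plenty of time on the queue worker

    protected $localPath;
    protected $s3Path;

    public function __construct($localPath, $s3Path)
    {
        $this->localPath = $localPath;
        $this->s3Path = $s3Path;
    }

    public function handle()
    {
        // Stream from the local disk to S3 so memory usage stays flat
        $stream = fopen(Storage::disk('local')->path($this->localPath), 'r');
        Storage::disk('s3')->put($this->s3Path, $stream);
        if (is_resource($stream)) {
            fclose($stream);
        }
    }
}

// In the controller: save locally, respond immediately, upload in the background:
// UploadVideoToS3::dispatch($request->file('file')->store('videos'), 'videos/lorem.mp4');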
I want users to upload large files (like HD videos and big PDFs), and I am using this line of code to upload them: Storage::disk('s3')->putFile('uploads', new File($request->file('file_upload')));
The problem is that even though I have a good internet connection, it takes a very long time to upload a file. What can I do to get faster file uploads?
There are actually two network calls involved in the process.
From the client side, the file gets uploaded to your server.
Via a server-to-server call, the file then gets uploaded from your server to S3.
The only way to reduce the delay is to upload the files securely from the client directly to S3 using the client-side SDKs. That way the files are stored straight in the S3 bucket.
Once the files are uploaded to S3 via the AWS client-side SDKs, you can post the file's attributes along with its download URL to Laravel and save them in the DB.
The plus point of this approach is that it lets you show the actual file upload progress on the client side.
This can be done via the AWS Amplify library, which provides great integration with S3: https://docs.amplify.aws/start
The other options:
JS: https://softwareontheroad.com/aws-s3-secure-direct-upload/
Android: https://grokonez.com/android/uploaddownload-files-images-amazon-s3-android
iOS: https://aws.amazon.com/blogs/mobile/amazon-s3-transfer-utility-for-ios/
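If you would rather keep the signing logic on the PHP side, a related option is to hand the browser a presigned PUT URL so the client still uploads straight to S3. A minimal sketch using the v3 AWS SDK for PHP (region, bucket, and key below are placeholders):

<?php
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3Client = new S3Client(array(
    'version' => 'latest',
    'region'  => 'us-east-1', // assumption
));

// Build a PutObject command for the key the client will upload to
$cmd = $s3Client->getCommand('PutObject', array(
    'Bucket' => 'my-bucket-name',   // assumption
    'Key'    => 'uploads/video.mp4' // assumption
));

// The URL is valid for 20 minutes; the browser PUTs the file body to it directly
$request      = $s3Client->createPresignedRequest($cmd, '+20 minutes');
$presignedUrl = (string) $request->getUri();

// Return $presignedUrl to the client; after the PUT succeeds, the client posts
// the object key back to Laravel so you can record the upload in the DB.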
Please use this
$file_name = $request->file('name');
$disk = Storage::disk('s3');
// Passing a stream handle lets the file be streamed to S3 instead of being read fully into memory
$disk->put($filePath, fopen($file_name, 'r+'));
Instead of
Storage::disk('s3')->put($filePath, file_get_contents($file_name));
And also increase these php.ini settings:
post_max_size, upload_max_filesize, max_execution_time, memory_limit
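For example, values along these lines in php.ini; the exact numbers are only illustrative and should be tuned to the largest upload you expect:

; php.ini - illustrative values only
upload_max_filesize = 512M
post_max_size = 520M        ; must be at least upload_max_filesize
memory_limit = 1024M
max_execution_time = 600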
I'm running PHP on nginx, and users are uploading files to my server through my app.
I need to write a process that checks whether files have finished uploading and then moves them to an Amazon S3 bucket.
My questions are:
How do I check whether files have finished uploading?
Would it be faster to upload them directly to S3?
It is always faster to cut out the middle-man and upload directly to S3.
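If you do keep the local step, a minimal sketch of such a mover process, assuming the AWS SDK for PHP and a local spool directory (every path, bucket name, and threshold here is a placeholder):

<?php
// move-to-s3.php - run from cron or as a long-lived CLI loop (hedged sketch)
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3 = new S3Client(array('version' => 'latest', 'region' => 'us-east-1')); // assumption
$spoolDir = '/var/www/storage/uploads';                                    // assumption

foreach (glob($spoolDir . '/*') as $path) {
    // Treat a file as "finished" only if it has not been touched for a minute;
    // a more robust approach is to have the upload handler rename() completed
    // files into this directory atomically once PHP has fully received them.
    if (time() - filemtime($path) < 60) {
        continue;
    }
    $s3->putObject(array(
        'Bucket'     => 'my-bucket-name', // assumption
        'Key'        => 'uploads/' . basename($path),
        'SourceFile' => $path,
    ));
    unlink($path); // drop the local copy once S3 has it
}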
I have another issue with Amazon, related to file uploads. I am using jQuery File Upload and Amazon's APIs to upload files to Amazon S3. I have succeeded in uploading them, but it involves a trick.
I had to store the image on my server and then move it to S3 from there using the putObjectFile method of S3. Now, the plugin comes with great functions to crop/resize images, and I have been using them for a long time. But when I integrate the plugin with AWS, I am facing performance issues with uploads. The time taken for uploads is longer than normal, and this raises questions about using AWS S3 over the traditional way.
I had to make changes to my UploadHandler.php file to make it work. These are the changes I made: I added a piece of AWS upload code to the file from line 735 to 750:
$bucket = "elasticbeanstalk-2-66938761981";
$s3 = new S3(awsAccessKey, awsSecretKey);

// Upload the original file and its thumbnail, both with a public-read ACL
$response = $s3->putObjectFile($file_path, $bucket, $file->name, S3::ACL_PUBLIC_READ);
$thumbResponse = $s3->putObjectFile('files/thumbnail/'.$file->name, $bucket, 'images/'.$file->name, S3::ACL_PUBLIC_READ);

//echo $response;
//echo $thumbResponse;
if ($response == 1) {
    //echo 'HERER enter!!';
} else {
    $file->error = "<strong>Something went wrong while uploading your file... sorry.</strong>";
}
return $file;
Here is a link to s3 class on git.
A normal upload of the same image to my current (non-Amazon) server takes 15 seconds, but on Amazon S3 it takes around 23 seconds, and I am not able to figure out a better solution. I have to store the image on my server before uploading to S3, as I am not sure whether I can process the images on the fly and upload them directly to S3. Can anyone suggest the right way to approach the problem? Is it possible to resize the images to different sizes in memory and upload them directly to S3, avoiding the overhead of saving them to our server? If so, can anyone point me in the right direction?
Thank you for your attention.
I believe the roughly 8 seconds of extra time is the overhead of creating versions of the image in different sizes.
You may take different approaches to get rid of the resizing overhead at upload time. The basic idea is to let the upload script finish execution and return the response, and to do the resizing as a separate process.
I'd like to suggest the following approaches:
Approach 1. Don't resize during the upload! Create resized versions on the fly only when they are requested for the first time, and cache the generated images to serve directly for later requests. I saw a few mentions of Amazon CloudFront as a solution in some other threads on Stack Overflow.
Approach 2. Invoke the code for creating the resized versions as a separate asynchronous request after the original image has been uploaded. There will be a delay before the scaled versions become available, so write the necessary code to show placeholder images on the website until then. You will also have to figure out some way to tell whether a scaled version is available yet (for example, check whether the file exists, or set a flag in the database). Some ways of making asynchronous cURL requests are suggested here, if you would like to try that out.
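For Approach 2, one simple way to kick off the resizing without blocking the upload response is to hand the work to a background PHP CLI process; the script name and flag mechanism below are assumptions for illustration:

<?php
// In UploadHandler.php, right after the original image has been stored:
// fire-and-forget a CLI worker that builds the scaled versions and pushes
// them to S3, so the HTTP response can return immediately.
$cmd = sprintf(
    'php %s %s > /dev/null 2>&1 &',
    escapeshellarg('/var/www/scripts/resize_and_upload.php'), // assumption
    escapeshellarg($file_path)
);
exec($cmd);

// resize_and_upload.php would load the image, create each thumbnail size,
// call $s3->putObjectFile() for every version, and finally set a flag
// (a DB column or a marker file) so the site knows the scaled versions
// are ready and can stop serving the placeholder image.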
I think both approaches involve a similar level of complexity.
Some other approaches are suggested as answers to this other question.