Uploading multiple files to AWS S3 with Progress Bar - php

I'm using Laravel to upload multiple files into an S3 bucket but can't get the progress data correctly on the client side.
On the client side I have the following (simplified):
var xhr = new XMLHttpRequest();
xhr.addEventListener("progress", updateProgress, false);
xhr.addEventListener("load", transferComplete, false);
xhr.open("POST", "my_url_to_upload");
xhr.send(formData); // formData is defined earlier and contains multiple files

function updateProgress(e) {
    console.log('updateProgress', e);
}

function transferComplete(e) {
    console.log("Files uploaded successfully.", e);
}
On Laravel's side:
$files = \Input::file('file');
$s3 = \Storage::disk('s3');

foreach ($files as $file)
{
    $file_name = "/some_folder/" . $file->getClientOriginalName();
    $s3->put($file_name, file_get_contents($file), 'public');
}
This works great as far as uploading the files to the S3 bucket.
The problem is that when uploading multiple files, the client-side updateProgress function is only called once, and only after all the files have been uploaded (instead of during the uploads and after each file).
Ideally, the progress bar would be updated periodically during the uploads, so that large files show close to real-time progress (not just when each file completes).
How would I get Laravel (or PHP in general) to report back the progress, during the uploads?

Here's how to do it in SDK v3:
$client = new S3Client(/* config */);

$result = $client->putObject([
    'Bucket'      => 'bucket-name',
    'Key'         => 'bucket-name/file.ext',
    'SourceFile'  => 'local-file.ext',
    'ContentType' => 'application/pdf',
    '@http' => [
        'progress' => function ($downloadTotalSize, $downloadSizeSoFar, $uploadTotalSize, $uploadSizeSoFar) {
            printf(
                "%s of %s downloaded, %s of %s uploaded.\n",
                $downloadSizeSoFar,
                $downloadTotalSize,
                $uploadSizeSoFar,
                $uploadTotalSize
            );
        }
    ]
]);
This is explained in the AWS SDK for PHP docs, in the configuration guide's http options section. It works by exposing Guzzle's progress request option (a callable), as explained in this SO answer.
I understand you're doing it a bit differently (with streams), but I'm sure you can adjust my example to your needs.
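For what it's worth, here is a rough sketch of how that @http progress callback could be wired into the Laravel loop from the question by talking to an S3Client directly instead of the Storage facade. Everything not in the original answer is an assumption: the config() keys, the 'public-read' ACL, and the idea of stashing per-file progress in the cache under a hypothetical upload_progress key for the browser to poll.

use Aws\S3\S3Client;
use Illuminate\Support\Facades\Cache;

$client = new S3Client([
    'version' => 'latest',
    'region'  => config('filesystems.disks.s3.region'), // assumes the standard Laravel S3 disk config
]);

$files = \Input::file('file');

foreach ($files as $file) {
    $key = 'some_folder/' . $file->getClientOriginalName();

    $client->putObject([
        'Bucket'     => config('filesystems.disks.s3.bucket'),
        'Key'        => $key,
        'SourceFile' => $file->getRealPath(),
        'ACL'        => 'public-read',
        '@http'      => [
            'progress' => function ($dlTotal, $dlNow, $ulTotal, $ulNow) use ($key) {
                // Hypothetical progress store: the browser would poll a separate
                // endpoint that reads this cache entry back.
                Cache::put("upload_progress.{$key}", [
                    'uploaded' => $ulNow,
                    'total'    => $ulTotal,
                ], 60);
            },
        ],
    ]);
}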

Related

Google Drive API upload 1 file from PHP

When the page is opened, the output is:
File ID: 1qG8tteyVhAbB_rbu_VUvaE9ReqnSjEAh...
But on Google Drive no new files are created. Eventually I want to run the upload from a cron job, but for now I only want to fetch test.pdf and upload it, nothing more.
require_once './google-api-php-client/vendor/autoload.php';

use Google\Client;
use Google\Service\Drive;

function uploadBasic()
{
    try {
        $client = new Client();
        //$client->useApplicationDefaultCredentials();
        $client->setAuthConfig('./google-api-php-client/1710-6c50418be6b2.json');
        $client->addScope(Drive::DRIVE);
        $driveService = new Drive($client);
        $fileMetadata = new Drive\DriveFile(array(
            'parents' => ['225qhcKKyf8Ot0IhrRxRtqgHNTxLV1LiyI'],
            'name' => 'test.pdf',
            'mimeType' => 'application/pdf'));
        $mimeType = mime_content_type($fileMetadata);
        $content = file_get_contents('https://example.com/test.pdf');
        $file = $driveService->files->create($fileMetadata, array([
            'data' => $content,
            'mimeType' => 'application/pdf',
            'uploadType' => 'multipart',
            'fields' => 'id']));
        printf("File ID: %s\n", $file->id);
        return $file->id;
    } catch (Exception $e) {
        echo "Error Message: " . $e;
    }
}

uploadBasic();
How to debug the issue
The fastest way to debug this is to do a files.list call. This will tell you whether the file was in fact uploaded.
You are not setting parents in your metadata, so the file will have been uploaded to the root directory.
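If it helps, a minimal sketch of that check, reusing the $driveService object from the question (the pageSize, orderBy, and fields values are just illustrative):

// List the most recently created files this account can see,
// to confirm whether the upload actually happened and what size it is.
$response = $driveService->files->listFiles(array(
    'pageSize' => 10,
    'orderBy'  => 'createdTime desc',
    'fields'   => 'files(id, name, size, parents)',
));

foreach ($response->getFiles() as $driveFile) {
    printf("%s (%s) size: %s\n", $driveFile->getName(), $driveFile->getId(), $driveFile->getSize());
}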
Service account
Remember that if you are using a service account, the files are uploaded into the service account's Google Drive account, not your personal Drive account.
To upload to your personal Drive account you would need to create a directory in your Drive account and share that directory with the service account, using the service account's email address. The service account email address can be found in the JSON key file; it's the only entry with an @ in it.
Then set parents in the metadata to the ID of that folder on your Drive account:
$fileMetadata = new Drive\DriveFile(array(
    'parents' => array('FolderId'),
    'name' => 'ASB-Background-3.png'));
File 0 size error after edit
You edited your question. It originally stated you were doing this
$content = file_get_contents('./google-api-php-client/ASB-Background-3.png');
It is bad practice to update your question and change your code: it changes the answer to your question and, in this case, your error message.
That being said, from the documentation for file_get_contents:
file_get_contents — Reads entire file into a string
Note that file_get_contents can only read from a URL if the fopen URL wrappers (allow_url_fopen) are enabled on the server; if they are not, the call fails and you end up uploading an empty string, so your edit changing the path to a URL may well not work in your environment.
file_get_contents('https://example.com/test.pdf');
Your file is uploading with 0 bytes because you are not giving it any content. Download the file onto the machine running the code and then send it, or write your own method that accepts a URL and returns the file contents as a string.
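As a rough sketch of that helper (the function name and the exception are placeholders, not something from the question or answer):

function fetchRemoteFile(string $url): string
{
    // Placeholder helper: fetch a remote file and fail loudly instead of
    // silently uploading an empty body to Drive.
    $content = file_get_contents($url);

    if ($content === false || $content === '') {
        throw new Exception("Could not read the remote file: {$url}");
    }

    return $content;
}

$content = fetchRemoteFile('https://example.com/test.pdf');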
Upload image
Files are uploaded in two parts: first the fileMetadata, and then the file contents itself.
The mimeType must be set to match the file you are uploading, and file_get_contents will only work on a file that is actually accessible to your code.
If the file size is 0, make sure to:
check the most recently uploaded file (every create call creates a new file);
ensure that your code has access to the file you are uploading;
confirm that the mimeType is correct.
Sample.
try {
    $client = new Client();
    $client->useApplicationDefaultCredentials();
    $client->addScope(Drive::DRIVE);
    $driveService = new Drive($client);
    $fileMetadata = new Drive\DriveFile(array(
        'name' => 'photo.jpg'));
    $content = file_get_contents('../files/photo.jpg');
    $file = $driveService->files->create($fileMetadata, array(
        'data' => $content,
        'mimeType' => 'image/jpeg',
        'uploadType' => 'multipart',
        'fields' => 'id'));
    printf("File ID: %s\n", $file->id);
    return $file->id;
} catch (Exception $e) {
    echo "Error Message: " . $e;
}

When using the AWS PHP SDK 3.x is there a way to batch-upload multipart files to S3 in parallel using a getCommand array?

I am working on a process to upload a large number of files to S3, and for my smaller files I am building a list of commands using getCommand to upload them concurrently, like this:
$commands = array();

$commands[] = $s3Client->getCommand('PutObject', array(
    'Bucket' => 'mybucket',
    'Key' => 'filename.ext',
    'Body' => fopen('filepath', 'r'),
));

$commands[] = $s3Client->getCommand('PutObject', array(
    'Bucket' => 'mybucket',
    'Key' => 'filename_2.ext',
    'Body' => fopen('filepath_2', 'r'),
));
etc.
try {
    $pool = new CommandPool($s3Client, $commands, [
        'concurrency' => 5,
        'before' => function (CommandInterface $cmd, $iterKey) {
            // Do stuff before the file starts to upload
        },
        'fulfilled' => function (ResultInterface $result, $iterKey, PromiseInterface $aggregatePromise) {
            // Do stuff after the file is finished uploading
        },
        'rejected' => function (AwsException $reason, $iterKey, PromiseInterface $aggregatePromise) {
            // Do stuff if the file fails to upload
        },
    ]);

    // Initiate the pool transfers
    $promise = $pool->promise();

    // Force the pool to complete synchronously
    $promise->wait();

    $promise->then(function () { echo "All the files have finished uploading!"; });
} catch (Exception $e) {
    echo "Exception Thrown: Failed to upload: " . $e->getMessage() . "<br>\n";
}
This works fine for the smaller files, but some of my files are large enough that I'd like them to be uploaded in multiple parts automatically. So, instead of using getCommand('PutObject'), which uploads an entire file in one request, I'd like to use something like getCommand('ObjectUploader') so that the larger files can be broken up as needed. However, when I try getCommand('ObjectUploader') it throws an error saying it doesn't know what to do with that. I'm guessing the command has a different name, which is why it is throwing the error, but it's also possible that this simply can't be done this way.
If you've worked on something like this in the past, how have you done it? Or even if you haven't worked on it, I'm open to any ideas you might have.
Thanks!
References:
https://docs.aws.amazon.com/sdk-for-php/v3/developer-guide/guide_commands.html#command-pool
https://docs.aws.amazon.com/sdk-for-php/v3/developer-guide/s3-multipart-upload.html#object-uploader
I decided to go a different direction with this: instead of using an array of concurrent commands, I am now using a set of concurrent MultipartUploader promises, as shown in an example on this page: https://500.keboola.com/parallel-multipart-uploads-to-s3-in-php-61ff03ffc043
Here are the basics:
// Create an array of your file paths
$files = ['file1', 'file2', ...];

// Create an array to hold the promises
$promises = [];

// Create MultipartUploader objects, and add them to the promises array
foreach ($files as $filePath) {
    $uploader = new \Aws\S3\MultipartUploader($s3Client, $filePath, $uploaderOptions);
    $promises[$filePath] = $uploader->promise();
}

// Process the promises once they are all complete
$results = \GuzzleHttp\Promise\unwrap($promises);
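For completeness, $uploaderOptions in that snippet has to carry at least the destination bucket and key for each file. Re-stating the loop above with those options filled in, as a minimal sketch; the bucket name, key prefix, and part size are placeholders:

foreach ($files as $filePath) {
    $uploader = new \Aws\S3\MultipartUploader($s3Client, $filePath, [
        'bucket'    => 'mybucket',                        // placeholder bucket name
        'key'       => 'uploads/' . basename($filePath),  // placeholder key scheme
        'part_size' => 10 * 1024 * 1024,                  // optional: 10 MB parts
    ]);
    $promises[$filePath] = $uploader->promise();
}

// Throws if any upload fails; returns an array of results keyed by file path.
$results = \GuzzleHttp\Promise\unwrap($promises);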

Upload image to AWS bucket from remote server

I have a web application in PHP up and running. I want this app to be capable of uploading images to an AWS S3 bucket. I am checking the documentation at AWS, but found at least three different guides for this purpose, and I am still not clear: will my web app, hosted with a different hosting provider, be able to upload files to AWS?
If yes, which is the best option?
You should be able to upload from outside of the AWS network.
Use the AWS PHP SDK at https://aws.amazon.com/sdk-for-php/
Then use the following code:
<?php
require 'vendor/autoload.php';

use Aws\Exception\MultipartUploadException;
use Aws\S3\MultipartUploader;
use Aws\S3\S3Client;

$bucket = '*** Your Bucket Name ***';
$keyname = '*** Your Object Key ***';

$s3 = new S3Client([
    'version' => 'latest',
    'region'  => 'us-east-1'
]);

// Prepare the upload parameters.
$uploader = new MultipartUploader($s3, '/path/to/large/file.zip', [
    'bucket' => $bucket,
    'key'    => $keyname
]);

// Perform the upload.
try {
    $result = $uploader->upload();
    echo "Upload complete: {$result['ObjectURL']}" . PHP_EOL;
} catch (MultipartUploadException $e) {
    echo $e->getMessage() . PHP_EOL;
}
?>
Edit the bucket name, keyname, region and upload file name.
This is the multi-part upload style so you can upload huge files.
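One thing the snippet glosses over: since your server is outside AWS there is no instance role to fall back on, so the SDK needs credentials from somewhere, e.g. the AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY environment variables, a ~/.aws/credentials file, or an explicit 'credentials' entry in the client config. A sketch of the explicit variant (reading from environment variables is just one possible way to store them):

$s3 = new S3Client([
    'version'     => 'latest',
    'region'      => 'us-east-1',
    'credentials' => [
        'key'    => getenv('AWS_ACCESS_KEY_ID'),
        'secret' => getenv('AWS_SECRET_ACCESS_KEY'),
    ],
]);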

Save PHP file_get_contents stream to AWS S3

I'm trying to get a remote file (an image) using PHP and then put that image into an S3 bucket.
It mostly works, except that the file that is uploaded to S3 is empty when it is downloaded back again.
Here is my code so far:
$src = "http://images.neventum.com/2016/83/thumb100x100/56f3fb1b9dbd6-acluvaq_63324840.png";
$name = md5(uniqid());
$ext = pathinfo($src, PATHINFO_EXTENSION);
$file_content = file_get_contents($src);
$filename = "{$name}.{$ext}";

try {
    $file = $this->s3->putObject([
        'Bucket' => getEnv('s3bucket'),
        'Key'    => $filename,
        'Body'   => $file_content,
        'ACL'    => 'private'
    ]);
} catch (S3Exception $e) {
    // Catch
}
Hope you can help. Thank you so much in advance.
Update 1:
The problem is not with the ACL (I have tested using "public"); the issue is that the object saved on S3 is not uploaded correctly. I think it is something to do with the encoding, but I have not been able to figure it out.
Once you upload an image to an S3 bucket it is private by default, so you will not be able to download it directly. You need to give the object public read access to make it available for download to users.
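If you do want the object to be directly downloadable, the only change to the call in the question would be the ACL value (assuming the bucket policy allows public ACLs):

$file = $this->s3->putObject([
    'Bucket' => getEnv('s3bucket'),
    'Key'    => $filename,
    'Body'   => $file_content,
    'ACL'    => 'public-read', // instead of 'private'
]);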

Putting multiple objects in Amazon S3 with different meta data fails

I've got an array of file information that is being looped through, using the AWS PHP SDK 2 to upload the contents of these files to the cloud. It all works brilliantly until I try to add metadata; at that point it adds the metadata to the first object created, but after that I get the following error message:
Fatal error: Uncaught Aws\S3\Exception\SignatureDoesNotMatchException: AWS Error Code: SignatureDoesNotMatch, Status Code: 403, AWS Request ID: 8FC9360F2EB687EE, AWS Error Type: client, AWS Error Message: The request signature we calculated does not match the signature you provided. Check your key and signing method., User-Agent: aws-sdk-php2/2.2.1 Guzzle/3.3.1 curl/7.24.0 PHP/5.3.13 thrown in D:\Inetpub\wwwroot\ThirdParty_Resources\AWS_SDK2\aws\aws-sdk-php\src\Aws\Common\Exception\NamespaceExceptionFactory.php on line 89
I've cropped the code from my loop to highlight the area that is being naughty.
foreach ($aFiles as $aFile) {
    $arr_ObjectMeta = array(
        'OriginalFileName' => $aFile['FileName'],
        'Description' => $aFile['FileDesc'],
        'Keywords' => $aFile['FileKeyW']
    );

    // get the file to upload
    $obj_FileUpload = $obj_S3->putObject($sBucket, $sBucketFolder . $sFilenameToSave, $sFile, 'public-read', $arr_ObjectMeta);

    if ($obj_FileUpload) {
        $files_uploaded++;
    } else {
        $files_not_uploaded++;
    }

    // clear the file upload S3 response object
    unset($obj_FileUpload);

    // delete the downloaded file
    unlink($sServerUploadFolder . $sFilenameToSave);
}
So the second time around the loop, it seems to bomb because of the different meta values. When the meta data is the same, the loop executes without issue. Any help/pointers would be great.
You might be confusing the putObject method with the upload helper method.
The upload helper method is available as of version 2.4 of the SDK. Using the upload method you could do the following:
try {
    $sKey = $sBucketFolder . $sFilenameToSave;
    $obj_FileUpload = $obj_S3->upload($sBucket, $sKey, $sFile, 'public-read', array(
        'Metadata' => $arr_ObjectMeta
    ));
    $files_uploaded++;
} catch (\Aws\S3\Exception\S3Exception $e) {
    $files_not_uploaded++;
}
You can do the same thing with the putObject method as well, it is just slightly more verbose.
try {
    $obj_FileUpload = $obj_S3->putObject(array(
        'Bucket' => $sBucket,
        'Key' => $sBucketFolder . $sFilenameToSave,
        'SourceFile' => $sFile,
        'ACL' => 'public-read',
        'Metadata' => $arr_ObjectMeta
    ));
    $files_uploaded++;
} catch (\Aws\S3\Exception\S3Exception $e) {
    $files_not_uploaded++;
}
