Downloading a file from an S3 server using TPYO's PHP class - php

So, here's what I want to do:
I want to use TPYO's (Undesigned) Amazon S3 class to get a file from my S3 bucket and download it. I'm having a lot of trouble getting it to work.
I'm using this code, but it's not working for me for some reason:
if ($s3->copyObject($bucketName, $filename, $bucketName, $filename, "public-read", array(),
    array("Content-Type" => "application/octet-stream", "Content-Disposition" => "attachment"))) {
    // Successful
} else {
    // Failed
}
I tried the approaches from other questions, but I couldn't manage to get it working.

OK, I found a way of doing it. I basically dismissed the whole idea of using TPYO's S3 PHP class to download a file from my bucket. Instead, I set the objects' content type to application/octet-stream and made the files accessible directly via URL.
Thanks,
Tom.
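For reference, the class itself can also save an object straight to disk through its getObject() method; a minimal sketch, assuming placeholder bucket, key and local path:
// Minimal sketch using TPYO's S3.php (https://github.com/tpyo/amazon-s3-php-class).
// "my-bucket", "path/to/file.ext" and "/tmp/file.ext" are placeholders.
require_once 'S3.php';

$s3 = new S3($awsAccessKey, $awsSecretKey);

// The third argument is a local path (or open resource); the object body is written there.
$response = $s3->getObject('my-bucket', 'path/to/file.ext', '/tmp/file.ext');

if ($response !== false) {
    // File saved to /tmp/file.ext
} else {
    // Download failed
}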

Related

Downloading directly from S3 vs Downloading through the server

This is regarding users' uploads, which are hosted in an S3 bucket, and the best approach for downloading them. Currently I use the following:
return response()->streamDownload(function () {
    // Fetch the file
    print Storage::disk('s3')->get('file');
}, 'file-name.ext');
This works just fine, but as you can see, the file is first fetched from S3 and then streamed via the server to the user's browser, which, I think, is unnecessary work (extra calls to S3 and bandwidth), since we could just force-download it off the S3 servers instead.
I have two questions here: how to force-download the file off S3, and more importantly, am I giving this too much thought? I really hate the idea of downloading the file twice and putting more pressure on the server!
The pre-signed URLs feature was the way to go. Here is the code:
$url = Storage::disk('s3')->temporaryUrl($path, now()->addMinutes(5), [
    'ResponseContentDisposition' => "attachment; filename=$fileName.txt"
]);
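The controller can then just hand that URL to the browser, so the bytes never pass through the application server. A minimal sketch of one way to wire it up (the action and variable names are hypothetical):
// Hypothetical Laravel controller action: redirect the browser to the
// short-lived pre-signed URL so S3 serves the download directly.
public function download(string $path, string $fileName)
{
    $url = Storage::disk('s3')->temporaryUrl($path, now()->addMinutes(5), [
        'ResponseContentDisposition' => "attachment; filename=$fileName.txt",
    ]);

    return redirect()->away($url);
}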

How to get file size from Amazon AWS S3 Version 2?

I am forced to use Version 2 of the AWS SDK for PHP, because I cannot update PHP to 5.5 on this server in order to use Version 3.
I made this PHP script to download files from AWS, which works well:
// http://docs.aws.amazon.com/aws-sdk-php/v2/api/class-Aws.S3.S3Client.html#_createPresignedUrl
// Get a command object from the client and pass in any options
// available in the GetObject command (e.g. ResponseContentDisposition)
$command = $s3Client->getCommand('GetObject', array(
    'Bucket' => $bucket,
    'Key' => $objectKey,
    'ResponseContentDisposition' => 'attachment; filename="' . $originFilename . '"'
));

// Create a signed URL from the command object that will last for
// 1000 minutes from the current time
$signedUrl = $command->createPresignedUrl('+1000 minutes');

$file = file_get_contents($signedUrl);
The problem is that I want to be sure that file_get_contents() downloads the entire file, and to detect and fix any error (like the server going offline during a download, etc.), so I thought of the following flow:
I ask AWS the file size
I download the file
I check the size. If it's not equal, I re-download the file.
So, how do I get the file size from AWS? I found this, but it doesn't work for my version.
You can use the HEAD Object REST API to determine the size of the object stored on S3.
HEAD Object will return the metadata associated with the stored S3 object, including the object's size in bytes, in the Content-Length header.
http://docs.aws.amazon.com/aws-sdk-php/v2/api/class-Aws.S3.S3Client.html#_headObject
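A minimal sketch of that call with the v2 SDK, reusing the bucket and key variables from the script above, plus the size check the question describes:
// SDK for PHP v2: HEAD the object and read its size from the response.
$result = $s3Client->headObject(array(
    'Bucket' => $bucket,
    'Key'    => $objectKey,
));

// ContentLength is the object size in bytes.
$expectedSize = (int) $result['ContentLength'];

$file = file_get_contents($signedUrl);
if (strlen($file) !== $expectedSize) {
    // Size mismatch: retry the download
}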

Symfony2 file download cutoff from Apache Server

I have written a Symfony2 (PHP MVC framework) script to download a zip file from the server, but the download stops midway. I have increased max_execution_time in the Apache configuration, yet the problem persists.
Does anyone have a quick fix for this?
Thanks in advance.
It seems like you may have an issue with a large file (downloading an archive of videos). You should use a StreamedResponse. This way, you don't have to store the entire contents of the file in memory; it just streams to the client. The way you are currently doing it loads the whole file into memory before the download can start, and you can see why that could be a problem. Here is a simple example of how you can stream a file to the client:
$path = "//usr/www/users/jjdqlo/Wellness/web/yoga_videos/archive.zip";

return new StreamedResponse(
    function () use ($path) { // first param is a callback, where you do the readfile()
        readfile($path);
    },
    200, // second param is the http status code
    array( // third param is an array of header settings
        'Content-Disposition' => 'attachment;filename="archive.zip"',
        'Content-Type' => 'application/zip'
    )
);
Give this a shot. Assuming the problem is because of file size, this should solve the issue.

Deleting files from an S3 bucket using SimpleDB

I am trying to delete files on an Amazon S3 bucket using SimpleDB, but for some reason the file is not deleted even though the script reports that it was.
I am using the S3 class's deleteObject method to delete the file.
Below is the sample code :
$bucketName = "bucket";
$s3 = new S3($awsAccessKey, $awsSecretKey);

if ($s3->deleteObject($bucketName, $url)) {
    echo "deleted url";
} else {
    echo "cannot delete";
}
After execution the script echoes "deleted url", which should only happen when the deletion completes successfully. But when I open the URL again, the file is still there and has not been deleted.
Please help.
Thanks a lot.
You are using the unofficial S3.php class. The GitHub repo with documentation is here: https://github.com/tpyo/amazon-s3-php-class
This code is not provided by AWS, and should not be confused with either AWS SDK for PHP 1.x or AWS SDK for PHP 2.x.
Make sure you are passing the right object key as the parameter. I guess you are storing the file in Amazon S3 and keeping the URL of that file in Amazon SimpleDB, so you need to provide the object's key (its file name within the bucket) as the parameter instead of the full URL.
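A minimal sketch of deriving the key from a stored URL before calling deleteObject(); the virtual-hosted URL format is an assumption, so adjust it to however the URLs were saved in SimpleDB:
// Assumes URLs like https://bucket.s3.amazonaws.com/path/to/file.ext were stored in
// SimpleDB; deleteObject() wants only the key, i.e. "path/to/file.ext".
$objectKey = ltrim(parse_url($url, PHP_URL_PATH), '/');

if ($s3->deleteObject($bucketName, $objectKey)) {
    echo "deleted " . $objectKey;
} else {
    echo "cannot delete " . $objectKey;
}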

How to upload files directly to S3 using PHP and with progress bar

There are some similar questions, but none have a good answer on how to upload files directly to S3 using PHP with a progress bar. Is it even possible to add a progress bar without using Flash?
NOTE: I am referring to uploading from the client's browser directly to S3.
I've done this in our project. You can't upload directly to S3 using AJAX because of standard cross domain security policies; instead, you need to use either a regular form POST or Flash. You'll need to send the security policy and signature in a relatively complex process, as explained in the S3 docs.
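A minimal sketch of the server-side part of that form POST approach (signature v2 style); the bucket, credentials, key prefix, size limit and expiry below are placeholder assumptions:
<?php
// Hypothetical values: replace with your own bucket, credentials and limits.
$bucket    = 'my-bucket';
$accessKey = 'AKIA...';
$secretKey = 'SECRET...';

// Policy: what the browser is allowed to POST, and until when (1 hour here).
$policy = base64_encode(json_encode(array(
    'expiration' => gmdate('Y-m-d\TH:i:s\Z', time() + 3600),
    'conditions' => array(
        array('bucket' => $bucket),
        array('starts-with', '$key', 'uploads/'),
        array('acl' => 'private'),
        array('content-length-range', 0, 10485760), // up to 10 MB
    ),
)));

// Signature: HMAC-SHA1 of the base64-encoded policy, signed with the secret key.
$signature = base64_encode(hash_hmac('sha1', $policy, $secretKey, true));
?>
<!-- The browser posts the file straight to S3; PHP only signs the policy. -->
<form action="https://<?php echo $bucket; ?>.s3.amazonaws.com/" method="post" enctype="multipart/form-data">
    <input type="hidden" name="key" value="uploads/${filename}">
    <input type="hidden" name="AWSAccessKeyId" value="<?php echo $accessKey; ?>">
    <input type="hidden" name="acl" value="private">
    <input type="hidden" name="policy" value="<?php echo $policy; ?>">
    <input type="hidden" name="signature" value="<?php echo $signature; ?>">
    <input type="file" name="file">
    <input type="submit" value="Upload to S3">
</form>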
YES, it is possible to do this in PHP SDK v3.
$client = new S3Client(/* config */);

$result = $client->putObject([
    'Bucket' => 'bucket-name',
    'Key' => 'bucket-name/file.ext',
    'SourceFile' => 'local-file.ext',
    'ContentType' => 'application/pdf',
    '@http' => [
        'progress' => function ($downloadTotalSize, $downloadSizeSoFar, $uploadTotalSize, $uploadSizeSoFar) {
            // handle your progress bar percentage
            printf(
                "%s of %s downloaded, %s of %s uploaded.\n",
                $downloadSizeSoFar,
                $downloadTotalSize,
                $uploadSizeSoFar,
                $uploadTotalSize
            );
        }
    ]
]);
This is explained in the AWS docs (S3 configuration section). It works by exposing Guzzle's progress callback option, as explained in this SO answer.
Technically speaking, with PHP you cannot go from client --> S3. Your solution, if you want to use PHP would either have to be designed as follows:
Client -> Web Server (PHP) -> Amazon S3
Client with PHP server embedded -> Amazon S3
The AWS SDK for PHP (http://aws.amazon.com/sdkforphp/) is very well written and contains a specific example of how to send a file from a Client --> Server --> S3.
With respect to the progress bar, there are many options available. A quick search of stackoverflow.com shows questions nearly identical to this one that have been answered:
Upload File Directly to S3 with Progress Bar
'pass through' php upload to amazon's s3?
It is possible to upload directly, but a progress bar is impossible:
http://undesigned.org.za/2007/10/22/amazon-s3-php-class/
See example_form in the downloads for direct upload from the browser to S3.
