My S3 bucket contains .gz objects with JSON inside. I simply want to access this JSON without downloading the objects to a file.
$iterator = $client->getIterator('ListObjects', array(
    'Bucket' => $bucket
));

foreach ($iterator as $object) {
    $object = $object['Key'];
    $result = $client->getObject(array(
        'Bucket' => $bucket,
        'Key' => $object
    ));
    echo $result['Body'] . "\n";
}
When I run the above in the shell, it outputs gibberish on the echo line. What's the correct way to retrieve the contents of a .gz object and save it to a variable?
Thank you
You can use the stream wrapper, like this:
$client->registerStreamWrapper();

if ($stream = fopen('s3://bucket/key.gz', 'r')) {
    $compressed = '';
    // While the stream is still open, read 1024 bytes at a time
    while (!feof($stream)) {
        $compressed .= fread($stream, 1024);
    }
    // Be sure to close the stream resource when you're done with it
    fclose($stream);

    // Decode the whole body at once: gzip data can't reliably be
    // decoded in arbitrary 1024-byte chunks
    echo zlib_decode($compressed);
}
If you are sending it to a browser, you don't need to zlib_decode it; just set a header:
header('Content-Encoding: gzip');
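Alternatively, here is a minimal sketch (assuming the same registered s3:// wrapper, and PHP 7+ for the window option on the built-in zlib.inflate stream filter) that decompresses on the fly instead of buffering the whole object:
if ($stream = fopen('s3://bucket/key.gz', 'r')) {
    // window => 15 + 32 lets zlib auto-detect the gzip/zlib header
    stream_filter_append($stream, 'zlib.inflate', STREAM_FILTER_READ, array('window' => 15 + 32));
    while (!feof($stream)) {
        echo fread($stream, 1024); // already-decompressed JSON
    }
    fclose($stream);
}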
I want to create a PHP function, triggered from JS, that can:
Retrieve all AWS S3 bucket files from a specific folder (I can also provide the path of each file)
Create a zip containing all the S3 files
Download the zip when the trigger is hit (CTA)
I'm able to download a single file with the getObject method from this example. However, I can't find any information on how to download multiple files and zip them.
I tried the downloadBucket method; however, it downloads all the files into my project directory structure, not as a zip file.
Here is my code:
<?php
// AWS Info + Connection
$IAM_KEY = 'XXXXX';
$IAM_SECRET = 'XXXXX';
$region = 'XXXXX';

require '../../vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;

$FolderToDownload = "FolderToDownload";

// Connection OK
$s3 = S3Client::factory(
    array(
        'credentials' => array(
            'key' => $IAM_KEY,
            'secret' => $IAM_SECRET
        ),
        'version' => 'latest',
        'region' => $region
    )
);

try {
    $bucketName = 'XXXXX';
    $destination = 'NeedAZipFileNotAFolderPath';
    $options = array('debug' => true);
    // ADDs all files into the folder: NeedAZipFileNotAFolderPath/Files1.png...Files12.png...
    $s3->downloadBucket($destination, $bucketName, $FolderToDownload, $options);
    // Looking for a zip file that can be downloaded
    // Can I use downloadBucket? Is there a better way to do it?
    // If needed I can create an array of all files (paths) that need to be added to the zip file & dl
} catch (S3Exception $e) {
    echo $e->getMessage() . PHP_EOL;
}
?>
If someone can help, that would be nice.
Thanks
You can use ZipArchive and list the objects under a prefix (the bucket's "folder"). In this case, I also use registerStreamWrapper to read each key's contents and add them to the created zip file.
Something like this:
<?php
require '/path/to/sdk-or-autoloader';

$s3 = Aws\S3\S3Client::factory(/* sdk config */);
$s3->registerStreamWrapper();

$zip = new ZipArchive;
$zip->open('/path/to/zip-you-are-creating.zip', ZipArchive::CREATE);

$bucket = 'your-bucket';
$prefix = 'your-prefix-folder'; // e.g.: 'image/test/folder/'

$objects = $s3->getIterator('ListObjects', array(
    'Bucket' => $bucket,
    'Prefix' => $prefix
));

foreach ($objects as $object) {
    $contents = file_get_contents("s3://{$bucket}/{$object['Key']}"); // get file
    $zip->addFromString($object['Key'], $contents); // add file contents to zip
}

$zip->close();

// Download the zip file (Content-Disposition should carry the bare file name, not a path)
header("Content-Description: File Transfer");
header("Content-Type: application/octet-stream");
header("Content-Disposition: attachment; filename=\"zip-you-are-creating.zip\"");
readfile('/path/to/zip-you-are-creating.zip');
?>
You can also delete the zip file after the download, if you prefer.
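For example, right after the readfile() call:
unlink('/path/to/zip-you-are-creating.zip');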
This should work in most cases, but I don't want to save the file on the server, and I don't see how I can download multiple objects directly from the AWS S3 bucket through the browser without saving a file on my server. If anyone knows, please share with us.
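One way to avoid the temporary file entirely, sketched here assuming the third-party maennchen/zipstream-php package (v2 API), is to stream the zip to the browser while reading each object through the s3:// wrapper:
require 'vendor/autoload.php';

$options = new ZipStream\Option\Archive();
$options->setSendHttpHeaders(true); // let the library emit the download headers

$zip = new ZipStream\ZipStream('files.zip', $options);

foreach ($objects as $object) {
    // Stream each S3 object into the zip without buffering it on disk
    $readStream = fopen("s3://{$bucket}/{$object['Key']}", 'r');
    $zip->addFileFromStream($object['Key'], $readStream);
    fclose($readStream);
}

$zip->finish(); // writes the central directory and ends the response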
I have this mostly working but having a tough time finalizing it.
For now I have a simple route:
Route::get('file/{id}/', 'FileController@fileStream')->name('file');
This route connects to an action in the FileController:
public function fileStream($id)
{
    $audio = \App\Audio::where('id', $id)->first();

    $client = S3Client::factory([
        'credentials' => [
            'key' => env('AWS_ACCESS_KEY_ID'),
            'secret' => env('AWS_SECRET_ACCESS_KEY'),
        ],
        'region' => env('S3REGION'),
        'version' => 'latest',
    ]);

    // Register the stream wrapper from an S3Client object
    $client->registerStreamWrapper();

    if ($stream = fopen('s3://[bucket_name]/' . $audio->audio_url, 'r')) {
        while (!feof($stream)) {
            echo fread($stream, 1024);
        }
        fclose($stream);
    }
}
This works in the browser: if I go to the URL /file/1 it looks up the right file and streams it back in a clean browser window.
And then in my view I am trying to output the audio like:
<audio>
    <source src="{{ url('file', ['id' => $section->id]) }}" type="{{ $section->audio_mime_type }}">
</audio>
But no player is getting output to the screen.
TIA
You should use a Laravel streamed response:
return response()->streamDownload(function () use ($audio) {
    if ($stream = fopen('s3://[bucket_name]/' . $audio->audio_url, 'r')) {
        while (!feof($stream)) {
            echo fread($stream, 1024);
            flush();
        }
        fclose($stream);
    }
}, 'file-name.ext');
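Since the goal here is inline playback in an <audio> tag rather than a forced download, a variation worth sketching (same placeholder bucket; assumes the mime type is stored on the model) is response()->stream(), which lets you set the Content-Type yourself:
return response()->stream(function () use ($audio) {
    if ($stream = fopen('s3://[bucket_name]/' . $audio->audio_url, 'r')) {
        fpassthru($stream); // copy the stream straight to the output buffer
        fclose($stream);
    }
}, 200, [
    'Content-Type' => $audio->audio_mime_type, // assumption: mime type stored on the model
]);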
// Get the URL from S3 using
$fileUrl = \Storage::disk('s3')->url($filePath);
$fileName = 'name_of_file.extension';

// Set headers
header('Content-Description: File Transfer');
header('Content-Disposition: attachment; filename=' . $fileName);

if (!($stream = fopen($fileUrl, 'r'))) {
    throw new \Exception('Could not open stream for reading file: [' . $fileName . ']');
}

// Check if the stream has more data to read
while (!feof($stream)) {
    // Read 1024 bytes from the stream
    echo fread($stream, 1024);
}

// Be sure to close the stream resource when you're done with it
fclose($stream);
Use the Laravel/Symfony Response classes. Echoing the response may not set the right headers.
Even if the headers are set up correctly, you are relying on the echo in the controller action, so you should call exit(0); at the end of the controller. Bear in mind that this is rather ugly and it kills the script; you should always aim to use the Response classes mentioned above.
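A minimal sketch of that Response-class approach, reusing the stream wrapper and placeholders from the answers above:
use Symfony\Component\HttpFoundation\StreamedResponse;

return new StreamedResponse(function () use ($audio) {
    if ($stream = fopen('s3://[bucket_name]/' . $audio->audio_url, 'r')) {
        fpassthru($stream);
        fclose($stream);
    }
}, 200, [
    'Content-Type' => $audio->audio_mime_type, // assumption: mime type stored on the model
]);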
How can I download a file from S3 in PHP? The following code is not working for me...
Here is my code.
The upload works correctly:
try {
    $s3->putObject([
        'Bucket' => $config['s3']['bucket'], //'eliuserfiles',
        'Key' => "uploads/{$name}",
        'Body' => fopen($tmp_name, 'r+'),
        //'SourceFile' => $pathToFile,
        'ACL' => 'public-read',
    ]);
The download gives me an error:
$objInfo = $s3->get_object_headers($config['s3']['bucket'], "uploads/{$name}");
$obj = $s3->get_object($config['s3']['bucket'], "uploads/{$name}");
header('Content-type: ' . $objInfo->header['_info']['content_type']);
echo $obj->body;
The error:
PHP Catchable fatal error: Argument 2 passed to Aws\\AwsClient::getCommand() must be of the type array, string given, called in /php_workspace/s3_php/vendor/aws/aws-sdk-php/src/AwsClient.php on line 167 and defined in /php_workspace/s3_php/vendor/aws/aws-sdk-php/src/AwsClient.php on line 211, referer: http://localhost/upload.php
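For context, get_object() and get_object_headers() are v1 SDK methods; calling them on a v2/v3 client falls through to the SDK's magic getCommand() handler, which is why the error complains about a string where an array is expected. A sketch of the modern array-style equivalent, using the same variables as the question:
$result = $s3->getObject([
    'Bucket' => $config['s3']['bucket'],
    'Key' => "uploads/{$name}",
]);
header('Content-Type: ' . $result['ContentType']);
echo $result['Body'];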
Simple way to do this:
Include the Amazon S3 PHP Class in your project.
Instantiate the class:
1. OO method (e.g. $s3->getObject(...)):
$s3 = new S3($awsAccessKey, $awsSecretKey);
2. Statically (e.g. S3::getObject(...)):
S3::setAuth($awsAccessKey, $awsSecretKey);
Then get objects:
// Return an entire object buffer:
$object = S3::getObject($bucket, $uri);
var_dump($object);
Usually, the most efficient way to do this is to save the object to a file or resource:
<?php
// To save it to a file (unbuffered write stream):
if (($object = S3::getObject($bucket, $uri, "/tmp/savefile.txt")) !== false) {
    print_r($object);
}

// To write it to a resource (unbuffered write stream):
$fp = fopen("/tmp/savefile.txt", "wb");
if (($object = S3::getObject($bucket, $uri, $fp)) !== false) {
    print_r($object);
}
?>
S3 Class - With Examples
S3 Class - Documentation
You can try this:
$bucket= "bucket-name";
$filetodownload = "name-of-the-file";
$resultbool = $s3->doesObjectExist ($bucket, $filetodownload );
if ($resultbool) {
$result = $client->getObject ( [
'Bucket' => $bucket,
'Key' => $filetodownload
] );
}
else
{
echo "file not found";die;
}
header ( "Content-Type: {$result['ContentType']}" );
header ( "Content-Disposition: attachment; filename=" . $filetodownload );
header ('Pragma: public');
echo $result ['Body'];
die ();
I am using Flysystem with an Iron.io queue, and I am attempting to run a DB query that will return ~1.8 million records, processing 5000 at a time. Here is the error message I am receiving once file sizes reach 50+ MB:
PHP Fatal error: Allowed memory size of ########## bytes exhausted
Here are the steps I would like to take:
1) Get the data
2) Turn it into a CSV-appropriate string (i.e. implode(',', $dataArray) . "\r\n")
3) Get the file from the server (in this case S3)
4) Read that file's contents, append this new string to it, and re-write that content to the S3 file
Here is a brief rundown of the code I have:
public function fire($job, $data)
{
    // First set the headers and write the initial file to the server
    $this->filesystem->write($this->filename, implode(',', $this->setHeaders($parameters)) . "\r\n", [
        'visibility' => 'public',
        'mimetype' => 'text/csv',
    ]);

    // Loop to get new sets of data
    $offset = 0;
    while ($this->exportResult) {
        $this->exportResult = $this->getData($parameters, $offset);
        if ($this->exportResult) {
            $this->writeToFile($this->exportResult);
            $offset += 5000;
        }
    }
}

private function writeToFile($contentToBeAdded = '')
{
    $content = $this->filesystem->read($this->filename);

    // Append new data
    $content .= $contentToBeAdded;

    $this->filesystem->update($this->filename, $content, [
        'visibility' => 'public'
    ]);
}
I'm assuming this is NOT the most efficient approach? I am going off of these docs:
PHPLeague Flysystem
If anyone can point me in a more appropriate direction, that would be awesome!
Flysystem supports read/write/update streams.
Please check the latest API: https://flysystem.thephpleague.com/api/
$stream = fopen('/path/to/database.backup', 'r+');
$filesystem->writeStream('backups/' . strftime('%G-%m-%d') . '.backup', $stream);

// Using write you can also directly set the visibility
$filesystem->writeStream('backups/' . strftime('%G-%m-%d') . '.backup', $stream, [
    'visibility' => AdapterInterface::VISIBILITY_PRIVATE
]);

if (is_resource($stream)) {
    fclose($stream);
}

// Or update a file with stream contents
$filesystem->updateStream('backups/' . strftime('%G-%m-%d') . '.backup', $stream);

// Retrieve a read-stream
$stream = $filesystem->readStream('something/is/here.ext');
$contents = stream_get_contents($stream);
fclose($stream);

// Create or overwrite using a stream.
$putStream = tmpfile();
fwrite($putStream, $contents);
rewind($putStream);
$filesystem->putStream('somewhere/here.txt', $putStream);

if (is_resource($putStream)) {
    fclose($putStream);
}
If you are working with S3, I would use the AWS SDK for PHP directly to solve this particular problem. Appending to a file is actually very easy using the SDK's S3 stream wrapper, and it doesn't force you to read the entire file into memory.
$s3 = \Aws\S3\S3Client::factory($clientConfig);
$s3->registerStreamWrapper();
$appendHandle = fopen("s3://{$bucket}/{$key}", 'a');
fwrite($appendHandle, $data);
fclose($appendHandle);
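One caveat, as I understand the SDK's stream wrapper docs: opening an s3:// path in 'a' mode downloads the existing object into a temporary stream first and re-uploads the whole thing on fclose(), so each append still costs a full read and write of the object. For very large files, writing separate chunk objects may scale better.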
I'm a little unsure as to how to launch a download of a file from Amazon S3 with Laravel 4. I'm using the AWS SDK for PHP.
$result = $s3->getObject(array(
    'Bucket' => $bucket,
    'Key' => 'data.txt',
));

// temp file
$file = tempnam('../uploads', 'download_');
file_put_contents($file, $result['Body']);

$response = Response::download($file, 'test-file.txt');
//unlink($file);
return $response;
The above works, but I'm stuck saving the file locally first. How can I use the result from S3 directly with Response::download()?
Thanks!
EDIT: I've found I can use $s3->getObjectUrl($bucket, $file, $expiration) to generate an access URL. This could work, but it still doesn't solve the problem above completely.
EDIT2:
$result = $s3->getObject(array(
    'Bucket' => $bucket,
    'Key' => 'data.txt',
));

header('Content-Type: ' . $result['ContentType']);
header('Content-Disposition: attachment; filename="' . $fileName . '"');
header('Content-Length: ' . $result['ContentLength']);
echo $result['Body'];
I still don't think this is ideal, though?
The S3Client::getObject() method allows you to specify headers that S3 should use when it sends the response. The getObjectUrl() method uses the GetObject operation to generate the URL, and can accept any valid GetObject parameters in its last argument. You should be able to do a direct S3-to-user download with your desired headers using a pre-signed URL by doing something like this:
$downloadUrl = $s3->getObjectUrl($bucket, 'data.txt', '+5 minutes', array(
    'ResponseContentDisposition' => 'attachment; filename="' . $fileName . '"',
));
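For reference, getObjectUrl() with extra parameters is a v2 SDK feature; on the v3 SDK the equivalent (a sketch, same bucket and key as above) goes through createPresignedRequest():
$cmd = $s3->getCommand('GetObject', [
    'Bucket' => $bucket,
    'Key' => 'data.txt',
    'ResponseContentDisposition' => 'attachment; filename="' . $fileName . '"',
]);
$downloadUrl = (string) $s3->createPresignedRequest($cmd, '+5 minutes')->getUri();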
If you want to stream an S3 object from your own server, then you should check out the article Streaming Amazon S3 Objects From a Web Server in the AWS Developer Guide.
This question is not answered fully. Initially, it asked how to save a file from S3 locally on the server itself, to make use of it there.
So: you can use the SaveAs option with the getObject method. You can also specify a version ID if you are using versioning on your bucket and want to make use of it.
$result = $this->client->getObject(array(
    'Bucket' => $bucket_name,
    'Key' => $file_name,
    'SaveAs' => $to_file,
    'VersionId' => $version_id,
));
That answer is somewhat outdated with the new SDK. The following works with the v3 SDK.
$client->registerStreamWrapper();

$result = $client->headObject([
    'Bucket' => $bucket,
    'Key' => $key
]);

$headers = $result->toArray();
header('Content-Type: ' . $headers['ContentType']);
header('Content-Disposition: attachment');
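// Assumption (not in the original answer): forwarding the size from the same
// headObject result lets clients show download progress
header('Content-Length: ' . $headers['ContentLength']);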

// Stop output buffering
if (ob_get_level()) {
    ob_end_flush();
}
flush();

// Stream the output
readfile("s3://{$bucket}/{$key}");