How to stream a large file from S3 to a Laravel view - PHP

I have this mostly working, but I'm having a tough time finalizing it.
For now I have a simple route:
Route::get('file/{id}/', 'FileController@fileStream')->name('file');
This route connects to an action in the FileController:
public function fileStream($id) {
    $audio = \App\Audio::where('id', $id)->first();

    $client = S3Client::factory([
        'credentials' => [
            'key'    => env('AWS_ACCESS_KEY_ID'),
            'secret' => env('AWS_SECRET_ACCESS_KEY'),
        ],
        'region'  => env('S3REGION'),
        'version' => 'latest',
    ]);

    // Register the stream wrapper from an S3Client object
    $client->registerStreamWrapper();

    if ($stream = fopen('s3://[bucket_name]/' . $audio->audio_url, 'r')) {
        while (!feof($stream)) {
            echo fread($stream, 1024);
        }
        fclose($stream);
    }
}
This works in the browser: if I go to a URL like /file/1 it looks up the right file, and in a clean browser window I get the audio streamed back.
And then in my view I am trying to output the audio like:
<audio>
    <source src="{{ url('file', ['id' => $section->id]) }}" type="{{ $section->audio_mime_type }}">
</audio>
But no player is getting output to the screen.
TIA

You should use a Laravel streamed response:
return response()->streamDownload(function () use ($audio) {
    if ($stream = fopen('s3://[bucket_name]/' . $audio->audio_url, 'r')) {
        while (!feof($stream)) {
            echo fread($stream, 1024);
            flush();
        }
        fclose($stream);
    }
}, 'file-name.ext');
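Note that streamDownload() sends a Content-Disposition: attachment header, so the browser downloads the file rather than handing it to the <audio> element. For inline playback, a streamed response with an explicit Content-Type is likely a better fit; a minimal sketch, assuming the Audio model also carries the MIME type (audio_mime_type, as the view suggests):

return response()->stream(function () use ($audio) {
    if ($stream = fopen('s3://[bucket_name]/' . $audio->audio_url, 'r')) {
        while (!feof($stream)) {
            echo fread($stream, 1024);
            flush();
        }
        fclose($stream);
    }
}, 200, [
    'Content-Type' => $audio->audio_mime_type,
]);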

// Get the URL from S3
$fileUrl  = \Storage::disk('s3')->url($filePath);
$fileName = 'name_of_file.extension';

// Set headers
header('Content-Description: File Transfer');
header('Content-Disposition: attachment; filename=' . $fileName);

if (!($stream = fopen($fileUrl, 'r'))) {
    throw new \Exception('Could not open stream for reading file: [' . $fileName . ']');
}

// Check if the stream has more data to read
while (!feof($stream)) {
    // Read 1024 bytes from the stream
    echo fread($stream, 1024);
}

// Be sure to close the stream resource when you're done with it
fclose($stream);
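If the s3 disk is already configured in config/filesystems.php, the same thing can be done with less plumbing through the Storage facade. A sketch, assuming $filePath and $fileName are defined as above:

// Get a read stream straight from the s3 disk
$stream = \Storage::disk('s3')->readStream($filePath);

header('Content-Description: File Transfer');
header('Content-Disposition: attachment; filename=' . $fileName);

// Pipe the remaining stream contents directly to the output buffer
fpassthru($stream);
fclose($stream);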

Use the Laravel/Symfony Response class. Echoing from the controller may not set the right headers.
Even if the headers are set up correctly, you are relying on echo in the controller action, so you would need to call exit(0); at the end of the controller. Bear in mind that this is rather ugly and it kills the script; you should always aim to use the Response classes mentioned above.
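For example, a small fully-buffered variant that lets the Response object send the headers instead of echoing and exiting (a sketch, assuming the file fits comfortably in memory and $filePath, $fileName, and $mimeType are defined by you):

$contents = \Storage::disk('s3')->get($filePath);

return response($contents, 200, [
    'Content-Type'        => $mimeType,
    'Content-Disposition' => 'inline; filename="' . $fileName . '"',
]);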

Related

PHP: How to do partial content downloads from data that is not a file, e.g. data stored in a variable pulled from an AWS S3 bucket

In the code below, in Safari (where it does partial downloading), I keep getting the error:
Failed to load resource: Plug-in handled load
I get inside the if (isset($_SERVER['HTTP_RANGE'])) branch; error_log verifies this.
I checked in Google Chrome to make sure the resource was actually being grabbed, and it is: Chrome downloads the whole file inside the else branch of that if statement.
What do I need to do to correct the headers? The stream is always read in chunks of 1024 bytes or less because of the ->read(1024) call.
$s3 = S3Client::factory([
    'version'     => 'latest',
    'region'      => 'A Region',
    'credentials' => [
        'key'    => "A KEY",
        'secret' => "A SECRET",
    ]
]);
//echo "Step 4\n\n";

$bucket = "bucket";
$image_path = $_GET['video_path'];

try {
    $result = $s3->getObject([
        'Bucket' => $bucket,
        'Key'    => $image_path
    ]);

    header("Content-Type: {$result['ContentType']}");
    $fileSize = $result['ContentLength'];

    if (isset($_SERVER['HTTP_RANGE'])) { // do it for any device that supports byte-ranges, not only iPhone
        // echo "partial";
        error_log("Error message We are good on safari in HTTP_RANGE\n", 3, "error_log");

        // Read the body off of the underlying stream in chunks
        header('HTTP/1.1 206 Partial Content');
        header('Accept-Ranges: bytes');

        $start = 0;
        //$theLengthOfTheNextData = mb_strlen($data, '8bit');
        header("Content-Range: bytes 0-1023/$fileSize");
        header("Content-Length: 1024");

        while ($data = $result['Body']->read(1024)) {
            //$theLengthOfTheNextData = mb_strlen($data, '8bit');
            //it starts at zero
            //$end = $start + $theLengthOfTheNextData - 1;
            //echo "Length: $theLengthOfTheNextData\n\n";
            //header("Content-Range: bytes $start-$end/$fileSize");
            //header("Content-Length: $theLengthOfTheNextData");
            //$start = $start + $theLengthOfTheNextData;

            set_time_limit(0);
            echo $data;

            // Flush the buffer immediately
            //@ob_flush();
            flush();
        }
        //header("Content-Length: $fileSize");
        //rangeDownload($result['Body'], $fileSize);
    } else {
        // echo "Just entire thing";
        error_log("Error message ---- NOT good on safari in HTTP_RANGE\n", 4, "error_log");
        header("Content-Length: " . $fileSize);
        echo $result['Body'];
    }
    //echo "Step 7\n\n";
    //no need to cast as string
    // $body = (string) $result->get('Body');
    //below should actually work too
    // $bodyAsString = (string) $result['Body'];
} catch (S3Exception $e) {
    echo $e;
    //echo "Step 6B\n\n";
    //grabbing the file was unsuccessful
    $successfulMove = 0;
}
If anyone is looking for the answer to this: you will need to utilize registerStreamWrapper() from the S3 SDK.
Reference Link
You will need to scroll down to tedchou12's comment. It works!
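For reference, here is a minimal sketch of what that approach can look like: parse the Range header, open the object through the stream wrapper with the seekable context option, and emit a proper 206 response. The $client, $bucket, and $key variables are assumed to be set up as in the question, and Content-Type is assumed to be sent as before.

$client->registerStreamWrapper();

$size  = filesize("s3://{$bucket}/{$key}");
$start = 0;
$end   = $size - 1;

if (isset($_SERVER['HTTP_RANGE']) &&
    preg_match('/bytes=(\d+)-(\d*)/', $_SERVER['HTTP_RANGE'], $matches)) {
    $start = (int) $matches[1];
    if ($matches[2] !== '') {
        $end = (int) $matches[2];
    }
    header('HTTP/1.1 206 Partial Content');
    header("Content-Range: bytes {$start}-{$end}/{$size}");
}

$length = $end - $start + 1;
header('Accept-Ranges: bytes');
header("Content-Length: {$length}");

// The wrapper only honours fseek() when the stream is opened as seekable
$context = stream_context_create(['s3' => ['seekable' => true]]);
$stream  = fopen("s3://{$bucket}/{$key}", 'r', false, $context);
fseek($stream, $start);

$remaining = $length;
while ($remaining > 0 && !feof($stream)) {
    $chunk = fread($stream, min(8192, $remaining));
    echo $chunk;
    $remaining -= strlen($chunk);
    flush();
}
fclose($stream);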

Upload File in chunks to URL Endpoint using Guzzle PHP

I want to upload files in chunks to a URL endpoint using Guzzle.
I should be able to provide the Content-Range and Content-Length headers.
Using PHP, I know I can split the file using:
define('CHUNK_SIZE', 1024 * 1024); // Size (in bytes) of chunk

function readfile_chunked($filename, $retbytes = TRUE) {
    $buffer = '';
    $cnt = 0;
    $handle = fopen($filename, 'rb');
    if ($handle === false) {
        return false;
    }
    while (!feof($handle)) {
        $buffer = fread($handle, CHUNK_SIZE);
        echo $buffer;
        ob_flush();
        flush();
        if ($retbytes) {
            $cnt += strlen($buffer);
        }
    }
    $status = fclose($handle);
    if ($retbytes && $status) {
        return $cnt; // return num. bytes delivered like readfile() does.
    }
    return $status;
}
How do I achieve sending the files in chunks using Guzzle, if possible using Guzzle streams?
This method allows you to transfer large files using Guzzle streams:
use GuzzleHttp\Psr7;
use GuzzleHttp\Client;
use GuzzleHttp\Psr7\Request;

$resource = fopen($pathname, 'r');
$stream = Psr7\stream_for($resource);

$client = new Client();

$request = new Request(
    'POST',
    $api,
    [],
    new Psr7\MultipartStream([
        [
            'name'     => 'bigfile',
            'contents' => $stream,
        ],
    ])
);

$response = $client->send($request);
Just use the multipart body type as described in the documentation. cURL then handles the file reading internally, so you don't need to implement the chunked read yourself. All the required headers will also be configured by Guzzle.
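If the endpoint really does require explicit Content-Range / Content-Length headers per chunk (as the question suggests), each chunk has to be sent as its own request. A rough sketch, assuming $api accepts resumable chunk uploads and $pathname is the local file:

use GuzzleHttp\Client;

$client    = new Client();
$chunkSize = 1024 * 1024; // 1 MB per request
$fileSize  = filesize($pathname);
$handle    = fopen($pathname, 'rb');
$offset    = 0;

while (!feof($handle)) {
    $chunk = fread($handle, $chunkSize);
    $bytes = strlen($chunk);
    $end   = $offset + $bytes - 1;

    $client->request('POST', $api, [
        'headers' => [
            'Content-Range'  => "bytes {$offset}-{$end}/{$fileSize}",
            'Content-Length' => (string) $bytes,
        ],
        'body' => $chunk,
    ]);

    $offset += $bytes;
}
fclose($handle);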

AWS S3: how to read a .gz object without downloading it (PHP)

My S3 contains .gz objects that contain JSON within. I simply want to access this JSON without actually downloading objects to a file.
$iterator = $client->getIterator('ListObjects', array(
    'Bucket' => $bucket
));

foreach ($iterator as $object) {
    $object = $object['Key'];
    $result = $client->getObject(array(
        'Bucket' => $bucket,
        'Key'    => $object
    ));
    echo $result['Body'] . "\n";
}
When I run the above in the shell it outputs gibberish on the echo line. What's the correct way to simply retrieve the contents of the .gz object and save to a variable?
Thank you
You can use the stream wrapper like this:
$client->registerStreamWrapper();

if ($stream = fopen('s3://bucket/key.gz', 'r')) {
    // While the stream is still open
    while (!feof($stream)) {
        // Read 1024 bytes from the stream
        $d = fread($stream, 1024);
        echo zlib_decode($d);
    }
    // Be sure to close the stream resource when you're done with it
    fclose($stream);
}
If you are sending it to a browser, you don't need to zlib_decode() it; just set a header:
header('Content-Encoding: gzip');
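If the goal really is just to get the JSON into a variable (as asked), note that decoding arbitrary 1024-byte slices with zlib_decode() may fail once the object spans multiple reads; decompressing the whole body at once is simpler. A sketch, assuming the object fits in memory:

$result = $client->getObject(array(
    'Bucket' => $bucket,
    'Key'    => $object
));

// Cast the body to a string, decompress, then decode the JSON
$json = gzdecode((string) $result['Body']);
$data = json_decode($json, true);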

How can I download a file from S3 in PHP?

How can I download a file from S3 in PHP? The following code is not working for me...
Here is my code.
The file upload works correctly:
try {
    $s3->putObject([
        'Bucket' => $config['s3']['bucket'], //'eliuserfiles',
        'Key'    => "uploads/{$name}",
        'Body'   => fopen($tmp_name, 'r+'),
        //'SourceFile' => $pathToFile,
        'ACL'    => 'public-read',
    ]);
The download gives me an error:
$objInfo = $s3->get_object_headers($config['s3']['bucket'], "uploads/{$name}");
$obj = $s3->get_object($config['s3']['bucket'], "uploads/{$name}");
header('Content-type: ' . $objInfo->header['_info']['content_type']);
echo $obj->body;
error
PHP Catchable fatal error: Argument 2 passed to Aws\\AwsClient::getCommand() must be of the type array, string given, called in /php_workspace/s3_php/vendor/aws/aws-sdk-php/src/AwsClient.php on line 167 and defined in /php_workspace/s3_php/vendor/aws/aws-sdk-php/src/AwsClient.php on line 211, referer: http://localhost/upload.php
A simple way to do this:
Include the Amazon S3 PHP Class in your project.
Instantiate the class:
1. OO method (e.g., $s3->getObject(...)):
$s3 = new S3($awsAccessKey, $awsSecretKey);
2. Statically (e.g., S3::getObject(...)):
S3::setAuth($awsAccessKey, $awsSecretKey);
Then get objects:
// Return an entire object buffer:
$object = S3::getObject($bucket, $uri);
var_dump($object);
Usually, the most efficient way to do this is to save the object to a file or resource:
<?php
// To save it to a file (unbuffered write stream):
if (($object = S3::getObject($bucket, $uri, "/tmp/savefile.txt")) !== false) {
    print_r($object);
}

// To write it to a resource (unbuffered write stream):
$fp = fopen("/tmp/savefile.txt", "wb");
if (($object = S3::getObject($bucket, $uri, $fp)) !== false) {
    print_r($object);
}
?>
S3 Class -With Examples
S3 Class -Documentation
You can try this:
$bucket= "bucket-name";
$filetodownload = "name-of-the-file";
$resultbool = $s3->doesObjectExist ($bucket, $filetodownload );
if ($resultbool) {
$result = $client->getObject ( [
'Bucket' => $bucket,
'Key' => $filetodownload
] );
}
else
{
echo "file not found";die;
}
header ( "Content-Type: {$result['ContentType']}" );
header ( "Content-Disposition: attachment; filename=" . $filetodownload );
header ('Pragma: public');
echo $result ['Body'];
die ();
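The fatal error in the question appears to come from calling SDK v1-style methods (get_object, get_object_headers) on an SDK v3 client, where getObject() takes a single array argument. A minimal sketch of the v3 way, reusing the bucket/key from the upload snippet:

$result = $s3->getObject([
    'Bucket' => $config['s3']['bucket'],
    'Key'    => "uploads/{$name}",
]);

header('Content-Type: ' . $result['ContentType']);
header('Content-Disposition: attachment; filename="' . $name . '"');
echo $result['Body'];

// Or skip buffering entirely and save straight to disk
$s3->getObject([
    'Bucket' => $config['s3']['bucket'],
    'Key'    => "uploads/{$name}",
    'SaveAs' => "/tmp/{$name}",
]);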

phpleague flysystem: read and write to a large file on the server

I am using Flysystem with an IRON IO queue and I am attempting to run a DB query that will return ~1.8 million records, processing 5000 at a time. Here is the error message I am receiving once file sizes reach 50+ MB:
PHP Fatal error: Allowed memory size of ########## bytes exhausted
Here are the steps I would like to take:
1) Get the data
2) Turn it into a CSV-appropriate string (i.e. implode(',', $dataArray) . "\r\n")
3) Get the file from the server (in this case S3)
4) Read that file's contents, append this new string to it, and re-write that content to the S3 file
Here is a brief run down of the code I have:
public function fire($job, $data)
{
    // First set the headers and write the initial file to the server
    $this->filesystem->write($this->filename, implode(',', $this->setHeaders($parameters)) . "\r\n", [
        'visibility' => 'public',
        'mimetype'   => 'text/csv',
    ]);

    // Loop to get new sets of data
    $offset = 0;
    while ($this->exportResult) {
        $this->exportResult = $this->getData($parameters, $offset);
        if ($this->exportResult) {
            $this->writeToFile($this->exportResult);
            $offset += 5000;
        }
    }
}

private function writeToFile($contentToBeAdded = '')
{
    $content = $this->filesystem->read($this->filename);

    // Append new data
    $content .= $contentToBeAdded;

    $this->filesystem->update($this->filename, $content, [
        'visibility' => 'public'
    ]);
}
I'm assuming this is NOT the most efficient approach. I am going off of these docs:
PHPLeague Flysystem
If anyone can point me in a more appropriate direction, that would be awesome!
Flysystem supports reading, writing, and updating files via streams.
Please check the latest API: https://flysystem.thephpleague.com/api/
$stream = fopen('/path/to/database.backup', 'r+');

$filesystem->writeStream('backups/'.strftime('%G-%m-%d').'.backup', $stream);

// Using write you can also directly set the visibility
$filesystem->writeStream('backups/'.strftime('%G-%m-%d').'.backup', $stream, [
    'visibility' => AdapterInterface::VISIBILITY_PRIVATE
]);

if (is_resource($stream)) {
    fclose($stream);
}

// Or update a file with stream contents
$filesystem->updateStream('backups/'.strftime('%G-%m-%d').'.backup', $stream);

// Retrieve a read-stream
$stream = $filesystem->readStream('something/is/here.ext');
$contents = stream_get_contents($stream);
fclose($stream);

// Create or overwrite using a stream
$putStream = tmpfile();
fwrite($putStream, $contents);
rewind($putStream);
$filesystem->putStream('somewhere/here.txt', $putStream);

if (is_resource($putStream)) {
    fclose($putStream);
}
If you are working with S3, I would use the AWS SDK for PHP directly to solve this particular problem. Appending to a file is actually very easy using the SDK's S3 stream wrapper, and it doesn't force you to read the entire file into memory.
$s3 = \Aws\S3\S3Client::factory($clientConfig);
$s3->registerStreamWrapper();
$appendHandle = fopen("s3://{$bucket}/{$key}", 'a');
fwrite($appendHandle, $data);
fclose($appendHandle);
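Building on that append handle, the CSV rows from each 5,000-record batch can be written out without ever holding the whole export in memory. A sketch, assuming $rows is the array of records returned by getData():

$appendHandle = fopen("s3://{$bucket}/{$key}", 'a');

foreach ($rows as $row) {
    // fputcsv() handles the delimiters and quoting for each record
    fputcsv($appendHandle, $row);
}

fclose($appendHandle);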
