Upload File in chunks to URL Endpoint using Guzzle PHP

I want to upload files in chunks to a URL endpoint using Guzzle.
I should be able to provide the Content-Range and Content-Length headers.
Using PHP, I know I can read a file in chunks with something like:
define('CHUNK_SIZE', 1024*1024); // Size (in bytes) of chunk

function readfile_chunked($filename, $retbytes = TRUE) {
    $buffer = '';
    $cnt = 0;
    $handle = fopen($filename, 'rb');
    if ($handle === false) {
        return false;
    }
    while (!feof($handle)) {
        $buffer = fread($handle, CHUNK_SIZE);
        echo $buffer;
        ob_flush();
        flush();
        if ($retbytes) {
            $cnt += strlen($buffer);
        }
    }
    $status = fclose($handle);
    if ($retbytes && $status) {
        return $cnt; // return num. bytes delivered like readfile() does.
    }
    return $status;
}
How do I send the file in chunks using Guzzle, ideally using Guzzle streams?

This method allows you to transfer large files using Guzzle streams:
use GuzzleHttp\Psr7;
use GuzzleHttp\Client;
use GuzzleHttp\Psr7\Request;

$resource = fopen($pathname, 'r');
$stream = Psr7\stream_for($resource); // Psr7\Utils::streamFor() in guzzlehttp/psr7 v2+
$client = new Client();

$request = new Request(
    'POST',
    $api,
    [],
    new Psr7\MultipartStream([
        [
            'name' => 'bigfile',
            'contents' => $stream,
        ],
    ])
);

$response = $client->send($request);

Just use the multipart body type, as described in the documentation. cURL then handles the file reading internally, so you don't need to implement chunked reads yourself. All required headers will also be configured by Guzzle.
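If the endpoint genuinely requires explicit Content-Range chunks, as the question asks, something along these lines should work. This is a minimal sketch, not a definitive implementation: the PUT verb and the $endpoint variable are assumptions, since chunked-upload protocols differ between APIs.

use GuzzleHttp\Client;

define('CHUNK_SIZE', 1024 * 1024); // 1 MB per chunk, matching the question

$fileSize = filesize($pathname);
$handle = fopen($pathname, 'rb');
$client = new Client();
$offset = 0;

while (!feof($handle)) {
    $chunk = fread($handle, CHUNK_SIZE);
    $end = $offset + strlen($chunk) - 1;

    // Send one chunk with an explicit Content-Range header;
    // Guzzle derives Content-Length from the body automatically.
    $client->request('PUT', $endpoint, [
        'headers' => [
            'Content-Range' => "bytes {$offset}-{$end}/{$fileSize}",
        ],
        'body' => $chunk,
    ]);

    $offset = $end + 1;
}

fclose($handle);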

Related

How to stream a large file from S3 to a Laravel view

I have this mostly working, but I'm having a tough time finalizing it.
For now I have a simple route:
Route::get('file/{id}/', 'FileController@fileStream')->name('file');
This route connects to an action in the FileController:
public function fileStream($id)
{
    $audio = \App\Audio::where('id', $id)->first();

    $client = S3Client::factory([
        'credentials' => [
            'key' => env('AWS_ACCESS_KEY_ID'),
            'secret' => env('AWS_SECRET_ACCESS_KEY'),
        ],
        'region' => env('S3REGION'),
        'version' => 'latest',
    ]);

    // Register the stream wrapper from an S3Client object
    $client->registerStreamWrapper();

    if ($stream = fopen('s3://[bucket_name]/'. $audio->audio_url, 'r')) {
        while (!feof($stream)) {
            echo fread($stream, 1024);
        }
        fclose($stream);
    }
}
This works as far as the browser is concerned: if I go to the URL /file/1, it looks up the right file and dumps the raw stream into a clean browser window.
And then in my view I am trying to output the audio like:
<audio>
    <source src="{{ url('file', ['id' => $section->id]) }}" type="{{ $section->audio_mime_type }}">
</audio>
But no player is rendered on the screen.
TIA
You should use a Laravel streamed response:
return response()->streamDownload(function () use ($audio) {
    if ($stream = fopen('s3://[bucket_name]/'. $audio->audio_url, 'r')) {
        while (!feof($stream)) {
            echo fread($stream, 1024);
            flush();
        }
        fclose($stream);
    }
}, 'file-name.ext');
// Get the URL from S3
$fileUrl = \Storage::disk('s3')->url($filePath);
$fileName = 'name_of_file.extension';

// Set headers
header('Content-Description: File Transfer');
header('Content-Disposition: attachment; filename='.$fileName);

if (!($stream = fopen($fileUrl, 'r'))) {
    throw new \Exception('Could not open stream for reading file: ['.$fileName.']');
}

// Check if the stream has more data to read
while (!feof($stream)) {
    // Read 1024 bytes from the stream
    echo fread($stream, 1024);
}

// Be sure to close the stream resource when you're done with it
fclose($stream);
Use the Laravel/Symfony Response classes. Echoing the response directly may not set the right headers.
Even if the headers are set up correctly, you are relying on the echo in the controller action, so you would have to call exit(0); at the end of the controller. Bear in mind that this is rather ugly and it kills the script; you should always aim to use the Response classes mentioned above.
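For the inline <audio> player specifically, a response streamed inline (rather than as a download attachment) may fit better. A minimal sketch, reusing the S3 stream wrapper setup from the question; the bucket placeholder and the audio_mime_type attribute are taken from the question's own code:

return response()->stream(function () use ($audio) {
    if ($stream = fopen('s3://[bucket_name]/'.$audio->audio_url, 'r')) {
        while (!feof($stream)) {
            echo fread($stream, 1024);
            flush();
        }
        fclose($stream);
    }
}, 200, [
    'Content-Type' => $audio->audio_mime_type, // e.g. audio/mpeg
]);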

CSV export with StreamedResponse

I have to create a CSV export. I found this awesome service and adapted it to my case. It works very well on my own computer (Windows 7, unmodified WAMP 2.5), but when I put it on the server (Ubuntu 14.04.4 LTS, PHP 5.5.9) it no longer works: it seems to export an infinite stream of data, and it might crash the server. I am working on Symfony 3.1.1. Here is the code of my class:
public function exportCsv($affairesQuery, $em)
{
    $response = new StreamedResponse(function () use ($affairesQuery, $em) {
        $results = $affairesQuery->iterate();
        $handle = fopen('php://output', 'r+');

        // UTF-8 BOM so spreadsheet software detects the encoding
        fputs($handle, $bom = (chr(0xEF).chr(0xBB).chr(0xBF)));

        fputcsv($handle, [
            // My long list of headers
        ], ";");

        while (false !== ($row = $results->next())) {
            fputcsv($handle, $row[0]->toArray(), ";");
            $em->detach($row[0]);
        }

        fclose($handle);
    });

    $response->setStatusCode(200);
    $response->headers->set('Content-Type', 'application/force-download');
    $response->headers->set('Content-Disposition', 'attachment; filename="Planning du '.date("Y-m-d h:i:s").'.csv"');

    return $response;
}
UPDATE:
I tried to make goodby/csv work in my case, as suggested in a comment, but I got exactly the same issue, which is very strange because the approach is quite different.
public function exportCsv($affairesQuery, $em)
{
    $stmt = $this->dbConnection->prepare($affairesQuery->getQuery()->getSQL());

    $n = 1;
    foreach ($affairesQuery->getQuery()->getParameters()->toArray() as $value) {
        $stmt->bindValue($n, $value->getValue());
        $n++;
    }

    $response = new StreamedResponse();
    $response->setStatusCode(200);
    $response->headers->set('Content-Type', 'text/csv');
    $response->headers->set('Content-Disposition', 'attachment; filename="Planning du '.date("Y-m-d h:i:s").'.csv"');

    $response->setCallback(function () use ($stmt) {
        $config = new ExporterConfig();
        $exporter = new Exporter($config);
        $exporter->export('php://output', new PdoCollection($stmt->getIterator()));
    });

    $response->sendContent();

    return $response;
}
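For reference, a minimal StreamedResponse CSV export looks like this; a sketch only, assuming $rows is a plain iterable of arrays rather than the Doctrine iterator used above:

use Symfony\Component\HttpFoundation\StreamedResponse;

$response = new StreamedResponse(function () use ($rows) {
    $handle = fopen('php://output', 'w');
    foreach ($rows as $row) {
        fputcsv($handle, $row, ';');
        flush(); // push each row to the client instead of buffering
    }
    fclose($handle);
});

$response->headers->set('Content-Type', 'text/csv; charset=utf-8');
$response->headers->set('Content-Disposition', 'attachment; filename="export.csv"');

return $response;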

phpleague flysystem read and write to large file on server

I am using Flysystem with an Iron.io queue and I am attempting to run a DB query that will return ~1.8 million records, processing 5000 at a time. Here is the error message I am receiving once the file reaches 50+ MB:
PHP Fatal error: Allowed memory size of ########## bytes exhausted
Here are the steps I would like to take:
1) Get the data
2) Turn it into a CSV appropriate string (i.e. implode(',', $dataArray) . "\r\n")
3) Get the file from the server (in this case S3)
4) Read that file's contents, append this new string to it, and re-write that content to the S3 file
Here is a brief run down of the code I have:
public function fire($job, $data)
{
    // First set the headers and write the initial file to the server
    $this->filesystem->write($this->filename, implode(',', $this->setHeaders($parameters))."\r\n", [
        'visibility' => 'public',
        'mimetype' => 'text/csv',
    ]);

    // Loop to get new sets of data
    $offset = 0;
    while ($this->exportResult) {
        $this->exportResult = $this->getData($parameters, $offset);
        if ($this->exportResult) {
            $this->writeToFile($this->exportResult);
            $offset += 5000;
        }
    }
}

private function writeToFile($contentToBeAdded = '')
{
    // Read the whole file back, append the new rows, and rewrite the full contents
    $content = $this->filesystem->read($this->filename);

    // Append new data
    $content .= $contentToBeAdded;

    $this->filesystem->update($this->filename, $content, [
        'visibility' => 'public'
    ]);
}
I'm assuming this is NOT the most efficient approach. I am going off of these docs:
PHPLeague Flysystem
If anyone can point me in a more appropriate direction, that would be awesome!
Flysystem supports reading, writing, and updating files via streams.
Please check the latest API docs: https://flysystem.thephpleague.com/api/
$stream = fopen('/path/to/database.backup', 'r+');
$filesystem->writeStream('backups/'.strftime('%G-%m-%d').'.backup', $stream);

// Using writeStream you can also directly set the visibility
$filesystem->writeStream('backups/'.strftime('%G-%m-%d').'.backup', $stream, [
    'visibility' => AdapterInterface::VISIBILITY_PRIVATE
]);

if (is_resource($stream)) {
    fclose($stream);
}

// Or update a file with stream contents
$filesystem->updateStream('backups/'.strftime('%G-%m-%d').'.backup', $stream);

// Retrieve a read-stream
$stream = $filesystem->readStream('something/is/here.ext');
$contents = stream_get_contents($stream);
fclose($stream);

// Create or overwrite using a stream
$putStream = tmpfile();
fwrite($putStream, $contents);
rewind($putStream);
$filesystem->putStream('somewhere/here.txt', $putStream);

if (is_resource($putStream)) {
    fclose($putStream);
}
If you are working with S3, I would use the AWS SDK for PHP directly to solve this particular problem. Appending to a file is actually very easy using the SDK's S3 streamwrapper, and doesn't force you to read the entire file into memory.
$s3 = \Aws\S3\S3Client::factory($clientConfig);
$s3->registerStreamWrapper();
$appendHandle = fopen("s3://{$bucket}/{$key}", 'a');
fwrite($appendHandle, $data);
fclose($appendHandle);

StreamedResponse not working

I have a lengthy import process, and I am trying to use the new StreamedResponse available in Symfony 2.1 to report feedback about the task to the user, but the response is not being streamed (I get all the content at once at the end of processing). This is the code in my controller:
$em = $this->getDoctrine()->getEntityManager();

$response = new StreamedResponse();
$response->setCallback(function () use ($em) {
    $file = fopen(sys_get_temp_dir().'/categories.txt', 'r');
    $lineNum = 0;

    while ($line = fgets($file)) {
        $category = new Category();
        $fields = explode("\t", $line);
        $category->setFullId($fields[0]);
        $category->setName($fields[2]);
        $category->setFullName($fields[4]);
        $em->persist($category);

        if ($lineNum % 100 == 0) {
            echo 'Processing Line: '.$lineNum.'<br>';
            flush();
            $em->flush();
        }

        $lineNum++;
    }

    fclose($file);
});

return $response;
Any idea what may be wrong?
OK, I found it: you need to call both ob_flush() and flush().
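In the callback above, the feedback block becomes:

if ($lineNum % 100 == 0) {
    echo 'Processing Line: '.$lineNum.'<br>';
    ob_flush(); // flush PHP's output buffer...
    flush();    // ...then the system-level buffer
    $em->flush();
}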

Download rapidshare file using rapidshare api in php

I am trying to download a rapidshare file using its "download" subroutine as a free user. The following is the code that I use to get response from the subroutine.
function rs_download($params)
{
    $url = "http://api.rapidshare.com/cgi-bin/rsapi.cgi?sub=download&fileid=".$params['fileid']."&filename=".$params['filename'];
    $reply = @file_get_contents($url);

    if (!$reply) {
        return false;
    }

    $result_arr = array();
    $result_keys = array(0 => 'hostname', 1 => 'dlauth', 2 => 'countdown_time', 3 => 'md5hex');

    if (preg_match("/DL:(.*)/", $reply, $reply_matches)) {
        $reply_altered = $reply_matches[1];
    } else {
        return false;
    }

    foreach (explode(',', $reply_altered) as $index => $value) {
        $result_arr[$result_keys[$index]] = $value;
    }

    return $result_arr;
}
For instance, trying to download this:
http://rapidshare.com/files/440817141/AutoRun__live-down.com_Champ.rar
I pass the fileid (440817141) and filename (AutoRun__live-down.com_Champ.rar) to rs_download(...) and I get a response just as RapidShare's API doc says.
The RapidShare API doc (see "sub=download") says to call the server hostname with the download authentication string, but I couldn't figure out what form the URL should take.
Any suggestions? I tried
$download_url = "http://$the-hostname/$the-dlauth-string/files/$fileid/$filename"
and a couple of other variations of the above; nothing worked.
I use curl to download the file, like the following:
$cr = curl_init();
$fp = fopen("d:/downloaded_files/file1.rar", "w");

// set curl options
$curl_options = array(
    CURLOPT_URL => $download_url,
    CURLOPT_FILE => $fp,
    CURLOPT_HEADER => false,
    CURLOPT_CONNECTTIMEOUT => 0,
    CURLOPT_FOLLOWLOCATION => true,
);

curl_setopt_array($cr, $curl_options);
curl_exec($cr);
curl_close($cr);
fclose($fp);
The above curl code doesn't seem to work; nothing gets downloaded. Probably the download URL is incorrect.
I also tried this format for the download URL:
"http://rs$serverid$shorthost.rapidshare.com/files/$fileid/$filename"
With this, curl writes a file entry, but that is all it does (a 0/1 KB file).
Here is the code that I use to get the serverid, shorthost, and a few other values from RapidShare.
function rs_checkfile($params)
{
    $url = "http://api.rapidshare.com/cgi-bin/rsapi.cgi?sub=checkfiles_v1&files=".$params['fileids']."&filenames=".$params['filenames'];

    // the response from rapidshare would be a string something like:
    // 440817141,AutoRun__live-down.com_Champ.rar,47768,20,1,l3,0
    $reply = @file_get_contents($url);

    if (!$reply) {
        return false;
    }

    $result_arr = array();
    $result_keys = array(
        0 => 'file_id', 1 => 'file_name', 2 => 'file_size', 3 => 'server_id',
        4 => 'file_status', 5 => 'short_host', 6 => 'md5'
    );

    foreach (explode(',', $reply) as $index => $value) {
        $result_arr[$result_keys[$index]] = $value;
    }

    return $result_arr;
}
rs_checkfile(...) takes comma-separated file IDs and filenames (no commas when calling for a single file).
Thanks in advance for any suggestions.
You start by requesting ?sub=download&fileid=X&filename=Y, which returns $hostname,$dlauth,$countdown,$md5hex. Since you're a free user, you have to wait $countdown seconds and then call ?sub=download&fileid=X&filename=Y&dlauth=Z against that hostname to perform the download.
There's a working implementation in python here that would probably answer any of your other questions.
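Putting that together with the rs_download() helper above, the full free-user flow might look like the sketch below. The exact download-URL format is an assumption pieced together from this answer; the RapidShare API is no longer available to verify against.

$params = array('fileid' => '440817141', 'filename' => 'AutoRun__live-down.com_Champ.rar');

$info = rs_download($params);
if ($info === false) {
    die('Initial download request failed.');
}

// Free users must wait out the countdown before the authenticated request
sleep((int) $info['countdown_time']);

$download_url = 'http://'.$info['hostname']
    .'/cgi-bin/rsapi.cgi?sub=download'
    .'&fileid='.$params['fileid']
    .'&filename='.$params['filename']
    .'&dlauth='.$info['dlauth'];

// Stream the response straight to disk with curl, as in the question
$fp = fopen('d:/downloaded_files/'.$params['filename'], 'w');
$cr = curl_init();
curl_setopt_array($cr, array(
    CURLOPT_URL => $download_url,
    CURLOPT_FILE => $fp,
    CURLOPT_FOLLOWLOCATION => true,
));
curl_exec($cr);
curl_close($cr);
fclose($fp);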
