I have written a Symfony2 (PHP MVC framework) script to download a zip file from the server, but the download stops midway. I have increased max_execution_time in the Apache configuration, yet the problem persists.
Does anyone have a quick fix for this?
Thanks in advance.
It seems like you may have an issue with a large file (downloading an archive of videos). You should use a StreamedResponse. That way you don't have to hold the entire contents of the file in memory; it just streams to the client. The way you are currently doing it loads the whole file into memory before the download can start, and you can see why that could be a problem. Here is a simple example of how you can stream a file to the client:
use Symfony\Component\HttpFoundation\StreamedResponse;

$path = "//usr/www/users/jjdqlo/Wellness/web/yoga_videos/archive.zip";

return new StreamedResponse(
    function () use ($path) { // first param is a callback, where you do the readfile()
        readfile($path);
    },
    200, // second param is the HTTP status code
    array( // third param is an array of header settings
        'Content-Disposition' => 'attachment; filename="archive.zip"',
        'Content-Type' => 'application/zip'
    )
);
Give this a shot. Assuming the problem is because of file size, this should solve the issue.
I've been searching the web quite a bit now and found several possible solutions, but none of them worked. Some say it's due to php.ini settings, some say it's due to the method I am using from the SDK. I'm a bit stuck here. I've tested it quite thoroughly, and with my current code I can download a file from my S3 bucket without problems or corruption, but the download is ALWAYS limited to 64 megabytes.
Is there some way to raise this limit, or to download in increments?
When I try to download a file over 64 megabytes, the page cannot be reached. Sometimes it does download the file anyway (while saying the page cannot be reached), but only exactly 64 megabytes of it.
try {
    $result = $s3->getObject([
        'Bucket' => $bucket,
        'Key' => $keyname
    ]);

    set_time_limit(0);
    header('Content-Description: File Transfer');
    header("Content-Type: {$result['ContentType']}; charset=utf-8");
    header("Content-Disposition: attachment; filename=" . $filename);
    echo $result['Body'];
} catch (S3Exception $e) {
    echo $e->getMessage() . PHP_EOL;
}
I've set my memory_limit to no limit, and I've also tried setting it to about 64 megabytes, but still no dice.
I've tried tinkering with post_max_size etc., but still nothing. I'm not sure whether the problem lies in my Apache/PHP setup, the EC2 instance I'm running, or a limitation of the S3 SDK.
The EC2 instance I'm running is a t2.xlarge, running Ubuntu 20.04 LTS (Focal Fossa) with a LAMP stack (Linux, Apache, MySQL/MariaDB, PHP).
Some of the things I've found with similar issues (I've tried the first one without luck):
Download large files from s3 via php <-- This link has solution
Max execution time out error when tried to download large object(2gb) from s3 bucket to window server using php
I don't really understand the bottom link; apparently the solution is to increment the download (according to online Thomas), but I'm not sure how that would work. How would I combine the data, and how would I keep downloading from where I left off? I'm missing an example of how to make that solution work. The OP of that post asked the same question.
The presigned-request solution from the OP of the first link did solve my problem, but it's a different "way of downloading": before, my code downloaded via the SDK; now we use the SDK to basically create a download link. The 64 megabyte issue still persists when downloading directly through the SDK. I can use this, but if anyone has a solution for downloading more than 64 megabytes via the SDK, please let me know!
My solution:
$cmd = $s3->getCommand('GetObject', [
    'Bucket' => $bucket,
    'Key' => $keyname,
    'ResponseContentDisposition' => 'attachment; filename="' . $filename . '"'
]);

$request = $s3->createPresignedRequest($cmd, '+15 min');
$presignedUrl = (string) $request->getUri();
And then basically just open that URL anywhere (in HTML, JS, etc.) and it will begin downloading the file.
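For the SDK-only route asked about above, one possible approach is to fetch the object in ranges with GetObject's Range parameter and echo each chunk as it arrives, so only one chunk is ever held in memory. This is an untested sketch; the headObject call and the 8 MB chunk size are my additions, while $s3, $bucket, $keyname and $filename come from the code above.
$head   = $s3->headObject(['Bucket' => $bucket, 'Key' => $keyname]);
$length = $head['ContentLength'];
$chunk  = 8 * 1024 * 1024;

header('Content-Type: ' . $head['ContentType']);
header('Content-Length: ' . $length);
header('Content-Disposition: attachment; filename="' . $filename . '"');

for ($start = 0; $start < $length; $start += $chunk) {
    $end  = min($start + $chunk - 1, $length - 1);
    $part = $s3->getObject([
        'Bucket' => $bucket,
        'Key'    => $keyname,
        'Range'  => "bytes={$start}-{$end}", // partial download; resuming just means restarting at $start
    ]);
    echo $part['Body']; // send this range before fetching the next one
    flush();
}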
The scenario is:
Download a file that is generated directly onto php://output as the script writes into it.
Without Varnish, the file downloads properly while the server is still writing to the output buffer.
With Varnish, the client waits until the whole file has been generated and only then downloads it.
Is there a particular Varnish configuration that makes the download start immediately instead of waiting for the fully generated file?
I already tried adding a Varnish rule to skip the caching mechanism for the URL where the file is generated, but since the script is writing into the buffer that doesn't make sense, does it?
EDIT
From the PHP point of view, it opens a file stream on php://output and writes into that stream:
$out = fopen( 'php://output', 'w' );
fputcsv( $out, $whatever ); // or fwrite
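For context, a minimal endpoint of that shape (the headers and the $rows source here are my assumptions, not the actual application code) looks something like this:
header('Content-Type: text/csv');
header('Content-Disposition: attachment; filename="export.csv"');

$out = fopen('php://output', 'w');
foreach ($rows as $row) {   // $rows stands in for whatever is being exported
    fputcsv($out, $row);
    flush();                // push each row to the client as soon as it is written
}
fclose($out);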
I found the solution in the Varnish configuration:
It is enough to not cache that specific URL (and HTTP verb) and pipe the request instead, in vcl_recv,
something like this:
sub vcl_recv {
    if ((req.url ~ "/url") && (req.method == "POST")) {
        return (pipe);
    }
}
I am using the SabreDAV PHP library to connect to a WebDAV server and download some files, but it is taking forever to download a 1 MB file, and I have to download files of up to 1 GB from that server. I looked at this link http://code.google.com/p/sabredav/wiki/WorkingWithLargeFiles but it is not helpful, because it says I will get a stream when I do a GET, and that is not the case.
Here is my code:
$settings = array(
    'baseUri' => 'file url',
    'userName' => 'user',
    'password' => 'pwd'
);

$client = new \Sabre\DAV\Client($settings);
$response = $client->request('GET');
The response is an array with a 'body' key that contains the content of the file. What am I doing wrong? I only need the file for reading. How can I read through the file line by line as quickly as possible?
Thanks in advance.
If it's taking that long just to download a 1 MB file, then I think it's not a SabreDAV problem but a problem with your server or network, or perhaps the remote server.
The Google Code link you mentioned just describes a way to transfer very large files; for that you would have to use the stream and fopen approach they mention. But I think I was able to transfer 1 GB files without that, just using the normal request, when I last used it with ownCloud.
If you have a VPS/dedicated server, open SSH and use the wget command to test the speed and the time it takes to download that remote file from WebDAV. If it's the same as with SabreDAV, then it's a server/network problem and not SabreDAV; otherwise, it's a problem with Sabre or your code.
Sorry, but I don't have any code to post to help you, since the problem itself is not clear and there can be more than 10 things causing it.
PS: You also need to increase the PHP limits for execution time, max file upload and max post size accordingly.
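If loading the whole response body through Sabre\DAV\Client really is the bottleneck, a hedged alternative (plain cURL rather than SabreDAV's API; the URL, credentials and target path below are placeholders) is to stream the file to disk and then read it line by line:
$src  = 'https://webdav.example.com/files/bigfile.csv'; // placeholder URL
$dest = fopen('/tmp/bigfile.csv', 'wb');

$ch = curl_init($src);
curl_setopt($ch, CURLOPT_USERPWD, 'user:pwd');   // same credentials as the $settings array
curl_setopt($ch, CURLOPT_FILE, $dest);           // write the body straight to the file handle
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_exec($ch);
curl_close($ch);
fclose($dest);

// Read it line by line without ever holding the whole file in memory.
$fh = fopen('/tmp/bigfile.csv', 'rb');
while (($line = fgets($fh)) !== false) {
    // process $line
}
fclose($fh);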
I have a PHP site with a lot of media files and users need to be able to download multiple files at a time as a .zip. I'm trying to use ZipStream to serve the zips on the fly with "store" compression so I don't actually have to create a zip on the server, since some of the files are huge and it's prohibitively slow to compress them all.
This works great and the resulting files can be opened by every zip program I've tried with no errors, except for OS X's default unzipping program, Archive Utility. You double-click the .zip file and Archive Utility decides it doesn't look like a real zip and instead compresses it into a .cpgz file.
Using unzip or ditto in the OS X terminal or StuffIt Expander unzips the file with no problem but I need the default program (Archive Utility) to work for the sake of our users.
What sort of things (flags, etc.) in otherwise acceptable zip files can trip Archive Utility into thinking a file isn't a valid zip?
I've read this question, which seems to describe a similar issue, but I don't have any of the general-purpose bit flag bits set, so it's not the third-bit issue, and I'm pretty sure I have valid CRC-32s because when I don't, WinRAR throws a fit.
I'm happy to post some code or a link to a "bad" zip file if it would help but I'm pretty much just using ZipStream, forcing it into "large file mode" and using "store" as the compression method.
Edit - I've tried the "deflate" compression algorithm as well and get the same results, so I don't think it's the "store" method. It's also worth pointing out that I'm pulling the files down one at a time from a storage server and sending them out as they arrive, so a solution that requires all the files to be downloaded before sending anything isn't viable (extreme example: 5 GB+ of 20 MB files; users can't wait for all 5 GB to transfer to the zipping server before their download starts or they'll think it's broken).
Here's a 140 byte, "store" compressed, test zip file that exhibits this behavior: http://teknocowboys.com/test.zip
The problem was in the "version needed to extract" field, which I found by doing a hex diff on a file created by ZipStream vs a file created by Info-zip and going through the differences, trying to resolve them.
ZipStream by default sets it to 0x0603. Info-zip sets it to 0x000A. Zip files with the former value don't seem to open in Archive Utility. Perhaps it doesn't support the features at that version?
Forcing the "version needed to extract" to 0x000A made the generated files open as well in Archive Utility as they do everywhere else.
Edit: Another cause of this issue is the zip file being downloaded with Safari (user agent version >= 537) while you under-reported the file size in the Content-Length header you sent.
The solution we employ is to detect Safari >= 537 server side; if that's the browser in use, we determine the difference between the Content-Length size and the actual size (how you do this depends on your specific application) and, after calling $zipStream->finish(), we echo chr(0) repeatedly to reach the promised length. The resulting file is technically malformed and any comment you put in the zip won't be displayed, but all zip programs will be able to open it and extract the files.
IE requires the same hack if you misreport your Content-Length, but instead of downloading a file that doesn't work, it just never finishes downloading and throws a "download interrupted" error.
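A rough sketch of that padding step; the browser check and the two byte counters are hypothetical names your application would have to maintain itself:
$zipStream->finish();

// $advertisedLength = what we sent in Content-Length; $bytesSent = what was actually written.
if ($isSafari537OrNewer && $bytesSent < $advertisedLength) {
    echo str_repeat(chr(0), $advertisedLength - $bytesSent); // pad with NUL bytes to the promised size
}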
Use ob_clean() and flush().
Example:
$file = __UPLOAD_PATH . $projectname . '/' . $fileName;
$zipname = "whatever.zip";
$zip = new ZipArchive();
$zip_full_path_name = __UPLOAD_PATH . $projectname . '/' . $zipname;
$zip->open($zip_full_path_name, ZipArchive::CREATE);
$zip->addFile($file); // adding one file for testing
$zip->close();

if (file_exists($zip_full_path_name)) {
    header('Content-Type: application/zip');
    header('Content-Disposition: attachment; filename="' . $zipname . '"');
    ob_clean();
    flush();
    readfile($zip_full_path_name);
    unlink($zip_full_path_name);
}
I've had this exact issue but with a different cause.
In my case the PHP-generated zip would open from the command line, but not via Finder in OS X.
I had made the mistake of allowing some HTML content into the output buffer prior to creating the zip file and sending that back as the response.
<some html> ... </some html>
<?php
// Output a zip file...
The command line unzip program was evidently tolerant of this but the Mac unarchive function was not.
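If you suspect the same mistake, here is a small hedged sketch of the cleanup before streaming the archive:
// Discard anything (HTML, whitespace, a BOM) that has already been buffered
// before the first byte of the zip goes out.
while (ob_get_level() > 0) {
    ob_end_clean();
}
header('Content-Type: application/zip');
header('Content-Disposition: attachment; filename="archive.zip"');
// ...start writing the archive from here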
No idea. If the external ZipStream class doesn't work, try another option. The PHP ZipArchive extension won't help you, since it doesn't support streaming and only ever writes to files.
But you could try the standard Info-ZIP utility. It can be invoked from within PHP like this:
header("Content-Type: archive/zip");
passthru("zip -0 -q -r - *.*");
That would lead to an uncompressed zip file being sent directly back to the client.
If that doesn't help, then the macOS zip front end probably doesn't like uncompressed entries. Remove the -0 flag in that case.
The Info-ZIP command-line tool I'm using, on both Windows and Linux, uses version 20 for the zip's "version needed to extract" field. This is needed for PHP-generated zips as well, since the default compression is the Deflate algorithm, so the "version needed to extract" field should really be 0x0014. If you alter the "(6 << 8) + 3" code in the referenced ZipStream class to just "20", you should get a valid zip file across platforms.
The author is basically telling you that the zip file claims it was created on OS/2 with the HPFS file system, and that the zip version needed predates Info-ZIP 1.0. Not many implementations know what to do with that one any longer ;)
For those using ZipStream in Symfony, here's your solution: https://stackoverflow.com/a/44706446/136151
use Symfony\Component\HttpFoundation\StreamedResponse;
use Aws\S3\S3Client;
use ZipStream;

//...

/**
 * @Route("/zipstream", name="zipstream")
 */
public function zipStreamAction()
{
    // test file(s) on S3
    $s3keys = array(
        "ziptestfolder/file1.txt"
    );

    $s3Client = $this->get('app.amazon.s3'); // s3client service
    $s3Client->registerStreamWrapper();      // required

    $response = new StreamedResponse(function () use ($s3keys, $s3Client) {
        // Define suitable options for the ZipStream archive.
        $opt = array(
            'comment' => 'test zip file.',
            'content_type' => 'application/octet-stream'
        );

        // Initialise ZipStream with the output zip filename and options.
        $zip = new ZipStream\ZipStream('test.zip', $opt);

        // Loop over the keys (useful for multiple files).
        foreach ($s3keys as $key) {
            // Use the file name from the S3 key so we can save it
            // to the zip under the same name.
            $fileName = basename($key);

            // Concatenate the s3:// path.
            $bucket = 'bucketname';
            $s3path = "s3://" . $bucket . "/" . $key;

            // addFileFromStream
            if ($streamRead = fopen($s3path, 'r')) {
                $zip->addFileFromStream($fileName, $streamRead);
            } else {
                die('Could not open stream for reading');
            }
        }

        $zip->finish();
    });

    return $response;
}
If your controller action's response is not a StreamedResponse, you are likely going to get a corrupted zip containing HTML, as I found out.
It's an old question, but I'll leave what worked for me in case it helps someone else.
When setting the options, you need to set zero header to true and set enable zip64 to false (this will limit the archive to 4 GB, though):
$options->setZeroHeader(true);
$options->setEnableZip64(false);
Everything else as described by Forer.
Solution found on https://github.com/maennchen/ZipStream-PHP/issues/71
So I am trying to serve large files via a PHP script; they are not in a web-accessible directory, so this is the best way I can figure to provide access to them.
The only way I could think of off the bat to serve these files is to load them into memory (fopen, fread, etc.), set the header data to the proper MIME type, and then just echo the entire contents of the file.
The problem with this is that I have to load these ~700 MB files into memory all at once and keep the whole thing there until the download is finished. It would be nice if I could stream in the parts that I need as they are being downloaded.
Any ideas?
You don't need to read the whole thing - just enter a loop reading it in, say, 32 KB chunks and sending each chunk as output. Better yet, use fpassthru(), which does much the same thing for you.
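That manual loop might look roughly like this (the 32 KB buffer size is just a reasonable default, not anything mandated); the fpassthru() version the answer recommends follows right after:
$name = 'mybigfile.zip';
header("Content-Type: application/zip");
header("Content-Length: " . filesize($name));

$fp = fopen($name, 'rb');
while (!feof($fp)) {
    echo fread($fp, 32 * 1024); // read and emit 32 KB at a time
    flush();
}
fclose($fp);
exit;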
$name = 'mybigfile.zip';
$fp = fopen($name, 'rb');
// send the right headers
header("Content-Type: application/zip");
header("Content-Length: " . filesize($name));
// dump the file and stop the script
fpassthru($fp);
exit;
Even fewer lines if you use readfile(), which doesn't need the fopen() call:
$name = 'mybigfile.zip';
// send the right headers
header("Content-Type: application/zip");
header("Content-Length: " . filesize($name));
// dump the file and stop the script
readfile($name);
exit;
If you want to get even cuter, you can support the Range header, which lets clients request a particular byte range of your file (you answer with a 206 status and a Content-Range header). This is particularly useful for serving PDF files to Adobe Acrobat, which just requests the chunks of the file it needs to render the current page. It's a bit involved, but see this for an example.
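For the record, a minimal single-range sketch might look like the following; real clients can ask for multiple ranges and suffix ranges, so treat this as a starting point rather than a complete implementation:
$name = 'mybigfile.zip';
$size = filesize($name);
$start = 0;
$end   = $size - 1;

if (isset($_SERVER['HTTP_RANGE']) &&
    preg_match('/bytes=(\d*)-(\d*)/', $_SERVER['HTTP_RANGE'], $m)) {
    if ($m[1] !== '') { $start = (int) $m[1]; }
    if ($m[2] !== '') { $end   = (int) $m[2]; }
    header('HTTP/1.1 206 Partial Content');
    header("Content-Range: bytes $start-$end/$size");
}

header('Content-Type: application/zip');
header('Accept-Ranges: bytes');
header('Content-Length: ' . ($end - $start + 1));

$fp = fopen($name, 'rb');
fseek($fp, $start);
$remaining = $end - $start + 1;
while ($remaining > 0 && !feof($fp)) {
    $chunk = fread($fp, min(8192, $remaining)); // never read past the requested range
    echo $chunk;
    $remaining -= strlen($chunk);
}
fclose($fp);
exit;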
The best way to send big files with PHP is the X-Sendfile header. It allows the web server to serve files much faster through zero-copy mechanisms like sendfile(2). It is supported by lighttpd, and by Apache with a plugin.
Example:
$file = "/absolute/path/to/file"; // can be protected by .htaccess
header('X-Sendfile: '.$file);
header('Content-type: application/octet-stream');
header('Content-Disposition: attachment; filename="'.basename($file).'"');
// other headers ...
exit;
The server reads the X-Sendfile header and sends out the file.
While fpassthru() has been my first choice in the past, the PHP manual actually recommends* using readfile() instead, if you are just dumping the file as-is to the client.
* "If you just want to dump the contents of a file to the output buffer, without first modifying it or seeking to a particular offset, you may want to use the readfile(), which saves you the fopen() call." —PHP manual
If your files are not accessible by the web server because the path is not in your web-serving directory (htdocs), then you can make a symbolic link (symlink) to that folder in your web-serving directory to avoid passing all traffic through PHP.
You can do something like this
ln -s /home/files/big_files_folder /home/www/htdocs
Using PHP to serve static files is a lot slower; if you have high traffic, memory consumption will be very large and it may not handle a large number of requests.
Have a look at fpassthru(). In more recent versions of PHP this should serve the files without keeping them in memory, as this comment states.
Strange - neither fpassthru() nor readfile() did it for me; I always got a memory error.
I resorted to using passthru() without the 'f':
$name = 'mybigfile.zip';
// send the right headers
header("Content-Type: application/zip");
header("Content-Length: " . filesize($name));
// dump the file and stop the script
passthru('/bin/cat ' . escapeshellarg($name));
exit;
This executes the Unix 'cat' command and sends its output to the browser.
Comment for slim: the reason you don't just put a symlink somewhere in webspace is SECURITY.
One of the benefits of fpassthru() is that it can work not only with files but with any valid handle - a socket, for example.
And readfile() should be a little faster, because it uses the OS caching mechanism where possible (as does file_get_contents()).
One more tip: fpassthru() holds the handle open until the client has received the content (which may take quite a long time on a slow connection), so you must use some locking mechanism if parallel writes to the file are possible.
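A tiny illustration of the socket case (host and request are placeholders):
// fpassthru() accepts any readable stream handle, not just files.
$sock = fsockopen('example.com', 80, $errno, $errstr, 5);
fwrite($sock, "GET / HTTP/1.0\r\nHost: example.com\r\n\r\n");
fpassthru($sock); // dump everything the socket returns straight to the output
fclose($sock);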
The other answers are all good. But is there any reason you can't make a web-accessible directory containing symbolic links to the actual files? It may take some extra server configuration, but it ought to work.
If you want to do it right, PHP alone can't do it. You would want to serve the file using Nginx's X-Accel-Redirect (recommended) or Apache's X-Sendfile, which are built exactly for this purpose.
I will include in this answer some text found in this article.
Why not serve the files with PHP:
- Done naively, the file is read into memory and then served. If the files are large, this could cause your server to run out of memory.
- Caching headers are often not set correctly. This causes web browsers to re-download the file multiple times even if it hasn't changed.
- Support for HEAD requests and range requests is typically not automatically supported.
- If the files are large, serving such files ties up a worker process or thread. This can lead to starvation if there are limited workers available. Increasing the number of workers can cause your server to run out of memory.
NGINX handles all of these things properly. So let's handle permission checks in the application and let NGINX serve the actual file. This is where internal redirects come in. The idea is simple: you can configure a location entry as usual when serving regular files.
Add this to your nginx server block:
location /protected_files/ {
    internal;
    alias /var/www/my_folder_with_protected_files/;
}
In your project, require the HTTP Foundation package:
composer require symfony/http-foundation
Serve the files in PHP using Nginx:
use Symfony\Component\HttpFoundation\BinaryFileResponse;

$real_path = '/var/www/my_folder_with_protected_files/foo.pdf';
$x_accel_redirect_path = '/protected_files/foo.pdf';

BinaryFileResponse::trustXSendfileTypeHeader();
$response = new BinaryFileResponse( $real_path );
$response->headers->set( 'X-Accel-Redirect', $x_accel_redirect_path );
$response->sendHeaders();
exit;
This should be the basics you need to get started.
Here's a more complete example serving an Inline PDF:
use Symfony\Component\HttpFoundation\BinaryFileResponse;
use Symfony\Component\HttpFoundation\File\File;
use Symfony\Component\HttpFoundation\ResponseHeaderBag;

$real_path = '/var/www/my_folder_with_protected_files/foo.pdf';
$x_accel_redirect_path = '/protected_files/foo.pdf';

$file = new File( $real_path );

BinaryFileResponse::trustXSendfileTypeHeader();
$response = new BinaryFileResponse( $real_path );
$response->setImmutable( true );
$response->setPublic();
$response->setAutoEtag();
$response->setAutoLastModified();
$response->headers->set( 'Content-Type', 'application/pdf' );
$response->headers->set( 'Content-Length', $file->getSize() );
$response->headers->set( 'X-Sendfile-Type', 'X-Accel-Redirect' );
$response->headers->set( 'X-Accel-Redirect', $x_accel_redirect_path );
$response->headers->set( 'X-Accel-Expires', 60 * 60 * 24 * 90 ); // 90 days
$response->headers->set( 'X-Accel-Limit-Rate', 10485760 ); // 10 MB/s
$response->headers->set( 'X-Accel-Buffering', 'yes' );
$response->setContentDisposition( ResponseHeaderBag::DISPOSITION_INLINE, basename( $real_path ) ); // view in browser; change to DISPOSITION_ATTACHMENT to download
$response->sendHeaders();
exit;