For the past three months my site has been using a PHP file handler in combination with .htaccess.
Users accessing the site's uploads folder are redirected to the handler like so:
RewriteRule ^(.+)\.*$ downloader.php?f=%{REQUEST_FILENAME} [L]
The purpose of the file handler is pseudo-coded below, followed by the actual code.
// Check that the file exists and that the user is downloading from the uploads directory; assume true.
// Check the extension against a file-type whitelist and set the MIME type; $ctype = that MIME type.
header("Pragma: public"); // required
header("Expires: 0");
header("Cache-Control: must-revalidate, post-check=0, pre-check=0");
header("Cache-Control: private",false); // required for certain browsers
header("Content-Type: $ctype");
header("Content-Disposition: attachment; filename=\"".basename($filename)."\";" );
header("Content-Transfer-Encoding: binary");
header("Content-Length: ".filesize($filename));
readfile("$filename");
As of yesterday, the handler started returning garbled files and unreadable images and had to be bypassed. I'm wondering what settings could have gone awry to cause this.
-EDIT-
Problem found, but not resolved. Including the PHP library file I was using to integrate with WordPress was corrupting the files. Removing that block of code fixes the corruption, but leaves the files accessible without the desired authentication.
#include_once($_SERVER['DOCUMENT_ROOT'].'/wp-blog-header.php');
if(!is_user_logged_in())
{
auth_redirect(); //Kicks the user to a login page.
}
//resume download script
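One idea I still need to test: load WordPress through wp-load.php instead of wp-blog-header.php, so the template loader never runs and never prints HTML into the download, and discard any buffered output before the headers go out. A rough sketch, assuming wp-load.php sits in the document root:
// Load WordPress core without running the template loader (the part that
// prints HTML and was corrupting the binary output).
require_once($_SERVER['DOCUMENT_ROOT'].'/wp-load.php');
if (!is_user_logged_in()) {
    auth_redirect(); // kicks the user to a login page
    exit;
}
// Discard anything WordPress may already have buffered before sending binary data.
while (ob_get_level() > 0) {
    ob_end_clean();
}
// resume download script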
Maybe more tests will reveal the problem...
if ( !isset($filename) ) {
die('parameter "filename" not set');
}
else if ( !file_exists($filename) ) {
die('file does not exist');
}
else if ( !is_readable($filename) ) {
die('file not readable');
}
else if ( false===($size=filesize($filename)) ) {
die('stat failed');
}
else if ( headers_sent() || ob_get_length()>0) {
die('something already sent output.');
}
else {
$basename = basename($filename);
header("Pragma: public");
header("Expires: 0");
header("Cache-Control: must-revalidate, post-check=0, pre-check=0");
header("Cache-Control: private",false); // required for certain browsers
header("Content-Type: $ctype");
header("Content-Disposition: attachment; filename=\"".$basename."\";" );
header("Content-Transfer-Encoding: binary");
header("Content-Length: ".$size);
readfile($filename);
}
How are the files corrupted? Truncated? 0-byte? Completely different content? Random sections replaced with garbage?
Is it possible the server's PHP memory limit has been lowered? readfile() can end up buffering the whole file in memory before outputting it (especially when output buffering is active), so a 40 MB file will fail if the memory limit is 39.99 MB, that kind of thing.
For streaming a file to the user, it's best NOT to use PHP's own "dump file to browser" functions, as they can all run into the memory limit. It's better to do an fopen/fread/fclose loop and spit the file out in small, manageable chunks (4k, 16k, etc.).
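A minimal sketch of that kind of loop, assuming the headers have already been sent and $filename is the validated path from the question:
// Stream the file in 8 KB chunks so memory use stays flat regardless of file size.
$fp = fopen($filename, 'rb');
if ($fp) {
    while (!feof($fp)) {
        echo fread($fp, 8192);
        flush(); // push each chunk to the client immediately
    }
    fclose($fp);
}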
I can't seem to figure this out and I know it's something simple. I am building the back-end to a very basic content management system. For this specific piece, I am just trying to create a PHP link that allows for a file (the client's CV) to be downloaded.
MY PROBLEM:
When the link to download the file is clicked, instead of the browser prompting you to choose a local directory to save the file to - it simply displays the file and a bunch of symbols before and after the document's contents (I am assuming this is the file's opening and closing exif data for an application to decipher).
How could I go about forcing the browser to prompt the user for a "Save As..." box?
<?php
require("connect.php");
$query = "SELECT * FROM kb_info LIMIT 1";
$result = mysql_query($query, $link);
while ($row = mysql_fetch_array($result)) {
$parts = explode(".", $row["extension"]);
$file_extension = end($parts); // end() needs a real variable, not the return value of explode()
if ($file_extension == 'doc') {
header('Content-disposition: attachment; filename='.$row["extension"]);
header('Content-type: application/msword');
header ("Content-Length: ".filesize($row["extension"]));
readfile($row["extension"]);
exit;
}
if ($file_extension == 'docx') {
header('Content-disposition: attachment; filename='.$row["extension"]);
header('Content-type: application/vnd.openxmlformats-officedocument.wordprocessingml.document');
header ("Content-Length: ".filesize($row["extension"]));
readfile($row["extension"]);
exit;
}
if ($file_extension == 'pdf') {
header('Content-disposition: attachment; filename='.$row["extension"]);
header('Content-type: application/pdf');
header ("Content-Length: ".filesize($row["extension"]));
readfile($row["extension"]);
exit;
}
}
?>
Many thanks,
Joshie
I think the problem may be that there is some whitespace somewhere in your PHP files, which stops the headers from being sent correctly, so you see the raw output instead of a download.
I would suggest the following steps:
check connect.php and look for empty lines/spaces at the beginning/end of the file and remove them
adapt your PHP files so that you leave out the closing ?> tag at the end of the file - that way you do not get stray blank lines after it
if the above is not enough, check your Apache and PHP error logs and/or set up error logging so you also see warnings - that way you will know if the headers are not sent correctly or if there is some other error (a quick in-script check is sketched below)
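If you want the script itself to report stray output, a check like this (a sketch, placed right before the header() calls) will do it:
// Bail out loudly if whitespace or an error message has already been sent,
// since anything output before the headers corrupts the download.
if (headers_sent($file, $line)) {
    die("Output already started in $file on line $line");
}
if (ob_get_length() > 0) {
    ob_clean(); // discard stray buffered whitespace before sending the file
}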
Headers I use for download:
header("Pragma: public");
header("Expires: 0");
header("Cache-Control: must-revalidate, post-check=0, pre-check=0");
header("Cache-Control: public");
header("Content-Description: File Transfer");
header("Content-type: application/force-download");
header("Content-Disposition: attachment; filename=".$file);
header("Content-Type: application/octet-stream");
header("Content-Transfer-Encoding: binary");
header("Content-Length: ".$bytes."");
The builtin browser of my ebook-reader (Sony PRS-T1) somehow doesn't like to download .epub files.
Normally it opens .epub files as if they were text-files.
With this PHP download script I managed to force the browser to download files I store on my server:
<?php
$path = $_GET['path'];
$mimeType = $_GET['mimeType'];
if(!file_exists($path)) {
// File doesn't exist, output error
die('file not found');
} else {
$size = filesize($path);
$file = basename($path);
// Set headers
header("Pragma: public"); // required
header("Expires: 0");
header("Cache-Control: must-revalidate, post-check=0, pre-check=0");
header("Cache-Control: private",false); // required for certain browsers
header("Content-Description: File Transfer");
header("Content-Disposition: attachment; filename=\"$file\"");
header("Content-Type: $mimeType");
header("Content-Transfer-Encoding: binary");
header("Content-Length: $size");
// Read the file from disk
readfile($path);
}
exit();
?>
Now the PRS-T1 will download the file, but for some reason I don't understand it changes the file extension from .epub to .htm - this is weird.
But it seems like there is a way to do it right: when I download a .epub file from readbeam.com it works just like expected (I found this hint at http://www.mobileread.com/forums/showthread.php?t=163466).
What is it that makes the difference between their configuration and mine?
Here's what I found out using firebug:
http://tinypic.com/r/vzzkzp/5
http://tinypic.com/r/2h7pbth/5
Your Content-Type header doesn't match the one from readbeam.
application/epub zip != application/epub+zip
The + is probably being seen by PHP as a space since it seems you are passing it via $_GET.
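If the MIME type keeps being passed in the URL, encoding it when the link is built should preserve the '+'. A sketch (download.php and the variable values are just placeholders for the script above):
// rawurlencode() turns '+' into %2B, so PHP decodes it back to '+', not a space.
$path = '/books/example.epub';      // hypothetical file path
$mimeType = 'application/epub+zip';
$url = 'download.php?path=' . rawurlencode($path)
     . '&mimeType=' . rawurlencode($mimeType);
echo '<a href="' . $url . '">Download</a>';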
What I am trying to implement is a forced download of a file through PHP.
There are 2 issues that I am facing, and I've racked my brain over what could possibly be going wrong here.
Whenever I try to download the file using IE, the download gets interrupted about midway; i.e. if my file is 1024 KB in size, the download stops at around 500 KB and IE shows the error message 'Download was interrupted'.
The other issue I encounter frequently (but not always) is that the downloaded file (which is actually a zip file) sometimes gets corrupted for some reason! The file on the server is fine - if I download it directly from there, no issues at all. However, if I download it using my PHP script and then try to unzip it, Windows says 'Invalid or corrupt file...'
I really need some help on this.
Following is the block of code that downloads the file:
$fileName = "./SomeDir/Somefile.zip";
$fsize = filesize($fileName);
set_time_limit(0);
// required for IE, otherwise Content-Disposition may be ignored
if(ini_get('zlib.output_compression'))
ini_set('zlib.output_compression', 'Off');
// set headers
header("Pragma: public");
header("Expires: 0");
header("Cache-Control: must-revalidate, post-check=0, pre-check=0");
header("Cache-Control: public");
header("Content-Description: File Transfer");
header('Content-type: application/zip');
header("Content-Disposition: attachment; filename=\"".basename($fileName)."\";");
header("Content-Transfer-Encoding: binary");
header('Accept-Ranges: bytes');
header("Content-Length: " . $fsize);
// download
$file = #fopen($fileName,"rb");
if ($file) {
while(!feof($file)) {
print(fread($file, 1024*8));
flush();
if (connection_status()!=0) {
#fclose($file);
die();
}
}
#fclose($file);
}
Try this
<?php
$file = "./SomeDir/Somefile.zip";
set_time_limit(0);
$name = "Somefile.zip";
// Print headers and print to output.
header('Cache-Control:');
header('Pragma:');
header("Content-type: application/zip");
header("Content-Length: ". filesize($file));
header("Content-Disposition: attachment; filename=\"{$name}\"");
readfile($file);
?>
I think your Content-Length reports more bytes than the amount really transferred.
Check the server's log. The server could be gzipping the content, and when it closes the connection the client is still waiting for the remaining declared bytes.
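If compression is the culprit, something along these lines should make the bytes on the wire match the declared length. A sketch assuming Apache with mod_php (apache_setenv() only exists there), reusing the question's $fileName:
// Disable compression at both the PHP and Apache level before sending headers.
if (ini_get('zlib.output_compression')) {
    ini_set('zlib.output_compression', 'Off');
}
if (function_exists('apache_setenv')) {
    apache_setenv('no-gzip', '1'); // tell mod_deflate to skip this request
}
header("Content-Length: " . filesize($fileName));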
I'm using the following PHP script to download a 20 MB file:
($filepath and $filename are set earlier in the script)
$fullPath = $filepath.$filename;
if ($fd = fopen($fullPath, "r")) {
// http headers for zip downloads
header("Pragma: public");
header("Expires: 0");
header("Cache-Control: must-revalidate, post-check=0, pre-check=0");
header("Cache-Control: public");
header("Content-Description: File Transfer");
header("Content-type: application/octet-stream");
header("Content-Disposition: attachment; filename=\"".$filename."\"");
header("Content-Transfer-Encoding: binary");
header("Content-Length: ".filesize($fullPath));
while(!feof($fd)) {
echo(fread($fd, 1024*8));
flush();
if (connection_status()!=0) {
fclose($fd);
die();
}
}
}
fclose($fd);
exit();
It works fine if the download finishes, but if the download is canceled by the user and they click the link to re-download, the server is completely unresponsive for several minutes, and then the download will begin. It seems like it is waiting for the script to time out... but isn't the "if (connection_status()!=0)..." part supposed to kill the script?
Any help is much appreciated! Thank you in advance!
I think you're over-engineering your solution somewhat. Try using readfile() instead of your fopen/fread loop.
Better yet, unless there's a compelling reason why you need PHP to mediate the file transfer, don't use a PHP script at all and simply provide a direct link to the file in question.
The best code is always the code you don't have to write.
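For reference, with readfile() the body of the script in the question collapses to something like this (a sketch reusing the question's $fullPath; the headers stay the same):
// Let readfile() stream the file; no manual fread/echo loop is needed.
header("Content-Length: " . filesize($fullPath));
readfile($fullPath);
exit();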
I'm using a script to download video, but it takes a lot of time to download. Are there any processes or other scripts that could help me?
// set headers
header("Pragma: public");
header("Expires: 0");
header("Cache-Control: must-revalidate, post-check=0, pre-check=0");
header("Cache-Control: public");
header("Content-Description: File Transfer");
header("Content-Type: $mtype");
header("Content-Disposition: attachment; filename=\"$asfname\"");
header("Content-Transfer-Encoding: binary");
header("Content-Length: " . $fsize);
// download
// #readfile($file_path);
$file = #fopen($file_path,"rb");
if ($file) {
while(!feof($file)) {
print(fread($file, 1024*100));
flush();
if (connection_status()!=0) {
#fclose($file);
die();
}
}
#fclose($file);
}
Using the readfile() function (as you originally had) will allow you to spool directly from the file to output, rather than using a chunking loop and printing as you're doing. So why have you chosen to do this chunk loop?
As above, readfile() is one way.
The other, even more preferable method depends on your web server. Nginx, Lighttpd, and a module for Apache all allow you to pass a header with a file path/name back to the server, and the server will then send the file directly itself, so no PHP resources are tied up doing it. If that's not possible, then readfile() is probably the best you have - if you can't just give someone a direct URL to download it.
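A sketch of that header-based hand-off (paths and file names are hypothetical; X-Sendfile needs mod_xsendfile on Apache, and the nginx variant needs an internal location):
// Hand the actual file transfer off to the web server instead of PHP.
$file_path = '/var/media/video.mp4'; // hypothetical filesystem path
header('Content-Type: video/mp4');
header('Content-Disposition: attachment; filename="video.mp4"');
header('X-Sendfile: ' . $file_path); // Apache with mod_xsendfile
// nginx equivalent (the /protected/ location must be marked "internal"):
// header('X-Accel-Redirect: /protected/video.mp4');
exit;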