I have "Files Downloading Center" for large files (100MB - 2GB).
I'm using PHP.
My problem is when forcing files to download by using php headers the server memory consumed very much, although I make chunks from file when download process, that is mean when 5 users download large file at the same time the server will stop to work.
How to make users to download large files form my server without any problem.
For example, if i use header("location : path/to/files/2GB.zip");, the problem finish. but this is what i don't need because i don't need to give users direct link to the files for security.
What is solution ?
You could store the files outside of your web root, then read them in at download time, using the header() function to send them to the user. This is a little overkill, but it works:
// $FILEHANDLE is an already-opened handle to the file and $FILESIZE its size;
// this reads the whole file into memory before sending it.
$getFiles = fread($FILEHANDLE, $FILESIZE);
header("Content-Transfer-Encoding: Binary");
header("Content-Length: " . $FILESIZE);
header("Content-Type: " . $FILETYPE);
header('Content-Disposition: attachment; filename="' . $FILENAME . '"');
echo $getFiles;
This consumes more memory because you read the entire file on the server before you transfer it, but your downloaders will never know where the files live. YMMV with very large files for obvious reasons.
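If memory is the concern, a lower-memory variant is to stream the file in small chunks instead of reading it all at once. A minimal sketch, where $filePath is a placeholder and the 8 KB chunk size is an arbitrary choice:

<?php
// Stream a file to the client chunk by chunk so PHP never holds
// the whole file in memory at once.
$filePath = '/path/outside/webroot/2GB.zip'; // hypothetical location
$fileName = basename($filePath);

header('Content-Type: application/octet-stream');
header('Content-Length: ' . filesize($filePath));
header('Content-Disposition: attachment; filename="' . $fileName . '"');

$handle = fopen($filePath, 'rb');
while (!feof($handle)) {
    echo fread($handle, 8192); // send one 8 KB chunk
    flush();                   // push it out to the client immediately
}
fclose($handle);
?>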
I have the following issue: there is a large file on my server (~2GB).
A user who is logged in to my site can download this file from my server.
Unfortunately my server is not that strong. When many users download this file simultaneously, they all get very poor download speeds.
So I uploaded the file to google drive and generated a direct download link:
http://googledrive.com/host/[FILE_ID]
My code:
<?php
$remoteFile = 'http://googledrive.com/host/[FILE_ID]';
$filename = basename($remoteFile);
header("Content-Type: application/octet-stream");
header("Content-Disposition: attachment; filename=\"$filename\"");
echo file_get_contents($remoteFile);
?>
My question: does file_get_contents() really bypass my server, or does this file still go through my server? If so, that makes no sense :(
file_get_contents() does not bypass your server: PHP downloads the whole file from Google Drive and then re-sends it to the user, so all the traffic still flows through your server. Instead, generate a unique Google Drive share link after every download, so that only your authenticated users can download, and links may not be used more than once.
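To actually keep the transfer off your server, the usual pattern is to redirect the authenticated user to the remote link rather than echoing its contents. A minimal sketch (the session check is a hypothetical placeholder for your real auth logic):

<?php
session_start();

// Hypothetical auth check; replace with your real logic.
if (!isset($_SESSION['user_id'])) {
    http_response_code(403);
    exit;
}

// Redirect instead of echoing: the browser downloads straight from
// Google Drive, so the bytes never pass through this server.
header('Location: http://googledrive.com/host/[FILE_ID]');
exit;
?>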
I'm using the following to force download of MP3 files:
http://www.aaronfagan.ca/blog/2014/how-to-use-php-to-force-a-file-download/
Basically it uses a few lines of PHP to force a download:
<?php
if (isset($_GET['id'])) {
    $file = $_GET['id'];
    header("Content-Description: File Transfer");
    header("Content-Type: application/octet-stream");
    header('Content-Disposition: attachment; filename="'.basename($file).'"');
    header("Content-Length: ".filesize($file));
    readfile($file);
} else {
    header('Location: http://www.mywebsite.com/error/');
}
?>
Am I correct in understanding that anyone who knows how it works could basically use this to download any file on any website?
For example, if I place that file in the root of mywebsite.com, anyone with knowledge could use a link like the following to download any file anywhere?:
http://www.mywebsite.com/download.php?id=http://www.anywebsite/files/file.pdf
Or would it only work on my website?
The files I want users to be able to download are MP3 files. Is there a way to restrict the type of files that "download.php" will process, so that the "Content-Type" is set only for MP3 files and this "hack" is prevented?
If permissions on http://www.anywebsite/files/file.pdf are open (meaning you can open/download file.pdf in a browser), then your script can download it remotely (although, as far as I know, basename() is meant for local paths). Usually, though, permissions deny that kind of direct download, and you can lock down permissions on your own files too.
Also, if you want, you can add a CAPTCHA to your download method to prevent automated grabbing.
Your code works only for files on your own website.
For serving resources from other servers you can use a script like Resource-Proxy.
Good luck!
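As a sketch of the restriction asked about above, assuming the MP3s live in a single local directory (here hypothetically mp3s/ next to the script), download.php can refuse anything that is not a known .mp3 file in that directory:

<?php
// Hypothetical layout: every downloadable MP3 lives in ./mp3s/.
$baseDir = __DIR__ . '/mp3s/';
$id   = isset($_GET['id']) ? basename($_GET['id']) : '';
$file = $baseDir . $id;

// basename() above strips directory components, so "../" tricks and
// full URLs collapse to a bare filename inside $baseDir.
if ($id !== '' && is_file($file)
        && strtolower(pathinfo($file, PATHINFO_EXTENSION)) === 'mp3') {
    header('Content-Type: audio/mpeg'); // MP3-specific MIME type
    header('Content-Disposition: attachment; filename="' . $id . '"');
    header('Content-Length: ' . filesize($file));
    readfile($file);
} else {
    header('Location: http://www.mywebsite.com/error/');
}
?>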
I have written a small download portal and I use application/octet-stream to download the files.
function fu($filename)
{
    header("Content-Type: application/octet-stream");
    $save_as_name = basename($filename);
    header("Content-Disposition: attachment; filename=\"$save_as_name\"");
    readfile($filename);
}
When I am downloading a large file it is not possible to browse through the directory tree until the download has finished.
Is there any chance to do this in parallel?
You're probably using sessions. While Window A is busy serving the download with the session open, Window B will not be able to get any pages, because the PHP process serving A still has the session data open/locked and B is waiting for that lock to be released.
The simple solution is to call session_write_close() at some point before you call readfile(). This will commit the session to disk on the server, close it, and release the lock so other PHP processes can pick it back up.
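Applied to the fu() function above, that would look something like this minimal sketch:

function fu($filename)
{
    // Commit and unlock the session first so other requests from the
    // same user aren't blocked while the download runs.
    session_write_close();

    header("Content-Type: application/octet-stream");
    $save_as_name = basename($filename);
    header("Content-Disposition: attachment; filename=\"$save_as_name\"");
    readfile($filename);
}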
I have a PHP file that acts as a gatekeeper for all the files I want people with sufficient privileges to download.
The code I use to send the file to the user is:
header('Content-Description: File Transfer');
header('Content-Type: application/octet-stream');
header("Content-disposition: attachment; filename=\"".$public_filename."\"");
header("Content-Transfer-Encoding: Binary");
header('Expires: 0');
header('Cache-Control: must-revalidate, post-check=0, pre-check=0');
header('Pragma: public');
header("Content-length: ".$f_filesize);
readfile($file_path);
Most files are fairly large: 400MB-10GB.
What would be a good way to do this while keeping the true locations and filenames secret, so people can't just link to the files directly but HAVE to go through my download.php?file=ID gatekeeper?
Thanks
EDIT: I'm not asking how to do user authentication; all that is done. I'm just asking whether my way of doing it is a good idea on a large scale. It seems like it could cause memory problems if I keep reading 10GB files.
OK, having PHP send files of around 400MB-10GB is not good. You need to somehow let whatever webserver you're using actually serve the files.
This really comes down to how secure you need it to be. The easiest solution that comes to mind (but far from the most secure) is to use symbolic links with long random names that point to the original file. After a certain time the symbolic links expire and are removed. Each user gets their own symbolic link (or "token") to the file they're downloading. I'm not sure how this plays out in a Windows environment, but on Unix it's fairly straightforward.
Here's some pseudocode:
if ($user->isAllowedToDownload($file)) {
    // Unguessable, per-user token for this file and moment in time.
    $token = md5($user->name . $file->name . time() . $someGoodRandomValue);
    // Expose the real file through a randomly named symlink in the public path.
    symlink($file, $download_path . $token);
    // Redirect so the webserver itself serves the file via the tokenised URL.
    header("Location: $download_url$token");
}
Then you need a cron job that cleans out old symbolic links. You also need to make sure the webserver is set to follow symbolic links, preferably only for the folder where these download tokens are created.
So when the user requests, say, domain.com/download?file=bigfile.mp4, a symbolic link is created in the webserver's public space that points to the real file outside the webserver's public space. The user gets redirected to something like domain.com/getFile/ab739babec890103bdbca72, which in turn causes the webserver to serve the file. Now it's very hard for users to guess the URL for a file, and that's the "security".
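The cleanup could be as simple as a PHP script run from cron that removes symlinks older than some cutoff. A sketch, assuming the tokens live in a dedicated directory and an hour is an acceptable token lifetime (both assumptions):

<?php
// Run from cron, e.g. every 10 minutes. $download_path as in the
// pseudocode above; $max_age is an assumed one-hour token lifetime.
$download_path = '/var/www/public/dl/';
$max_age = 3600; // seconds

foreach (glob($download_path . '*') as $link) {
    if (!is_link($link)) {
        continue;
    }
    $info = lstat($link); // stat the link itself, not its target
    if (time() - $info['mtime'] > $max_age) {
        unlink($link);    // removes only the symlink, not the real file
    }
}
?>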
You're already doing that: the $public_filename is what you want the file called, and the readfile($file_path) part is the file itself, whose location isn't made public. Beyond that, it can live above the document root.
1. Put the files somewhere that is not accessible via HTTP.
2. Create a database table of file IDs with file paths.
3. Link to the files via file ID (as you noted above, download.php?fileID=0000); see the sketch below.
4. ???
5. Profit.
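A minimal sketch of steps 1-3, assuming a PDO connection and a files table with id, name, and path columns (all hypothetical names):

<?php
// download.php?fileID=0000
$pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');

// Look up the real path by numeric ID; the path itself never leaves the server.
$stmt = $pdo->prepare('SELECT path, name FROM files WHERE id = ?');
$stmt->execute(array((int) $_GET['fileID']));
$row = $stmt->fetch(PDO::FETCH_ASSOC);

if ($row && is_file($row['path'])) {
    // path points somewhere outside the document root (step 1).
    header('Content-Type: application/octet-stream');
    header('Content-Disposition: attachment; filename="' . $row['name'] . '"');
    header('Content-Length: ' . filesize($row['path']));
    readfile($row['path']);
} else {
    http_response_code(404);
}
?>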
As someone who did this previously (many years ago), you need to consider the memory impact this will have on your server. The readfile() function was not available back then, so you may not need to do anything special for memory these days.
You'll want to somehow authenticate them (an HTML form, HTTP basic auth, whatever), then set a session flag, which your download.php script can check. Note that this doesn't prevent people from downloading the file, then distributing it themselves.
You should configure your web server so the real files are not directly accessible.
It's not going to cause memory problems per se. readfile does not read the file into memory. However, using PHP will create overhead. You can eliminate some of this delay by using X-Sendfile.
Your method can cause memory problems (readfile's output can pile up in PHP's output buffer, for instance), but it is possible to read and output the file in chunks: call flush() after you echo each chunk. With a little more effort you can also make resumable downloads work. Still, this is a CPU-hungry approach.
The easier and better solution is to use the "X-Sendfile" header, supported by both Apache and lighttpd through their modules. All you have to do is specify the file name in your header, similar to this:
header('X-Sendfile: filename-on-your-file-system');
Link for lighttpd:
http://redmine.lighttpd.net/projects/lighttpd/wiki/X-LIGHTTPD-send-file
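A slightly fuller sketch for Apache, assuming mod_xsendfile is installed and enabled for the vhost: PHP still sends the attachment headers and decides who may download, but the webserver streams the bytes.

<?php
// PHP decides *whether* and *what* to send; Apache does the actual I/O.
$file = '/files/outside/webroot/2GB.zip'; // hypothetical path

header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="' . basename($file) . '"');
header('X-Sendfile: ' . $file);
exit; // no body needed; mod_xsendfile replaces it with the file
?>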
Firstly: I'm a lowly web designer who knows just enough PHP to be dangerous and just enough about server administration to be, well, nothing. I probably won't understand you unless you're very clear!
The setup: I've set up a website where the client uploads files to a specific directory, and those files are made available, through php, for download by users. The files are generally executable files over 50MB. The client does not want them zipped, as they feel their users aren't savvy enough to unzip them. I'm using the php below to force a download dialogue box and hide the directory where the files are located.
It's a Linux server, if that makes a difference.
The problem: There is a certain file that becomes corrupt after the user tries to download it. It is an executable file, but when it's clicked on, a blank DOS window opens up. The original file, prior to download opens perfectly. There are several other similar files that go through the same exact download procedure, and all of those work just fine.
Things I've tried: I've tried uploading the file zipped, then unzipping it on the server to make sure it wasn't becoming corrupt during upload, and no luck.
I've also compared the binary code of the original file to the downloaded file that doesn't work, and they're exactly the same (so the php isn't accidentally inserting anything extra into the file).
Could it be an issue with the headers in my downloadFile function? I really am not sure how to troubleshoot this one…
This is the download php, if it's relevant ($filenamereplace is defined elsewhere):
downloadFile("../DIRECTORY/files/$filenamereplace","$filenamereplace");
function downloadFile($file, $filename) {
    if (file_exists($file)) {
        header('Content-Description: File Transfer');
        header('Content-Type: application/octet-stream');
        header('Content-Disposition: attachment; filename="'.$filename.'"');
        header('Content-Transfer-Encoding: binary');
        header('Expires: 0');
        header('Cache-Control: must-revalidate, post-check=0, pre-check=0');
        header('Pragma: public');
        header('Content-Length: ' . filesize($file));
        # flush();
        readfile($file);
        exit;
    }
}
ETA Additional Info:
- Tests for working/non-working files have been done on the same machine
- If it makes any difference, the original file has a custom icon. After download, the file has a generic blank document icon.
Additional Info: I THINK THIS ONE'S IMPORTANT!
I just tried downloading the file directly (to bypass the download link that triggers the download function above). If I download the file by just going to its url and downloading it that way, the downloaded file WORKS. So I'm thinking it must have something to do with the download function. But what??
3/17 MAJOR CORRECTION —AND RESOLVED—
So I woke up this morning and it dawned on me that maybe I was comparing the files wrong. (I had re-saved them as binary text and then compared them; I didn't realize the comparison program could compare the actual exe files directly.) This morning I compared the actual exe files, and there is a difference: one line of PHP code was being injected into the first line of the file. I adjusted the PHP, and the problem was fixed. (It came from the if/else statement that defined the $filenamereplace variable in the code I'd cited.) Thanks again for all your help, and sorry for misleading you by insisting that the files' contents were identical!
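For reference, a common guard against exactly this class of bug (stray PHP output leaking into the download) is to discard any open output buffers right before streaming the file. A sketch, to be placed inside downloadFile() just before readfile():

// Discard anything PHP has already emitted (stray whitespace, notices,
// accidental echo output) so the file body starts clean.
while (ob_get_level() > 0) {
    ob_end_clean();
}
readfile($file);
exit;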
"I've also compared the binary code of the original file to the downloaded file that doesn't work, and their exactly the same (so the php isn't accidentally inserting anything extra into the file)."
If that's really true, then the problem must be in how the exe is started after it has been downloaded. It should certainly not be a problem with your PHP code.
Perhaps they were corrupted on upload. This can happen if you transfer them via FTP in ASCII mode instead of BINARY.