Okay, so I am trying to start hosting my own file sharing site. I am currently using WAMP server, which gives me Apache, PHP, MySQL, etc. My web root is on a 2TB hard drive, and I would like to list folders/files from another hard drive. But when I use the dir function it does not link to the actual files, it just lists them. I would like to allow people to download the files on my HDD to their client computers. Any ideas on how to work this out and link to the actual files?
You can't link directly to files that are not under your website's document root.
What you can do is have all the links point to a download script that takes a parameter identifying the file. That script can then open the file and stream its contents to the client.
Here's an example I found on the web:
http://www.finalwebsites.com/forums/topic/php-file-download
// place this code inside a php file and call it e.g. "download.php"
$path = $_SERVER['DOCUMENT_ROOT'] . "/path2file/"; // change the path to fit your website's document structure
$fullPath = $path . $_GET['download_file'];
if ($fd = fopen($fullPath, "rb")) // "rb" (binary mode) matters on Windows
{
    $fsize = filesize($fullPath);
    $path_parts = pathinfo($fullPath);
    $ext = strtolower($path_parts["extension"]);
    switch ($ext)
    {
        case "pdf":
            header("Content-type: application/pdf"); // add more headers here for other extensions
            header("Content-Disposition: attachment; filename=\"".$path_parts["basename"]."\""); // use 'attachment' to force a download
            break;
        default:
            header("Content-type: application/octet-stream");
            header("Content-Disposition: filename=\"".$path_parts["basename"]."\"");
            break;
    }
    header("Content-length: $fsize");
    header("Cache-control: private"); // use this to open files directly
    while (!feof($fd))
    {
        $buffer = fread($fd, 2048);
        echo $buffer;
    }
    fclose($fd); // close inside the if, so we never fclose() a failed handle
}
exit;
// example: place this kind of link into the document where the file download is offered:
// <a href="download.php?download_file=some_file.pdf">Download here</a>
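One caveat the snippet glosses over: passing $_GET['download_file'] straight into the path lets a visitor request something like ../../config.php. A minimal hardening step (my addition, not part of the original example) is to strip any directory component before building the path, and optionally whitelist extensions:
// keep only the file name itself, discarding any ../ tricks
$fileName = basename($_GET['download_file']);
$fullPath = $path . $fileName;
// optionally, whitelist the extensions you actually serve
$allowed = array('pdf', 'zip', 'doc');
$ext = strtolower(pathinfo($fullPath, PATHINFO_EXTENSION));
if (!in_array($ext, $allowed)) {
    die("File type not allowed.");
}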
On Linux I would create a symbolic link from the external hard drive into your webroot folder, but under Windows it looks like you need to create a directory junction.
Have a read of this, it explains how to create it.
http://www.howtogeek.com/howto/windows-vista/using-symlinks-in-windows-vista/
Your directory structure should end up looking like this
webroot
|- php files
|- externalfiles (dir junction to ext hard drive)
   |- sharedfile1
   |- sharedfile2
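For example, on Windows Vista and later the junction can be created from an elevated command prompt; the WAMP webroot and the drive letter below are just placeholder paths for your own setup:
mklink /J C:\wamp\www\externalfiles E:\shared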
Related
I have found a lot of information about zipping files, but I don't know how to get the archive to download. I have the following code:
$zip = new ZipArchive;
$zip->open('myzip.zip', ZipArchive::CREATE);
foreach (glob($directory."/*") as $file) {
$zip->addFile($file);
unlink($file);
}
$zip->close();
header('Content-Type: application/zip');
header("Content-Disposition: attachment; filename='myzip.zip'");
header('Content-Length: ' . filesize($zip));
header("Location: myzip.zip");
This code creates a zip of the files that exist in my directory. The archive gets created, but in a random directory that I do not want it in. I want to create the zip from all the files in the directory I specify, have it downloaded in the browser, and then delete the zip file and all the original files from my web server. Basically, I want the zip file to be temporary.
Right now the code runs, but nothing is being downloaded and the file is being created in the wrong directory.
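For what it's worth, a sketch of a fix: the archive lands in the script's working directory because open() was given a bare file name, and unlink() runs before close(), so ZipArchive has nothing left to read when it finally writes the archive; also, filesize() needs a path rather than the ZipArchive object, and nothing ever sends the bytes. Something like the following (the $directory value is an assumption from the question) builds the zip under an explicit temp path, streams it, then cleans up:
$directory = '/path/to/files';                 // assumed: the directory you want to zip
$zipPath = tempnam(sys_get_temp_dir(), 'zip'); // explicit location, not the cwd

$zip = new ZipArchive;
$zip->open($zipPath, ZipArchive::OVERWRITE);
$files = glob($directory . "/*");
foreach ($files as $file) {
    $zip->addFile($file, basename($file)); // do NOT unlink yet; close() still needs the files
}
$zip->close(); // the archive is only written to disk here

header('Content-Type: application/zip');
header('Content-Disposition: attachment; filename="myzip.zip"');
header('Content-Length: ' . filesize($zipPath)); // filesize() takes a path, not the ZipArchive object
readfile($zipPath); // actually send the bytes; headers alone download nothing

// now it is safe to delete everything
foreach ($files as $file) {
    unlink($file);
}
unlink($zipPath);
exit;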
I'm trying to load an appcast XML file from a PHP file. The XML contains a file path used to display a changelog.html and gives a download path to a file. Normally I don't want browsers to be able to access this stuff, so the XML file, the changelog and the download file all sit together in a folder above the web root directory. The PHP file is inside the webroot.
Here's my php code:
$filename = "./home/myaccountusername/folder1/folder2/appcast.xml";
if (!file_exists ($filename)) throw new Exception("File not found");
// Set the content type to xml
header('Content-Type: text/xml');
header('Content-Disposition: attachment; filename="' . basename($filename) . '"');
header('Content-Length: ' . filesize($filename));
// Tell the user-agent (Stacks) not to cache the file
header('Expires: 0');
header('Cache-Control: must-revalidate');
header('Pragma: public');
// Flush the output buffer
ob_clean();
flush();
// Read the data and send the file
//throw new Exception("FileName: " . $filename);
readfile($filename);
exit;
It's throwing the exception: File not found.
How do I write the paths so they look above the webroot and, when passed to my final display interface, allow the changelog.html to be opened and the file to be downloaded?
Note: I've tried starting the path with /home... and home... and even ../../../folder1/ ...
Is this possible to set up in PHP? I can't figure it out.
Update1:
Here's a tree of the server:
- public_html
  - company_folder
    - secureappcast
      - appcastfile.php (start here for pathing)
- appcastsecure
  - productfolder
    - appcast.xml
    - changelog.html
    - productzipfile.zip
I'm using __DIR__ in appcastfile.php, which gives this path to appcast.xml:
$filename = /home/userdir/public_html/company_folder/secureappcast/../../appcastsecure/productfolder/appcast.xml
My problem is that appcast.xml is pushed to clients on their servers, so I can't figure out how to set up the pathing in appcast.xml so it points to the changelog and productzipfile on my server (all of which are outside of public_html).
Firstly, the one in your example is trying to access a folder relative to the current directory...
$filename = "./home/myaccountusername/folder1/folder2/appcast.xml";
The first dot shouldn't be there anyway.
If you want to access a file relative to a directory, you can use __DIR__, which gives the directory of the script you're currently in. It then depends on where the file you're trying to access sits relative to the current script. So
$filename = __DIR__."/../appcast.xml";
is a file in the directory above the one containing this script. You will have to adjust this depending on your particular requirements.
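Given the tree in the update (appcastfile.php inside public_html/company_folder/secureappcast, with appcastsecure sitting next to public_html), that means climbing three levels rather than two; this path is my reading of that layout, so adjust it if yours differs:
// from .../public_html/company_folder/secureappcast/ up to the home directory,
// then down into the private appcastsecure folder
$filename = __DIR__ . "/../../../appcastsecure/productfolder/appcast.xml";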
I'm trying to hide my pdf files from users but I want them to be downloadable.
Here's my file structure:
+-- index.php
+-- download.php
+-- content
    +-- .htaccess
    +-- files
        +-- pdf.pdf
        +-- pdf2.pdf
I tried to block users' access to the content folder with this .htaccess:
deny from all
But when I download a pdf file with this:
//download.php
header('Content-Type: application/pdf');
$file = "http://localhost/content/files/pdf2.pdf";
header("Content-Transfer-Encoding: Binary");
header("Content-disposition: attachment; filename=\"" . basename($file) . "\"");
the browser can't load it.
I can't figure out another way to do this.
Users can upload files to the site and set a price for each one. Once you have paid for a file you can download it; there will be a MySQL query before the download to check whether the user has bought it.
Your download.php file is just setting some headers to tell the browser to download the file. It doesn't actually write the content of the file in the response. You just have to add the following line of code to the end of download.php:
readfile($_SERVER['DOCUMENT_ROOT'] . "/content/files/pdf2.pdf");
NOTE: As gview mentioned in the comments, the proper way to do this would be to move the files outside the document root so they cannot be accessed regardless of your per-directory htaccess file. You can still use my solution, but instead of using $_SERVER['DOCUMENT_ROOT'], you would put the server path. Like this:
readfile("/server/path/to/content/files/pdf2.pdf");
I am using Martin Barker's code/answer from (PHP to protect PDF and DOC) almost verbatim; the only difference is that the file I am protecting is in my user folder above the public_html folder.
Folder structure
/users/websupport
/public_html
File to download is at:
/users/websupport/FileToDownload.pdf
The download.php file is at
/public_html/download.php
but Firefox tells me it cannot find the file: "Firefox can't find the file at /download.php".
I have verified that the file is there via ftp.
If placing the file outside the webroot, do I need to add something to the site's .htaccess? I'm just not sure where I am going wrong with this. Below is the code within download.php:
// check user is logged in and valid for download; if not, redirect them out
// YOU NEED TO ADD CODE HERE FOR THAT CHECK
// array of supported file types for the download script and their mime types
$mimeTypes = array(
    'doc' => 'application/msword',
    'pdf' => 'application/pdf',
);
// set the file here (best off using a $_GET[])
$file = "../users/websupport/2011cv.pdf";
// gets the extension of the file to be loaded, for looking up the array above
$ext = explode('.', $file);
$ext = end($ext);
// gets the file name to send to the browser to force download of the file
$fileName = explode("/", $file);
$fileName = end($fileName);
// opens the file for reading and sends headers to the browser
$fp = fopen($file, "r");
header("Content-Type: " . $mimeTypes[$ext]);
header('Content-Disposition: attachment; filename="' . $fileName . '"');
// reads the file and sends the raw content to the browser
while (!feof($fp)) {
    $buff = fread($fp, 4096);
    echo $buff;
}
// closes the file after we have finished reading it
fclose($fp);
Make sure the user your PHP script runs as has read access to that directory.
With PHP embedded in Apache on most Debian derivatives, that user will be 'www-data'.
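Before blaming the headers, it can help to confirm from PHP itself that the script can actually see and read the file; a quick check along these lines, using the path from the question:
// prints bool(true) if the web server user can actually read the file
var_dump(is_readable("../users/websupport/2011cv.pdf"));
// false here means the relative path does not resolve from the script's working directory
var_dump(realpath("../users/websupport/2011cv.pdf"));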
I had the same issue recently where readfile() and fpassthru() just would not work on my server.
What I ended up doing was creating symlinks for the files as needed and passing those to the user. You can learn how to create symlinks here.
I used
exec("ln -s source_file_full_path full_path_to_fake_file");
If you wanted your user to have a link like 'http://somesite.com/folder/fake_file.pdf', then the full path would be to where 'folder' lives on your server, and you would include 'fake_file.pdf' in your fake file path.
then to expire the links I made another call to find all of the symlinks with a creation date older than x minutes. You can see how to do that in this answer. (That could be a cron job to ensure they expire on time.)
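As a side note, PHP can do both steps natively, without shelling out; a rough sketch using the same fake-file idea (all paths here are placeholders):
// create the decoy link with PHP's built-in symlink(), no exec() needed
symlink('/real/path/to/document.pdf', '/var/www/folder/fake_file.pdf');

// later (e.g. from a cron job): remove decoy links older than ten minutes
foreach (glob('/var/www/folder/*.pdf') as $link) {
    if (is_link($link) && time() - lstat($link)['mtime'] > 600) {
        unlink($link);
    }
}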
I have downloaded and added this very simple, one-file PHP web file explorer (called Indexer) to my XAMPP server.
My XAMPP server is on my C: drive, but I want Indexer to display a directory on my G: drive. When I change (what I think are) the right configuration variables, it doesn't work properly.
Here is the code I think is relevant to the problem:
// configuration
$Root = realpath("G:/test");
$AllowDownload = TRUE;
$WebServerPath = dirname("G:/test");
and later on in the code...
elseif ($AllowDownload) {
    echo "<a href=\"".$WebServerPath."/".$item["name"]."\">".$item["name"]."</a>";
}
This is what happens: the script correctly displays the contents of the "test" directory on the G: drive, but when I click a filename to download/view the file, the link is broken because the PHP constructs it wrong (I suppose).
The link looks like this: http://localhostg//[name of file].
Would you know how to solve this problem?
This script works perfectly if I change the configuration variables so it displays the contents of a relative subdirectory. The script's comments also say the $Root variable can be located outside the webserver root.
Also, even though clicking the link doesn't work, right-clicking and selecting "Save Target As" allows me to save/download the file.
(Feel free to ask if you need more information) :)
Your web server cannot see files outside the DocRoot, so it cannot serve them via direct links in the browser. You need to print their contents to the browser with readfile(), with the headers properly set.
To make this work, you need to change the configuration in indexer.php:
// this way it works with accented letters in Windows
$Root = utf8_decode("G:\\test"); // define the directory the index should be created for (can also be located outside the webserver root); note the escaped backslash, since "\t" in double quotes would be a tab
$AllowDownload = TRUE; // enclose file items with the anchor tag (only makes sense when the files are in the webserver root)
// you need to place download.php in the same directory as indexer.php
$WebServerPath = dirname($_SERVER['SCRIPT_NAME']) . "/download.php?path="; // path where the indexed files can be accessed via an http URL (only required when $AllowDownload is TRUE)
And you have to place a new file called download.php in the same directory as indexer.php, with this content:
<?php
// it must be the same as in indexer.php
$Root = utf8_decode("G:\\test");

function checkFileIsInsideRootDirectory($path, $root_directory) {
    $realpath = realpath($path);
    if (!file_exists($realpath))
        die("File is not readable: " . $path);
    // detects insecure paths with, for example, /../ in them
    if (strpos($realpath, $root_directory) === false || strpos($realpath, $root_directory) > 0)
        die("Download from outside of the specified root directory is not allowed!");
}

function forceDownload($path) {
    $realpath = realpath($path);
    if (!is_readable($realpath))
        die("File is not readable: " . $path);
    $savename = basename($path);
    header("Pragma: public");
    header("Expires: 0");
    header("Cache-Control: must-revalidate, post-check=0, pre-check=0");
    header("Cache-Control: private", false);
    header("Content-type: application/force-download");
    header("Content-Transfer-Encoding: Binary");
    header("Content-length: " . filesize($path));
    header("Content-disposition: attachment; filename=\"$savename\"");
    readfile($path);
    exit;
}

if (!isset($_GET['path']))
    die("Path not specified!");

$fullPath = $Root . $_GET['path'];
checkFileIsInsideRootDirectory($fullPath, $Root);
forceDownload($fullPath);
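With that configuration, each file Indexer lists is linked through the new script, so the URLs end up looking roughly like this (hypothetical file name, assuming indexer.php sits in the web root):
http://localhost/download.php?path=/somefile.pdf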
You have to change your Apache configuration. The problem is not the PHP script; the problem is the webserver, which is not able to serve files outside the web root unless you configure it to.
Try something like this in your apache configuration:
Alias /testalias "G:/test"
<Directory "G:/test">
Options Indexes FollowSymLinks MultiViews ExecCGI
AllowOverride All
Order allow,deny
Allow from all
</Directory>
This tells Apache to serve files from G:/test when you access http://localhost/testalias
Then change your script configuration like this:
$WebServerPath = "/testalias"; // the URL path that the Alias above maps to G:/test
and it should work!
Let's take a look at that script:
$Root = realpath("."); // define the directory the index should be created for (can also be located outside the webserver root)
$AllowDownload = TRUE; // enclose file items with the anchor-tag (only makes sense when the files are in the webserver root)
$WebServerPath = dirname(getenv("SCRIPT_NAME")); // path where the indexed files can be accessed via a http URL (only required when $AllowDownload is TRUE)
Notice "only makes sense when the files are in the webserver root" and "path where the indexed files can be accessed via a http URL". Which indicates that this script was not designed to be able to download files that are outside the web server root dir.
However, you could modify this script to be able to do that in the way that styu has noted in his answer. You could then send your changes to the author of the script.
BTW, I tested this on my own server.