What I am trying to achieve is to serve files from an S3 bucket for multiple websites without showing the real path of the items. The files are stored in the S3 bucket like: bucket/website
Example:
domain: domain.com
static: static.domain.com
I do not want to create an S3 bucket for every website, so I want to store all the files in per-website folders within one bucket and serve them from there with a script.
What I currently have:
<?php
$path = "domain";
$file = "filename";
// Save a copy of the file
file_put_contents($file, fopen($path . $file, 'r'));
// Set the content type and output the contents
header('Content-Type: ' . mime_content_type($file));
readfile($file);
// Delete the file
unlink($file);
?>
but it's not that clean, since I save the file and then output it. Saving the file could also cause collisions, because different websites might have files with the same name.
I'm also missing the .htaccess rewrites that would make it appear that the file came from static.domain.com/file rather than static.domain.com/script.
Any suggestions on how to realize this would be great! Many thanks in advance!
Configure Apache to proxy the request to S3. Something like:
RewriteRule ^/path/to/static/files/(.*) http://my.bucket.s3.amazonaws.com/$1 [P]
This means a request to your web server for /path/to/static/files/foo.jpg will cause your web server to fetch the file from my.bucket.s3.amazonaws.com/foo.jpg and serve it as if it came from your web server directly.
However, note that this somewhat negates the point of S3 for the purposes of offloading traffic. It just offloads the storage, which is fine, but your web server still needs to handle every single request, and in fact has some overhead in terms of bandwidth due to having to fetch it from another server. You should at the very least look into caching a copy on-server, and/or using a CDN in front of all this.
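If you would rather keep a PHP pass-through script (for example, to add per-website access checks), you can stream the object without ever writing a temp file, which also avoids the name-collision problem from the question. A minimal sketch, assuming allow_url_fopen is enabled; the bucket host, website folder, and type map are placeholders:
<?php
// Sketch: stream a file for one website straight from S3, no temp copy.
$website = 'domain';                 // per-website folder inside the bucket
$file    = basename($_GET['file']);  // basename() blocks "../" traversal
$url     = "http://my.bucket.s3.amazonaws.com/$website/$file";

$handle = @fopen($url, 'rb');        // @ hides the warning on a missing file
if ($handle === false) {
    header('HTTP/1.0 404 Not Found');
    exit;
}

// The file never touches disk, so guess the type from the extension
$types = array('jpg' => 'image/jpeg', 'png' => 'image/png', 'css' => 'text/css');
$ext   = strtolower(pathinfo($file, PATHINFO_EXTENSION));
header('Content-Type: ' . (isset($types[$ext]) ? $types[$ext] : 'application/octet-stream'));

fpassthru($handle);                  // stream the body straight to the client
fclose($handle);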
Related
How can I delete a file using a URL path?
I have
$file_with_path = "http://www.myweb.com/uploads/audio.mp3";
if (file_exists($file_with_path)) {
unlink($file_with_path);
}
I don't use "/uploads/audio.mp3" or similar directory paths for certain reasons.
Thanks in advance!
unlink tells the operating system to delete a given file. The OS identifies files by file system path - it does not interact with URLs in any way. URLs are translated to file system paths by the web server, which is an entirely different piece of software. While theoretically there is a way to tell a web server to delete the file (by sending an HTTP DELETE request), no web server is going to honor that - it would be way too insecure. It is relatively easy to control who can access the file system; it is very hard to control who can send requests to the web server.
In short, you will have to figure out what the file system path for the file is, and use unlink (and file_exists) with that path.
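If the URL points at a file under your own document root, a minimal sketch of that translation, assuming the usual DOCUMENT_ROOT-to-URL mapping, could look like this:
<?php
// Sketch: map a URL on your own site back to a filesystem path.
$file_with_path = "http://www.myweb.com/uploads/audio.mp3";
$fs_path = $_SERVER['DOCUMENT_ROOT'] . parse_url($file_with_path, PHP_URL_PATH);

if (file_exists($fs_path)) {
    unlink($fs_path);   // the OS now gets a real path, not a URL
}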
Going through various tutorials and sources on file uploads, I found that uploading into the web root, or any web-accessible folder, is a security issue, and that it is advisable to keep the upload folder outside the root.
Now, if someone is on a shared hosting server like GoDaddy, the user will not have access outside the root folder.
And if really nothing can be done about that, how do open source packages like WordPress, Joomla, and Drupal keep their uploads secure, and with such confidence?
So the question is: what has to be taken care of to store data securely on the web when the only option is to keep the files within the root?
A few checklist items I know of for secure file uploads, when you are forced to keep your files within the publicly accessible area (a short sketch follows this list):
Check the uploaded file's size and type.
While storing files, rename them to random names and track the original filename through the database; md5 and sha1 work well for this.
Disable script execution with .htaccess.
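A minimal sketch of the first two checklist items; the size limit, the "photo" form field, and the uploads directory are illustrative and would be adapted to your layout:
<?php
// Sketch: validate size and real mimetype, then store under a random name.
$max_size = 2 * 1024 * 1024;                          // 2 MB, pick your own limit
$allowed  = array('image/jpeg', 'image/png', 'image/gif');

if ($_FILES['photo']['size'] > $max_size) {
    die('File too large');
}

$finfo = new finfo(FILEINFO_MIME_TYPE);               // inspect content, not extension
$mime  = $finfo->file($_FILES['photo']['tmp_name']);
if (!in_array($mime, $allowed, true)) {
    die('File type not allowed');
}

// Random on-disk name; keep the original name only in the database
$stored = sha1(uniqid(mt_rand(), true));
move_uploaded_file($_FILES['photo']['tmp_name'], __DIR__ . '/uploads/' . $stored);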
This is an example of serving the uploaded files back:
// This is just an example to show how we can serve the files
$imgfile = $rsPhoto['photo']; // path taken from the database
list($width, $height, $type, $attr) = getimagesize($imgfile);
switch ($type)
{
    case IMAGETYPE_GIF:
        header("Content-type: image/gif");
        $im = imagecreatefromgif($imgfile);
        imagegif($im);
        break;
    case IMAGETYPE_JPEG:
        header("Content-type: image/jpeg");
        $im = imagecreatefromjpeg($imgfile);
        imagejpeg($im);
        break;
    case IMAGETYPE_PNG:
        header("Content-type: image/png");
        $im = imagecreatefrompng($imgfile);
        imagepng($im);
        break;
    default:
        header("HTTP/1.0 415 Unsupported Media Type");
        exit;
}
imagedestroy($im);
This is just an example; the point is not saving a few image files and retrieving them. As we all know, data is a crucial element of any business's success, so when such critical and important data has to be handled, what options do we have to make things as secure as possible?
References:
Implementing Secure File Upload in PHP
EDIT 1:
Is it a good idea to permanently redirect the domain to a subfolder of your domain, so that the server root is / but, after redirection, the site root is /main-website/?
Then, if I keep my upload folder at /upload/, I think it would effectively be outside the web-accessible/public area, because my domain www.xyz.com points to /main-website/ and the upload folder is out of that domain's scope.
Just a thought that came to my mind, so I am putting it up.
Thanks.
I will assume that the uploaded files must be world-readable. I will also assume that the files can be of any type: pdf, images (png, ico), documents (docx, xls), etc.
The best solution here (generic, applying not just to PHP projects but to web projects in general) is to add a layer of indirection: save the file under a custom, server-generated name, and use a database to store the original file name.
OK, let's see it by example. Imagine we have these directories:
/ -> Root. It is world-readable.
/files -> Where we will store your files. World-readable too.
So, when I upload a file named "foo.png" to your site, you will save it to the "/files" directory, but you must change its name to an auto-generated one[1], say "1234asd". You must then write a new record to the database mapping the old file name ("foo.png") to the auto-generated one. So... how can I download my file if I don't know the new name?
Well, you must create "/file.php" on your server, which accepts a GET parameter called "filename". Then I can request "/file.php?filename=foo.png" and your code will do the following:
Search the database for the file "foo.png". If it exists, get the real stored filename; if it does not, just return a 404 HTTP code.
Then read the file from the directory ("/files/1234asd") and return it to the user.
This way, any kind of file can be uploaded to your server (*.php files too) and you are secure, because the files are not accessed directly but through your PHP code (file.php).
This offers additional advantages, like being able to check whether a user has permission to read the file he is requesting (by implementing some kind of simple authentication). And if the file is not found, instead of returning an ugly 404 HTTP code (which is the correct thing to do in that case), you can return a custom error message saying "oops! 404 - the file you requested is not available" or something like that.
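A minimal sketch of such a file.php, assuming a hypothetical files table with original_name and stored_name columns (all names and the connection details here are placeholders):
<?php
// Sketch of /file.php: look up the real name, then stream the file.
$pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');

$stmt = $pdo->prepare('SELECT stored_name FROM files WHERE original_name = ?');
$stmt->execute(array(isset($_GET['filename']) ? $_GET['filename'] : ''));
$row = $stmt->fetch(PDO::FETCH_ASSOC);

if ($row === false) {
    header('HTTP/1.0 404 Not Found');   // or your custom error page
    exit;
}

// basename() guards against traversal if a stored name is ever tampered with
$path = __DIR__ . '/files/' . basename($row['stored_name']);
header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="' . basename($_GET['filename']) . '"');
readfile($path);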
Note: this solution applies to files from 0 up to maybe 10-20 MB. If you are working with larger files, this solution is not optimal, and another approach should be taken.
[1] http://php.net/manual/en/function.uniqid.php
Here's a link to (the Google-cached, text-only version of) an article that is useful in helping secure WordPress.
http://webcache.googleusercontent.com/search?q=cache:V5RddpaOH4IJ:gerry.ws/2008/10/152/setup-and-secure-your-wordpress-upload-directory.html&hl=en&gl=au&strip=1
(I've linked to the Google cache version because their site makes my Chrome/Firefox lock up; the text-only version doesn't.)
Basically, you put your uploads in a location that only the app can access (above or outside the web location) and then:
limit the mimetypes of files that can be uploaded (and validate the files to make sure they don't contain known buffer overruns, exploits like EXIF poisoning, embedded executables, etc.)
make sure you aren't allowing parent paths
make sure your upload path calculation runs server side, not through some sort of hidden form field etc.
make sure the execute access of your server platform (e.g. PHP/Apache) won't execute in that location
make sure that only the web server (e.g. Apache) account has rights to write to the location
make sure your scripts validate the data being posted in the upload
see also: http://codex.wordpress.org/Hardening_WordPress
Besides the methods of checking a file while uploading it to your server (check extension/mimetype, limit upload size...), I have more tips below.
When I use shared hosting: turn off PHP execution in the upload folder; you can do this in .htaccess with php_flag engine off.
When I have a dedicated server: set up storage without any script (PHP) execution and upload files via FTP.
While you may not have access to any other folder outside the web root, on shared hosting you typically do have access to your home directory.
I have tested this on a server of mine: I can store files in /home/myuser, which is outside the web root (typically located at /home/youruser/www).
Also, you can use /tmp, which is writable by everyone, but if you do so, do not store sensitive data there, as it is readable by all users of the host.
So, files are uploaded to a non-web-accessible directory on my server, but I want to provide a URL or some form of download access to these files. Below is my attempt, but it isn't working.
$destination = $_SERVER["DOCUMENT_ROOT"] . "/../Uploads/" . $random;
mkdir($destination);
move_uploaded_file($temp, $destination."/".$name);
$final = $server."/".$destination."/".$name;
$yourfile = readfile('$final');
and I then echo out $yourfile:
<?php echo $yourfile; ?>
elsewhere.
I either get a "failed to open stream" error or a huge long string. Is there any solution to just download the file on request via a URL?
EDIT: I want to keep the directory non-web-accessible.
readfile outputs the content directly; it does not return it. Alternatively, read the manual page on file_get_contents.
readfile('$final'); is never going to succeed, unless the file is literally named "$final". Use double quotes, or no quotes at all.
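A minimal corrected sketch, reusing the question's $destination and $name and dropping $server so the result stays a filesystem path rather than a URL:
// Corrected sketch:
$final = $destination . "/" . $name;     // a filesystem path, not a URL
header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="' . basename($name) . '"');
readfile($final);                        // no quotes: pass the variable itself
exit;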
Your question has been answered a few hundred times already. There's no need for you to post your issue four times in a row.
PHP display/download directory files outside the webserver root
How to serve documents from outside the web root using PHP?
Display all images from outside web root folder using PHP
https://stackoverflow.com/search?q=php%20readfile%20from%20outside%20docroot
Here are a few questions I cannot find in search:
When adding CDN services to your website, do you still maintain/create local dynamic files on your origin server and point the CDN to that location, set an HTTP rule, and have the CDN fetch files automatically if it isn't hosting them yet?
Let's say I have an avatar upload form on my origin server: after a cropping function, do I save the image to the local directory or to the CDN?
The other question I have is, if you save files locally first and wait for the CDN to pull them, how do you code the page to know the difference? Do you use something like
// $filename = 'images/image.jpg';
function static_file($filename) {
    $cdnfilepath = 'http://cdndomain.com/';
    // @ suppresses the warning fopen emits when the remote file is missing
    $handle = @fopen($cdnfilepath . $filename, 'r');
    if ($handle !== false) {
        fclose($handle);
        return $cdnfilepath . $filename; // the CDN already has the file
    }
    return $filename;                    // fall back to the local copy
}
Or do you just PUT every dynamically created file that you would like the CDN to host directly to the CDN?
If anyone knows a good tutorial on this, that would be helpful. Sorry if any of this has been covered, but I have been searching with no clear answers.
Sometimes there's no straightforward way of uploading directly to your CDN.
For example, with AWS you have to PUT the file, which means it still has to be uploaded to your server temporarily. What I do is upload the files to a temp directory, then have a cron script run that PUTs the files onto AWS, so as not to make the upload process take any longer for the end user.
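For illustration, a stripped-down version of such a cron script, assuming the AWS SDK for PHP (v3) installed via Composer; the region, bucket, and temp directory are placeholders:
<?php
// Cron sketch: push everything from a temp dir to S3, then delete locally.
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3 = new S3Client(array(
    'region'  => 'us-east-1',
    'version' => 'latest',
));

foreach (glob('/var/uploads-tmp/*') as $path) {
    $s3->putObject(array(
        'Bucket'     => 'my-bucket',
        'Key'        => basename($path),
        'SourceFile' => $path,
    ));
    unlink($path);   // the local copy is no longer needed once it is on S3
}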
I am looking for some input on something I have been thinking about for a long time. It is a very general problem, maybe there are solutions out there I haven't thought of yet.
I have a PHP-based CMS.
For each page created in the CMS, the user can upload assets (Files to download, Images, etc.)
Those assets are stored in a directory, let's call it "/myproject/assets", on a per-page basis (1 subdirectory = 1 page, e.g. "/myproject/assets/page19283")
The user can "un-publish" (hide) pages in the CMS. When a page is hidden, and somebody tries to access it because they have memorized the URL or they come from Google or something, they get a "Not found" message.
However, the assets are still available. I want to protect those as well, so that when the user un-publishes a page, they can trust it is completely gone. (This is very important in legal situations, like court orders to take content down; things like that can happen.)
The most obvious way is to store all assets in a secure directory (i.e. not accessible by the web server) and use a PHP "front gate" that passes the files through after checking. When a project needs to be watertight, this is the way I currently go, but I don't like it because the PHP interpreter runs for every tiny image, script, and stylesheet on the site. I would like to have a faster way.
.htaccess protection (Deny from all or similar) is not perfect because the CMS is supposed to be portable and able to run in a shared environment. I would like it to even run on IIS and other web servers.
The best way I can think of right now is moving the particular page's asset directory to a secure location when it is un-published, and move it back when it's published. However, the admin user needs to be able to see the page even when it's un-published, so I would have to work around the fact that I have to serve those assets from the secure directory.
Can anybody think of a way that allows direct Apache access to the files (=no passing through a PHP script) but still controlling access using PHP? I can't.
I would also consider a simple .htaccess solution that is likely to run on most shared environments.
Anything sensitive should be stored in a secure area, as you suggested.
If your website is located at /var/www/public_html,
you put the assets outside the web-accessible area, in /var/www/assets.
PHP can call for a download, or you can feed the files through PHP, depending on your need.
If you kept the HTML in the CMS DB, that would leave only non-sensitive images & CSS.
If you absolutely have to be able to turn on and off all access to all materials, I think your best bet might be symlinks. Keep everything in a non-web-accessible area, and symlink each folder of assets into the web area. This way, if you need to lock people out completely, just remove the symlink rather than removing all the files.
I don't like it, but it is the only thing I can think of that fits your criteria.
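A rough sketch of how publish/un-publish could toggle such a symlink (the paths are illustrative):
<?php
// Sketch: publishing re-creates the symlink, un-publishing removes it.
$secure = '/var/www/assets-secure/page19283';                // real files live here
$public = '/var/www/public_html/myproject/assets/page19283';

function publish_assets($secure, $public) {
    if (!is_link($public)) {
        symlink($secure, $public);   // expose the folder to the web
    }
}

function unpublish_assets($public) {
    if (is_link($public)) {
        unlink($public);             // removes only the link, never the files
    }
}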
I'd just prevent hotlinking of any non-HTML file, so all the "assets" stuff is accessible only from the HTML page. Removing (or protecting) the page then removes everything without having to mess with the whole file system.
Use X-Sendfile
The best and most efficient way is using X-Sendfile. However, before using X-Sendfile you will need to install and configure it on your webserver.
The method on how to do this will depend on the web server you are using, so look up instructions for your specific server. It should only be a few steps to implement. Once implemented don't forget to restart your web server.
Once X-Sendfile has been installed, your PHP script will simply need to check for a logged in user and then supply the file. A very simple example using Sessions can be seen below:
session_start();
if (empty($_SESSION['user_id'])){
exit;
}
$file = "/path/to/secret/file.zip";
$download_name = basename($file);
header("X-Sendfile: $file");
header("Content-type: application/octet-stream");
header('Content-Disposition: attachment; filename="' . $download_name . '"');
Important note:
If you want to serve the file from another webpage, such as in an image src value, you will need to make sure you sanitize your filename. You do not want anyone overriding your script and using ".." etc. to access any file on your system.
Therefore, if you have code that looks like this:
<img src="myscript.php?file=myfile.jpg">
Then you will want to do something like this:
session_start();
if (empty($_SESSION['user_id'])){
exit;
}
$file = preg_replace('/[^-a-zA-Z0-9_\.]/', '', $_GET['file']);
$download_name = basename($file);
header("X-Sendfile: $file");
header("Content-type: application/octet-stream");
header('Content-Disposition: attachment; filename="' . $download_name . '"');
EDIT: How about a hybrid for the administrative interface? In the ACP you could use the PHP method to, basically, send all file requests through the PHP authorizing file, but for the public side you can use HTTP AUTH/htaccess to determine the availability of the resource. This gives you the performance on the public side but the protection on the ACP side.
OLD MESSAGE:
.htaccess is compatible with most Apache and IIS<7 environments (using various ISAPI modules) when using mod_rewrite type operations. The only exception is IIS7 plus the new rewrite module, which uses the web.config file. HOWEVER, I'd be willing to bet that you could efficiently generate/alter the web.config file for this case instead of using .htaccess.
Given that, you could set up redirects using the rewrite method and redirect to your custom 404 page (which hopefully sends the proper 404 header). It is not 100% appropriate, because the actual asset should be the one giving a 403 header, but... it works.
This is the route I would go unless you want to properly create HTTP AUTH setups for every server platform. Plus, if you do it right, you could make your system extensible, allowing other types to be added in the future by you or your users (including a PHP-based option if they wanted it).
I'm assuming the 'page' is being generated by PHP and the 'assets' should not require PHP. (Let me know if I got that wrong.)
You can rename the assets folder. For example, rename '/myproject/assets/page19283' to '/myproject/assets/page19283-hidden'. This will break all old, memorized links. When you generate the page for admin users who can see it, you just write the URLs using the new folder name; after all, you know whether the page is hidden or not. The assets can still be accessed directly if you know the 'secret' URL.
For additional security, rename the folder with a chunk of random text and store that in your page table (wherever you store the hidden flag), e.g. '/myproject/assets/page19283-78dbf76B&76daz1920bfisd6g&dsag'. This will make it much harder to guess the hidden URL.
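A quick sketch of that rename step, using uniqid-style randomness; the paths and the database step are illustrative:
<?php
// Sketch: hide a page's asset folder behind a random suffix.
$asset_root = '/myproject/assets';
$suffix     = md5(uniqid(mt_rand(), true));      // hard-to-guess token
rename($asset_root . '/page19283', $asset_root . '/page19283-' . $suffix);
// Store $suffix in the page table (next to the hidden flag) so that
// admin-facing pages can still build the correct asset URLs.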
Just prepend or append a GUID to the page name in the database and to the resource directory in the filesystem. The admin will still be able to view it from the admin interface, because the link is updated, but the GUID effectively makes the page undiscoverable by an outside user or search engine.
Store your information in a directory outside the web root (i.e. one directory above public_html or htdocs). Then use the readfile function in a PHP script to proxy the files out when requested. readfile() basically takes a single parameter, the path to a file, and prints the contents of that file.
This way, you can create a barrier where if a visitor requests information that's hidden behind the proxy, even if they "memorized" the URL, you can turn them down with a 404 or a 403.
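A bare-bones sketch of such a proxy script; the 'protected-assets' directory name and the $is_hidden lookup are placeholders for your own layout and CMS check:
<?php
// Sketch: proxy files from a directory one level above the web root.
$base = dirname($_SERVER['DOCUMENT_ROOT']) . '/protected-assets/';
$file = basename($_GET['f']);        // strip any directory components
$path = $base . $file;

$is_hidden = false;                  // look this up from your CMS here
if ($is_hidden) {
    header('HTTP/1.0 403 Forbidden');
    exit;
}
if (!is_file($path)) {
    header('HTTP/1.0 404 Not Found');
    exit;
}
header('Content-Type: application/octet-stream');
readfile($path);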
I would go with implementing a gateway. Set a .htaccess rule on the /assets/ URL pointing to a gateway.php script that serves the file only if the credentials are valid and that particular file is published, and denies the request otherwise.
I'm a little confused: do you need to protect the stylesheet files and images as well? Perhaps moving this folder is the best alternative.