This may sound like a pretty basic question, but I'm a bit stumped on what constitutes "reading" a file versus "executing" a file.
For example:
User 1 buys a ticket from an online website, and wants to view the ticket (as a jpeg), which is displayed on the website.
Is this considered "reading" the folder, or is it actually "executing" the folder?
If the user's permissions are set to "read only," that means the user CAN access the file via an action on the website (in this case, an image of their purchased ticket), but cannot access the file via a direct URL, such as www.exampletickets.com/user1/tickets - right?
Folder Permissions:
Execute -> enter the folder (e.g. cd into it), but not read its contents or see what files are located there.
Read -> read the folder's contents, i.e. list the files in it.
Write -> edit the folder's data: delete or create files/folders inside it, etc.
File Permissions:
Execute -> if it's a script like index.php, run it to get data from it.
Read -> if it's a text file like index.html or index.php, read its contents.
Write -> change its data.
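To make this concrete, here's a minimal PHP sketch of how these permission bits get set; the paths are hypothetical:

<?php
// 0755 on a folder: owner can read/write/enter it; everyone else
// can list it (read) and enter it (execute), but not write to it
chmod('/var/www/example/uploads', 0755);

// 0644 on a file: owner can read/write; everyone else can only read.
// No execute bit, so it can't be run as a program
chmod('/var/www/example/uploads/ticket.jpg', 0644);
?>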
As for security, these permissions are only an issue when your server is accessible by users outside your team. That was mainly the case when people used shared hosting services: instead of getting a dedicated operating system, all customers uploaded their data to one shared operating system. If not correctly secured, they could view and edit each other's source code.
Today you usually get a dedicated (virtualized) server, with more security tools and an operating system accessible only by you and no one else.
So you don't need to worry that someone will view or change your data, as you are the only one who has access to that server.
The webserver (Apache, nginx, ...) will serve any image file by reading it, not executing it - the same goes for any other file, regardless of whether it is accessed directly or not.
Also, Linux file permissions apply on the machine itself - here, the relevant user is the one running the webserver instance, usually a Linux user named something like "www-data". They have nothing to do with your website's users.
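If you're curious which Linux user your PHP code actually runs as, here is a quick check (assuming the posix extension is available, as it usually is on Linux builds):

<?php
// Prints the effective user of the PHP process - under Apache/mod_php
// this is typically "www-data" (Debian/Ubuntu) or "apache" (RHEL/CentOS)
echo posix_getpwuid(posix_geteuid())['name'], "\n";
?>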
For more information (what are the perfect file and directory permissions for your websites?) -> https://serverfault.com/questions/345833/what-are-perfect-unix-permissions-for-usual-web-project-directories
I'm looking for a secure way to allow users to upload files (e.g. PDF) allowing for future access (by that user) while denying access to anyone else.
The user is authenticated using a standard account-creation/login process and their credentials are held in session (using Linux/Apache/MySql/Php).
Questions:
Where should the files be held?
1) I could create a directory for each user (upon account creation) and make the directory name a salted hash. Would that be a secure way to do it?
OR
2) Should I put the uploaded files in a location on the server outside the webroot and move the files to a temp location for display to the user? (then destroy that file and that temp location after the user is done with it).
(assuming choice #1 above) I would plan to create an .htaccess file for each directory with the following:
order deny,allow
deny from all
Would that be sufficient security for the given directory or is more needed?
Platform:
* Shared server using LAMP stack (PHP 7.0+)
Update:
I found a good discussion regarding this issue:
Arguments for and against putting files outside of webroot
Seems the only argument I've found so far against using .htaccess to protect webroot files is:
Imagine your server defaults for a virtual host are: no PHP, no
.htaccess, allow from all (hardly unusual in a production
environment). If your configuration is somehow reset during a routine
operation – like, say, a panel update – everything will revert to its
default state, and you're exposed.
You didn't directly state it, but since you're mentioning .htaccess, I assume the protocol you want to do file transfers over is HTTP, using PHP and Apache?
Where should the files be held?
Have a folder dedicated to user folders, make sure that folder is outside of the web root, and give each individual user their own dedicated folder inside it.
1) I could create a directory for each user (upon account creation) and make the directory name a salted hash. Would that be a secure way to do it?
You said "The user is authenticated using a standard account-creation/login process and their credentials are held in session." I assume each user has their own dedicated ID, usually a SQL PRIMARY KEY? If so, that ID is already guaranteed to be unique, so there is no need for the directory name to be a salted hash. Just use the user ID as the directory name; that will make everything easier (implementation, debugging, maintenance, disk-usage logistics).
And have a PHP script that does the file indexing and file transfers. When people request to download a file, apply strict validation: you need to verify that the requested file is actually located inside a folder the requester has permission to access. Otherwise you'll get attackers requesting other people's files, or requesting /users-folder/user_1387/../../../../etc/passwd or similar. You can use realpath() + strpos() for such validation.
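A minimal sketch of that realpath() + strpos() validation; the base path, helper name, and request parameter are assumptions for illustration (PHP 7+ for the ?? operator):

<?php
// Hypothetical base folder holding one sub-folder per user (outside the web root)
$usersBase = '/srv/users-folder';
$userId = 1387; // would come from the authenticated session in a real app

function resolve_user_file($usersBase, $userId, $requestedName) {
    $userDir = realpath($usersBase . '/user_' . $userId);
    if ($userDir === false) {
        return false; // no such user folder
    }
    $target = realpath($userDir . '/' . $requestedName);
    // realpath() collapses any ../ tricks and returns false for missing files;
    // strpos(...) === 0 ensures the resolved path is still inside the user's folder
    if ($target === false || strpos($target, $userDir . DIRECTORY_SEPARATOR) !== 0) {
        return false; // traversal attempt or missing file
    }
    return $target;
}

$path = resolve_user_file($usersBase, $userId, $_GET['file'] ?? '');
if ($path === false) {
    http_response_code(403);
    exit;
}
?>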
And because Apache is MUCH better at file transfer than PHP is, once a file has been requested for download AND has been validated, you should use X-Sendfile or similar to do the actual transfer; Apache will move the file much more efficiently than PHP. (If for whatever reason you can't use X-Sendfile, check the PHP passthru() function, but it gets more complex if you intend to support streaming/Content-Range/things like that, which are difficult and inefficient to do in PHP but easy for Apache.)
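Continuing the sketch above: assuming mod_xsendfile is installed, with XSendFile On and an XSendFilePath whitelisting the folder outside the web root, the handoff to Apache looks roughly like this:

<?php
// $path is the validated absolute path from resolve_user_file() above
header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="' . basename($path) . '"');
// mod_xsendfile intercepts this header, discards the PHP body,
// and streams the file itself - with Range support for free
header('X-Sendfile: ' . $path);
exit;
?>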
Need a bit of clarification on this.
I have a folder in my web server that will contain sensitive information that no one should be able to read. My script currently does this:
makes the folder with 0777 permission and places an image in that folder
I have a second script that does this:
pulls that image from that specific folder, and shows it to the user
However, right now if the user knew the exact name of the parent folder, they can just type it in their browser and see all the images contained in that folder, like: www.testsite/test/images
What file permissions can I use instead of 0777 that will allow these two scripts to write to and read from the folder, WITHOUT allowing anyone to view the contents of the folder by typing it into their browser?
If I understand your problem correctly, you're worried about a user typing in /test/images/ into the URL bar, and seeing the directory listing containing your secret file.
Setting a chmod of 000 would mean that neither of your scripts (nor you) would be able to access the folder.
In my opinion, you'd be far better off using .htaccess with deny from all. This will make it so that you cannot 'open' any file in that folder, though you can still include them in PHP.
Alternatively, you may opt for creating an index.php in your /images/ folder, and setting an automatic redirect with header('Location: /'). This way a user wouldn't be able to see the directory listing.
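For the second option, that index.php is nothing more than this (a sketch; note it only hides the directory listing - files can still be fetched by anyone who guesses their exact names):

<?php
// Redirect anyone who browses to /test/images/ back to the site root
header('Location: /');
exit;
?>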
Hope this helps! :)
I have a REALLY strange thing happening! When I view a file (within the "Program Files (x86)" folder tree) in my file manager it has one content, but when I retrieve it through PHP CLI script using file_get_contents() it has different content (with some additional lines I added through the script earlier) - except if I run the CLI script in a prompt with admin rights, then I see the same content. How on earth is it possible that the same file can have different content based on the permissions of the user accessing the file? Is that really possible, and if so where can I find more information on how it works? I've never heard of such a thing in my 25+ years of computing and programming experience...
I have quadruple-checked that the path is the same and verified in all kinds of ways that there isn't something else playing a trick on me - but I just can't find any possible explanation!
I'm running Windows 10.
32-bit applications that do not have a requestedExecutionLevel node in their manifest are assumed to be UAC-unaware, and if they try to write to a privileged location in the file system or registry (while the process is not elevated), the write operation is virtualized. Virtualized files are stored in %LocalAppData%\VirtualStore.
Manually delete the file in the virtual store, and then edit the ACL/security of the real file if you need to write to it from your script as a standard user...
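A quick way to confirm this from the PHP CLI; a sketch, with a hypothetical application path:

<?php
// Hypothetical file under a UAC-protected location
$real    = 'C:\\Program Files (x86)\\MyApp\\settings.ini';
// The per-user copy that a virtualized (UAC-unaware, non-elevated) process actually writes to
$virtual = getenv('LOCALAPPDATA')
         . '\\VirtualStore\\Program Files (x86)\\MyApp\\settings.ini';

// If this prints true, your earlier writes were redirected to the VirtualStore -
// and non-elevated reads of $real will be transparently served from that copy too
var_dump(file_exists($virtual));
?>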
So I created a couple of directories and files over FTP, so the owner is the username I use to log in to the server. Now I'd like to allow users of the website to upload images to those directories. Unfortunately, for the website to store images, the directories need to be owned by Apache. How can I fix this? I've been reading around on this but can't find a direct answer.
I don't have SSH, so I guess all command-line-things are not applicable for me.
Edit
I tried making the folders again using Apache, but now of course I can't write any files into those directories using FTP.
Provided that at least one directory is writeable by the Apache user (let's call this directory 'writeabledir'; it may be your root dir '/'), you must delete the folders you created via FTP and create a PHP script to create the directories you need.
If for example you want a directory called users and inside it another directory called upload
Create file makedirs.php on your server.
<?php
$oldumask = umask(0);
mkdir("writeabledir/users/upload",0777,true); // or even 01777 so you get the sticky bit set
umask($oldumask);
?>
Now run your makedirs.php once, by visiting your.serv.er/makedirs.php in your browser.
EDIT:
If you don't want to delete and recreate your directories, you could always try to change the file permissions over FTP.
For example, with FileZilla, just right-click on the desired folder and set permissions to 777. If your FTP user does not have permission to do this, then there is no other way except asking an administrator to do it for you.
EDIT2:
Added umask to ensure that folders created by Apache are writeable by everyone (taken from http://us3.php.net/manual/en/function.mkdir.php#1207).
Friend, I work in PHP, and while the approach changes a bit between versions, the most common solution for what you want is to create a database with the fields necessary, store the images in a file directory of your project, and reference them from the database. I'd also advise using Aptana Studio 3, which greatly eases writing code, and downloading XAMPP, which comes with Apache already integrated; for any questions about installation, just look on YouTube.
In a web application, I want to create a folder for each www-data user and give write permissions just on that folder, and just to that user.
AFTER VALIDATION I can do:
mkdir($file->getPath().mt_rand(0,100000),0700);
This will create a new directory with a random name, in the path $file->getPath(), with all permissions granted to the owner user. But since that owner is www-data, it effectively gives permissions to everything running as www-data.
If I create a chroot jail, I have to copy all the files again for each user, because I would have to create many jails (one per user).
I'm going crazy with this and can't figure out a solution.
If I understand your question right, your problem stems from the structure of the Linux permission/user framework: the user owning the Apache process is the one that creates dirs and files when it runs your script.
If you need user separation for scripts - e.g. you have different directories for different (virtual) hosts on your server, and you don't want the script of one host acting on the data of a different host on the same (Apache) server - then you should use 'mpm_itk_module' instead of the more common 'mpm-prefork' Apache.
With it you can define the user/group that Apache uses when it executes any scripts and, e.g., creates directories, with this entry for each virtual host in httpd.conf:
<IfModule mpm_itk_module>
AssignUserId USER GROUP
</IfModule>
If you really want to create different directories from ONE script execution, you need the Apache process to run as root.root, and then the script needs to set the permissions and owners for each directory the way you want.
But it is never a good idea to run even the best scripts on a webserver as root, as you might fail to think of every risk.
The user/rights separation by vhosts seems a much safer way in my view.
Another point - PHP only - is suPHP -> http://www.suphp.org
EDIT:
Ok, I had a look at your site, and even though I can't speak Spanish, it looks like you have just one website that all the different users always come through. So where is the need for user separation via Linux filesystem permissions? You can restrict everything in your application, with no need for filesystem users. Even if you give e.g. additional FTP access, restrict it with e.g. proftpd - it has its own chroot mechanism for different users.
You only have to care about filesystem rights if you can't control who is executing what. That's a common problem on a multidomain host, which you could solve with the mpm_itk_module I mentioned.
Maybe you should describe your situation a little more?
EDIT 2:
As suggested in the comment: if you ONLY use Apache to give the users access to the files for uploading/manipulation, then just put the files outside(!) the document-root tree of Apache and create a simple database table recording which file is owned by which user:
user a | file parentdir/filename
This could be an easy table, and your PHP code gives the user a list (from the database) of which files he is able to see/manipulate; your code then does the work as intended by the user's action.
As long as you don't give the user access to the files via other services (FTP, SSH, etc.), there is NO need to work with Linux user rights at all. Just take care to place the files outside the document root of the server, so that only your PHP code can reach the files, with the rights of your server's Apache user.
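A minimal sketch of that pattern, with hypothetical table/column names and paths; the files live in /srv/private-files, outside the document root:

<?php
session_start();
// Hypothetical schema: files(id, user_id, parentdir, filename)
$pdo = new PDO('mysql:host=localhost;dbname=app', 'app', 'secret');

$stmt = $pdo->prepare(
    'SELECT parentdir, filename FROM files WHERE id = ? AND user_id = ?'
);
$stmt->execute([$_GET['id'] ?? 0, $_SESSION['user_id'] ?? 0]);
$row = $stmt->fetch(PDO::FETCH_ASSOC);

if ($row === false) {
    http_response_code(403); // not this user's file (or no such file)
    exit;
}

// Only this PHP code can reach the folder; Apache never serves it directly,
// and the path components come from the database, not from user input
readfile('/srv/private-files/' . $row['parentdir'] . '/' . $row['filename']);
?>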
EDIT 3:
Haha, now I finally get your problem, after reading a similar post of yours: (How can an Apache user write files when having permissions to do it?)
In this case (with REALLY anonymous users on your webpage) you have NO chance of solving this with filesystem permissions at all. Every visitor is handled as the same user, without authentication. And as I assumed in my last EDIT, and commented on the similar post: there is no need to deal with Linux file permissions at all.
YOUR SOLUTION ;) :
You need to do the file manipulation within one session, using session IDs, while the user is visiting your page. Your code needs to track the relation between the visitor (the session ID) and the file he uploaded with that session ID. Using a session ID that stays valid as long as the visitor is online is the best way to do this. And again - no need for filesystem permissions... ;)
The second way is with authenticated users, as suggested before: create a DB table with users/passwords to log into the webpage (not the server), and another table that holds the user/file relations. Then, after the user logs into the webpage, work with sessions again to let him access/manipulate already-uploaded files.
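A sketch of the session-ID variant, with hypothetical paths; each upload is filed under the visitor's session ID, so only the same session can get it back:

<?php
session_start();
// Hypothetical storage root, outside the document root
$base = '/srv/anon-uploads/' . session_id();

if (!empty($_FILES['upload'])) {
    if (!is_dir($base)) {
        mkdir($base, 0700, true);
    }
    // basename() strips any path components from the client-supplied name
    move_uploaded_file(
        $_FILES['upload']['tmp_name'],
        $base . '/' . basename($_FILES['upload']['name'])
    );
}

// Later in the same session: list only this visitor's files
$files = is_dir($base) ? array_diff(scandir($base), ['.', '..']) : [];
?>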
I can see that you run Apache with mod_php. That means your PHP instance runs under the Apache instance, with Apache's USER and GROUP. You can create a folder and change its owner, but the owner must be a real user on your system (not a per-visitor virtual user).
But you can store in every directory a marker file, for example ".owner", and put the virtual owner in that file. Then you need to filter every write (delete, rename, etc.) attempt on this directory and compare your virtual user to the one stored in the .owner file.
Sample class (not complete, but more than enough to understand the idea):
class UserDirs {
    private $path = '/home/vusers';

    // Create a directory for $user and record the virtual owner in a .owner file
    public function mkdir($user) {
        $d = $this->path . '/' . md5($user);
        mkdir($d);
        file_put_contents($d . "/.owner", $user);
        return $d;
    }

    // Compare $user against the virtual owner stored in $dirname/.owner
    public function checkOwner($user, $dirname) {
        $virtual_owner = file_get_contents($dirname . "/.owner");
        return $user === $virtual_owner;
    }
}

$d = new UserDirs();
$dir = $d->mkdir("foo");
echo $d->checkOwner("foo1", $dir) ? "OK" : "FAIL"; // FAIL: "foo1" is not the owner
echo $d->checkOwner("foo", $dir) ? "OK" : "FAIL";  // OK
You can encapsulate everything you need to work with user dirs in this class, and extend it depending on your requirements.
Your users do not have system accounts. It probably is not feasible to create those accounts either. Therefore, I'd recommend managing all of this via the Web UI.
Continue to create your directories as you are; the permissions are fine. Your user interface needs to change, though, to show only that user's directory and files. I assume you have a database associated with this page: associate the username and the randomly generated directory name with each user. If someone attempts to go to the direct path and they are NOT the user associated with that directory, kick them back to the login screen.
To illustrate: I create an account named test and am presumably given a unique directory. If I log out, I should not be able to visit that directory, because your code would see that I'm not logged in and therefore don't have access to it.
If I then log in as test2 and visit the directory of test, your code should see that I'm not the owner of the directory being visited, and redirect me as appropriate.
You need to add a function that checks the directory the user is visiting and compares it to the directory associated with the user. If the two match, allow them to proceed; if they don't, redirect the user.
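A sketch of that check, with hypothetical session keys and a stubbed database lookup:

<?php
session_start();

// Stub: in the real app this would be a database lookup,
// e.g. SELECT directory FROM users WHERE id = ?
function dir_for_user($userId) {
    return '/files/abc123';
}

$requestedDir = $_GET['dir'] ?? '';

if (empty($_SESSION['user_id'])) {
    header('Location: /login.php'); // not logged in -> back to login
    exit;
}
if ($requestedDir !== dir_for_user($_SESSION['user_id'])) {
    header('Location: /login.php'); // not the owner of this directory
    exit;
}
// The two match: proceed to show this user's directory and files
?>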