Say we have a plain LAMP server hosting two websites: one in /var/www/site1, the other in /var/www/site2.
Apache's www-data group has read access to both directories.
That causes a problem: /var/www/site1/grab.php (served at http://site1.com/grab.php) can easily read private data from /var/www/site2/config.php simply by calling
file_get_contents("/var/www/site2/config.php");
inside /var/www/site1/grab.php.
Is there an Apache-level way to prevent this?
Without using suEXEC or installing anything extra, just by tuning the configuration.
I understand that if www-data has read access to both directories and site1.com runs as www-data, it is clear why it can access /var/www/site2/config.php.
But maybe there is an easy way to confine each website to its own tree and forbid it from accessing any path outside its DocumentRoot.
THE SITUATION
I have multiple folders in my /var/www/ directory.
Users are created that have control over a specific directory... /var/www/app1 belongs to app1:app1 (www-data is a member of the app1 group).
This works fine for what I want.
THE PROBLEM
If the app1 user uploads a PHP script that changes the file/folder permissions of something in app2's directory structure, the Apache process (there is only one installed on the server) will be more than happy to run it, since it has the permissions needed to access both the /var/www/app1 and /var/www/app2 folders and files.
EDIT:
To the best of my knowledge, something like /var/www/app1/includes/hack.php:
<?php
// note the octal mode 0777; a plain 777 would be interpreted as a decimal integer
chmod("/var/www/app2", 0777);
?>
The Apache process (owned by www-data) will run this, as it has permissions to change both /var/www/app1 and /var/www/app2 directories. The user app1 will then be able to cd /var/www/app2, rm -rf /var/www/app2, etc., which is obviously not good.
THE QUESTION
How can I avoid this cross-contamination of the Apache process? Can I instruct Apache to only run PHP scripts that affect the files/folders that reside within the relevant vHost root directory and below?
While open_basedir would help, there are several ways of bypassing this constraint. You could break a lot of functionality in PHP to close off all the backdoors, but a better solution is to stop executing the PHP as a user who has access to all the files. To do that, you need to use php-fpm with a separate process pool/uid/gid for each vhost.
You should still keep the uid that executes PHP separate from the uid that owns the files, with a common group granting default read-only access to the files.
You also need separate storage directories for session data.
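A minimal sketch of what such a pool definition might look like (pool name, users, socket and paths are illustrative, and the file location depends on the distribution and PHP version):
; e.g. /etc/php/8.2/fpm/pool.d/app1.conf (path and PHP version are assumptions)
[app1]
; dedicated execution uid, separate from the uid that owns the files
user = fpm-app1
; common group that gives read-only access to the site's files
group = app1
; each pool gets its own socket for the web server to talk to
listen = /run/php/fpm-app1.sock
listen.owner = www-data
listen.group = www-data
; confine PHP and keep session files per site
php_admin_value[open_basedir] = /var/www/app1/:/tmp/
php_admin_value[session.save_path] = /var/www/app1/sessions
With one pool per vhost, a script in app1 simply runs without the rights to read or change app2's files.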
A more elaborate mechanism would be to put something like Apache Traffic Server in front of one container per site owner, with each site running on its own instance of Apache: much better isolation, but technically demanding and somewhat more resource intensive.
Bear in mind, if you are using MariaDB or similar, that the DBMS can also read and write arbitrary files (SELECT ... INTO OUTFILE / LOAD DATA INFILE).
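A common mitigation for that last point (not part of the answer above, just a well-known MariaDB/MySQL option) is to pin the server's file operations to one dedicated directory; the config file path below is distribution-specific:
[mysqld]
# SELECT ... INTO OUTFILE and LOAD DATA INFILE are then restricted to this directory
secure_file_priv = /var/lib/mysql-files
Not granting the FILE privilege to the web applications' database users helps for the same reason.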
UPDATE
Rather than the effort of maintaining separate containers, better isolation can be achieved with less effort by setting the home directory of the php-fpm uid appX to the base directory of the vhost (which should contain, not be, the document root; see below) and using AppArmor to constrain access to the common files (e.g. the .so libs) and @{HOME}. Hence each /var/www/appX might contain the layout below (a setup sketch follows the listing):
.htaccess
.user.ini
data/ (writeable by fpm-appX)
html/ (the document root)
include/
sessions/ (writeable by fpm-appX)
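A rough sketch of how that layout could be created (user names and the app1 placeholder are illustrative; the AppArmor profile itself is distro-specific and not shown here):
# Dedicated php-fpm user whose home is the vhost base directory
useradd --system --shell /usr/sbin/nologin --home-dir /var/www/app1 fpm-app1
# Code is owned by the site owner; the php-fpm user only gets group read access
install -d -o app1 -g fpm-app1 -m 750 /var/www/app1/html /var/www/app1/include
# Only data/ and sessions/ are writeable by the php-fpm user
install -d -o fpm-app1 -g fpm-app1 -m 770 /var/www/app1/data /var/www/app1/sessions
Apache itself may additionally need read access to html/ for static files, e.g. via ACLs or by adding www-data to the group.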
You should add an open_basedir directive to each site's vhost file. The open_basedir directive limits the filesystem paths that a site's PHP scripts are allowed to open.
You can read more about open_basedir in the PHP manual.
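For example, a sketch of one vhost (paths and ServerName are illustrative; php_admin_value requires mod_php, and with php-fpm the same setting would go into that site's pool file instead):
# Sketch only, adjust paths to your layout
<VirtualHost *:80>
    ServerName app1.example.com
    DocumentRoot /var/www/app1
    php_admin_value open_basedir "/var/www/app1/:/tmp/"
</VirtualHost>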
So I have a user foo and the group www-data. When the user creates a file/directory manually, the ownership and permissions are:
foo:www-data --> rwx:r-x
And the user can then do what they like with that file/directory.
But when I use PHP to create a file or directory, the ownership and permissions generated are
www-data:www-data --> rwx:r-x
Which then doesn't allow the user to do what they like with that file/directory.
So I have two options:
I have thought about adding the user foo to the group www-data but I have multiple virtual hosts and I don't want them to be able to edit each other's virtual domains (if that is even possible?!)
I have also thought about setting the permissions to 777 when creating the folder from PHP, but that seems like a big 'no no' (is it?)
What should I do?!
What you actually want to do is run different virtual hosts as different users. This is a link to some helpful answers for nginx: https://serverfault.com/questions/370820/user-per-virtual-host-in-nginx
The same concepts apply to Apache.
Edit:
The answers weren't clear to me when I read them, and there's a lot of info in the comments. The second answer, by Ricalsin, is very informative and had a link that I used. Be sure to restart php-fpm and nginx!
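For Apache, the equivalent setup would be one php-fpm pool (and therefore one system user) per vhost, with each vhost handing its PHP requests to that pool's socket. A rough sketch, where the socket path and names are assumptions:
# Needs mod_proxy_fcgi and a php-fpm pool listening on this socket
<VirtualHost *:80>
    ServerName site1.example.com
    DocumentRoot /var/www/site1/web
    <FilesMatch "\.php$">
        SetHandler "proxy:unix:/run/php/fpm-site1.sock|fcgi://localhost"
    </FilesMatch>
</VirtualHost>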
You could use PHP's chown() function to change the owner after creating the directory; see http://php.net/manual/en/function.chown.php. But you may need special privileges to use the function successfully, which may introduce a security issue of its own.
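A small sketch of that approach (the path is just an example, and chown()/chgrp() typically only succeed when PHP runs with elevated privileges, which is exactly the concern mentioned above):
<?php
// Create the directory as the web user, then hand it back to the shell user.
$dir = '/var/www/example/uploads';   // illustrative path
if (!is_dir($dir) && mkdir($dir, 0755)) {
    chown($dir, 'foo');              // the shell user from the question
    chgrp($dir, 'www-data');         // keep the web server's group
}
?>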
Another option is to use suEXEC in combination with SuexecUserGroup if you are running php-fpm (via FCGI instead of mod_php). In your virtual host file you assign a user and a group to the virtual host via
SuexecUserGroup exampleuser examplegroup
With this directive activated, all new directories and files created by PHP will have the specified user/group combination. That's how a well-known web host from Germany does it; see the Uberspace documentation (German only). More general information can be found here.
Using this solution you avoid many security risks automatically: no need for 777 permissions, no way to see each other's files, and so on.
Because every user gets their own PHP instance with this approach, it is also easy and safe to let them use different PHP interpreter versions or their own php.ini files.
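For context, the directive lives inside the vhost block; a rough sketch (mod_suexec must be loaded, and PHP has to run via CGI/FastCGI rather than mod_php for suEXEC to apply; names and paths are placeholders):
<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /var/www/example/web
    SuexecUserGroup exampleuser examplegroup
</VirtualHost>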
I have a fresh, unmodified install of Apache on CentOS 7. I notice that when I look at the folder permissions for /var/www/html, it and its contents are owned by root. When a file is created by the web server, however, its owner and group are apache.
Though html is owned by root:root, should all of its contents be owned by apache:apache? Or by [user]:apache, with that user belonging to the apache group? How should I go about this?
Edit:
Another question: do I want to change this? I do not have a very good understanding of file ownership on Linux systems, but it seems that with this configuration anything running as apache (the owner of the newly created files) is prevented from acting on files that already exist (root:root). This should prevent PHP hacks from being able to manipulate any existing files, right? Or is this just an illusion of security?
Check your /etc/httpd/conf/httpd.conf file and search for the User and Group directives (e.g. User apache, Group apache). Those determine which user and group the server runs as by default. For your website there is no need to add write permissions for files and folders assigned to that user:group; leaving them readable by owner and others is enough for them to be accessible via the web.
Updated answer:
The main reason the DocumentRoot (/var/www/html) is owned by root is security. You can leave root as the owner of the files and set the group to apache. On the security side, make sure the apache group has read-only access to the files (a first measure). The security is not an illusion: while the files are owned by root and are not writable by others, it is hard for external attackers to gain write access to them, and gaining write access is the most common way a site gets hijacked.
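A sketch of what that arrangement looks like in practice (adjust the path to your layout):
# Leave root (or another non-apache user) as owner; give the apache group read-only access
chown -R root:apache /var/www/html
# u=rwX keeps directories traversable; group and others get read-only
chmod -R u=rwX,go=rX /var/www/html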
I don't think this is too specific; in fact, I think it is a very common problem. But I didn't find any threads giving a solution to the following.
I want to run website1 and website2 on a server.
Each virtual host has its own document root in /var/www/website1/web/ and /var/www/website2/web/ respectively.
Permissions are set like this: rwx rwx r-x
Owner of both folders is: somedeveloperuser
Group of both folders is: www-data
I want PHP scripts in /var/www/website1/web/ to have write access to /var/www/website1/web/
EDITED:
What would be good practice to deny a script in /var/www/website1/web/ write access to /var/www/website2/web/ or any other folder owned by the www-data group?
I am asking because I don't want a site that may be insecure in some way to be able to harm the other websites' folders.
Thank you for reading and helping!
TL;DR: Will permission 444 on a folder restrict access for a web user and browser?
I have a webserver with a root directory that is accessible from the web. I can't access any folders higher up in the hierarchy; I only have control of the root folder and down.
Let's say I have the following folder structure:
includes
includes/database.php
admin
admin/index.php
I want the includes folder to follow these rules:
Accessible from within the server, so "admin/index.php" can include "includes/database.php"
Accessible via FTP, so I can access and edit the folder and files.
NOT Accessible by a user from the web.
Can I solve this by setting "includes" and all its subfolders/files to permission 444? If so, is there a known way to bypass this access rule, or is it safe to use?
If you want to keep assets safe from web access, you need to move them outside of your web root; typically they go one level up, into the web root's parent directory. This way they are still accessible via FTP and your code, but not to web requests.
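A tiny sketch of what that looks like from the code's point of view (directory names are illustrative; public_html/ is the web root and includes/ sits in its parent directory):
<?php
// e.g. /home/account/public_html/admin/index.php
// includes/ lives outside the web root, so it can never be requested over HTTP,
// but PHP (and FTP) can still reach it by path.
require __DIR__ . '/../../includes/database.php';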