I can't write files in my Apache web server directory. I'm using CentOS 7, and I'm trying to execute git commands from a PHP file using exec():
exec("/usr/local/bin/git -C ../myRepo fetch 2>&1", $output, $exec_return_value);
By printing the output I see that the error manifests as:
error: cannot open .git/FETCH_HEAD: Permission denied
I discovered (with help from this blog: http://jondavidjohn.com/git-pull-from-a-php-script-not-so-simple/) that this is "expected", because Apache executes as a different user than the one I use via SSH. So I need to make sure my permissions give that user write access.
I determined that my Apache installation executes as a user named "apache" by printing the output of this in my php file:
exec("whoami", $debugOutput, $debugRetVal);
I also set up SSH as recommended in that same link, but unfortunately I was still getting the same error in the PHP file:
error: cannot open .git/FETCH_HEAD: Permission denied
The thing that really threw me is that I don't have any permissions issues when I run as apache from my ssh session using:
sudo -u apache /usr/local/bin/git -C ../myRepo fetch
The user "apache" seemingly has different permissions when it's executing from PHP. Can someone help me figure this out?
The behavior I was seeing, where the user "apache" seemed to have different permissions when executing from PHP vs. from my SSH session, was the result of SELinux. When SELinux is in its "enforcing" mode (which it is by default on CentOS 7), it essentially layers an access protection scheme on top of the standard Linux user permissions. In a simplistic sense, the user "apache" really did have different permissions when it was executing out of the httpd process.
In more detail: the html directory that Apache serves has an SELinux context type of "httpd_sys_content_t", so my git repo was inheriting that same context. The way the SELinux security policy works, httpd only has read access to the context type "httpd_sys_content_t". In order for httpd to have write access to the files, I needed to change the context to "httpd_sys_rw_content_t". I did this using:
sudo chcon -R -t httpd_sys_rw_content_t ../myRepo
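As a sanity check (my addition here, not part of the original steps), the label can be inspected before and after the change with ls -Z:
ls -dZ ../myRepo ../myRepo/.git
Before the chcon the type field should read httpd_sys_content_t; afterwards it should read httpd_sys_rw_content_t.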
This finally solved my original error! Unfortunately it surfaced a new error that the link from the question also mentioned:
array(5) {
  [0]=> string(68) "ssh: connect to host PRIVATEGITHOST: Permission denied"
  [1]=> string(45) "fatal: Could not read from remote repository."
  [2]=> string(0) ""
  [3]=> string(51) "Please make sure you have the correct access rights"
  [4]=> string(26) "and the repository exists."
}
But as I mentioned in my question, I had already set up an SSH key for the "apache" user as the blog suggested. So this was actually a different error, once again originating from SELinux.
By default there is an SELinux boolean called "httpd_can_network_connect" that is set to off. This prevents httpd from making outbound network connections of its own, which is what you want if all you're doing is serving the content that exists on your server. For my purposes I needed to turn that boolean on using:
sudo setsebool httpd_can_network_connect on
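For completeness (not something I ran at the time), the current value of the boolean can be checked with getsebool:
getsebool httpd_can_network_connect
After the change it should print "httpd_can_network_connect --> on".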
And after two days of pulling my hair out, I can finally do a "git fetch" from a PHP file.
I debugged the final error (the one fixed by the boolean) using the "audit2allow" tool, which looks at the SELinux audit logs and tells you, in more or less plain English, how to solve the issue.
sudo audit2allow -a
OUTPUT:
#============= httpd_t ==============
#!!!! This avc can be allowed using one of the these booleans:
# nis_enabled, httpd_can_network_connect
allow httpd_t unreserved_port_t:tcp_socket name_connect;
If anyone is having SELinux issues relating to apache/httpd I highly recommend looking at that tool.
For the sake of completeness, I wanted to point out that changing the file context of my repository with chcon as I did above is a temporary change: it will not survive a filesystem relabel. In order to permanently record the context type I used:
sudo semanage fcontext -a -t httpd_sys_rw_content_t "/var/www/html/myRepo(/.*)?"
Similarly I used this to permanently change the boolean:
sudo setsebool -P httpd_can_network_connect on
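One note on the semanage command above: it only records the rule in the policy; it is restorecon that actually (re)applies labels on disk, so after adding the rule I would also run:
sudo restorecon -Rv /var/www/html/myRepo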
Related
I changed my server; the old one ran CentOS 8, the new one runs Ubuntu 20.04. Now my PHP scripts have a problem with permissions -- why?
Example:
current user: root
script was executed under user: nobody
Message: fopen(/tmp/RebuildCat_sequence.cnt): failed to open stream: Permission denied
Actually this file is owned by nobody, so I should not get this warning at all:
CS-1 01:47:22 :/tmp# ls -latr /tmp/RebuildCat_sequence.cnt
--wxrwxrwT+ 1 nobody nogroup 480 Oct 23 00:02 /tmp/RebuildCat_sequence.cnt
This PHP file is usually called by cron (as root), and since I noticed that some similar cron-triggered files are owned by systemd-timesync, I issued
setfacl -m u:systemd-timesync:rwX /tmp
setfacl -m u:nobody:rwX /tmp
and even
setfacl -R -m u:nobody:rwX /tmp
setfacl -R -m u:systemd-timesync:rwX /tmp
to no avail. How do I understand this?
In my understanding both users systemd-timesync and nobody should be able to read and write files in /tmp without problem due to acl. I think someone has to educate me here.
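One way to test the ACLs directly from a shell, bypassing PHP and cron entirely (nobody has no login shell, so the commands go through sudo -u; the test file name is arbitrary):
sudo -u nobody sh -c 'echo test >> /tmp/acl_write_test.txt'    # exercises write + search permission on /tmp
sudo -u nobody head -c 16 /tmp/RebuildCat_sequence.cnt         # exercises read permission on the existing file
Whichever of the two fails narrows the problem down to the directory or to the file itself.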
/tmp resides on /dev/md2
CS-1 01:32:34 :/tmp# tune2fs -l /dev/md2 | grep "Default mount options:"
Default mount options: user_xattr acl
No problem here, I guess.
CS-1 01:46:09 :/tmp# getfacl /tmp
getfacl: Removing leading '/' from absolute path names
# file: tmp
# owner: root
# group: root
# flags: --t
user::rwx
user:systemd-timesync:rwx
user:nobody:rwx
group::rwx
mask::rwx
other::rwx
Looks OK to me, as well. Clueless.
Addendum
OK, I set chmod 777 /tmp, but still I get a PHP error fopen(/tmp/RebuildCat_sequence.cnt): failed to open stream: Permission denied, but file_put_contents works with 777 -- how do I understand this? Why does fopen throw an error, but file_put_contents does not?
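One detail that might explain the asymmetry (a guess based on the listing above, where the owner nobody has write but no read permission, --wx): fopen() with a read+write mode such as "a+" needs read access, while file_put_contents() opens the file write-only. Assuming the script uses "a+" (the mode is not shown in the error message), this can be reproduced from a shell:
sudo -u nobody php -r 'var_dump(fopen("/tmp/RebuildCat_sequence.cnt", "a+"));'
sudo -u nobody php -r 'var_dump(file_put_contents("/tmp/RebuildCat_sequence.cnt", "x", FILE_APPEND));'
If the first call fails while the second succeeds, the missing owner read bit is the culprit rather than /tmp itself.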
Alas, chmod 777 /tmp in the console is not the answer. It is much more complicated than that. So I wasn't silly at all.
How to reproduce?
In my case, I have two Docker PHP containers writing to a number of case-specific log files, one of them using Apache, the other nginx. Both have the mapping /tmp:/tmp, so we can look at a file from the host or from inside a container.
From inside the containers, the owner is apache:apache if created in the apache container or nginx:nginx in the nginx container, but from the host's perspective it is systemd-timesync:systemd-journal for the same file in both cases.
Of course, the apache container does not have a user nginx and vice versa (neither does the host). So if my PHP script wants to write to a file created by the other container, I have said permission problem.
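As far as I understand the mechanism, the kernel only stores numeric UIDs/GIDs; each passwd file simply maps the same number to a different name, which is why the same file shows up as apache or nginx inside a container and as systemd-timesync on the host. The numbers can be compared directly (the container names below are placeholders):
docker exec apache_container id apache
docker exec nginx_container id nginx
id systemd-timesync
stat -c '%u %U %g %G' /tmp/RebuildCat_sequence.cnt    # numeric and named owner of the shared file, as the host sees it
If the numeric UIDs line up, the "different owners" are really the same account under different names.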
Fix
The remedy is easy if, at creation time, the owner is changed to nobody and the permissions to 666.
If not, we cannot change the owner from the other container and we get failed to open stream: Permission denied. See https://serverfault.com/questions/772227/chmod-not-working-correctly-in-docker (also see the discussion below about changing permissions and ownership via system()).
So I wrote a wrapper function str_to_file to add this manipulation:
function str_to_file($str, $filename = "/tmp/tmp.txt", $mode = "a+") {
    if (!file_exists($filename)) {
        touch($filename);            # create the file
        chmod($filename, 0666);      # the owner can chmod
        chown($filename, 'nobody');  # only root (CAP_CHOWN) can change the owner; as a non-root user this just raises a warning
    }
    $flags = $mode == 'w'
        ? LOCK_EX
        : FILE_APPEND | LOCK_EX;
    file_put_contents($filename, $str . PHP_EOL, $flags);
} # str_to_file
This works for both the apache and nginx containers.
Explanation
How did I get into this mess anyway?
It looks like either I didn't misconfigure anything while on CentOS, in which case this most probably has nothing to do with my switch to Ubuntu as such. I cannot remember exactly what I did, but I don't recall having this kind of permission problem on CentOS.
Or CentOS handles files created in a container differently than Ubuntu does. I am sure I never saw the user systemd-timesync:systemd-journal on CentOS. Unfortunately I cannot investigate, without too much effort, what user CentOS would substitute from the host's point of view and what consequences that has.
Oh wait, I remember I do have another server running PHP and nginx on CentOS. Creating a file in /tmp from within the container reveals that the owner from both inside the container and from the host is nobody. So there is a significant difference between both Linux versions.
What's more, this insight explains why I experienced this permission problem only after the switch from CentOS to Ubuntu, to which I was forced due to the fact that my provider does not offer CentOS anymore for obvious reasons.
By the way, I first tried the CentOS substitutes AlmaLinux and RockyLinux with bad results of different nature which finally forced me to use Ubuntu. This switch in turn revealed lots of problems, this one being the last, which cost me several weeks so far. I hope that this nightmare ends now.
Acting from cron, the apache version is and was used exclusively, so no problem here.
Testing from the browser lately, I switched by chance and for no particular reason from the apache version to the nginx version and vice versa, creating those described problems.
Actually I don't know why the containers write on behalf of those users. The user of the container is root in both cases, as is standard with docker. I guess it is the browser engine which introduces these users.
Interestingly, when trying to change permissions and ownership via system call after creation, I failed, if I remember correctly, although the user then should be root and root should be capable of doing that.
It turns out that in the apache container the system user [system('whoami')] is not root but apache, and the file owner is apache, whereas in the nginx container the system user is nobody and the file owner is nginx. So changing permissions with system() should work in the apache container [system("chmod 0666 $filename"); system("chown nobody:nobody $filename");], but not in the nginx one. Alas, it does not work in the apache container either.
Quick check in apache container:
/tmp # whoami
root
/tmp # ls -la *6_0.ins
-rw-r--r-- 1 apache apache 494 Nov 14 18:25 tmp_test_t6_0.ins
/tmp # su apache chmod 0666 tmp_test_t6_0.ins
This account is not available
Sorry, I can't understand this. And no idea about ACL.
apache vs. nginx
Why do I use both web server versions in the first place?
Well, the process I invoke processes random data which may take a long time, so chances are that nginx times out. This is a well-known nginx feature.
Despite all my studies and the obvious instructions, I could not get nginx to behave. Geez!
Finally, as an appropriate workaround, I introduced apache, which does not have this problem. For decent runtimes it makes no difference, of course.
Well, it looks like I was kind of silly. I'm sure I double checked permissions on /tmp in WinSCP (1777), but I am unsure if I did it in the console.
Now I issued chmod 777 /tmp in the console and there it is! This problem is gone. chmod 666 /tmp reintroduces the problem. chmod 676 /tmp is OK as well.
I googled quite a bit to educate myself about this topic, but frankly I don't understand it. In particular why did I have the problem in the first place and why did ACL not solve the problem? I'd appreciate some enlightenment.
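One ACL detail that at least matches the chmod observations above (worth verifying, not a full answer): on a directory with an extended ACL, the group bits that chmod sets are actually the ACL mask, which caps the effective rights of every named user entry. chmod 666 /tmp therefore sets the mask to rw- and strips the search (x) bit from nobody's effective permissions on /tmp, while chmod 777 and chmod 676 leave the mask at rwx. This can be checked with:
chmod 666 /tmp && getfacl /tmp | grep -E 'mask|nobody'    # expect mask::rw- and an "#effective" annotation on the nobody entry
chmod 1777 /tmp                                           # restore the stock mode for /tmp straight away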
Hoping someone can help me out here. Trying to run any command using exec() returns 126 and displays the same error message. I've narrowed it down to this pretty minimal test case.
root#test:~ $ sudo -u asterisk php -r 'exec("ls /", $out, $result); var_dump($result);'
sh: /bin/ls: Permission denied
int(126)
root#test:~ $ sudo -u asterisk ls /
bin boot dev etc home lib lib64 lost+found media mnt opt proc root sbin selinux srv sys tmp usr var
root#test:~ $ su -lc 'php -r '\''exec("ls /", $out, $result); var_dump($result);'\' asterisk
This account is currently not available.
SELinux and PHP safe mode are not enabled
permissions are fine on /, /bin/, and /bin/ls
asterisk is a system user created with this command: adduser -d /var/lib/asterisk -M -r -s /sbin/nologin asterisk
it works fine via Apache, which runs as this user
Every attempt to run any command returns permission denied and 126 as $?. The PHP config is pretty much as it shipped (Scientific Linux 6.7, PHP 5.4 via the Remi package).
Would appreciate some assistance (preferably the kind that would require some arcane knowledge, not the kind that means I missed something blindingly obvious!)
Edit: I can get it to work using su if I give the user a login shell:
root#test:~ $ usermod -s /bin/bash asterisk
root#test:~ $ su -c 'php -r '\''exec("ls /", $out, $result); var_dump($result);'\' asterisk
int(0)
However, this isn't my code so changing all the use of sudo to su is not likely to happen. Also, there shouldn't be anything stopping PHP from running this without a login shell.
You probably have the sudo option NOEXEC enabled.
When this option is active, you can run a command with elevated privileges, but that command cannot spawn other commands. This is (AFAIK) meant to prevent an attacker from gaining a shell, and since you are using the asterisk user it makes a lot of sense here.
In your case, the PHP command is granted execution as the asterisk user, but when it tries to spawn a child process with exec(), that child cannot be executed and the call returns 126.
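A quick way to confirm this (standard sudo tooling, run as root):
grep -rn noexec /etc/sudoers /etc/sudoers.d/ 2>/dev/null
sudo -l | grep -i noexec
The first looks for a "Defaults noexec" line; the second shows whether noexec appears among the matching Defaults entries for the invoking user.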
EDIT (as in comment below)
Adding this line to sudoers will solve this issue:
root ALL = (ALL) EXEC: ALL
Your account doesn't have permission to run bash commands.
As you know, int(126) is the return status of the executed command. From the bash man page:
If a command is found but is not executable, the return status is 126.
Try running ls directly from your asterisk user to see if it works.
If it doesn't work, check whether the asterisk user has the necessary permissions; if not, use chmod to grant them. You could also create a new user and see whether the command works for that user.
Edit: Since your asterisk account does not have a shell, you cannot execute shell commands from it.
Coming back to provide another answer to my own question a couple of years later. As the accepted answer suspected, I had set this in my sudoers file:
Defaults noexec
And I fixed this by overriding it for the root user.
But a better solution would be to apply the defaults only to the targeted user:
Defaults:admin noexec
This way the setting would not have affected the asterisk user I was having problems with in my question!
I have been trying to execute a script using the shell_exec() function in PHP.
I've written the following lines of code:
$command = "bash /path/to/my/script/ funciton_name() 2>&1";
echo shell_exec($command);
Inside the shell script I'm doing:
sudo rsync -avvc /source/path /destination/path
On executing this on the browser, I get the following error message:
sudo: no tty present and no askpass program specified
When I execute the same shell script directly on the server (from a terminal), it runs fine.
When I went through similar questions posted on this forum, I realised that I needed a NOPASSWD entry on my server, which I found out had already been added in the following format:
User_Alias NOBODY=nobody,apache
NOBODY ALL=(ALL) NOPASSWD : /path/to/my/script
Also when I do:
echo shell_exec("whois");
I get the output as:
apache
Any assistance in overcoming this problem would be of great help.
sudo will require a TTY, even if you have set it up to be passwordless, unless you explicitly tell it not to require one. But as @Cfreak pointed out, it would be much better (simpler and safer) to avoid sudo by setting correct access rights (read it before continuing) in the first place.
rsync itself will not require root permissions on a sanely configured *nix install. To verify this, you can check that type -a rsync doesn't print anything weird like rsync is aliased to `sudo rsync' and that ls -l $(which rsync) prints sensible permissions (at least rx for everyone).
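If sudo really has to stay, the usual way to drop the TTY requirement is a per-user Defaults override in sudoers (edit with visudo; the user name apache is taken from the question):
Defaults:apache !requiretty
This complements, rather than replaces, the existing NOPASSWD entry.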
Well, this error is well known, but in my case I could not mitigate it on my side. I migrated a Laravel 4 installation to another server, and on the first access I got this error:
file_put_contents(/var/www/html/MyApp/app/storage/meta/services.json): failed to open stream: Permission denied
I followed the various answers I googled, listed below:
https://stackoverflow.com/a/17971549/1424329
Can't make Laravel 4 to work on localhost
http://laravel.io/forum/05-08-2014-failed-to-open-stream-permission-denied
Laravel 4: Failed to open stream: Permission denied
However, none of them fixed my problem. I also tried clearing the cache and running the dump-autoload command:
php artisan cache:clear
chmod -R 777 app/storage
composer dump-autoload
I also thought that the web server process might play a part in the problem, so I looked up its user like this:
$ ps -ef|grep httpd
apache 11978 11976 0 11:14 ? 00:00:00 /usr/sbin/httpd -DFOREGROUND
Then, I added apache to the directory's group owner, and the problem persists.
I do not know what else to do, I am going insane because dancing naked under full moon neither fixed the problem.
I have discovered the cause of this problem. It looks like SELinux does not allow the httpd service (the Apache web server) to write in my app folder. So, I did:
setsebool -P httpd_unified 1
Now everything is working fine!
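For anyone who wants to confirm that SELinux is the culprit before flipping a boolean, the standard checks look roughly like this (the path is this question's storage directory):
getenforce
ls -dZ /var/www/html/MyApp/app/storage
ausearch -m avc -ts recent | audit2allow
A more targeted alternative to httpd_unified would be relabeling just app/storage with a writable type such as httpd_sys_rw_content_t, as in the chcon/semanage approach shown earlier in this document.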
This happens to me sometimes, but I just delete it and Laravel recreates it. As far as I know, this is just a cached list of services and can be safely removed.
I have recently installed FC13 and am attempting to write a mechanism in my PHP code that caches gathered data into a specific directory (for our purposes here, let's call it /var/www/html/_php_resources/cache).
I copy my files over to the /var/www/html directory and then run chown -R apache:apache /var/www/html/* and chmod a+w /var/www/html/_php_resources/cache on the new data. For right now I am just using the global write permission for convenience. I will tweak the permissions later.
When I attempt to use the chmod or mkdir PHP functions I wind up with:
Warning: chmod(): Permission denied in /var/www/html/_include/php/CacheInit.php
or
Warning: mkdir(): Permission denied in /var/www/html/_include/php/CacheInit.php
Now, when I disable SELinux everything works just fine. The problem is that I would prefer not to disable SELinux and actually get the permissions set up correctly so that I can port it over to servers where someone does not have such explicit control.
As an example: my personal site host allows me to set read/write permissions on directories but will not allow for SELinux policy changes.
FYI:
uname -r = 2.6.34.7-56.fc13
php -version = PHP 5.3.3
rpm -qa | grep httpd = httpd-2.2.16-1.fc13
Does anyone have any suggestions?
I had the same problem, trying to mkdir from PHP. There is not much information on Google, but this is what I found, and I guess this is the correct solution. One has to label the directory in which Apache should create directories.
The label should be "httpd_sys_script_rw_t", and I found that info here: http://docs.fedoraproject.org/en-US/Fedora_Core/5/html/SELinux_FAQ/index.html#id672528
Here's how to label the dir: chcon -R -t httpd_sys_script_rw_t <dir>
Reference somewhere here: http://www.centos.org/docs/5/html/Deployment_Guide-en-US/rhlcommon-chapter-0017.html
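As noted earlier in this document, chcon alone will not survive a relabel; to record the label permanently for the cache directory from the question above, something along these lines should work (semanage stores the rule, restorecon applies it):
sudo semanage fcontext -a -t httpd_sys_script_rw_t "/var/www/html/_php_resources/cache(/.*)?"
sudo restorecon -Rv /var/www/html/_php_resources/cache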
Hope this helps someone out there.