I'm debugging my PHP app on CentOS 7 with Apache.
My application is a web GUI for managing the Torque batch system, and it uses qmgr, a command-line tool provided by Torque, to do the management work.
Because only the root user can execute qmgr and the Apache server cannot run as root, I have written a C wrapper program so that commands can be executed as the root user.
But the PHP application always gives the following output:
socket_connect_unix failed: 15137
qmgr: cannot connect to server (errno=15137) could not connect to trqauthd
This means the PHP app cannot open a socket connection to the Torque server's trqauthd daemon.
Here is some additional information:
The command called by the PHP application can be executed correctly in the shell
The same PHP app can be executed correctly on a CentOS6 server with Apache
SELinux and the firewall are disabled
I have tried two versions of Torque (5.1 and 4.10); the result is the same.
Apache and PHP are the default RPMs shipped with CentOS 7.
I suspect there is some new security restriction on CentOS 7 that affects Apache.
Please give me some suggestions, thank you!
I had the exact same problem.
The cause is that newer Apache httpd versions default to having the systemd property PrivateTmp set to true. This makes the httpd service see a private /tmp directory that is actually mapped to some other location in the file system, instead of the real /tmp directory. PHP, running inside the Apache process, sees the same private /tmp as the Apache service, and so does any process forked from PHP (e.g. via exec or system). So when PHP calls qmgr (or qsub, etc.), that too sees the private /tmp directory.
This causes the error you mentioned, because qmgr (like the other Torque client commands) internally uses the unix socket /tmp/trqauthd-unix to communicate with trqauthd. But the command sees the "fake"/private /tmp directory instead of the real one, so it doesn't find the socket.
This also explains why the command works when you run it manually in a console: in that case it sees the real /tmp directory, as opposed to the private one it sees when forked from PHP running inside the Apache service.
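You can confirm this from PHP itself. A quick check (just a sketch; it assumes the default trqauthd socket path /tmp/trqauthd-unix mentioned above):
<?php
// Sketch: compare what PHP (inside the Apache service) sees in /tmp with what
// you see in a root shell. With PrivateTmp=true the trqauthd socket will be
// missing here even though it exists in the real /tmp on the host.
var_dump(file_exists('/tmp/trqauthd-unix'));
exec('ls -la /tmp 2>&1', $out);
echo implode("\n", $out), "\n";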
One solution is simply to change the PrivateTmp property in the httpd.service unit file from true to false. You can find this file under the /etc/systemd directory. The subfolder it is in depends on the Linux distribution, so use the find command to locate it:
find /etc/systemd -name httpd.service
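If you'd rather not edit the packaged unit file directly, the same change can be made with a systemd drop-in (a sketch, assuming a systemd-based distro; the unit may be called httpd.service or apache2.service on your system):
sudo mkdir -p /etc/systemd/system/httpd.service.d
printf '[Service]\nPrivateTmp=false\n' | sudo tee /etc/systemd/system/httpd.service.d/privatetmp.conf
sudo systemctl daemon-reload
sudo systemctl restart httpd.service
A drop-in survives package updates, whereas edits to the packaged unit file may be overwritten.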
This really helped me!
I had been struggling a lot with a PHP script that uses the exec() command; for some reason I kept getting permission denied. Having tried very many things, including running my scripts in a shell as the www-data user, with no success, this was finally the solution to my problem.
BTW, for Ubuntu the Apache service config file is located at /etc/systemd/system/multi-user.target.wants/apache2.service
I'm trying to make a hook on Bitbucket that executes a PHP file, and this file executes the pull command:
shell_exec('/usr/local/cpanel/3rdparty/bin/git pull');
The pull command works fine in the SSH console, but PHP returns the error:
Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
Running the command with --version shows the path to git is right, and whoami returns the same user in both cases, so I don't know if it is a permission issue.
What could be going wrong?
Edit: An additional issue: the alias I added for git doesn't work in PHP, only the full path as above. In the terminal it works just fine. Maybe it's the same reason the key doesn't work in PHP.
Edit 2: $PATH is different in the two environments.
When you run this command within a PHP script you are not running the command as yourself:
shell_exec('/usr/local/cpanel/3rdparty/bin/git pull');
The reason it works from the terminal console is that there you run the command as yourself. But on a web server, you are not the user running the command. Remember: when you run PHP on a web server, it runs as an Apache module, meaning the web server user (which could be www-data, root or even apache on some systems) is running the PHP script, which in turn runs the shell_exec command.
So it will never work as you have it set up. Perhaps you could kludge something together that allows a key pair to be used by the web server for this purpose, but that seems like a security risk waiting to happen.
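If you do go that route, one hedged sketch is to give the web server user its own read-only deploy key and point git at it explicitly (the key path, HOME value and working-copy path below are placeholders; GIT_SSH_COMMAND needs git 2.3 or newer):
<?php
// Sketch: run git pull as the web server user with a dedicated deploy key.
// Register the public half of /var/www/.ssh/deploy_key as a read-only deploy
// key in Bitbucket and make the private key readable only by the web user.
$cmd = 'HOME=/var/www '
     . 'GIT_SSH_COMMAND="ssh -i /var/www/.ssh/deploy_key -o IdentitiesOnly=yes" '
     . '/usr/local/cpanel/3rdparty/bin/git -C /path/to/working/copy pull 2>&1';
echo shell_exec($cmd);
This also sidesteps the $PATH and alias differences mentioned in the edits, since everything is spelled out with full paths.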
I am trying to run a PHP script via the execute shell step in Jenkins, but it seems I am missing something.
Here is my command in the execute shell step in Jenkins:
#!/usr/bin/php
php /home/admin/reports/test.php"
I am not getting any errors in the console output.
And when I try these commands:
#!/bin/bash
php /home/admin/reports/test.php"
then I get an error which says "failed to open stream: Permission denied in /home/admin/reports/test.php".
Jenkins, with a default installation, runs under the jenkins user. That user has no access to your /home/admin home directory. The Permission denied error is pretty self-explanatory.
Either give the jenkins user read access to /home/admin (not recommended),
or place the file in the /home/jenkins directory (not the best solution either),
or, better yet, have the file available in a shared location accessible by both, preferably the job's workspace that is populated through the SCM (recommended), as sketched below.
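With that last approach, a minimal execute shell step might look like this (a sketch; it assumes the script is committed to the job's SCM under reports/ so that Jenkins checks it out into the workspace):
#!/bin/bash
# $WORKSPACE is set by Jenkins to the job's workspace directory,
# which the jenkins user owns and can read.
php "$WORKSPACE/reports/test.php"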
I've already taken a look at both of these:
PHP: mount USB device
Error on mount through php "exec"
But, my problem appears to be different.
I have built an extensive library that's used to call Linux CLI tools. It's built around proc_open, its family and POSIX.
I had been using it to execute all CLI tools successfully, until I hit this mount/umount issue.
Now, I'm building a RAID setup routine that involves partprobe, parted (rm, mklabel, mkpart), mdadm (stop, zero-superblock, create), dd, mkfs and, ultimately, mount/umount.
There are actually two graceful routines: one for assembling the RAID, the other for disassembling it.
As the title says, the problem lies in mount and umount. The other tools and commands listed above execute successfully.
Environment
Arch Linux - Linux stone 3.11.6-1-ARCH #1 SMP PREEMPT Fri Oct 18 23:22:36 CEST 2013 x86_64 GNU/Linux.
The Arch is running with systemd - it might be that this somehow affects the mounting.
An Apache web server (latest) that runs mod_php (latest). Apache is run as http:http.
http is in the wheel group, and wheel members are sudoers - %wheel ALL=(ALL) NOPASSWD: ALL.
Please don't start a discussion about giving the webserver full root capabilities - the unit is a NAS, it's running a custom WebOS, and it's meant for intranet only. Even if there are hacking attempts, those will most probably break the whole system, and that's not healthy for the customer. The NAS is storage for Mobotix IP cameras, it runs a load of dependent services, and the units are already deployed at over 30 sites with no issues. In short, the webserver is not serving a website, but an OS.
Before writing this, I added http explicitly to sudoers for a quick test - http ALL=(ALL) NOPASSWD: ALL - it didn't work.
Problem
The last command run in the RAID assembly process is mount /dev/md/stone\:supershare /mnt/supershare, which returns with an exit code of 0.
Performing a subsequent mount results in:
mount: /dev/md127 is already mounted or /mnt/supershare busy
/dev/md127 is already mounted on /mnt/supershare
with an exit code of 32. So, the array is mounted somewhere.
Performing an umount /dev/md/stone\:supershare after the above mount returns with an exit code of 0. Performing a subsequent umount results in:
umount: /dev/md/stone:supershare: not mounted
The commands above are auto-run with sudo.
So, it's mounted successfully and unmounted successfully, but... I'm logged in as root on TTY0, and running lsblk after having performed the mount operation, I do not see the mountpoint:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 55.9G 0 disk
├─sda1 8:1 0 1M 0 part
├─sda2 8:2 0 1G 0 part [SWAP]
├─sda3 8:3 0 12G 0 part /
└─sda4 8:4 0 16.6G 0 part /home
sdb 8:16 0 931.5G 0 disk
└─sdb1 8:17 0 899M 0 part
└─md127 9:127 0 1.8G 0 raid0
sdc 8:32 0 931.5G 0 disk
└─sdc1 8:33 0 899M 0 part
└─md127 9:127 0 1.8G 0 raid0
Attempting the same mount command from TTY0 mounts it successfully (lsblk displays after).
If I mount it with my CLI tool, then run mount -l and lsblk also with the CLI tool, the mountpoint is visible.
Running both commands immediately afterwards from TTY0 as root does not display the mountpoint.
Rebooting, to reset all mounts (not automounted), then, mounting from TTY0 and running lsblk from TTY0 displays the mountpoint.
Then, running lsblk with CLI tool, displays the mountpoint.
Then, running umount with CLI tool, exit code 0 - unmounted.
Running lsblk with CLI tool again, does not display the mountpoint.
Running lsblk from TTY0, still does display the mountpoint.
It appears that when mount/umount is run with my CLI tool, the commands execute privately for the sudo session runner.
umounting after TTY0 has mounted does unmount it, but again, privately.
Logging in on TTY0 as http and running lsblk after having mounted the RAID from the CLI tool, the mountpoint is not displayed. This kind of negates the "executes privately for the sudo session runner" theory.
I've also found this passage in IBM's documentation:
The mount command uses the real user ID, not the effective user ID, to determine if the user has appropriate access. System group members can issue device mounts, provided they have write access to the mount point and those mounts are specified in the /etc/filesystems file. Users with root user authority can issue any mount command.
I hope I've explained this well enough without being too confusing, and I hope you'll be able to help me catch the issue here.
Update (2013-10-28)
I attempted a test with the CLI tool outside the web context: a simple PHP file that I exec'd as root and as a custom user.
In both scenarios, the mounting and unmounting were successful. So it must be something about Apache executing the commands, though I don't understand why the other commands work.
Question
What is causing the issue, and how do I overcome it?
In short, the hassle has been resolved.
It was Apache's corresponding systemd service, which had the PrivateTmp=true directive. Apparently, the directive executes the process in a new file system namespace.
This question, while I was attempting to debug and fix the issue, spawned numerous other posts around the internet:
https://unix.stackexchange.com/questions/97897/sudo-mount-from-webserver-apache-by-mod-php-result-not-visible-by-root
https://bbs.archlinux.org/viewtopic.php?id=172072
https://unix.stackexchange.com/questions/98182/a-process-run-as-root-when-performing-mount-is-mounting-for-self-how-to-ma/98191#98191
Each derived from something I learned in the process.
I started by digging into how mount works with the EUID. Soon, I found out that my simple sudo call was actually not executing with EUID 0. That led me to multiple queries on how to do so, which in return spawned command syntax like sudo -i 'su' -c 'mount /dev/sdb1 /mnt/firstone' and other derivatives.
Having no success with the solution, I looked further.
I started to think of adding the entry to /etc/fstab, which led me to loads of permission issues. Also, sudo and my CLI tool proved to be insufficient for the task. Time to bring out the big weapons: compile Apache with -DBIG_SECURITY_HOLE, also known as giving Apache the ability to run as root.
Let's append the entry to the tab, let's attempt to mount... and... fail!
After numerous tests, queries and whatnot, I stumbled upon per-process mounts, which led me here and opened up the dimension of namespaces to me.
Okay, that explains everything (checking /proc/<pid>/mounts validates it); now, let's gnaw deeper and see how to overcome it.
Again, after numerous attempts and no success, I started posting questions based on my fresh knowledge of namespaces. Narrowing the questions down and making them more technical (at least I think I did) eventually led to user hiciu, who pointed me in the systemd direction, specifically Apache's service and its PrivateTmp setting.
Voila! ...apparently systemd can enforce new namespaces.
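If you want to verify from PHP whether you are inside such a namespace, here is a small sketch (comparing the ns links against PID 1 usually needs root, so it falls back to comparing the mount tables, which are world-readable):
<?php
// Sketch: detect a private mount namespace (e.g. caused by systemd's
// PrivateTmp=true) by comparing this process against PID 1.
$selfNs = @readlink('/proc/self/ns/mnt');
$initNs = @readlink('/proc/1/ns/mnt');   // usually requires root
if ($selfNs !== false && $initNs !== false) {
    echo $selfNs === $initNs ? "same mount namespace as PID 1\n"
                             : "private mount namespace\n";
} else {
    // Fallback: a diverging mount table is a strong hint of a separate namespace.
    $same = file('/proc/self/mounts') == file('/proc/1/mounts');
    echo $same ? "mount tables match\n" : "mount tables differ (likely a separate namespace)\n";
}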
I had the same strange behavior of Apache and spent more than 3 days without a working solution. Then, luckily, I found this post, and as you described, PrivateTmp caused the issue. In my case, I tried to mount a drive from PHP:
<?php
...
exec("sudo mount /dev/sda1 /mnt/drive", $output, $ret);
...
?>
When I ran the above code from a web browser, the exec function returned 0 (success) and I was even able to list the contents of the mounted drive from within the code:
exec("ls /mnt/drive", $o, $r);
foreach ($o as $line){
echo $line.'<BR>';
}
But when I tried to find the mounted drive from the CLI, I could not see it. I tried everything, including changing permissions, changing php.ini, etc. Nothing helped. Until now: changing
PrivateTmp=false
in
/lib/systemd/system/apache2.service
does the trick. Thank you very much for sharing!
I was searching for this, and it looks like this behavior is implemented and detectable from PHP via chroot:
system('ischroot; echo $?');
This prints 0 with the setting PrivateTmp=true (saying "you are in a chroot") and 1 with PrivateTmp=false.
I want to make SVN updates easier by calling a PHP script.
I created this PHP script:
$cmd = "svn update https://___/svn/website /var/www/html/website/ 2>&1";
exec($cmd, $out);
As the user running the script is apache (not root), I get some permission errors.
If I change the owner of every directory to apache (or chmod everything to 777) I have another problem. Because I use the https protocol, the apache user has to permanently accept the certificate of the SVN server. I tried to do "su - apache" and accept the certificate, but the OS says that "apache" is not a valid user. I also don't know how I could accept the certificate with the exec() function.
Any ideas? How can I make SVN updating easier?
Is the error telling you that the user isn't a valid svn user? If apache is the user running httpd, you should be able to su to it. This is the script I use:
/usr/bin/svn --config-dir=/home/user/.subversion --username=svnuser --password=svnpass update
Once the password is saved you can remove it from the command. Again, make sure the user/pass above is a valid SVN user.
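To deal with the https certificate prompt from the question, the update can also be run completely non-interactively; a sketch from PHP (the working-copy path, config dir and credentials are placeholders):
<?php
// Sketch: run svn non-interactively so no certificate or password prompt can
// block the update when it is executed by the apache user.
$cmd = 'svn update /var/www/html/website'
     . ' --config-dir /var/www/.subversion'          // a config dir writable by the web user
     . ' --username svnuser --password svnpass'
     . ' --non-interactive --trust-server-cert 2>&1';
exec($cmd, $out, $ret);
echo implode("\n", $out) . "\n(exit code: $ret)\n";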
Lately I've actually migrated to using Hudson for SVN updates, as you can schedule it as well as run it manually and do a bunch of other tasks; plus, you can view the SVN logs for each commit as well as any console errors.
Why not use the PHP SVN functions instead of (insecure) exec?
http://www.php.net/manual/en/function.svn-auth-set-parameter.php has good examples of the authentication options.
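A sketch of that approach, assuming the PECL svn extension is installed and enabled (credentials and path are placeholders):
<?php
// Sketch: update a working copy with the PECL svn extension instead of exec().
svn_auth_set_parameter(SVN_AUTH_PARAM_DEFAULT_USERNAME, 'svnuser');
svn_auth_set_parameter(SVN_AUTH_PARAM_DEFAULT_PASSWORD, 'svnpass');
// accept the untrusted https certificate without an interactive prompt
svn_auth_set_parameter(PHP_SVN_AUTH_PARAM_IGNORE_SSL_VERIFY_ERRORS, true);
$rev = svn_update('/var/www/html/website');
echo $rev === false ? "update failed\n" : "updated to revision $rev\n";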
Use getent passwd apache in the shell. This will return the shell of apache. Most likely, it is /bin/nologin or /bin/false. Change this to /bin/bash. You'll also need to specify a home directory for the user and create it on the file system.
UPDATE: getent passwd apache will actually return the entry from the /etc/passwd file for the apache user. The last field in this string is the shell.
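For reference, a sketch of those steps (assuming a RHEL/CentOS-style apache account; adjust the user name, shell and home path to your system):
getent passwd apache                               # inspect the current home dir and shell
sudo mkdir -p /home/apache
sudo chown apache:apache /home/apache
sudo usermod -d /home/apache -s /bin/bash apache
su - apache                                        # should now drop you into a shell as apache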