Different answer from same script depending on caller (php exec() vs. console) - php

I run Bash scripts from PHP (5.4) with root permissions through a binary wrapper (see "Execute root commands via PHP"), which works perfectly fine except for the following example. Additionally, I am using zfs-on-linux on CentOS 7.
I prepared two simple example Bash scripts:
test_zfsadd:
#!/bin/bash
#ARGS=1
err="$(zfs create "$1" 2>&1 > /dev/null)"
if [ $? -ne 0 ]
then
echo "$err"
exit 1
fi
echo "OK"
exit 0
test_zfspart:
#!/bin/bash
#ARGS=1
msg="$(zfs get mounted "$1" -H | awk '{print $3}')"
echo "$msg"
exit 0
When I call the corresponding binaries from PHP with, e.g.,
<?php
$partition = 'raid1/testpart';
$ret = shell_exec("/path/test_zfsadd_bin $partition");
echo "<p>Return value: $ret</p>\n";
$ret = shell_exec("/path/test_zfspart_bin $partition");
echo "<p>Is mounted: $ret</p>\n";
the output is:
Return value: OK
Is mounted: yes
This looks good, but when I call 'test_zfspart_bin raid1/testpart' directly from the console, I get the correct result, which is
no
(meaning that the partition is NOT mounted, as verified in /proc/mounts). So I get two different answers from the same script, depending somehow on the context. I first thought it had something to do with the SUID bit, but calling the script from the console as an unprivileged user works fine. If I try (as root)
zfs mount raid1/testpart
in the console, I get
filesystem 'raid1/testpart' is already mounted
cannot mount 'raid1/testpart': mountpoint or dataset is busy
which is weird. I also can't destroy the 'partition' from the console; this works only from PHP. On the other hand, if I create a partition as root directly from bash and try to delete it via PHP, it doesn't work either. It looks like the partitions are somehow separated from each other by context. Everything gets synchronized again if I do
systemctl restart httpd
I think Apache or PHP is keeping the ZFS system busy to some extent, but I have absolutely no clue why or how. Any explanation or workaround is much appreciated.

I figured it out myself in the meantime. The problem was not the Apache process itself, but how it is started by systemd. There is an option called 'PrivateTmp', which is set to 'true' in the httpd service file by default (at least on CentOS 7). The man page says
PrivateTmp=
Takes a boolean argument. If true sets up a new file system
namespace for the executed processes and mounts a private /tmp
directory inside it, that is not shared by processes outside of
the namespace. This is useful to secure access to temporary files
of the process, but makes sharing between processes via /tmp
impossible. Defaults to false.
This explains it all, I think. The newly created ZFS partition is mounted in this 'virtual' file system and is therefore invisible to the rest of the system, which is not desired in this case. The Apache process is not able to mount or unmount file systems outside its namespace. After disabling the option, everything worked as expected.
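For reference, rather than editing the packaged unit file in place, the option can be overridden with a systemd drop-in, which survives package updates (a minimal sketch assuming CentOS 7 paths; the drop-in file name is arbitrary):
mkdir -p /etc/systemd/system/httpd.service.d
cat > /etc/systemd/system/httpd.service.d/privatetmp.conf <<'EOF'
[Service]
PrivateTmp=false
EOF
systemctl daemon-reload
systemctl restart httpd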

Related

suPHP and Lazarus console application running into weird shell malfunctions

I do apologize for the title, but I couldn't find any other explanation. My company is running a development server with the latest LTS Ubuntu + Apache2 + suPHP. To manage it, I am writing a Zend2 and Lazarus application. The web part with Zend runs well.
The problem is the console application written in Lazarus. It runs a couple of classes to create databases and users, to download frameworks and so on. It should also run a couple of shell commands for administration purposes (with root permissions). To acquire the rights, I am using a pretty ugly solution: echo mymagicpassword | sudo -S mymagiccommand.
Here's a snippet:
constructor TRootProcess.Create(AOwner: TComponent);
begin
  inherited Create(AOwner);
  Options:=[poUsePipes,poWaitOnExit];
  Executable:='/bin/sh';
  Parameters.Add('-c');
  Parameters.Add('echo %pwd% | sudo -S ');
end;
function TRootProcess.ExecuteCommand(command: String): String;
var
  str: TStringList;
begin
  str:=TStringList.Create;
  command:=Copy(Parameters.GetText, 0, Length(Parameters.GetText)-1)+command;
  command:=StringReplace(command,'%pwd%','mymagicpassword',[rfReplaceAll]);
  Parameters.SetText(PChar(command));
  Execute;
  str.Clear;
  str.LoadFromStream(Output);
  Result:=str.Text;
end;
If I run this application by hand, everything runs well. But if I run it from the PHP application using shell_exec, the whole application runs (even the very last log entries) except for starting other shell applications (ls, cp, mkdir, useradd, chmod and so on).
I actually have no idea what the problem is anymore.
I don't get any errors in stdout/stderr, the suPHP log or even the Apache2 log.
Also, running from PHP went well for about a week and then apparently stopped working.
Thanks in advance
The problem is not really well described. At the very least, the line with Copy( is wrong, since strings start with index 1, not 0.
The LoadFromStream is also not safe. Especially with larger outputs, it might not complete. See "TProcess large I/O" in the Lazarus/FPC wiki.
Finally, you spawn new shells. After the command is done, the shell will be destroyed, and the next command will have yet another new shell. So doing "cd" is pretty pointless that way.
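A quick illustration of the point:
sh -c 'cd /tmp'   # the cd happens inside this shell, which then exits
sh -c 'pwd'       # a fresh shell: prints the original directory, not /tmp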

(U)Mounting through "exec" with "sudo". The user is a "sudoer" with NOPASSWD

I've already taken a look at both of these:
PHP: mount USB device
Error on mount through php "exec"
But, my problem appears to be different.
I have built an extensive library that's used to call Linux CLI tools. It's built around proc_open, its family, and POSIX.
I'm using it to successfully execute all CLI tools (until I hit this mount/umount bug).
Now, I'm building a RAID setup routine that involves partprobe, parted (rm, mklabel, mkpart), mdadm (stop, zero-superblock, create), dd, mkfs and ultimately mount/umount.
There are actually two graceful routines, one for assembling the RAID, the other one for disassembly.
As the title says, the problem lies in mount and umount. The other tools and their commands listed above execute successfully.
Environment
Arch Linux - Linux stone 3.11.6-1-ARCH #1 SMP PREEMPT Fri Oct 18 23:22:36 CEST 2013 x86_64 GNU/Linux.
Arch is running with systemd - it might be that this is somehow affecting the mounting.
An Apache web server (latest) that runs mod_php (latest). Apache is run as http:http.
http is in wheel group, and wheels are sudoers - %wheel ALL=(ALL) NOPASSWD: ALL.
Please don't start a discussion about giving the web server full root capabilities - the unit is a NAS, it's running a custom WebOS, and it's meant for intranet use only. Even if there are hacking attempts, those will most probably break the whole system, and that's not healthy for the customer. The NAS is storage for Mobotix IP cameras; it runs a load of dependent services, and the units are already deployed at over 30 sites with no issues. In short, the web server is not serving a web site, but an OS.
Before writing this, I added http explicitly to sudoers for a quick test - http ALL=(ALL) NOPASSWD: ALL - it didn't work.
Problem
The last command run in the RAID assembly process is mount /dev/md/stone\:supershare /mnt/supershare, which returns with an exit code of 0.
Performing a subsequent mount results in:
mount: /dev/md127 is already mounted or /mnt/supershare busy
/dev/md127 is already mounted on /mnt/supershare
with an exit code of 32. So, the array is mounted somewhere.
Performing umount /dev/md/stone\:supershare after the above mount returns with an exit code of 0. Performing a subsequent umount results in:
umount: /dev/md/stone:supershare: not mounted
The commands above are auto-run with sudo.
So, it's mounted successfully and unmounted successfully, but... I'm logged in as root on TTY0, running lsblk after having performed the mount operation, and yet I do not see the mountpoint:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 55.9G 0 disk
├─sda1 8:1 0 1M 0 part
├─sda2 8:2 0 1G 0 part [SWAP]
├─sda3 8:3 0 12G 0 part /
└─sda4 8:4 0 16.6G 0 part /home
sdb 8:16 0 931.5G 0 disk
└─sdb1 8:17 0 899M 0 part
└─md127 9:127 0 1.8G 0 raid0
sdc 8:32 0 931.5G 0 disk
└─sdc1 8:33 0 899M 0 part
└─md127 9:127 0 1.8G 0 raid0
Attempting the same mount command from TTY0 mounts it successfully (lsblk then displays it).
If I mount it with my CLI tool, then run mount -l and lsblk also with the CLI tool, the mountpoint is visible.
Immediately running both commands from TTY0 as root does not display the mountpoint.
Rebooting, to reset all mounts (not automounted), then, mounting from TTY0 and running lsblk from TTY0 displays the mountpoint.
Then, running lsblk with CLI tool, displays the mountpoint.
Then, running umount with CLI tool, exit code 0 - unmounted.
Running lsblk with CLI tool again, does not display the mountpoint.
Running lsblk from TTY0, still does display the mountpoint.
It appears that when mount/umount is run with my CLI tool, it executes the commands privately for the sudo session runner.
umounting after TTY0 has mounted does unmount it, but again - privately.
Logging in on TTY0 as http and running lsblk after having mounted the RAID from the CLI tool, the mountpoint is not displayed. This somewhat negates the "executes privately for the sudo session runner" theory.
I've also found this material in IBM's documentation:
The mount command uses the real user ID, not the effective user ID, to determine if the user has appropriate access. System group members can issue device mounts, provided they have write access to the mount point and those mounts are specified in the /etc/filesystems file. Users with root user authority can issue any mount command.
I hope I've explained this well enough and not too confusingly; I also hope that you guys will be able to help me catch the issue here.
Update (2013-10-28)
I attempted a test with the CLI tool outside the web context: a simple PHP file that I'd exec as root and as a custom user.
In both scenarios, the mounting and unmounting were successful. So, it must be something about Apache executing the commands, though I don't understand why the other commands work.
Question
What is causing the issue, and how do I overcome it?
In short, the hassle has been resolved.
It was Apache's corresponding systemd service, which had the PrivateTmp=true directive. Apparently, the directive executes the process within a new file system namespace.
While attempting to debug and fix the issue, this question spawned numerous other posts around the internet.
https://unix.stackexchange.com/questions/97897/sudo-mount-from-webserver-apache-by-mod-php-result-not-visible-by-root
https://bbs.archlinux.org/viewtopic.php?id=172072
https://unix.stackexchange.com/questions/98182/a-process-run-as-root-when-performing-mount-is-mounting-for-self-how-to-ma/98191#98191
Each derived from stuff I learned in the process.
I started by digging deeper into how mount works with the EUID. Soon, I found out that my simple sudo call was actually not executing with EUID 0. That led me to multiple queries on how to do so, which in turn spawned command syntax like sudo -i 'su' -c 'mount /dev/sdb1 /mnt/firstone' and other derivatives.
Having no success with the solution, I looked further.
I started thinking of adding the entry to /etc/fstab, which led me to loads of permission issues. Also, sudo and my CLI tool proved to be incomplete for the task. Let's bring out the big weapons - let's compile Apache with -DBIG_SECURITY_HOLE, also known as giving Apache the ability to run as root.
Let's append the entry to the tab, let's attempt to mount... and... fail!
After numerous tests, queries and whatnot, I stumbled upon per-process mounts, which led me here and opened up the dimension of namespaces to me.
Okay, that explains everything - checking /proc/<pid>/mounts validates it. Now, let's dig deeper and see how to overcome it.
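For the record, the check looks like this from a root shell (a sketch; 12345 stands in for an Apache worker PID):
pgrep httpd | head -n 1               # pick an Apache worker PID
grep supershare /proc/12345/mounts    # the mount is visible in Apache's namespace
grep supershare /proc/self/mounts     # but absent from the shell's own namespace
readlink /proc/12345/ns/mnt /proc/self/ns/mnt   # differing mnt:[...] ids confirm separate namespaces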
Again, after numerous attempts and no success, I started posting questions based on my fresh knowledge of namespaces. Narrowing the questions down and becoming more technical (at least I think I did) eventually led to a user, hiciu, who pointed me in the systemd direction, specifically Apache's service - PrivateTmp.
Voila! ...apparently systemd can enforce new namespaces.
I had the same strange behavior with Apache and spent more than 3 days without any working solution. Then, luckily, I found this post, and as you described, PrivateTmp caused the issue. In my case, I tried to mount a drive from PHP:
<?php
...
exec("sudo mount /dev/sda1 /mnt/drive", $output, $ret);
...
?>
When I ran the above code from a web browser, the exec function returned 0 (success) and I was even able to list the mounted drive within the code:
exec("ls /mnt/drive", $o, $r);
foreach ($o as $line){
echo $line.'<BR>';
}
But when I tried to look for the mounted drive from the CLI, I could not see it. I tried everything, including changing permissions, changing php.ini, etc. Nothing helped. In the end, changing
PrivateTmp=false
in
/lib/systemd/system/apache2.service
does the trick. Thank you very much for sharing!
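Note that after editing a unit file, systemd has to reload its configuration before the restart picks up the change:
systemctl daemon-reload
systemctl restart apache2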
I was searching for this, and it looks like this behavior is implemented and detectable from PHP via chroot:
system('ischroot;echo $?');
gives 0 with the setting PrivateTmp=true (saying 'you are in a chroot') and 1 with PrivateTmp=false.

PHP exec() not working properly

I am having difficulty with the PHP exec() function. It seems not to be calling certain commands. For instance, the code echo exec('ls'); produces no output whatsoever (it should; there are files in the directory). The main reason this is a problem for me is that I'm trying to execute a .jar from a PHP exec() call.
As far as I know I'm calling the java program properly, but I'm not getting any of the output. The .jar can be executed from the command line on the server. (For the record, it's an Apache server.)
My php for the .jar execute looks like this:
$output = array();
exec('java -jar testJava.jar', $output);
print_r($output);
All I get for output from this exec() call is Array().
I have had success with exec() executing 'whoami' and 'pwd'. I can't figure out why some functions are working and some aren't. I'm not the most experienced person with PHP either, so I'm not too sure how to diagnose the issue. Any and all help would be appreciated.
The reason you are not able to execute ls is permissions.
If you are running the web server as user A, then you can only ls those directories which grant permissions to user A.
You can either change the permissions of the directory, or you can change the user the server runs as by editing the httpd.conf file (I am assuming that you are using Apache).
If you are changing the permissions of the directory, then make sure that you also change the permissions of the parent directories.
To change the web server user, follow following steps:
Open the following file:
vi /etc/httpd/conf/httpd.conf
Search for
User apache
Group apache
Change the user and group name. After changing the user and group, restart the server using the following command.
/sbin/service httpd restart
Then you will be able to execute all commands which can be run by that user.
EDIT:
The 'User' should be a non-root user in httpd.conf. By default, Apache does not serve pages when run as root. You have to set the user to a non-root user or else you will get an error.
If you want to force Apache to run as root, then you have to set an environment variable as below:
env CFLAGS=-DBIG_SECURITY_HOLE
Then you have to rebuild apache before you can run it as root.
I have found the issue - SELinux was blocking PHP from accessing certain functions. Putting SELinux into permissive mode has fixed the issues (although I'd rather not leave SELinux in permissive mode; I'd rather find a way of allowing certain operations if I can).
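(If you'd rather keep SELinux enforcing, the usual route is to build a targeted policy module from the recorded denials instead of going permissive; a sketch, with an arbitrary module name:)
grep httpd /var/log/audit/audit.log | audit2allow -M httpd_exec_local
semodule -i httpd_exec_local.pp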
I have a solution for this case:
a command runs from the console, but not from PHP via exec/system/passthru.
The issue is the path to the command. It works with the absolute path to the command.
So that:
wkhtmltopdf "htm1Eufn7.htm" "pdfIZrNcb.pdf"
becomes:
/usr/local/bin/wkhtmltopdf "htm1Eufn7.htm" "pdfIZrNcb.pdf"
And now it works from PHP via exec.
You can find where the command binary lives via whereis wkhtmltopdf.
I tore my hair out trying to work out why PHP exec works from the command line but not from Apache. In the end, I found the following SELinux booleans:
getsebool -a | grep httpd
httpd_setrlimit --> off
httpd_ssi_exec --> off
httpd_sys_script_anon_write --> off
USE: setsebool -P httpd_ssi_exec 1
SEE: https://linux.die.net/man/8/httpd_selinux
Your problem is not an execution issue but the syntax of the exec command. The second argument is filled with an array that contains a single line of the output in each index. The return value of the exec function contains the final line of the command's output. To show the output you can use:
foreach($output as $line) echo "$line\n";
See http://php.net/manual/en/function.exec.php for details. You can also get the command's exit value with a third argument.

bash script execution from php and instantaneous output back to webpage

I have a collection of bash and Perl scripts to
develop a directory structure desired for deployment on linux box
(optionally) export code from svn
build a package from this source
This is working well from the terminal. Now my client requests a web interface to this process.
E.g., a "Create New Package" button on a certain page would invoke the above steps one by one and return the output to the user as the script echoes it, not only after the whole script has executed.
Is it possible to send instantaneous output from a bash script to the webpage or PHP script which has invoked it through the program execution functions (system, exec, passthru ... or anything else that suits this process flow)?
What is an elegant way to do this?
What security precautions should I take while doing such a thing (if possible)?
Edit
After some searching I have found part of the solution, but it is still not working:
$cmd = 'cat ./password.txt|sudo -S ./setup.sh ';
$descriptorspec = array(
0 => array("pipe", "r"), // stdin is a pipe that the child will read from
1 => array("pipe", "w"), // stdout is a pipe that the child will write to
2 => array("pipe", "w") // stderr is a pipe that the child will write to
);
flush();
$process = proc_open($cmd, $descriptorspec, $pipes, './', array());
echo "<pre>";
if (is_resource($process)) {
while ($s = fgets($pipes[1])) {
print "Message:".$s;
flush();
}
while ($s = fgets($pipes[2])) {
print "Error:".$s;
flush();
}
}
echo "</pre>";
output: (webpage)
Error:WARNING: Improper use of the sudo command could lead to data loss
Error:or the deletion of important system files. Please double-check your
Error:typing when using sudo. Type "man sudo" for more information.
Error:
Error:To proceed, enter your password, or type Ctrl-C to abort.
Error:
Error:Password:
Error:Sorry, try again.
Error:Password:
Error:Sorry, try again.
Error:Password:
Error:Sorry, try again.
Error:sudo: 3 incorrect password attempts
The first issue I am having now is passing the sudo password.
Help please!
I would use a kind of master/slave design. The slave would be your Perl/bash script, just doing a job (packaging, compiling, exporting code or so) and feeding a log entry.
The master would be your php process. So the principle is the following: the master and the slave share a communication channel, and communicate asynchronously from that channel.
You could imagine a database like:
create table tasks ( id INT primary key, status INT, argument VARCHAR(100));
Your PHP page should switch on the user's choice and filter input:
switch ($_GET['action']) {
    case 'export':
        $argument = sanitize($_GET['arg']);
        add_task('export', $argument);
        break;
    case '...':
        // ...
}
and the add_task function could be something like:
function add_task($action, $arg)
{
    return $db->add('tasks', array($action, NEW_TASK, $arg));
}
The slaves could be run via a cron job, querying the database and feeding back the progress of the task.
The pros are:
independent systems, which eases evolution.
if a client gets disconnected, the job is never lost
easier to secure
The cons are:
a little more complicated at the beginning.
less reactive, because of the polling time of the slaves running (for instance if they run every 5 minutes)
less output than the direct output of the command
Notice that you can then implement XML-RPC-like triggers to run the slaves, rather than using a message-passing system.
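Under these assumptions, a crude cron-driven slave might look like the following sketch (database name, credentials and the job script are illustrative; status 0 marks a new task, 2 a finished one, following the table above):
#!/bin/bash
# slave.sh - run from cron: fetch one pending task and process it
# assumes MySQL credentials in ~/.my.cnf
mysql -N -e "SELECT id, argument FROM tasks WHERE status = 0 LIMIT 1" mydb |
while read -r id arg; do
    ./export.sh "$arg" \
        && mysql -e "UPDATE tasks SET status = 2 WHERE id = $id" mydb
done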
The simplest approach is to use shell_exec for your purpose. It executes a command via shell and returns the complete output as a string, hence you can display it on your website.
If this doesn't suit your purpose, because maybe you want some responses while waiting for the command to finish, check out the other program execution functions available in php (hint: there are a few good comments on the manual pages).
Keep in mind that when invoking command-line scripts this way, generated output will have the file owner, group and permissions of your web server (e.g. wwwrun or whatever). If parts of your deployment need a separate owner, group and/or file permissions, you have to set them manually, either in your scripts or after invoking shell_exec (chmod, chown and chgrp can deal with this in PHP).
About security:
A lot of web-based applications put that kind of function into a separate installation directory and kindly ask you to remove this directory after installing. I even remember some of them nagging admins quite persistently until it is removed. This is an easy way of preventing this script from being invoked by the wrong hands at the wrong time. If your application might need it after installing, then you have to put it into an area that only authorized users have access to (e.g. an admin area or something similar).
You can use the Comet web application model to do real time updating of the results page.
For the sudo problem, as a quick and dirty solution I'd use a restricted set of commands that the web server can execute without a password. Example (add in /etc/sudoers):
apache ALL=(ALL) NOPASSWD: /sbin/service myproject *
which would allow apache to run /sbin/service myproject stop, /sbin/service myproject start and so on.
Take a look at Jenkins; it does the web part nicely, and you'll only have to add your scripts to the build.
A better solution would be, as suggested by Aif, to separate the logic. One daemon process waiting for tasks and a web application visualizing the results.
For security, always use escapeshellarg and escapeshellcmd when executing system commands via PHP. It would also be advisable to have the user within a chroot'd directory that is as limited as possible.
You seem to have solved the issue of getting stdout and stderr output to your webpage by using proc_open. Now the problem looks to be executing the script via sudo.
From a security perspective, having a PHP app run a script as root (via sudo) makes me cringe. Having the root password in password.txt is a pretty huge security hole.
What I would do (if possible) is to make whatever changes necessary so that setup.sh can be run by the unix user that is running Apache. (I'm assuming you're running PHP with Apache.) If setup.sh is going to be executed by the webserver, it should be able to run it without resorting to sudo.
If you really need to execute the script as a different user, you may check into suPHP, which is designed to run PHP scripts as a non-standard user.
Provide automated sudo rights to a specific user using the NOPASSWD option of /etc/sudoers, then run the command with the prefix sudo -u THE_SUDO_USER to have the command execute as that user. This avoids the security hole of giving the entire apache user sudo rights, but still allows sudo to be executed on the script from within Apache.
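For example (a sketch; the user and script names are illustrative):
# /etc/sudoers.d/deploy - let the web server run one specific script as one dedicated user
apache ALL=(deployuser) NOPASSWD: /usr/local/bin/setup.sh
# from PHP, the command would then be prefixed accordingly:
sudo -u deployuser /usr/local/bin/setup.sh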

Sync local and remote folders using rsync from php script without typing password

How can I sync local and remote folders using rsync from inside a PHP script without being prompted to type a password?
I have already set up a public key to automate the login on remote server for my user.
So this is running without any problem from cli:
rsync -r -a -v -e "ssh -l user" --delete ~/local/file 111.111.11.111:~/remote/;
But, when I try to run the same from a PHP script (on a webpage in my local server):
$c='rsync -r -a -v -e "ssh -l user" --delete ~/local/file 111.111.11.111:~/remote/';
//exec($c,$data);
passthru($c,$data);
print_r($data);
This is what I receive:
255
And no file is uploaded from local to remote server.
Searching on the net I found this clue:
"You could use a combination of BASh and Expect shell code here, but it wouldn't be very secure, because that would automate the root login. Generate keys for nobody or apache (whatever user as which Apache is being executed). Alternatively, install phpSuExec, Suhosin, or suPHP so that the scripts are run as the user for which they were called."
Well, I don't know how to "automate the root login" for PHP, which is running as "apache". Maybe to make the script run as the actual user is a better idea, I don't know... Thanks!
UPDATE:
As this works fine:
passthru('ssh user@111.111.11.111 | ls',$data);
returning a list of the home folder, I can be sure there is no problem with the automatic login. It must be something with rsync running from the PHP script.
I also did chmod -R 0777 on the local and remote folders, just in case. But I still haven't got it working.
UPDATE:
The whole problem is related to the fact that "when run on the command line, ssh uses the keyfiles in $HOME/.ssh/, but under PHP, it's run as Apache's user, so it might not have a $HOME, much less a $HOME/.ssh/id_dsa. So, either you specifically tell it which keyfile to use, or manually create that directory and its contents."
While I could not get rsync to work, this is how I managed to transfer the file from local to remote:
if($con=ssh2_connect('111.111.11.111',22)) echo 'ok!';
if(ssh2_auth_password($con,'apache','xxxxxx')) echo ' ok!';
if(ssh2_scp_send($con,'localfile','/remotefolder',0755)) echo ' ok!';
Local file needs: 0644
Remote folder needs: 0775
I wonder whether the solution would be to run PHP as the same user bash does...
@Yzmir Ramirez gave this suggestion: "I don't think you want to 'copy the key somewhere where apache can get to it' - that's a security violation. Better to change the script to run as a secure user and then set up the .ssh keys for passwordless login between servers."
This is something I have to invest some more time in. If somebody knows how to do this, it would be of great help.
When I set this same thing up in an application of ours, I also ran into 255 errors, and found that they can mean a variety of things; it's not a particularly helpful error code. In fact, even now, after the solution's been running smoothly for well over a year, I still see an occasional 255 show up in the logs.
I will also say that getting it to work can be a bit of a pain, since there are several variables involved. One thing I found extremely helpful was to construct the rsync command in a variable and then log it. This way, I can grab the exact rsync command being used and try to run it manually. You can even su to the apache user and run it from the same directory as your script (or whatever your script is setting as the cwd), which will make it act the same as when it's run programmatically; this makes it far simpler to debug the rsync command since you're not having to deal with the web server. Also, when you run it manually, if it's failing for some unstated reason, add in verbosity flags to crank up the error output.
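Concretely, re-running the logged command as the web server's user might look like this (a sketch; user name, shell and paths are illustrative, and -s overrides the nologin shell many distributions give the apache user):
su -s /bin/bash apache -c 'cd /path/to/script/cwd && rsync -avv -e "ssh -i /path/to/key" ./ user@host:~/remote/'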
Below is the code we're using, slightly edited for security. Note that our code actually supports rsync'ing to both local and remote servers, since the destination is fully configurable to allow easy test installations.
try {
    if ('' != $config['publishSsh']['to']['remote']):
        //we're syncing with a remote server
        $rsyncToRemote = escapeshellarg($config['publishSsh']['to']['remote']) . ':';
        $rsyncTo = $rsyncToRemote . escapeshellarg($config['publishThemes']['to']['path']);
        $keyFile = $config['publishSsh']['to']['keyfile'];
        $rsyncSshCommand = "-e \"ssh -o 'BatchMode yes' -o 'StrictHostKeyChecking no' -q -i '$keyFile' -c arcfour\"";
    else:
        //we're syncing with the local machine
        $rsyncTo = escapeshellarg($config['publishThemes']['to']['path']);
        $rsyncSshCommand = '';
    endif;
    $log->Message("Apache running as user: " . trim(`whoami`), GLCLogger::level_debug);
    $deployCommand = "
        cd /my/themes/folder/; \
        rsync \
        --verbose \
        --delete \
        --recursive \
        --exclude '.DS_Store' \
        --exclude '.svn/' \
        --log-format='Action: %o %n' \
        --stats \
        $rsyncSshCommand \
        ./ \
        $rsyncTo \
        2>&1
    "; //deployCommand
    $log->Message("Deploying with command: \n" . $deployCommand, GLCLogger::level_debug);
    exec($deployCommand, $rsyncOutputLines, $returnValue);
    $log->Message("rsync status code: $returnValue", GLCLogger::level_debug);
    $log->Message("rsync output: \n" . implode("\n", $rsyncOutputLines), GLCLogger::level_debug);
    if (0 != $returnValue):
        $log->Message("Encountered error while publishing themes: <<<$returnValue>>>");
        throw new Exception('rsync error');
    endif;
    /* ... process output ... */
} catch (Exception $e) {
    /* ... handle errors ... */
}
A couple of things to notice about the code:
I'm using exec() so that I can capture the output in a variable. I then parse it so that I can log and report the results in terms of how many files were added, modified, and removed.
I'm combining rsync's standard output and standard error streams and returning both. I'm also capturing and checking the return result code.
I'm logging everything when in debug mode, including the user Apache is running as, the rsync command itself, and the output of the rsync command. This way, it's trivially easy to run the same command as the same user with just a quick copy-and-paste, as I mention above.
If you're having problems with the rsync command, you can adjust its verbosity without impact and see the output in the log.
In my case, I was able to simply point to an appropriate key file and not be overly concerned about security. That said, some thoughts on how to handle this:
Giving Apache access to the file doesn't mean that it has to be in the web directory; you can put the file anywhere that's accessible by the Apache user, even on another network machine. Depending on your other layers of security, this may or may not be an acceptable compromise.
Depending on the data you're copying, you may be able to heavily lock down the permissions of the ssh user on the other machine, so that if someone unscrupulous managed to get in there, they'd have minimal ability to do damage.
You can use suEXEC to run the script as a user other than the Apache user, allowing you to lock access to your key to that other user. Thus, someone compromising the Apache user simply would not have access to the key.
I hope this helps.
When you let the web server run the command, it runs it as its own user ("nobody" or "apache" or similar) and looks for a private key in that user's home directory/.ssh, while the private key you set up for your own user is in "/home/you/.ssh/keyfile", or however you named it.
Make ssh use that specific key with the -i parameter:
ssh -i /home/you/.ssh/keyfile
and it should work
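Applied to the rsync invocation from the question, that would look like (key path as in the example above):
rsync -r -a -v -e "ssh -l user -i /home/you/.ssh/keyfile" --delete ~/local/file 111.111.11.111:~/remote/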
A simple but indirect solution to this problem, which I am using, is the following.
Do NOT run rsync directly from PHP; it has issues, and it can be a security risk to do so.
Rather, just prepare two scripts.
In the PHP script, you just have to change the value of a flag file on the filesystem from 0 to 1.
Then, on the other side, make an rsync script that runs forever with the following logic:
if the file contains 0, do not run rsync; if it contains 1, run rsync, then change the value back to 0 after a successful run.
I am doing it as well.
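A minimal sketch of such a watcher (the flag path and polling interval are illustrative; the PHP side then only needs something like file_put_contents('/var/spool/rsync.flag', '1');):
#!/bin/bash
# watcher.sh - poll a flag file; run rsync only when the flag is set
FLAG=/var/spool/rsync.flag
while true; do
    if [ "$(cat "$FLAG" 2>/dev/null)" = "1" ]; then
        rsync -r -a -e "ssh -l user" --delete ~/local/file 111.111.11.111:~/remote/ \
            && echo 0 > "$FLAG"    # reset only after a successful run
    fi
    sleep 5
done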
