interactive prompts with proc_open() on psql query - php

I am trying to execute a few PostgreSQL DB commands from a web interface.
I use proc_open() to pipe to the Windows command prompt.
Because psql (and all the other Postgres commands) do not accept the password as an option, I must send the password to the write stream.
The code below causes the browser to hang. Either the resource is not being created, or the password is not being piped properly. Any suggestions are welcome at this point.
$cmd = '""C:\\Program files\\PostgreSQL\\9.0\\bin\\psql.exe" --host localhost --port 5432 -U postgres --dbname $database_name --command "$query""';
$p = proc_open($cmd,
    array(array("pipe","r"), array("pipe","w"), array("pipe","w")),
    $pipes);
if (is_resource($p)) {
    fwrite($pipes[0], "mypassword");
    fclose($pipes[0]);
    proc_terminate($p);
    proc_close($p);
}
[You'll notice the crazy double-double-quoting in the command -- this is apparently needed for windows for some reason.]
Work-arounds to this problem are welcome:
I previously tried using system() and exec(), but gave up since they don't handle interactive prompts. Is there a better option in PHP for interactive commands?
pg_query() is the main command for interacting with the Postgres DB, but pg_dump and pg_restore operations are not supported. Is there another way to back up and restore binary Postgres .backup files from PHP?

Don't mess around with the password prompt; it's better to put an entry into %APPDATA%\postgresql\pgpass.conf. The format is
hostname:port:database:username:password
Make sure to pick the %APPDATA% of the user running the webserver process.
If you're really set on interacting with the prompt, you could try the Expect library which people often use for such tasks... disclaimer: I've never used it on windows and have no idea how well it works there, or if it really is necessary. Maybe your fwrite is just missing a terminating newline.

As @c-ramseyer suggested, messing around with simulating an interactive prompt via proc_open() was a non-starter. PostgreSQL offers two methods to get around providing the password through the interactive prompt. Method (1) is to provide it via environment variables, as suggested by the other answer. Method (2) is to create a pgpass.conf file in the DB user's %APPDATA% directory. (To find that directory, run echo %APPDATA% from the Windows command prompt.) See the PostgreSQL documentation for how to write this one-line conf file. Neither of these methods worked for me, for reasons I still don't understand.
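For reference, a minimal sketch of method (1), assuming libpq picks up PGPASSWORD from the environment; the paths, database name and password here are placeholders:

```php
<?php
// Sketch of method (1): pass the password through the environment
// instead of the interactive prompt. All paths/credentials are examples.
function run_psql($psql, $db, $query, $password) {
    $cmd = escapeshellarg($psql)
         . ' --host localhost --port 5432 -U postgres'
         . ' --dbname ' . escapeshellarg($db)
         . ' --command ' . escapeshellarg($query);
    // PGPASSWORD is read by libpq, so psql should never prompt.
    $env = array('PGPASSWORD' => $password);
    $proc = proc_open($cmd,
        array(0 => array('pipe', 'r'), 1 => array('pipe', 'w'), 2 => array('pipe', 'w')),
        $pipes, null, $env);
    if (!is_resource($proc)) {
        return false;
    }
    fclose($pipes[0]);
    $out = stream_get_contents($pipes[1]); fclose($pipes[1]);
    $err = stream_get_contents($pipes[2]); fclose($pipes[2]);
    return array('stdout' => $out, 'stderr' => $err, 'return' => proc_close($proc));
}
```

Note that passing an env array to proc_open() replaces the child's whole environment, so on Windows you may also need to pass through variables like SystemRoot for the command to start at all.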
To solve the problem, I had to modify the pg_hba.conf file (the PostgreSQL Client Authentication Configuration File) to disable authentication. That file is located in the postgresql/data directory. I commented out the 2 lines of default settings at the bottom and replaced them with
#TYPE DATABASE USER CIDR-ADDRESS METHOD
host all all 127.0.0.1/32 trust
host all all ::1/128 trust
Now if I call postgres from php I include the --no-password option and the sucker works. Note that this solution is not secure, and only makes sense in my case because it is being used for an internal company application, with machines running offline. This method should not be used for a production site, your DB will get hacked. Here's the php code.
$commande_restore = '""'.$postgres_bin_directory.'pg_restore" --host 127.0.0.1 --port 5432 -U postgres --no-password -d '.$database_name.' '.$restore_file.'"';
$this->execute($commande_restore);
function execute($cmd, $env = null) {
    $proc = proc_open($cmd,
        array(0 => array('pipe','r'), 1 => array('pipe','w'), 2 => array('pipe','w')),
        $pipes, null, $env); // pass $env through; writing "$env = null" here would always discard it
    //fwrite($pipes[0], $stdin); //add an argument if you want to pass stdin
    fclose($pipes[0]);
    $stdout = stream_get_contents($pipes[1]); fclose($pipes[1]);
    $stderr = stream_get_contents($pipes[2]); fclose($pipes[2]);
    $return = proc_close($proc);
    return array('stdout' => $stdout, 'stderr' => $stderr, 'return' => $return);
}
It took me close to 2 weeks to solve this, so I hope it helps someone.

Related

Is it possible to run PHP exec() but hide params from the process list?

I'd like to have a properly protected PHP web-based tool to run a mysqlcheck for general database table health, but I don't want the password to be visible in the process list. I'd like to run something like this:
$output = shell_exec('mysqlcheck -Ac -uroot -pxxxxx -hhostname');
// strip lines that are OK
echo '<pre>'.preg_replace('/^.+\\sOK$\\n?/m', '', $output).'</pre>';
Unfortunately, with a shell_exec(), I have to include the password in the command line, but am concerned that the password will show up in the process list (ps -A | grep mysqlcheck).
Using MariaDB 5.5 on my test machine, mysqlcheck doesn't show the password in the process list, but my production machine isn't running MariaDB and is running a different OS, and I'd like to be on the safe side and not run these tests on the production side.
Do all versions of mysql also hide the password in the process list? Are my concerns a non-issue?
Yes, since at least MySQL 5.1, the client obscures the password on the command-line.
I found this blog by former MySQL Community Manager Lenz Grimmer from 2009, in which he linked to the relevant code in the MySQL 5.1 source. http://www.lenzg.net/archives/256-Basic-MySQL-Security-Providing-passwords-on-the-command-line.html
You could alternatively not pass the password on the command-line at all, and instead store the user/password credentials in a file which PHP has privileges to read, and then execute the client as:
mysqlcheck --defaults-extra-file=/etc/php.d/mysql-client.cnf
The filename is an example; you can specify any path you want. The point is that most MySQL client tools accept that --defaults-extra-file option. See http://dev.mysql.com/doc/refman/5.6/en/option-file-options.html for more information.
It is a real concern, and your OS will be showing it, just maybe not in the default view.
You could use proc_open() instead, which will allow you to read from and write to the streams of the process it opens.
mysqlcheck -Ac -uroot -p -hhostname
will prompt for the password, which you can then supply through the pipes from proc_open().
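A sketch of that proc_open() approach as a generic helper; note (assumption) that some MySQL client builds read the password from the controlling terminal rather than stdin, in which case piping it will not work and the --defaults-extra-file route above is the reliable one:

```php
<?php
// Hedged sketch: run a command and feed a secret on its stdin.
// Caveat (assumption): clients that read from /dev/tty will ignore this.
function run_with_stdin($cmd, $stdin) {
    $proc = proc_open($cmd,
        array(0 => array('pipe', 'r'), 1 => array('pipe', 'w'), 2 => array('pipe', 'w')),
        $pipes);
    if (!is_resource($proc)) {
        return false;
    }
    fwrite($pipes[0], $stdin);
    fclose($pipes[0]);               // send EOF so the child stops waiting
    $out = stream_get_contents($pipes[1]); fclose($pipes[1]);
    $err = stream_get_contents($pipes[2]); fclose($pipes[2]);
    return array('stdout' => $out, 'stderr' => $err, 'return' => proc_close($proc));
}

// e.g. run_with_stdin('mysqlcheck -Ac -uroot -p -hhostname', "secret\n");
```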

SSH connections with PHP

I'm currently working on a project to make changes to the system with PHP (e.g. change the config file of Nginx / restarting services).
The PHP scripts are running on localhost. In my opinion the best (read: most secure) way is to use SSH to make a connection. I am considering one of the following options:
Option 1: store username / password in php session and prompt for sudo
Using phpseclib with a username / password, save these values in a php session and prompt for sudo for every command.
Option 2: use root login
Using phpseclib with the root username and password in the login script. In this case you don't have to ask the user for sudo. (not really a safe solution)
<?php
include('Net/SSH2.php');
$ssh = new Net_SSH2('www.domain.tld');
if (!$ssh->login('root', 'root-password')) {
exit('Login Failed');
}
?>
Option 3: Authenticate using a public key read from a file
Use the PHP SSHlib with a public key to authenticate and place the pubkey outside the www root.
<?php
$connection = ssh2_connect('shell.example.com', 22, array('hostkey' => 'ssh-rsa'));
if (ssh2_auth_pubkey_file($connection, 'username', '/home/username/.ssh/id_rsa.pub', '/home/username/.ssh/id_rsa', 'secret')) {
echo "Public Key Authentication Successful\n";
} else {
die('Public Key Authentication Failed');
}
?>
Option 4: ?
I suggest you to do this in 3 simple steps:
First.
Create another user (for example runner) and make your sensitive data (like user/pass, private key, etc.) accessible only to this user. In other words, deny your PHP code any access to this data.
Second.
Then create a simple blocking FIFO pipe and grant write access to your PHP user.
Last.
Finally, write a simple daemon to read lines from the FIFO and execute them, for example via the ssh command. Run this daemon as the runner user.
To execute a command you just need to write your command in the file (fifo pipe). Outputs could be redirected in another pipe or some simple files if needed.
To create the FIFO, use this simple command:
mkfifo "FIFONAME"
The runner daemon would be a simple bash script like this:
#!/bin/bash
while [ 1 ]
do
    cmd=$(cat FIFONAME | ( read cmd ; echo $cmd ) )
    # Validate the command
    ssh 192.168.0.1 "$cmd"
done
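The PHP side of this design only needs write access to the FIFO. A minimal sketch, assuming the FIFO path matches the one the daemon reads (the function name and path are made up):

```php
<?php
// PHP side of the FIFO design: the web code only writes one command line
// into the pipe; the runner daemon above picks it up and executes it.
function queue_command($fifo, $cmd) {
    // Refuse anything but a single-line command; validate further as needed.
    if (strpos($cmd, "\n") !== false) {
        return false;
    }
    $fh = fopen($fifo, 'w');   // on a real FIFO this blocks until the daemon reads
    if ($fh === false) {
        return false;
    }
    $ok = fwrite($fh, $cmd . "\n") !== false;
    fclose($fh);
    return $ok;
}
```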
After this you can trust your code: even if your PHP code is completely compromised, your upstream server remains secure. In that case an attacker cannot access your sensitive data at all. He can send commands to your runner daemon, but if you validate the commands properly, there's no worry.
:-)
Method 1
I'd probably use the suid flag. Write a little suid wrapper in C and make sure all commands it executes are predefined and cannot be controlled by your PHP script.
So you create your wrapper and get the command from ARGV; a call could look like this:
./wrapper reloadnginx
Wrapper then executes /etc/init.d/nginx reload.
Make sure you chown the wrapper to root and chmod +s it. It will then spawn the commands as root, but since you predefined all the commands, your PHP script cannot do anything else as root.
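The calling side in PHP might look like the sketch below; the wrapper path and the action names are examples that would have to mirror the commands compiled into the C wrapper:

```php
<?php
// Sketch of the PHP caller for Method 1. "./wrapper" and the action
// names are assumptions; the whitelist mirrors what the C wrapper
// accepts, so PHP can never pass anything else through.
function run_privileged($action) {
    $allowed = array('reloadnginx', 'restartnginx');
    if (!in_array($action, $allowed, true)) {
        return false;   // refuse anything not explicitly whitelisted
    }
    return shell_exec('./wrapper ' . escapeshellarg($action));
}
```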
Method 2
Use sudo and set it up for passwordless execution for certain commands. That way you achieve the same effect and only certain applications can be spawned as root. You can't however control the arguments so make sure there is no privilege escalation in these binaries.
You really don't want to give a PHP script full root access.
If you're running on the same host, I would suggest to either directly write the configuration files and (re)start services or to write a wrapper script.
The first option obviously needs a very high privilege level, which I would not recommend. However, it will be the simplest to implement. Your other named options with SSH do not help much, as a possible attacker may still easily get root privileges.
The second option is way more secure and involves writing a program with high-level access, which only takes specified input files, e.g. from a directory. The PHP script is merely a frontend for the user and will write said input files, and thus only needs very low privileges. That way, you have a separation between your high and low privileges and therefore mitigate the risk, as it is much easier to secure a program with which a possible attacker may only work indirectly through text files. This option requires more work, but is a lot more secure.
You can extend option 3 and use SSH Keys without any library
$command = sprintf('ssh -i%s -o "StrictHostKeyChecking no" %s@%s "%s"',
    $path, $this->getUsername(), $this->getAddress(), $cmd);
return shell_exec($command);
I use it quite a lot in my project. You can have a look into SSH adapter I created.
The problem with above is you can't make real time decisions (while connected to a server). If you need real time try PHP extension called SSH2.
ps. I agree with you. SSH seems to be the most secure option.
You can use setcap to allow Nginx to bind to port 80/443 without having to run as root. Nginx only has to run as root to bind on port 80/443 (anything < 1024). setcap is detailed in this answer.
There are a few caveats though. You'll have to chown the log files to the right user (chown -R nginx:nginx /var/log/nginx), and configure the pid file to be somewhere other than /var/run/ (pid /path/to/nginx.pid).
lawl0r provides a good answer for setuid/sudo.
As a last resort you could reload the configuration periodically using a cron job if it has changed.
Either way you don't need SSH when it's on the same system.
Dude, that is totally wrong.
You do not want to change any config files; you want to create new files and then include them in the default config files.
For example, Apache has *.conf files and an Include directive. It is totally insane to change default config files. That is how I do it.
You do not need SSH or anything like that.
Believe me, it is better and safer.

bash script execution from php and instantaneous output back to webpage

I have a collection of bash and Perl scripts to
develop a directory structure desired for deployment on linux box
(optionally) export code from svn
build a package from this source
This is working well from terminal. Now my client requests a web interface to this process.
e.g. a "Create New Package" button on a certain page will invoke the above steps one by one and return the output to the user as the script echoes it, not only when the whole script has executed.
Is it possible to send instantaneous output from the bash script to the webpage or PHP script which invoked it through the program execution functions (system, exec, passthru ... or anything else that suits this process flow)?
What is an elegant way to do this?
What security precautions should I take while doing such a thing (if possible)?
Edit
After some searching I have found part of the solution, but it is still not working:
$cmd = 'cat ./password.txt|sudo -S ./setup.sh ';
$descriptorspec = array(
0 => array("pipe", "r"), // stdin is a pipe that the child will read from
1 => array("pipe", "w"), // stdout is a pipe that the child will write to
2 => array("pipe", "w")  // stderr is a pipe that the child will write to
);
flush();
$process = proc_open($cmd, $descriptorspec, $pipes, './', array());
echo "<pre>";
if (is_resource($process)) {
while ($s = fgets($pipes[1])) {
print "Message:".$s;
flush();
}
while ($s = fgets($pipes[2])) {
print "Error:".$s;
flush();
}
}
echo "</pre>";
output: (webpage)
Error:WARNING: Improper use of the sudo command could lead to data loss
Error:or the deletion of important system files. Please double-check your
Error:typing when using sudo. Type "man sudo" for more information.
Error:
Error:To proceed, enter your password, or type Ctrl-C to abort.
Error:
Error:Password:
Error:Sorry, try again.
Error:Password:
Error:Sorry, try again.
Error:Password:
Error:Sorry, try again.
Error:sudo: 3 incorrect password attempts**
The first issue I am having now is passing the sudo password.
Help, please!
I would use a kind of master/slave design. The slave would be your Perl/bash script, just doing a job (packaging, compiling, exporting code, and so on) and feeding a log entry.
The master would be your PHP process. The principle is the following: the master and the slave share a communication channel and communicate asynchronously over that channel.
You could imagine a database like:
create table tasks ( id INT primary key, status INT, argument VARCHAR(100));
Your PHP page should switch on the user's choice and filter the input:
switch ($_GET['action']) {
    case 'export':
        $argument = sanitize($_GET['arg']);
        add_task('export', $argument);
        break;
    case '...':
        // ...
}
and the add_task function could be something like:
function add_task($action, $arg)
{
    return $db->add('tasks', array($action, NEW_TASK, $arg));
}
The slaves could be run via a cron job, and query the database, feeding the progression of the task.
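A minimal sketch of such a slave, using the tasks table above through PDO; the status values, the DSN and the actual work step are assumptions:

```php
<?php
// Sketch of a slave run from cron. Status codes are made-up conventions;
// the real work (packaging, export, ...) goes where the comment sits.
define('NEW_TASK', 0);
define('RUNNING', 1);
define('DONE', 2);

function run_pending_tasks(PDO $db) {
    $rows = $db->query('SELECT id, argument FROM tasks WHERE status = ' . NEW_TASK)
               ->fetchAll(PDO::FETCH_ASSOC);
    foreach ($rows as $row) {
        $id = (int)$row['id'];
        $db->exec('UPDATE tasks SET status = ' . RUNNING . " WHERE id = $id");
        // ... do the real work here, e.g. shell_exec of the packaging script,
        // and append progress to a log the web page can poll ...
        $db->exec('UPDATE tasks SET status = ' . DONE . " WHERE id = $id");
    }
    return count($rows);
}
```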
The pros are:
independent systems, which eases evolution.
if a client gets disconnected, the job is never lost
easier to secure
The cons are:
a little more complicated at the beginning.
less reactive, because of the polling interval of the slaves (for instance if they run every 5 minutes)
less output than the direct output of the command
Notice that you can then implement xml-rpc like triggers to run the slaves, rather than using a message passing system.
The simplest approach is to use shell_exec for your purpose. It executes a command via shell and returns the complete output as a string, hence you can display it on your website.
If this doesn't suit your purpose, because maybe you want some responses while waiting for the command to finish, check out the other program execution functions available in php (hint: there are a few good comments on the manual pages).
Keep in mind, when invoking command-line scripts this way, generated output will have the file owner, group and permissions of your webserver user (e.g. wwwrun or whatever). If parts of your deployment need a separate owner, group and/or file permissions, you have to set them manually, either in your scripts or after invoking shell_exec (chmod, chown and chgrp can deal with this in PHP).
About security:
A lot of web-based applications put that kind of function into a separate installation directory, and kindly ask you to remove this directory after installing. I even remember some of them shouting at admins quite persistently unless it is removed. This is an easy way of preventing this script from being invoked by the wrong hands at the wrong time. If your application might need it after installing, then you have to put it into an area where only authorized users have access (e.g. an admin area or something similar).
You can use the Comet web application model to do real time updating of the results page.
For the sudo problem, as a quick and dirty solution I'd use restricted set of commands that the web server can execute without password. Example (add in /etc/sudoers):
apache ALL=(ALL) NOPASSWD: /sbin/service myproject *
which would allow apache to run /sbin/service myproject stop, /sbin/service myproject start and so on.
Take a look at Jenkins, it does the web part nicely, you'll only have to add your scripts in the build.
A better solution would be, as suggested by Aif, to separate the logic. One daemon process waiting for tasks and a web application visualizing the results.
Always use escapeshellarg and escapeshellcmd when executing system commands via PHP for security. It would also be suggested to have the user within a chroot'd directory as limited as possible.
You seem to have solved the issue of getting stdout and stderr output to your webpage by using proc_open. Now the problem looks to be executing the script via sudo.
From a security perspective, having a PHP app run a script as root (via sudo) makes me cringe. Having the root password in password.txt is a pretty huge security hole.
What I would do (if possible) is to make whatever changes necessary so that setup.sh can be run by the unix user that is running Apache. (I'm assuming you're running PHP with Apache.) If setup.sh is going to be executed by the webserver, it should be able to run it without resorting to sudo.
If you really need to execute the script as a different user, you may check into suPHP, which is designed to run PHP scripts as a non-standard user.
Provide automated sudo rights to a specific user using the NOPASSWD option of /etc/sudoers then run the command with the prefix sudo -u THE_SUDO_USER to have the command execute as that user. This prevents the security hole of giving the entire apache user sudo rights, but also allows sudo to be executed on the script from within apache.

Is it possible to execute the ssh command without creating a .ssh directory?

I'm trying to run a command on a remote server via SSH from a PHP script. Here's the snippet:
$ssh_command = "ssh -F keys/config -o UserKnownHostsFile=keys/known_hosts -i keys/deployment_key -p $ssh_port $r
$git_fetch = "git --git-dir=$remote_path/.git --work-tree=$remote_path fetch 2>&1";
exec("$ssh_command '$git_fetch' 2>&1", $out);
The script works fine if I run it from the command line, because it's running as a user with a regular login shell and their own .ssh directory. When I try to run the script through the Web interface, it fails because SSH can't create its .ssh directory in the Apache user's home directory.
Is it possible to use SSH without the .ssh directory or is there another alternative for running SSH as the Apache user? I'd prefer not to create a .ssh folder for the Apache user.
I'm able to get around this restriction with one small issue/side-effect.
Setting the options UserKnownHostsFile=/dev/null and StrictHostKeyChecking=no, you can trick SSH into not actually storing or requiring verification of a host key.
Setting StrictHostKeyChecking to no allows the connection to the server without first knowing or verifying its key; and using /dev/null for UserKnownHostsFile just reads and writes to nothing, so no values are ever read or saved.
The caveat was that SSH still tries to create the .ssh directory and fails, but the failure results in a warning and it will continue with the connection. The warning WILL be included in your output (unless you were to suppress warnings).
Here is an example. Note: For this example I didn't set up any authentication so it will try to use password auth and fail, but since you are using an identity you should be able to connect just fine.
<?php
$ssh_command = "ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no "
    ."-p 2223 user@host.localdomain";
exec("$ssh_command ls 2>&1", $out);
var_dump($out);
// output when called from browser running as daemon(1)
array(5) {
[0]=>
string(44) "Could not create directory '/usr/sbin/.ssh'."
[1]=>
string(110) "Warning: Permanently added '[host.localdomain]:2223,[192.168.88.20]:2223' (RSA) to the list of known hosts."
[2]=>
string(36) "Permission denied, please try again."
[3]=>
string(36) "Permission denied, please try again."
[4]=>
string(82) "Received disconnect from 192.168.88.20: 2: Too many authentication failures for user"
}
Your output would most likely only include the first warning about not being able to create the .ssh directory, followed by the warning about permanently adding the host to the list of known hosts (/dev/null), followed by the output from your command; so you would have to check if the first line was this warning, and shift it from the $out array.
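That check-and-shift step might look like this sketch (the warning prefixes are taken from the example output above):

```php
<?php
// Sketch: drop the "Could not create directory" warning and the
// known-hosts notice from the front of exec()'s output lines,
// leaving only the remote command's real output.
function strip_ssh_warnings(array $out) {
    while ($out
        && (strpos($out[0], 'Could not create directory') === 0
            || strpos($out[0], 'Warning: Permanently added') === 0)) {
        array_shift($out);
    }
    return $out;
}
```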
Another note: This does open up the possibility of man-in-the-middle attacks or a DNS/IP hack to get you to try to connect to a rogue server.
See this article on SSH Host Key Protection from Symantec.
I'd do it with phpseclib, a pure PHP SSH implementation:
<?php
include('Crypt/RSA.php');
include('Net/SSH2.php');

$key = new Crypt_RSA();
//$key->setPassword('whatever');
$key->loadKey(file_get_contents('privatekey'));

$ssh = new Net_SSH2('www.domain.tld');
if (!$ssh->login('username', $key)) {
    exit('Login Failed');
}

echo $ssh->read('username@username:~$');
$ssh->write("ls -la\n");
echo $ssh->read('username@username:~$');
?>
The only other thing I can imagine people referring to when they say PHP SSH library is the PECL extension. I'd personally recommend against using that as it's pretty badly written. The fact that it's hard to install and badly supported aside it requires you provide public and private keys. phpseclib only requires the private key. This makes sense because the private keys normally contain the public key embedded within them. The PECL SSH2 extension requires it be extracted separately, which is silly.
phpseclib also supports a ton more formats than the PECL SSH2 library does. phpseclib supports the PuTTY key format, XML Signatures keys, etc.
The problem is that the first thing that SSH looks for is the user's configuration file. You can use the -F flag to specify the location of your own configuration file, which ought to solve the problem, though I'm rushing out to work right now, so I haven't double-checked this. You'll need to specify the location of other things like the identity files, &c., within it, mind. The file will need to have the correct ownership and permissions or SSH will complain.
That said, using an SSH library like SSH2 is a far superior solution to shelling out if at all possible.
No, you need a .ssh in the user's home when using an OpenSSH client. There's no way to avoid this - it's meant to be a security feature since it contains integral settings. Simply keep an empty dummy directory - the client won't write there - but you are required to have adequate user permissions on it.
Solution:
DON'T DO THIS. I haven't used PHP in years, but in any modern, general-purpose programming language, this would be a huge WTF. I noticed one user commented about using the PHP SSH2 library... this would be the way to go, IMHO.
Jumping out to shell to execute a command should ONLY occur when you're producing a side-effect where the result is inconsequential. If you're relying on this for anything important, or if the result of the command should modify program state in any way, you either need to find an internal library to handle this, or extend the language to do this properly via the appropriate C libraries. Otherwise, you're setting yourself up for a world of hurt.
You can control where ~/.ssh will land using the HOME environment variable. That way you can be sure it won't be a directory Apache serves.
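A minimal sketch of that approach with proc_open(), which lets you set HOME for just the child process; the directory is an example:

```php
<?php
// Sketch: point SSH's idea of $HOME at a directory Apache can write
// but does not serve, so ~/.ssh lands there. Path is an example.
function exec_with_home($cmd, $home) {
    $proc = proc_open($cmd,
        array(1 => array('pipe', 'w'), 2 => array('pipe', 'w')),
        $pipes, null, array('HOME' => $home));
    if (!is_resource($proc)) {
        return false;
    }
    $out = stream_get_contents($pipes[1]); fclose($pipes[1]);
    $err = stream_get_contents($pipes[2]); fclose($pipes[2]);
    return array('stdout' => $out, 'stderr' => $err, 'return' => proc_close($proc));
}

// e.g. exec_with_home('ssh -i /var/lib/ssh-home/key user@host ls', '/var/lib/ssh-home');
```

Since the env array replaces the child's entire environment, include PATH as well if the command is not given by absolute path.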

Sync local and remote folders using rsync from php script without typing password

How can I sync local and remote folders using rsync from inside a PHP script without being prompted to type a password?
I have already set up a public key to automate the login on remote server for my user.
So this is running without any problem from cli:
rsync -r -a -v -e "ssh -l user" --delete ~/local/file 111.111.11.111:~/remote/;
But, when I try to run the same from a PHP script (on a webpage in my local server):
$c='rsync -r -a -v -e "ssh -l user" --delete ~/local/file 111.111.11.111:~/remote/';
//exec($c,$data);
passthru($c,$data);
print_r($data);
This is what I receive:
255
And no file is uploaded from local to remote server.
Searching on the net I found this clue:
"You could use a combination of Bash and Expect shell code here, but it wouldn't be very secure, because that would automate the root login. Generate keys for nobody or apache (whatever user as which Apache is being executed). Alternatively, install phpSuExec, Suhosin, or suPHP so that the scripts are run as the user for which they were called."
Well, I don't know how to "automate the root login" for PHP, which is running as "apache". Maybe to make the script run as the actual user is a better idea, I don't know... Thanks!
UPDATE:
- As this works fine:
passthru('ssh user@111.111.11.111 | ls', $data);
Returning a list of the home folder, I can be sure there is no problem with the automatic login. It must be something with rsync being run from the PHP script.
I also did chmod -R 0777 on the local and remote folders, just in case. But I still haven't got it working.
UPDATE:
All the problem is related to the fact that "when ran on commandline, ssh use the keyfiles on $HOME/.ssh/, but under PHP, it's ran using Apache's user, so it might not have a $HOME; much less a $HOME/.ssh/id_dsa. So, either you specifically tell it which keyfile to use, or manually create that directory and its contents."
While I could not get rsync to work, this is how I managed to transfer the file from local to remote:
if($con=ssh2_connect('111.111.11.111',22)) echo 'ok!';
if(ssh2_auth_password($con,'apache','xxxxxx')) echo ' ok!';
if(ssh2_scp_send($con,'localfile','/remotefolder',0755)) echo ' ok!';
Local file needs: 0644
Remote folder needs: 0775
I wonder if the solution would be to run PHP as the same user bash runs as...
@Yzmir Ramirez gave this suggestion: "I don't think you want to 'copy the key somewhere where apache can get to it' - that's a security violation. Better to change the script to run as a secure user and then setup the .ssh keys for passwordless login between servers."
This is something I have to invest some more time in. If somebody knows how to do this, please share; it would be of great help.
When I set this same thing up in an application of ours, I also ran into 255 errors, and found that they can mean a variety of things; it's not a particularly helpful error code. In fact, even now, after the solution's been running smoothly for well over a year, I still see an occasional 255 show up in the logs.
I will also say that getting it to work can be a bit of a pain, since there are several variables involved. One thing I found extremely helpful was to construct the rsync command in a variable and then log it. This way, I can grab the exact rsync command being used and try to run it manually. You can even su to the apache user and run it from the same directory as your script (or whatever your script is setting as the cwd), which will make it act the same as when it's run programmatically; this makes it far simpler to debug the rsync command since you're not having to deal with the web server. Also, when you run it manually, if it's failing for some unstated reason, add in verbosity flags to crank up the error output.
Below is the code we're using, slightly edited for security. Note that our code actually supports rsync'ing to both local and remote servers, since the destination is fully configurable to allow easy test installations.
try {
if ('' != $config['publishSsh']['to']['remote']):
//we're syncing with a remote server
$rsyncToRemote = escapeshellarg($config['publishSsh']['to']['remote']) . ':';
$rsyncTo = $rsyncToRemote . escapeshellarg($config['publishThemes']['to']['path']);
$keyFile = $config['publishSsh']['to']['keyfile'];
$rsyncSshCommand = "-e \"ssh -o 'BatchMode yes' -o 'StrictHostKeyChecking no' -q -i '$keyFile' -c arcfour\"";
else:
//we're syncing with the local machine
$rsyncTo = escapeshellarg($config['publishThemes']['to']['path']);
$rsyncSshCommand = '';
endif;
$log->Message("Apache running as user: " . trim(`whoami`), GLCLogger::level_debug);
$deployCommand = "
cd /my/themes/folder/; \
rsync \
--verbose \
--delete \
--recursive \
--exclude '.DS_Store' \
--exclude '.svn/' \
--log-format='Action: %o %n' \
--stats \
$rsyncSshCommand \
./ \
$rsyncTo \
2>&1
"; //deployCommand
$log->Message("Deploying with command: \n" . $deployCommand, GLCLogger::level_debug);
exec($deployCommand, $rsyncOutputLines, $returnValue);
$log->Message("rsync status code: $returnValue", GLCLogger::level_debug);
$log->Message("rsync output: \n" . implode("\n", $rsyncOutputLines), GLCLogger::level_debug);
if (0 != $returnValue):
$log->Message("Encountered error while publishing themes: <<<$returnValue>>>");
throw new Exception('rsync error');
endif;
/* ... process output ... */
} catch (Exception $e) {
/* ... handle errors ... */
}
A couple of things to notice about the code:
I'm using exec() so that I can capture the output in a variable. I then parse it so that I can log and report the results in terms of how many files were added, modified, and removed.
I'm combining rsync's standard output and standard error streams and returning both. I'm also capturing and checking the return result code.
I'm logging everything when in debug mode, including the user Apache is running as, the rsync command itself, and the output of the rsync command. This way, it's trivially easy to run the same command as the same user with just a quick copy-and-paste, as I mention above.
If you're having problems with the rsync command, you can adjust its verbosity without impact and see the output in the log.
In my case, I was able to simply point to an appropriate key file and not be overly concerned about security. That said, some thoughts on how to handle this:
Giving Apache access to the file doesn't mean that it has to be in the web directory; you can put the file anywhere that's accessible by the Apache user, even on another network machine. Depending on your other layers of security, this may or may not be an acceptable compromise.
Depending on the data you're copying, you may be able to heavily lock down the permissions of the ssh user on the other machine, so that if someone unscrupulous managed to get in there, they'd have minimal ability to do damage.
You can use suEXEC to run the script as a user other than the Apache user, allowing you to lock access to your key to that other user. Thus, someone compromising the Apache user simply would not have access to the key.
I hope this helps.
When you let the web server run the command, it runs it as its own user ("nobody" or "apache" or similar) and looks for a private key in that user's home directory/.ssh, while the private key you setup for your own user is in "/home/you/.ssh/keyfile", or however you named it.
Make ssh use that specific key with the -i parameter:
ssh -i /home/you/.ssh/keyfile
and it should work
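Put together for the rsync case from the question, that might look like this sketch (build_rsync_cmd is a made-up helper; paths are examples):

```php
<?php
// Sketch: build the rsync call with an explicit keyfile so it does not
// depend on the web user's $HOME/.ssh. All paths are examples.
function build_rsync_cmd($keyfile, $src, $dest) {
    return 'rsync -r -a -v -e '
         . escapeshellarg('ssh -l user -i ' . $keyfile)
         . ' --delete ' . escapeshellarg($src) . ' ' . escapeshellarg($dest);
}

// passthru(build_rsync_cmd('/home/you/.ssh/keyfile', '/local/file',
//     '111.111.11.111:~/remote/'), $status);
```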
A simple but indirect solution to this problem, which I am using:
DO NOT run rsync directly from PHP; it has issues, and it can be a security risk to do so.
Rather, just prepare two scripts.
In the PHP script, you just have to change the value of one flag file on the filesystem from 0 to 1.
Now, on the other side, make an rsync script that will run forever, with the following logic:
If the file has 0 in it, do not run rsync; if it is 1, run rsync, then change the value back to 0 after a successful run.
I am doing it as well.
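The PHP half of this flag-file approach can be as small as the sketch below (the flag path and function names are made up); the loop that watches the flag and runs rsync lives outside PHP:

```php
<?php
// Sketch of the flag-file handoff: the web code only flips the flag;
// the external rsync loop resets it to 0 after a successful run.
function request_sync($flag) {
    return file_put_contents($flag, '1') !== false;
}

function sync_requested($flag) {
    return trim(@file_get_contents($flag)) === '1';
}
```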
