SSH connections with PHP

I'm currently working on a project to make changes to the system with PHP (e.g. change the config file of Nginx / restarting services).
The PHP scripts run on localhost. In my opinion the best (read: most secure) way is to use SSH to make the connection. I'm considering one of the following options:
Option 1: store username / password in php session and prompt for sudo
Using phpseclib with a username / password, save these values in a php session and prompt for sudo for every command.
Option 2: use root login
Using phpseclib with the root username and password in the login script. In this case you don't have to ask the user for sudo. (not really a safe solution)
<?php
include('Net/SSH2.php');
$ssh = new Net_SSH2('www.domain.tld');
if (!$ssh->login('root', 'root-password')) {
exit('Login Failed');
}
?>
Option 3: Authenticate using a public key read from a file
Use the PHP SSH library with a key pair to authenticate, and place the key files outside the www root.
<?php
$connection = ssh2_connect('shell.example.com', 22, array('hostkey' => 'ssh-rsa'));
if (ssh2_auth_pubkey_file($connection, 'username', '/home/username/.ssh/id_rsa.pub', '/home/username/.ssh/id_rsa', 'secret')) {
echo "Public Key Authentication Successful\n";
} else {
die('Public Key Authentication Failed');
}
?>
Option 4: ?

I suggest you do this in three simple steps:
First.
Create another user (for example, runner) and make your sensitive data (user/pass, private key, etc.) accessible only to this user. In other words, deny your PHP code any access to this data.
Second.
Create a simple blocking FIFO pipe and grant write access to your PHP user.
Last.
Write a simple daemon that reads lines from the FIFO and executes them, for example via the ssh command. Run this daemon as the runner user.
To execute a command, you just write it to the FIFO. Output can be redirected to another pipe or to simple files if needed.
To create the FIFO, use this simple command:
mkfifo "FIFONAME"
The runner daemon would be a simple bash script like this:
#!/bin/bash
while true
do
    read -r cmd < FIFONAME
    # Validate the command
    ssh 192.168.0.1 "$cmd"
done
After this you can trust your code: even if your PHP code is completely compromised, your upstream server is still secure. In that case the attacker cannot access your sensitive data at all. He can send commands to your runner daemon, but if you validate the commands properly, there's nothing to worry about.
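The "Validate the command" step is the critical part. A hypothetical whitelist check could look like this (the allowed command names are made-up examples, not from the original answer):

```shell
#!/bin/bash
# Hypothetical whitelist for the "Validate the command" step;
# the allowed command names here are illustrative assumptions.
validate() {
  case "$1" in
    reloadnginx|restartnginx|nginxstatus) return 0 ;;
    *) return 1 ;;
  esac
}
validate reloadnginx && echo "allowed: reloadnginx"
validate "rm -rf /" || echo "rejected: rm -rf /"
```

With a closed whitelist like this, an attacker who can write to the FIFO can only trigger the handful of actions you chose in advance.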
:-)

Method 1
I'd probably use the suid flag. Write a little setuid wrapper in C and make sure all commands it executes are predefined and cannot be controlled by your PHP script.
So you create your wrapper and get the command from ARGV; a call could look like this:
./wrapper reloadnginx
Wrapper then executes /etc/init.d/nginx reload.
Make sure you chown the wrapper to root and chmod +s it. It will then spawn the commands as root, but since you predefined all the commands, your PHP script cannot do anything else as root.
Method 2
Use sudo and set it up for passwordless execution of certain commands. That way you achieve the same effect, and only certain applications can be spawned as root. You can't, however, control the arguments, so make sure there is no privilege escalation possible in these binaries.
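A minimal sketch of such a sudoers rule (the user name and command path are assumptions; always edit with visudo):

```shell
# Illustrative /etc/sudoers.d entry; "www-data" and the service path
# are placeholders for whatever user your PHP runs as.
www-data ALL=(root) NOPASSWD: /usr/sbin/service nginx reload
```

Because the full command line is spelled out, sudo will match only that exact invocation; the PHP side can trigger the reload and nothing else.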
You really don't want to give a PHP script full root access.

If you're running on the same host, I would suggest either directly writing the configuration files and (re)starting services, or writing a wrapper script.
The first option obviously needs a very high privilege level, which I would not recommend. However, it is the simplest to implement. Your other options involving ssh do not help much, as a possible attacker may still easily gain root privileges.
The second option is far more secure and involves writing a program with high-level access which only accepts specified input files, e.g. from a directory. The PHP script is merely a frontend for the user and writes said input files, so it only needs very low privileges. That way you have a separation between high and low privileges and therefore mitigate the risk, as it is much easier to secure a program with which a possible attacker can only interact indirectly through text files. This option requires more work, but is a lot more secure.
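The second option can be sketched as a small privileged consumer that only accepts request files dropped by the low-privilege PHP frontend. Everything here (the spool layout and action names) is an illustrative assumption:

```shell
#!/bin/bash
# Sketch of the privileged side: it consumes request files written by the
# low-privilege PHP frontend and only honors known actions.
spool=$(mktemp -d)                   # in practice a fixed dir writable by PHP
echo "reloadnginx" > "$spool/req.1"  # what the PHP frontend would drop

process_spool() {
  local f action
  for f in "$spool"/*; do
    [ -e "$f" ] || continue          # skip when the spool is empty
    read -r action < "$f"
    case "$action" in
      reloadnginx) echo "would run: nginx -s reload" ;;
      *)           echo "rejected: $action" ;;
    esac
    rm -f -- "$f"                    # consume the request
  done
}
process_spool
```

The privileged process never takes input from the attacker directly; it only ever parses short text files against a fixed list of actions.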

You can extend option 3 and use SSH Keys without any library
$command = sprintf('ssh -i %s -o "StrictHostKeyChecking no" %s@%s "%s"',
    $path, $this->getUsername(), $this->getAddress(), $cmd);
return shell_exec($command);
I use it quite a lot in my project. You can have a look into SSH adapter I created.
The problem with the above is that you can't make real-time decisions (while connected to a server). If you need real time, try the PHP extension called SSH2.
P.S. I agree with you. SSH seems to be the most secure option.

You can use setcap to allow Nginx to bind to ports 80/443 without having to run as root; Nginx only has to run as root to bind to ports below 1024. setcap is detailed in this answer.
There are a few caveats though. You'll have to chown the log files to the right user (chown -R nginx:nginx /var/log/nginx) and configure the pid file to live somewhere other than /var/run/ (pid /path/to/nginx.pid).
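The setcap approach might look like this (these commands require root, and the nginx binary path is an assumption):

```shell
# Illustrative setup; the binary and log paths may differ on your system.
setcap 'cap_net_bind_service=+ep' /usr/sbin/nginx   # allow binding to ports < 1024
getcap /usr/sbin/nginx                              # verify the capability is set
chown -R nginx:nginx /var/log/nginx                 # logs owned by the run user
```

Note that the capability is attached to the binary inode, so it has to be reapplied after the nginx package is upgraded.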
lawl0r provides a good answer for setuid/sudo.
As a last resort you could reload the configuration periodically using a cron job if it has changed.
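The cron-job idea boils down to a change-detection check; in this sketch, temp files stand in for the real nginx config and a checksum marker:

```shell
#!/bin/bash
# Sketch of the cron-job idea: reload only when the config actually changed.
# Temp files stand in for the real nginx config and a checksum marker.
conf=$(mktemp)
stamp=$(mktemp)
echo "server {}" > "$conf"

reload_if_changed() {
  local sum
  sum=$(md5sum "$conf" | cut -d' ' -f1)
  if [ "$sum" != "$(cat "$stamp")" ]; then
    echo "$sum" > "$stamp"
    echo "would reload nginx"        # placeholder for the real reload command
  fi
}
reload_if_changed   # first run: checksum is new, so it "reloads"
reload_if_changed   # config unchanged: prints nothing
```

A real crontab entry would then just run this check every few minutes.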
Either way you don't need SSH when it's on the same system.

Dude, that is totally wrong.
You do not want to change any default config files; you want to create new files and then include them in the default config files.
For example, Apache has *.conf files and an Include directive. It is totally insane to change the default config files. That is how I do it.
You do not need SSH or anything like that.
Believe me, it is better and safer.

Related

ftp_exec(): Unknown SITE command

I need to run a Linux command from PHP, so I used the ftp_exec() function.
$command='ls -al> /ftp_test/t.log';
if (ftp_exec($ftp_conn,$command))
{
echo "$command executed successfully.";
}
else
{
echo "Execution of $command failed.";
}
But it gives me warning
Warning: ftp_exec(): Unknown SITE command
I have googled and found this about ftp_exec: "execution via FTP isn't very widely supported. Check that it works on the servers that you intend to connect to before you start coding something that requires this."
Can anybody give me an idea how to run a Linux command from PHP?
If you have the appropriate authorization you may do so via SSH:
$file_list = shell_exec('ssh user@site "ls -la"');
You'll need the user to have an authorized SSH key for site, and that user must be accessible from whatever user is running PHP. This usually boils down to using the user wwwrun for both.
Or you can use sudo for added security, by placing the command into a script of its own, then sudoing it:
$file_list = shell_exec('sudo /usr/local/bin/ssh-ls-site');
Now user wwwrun can be allowed to run ssh-ls-site but can't modify its contents, so he can't run arbitrary commands, nor does he have access to the SSH authorization key.
The ssh-ls-site script can log the request as well as update a local marker file, exiting immediately if the file is newer than a certain guard time. This will prevent possible DoS attacks against site (running lots of allowed commands, exhausting resources) and also improve performance; if, for example, you need to run the command often, you can save the results to a temporary file. Then, if this file exists and is not too old, you just read back its contents instead of asking site again, effectively caching the command locally.
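The guard-time idea might be sketched like this (the marker path, window length, and the ssh command are placeholder assumptions):

```shell
#!/bin/bash
# Sketch of the guard-time throttle: refuse to run again within a window.
marker=$(mktemp)   # in practice a fixed marker file
guard=60           # seconds

run_guarded() {
  local now last
  now=$(date +%s)
  last=$(cat "$marker" 2>/dev/null)
  last=${last:-0}
  if [ $((now - last)) -lt "$guard" ]; then
    echo "throttled"
    return 1
  fi
  echo "$now" > "$marker"
  echo "running: ssh user@site ls -la"   # placeholder for the real command
}

run_guarded           # first call runs
run_guarded || true   # second call inside the window prints "throttled"
```

A caching variant would additionally save the ssh output to a file and serve that file while it is younger than the guard time.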

bash script execution from php and instantaneous output back to webpage

I have a collection of bash and Perl scripts to
develop a directory structure desired for deployment on a Linux box
(optionally) export code from svn
build a package from this source
This is working well from the terminal. Now my client requests a web interface to this process.
E.g. a "Create New Package" button on a certain page will invoke the above steps one by one and return the output to the user as the script echoes it, not only when the whole script has finished.
Is it possible to send instantaneous output from a bash script to the webpage or PHP script which invoked it through the program execution functions (system, exec, passthru, or anything else that suits this process flow)?
What is an elegant way to do this?
What security precautions should I take while doing such a thing (if possible)?
Edit
After some searching I have found part of the solution, but it is still not working:
$cmd = 'cat ./password.txt|sudo -S ./setup.sh ';
$descriptorspec = array(
0 => array("pipe", "r"), // stdin is a pipe that the child will read from
1 => array("pipe", "w"), // stdout is a pipe that the child will write to
2 => array("pipe", "w") // stderr is a pipe that the child will write to
);
flush();
$process = proc_open($cmd, $descriptorspec, $pipes, './', array());
echo "<pre>";
if (is_resource($process)) {
while ($s = fgets($pipes[1])) {
print "Message:".$s;
flush();
}
while ($s = fgets($pipes[2])) {
print "Error:".$s;
flush();
}
}
echo "</pre>";
output: (webpage)
Error:WARNING: Improper use of the sudo command could lead to data loss
Error:or the deletion of important system files. Please double-check your
Error:typing when using sudo. Type "man sudo" for more information.
Error:
Error:To proceed, enter your password, or type Ctrl-C to abort.
Error:
Error:Password:
Error:Sorry, try again.
Error:Password:
Error:Sorry, try again.
Error:Password:
Error:Sorry, try again.
Error:sudo: 3 incorrect password attempts
The first issue I am having now is how to pass the sudo password.
Help, please!
I would use a kind of master/slave design. The slave would be your Perl/bash script, just doing a job (packaging, compiling, exporting code, and so on) and feeding a log entry.
The master would be your PHP process. The principle is the following: the master and the slave share a communication channel and communicate asynchronously over that channel.
You could imagine a database like:
create table tasks ( id INT primary key, status INT, argument VARCHAR(100));
Your PHP page should switch on the user's choice and filter the input:
switch ($_GET['action']) {
case 'export':
$argument = sanitize($_GET['arg']);
add_task('export', $argument);
break;
case '...':
// ...
}
and the add_task function could be something like:
function add_task($action, $arg)
{
return $db->add('tasks', array($action, NEW_TASK, $arg));
}
The slaves could be run via a cron job, querying the database and feeding back the progress of the task.
The pros are:
independent systems, easing evolution
if a client gets disconnected, the job is never lost
easier to secure
The cons are:
a little more complicated at the beginning
less reactive, because of the polling interval of the slaves (for instance, if they run every 5 minutes)
less output than the direct output of the command
Notice that you can then implement xml-rpc like triggers to run the slaves, rather than using a message passing system.
The simplest approach is to use shell_exec for your purpose. It executes a command via shell and returns the complete output as a string, hence you can display it on your website.
If this doesn't suit your purpose, because maybe you want some responses while waiting for the command to finish, check out the other program execution functions available in php (hint: there are a few good comments on the manual pages).
Keep in mind that when invoking command-line scripts this way, generated output will have the file owner, group, and permissions of your webserver user (e.g. wwwrun or whatever). If parts of your deployment need a separate owner, group, and/or file permissions, you have to set them manually, either in your scripts or after invoking shell_exec (chmod, chown, and chgrp can deal with this in PHP).
About security:
A lot of web-based applications put that kind of functionality into a separate installation directory and kindly ask you to remove this directory after installing. Some even nag admins quite persistently until it is removed. This is an easy way to prevent the script from being invoked by the wrong hands at the wrong time. If your application might need it after installation, then you have to put it into an area where only authorized users have access (e.g. an admin area or similar).
You can use the Comet web application model to do real time updating of the results page.
For the sudo problem, as a quick and dirty solution, I'd use a restricted set of commands that the web server can execute without a password. Example (add to /etc/sudoers):
apache ALL=(ALL) NOPASSWD: /sbin/service myproject *
which would allow apache to run /sbin/service myproject stop, /sbin/service myproject start and so on.
Take a look at Jenkins, it does the web part nicely, you'll only have to add your scripts in the build.
A better solution would be, as suggested by Aif, to separate the logic. One daemon process waiting for tasks and a web application visualizing the results.
Always use escapeshellarg and escapeshellcmd when executing system commands via PHP, for security. It would also be wise to have the user within a chroot'd directory, as limited as possible.
You seem to have solved the issue of getting stdout and stderr output to your webpage by using proc_open. Now the problem looks to be executing the script via sudo.
From a security perspective, having a PHP app run a script as root (via sudo) makes me cringe. Having the root password in password.txt is a pretty huge security hole.
What I would do (if possible) is to make whatever changes necessary so that setup.sh can be run by the unix user that is running Apache. (I'm assuming you're running PHP with Apache.) If setup.sh is going to be executed by the webserver, it should be able to run it without resorting to sudo.
If you really need to execute the script as a different user, you may check into suPHP, which is designed to run PHP scripts as a non-standard user.
Provide automated sudo rights to a specific user using the NOPASSWD option in /etc/sudoers, then run the command with the prefix sudo -u THE_SUDO_USER to have the command execute as that user. This avoids the security hole of giving the entire apache user sudo rights, while still allowing sudo to be executed on the script from within apache.

How to let apache to do ssh

I want to run some command from a remote server in my php using exec. Like this:
<?php
exec('ssh user@remote_server command');
?>
My account has SSH access to remote_server with a public/private key, but apache does not. Note that I don't have root access to either of the machines. All the answers for "Generating SSH keys for the apache user" require root access.
My recommendation: use phpseclib, a pure PHP SSH implementation. eg.
<?php
include('Net/SSH2.php');
$ssh = new Net_SSH2('www.domain.tld');
if (!$ssh->login('username', 'password')) {
exit('Login Failed');
}
echo $ssh->exec('pwd');
echo $ssh->exec('ls -la');
?>
The web server process is owned by the apache user, not root.
Make sure the apache user has passwordless login to the remote server.
SELinux may also need to be disabled or configured to allow this.
I would try:
$identity_file = '/home/you/.ssh/id_rsa'; // <-- replace with actual priv key
exec('ssh -i ' . $identity_file . ' user@remote_server command');
and see if you can authenticate like that. You will have to make sure that the identity file is readable by Apache.
The downside is now the Apache user can read your private key, which means any other site running as the Apache user can as well. But even if you create a private key for the Apache user, the same is true. For better security, see about running PHP as your specific user using suPHP or suExec.
This is a bad idea without root access. To make sure that Apache's user can see your private key, you'll have to make it world-readable: without root you can't chown www-data:www-data, so not only will Apache be able to see it, every user on the system will be able to. Because this is such a bad idea, OpenSSH won't allow it by default - it will refuse to run if your private key has unreasonably open file permissions.
I very strongly advise against doing this. If you really need to be able to have PHP run remote commands with an SSH key, you'll need someone with root access to set up something more secure, along the lines of your link.
A more secure alternative would be to write a PHP script on the target machine that takes an HTTP request containing a password that you define, executes a pre-defined command and returns the output. If this script is written securely and can only execute that one pre-defined command, all an attacker can do is run that same command as you - as long as the password for this script isn't the same as any of your own passwords. It needn't be exactly one command, and it can even take some arguments if you're careful: the important point is that you're not allowing a remote user to execute arbitrary commands on the target machine. If you're sure that the command you want to be able to run isn't potentially harmful, and your script doesn't contain coding errors allowing other commands to be run instead, then this isn't too bad an idea.
Use SSH2.
http://php.net/manual/en/book.ssh2.php

Linux screen management as root through the PHP user

I want to make a script that would run something in a screen session as the root user. This has to be done through PHP's system() function, therefore I need to find a way to sudo to root and pass the password, all using PHP.
If you really need to sudo from PHP (not recommended), it's best to only allow specific commands and not require password for them.
For example, if PHP is running as the apache user, and you need to run /usr/bin/myapp, you could add the following to /etc/sudoers (or wherever sudoers is):
apache ALL = (root) NOPASSWD:NOEXEC: /usr/bin/myapp
This means that user apache can run /usr/bin/myapp as root without password, but the app can't execute anything else.
I'm sure there must be a better way to do whatever it is you're trying to accomplish than whatever mechanism you're trying to create.
If you simply want to write messages from a php script to a single screen session somewhere, try this:
In php
Open a file with append-write access:
$handle = fopen("/var/log/from_php", "ab");
Write to your file:
fwrite($handle, "Sold another unit to " . $customer . "\n");
In your screen session
tail -F /var/log/from_php
If you can't just run tail in a screen session, you can use the write(1) utility to write messages to different terminals. See write(1) and mesg(1) for details on this mechanism. (I don't like it as much as the logfile approach, because that is durable and can be searched later. But I don't know exactly what you're trying to accomplish, so this is another option that might work better than tail -F on a log file.)

Sync local and remote folders using rsync from php script without typing password

How can I sync local and remote folders using rsync from inside a PHP script without being prompted to type a password?
I have already set up a public key to automate the login on remote server for my user.
So this is running without any problem from cli:
rsync -r -a -v -e "ssh -l user" --delete ~/local/file 111.111.11.111:~/remote/;
But, when I try to run the same from a PHP script (on a webpage in my local server):
$c='rsync -r -a -v -e "ssh -l user" --delete ~/local/file 111.111.11.111:~/remote/';
//exec($c,$data);
passthru($c,$data);
print_r($data);
This is what I receive:
255
And no file is uploaded from local to remote server.
Searching on the net I found this clue:
"You could use a combination of BASh and Expect shell code here, but it wouldn't be very secure, because that would automate the root login. Generate keys for nobody or apache (whatever user as which Apache is being executed). Alternatively, install phpSuExec, Suhosin, or suPHP so that the scripts are run as the user for which they were called."
Well, I don't know how to "automate the root login" for PHP, which is running as "apache". Maybe to make the script run as the actual user is a better idea, I don't know... Thanks!
UPDATE:
As this works fine:
passthru('ssh user@111.111.11.111 ls', $data);
returning a listing of the home folder, I can be sure there is no problem with the automatic login. It must be something with rsync being run from the PHP script.
I also did chmod -R 0777 on the local and remote folders just in case, but I still haven't got it working.
UPDATE:
The whole problem is related to the fact that "when run on the command line, ssh uses the keyfiles in $HOME/.ssh/, but under PHP it is run as Apache's user, so it might not have a $HOME, much less a $HOME/.ssh/id_dsa. So either you specifically tell it which keyfile to use, or you manually create that directory and its contents."
While I cannot get rsync to work, this is how I got the file transferred from local to remote:
if($con=ssh2_connect('111.111.11.111',22)) echo 'ok!';
if(ssh2_auth_password($con,'apache','xxxxxx')) echo ' ok!';
if(ssh2_scp_send($con,'localfile','/remotefolder',0755)) echo ' ok!';
Local file needs: 0644
Remote folder needs: 0775
I guess the solution would be to run PHP as the same user bash runs as...
@Yzmir Ramirez gave this suggestion: "I don't think you want to 'copy the key somewhere where apache can get to it'; that's a security violation. Better to change the script to run as a secure user and then set up the .ssh keys for passwordless login between servers."
This is something I have to invest some more time in. If somebody knows how to do this, it would be of great help.
When I set this same thing up in an application of ours, I also ran into 255 errors, and found that they can mean a variety of things; it's not a particularly helpful error code. In fact, even now, after the solution's been running smoothly for well over a year, I still see an occasional 255 show up in the logs.
I will also say that getting it to work can be a bit of a pain, since there are several variables involved. One thing I found extremely helpful was to construct the rsync command in a variable and then log it. This way, I can grab the exact rsync command being used and try to run it manually. You can even su to the apache user and run it from the same directory as your script (or whatever your script is setting as the cwd), which will make it act the same as when it's run programmatically; this makes it far simpler to debug the rsync command since you're not having to deal with the web server. Also, when you run it manually, if it's failing for some unstated reason, add in verbosity flags to crank up the error output.
Below is the code we're using, slightly edited for security. Note that our code actually supports rsync'ing to both local and remote servers, since the destination is fully configurable to allow easy test installations.
try {
if ('' != $config['publishSsh']['to']['remote']):
//we're syncing with a remote server
$rsyncToRemote = escapeshellarg($config['publishSsh']['to']['remote']) . ':';
$rsyncTo = $rsyncToRemote . escapeshellarg($config['publishThemes']['to']['path']);
$keyFile = $config['publishSsh']['to']['keyfile'];
$rsyncSshCommand = "-e \"ssh -o 'BatchMode yes' -o 'StrictHostKeyChecking no' -q -i '$keyFile' -c arcfour\"";
else:
//we're syncing with the local machine
$rsyncTo = escapeshellarg($config['publishThemes']['to']['path']);
$rsyncSshCommand = '';
endif;
$log->Message("Apache running as user: " . trim(`whoami`), GLCLogger::level_debug);
$deployCommand = "
cd /my/themes/folder/; \
rsync \
--verbose \
--delete \
--recursive \
--exclude '.DS_Store' \
--exclude '.svn/' \
--log-format='Action: %o %n' \
--stats \
$rsyncSshCommand \
./ \
$rsyncTo \
2>&1
"; //deployCommand
$log->Message("Deploying with command: \n" . $deployCommand, GLCLogger::level_debug);
exec($deployCommand, $rsyncOutputLines, $returnValue);
$log->Message("rsync status code: $returnValue", GLCLogger::level_debug);
$log->Message("rsync output: \n" . implode("\n", $rsyncOutputLines), GLCLogger::level_debug);
if (0 != $returnValue):
$log->Message("Encountered error while publishing themes: <<<$returnValue>>>");
throw new Exception('rsync error');
endif;
/* ... process output ... */
} catch (Exception $e) {
/* ... handle errors ... */
}
A couple of things to notice about the code:
I'm using exec() so that I can capture the output in a variable. I then parse it so that I can log and report the results in terms of how many files were added, modified, and removed.
I'm combining rsync's standard output and standard error streams and returning both. I'm also capturing and checking the return result code.
I'm logging everything when in debug mode, including the user Apache is running as, the rsync command itself, and the output of the rsync command. This way, it's trivially easy to run the same command as the same user with just a quick copy-and-paste, as I mention above.
If you're having problems with the rsync command, you can adjust its verbosity without impact and see the output in the log.
In my case, I was able to simply point to an appropriate key file and not be overly concerned about security. That said, some thoughts on how to handle this:
Giving Apache access to the file doesn't mean that it has to be in the web directory; you can put the file anywhere that's accessible by the Apache user, even on another network machine. Depending on your other layers of security, this may or may not be an acceptable compromise.
Depending on the data you're copying, you may be able to heavily lock down the permissions of the ssh user on the other machine, so that if someone unscrupulous managed to get in there, they'd have minimal ability to do damage.
You can use suEXEC to run the script as a user other than the Apache user, allowing you to lock access to your key to that other user. Thus, someone compromising the Apache user simply would not have access to the key.
I hope this helps.
When you let the web server run the command, it runs as the server's own user ("nobody" or "apache" or similar) and looks for a private key in that user's $HOME/.ssh, while the private key you set up for your own user is in /home/you/.ssh/keyfile, or however you named it.
Make ssh use that specific key with the -i parameter:
ssh -i /home/you/.ssh/keyfile
and it should work
A simple but indirect solution to this problem, which I am using:
Do NOT run rsync directly from PHP; that has its own issues and can be a security risk.
Rather, prepare two scripts.
In the PHP script, you just change the value of one flag file on the filesystem from 0 to 1.
On the other side, make an rsync script that runs forever with the following logic:
if the flag file contains 0, do not run rsync; if it contains 1, run rsync, then change the value back to 0 after a successful run.
I am doing it this way as well.
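The flag-file handshake described above might look like this (the flag path and rsync command are illustrative; the PHP script would only ever write 1 to the flag):

```shell
#!/bin/bash
# Sketch of the flag-file handshake: PHP flips the flag to 1, a separate
# loop notices, syncs, and flips it back to 0.
flag=$(mktemp)        # in practice a fixed file writable by the PHP user
echo 1 > "$flag"      # this single write is all the PHP side does

check_and_sync() {
  if [ "$(cat "$flag")" = "1" ]; then
    echo "would run: rsync -a ~/local/file 111.111.11.111:~/remote/"
    echo 0 > "$flag"  # reset only after a successful run
  fi
}
check_and_sync        # in practice: while true; do check_and_sync; sleep 5; done
```

Since the rsync script runs as its own user, it can keep its SSH key private; PHP never touches the key or the rsync command line.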
