I have a collection of bash and Perl scripts to:
create the directory structure required for deployment on a Linux box
(optionally) export code from SVN
build a package from this source
This works well from the terminal. Now my client requests a web interface to this process.
E.g. a "Create New Package" button on a certain page would invoke the above steps one by one and return the output to the user as the script echoes it, not only after the whole script has executed.
Is it possible to send instantaneous output from a bash script to the webpage or PHP script that invoked it through the program execution functions (system, exec, passthru, or anything else that suits this process flow)?
What is an elegant way to do this?
What security precautions should I take while doing such a thing (if possible)?
Edit
After some searching I have found part of the solution, but it is still not working:
$cmd = 'cat ./password.txt|sudo -S ./setup.sh ';
$descriptorspec = array(
    0 => array("pipe", "r"), // stdin is a pipe that the child will read from
    1 => array("pipe", "w"), // stdout is a pipe that the child will write to
    2 => array("pipe", "w")  // stderr is a pipe that the child will write to
);
flush();
$process = proc_open($cmd, $descriptorspec, $pipes, './', array());
echo "<pre>";
if (is_resource($process)) {
while ($s = fgets($pipes[1])) {
print "Message:".$s;
flush();
}
while ($s = fgets($pipes[2])) {
print "Error:".$s;
flush();
}
}
echo "</pre>";
Output (webpage):
Error:WARNING: Improper use of the sudo command could lead to data loss
Error:or the deletion of important system files. Please double-check your
Error:typing when using sudo. Type "man sudo" for more information.
Error:
Error:To proceed, enter your password, or type Ctrl-C to abort.
Error:
Error:Password:
Error:Sorry, try again.
Error:Password:
Error:Sorry, try again.
Error:Password:
Error:Sorry, try again.
Error:sudo: 3 incorrect password attempts
The first issue I am having now is passing the sudo password.
Help please!
I would use a kind of master/slave design. The slave would be your Perl/bash script, just doing a job (packaging, compiling, exporting code and so on) and feeding log entries.
The master would be your PHP process. The principle is the following: the master and the slave share a communication channel and communicate asynchronously through that channel.
You could imagine a database like:
create table tasks ( id INT primary key, status INT, argument VARCHAR(100));
Your PHP page should switch on the user's choice and filter the input:
switch ($_GET['action']) {
    case 'export':
        $argument = sanitize($_GET['arg']);
        add_task('export', $argument);
        break;
    case '...':
        // ...
}
and the add_task function could be something like:
function add_task($action, $arg)
{
    return $db->add('tasks', array($action, NEW_TASK, $arg));
}
The slaves could be run via a cron job; they query the database and feed back the progress of the task.
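A minimal PHP sketch of such a slave, run from cron (the tasks table is the one above; the DSN, status values, log path and setup.sh location are assumptions):
<?php
// slave.php - run from cron, e.g. */5 * * * * php /opt/deploy/slave.php
$db = new PDO('mysql:host=localhost;dbname=deploy', 'deploy', 'secret');

// Pick one pending task (assumed status values: 0 = new, 1 = running, 2 = done)
$task = $db->query('SELECT id, argument FROM tasks WHERE status = 0 LIMIT 1')->fetch();
if ($task) {
    $db->exec('UPDATE tasks SET status = 1 WHERE id = ' . (int)$task['id']);

    // Do the actual job and keep its output as the "log entry"
    $log = shell_exec('/opt/deploy/setup.sh ' . escapeshellarg($task['argument']) . ' 2>&1');
    file_put_contents('/var/log/deploy/task-' . (int)$task['id'] . '.log', $log);

    $db->exec('UPDATE tasks SET status = 2 WHERE id = ' . (int)$task['id']);
}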
The pros are:
independent systems, which eases evolution
if a client gets disconnected, the job is never lost
easier to secure
The cons are:
a little more complicated at the beginning.
less reactive, because of the polling interval of the slaves (for instance if they only run every 5 minutes)
less output than the direct output of the command
Notice that you can then implement XML-RPC-like triggers to run the slaves, rather than using a message passing system.
The simplest approach is to use shell_exec for your purpose. It executes a command via shell and returns the complete output as a string, hence you can display it on your website.
If this doesn't suit your purpose, because maybe you want some responses while waiting for the command to finish, check out the other program execution functions available in php (hint: there are a few good comments on the manual pages).
Keep in mind that when invoking command-line scripts this way, the generated output will have the file owner, group and permissions of your webserver user (e.g. wwwrun or whatever). If parts of your deployment need a different owner, group and/or file permissions, you have to set them manually, either in your scripts or after invoking shell_exec (chmod, chown and chgrp can deal with this in PHP).
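For instance (a rough sketch; the script path, target directory and the deploy user/group are made up):
$output = shell_exec('/opt/build/make_package.sh 2>&1');
echo '<pre>' . htmlspecialchars($output) . '</pre>';

// Adjust ownership/permissions of the generated tree if the deployment needs it.
// Note: chown/chgrp generally require the PHP process to have sufficient privileges.
chown('/var/packages/myapp', 'deploy');
chgrp('/var/packages/myapp', 'deploy');
chmod('/var/packages/myapp', 0750);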
About security:
A lot of web-based applications put that kind of functionality into a separate installation directory and kindly ask you to remove this directory after installing. I even remember some of them nagging admins quite persistently until it is removed. This is an easy way of preventing the script from being invoked by the wrong hands at the wrong time. If your application still needs it after installation, then you have to put it in an area where only authorized users have access (e.g. an admin area or something similar).
You can use the Comet web application model to do real time updating of the results page.
For the sudo problem, as a quick and dirty solution I'd use a restricted set of commands that the web server can execute without a password. Example (add to /etc/sudoers):
apache ALL=(ALL) NOPASSWD: /sbin/service myproject *
which would allow apache to run /sbin/service myproject stop, /sbin/service myproject start and so on.
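With that sudoers entry in place, the PHP side could be as simple as this (a sketch, reusing the service name from the example above):
// No password prompt thanks to the NOPASSWD rule
$output = shell_exec('sudo /sbin/service myproject restart 2>&1');
echo '<pre>' . htmlspecialchars($output) . '</pre>';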
Take a look at Jenkins; it does the web part nicely. You'll only have to add your scripts to the build.
A better solution would be, as suggested by Aif, to separate the logic. One daemon process waiting for tasks and a web application visualizing the results.
Always use escapeshellarg and escapeshellcmd when executing system commands via PHP, for security. It would also be advisable to have the user confined to a chroot'd directory that is as limited as possible.
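For example (a minimal sketch, assuming the directory comes straight from user input):
// Escape a single argument: wraps it in quotes and escapes embedded quotes
$dir = escapeshellarg($_GET['dir']);
$output = shell_exec('svn cleanup ' . $dir);

// Or escape a whole command string when you cannot build it argument by argument
$safe = escapeshellcmd('./setup.sh --target=production');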
You seem to have solved the issue of getting stdout and stderr output to your webpage by using proc_open. Now the problem is executing the script via sudo.
From a security perspective, having a PHP app run a script as root (via sudo) makes me cringe. Having the root password in password.txt is a pretty huge security hole.
What I would do (if possible) is to make whatever changes necessary so that setup.sh can be run by the unix user that is running Apache. (I'm assuming you're running PHP with Apache.) If setup.sh is going to be executed by the webserver, it should be able to run it without resorting to sudo.
If you really need to execute the script as a different user, you may check into suPHP, which is designed to run PHP scripts as a non-standard user.
Provide automated sudo rights to a specific user using the NOPASSWD option in /etc/sudoers, then run the command with the prefix sudo -u THE_SUDO_USER to have the command execute as that user. This avoids the security hole of giving the entire apache user sudo rights, while still allowing sudo to be used on the script from within apache.
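A sketch of how that might look (THE_SUDO_USER, the sudoers rule and the script path are placeholders):
// /etc/sudoers (edit with visudo):
//   apache ALL=(THE_SUDO_USER) NOPASSWD: /opt/deploy/setup.sh
$output = shell_exec('sudo -u THE_SUDO_USER /opt/deploy/setup.sh 2>&1');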
Related
I need to run a Linux command from PHP, so I used the ftp_exec() function.
$command = 'ls -al > /ftp_test/t.log';
if (ftp_exec($ftp_conn, $command))
{
    echo "$command executed successfully.";
}
else
{
    echo "Execution of $command failed.";
}
But it gives me this warning:
Warning: ftp_exec(): Unknown SITE command
I have googled and found, for ftp_exec, that "execution via FTP isn't very widely supported. Check that it works on the servers that you intend to connect to before you start coding something that requires this."
Can anybody give me an idea of how to run a Linux command from PHP?
If you have the appropriate authorization you may do so via SSH:
$file_list = shell_exec('ssh user@site "ls -la"');
You'll need user to have an authorized SSH key for site, and user must be accessible from whatever user is running PHP. This usually boils down to using the user wwwrun for both.
Or you can use sudo for added security, by placing the command into a script of its own, then sudoing it:
$file_list = shell_exec('sudo /usr/local/bin/ssh-ls-site');
Now user wwwrun can be allowed to run ssh-ls-site but can't modify its contents, so he can't run arbitrary commands, nor has he access to the ssh authorization key.
The ssh-ls-site script can log the request as well as update a local marker file, and exit immediately if the file is newer than a certain guard time. This will prevent possible DoS attacks against site (running lots of allowed commands and exhausting resources) and also improve performance; if, for example, you need to run the command often, you can save the results into a temporary file. Then, if this file exists and is not too old, you just read back its contents instead of asking site again, effectively caching the command locally.
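A rough PHP sketch of that caching idea (the cache file name and guard time are arbitrary):
$cache  = '/tmp/ssh-ls-site.cache';
$maxAge = 60; // seconds; the "guard time"

if (is_file($cache) && (time() - filemtime($cache)) < $maxAge) {
    // Fresh enough: read back the cached listing instead of contacting site again
    $file_list = file_get_contents($cache);
} else {
    $file_list = shell_exec('sudo /usr/local/bin/ssh-ls-site');
    file_put_contents($cache, $file_list);
}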
I have a php script that leads up to running another expect script by passing it arguments.
$output = shell_exec("expect login_script.tcl '$user' '$host' '$port' '$password'");
Using shell_exec doesn't work, as the script gets run in the background or 'within' the PHP script. I need it to run in the foreground, allowing user interactivity. Is there an elegant way to do this? Already it is getting messy by having to use different scripting languages. I tried wrapping the two scripts with a shell script that called the PHP script, assigned the output (which was a command) to a variable and then ran sh on that. However, I had the same problem again where the scripts are run in the background and any user interactivity creates a halt/freeze. It's OK in this situation if PHP 'quits' out when calling shell_exec, i.e. PHP stops and expect gets run as if you called it directly (the same as if I just copied the command that is output and pasted it into the terminal).
Update
I am having much more luck with the following command in php:
shell_exec("gnome-terminal -e 'bash -c \"expect ~/commands/login_script.tcl; exec bash\"' &");
However, can this be improved so as not to close the shell immediately after the secondary script (login_script) is finished?
Further Update
From reading the answers I think I need to clarify things as it looks like people are assuming a 'more complicated' issue.
The two processes do not need to communicate with each other. I should probably not have put the $output = shell_exec in the example, just shell_exec on its own, as I believe this has led to the confusion.
The PHP script only needs to initiate the expect script with some CLI parameters, e.g. my-script 'param1' 'param2', and can be thought of as completely 'asynchronous'. This is much like the behaviour of launcher programs like 'launchy' or 'synapse': they can launch other programs but need not affect them, nor do they wait for the secondary program to quit/finish.
I made the mistake of saying 'shell_exec doesn't work for me'. What I should have said was that I have so far not succeeded with shell_exec, but shell_exec("gnome-terminal -e 'bash -c \"expect ~/commands/login_script.tcl; exec bash\"' &"); is 'working'; I am still trying to find the right quote combination to allow passing arguments to the expect script.
Task managing is an interesting but difficult job.
Because your user can navigate away during a task (which can lead to unexpected results, such as session freezes or incomplete work from the process), you need to execute it in the background. If your user and your process need to interact, you'll need to create a way for them to communicate.
The easiest way (I think) is to use a file, shared between your user session and the task.
If you have a lot of simultaneous users and a lot of communication between users and processes, you can mount a partition in memory to optimize the read/write operations.
In your fstab, a line like:
tmpfs /memory tmpfs defaults,uid=www-data,gid=www-data,size=128M 0 0
Or, in a script, you could do:
#!/bin/sh
mkfs -t ext2 -q /dev/ram1 65536
[ ! -d /memory ] && mkdir -p /memory
mount /dev/ram1 /memory
chmod -R 777 /memory
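The web page and the background task could then share a small progress file on that mount; a sketch (the /memory path matches the examples above, the file name is made up):
// In the background task: append a progress line as the work advances
file_put_contents('/memory/task-42.log', "exporting code from svn...\n", FILE_APPEND);

// In the web page: poll the same file and show whatever has been written so far
$progress = @file_get_contents('/memory/task-42.log');
echo '<pre>' . htmlspecialchars($progress !== false ? $progress : 'not started yet') . '</pre>';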
You'll need to take care of a lot of things:
file access (to avoid concurrency between your webapp and your processes)
time (to avoid zombies or useless long-running scripts)
security (such operations must be carefully designed)
resource management (to avoid 10000 processes running simultaneously)
...
I think what you're looking for is the proc_open() command. It gives you access to the stdin/stdout streams of the background process. You can pass your own stdin/stdout streams to the new process in the $descriptorSpec parameter, which will let your background process talk to the user.
Your 'foreground' application will have to wait around until the background process has died. I haven't actually done this with PHP, but I'm guessing you'll have to watch the $pipes to see when they get closed; then you'll know the background process is finished and you can delete the process resource and continue on with whatever the foreground process needs to do.
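A minimal sketch of that approach (only meaningful when the PHP script itself runs from a terminal; the expect script path is taken from the question):
$descriptorSpec = array(
    0 => STDIN,  // hand the child the same stdin the PHP CLI process has
    1 => STDOUT, // and the same stdout, so the user can interact with it directly
    2 => STDERR,
);
$proc = proc_open('expect ~/commands/login_script.tcl', $descriptorSpec, $pipes);
if (is_resource($proc)) {
    $exitCode = proc_close($proc); // blocks until the background process has died
}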
In the end, I managed to get it working by adding a third quotation mark type: ` (a backtick), which allowed me to pass arguments to the next script from the first script.
The command I needed in my php script was:
$command = `gnome-terminal -e 'bash -c "expect ~/commands/login_script.tcl \"$user\" \"$host\" \"$port\" \"$password\"; exec bash"' &`;
shell_exec($command);
It took a while to get all the quotes right as swapping the type of quotes around can lead to it not working.
Here is a video demonstrating the end result
Use:
pcntl_exec("command", array("parameter1", "parameter2"));
For example, I have a script that starts the mysql command using the parameters of the current PHP project; it looks like:
pcntl_exec("/usr/bin/mysql", array(
"--user=".$params['user'],
"--password=".$params['password'],
"--host=".$params['host'],
$params['dbname']
));
This doesn't rely on gnome-terminal or anything; it replaces PHP with the program you call.
You do need to know the full path of the command, which is a pain because it can vary by platform, but you can use the env command, which is available at /usr/bin/env on most systems, to find the command for you. The above example becomes:
pcntl_exec("/usr/bin/env", array(
"mysql",
"--user=".$params['user'],
"--password=".$params['password'],
"--host=".$params['host'],
$params['dbname']
));
I'm currently working on a project to make changes to the system with PHP (e.g. change the config file of Nginx / restarting services).
The PHP scripts are running on localhost. In my opinion the best (read: most secure) way is to use SSH to make a connection. I am considering one of the following options:
Option 1: store username / password in php session and prompt for sudo
Using phpseclib with a username / password, save these values in a php session and prompt for sudo for every command.
Option 2: use root login
Using phpseclib with the root username and password in the login script. In this case you don't have to ask the user for sudo (not really a safe solution).
<?php
include('Net/SSH2.php');

$ssh = new Net_SSH2('www.domain.tld');
if (!$ssh->login('root', 'root-password')) {
    exit('Login Failed');
}
?>
Option 3: Authenticate using a public key read from a file
Use the PHP SSHlib with a public key to authenticate and place the pubkey outside the www root.
<?php
$connection = ssh2_connect('shell.example.com', 22, array('hostkey' => 'ssh-rsa'));

if (ssh2_auth_pubkey_file($connection, 'username',
                          '/home/username/.ssh/id_rsa.pub',
                          '/home/username/.ssh/id_rsa', 'secret')) {
    echo "Public Key Authentication Successful\n";
} else {
    die('Public Key Authentication Failed');
}
?>
Option 4: ?
I suggest you do this in 3 simple steps:
First.
Create another user (for example runner) and make your sensitive data (like user/pass, private keys, etc.) accessible only to this user. In other words, deny your PHP code any access to this data.
Second.
After that, create a simple blocking fifo pipe and grant write access to your PHP user.
Last.
Finally, write a simple daemon to read lines from the fifo and execute them, for example via the ssh command. Run this daemon as the runner user.
To execute a command you just need to write it to the file (fifo pipe). Output can be redirected to another pipe or to some simple files if needed.
To make the fifo, use this simple command:
mkfifo "FIFONAME"
The runner daemon would be a simple bash script like this:
#!/bin/bash
while [ 1 ]
do
cmd=$(cat FIFONAME | ( read cmd ; echo $cmd ) )
# Validate the command
ssh 192.168.0.1 "$cmd"
done
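On the PHP side, queuing a command is then just a write to the fifo; a sketch (the fifo path is an assumption):
// Opening the fifo for writing blocks until the runner daemon is reading it
$fifo = fopen('/var/run/runner/FIFONAME', 'w');
fwrite($fifo, "uptime\n"); // one command per line
fclose($fifo);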
With this in place you can trust your code: even if your PHP code is completely compromised, your upstream server is still secure. In such a case, the attacker cannot access your sensitive data at all. He can send commands to your runner daemon, but if you validate the commands properly, there's nothing to worry about.
:-)
Method 1
I'd probably use the suid flag. Write a little suid wrapper in C and make sure all commands it executes are predefined and cannot be controlled by your PHP script.
So, you create your wrapper and get the command from ARGV. A call could look like this:
./wrapper reloadnginx
The wrapper then executes /etc/init.d/nginx reload.
Make sure you chown the wrapper to root and chmod +s it. It will then run the commands as root, but since you predefined all the commands, your PHP script cannot do anything else as root.
Method 2
Use sudo and set it up for passwordless execution of certain commands. That way you achieve the same effect and only certain applications can be spawned as root. You can't, however, control the arguments, so make sure there is no privilege escalation possible in these binaries.
You really don't want to give a PHP script full root access.
If you're running on the same host, I would suggest to either directly write the configuration files and (re)start services or to write a wrapper script.
The first option obviously needs a very high privilege level, which I would not recommend. However, it will be the simplest to implement. Your other options with SSH do not help much, as a possible attacker may still easily get root privileges.
The second option is much more secure and involves writing a program with high-level access which only takes specified input files, e.g. from a directory. The PHP script is merely a frontend for the user and will write said input files, and thus only needs very low privileges. That way you have a separation between your high and low privileges and therefore mitigate the risk, as it is much easier to secure a program with which a possible attacker can only interact indirectly through text files. This option requires more work, but is a lot more secure.
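As an illustration of that second option, the PHP frontend might only drop a small task file into a spool directory that the privileged program watches; a sketch (the directory, whitelist and file format are made up):
// Low-privilege side: describe the desired change, nothing more
$action = $_POST['action'];
if (!in_array($action, array('reload_nginx', 'restart_app'), true)) {
    die('unknown action');
}
$task = array(
    'action'       => $action,
    'requested_by' => isset($_SESSION['user']) ? $_SESSION['user'] : 'unknown',
);
file_put_contents('/var/spool/confchanges/' . uniqid('task_', true) . '.json', json_encode($task));
// The high-privilege program validates and applies these files on its own schedule.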
You can extend option 3 and use SSH Keys without any library
$command = sprintf('ssh -i%s -o "StrictHostKeyChecking no" %s@%s "%s"',
    $path, $this->getUsername(), $this->getAddress(), $cmd);
return shell_exec($command);
I use it quite a lot in my project. You can have a look at the SSH adapter I created.
The problem with the above is that you can't make real-time decisions (while connected to a server). If you need real time, try the PHP extension called SSH2.
P.S. I agree with you, SSH seems to be the most secure option.
You can use setcap to allow Nginx to bind to ports 80/443 without having to run as root. Nginx only has to run as root to bind to ports 80/443 (anything below 1024). setcap is detailed in this answer.
There are a few caveats though. You'll have to chown the log files to the right user (chown -R nginx:nginx /var/log/nginx), and configure the pid file to be somewhere other than /var/run/ (pid /path/to/nginx.pid).
lawl0r provides a good answer for setuid/sudo.
As a last resort, you could reload the configuration periodically using a cron job if it has changed.
Either way you don't need SSH when it's on the same system.
Dude, that is totally wrong.
You do not want to change any config files; you want to create new files and then include them in the default config files.
For example, with Apache you have *.conf files and the Include directive. It is totally insane to change the default config files. That is how I do it.
You do not need SSH or anything like that.
Believe me, it is better and safer.
I know there have been similar questions, but they don't solve my problem...
After checking out the folders from the repo (which works fine), a method is called from jQuery to execute the following in PHP:
exec ('svn cleanup '.$checkout_dir);
session_write_close(); //Some suggestion that was supposed to help but doesn't
exec ('svn commit -m "SAVE DITAMAP" '.$file);
These would output the following:
svn cleanup USER_WORKSPACE/0A8288
svn commit -m "SAVE DITAMAP" USER_WORKSPACE/0A8288/map.ditamap
1) The first line (exec('svn cleanup ...')) executes fine.
2) As soon as I call svn commit, my server hangs and everything goes to hell.
The apache error logs show this error:
[notice] Child 3424: Waiting 240 more seconds for 4 worker threads to finish.
I'm not using the php_svn module because I couldn't get it to compile on Windows.
Does anyone know what is going on here? I can execute the exact same command from a terminal window and it executes just fine.
Since I cannot find any documentation on jQuery exec(), I assume this is calling PHP's exec(). I copied this from the documentation page:
When calling exec() from within an Apache PHP script, make sure to take care of stdout, stderr and stdin (as in the example below). If you forget this and your shell command produces output, the sh and Apache daemons may never return (they will normally time out after a few minutes). From the calling web page the script may seem to not return any data.
If you want to start a PHP process that continues to run independently from Apache (with a different parent pid), use nohup. Example:
exec('nohup php process.php > process.out 2> process.err < /dev/null &');
Hope it helps.
Okay, I've found the problem.
It actually didn't have anything to do with the exec running in the background, especially because a one-file commit doesn't take a lot of time.
The problem was that the commit was expecting a --username and --password that never showed up, and that just caused Apache to hang.
To solve this, I changed the svnserve.conf in the folder where I installed svn to allow non-auth users write access.
I don't think you'd normally want to do this, but my site already authenticates the user name and pass upon logging in.
Alternatively you could
I have tried calling a windows program several ways and I have gotten the same result each time.
The program opens up on my machine (without a GUI) but never closes, which means that the browser is forever loading.
However, when executing the query string manually through the command prompt, the program closes. Not only that, but when run from PHP the program doesn't actually do its work
(it is just launched, i.e. there aren't any results).
I just want to know the proper way of starting a program with switches through PHP.
Here is the query string that works (closes the program after executing):
"C:\Program Files (x86)\Softinterface, Inc\Convert PowerPoint\ConvertPPT.exe" /S
"C:\Users\Farzad\Desktop\upload\test.ppt" /T "C:\Users\Farzad\Desktop\upload\test.png" /C 18
If the program never closes, then PHP can't return a value from exec(). The program must close. Chances are there is a problem accessing your files on your desktop in this manner. It will be executed with whatever permissions the webserver has defined.
http://php.net/manual/en/function.exec.php
You might consider the advanced functionality of proc_open(). It will give you access to all the necessary pipes, but I don't think that will help you in this situation.
If the target path on your Windows machine is C:\Program Files (x86)\Softinterface, Inc\Convert PowerPoint\ConvertPPT.exe, you need to double-quote the directory names that have space characters in them.
Translated into PHP terms, it should look like this:
$directory = 'C:\"Program Files (x86)"\"Softinterface, Inc"\"Convert PowerPoint"\ConvertPPT.exe';
$command = $directory . ' enter your arguments here';
exec($command, $output, $return_var);
// if $return_var == 0, you hit the jackpot.
The physical directory where your Windows desktop is stored belongs to your user profile folder. That means that other users (including the one Apache runs as, which is typically "Local System") won't have the appropriate permissions to read and write files in it. While you could adjust your Apache set-up to make it run under your own user, Farzad, it's more common to put web applications in an entirely different directory tree. It may happen that ConvertPPT.exe just stalls because it's trying to write a file at a location where it's not allowed to. I suggest you create a top-level directory and make sure it's world-writable (once finished, you can tighten those permissions if you like).
Once you discard (or confirm) that the issue is caused by lack of appropriate credentials, make sure you are escaping your command and arguments properly. See this link:
http://es2.php.net/manual/en/function.exec.php#101579
One more thing you can try is to close the PHP session before issuing the call to exec():
http://es2.php.net/manual/en/function.exec.php#99781
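In this case that would look something like the following, using the command from the question:
session_write_close(); // release the session lock before the long-running call
$cmd = '"C:\Program Files (x86)\Softinterface, Inc\Convert PowerPoint\ConvertPPT.exe"'
     . ' /S "C:\Users\Farzad\Desktop\upload\test.ppt"'
     . ' /T "C:\Users\Farzad\Desktop\upload\test.png" /C 18';
exec($cmd, $output, $return_var);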
You have probably run into this bug: http://bugs.php.net/bug.php?id=44994
which has been bothering me for ages, even today, on PHP 5.3.5.
It seems like there is some kind of deadlock between the error output of the program and the Apache error log file handle into which the program's stderr output is redirected, making the program stuck forever until the Apache processes are killed.
Also, when using passthru, or system, or the backtick operator, there's an intermediary "cmd.exe" process that is used to run the program in an invisible console, and I have seen this cmd process get stuck without even running the program.
I don't really have a solution as of now, and it seems the bug, even though reproduced by many people, hasn't been resolved.