OpenBSD - PHP (chroot) - exec() cannot execute statically linked binary

I use OpenBSD and PHP for my private homepage. For the purpose of education (don't use this in production), I have tried to execute a simple Hello World program in a chroot. I linked the binary statically, but I always get the result 127 (command not found).
How I execute the command in PHP:
<?php
$output = null;
$result = null;
echo getcwd();
exec("./foo", $output, $result);
var_dump($output);
var_dump($result);
?>
The program ./foo definitely resides in the current working directory. Also the file permissions are correct.
The program in C:
#include <stdio.h>

int main(int argc, char *argv[])
{
    fprintf(stdout, "foo: stdout");
    fprintf(stderr, "foo: stderr");
    return 42;
}
Compiled with:
$ cc -static -o foo foo.c
The output from PHP:
htdocs/example.org/www
array(0) {
}
int(127)
I would understand this behavior if the program were linked dynamically (missing shared libraries).
Is there a specific security feature enabled in OpenBSD's default configuration that prevents PHP from executing binaries, or can somebody explain why this isn't working?
I also haven't disabled the exec() function in /etc/php-8.0.ini.

I have found the solution.
In OpenBSD's default configuration, httpd and PHP run in a chroot. The default chroot directory is /var/www, and only a few programs exist in /var/www/bin. The default shell is missing as well, and unfortunately PHP's exec() requires a shell, hence the error 127 (command not found).
A quick and dirty solution is to copy /bin/sh (which is statically linked) into the chroot:
$ cp /bin/sh /var/www/bin
Now PHP's exec command should work on OpenBSD.
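As an aside, a shell-less alternative might be proc_open() with an array command, which is available since PHP 7.4 and executes the program directly without going through /bin/sh. A minimal sketch, assuming the same ./foo binary as above:
<?php
// Sketch: pass an array so PHP executes ./foo directly, without /bin/sh.
$descriptors = [
    0 => ['pipe', 'r'],  // child stdin
    1 => ['pipe', 'w'],  // child stdout
    2 => ['pipe', 'w'],  // child stderr
];
$proc = proc_open(['./foo'], $descriptors, $pipes, getcwd());
if (is_resource($proc)) {
    fclose($pipes[0]);
    $stdout = stream_get_contents($pipes[1]);
    $stderr = stream_get_contents($pipes[2]);
    fclose($pipes[1]);
    fclose($pipes[2]);
    $status = proc_close($proc);  // should be 42 for the example program
    var_dump($stdout, $stderr, $status);
}
?>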

Prevent pipe from going into sub-shell of script

I'm piping lines into a PHP script (see contrived example below). Alas, the pipe unintentionally flows into the shell command in the script, so nano doesn't run because it chokes on STDIN.
I want the shell command to run completely unrelated to the STDIN piped to the main script. So the PHP script should in a way "eat" the STDIN, so it doesn't reach the sub-shell. How do I fix this?
Note that exec(), system() and passthru() all give the same result.
$ echo -e "World\nEverybody" | php script.php
Hello World
Received SIGHUP or SIGTERM
Hello Everybody
Received SIGHUP or SIGTERM
script.php:
<?php
foreach (file("php://stdin") as $name) {
    echo "Hello $name";
    passthru("nano");
}
?>
Environment:
PHP 7.1.14 / PHP 5.6.30
GNU bash, version 3.2.57
GNU nano version 2.0.6
The pipe isn't really flowing into the sub-shells. In fact, nothing is flowing in. To connect nano's STDIN to the terminal, you redirect the controlling terminal (always /dev/tty) into nano, like this:
passthru("nano </dev/tty");
Here's an answer to your follow-up question. (Very good question IMO. My previous answer was slightly wrong in fact. STDIN does flow into the child processes.)
If the script consists of just passthru("nano") and you don't pipe anything into PHP, then nano works without </dev/tty. Why is this?
Linux behavior
In fact, child processes do inherit STDIN from their parent process, but because of buffering this isn't always obvious. And since they inherit the same STDIN, once EOF is reached they do whatever they normally do at EOF (to see what nano does in this case, see below).
Let's take PHP out of the equation and see what we get when we turn buffering on or off. Here's some C code that will read from STDIN, call system(), and read from STDIN again:
#include <stdio.h>
#include <stdlib.h>

int main() {
    // setvbuf(stdin, NULL, _IONBF, 0 );
    char buffer[32];
    gets(buffer);
    printf("Hello %s\n", buffer);
    system("bash -c 'read FOO; echo This is bash, got $FOO'");
    gets(buffer);
    printf("Hello2 %s\n", buffer);
}
Compile (ignore the warnings about gets) and run:
$ cc -o script script.c
$ echo -e "Foo\nBar\nCar" | ./script
Hello Foo
This is bash, got
Hello2 Bar
bash didn't get anything; the gets after system magically got that input instead. Now uncomment the first line:
- // setvbuf(stdin, NULL, _IONBF, 0 );
+ setvbuf(stdin, NULL, _IONBF, 0 );
And we get:
$ cc -o script script.c
$ echo -e "Foo\nBar\nCar" | ./script
Hello Foo
This is bash, got Bar
Hello2 Car
This time bash got the second input. "Too long; didn't read": We do in fact have the same STDIN.
`nano` internals
First of all, you will find that nano's behavior is the same even if you take PHP out of the equation:
$ echo foo | nano
Received SIGHUP or SIGTERM
nano could in theory very well detect if we've got a terminal, and if we don't, attempt to open /dev/tty (it's just a regular open call). In fact, nano does this if you execute nano like this:
echo foo | nano -
The scoop_stdin function in src/nano.c takes care of this in version 2.9.4: http://git.savannah.gnu.org/cgit/nano.git/tree/src/nano.c?h=v2.9.4#n1122
And the finish_stdin_pager function in version 2.7.4: http://git.savannah.gnu.org/cgit/nano.git/tree/src/nano.c?h=v2.7.4#n1116
So what happens when nano gets EOF? EOF in key input is handled like this:
Version 2.7.4: get_key_buffer() calls handle_hupterm(0) directly.
http://git.savannah.gnu.org/cgit/nano.git/tree/src/winio.c?h=v2.7.4#n207
Version 2.9.4: die(_("Too many errors from stdin"));
http://git.savannah.gnu.org/cgit/nano.git/tree/src/winio.c?h=v2.9.4#n207
(The reason I'm linking both is because the message changed at some point.)
I hope that sort of makes things clearer.

How to execute system calls in php via browser?

I have a C program that makes a system call (CentOS 6.0) to encrypt a file; my code is:
#include <stdlib.h>

int main() {
    system("gpg -c --batch --passphrase mypass file.txt");
    return 0;
}
The executable object is called encrypt_file
When I run ./encrypt_file directly from the CLI it works perfectly and I obtain my file.txt.gpg, but when I try to execute it via the browser I get no response.
Code in php:
shell_exec("./encrypt_file");
The reason I chose to write a C program is that I need the passphrase to be in the code but not visible: once I delete the .c file that contains the passphrase, all I have left is the executable and no visible passphrase.
I already changed ownership to the apache user by issuing the following:
chown apache.apache /var/www/html/
And added the next line in /etc/sudoers:
apache ALL=(ALL) NOPASSWD:ALL
NOTE: The only command I have issues with is gpg. I can make a system call with any other command I need; I can even run Python scripts and other C programs that don't contain anything related to gpg.
I hope for a fast reply! I need to use this encrypt_file a lot!
Checking the error_log in /var/log/httpd/error_log I saw this line:
gpg: Fatal: can't create directory `/var/www/.gnupg': Permission denied
Then I found a solution at this site -> http://gnupg.10057.n7.nabble.com/Exi...pt-td7342.html
I added the --homedir option to the gpg command, pointing it at the path shown in Apache's error_log, and it works perfectly!
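For illustration, invoking gpg from PHP with an explicit, writable home directory might look roughly like this; the /var/www/.gnupg path is taken from the error message above and assumes the directory exists and is writable by the apache user:
<?php
// Sketch: point gpg at a home directory the apache user can write to,
// and capture stderr so failures show up in the output.
$output = shell_exec('gpg --homedir /var/www/.gnupg -c --batch --passphrase mypass file.txt 2>&1');
var_dump($output);
?>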
Thanks to all!

Execute root commands via PHP

I have a CentOS 5.7 Linux server and use PHP 5.3.x.
On a pfSense system, you can restart services that require root permissions using a PHP web page.
I'm trying to do something similar and have written some PHP code to execute shell commands. For example, to restart the sshd service:
<?php
exec('/sbin/service sshd restart');
?>
I tried to execute that command via the exec function, but it needs root permission, while PHP only has the apache user's authority.
I have come across a few solutions:
"run apache as the root user"
Really unsafe; I do not want to do that.
"add apache ALL=NOPASSWD:/sbin/service to /etc/sudoers"
I tried it but still have a problem.
Any other solutions?
Thanks for any answers.
Now, this is interesting: I tried refp's post and it worked on my local Ubuntu server. But when I tried the same on my CentOS VPS it doesn't work, and Apache's error log shows: rm: cannot remove `/var/lock/subsys/vsftpd': Permission denied
Read this whole post before trying it out; there are choices to be made.
Solution using a binary wrapper (with suid bit)
1) Create a script (preferably a .sh file) that contains what you want to be run as root.
# cat > php_shell.sh <<CONTENT
#!/bin/sh
/sbin/service sshd restart
CONTENT
2) This file should be owned by root, and since it will later run with root permissions, make sure that only root has permission to write to it.
# chown root php_shell.sh
# chmod u=rwx,go=xr php_shell.sh
3) To run the script as root no matter which user executes it, we need a binary wrapper. Create one that will execute our php_shell.sh.
# cat > wrapper.c <<CONTENT
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    setuid(0);

    /* WARNING: Only use an absolute path to the script to execute;
     * otherwise a malicious user might fool the binary into executing
     * arbitrary commands.
     */
    system("/bin/sh /path/to/php_shell.sh");
    return 0;
}
CONTENT
4) Compile and set proper permissions, including the suid bit (so that it runs with root privileges):
# gcc wrapper.c -o php_root
# chown root php_root
# chmod u=rwx,go=xr,+s php_root
php_root will now run with root permissions, and execute the commands specified in php_shell.sh.
If you don't need the option to easily change which commands will be executed, I'd recommend writing the commands directly in wrapper.c. Then you don't need a binary executing an external script that executes the commands in question.
In wrapper.c, use system ("your shell command here"); to specify what commands you'd like to execute.
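On the PHP side, invoking the wrapper is then an ordinary exec() call. A minimal sketch, assuming php_root was placed somewhere the web server can reach (the path below is a placeholder):
<?php
// Run the suid wrapper and inspect its output and exit status.
exec('/path/to/php_root', $output, $status);
var_dump($output, $status);
?>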
I would not have PHP execute any sudo commands. To me that sounds like asking for trouble. Instead I would create two separate systems.
The first system, in PHP (the web tier), would handle user requests. When a request is made that needs a sudo command, I would place this request in some queue. This could be a database of some sort or middleware such as ZeroMQ.
The second system (the business tier) would read or receive messages from this queue and would have the ability to execute sudo commands, but wouldn't be in the scope of your web-server process.
I know this is a bit vague and it can be solved in different ways with various technologies but I think this is the best and safest way to go.
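As a very rough sketch of that idea (the SQLite file, table and action names are made up for illustration): the web tier only records the request, and a separate root-privileged worker reads the queue and runs the command.
<?php
// Web tier: queue the request instead of executing it.
$db = new PDO('sqlite:/var/app/queue.db');
$db->prepare('INSERT INTO jobs (action, created_at) VALUES (?, ?)')
   ->execute(['restart_sshd', time()]);

// Worker (separate process run as root, e.g. from cron, NOT by the web server):
// $job = $db->query('SELECT id, action FROM jobs ORDER BY id LIMIT 1')->fetch();
// if ($job && $job['action'] === 'restart_sshd') {
//     exec('/sbin/service sshd restart');
//     $db->prepare('DELETE FROM jobs WHERE id = ?')->execute([$job['id']]);
// }
?>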
Allow the www-data user to run program1 and program2 with no password:
sudo visudo
Add to the contents of the sudoers file:
User_Alias WWW_USER = www-data
Cmnd_Alias WWW_COMMANDS = /sbin/program1, /sbin/program2
WWW_USER ALL = (ALL) NOPASSWD: WWW_COMMANDS
Save.
from https://askubuntu.com/questions/76920/call-a-shell-script-from-php-run-as-root
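With such a rule in place, the PHP side would presumably be a plain sudo invocation (program1 here is just the placeholder name from the rule above):
<?php
// The NOPASSWD rule means sudo won't prompt; still check the exit status.
exec('sudo /sbin/program1', $output, $status);
var_dump($output, $status);
?>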
Solution using a binary wrapper (with suid bit)
A small modification of Filip Roséen (refp)'s post.
To execute any command, modify wrapper.c:
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>
#include <string.h>
int main(int argc, char **argv)
{
    setuid(0);

    /* Concatenate all arguments except argv[0] into one command string.
     * Note: cmd is a fixed-size buffer, so very long commands will overflow it. */
    char cmd[100] = "";
    int i;

    for (i = 1; i < argc; i++) {
        strcat(cmd, argv[i]);
        strcat(cmd, " ");
    }

    system(cmd);
    return 0;
}
Compile and set proper permissions:
gcc wrapper.c -o php_root # php_root can be any name.
chown root php_root
chmod u=rwx,go=xr,+s php_root
Now call it from PHP to execute any command:
shell_exec('./php_root ' . $cmd); // execute via the wrapper
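A word of caution, since this wrapper will run whatever it is given: escapeshellarg() only protects the outer shell_exec() call, and the wrapper then re-joins its arguments and passes them through a second shell via system(), so it should only ever be fed trusted input. A minimal sketch of escaping a single argument:
<?php
// Escape the argument for the outer shell; the wrapper's own system() call
// still re-interprets the joined string, so do not pass untrusted input here.
$file = '/var/log/messages';  // example argument
$out  = shell_exec('./php_root cat ' . escapeshellarg($file));
var_dump($out);
?>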
I recently published a project that allows PHP to obtain and interact with a real Bash shell; it will easily give you a shell logged in as root. Then you can just execute the individual bash commands rather than bundling them in a script. That way you can also handle the return value. Get it here: https://github.com/merlinthemagic/MTS
After downloading you would simply use the following code:
$shell = \MTS\Factories::getDevices()->getLocalHost()->getShell('bash', true);
$return1 = $shell->exeCmd('service sshd restart');
echo $return1;
//On CentOS 7 output would be like:
//Redirecting to /bin/systemctl restart sshd.service
//On CentOS 6 and lower output would be like:
//Stopping sshd: [ OK ]
//Starting sshd: [ OK ]
setenforce 0
Running this command in the terminal disables SELinux enforcement (until the next reboot). Note that this isn't good practice.

PHP system() - return status is always 0

I need to get the following scripts running.
// File: script_a.php
<?php exit(1); ?>
// File: script_b.php
<?php
system('php script_a.php', $return);
var_dump($return);
?>
Now my problem: on my Windows system, running script_b.php shows int(1) as expected. On our Unix server I always get int(0), which makes it impossible for me to check whether a certain failure happens inside script_a.php.
Does anybody know this problem and how to solve it?
You might want to check if it's calling the right php executable on the Unix machine. On many Unix systems you would need to call the php-cli executable instead of the php one for use on the command line.
Another thing to check would be permissions. Maybe the user executing the script_b.php script doesn't have permissions to execute script_a?
Is __halt_compiler() called somewhere? Are you able to check that?
Try making the PHP system call with the absolute path of both the PHP executable and the script filename, e.g.: system('/usr/bin/php /path/to/script_a.php', $return);. Maybe it's a path issue. (You may find the absolute path of your PHP executable with: which php).
Also, as someone suggested, try debugging the actual return value of script_a.php on your UNIX server by running php script_a.php; echo $? on the command line. That echo will output the last return value, i.e., the value returned by script_a.php.
Anyway, I suggest doing an include with a return statement as described in Example #5 of the include() documentation. If you can adapt your scripts like this, it's a more efficient way for them to communicate.
// File: script_a.php
<?php return 1; ?>
// File: script_b.php
<?php
$return = (include 'script_a.php');
var_dump($return);
?>
Have you checked whether safe_mode is enabled on the Unix server?
From the PHP manual:
Note: When safe mode is enabled, you can only execute files within the safe_mode_exec_dir. For practical reasons, it is currently not allowed to have .. components in the path to the executable.
Or maybe the system function is disabled?
I can't reproduce it either (PHP 5.3.3, Ubuntu).
When I set the exit value to something more grep-able, like "666", tracing the scripts also returned what is expected:
strace -f php5 script_b.php 2>&1 | grep EXITSTATUS
[pid 18574] <... wait4 resumed> [{WIFEXITED(s) && WEXITSTATUS(s) == 666}], 0, NULL) = 18575
waitpid(18574, [{WIFEXITED(s) && WEXITSTATUS(s) == 666}], 0) = 18574
The "-f" to strace let's it follow child processes as you use the system call. "2>&1" redirects stderr to stdout to let everything grep. You can also pipe it to "|less" to go through but the output is long and not very readable.
I can't reproduce this on my system, Ubuntu Hardy. Here's a sample:
/tmp$ mkdir /tmp/sbuzz
/tmp$ cd /tmp/sbuzz
/tmp/sbuzz$ echo '<?php exit(1); ?>' >script_a.php
/tmp/sbuzz$ cat >script_b.php
<?php
system('php script_a.php', $return);
var_dump($return);
?>
/tmp/sbuzz$ php script_b.php
int(1)
/tmp/sbuzz$ echo '<?php exit(2); ?>' >script_a.php
/tmp/sbuzz$ php script_b.php
int(2)
/tmp/sbuzz$
Exit code 0 means successful execution of the program, so it kind of sounds like you are perhaps running the wrong script_a.php or perhaps the "php" executable isn't doing what you are expecting? Perhaps you have a script called "php" that is in your path before the interpreter? What does "which php" report? On my system it says "/usr/bin/php".
If PHP can't find the script, it would exit with 1, for example:
/tmp/sbuzz$ cat script_b.php
<?php
system('php doesnt_exist_script_a.php', $return);
var_dump($return);
?>
/tmp/sbuzz$ php script_b.php
Could not open input file: doesnt_exist_script_a.php
int(1)
/tmp/sbuzz$
In this case I changed the script_b.php to try to run a script that doesn't exist, and I get the exit code 1 (it should be 2 if it completed successfully, because I changed the script_a above), but it also shows the error that it couldn't run the program.
You might want to try changing it to specifically run the full path to the PHP executable:
system('/usr/bin/php script_a.php')
or also the full path to the script as well:
system('/usr/bin/php /tmp/sbuzz/script_a.php')
You could also try specifically executing a program that will return 1, just as another data-point, such as:
system('false')
system('bash -c "exit 69"')
You might want to try an exit code other than 1, which is a common failure. That's why I did "exit 69" above. "false" will exit with 1.
Also, of course, try running the script_a.php directly:
/tmp/sbuzz$ php script_a.php
/tmp/sbuzz$ echo $?
2
/tmp/sbuzz$
The "$?" is the exit code of the last run command, at the shell prompt.
Try:
<?php
die(1);
?>
If that fails as well, check out the stdout of:
strace php script_a.php
Not sure if these problems are related, but you may want to take a look at "exec always returns -1 (or 127)", as I had a similar problem in the past... even if I didn't actually solve it.
In your case it might be another problem; I'm not sure how it would be reproducible, but I've seen cases where the return string for an unknown command is the return string from bash (bash: command not found). On most servers I don't get anything, though. You may want to check the shell setup for the current user (I assume it would be www-data).
Taking into consideration your comment that the problem occurs on a Unix system when your script_b is something like
system('php script_a.php | tee myLogFile', $return);
You may use this syntax
system("bash -c 'php script_a.php | tee log.txt ; exit \${PIPESTATUS[0]}'", $return);
Do man bash and search for PIPESTATUS for more details.
I know this is an old thread, but I just had a similar problem.
The exit status was being set to 0 when I had the script run in the background, something like
system('php script_a.php &', $return);
Could you have been doing that but just generalizing for readability?
You need to be running as root or use sudo for access via PHP.
Try something like this:
system('sudo /usr/bin/php -f script_a.php', $return);
in your script_b.php,
and edit /etc/sudoers to add the following line:
apache ALL=(ALL) NOPASSWD: /usr/bin/php -f script_a.php
If PHP is not at /usr/bin/php, change that reference,
and also use the full path to the script_a.php file, something like /var/www/html/script_a.php, or wherever it is physically located.
Thanks.
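Putting that last suggestion together, script_b.php might end up looking roughly like this; the /var/www/html path is an assumption, and the sudoers entry must list exactly the same command line:
<?php
// Call script_a.php through sudo using absolute paths. The matching sudoers entry
// would then be: apache ALL=(ALL) NOPASSWD: /usr/bin/php -f /var/www/html/script_a.php
system('sudo /usr/bin/php -f /var/www/html/script_a.php', $return);
var_dump($return);  // should now report script_a.php's exit code
?>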
