suPHP and Lazarus console application running into weird shell malfunctions - php

I do apologize for the title, but I couldn't find any other explanation. My company is running a development server with the latest LTS Ubuntu + Apache2 + suPHP. To manage it, I am writing a Zend 2 and Lazarus application. The web part with Zend runs well.
The problem is the console application written in Lazarus. It runs a couple of classes to create databases and users, to download frameworks and so on. It should also run a couple of shell commands for administration purposes (with root permissions). To acquire the rights, I am using a pretty ugly solution: echo mymagicpassword | sudo -S mymagiccommand.
Here's a snippet:
constructor TRootProcess.Create(AOwner: TComponent);
begin
  inherited Create(AOwner);
  Options:=[poUsePipes,poWaitOnExit];
  Executable:='/bin/sh';
  Parameters.Add('-c');
  Parameters.Add('echo %pwd% | sudo -S ');
end;

function TRootProcess.ExecuteCommand(command: String): String;
var
  str: TStringList;
begin
  str:=TStringList.Create;
  command:=Copy(Parameters.GetText, 0, Length(Parameters.GetText)-1)+command;
  command:=StringReplace(command,'%pwd%','mymagicpassword',[rfReplaceAll]);
  Parameters.SetText(PChar(command));
  Execute;
  str.Clear;
  str.LoadFromStream(Output);
  Result:=str.Text;
end;
If I run this application by hand, everything runs well. But if I run it from the PHP application using shell_exec, the whole application runs (it even writes the very last log entries), except for starting the other shell applications (ls, cp, mkdir, useradd, chmod and so on).
I actually have no idea what the problem is anymore.
I don't get any errors in stdout/stderr, the suPHP log or even the Apache2 log.
Also, running it from PHP went well for about a week and then apparently stopped working.
Thanks in advance

The problem is not really well described. At the very least, the line with Copy( is wrong, since strings start with index 1, not 0.
The LoadFromStream is also not safe. Especially with larger outputs this might not complete. See "TProcess large I/O" in the Lazarus/FPC wiki.
Finally, you spawn new shells. After the command is done, the shell will be destroyed, and the next command will have yet another new shell. So doing "cd" is pretty pointless that way.

Related

php freezes when executing an external sh script

I'll try to explain my problem as a timeline:
I've tried to run several external scripts from PHP and to return the exit code back to the browser with an AJAX call.
A single call should start or stop a service on that machine. That works fine on this development machine.
OS: Raspbian
Webserver: NginX 1.2.1
PHP: 5.4.3.6
However, I've exported the code to a larger machine with much more power, and everything seemed to work fine except for one thing:
A single call causes php-fpm to freeze and never come back. On closer examination I found out that the call creates a zombie process I cannot terminate (even with sudo).
OS: Ubuntu
Webserver: NginX 1.6.2
PHP: 5.5.9
The only solution seemed to be stopping the php-fpm process and restarting it. Then everything works fine again, until I call that script again.
Calling PHP line:
exec("sudo ".$script, $output, $return_var);
(All variables are normal strings with no special chars.)
Start script
#!/bin/sh
service radicale start 2>&1
The service, by the way, does start, but every time the web server freezes and I have to restart PHP manually, which is not acceptable (even for a web server). And it happens only for that single script, only for that service (radicale) and only with that one command (start).
Searching Google brought me to the point that there is a conflict between the PHP functions exec() and session_start().
Links:
https://bugs.php.net/bug.php?id=44942
https://bugs.php.net/bug.php?id=44994
Their conclusion was that the bug could be worked around with a construct like this:
...
session_write_close();
exec("sudo ".$script, $output, $return_var);
session_start();
...
But that, in my opinion, is no fix but a helpless workaround, because you lose the ability to let the user know that his action has fully succeeded, and instead let him believe an error has occurred. Even more confusing is the fact that it runs fine on the Raspberry Pi A, but not on a 64-bit machine with a much larger CPU and 8 GB RAM.
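For reference, a minimal sketch of that workaround (assuming the default file-based session handler; the script path is only a placeholder) that still reports the result back to the user by reopening the session and storing the exit code afterwards:
<?php
// Placeholder path to the start/stop script, used here for illustration only
$script = '/usr/local/bin/start-radicale.sh';

session_start();        // normal session handling for the AJAX request
session_write_close();  // release the session lock before blocking on exec()

exec('sudo ' . escapeshellcmd($script), $output, $return_var);

session_start();        // reopen the session so the user can still be informed
$_SESSION['last_exit_code'] = $return_var;

echo json_encode(array('exit_code' => $return_var, 'output' => $output));
?>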
So is there a real solution anywhere, or is this workaround the only way to solve the problem? I've read an article about PHP having some problems with exec/shell_exec and recognizing the return value. How can that be lost? Does someone have a guess?
Thanks for reading my long, awful English; I'm not a native speaker and wasn't a very attentive student in my lessons.
It is likely that the new machine simply is not set up the way the Raspberry Pi was set up.
You need to do a few things in your shell before this will work on your larger machine:
1) Allow PHP to use sudo.
sudo usermod -G sudo -a your-php-user
Note that to get the username for your-php-user, you can just run a script that says:
<?php echo get_current_user(); ?>
or alternatively:
<?php echo exec('whoami'); ?>
2) Allow that user to use sudo without a password.
sudo visudo - this command will open /etc/sudoers with a failsafe to keep you from botching anything.
Add this line to the very end:
your-php-user ALL=(ALL) NOPASSWD: /path/to/your/script,/path/to/other/script
You can put as many scripts there, separated by commas, as you need.
Now, your script should work just fine.
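For illustration, a minimal sketch of the PHP side once that sudoers entry is in place (the script path below is just a placeholder for /path/to/your/script):
<?php
// Placeholder path; it must match one of the entries in the sudoers line above
$script = '/path/to/your/script';

// With NOPASSWD configured, sudo will not prompt for a password and exec() will not hang
exec('sudo ' . escapeshellarg($script) . ' 2>&1', $output, $return_var);

if ($return_var !== 0) {
    echo "Script failed with exit code $return_var:\n" . implode("\n", $output);
} else {
    echo "Script ran successfully.";
}
?>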
AGAIN, please note that you need to change your-php-user to whatever your php user is.
Hope this helps!
This is not a real solution, but it's a better solution than none.
Calling a bash script with
<?php
...
exec("sudo ".$script, $output, $return_var);
...
?>
ends, only in this special case, in a zombie thread. As php-fpm waits for a result, it holds the line, never giving up or timing out while the rest of its thread stays alive. So every other request to the PHP server stays in the queue and will never be processed. That may be okay for some long-living or long-working threads, but my request was done in a few milliseconds.
I did not find the cause of this. As far as I could tell from debugging, it wasn't the triggered Radicale process's fault, as it always returned a clean 0. It seemed that the PHP process just couldn't get a return line from it, so it waits and waits.
With no time left, I changed the malfunctioning script from
#!/bin/sh
service radicale start 2>&1
to
#!/bin/sh
service radicale start > /dev/null 2>&1 &
... so sending every returned line to nirvana and detaching all subprocesses. For now the server has not hung itself up and works as desired. But the feeling that this may be a major bug in PHP still stays in the back of my head, with the hope that someday someone may defeat that bug.
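A minimal sketch of the PHP side of this approach (the script path is a placeholder): the detached script returns immediately, and the service state is polled with a separate call so the user still gets some feedback:
<?php
// Placeholder path to the modified start script shown above
$script = '/usr/local/bin/start-radicale.sh';

// The script itself redirects output to /dev/null and backgrounds the service,
// so this exec() returns immediately instead of holding the php-fpm worker
exec('sudo ' . escapeshellarg($script), $output, $return_var);

// Poll the service separately to tell the user whether it actually started;
// exit code 0 usually means the init script reports the service as running
exec('sudo service radicale status', $statusOutput, $statusCode);
echo $statusCode === 0 ? 'radicale is running' : 'radicale is not running (yet)';
?>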

(U)Mounting through "exec" with "sudo". The user is a "sudoer" with NOPASSWD

I've already taken a look at both of these:
PHP: mount USB device
Error on mount through php "exec"
But, my problem appears to be different.
I have built an extensive library that's used to call Linux CLI tools. It's built around proc_open, its family and POSIX.
I'm using it to successfully execute all CLI tools (until I hit this mount/umount bug).
Now, I'm building a RAID setup routine, that involves partprobe, parted - rm, mklabel, mkpart, mdadm - stop, zero-superblock, create, dd, mkfs and ultimately mount/umount.
There are actually two graceful routines, one for assembling the RAID, the other one for disassembly.
As the title says, the problem lies in mount and umount. The other tools and their commands listed above execute successfully.
Environment
Arch Linux - Linux stone 3.11.6-1-ARCH #1 SMP PREEMPT Fri Oct 18 23:22:36 CEST 2013 x86_64 GNU/Linux.
Arch is running with systemd; it might be that this is somehow affecting the mounting.
An Apache web server (latest), that runs mod_php (latest). Apache is run as http:http.
http is in wheel group, and wheels are sudoers - %wheel ALL=(ALL) NOPASSWD: ALL.
Please don't start a discussion about the web server being given full root capabilities. The unit is a NAS, it's running a custom WebOS, and it's meant for intranet only. Even if there are hacking attempts, those will most probably break the whole system, and that's not healthy for the customer. The NAS is storage for Mobotix IP cameras, it runs a load of dependent services, and the units are already deployed at over 30 sites with no issues. In short, the web server is not serving a website, but an OS.
Before writing this, for a quick test, I added http explicitly to sudoers (http ALL=(ALL) NOPASSWD: ALL); it didn't work.
Problem
The last command run in the RAID assembly process is mount /dev/md/stone\:supershare /mnt/supershare, which returns with an exit code of 0.
Performing a subsequent mount results in:
mount: /dev/md127 is already mounted or /mnt/supershare busy
/dev/md127 is already mounted on /mnt/supershare
with an exit code of 32. So, the array is mounted somewhere.
Performing an umount /dev/md/stone\:supershare after the above mount returns with an exit code of 0. Performing a subsequent umount results in:
umount: /dev/md/stone:supershare: not mounted
The commands above are auto-run with sudo.
So, it's mounted successfully and unmounted successfully, but... I'm logged in as root on TTY0, running lsblk after having performed the mount operation, yet I do not see the mountpoint:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 55.9G 0 disk
├─sda1 8:1 0 1M 0 part
├─sda2 8:2 0 1G 0 part [SWAP]
├─sda3 8:3 0 12G 0 part /
└─sda4 8:4 0 16.6G 0 part /home
sdb 8:16 0 931.5G 0 disk
└─sdb1 8:17 0 899M 0 part
└─md127 9:127 0 1.8G 0 raid0
sdc 8:32 0 931.5G 0 disk
└─sdc1 8:33 0 899M 0 part
└─md127 9:127 0 1.8G 0 raid0
Attempting the same mount command from TTY0 mounts it successfully (lsblk displays after).
If I mount it with my CLI tool, then run mount -l and lsblk also with the CLI tool, the mountpoint is visible.
Immediately running both commands from TTY0 as root does not display the mountpoint.
Rebooting to reset all mounts (they are not automounted), then mounting from TTY0 and running lsblk from TTY0 displays the mountpoint.
Then, running lsblk with CLI tool, displays the mountpoint.
Then, running umount with CLI tool, exit code 0 - unmounted.
Running lsblk with CLI tool again, does not display the mountpoint.
Running lsblk from TTY0, still does display the mountpoint.
It appears that when the mount/umount is run with my CLI tool, it executes the commands privately for the sudo session runner.
Unmounting after TTY0 has mounted does unmount it, but again, privately.
Logging in from TTY0 as http and running lsblk after having mounted the RAID with the CLI tool, the mountpoint is not displayed. This somewhat negates the "executes privately for the sudo session runner" theory.
I've also found this material in IBM's documentation:
The mount command uses the real user ID, not the effective user ID, to determine if the user has appropriate access. System group members can issue device mounts, provided they have write access to the mount point and those mounts specified in the /etc/filesystems file. Users with root user authority can issue any mount command.
I hope I've explained it well enough and not too confusingly; I also hope that you guys will be able to help me catch the issue here.
Update (2013-10-28)
I attempted a test with the CLI tool outside the web context: a simple PHP file that I executed as root and as a custom user.
In both scenarios, the mounting and unmounting were successful. So it must be something with Apache executing the commands, though I don't understand why the other commands work.
Question
What is causing the issue, and how do I overcome it?
In short, the hassle has been resolved.
It was Apache's corresponding systemd service, which had the PrivateTmp=true directive. Apparently, the directive executes the process in a new file system namespace.
This question, while I was attempting to debug and fix the issue, spawned numerous other posts around the internet:
https://unix.stackexchange.com/questions/97897/sudo-mount-from-webserver-apache-by-mod-php-result-not-visible-by-root
https://bbs.archlinux.org/viewtopic.php?id=172072
https://unix.stackexchange.com/questions/98182/a-process-run-as-root-when-performing-mount-is-mounting-for-self-how-to-ma/98191#98191
Each derived from stuff I learned in the process.
I started by getting deeper information about mount working on the EUID. Soon I found out that my simple sudo call was actually not executing with EUID 0. That led me to multiple queries on how to do so, which in turn spawned command syntax like sudo -i 'su' -c 'mount /dev/sdb1 /mnt/firstone' and other derivatives.
Having no success with the solution, I looked further.
I started thinking about adding the entry to /etc/fstab, which led me to loads of permission issues. Also, sudo and my CLI tool proved to be incomplete for the task. Let's bring in the big weapons: let's compile Apache with -DBIG_SECURITY_HOLE, also known as giving Apache the ability to run as root.
Let's append the entry to the tab, let's attempt to mount... and... fail!
After numerous tests, queries and whatnot, I stumbled upon per-process mounts, which led me here and opened up the dimension of namespaces to me.
Okay, that explains everything: checking /proc/<pid>/mounts validates it. Now, let's dig deeper and see how to overcome it.
Again, after numerous attempts and no success, I started posting questions based on my fresh knowledge of namespaces. Narrowing the questions down and becoming more technical (at least I think I did) eventually led to a user, hiciu, who pointed me in the systemd direction, specifically to Apache's service and PrivateTmp.
Voila! Apparently systemd can enforce new namespaces.
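A small sketch of how this can be verified from PHP (assuming /proc/1/mounts is readable by the web server user, which it normally is): if the mount table seen by the PHP process differs from the one seen by PID 1, the process is running in its own mount namespace:
<?php
// Compare the mount table of this PHP process with the one of PID 1 (init/systemd)
$selfMounts = file('/proc/self/mounts', FILE_IGNORE_NEW_LINES);
$initMounts = file('/proc/1/mounts', FILE_IGNORE_NEW_LINES);

$onlyHere = array_diff($selfMounts, $initMounts);
$onlyInit = array_diff($initMounts, $selfMounts);

if ($onlyHere || $onlyInit) {
    echo "This process runs in a private mount namespace (e.g. PrivateTmp=true).\n";
} else {
    echo "Mount table matches PID 1, no private namespace detected.\n";
}
?>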
I had the same strange behavior with Apache and spent more than 3 days without any working solution. Then, luckily, I found this post, and as you described, PrivateTmp caused the issue. In my case, I tried to mount a drive from PHP:
<?php
...
exec("sudo mount /dev/sda1 /mnt/drive", $output, $ret);
...
?>
When I ran the above code from the web browser, the exec function returned 0 (success) and I was even able to list the contents of the mounted drive within the code:
exec("ls /mnt/drive", $o, $r);
foreach ($o as $line){
echo $line.'<BR>';
}
But when I tried to look for the mounted drive from the CLI, I could not see it. I tried everything, including changing permissions, changing php.ini etc. Nothing helped. Until now: changing
PrivateTmp=false
in
/lib/systemd/system/apache2.service
does the trick. Thank you very much for sharing!
I was searching for this, and it looks like this behavior is implemented and detectable from PHP via chroot:
system('ischroot;echo $?');
gives 0 with the setting PrivateTmp=true (saying 'you are in a chroot') and 1 with PrivateTmp=false.
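A slightly more explicit variant of that check, capturing the exit code directly instead of echoing $? through the shell:
<?php
exec('ischroot', $out, $code);
// ischroot exits 0 when it thinks it is inside a chroot-like environment,
// 1 when not, and 2 when it cannot tell (see man ischroot)
echo $code === 0 ? 'chroot-like environment detected (e.g. PrivateTmp=true)' : 'no chroot detected';
?>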

Invoking "php" command from a PHP script causing strange process behavior

I just moved a site from one host to another. The server environment is very similar (LAMP stack) and all the code worked when it got transferred, except one line. I've mutated it a bit for testing and am still getting very odd results:
<?php
$out = `php ../test/test.php 123 abc`;
?>
When running php ../test/test.php 123 abc from the command line in SSH, it works fine, as expected. And when I run: php testrunner.php (the file which has only the "$out" line above in it) in SSH, it also works as expected.
But once I load testrunner.php from the browser, it just hangs. Using ps aux | grep php to monitor the processes, processes seem to spawn up and die down (truncated for brevity):
myuser 12790 0.0 0.3 259016 45284 . . . 0:00 php ../test/test.php 123 abc
If I modify the "$out" line to be:
<?php
$out = `php ../test/test.php 123 abc &`;
?>
then I cause that script to run in the background. Surprisingly, a few seconds later when I run ps aux | grep php again, it shows the same stuff but with a new PID. I keep running ps aux and keep seeing it with a different PID. This continues for quite some time (several seconds, maybe even a minute).
This is very odd to me, since test.php only has a line to echo some text for testing purposes.
Works fine from the terminal. Hangs and has other weird behavior when invoked from the web. Am I missing something?
(I have evidence by redirecting output to a log file that, when run from the web browser, the PHP script seems to invoke ITSELF instead of the other script, test.php. And when it behaves like this, it doesn't receive any $argv parameters... but when I run it from the command line, all is well! Strange?)
UPDATE: Geez... I was just watching the server processes and the PHP ones of test.php started spawning out of control. They multiplied into the hundreds, maybe thousands, of processes: the server was brought down for a minute, SSH and everything. It's back up now, but I can't explain what's going on. There are no loops in the code and both of the files involved are super-simple, isolated for testing purposes...
I'm working with my host as they respond to my support ticket, to see if this is environment-related or what... what could cause this to be happening, simply by changing the server environment?
My host, A Small Orange, has been helpful, but in the end, all I or they can figure is (from my support ticket):
... that SuPHP or some other security-based software we have running as part of our stack is preventing your processes from spawning new processes (because that behavior can be insecure for obvious reasons) ...
In any case, the scripts work fine on my Macbook (very different configuration with nginx) and on my old host's LAMP stack, to which ASO has a similar setup.
Perhaps I will ask about spawning long-running processes without invoking the command line so that the calling script isn't blocked in another question.
Remove the spaces and put underscore
$out = `php ../test/test.php_123_abc`;

Running shell_exec() in symfony

I have a program that returns a comma-separated string of numbers after doing some background processing. I intended to run this in Symfony using shell_exec; however, all I get is NULL (revealed through a var_dump()). I tried the following debugging steps.
I ran the file (it's a PHP class) through a command-line lime unit test in Symfony - it works and gives the correct result there.
Just to check, I tried a simple command ls -l at the same place to see whether I would get anything. Again, I had the same problem - the var_dump in the browser showed NULL, but it worked through the command line.
What could be the problem? Are there restrictions on running shell_exec() in a browser?
EDIT: Just to clarify, shell_exec() commands work when I run them as standalone PHP scripts on the web server (for example, by putting them in my document root). They don't seem to be working under the Symfony framework, for some reason.
I finally solved it, and it turned out to be something quite simple, and quite unrelated.
The shell command I was running was in this format: face_query -D args. I didn't realize that Apache would be executing PHP as user www-data and thus the program face_query wouldn't be in the PATH (the directory is actually ~/bin). Changing the program name to the full path of the program solved it.
I also gather from this that only commands which www-data has permission to execute can be run. In this case, www-data is in the same group as my user, but it might be a problem otherwise.
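To illustrate the fix (the path below merely stands in for wherever face_query actually lives), calling the tool by absolute path and capturing stderr makes a failure visible instead of a silent NULL:
<?php
// Hypothetical absolute path; ~/bin is not on www-data's PATH, so spell it out
$cmd = '/home/myuser/bin/face_query -D args 2>&1';

$result = shell_exec($cmd);
var_dump($result);  // now shows the program output (or its error text) instead of NULL
?>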
Have you tried using exec? Or one of the other variants? I am never sure which one to use and always go with exec.
http://uk.php.net/manual/en/function.exec.php
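A minimal example of the exec() variant; unlike shell_exec(), it hands back the exit code, which makes a failing command easier to diagnose:
<?php
// Redirect stderr so error messages end up in $output as well
exec('ls -l 2>&1', $output, $return_var);

echo "exit code: $return_var\n";
echo implode("\n", $output);
?>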
Is your web server running php in safe mode?
Note: This function is disabled when PHP is running in safe mode.
From: http://php.net/manual/en/function.shell-exec.php
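If safe mode or disable_functions is the culprit, it can be checked from within the app with something like this (a rough sketch; safe_mode only exists on PHP 5.3 and earlier):
<?php
var_dump(ini_get('safe_mode'));          // non-empty means safe mode is on (PHP <= 5.3)
var_dump(ini_get('disable_functions'));  // shell_exec/exec may be listed here instead
var_dump(function_exists('shell_exec')); // false if shell_exec has been disabled
?>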

Unexpected behavior when calling a Ruby script via PHP's shell_exec()

I have a Ruby script that's being used to do some API calls/screen scraping, but our main app is in PHP. Our PHP app is using shell_exec() to call the Ruby script.
The Ruby script works great when called from the command line, but it randomly exits early when called via PHP's shell_exec.
Here's an example of the Ruby script:
#!/usr/bin/env ruby
require 'rubygems'
require 'mysql'
require 'net/http'
require 'open-uri'
require 'uri'
require 'cgi'
require 'fileutils'
# Bunch of code here ... works fine
somePath = 'http://foo.com/bar.php'
# Seems to always exit when I do a Net::HTTP or open-uri call
post = Net::HTTP.post_form(URI.parse(somePath),{'id'=>ID,'q'=>'some query'})
data = post.body
# OR
data = open(somePath).read
# More code here ...
So, all I can deduce so far is that it's always exiting when I try to grab/read an external URL via net/http or open-uri calls. The pages I'm grabbing can accept POST or GET requests, but it seems to be exiting either way.
I'm outputting the results with PHP after the shell_exec call, but there are no error messages or exits. I do have messages being output by my Ruby script with "puts ...." here and there. Could that be a problem (I'm thinking 'no' because it doesn't exit with earlier puts messages)?
Again, it works fine when called from the shell. It's almost like the shell_exec call isn't waiting for the net/http call to finish.
Any ideas?
I'm not sure about this, but given your explanation, which sounds plausible, have you looked at proc_open at all:
http://us3.php.net/proc_open
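Not from the original answer, just a minimal proc_open() sketch (the Ruby script path is a placeholder) that waits for the script and captures stdout, stderr and the exit code separately:
<?php
$descriptors = array(
    0 => array('pipe', 'r'),  // stdin
    1 => array('pipe', 'w'),  // stdout
    2 => array('pipe', 'w'),  // stderr
);

// Placeholder path to the Ruby script
$process = proc_open('/usr/bin/env ruby /path/to/scraper.rb', $descriptors, $pipes);

if (is_resource($process)) {
    fclose($pipes[0]);
    $stdout = stream_get_contents($pipes[1]);
    $stderr = stream_get_contents($pipes[2]);
    fclose($pipes[1]);
    fclose($pipes[2]);

    $exitCode = proc_close($process);  // waits for the script to finish
    echo "exit: $exitCode\nstderr: $stderr\nstdout: $stdout";
}
?>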
Ruby's open-uri requires tempfile, so I'm guessing there's a file ownership conflict between you running your ruby script and the web server running it. Can the web server create a temp file using tempfile?
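One rough way to test that guess from the PHP side (Ruby's Tempfile follows the same TMPDIR convention, so this is only an approximation of what the Ruby script will do):
<?php
$tmp = sys_get_temp_dir();
echo "tmp dir: $tmp\n";
echo "writable by web server user: " . (is_writable($tmp) ? 'yes' : 'no') . "\n";

// Try actually creating and removing a temp file, roughly the way Tempfile would
$file = tempnam($tmp, 'probe_');
var_dump($file, $file !== false ? unlink($file) : false);
?>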
Just an FYI, I never really uncovered why this was happening. The best I could deduce was that some type of permission issue was preventing Ruby's open-uri commands from working properly.
I opted for queuing these jobs in a db table and running my ruby script via cron periodically. Everything seems to work fine when the ruby script runs with root/sudo perms.
Run in a Linux terminal:
sudo -H -u <user> bash -c '<your code>', where <user> is Apache's user.
To find Apache's user you can put echo shell_exec("whoami"); inside your code and run it in the browser. whoami works on Linux and Windows, but if you're on Windows, the Apache default user is your own user. You can test it anyway in case it's different, but I can't tell how to run the code on Windows as if Apache were running it.
After that you can get a clue about what's happening. In most cases the problem is that Apache's root folder is different from the operating system's folder. So when you run a command with an absolute path, the OS considers / while Apache considers /var/www/html on Linux, /opt/lampp/htdocs on XAMPP (Linux) and C:/xampp/htdocs on XAMPP (Windows). You get the idea, I think.
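A small sketch that dumps the relevant parts of that environment directly from PHP, showing which user, working directory and PATH your shell_exec() calls actually get:
<?php
echo 'user: ' . trim(shell_exec('whoami')) . "\n";
echo 'cwd: '  . getcwd() . "\n";
echo 'PATH: ' . getenv('PATH') . "\n";
echo 'HOME: ' . (getenv('HOME') ?: '(not set)') . "\n";
?>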
