Capture exec() error output of background process - php

I'm using the following command to open a temporary ssh tunnel for making a mysql connection:
exec('ssh -f -L 3400:127.0.0.1:3306 user@example.com sleep 1 > /dev/null');
$connection = @new \mysqli('127.0.0.1', $username, $password, $database, 3400);
This works splendidly. However, once in a while there may be another process using that port in which case it fails.
bind [127.0.0.1]:3400: Address already in use
channel_setup_fwd_listener_tcpip: cannot listen to port: 3400
Could not request local forwarding.
What I'd like to do is capture the error output of exec() so that I can retry using a different port. If I add 2>&1 to my command the error output just goes nowhere since stdout is already being piped to /dev/null.
One solution I've come up with is to pipe output to a file instead of /dev/null:
exec('ssh -f -L 3400:127.0.0.1:3306 user@example.com sleep 1 >temp.log 2>&1');
$output = file_get_contents('temp.log');
This works, but it feels messy. I'd prefer not to use the filesystem just to get the error response. Is there a way to capture the error output of this command without piping it to a file?
UPDATE: For the sake of clarity:
(a) Capturing the result code via exec()'s result-code argument does not work in this case. Don't ask me why, but it always returns 0 (success).
(b) stdout must be redirected somewhere or php will not treat it as a background process and script execution will stop until it completes. (https://www.php.net/manual/en/function.exec.php#refsect1-function.exec-notes)
If a program is started with this function, in order for it to continue running in the background, the output of the program must be redirected to a file or another output stream. Failing to do so will cause PHP to hang until the execution of the program ends.
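To illustrate point (b) with the command from above:
// Blocks until the whole command exits (roughly the one second
// that `sleep 1` keeps ssh alive):
exec('ssh -f -L 3400:127.0.0.1:3306 user@example.com sleep 1');
// Returns immediately; ssh keeps running in the background:
exec('ssh -f -L 3400:127.0.0.1:3306 user@example.com sleep 1 > /dev/null');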

As far as I can tell, exec() is not the right tool here. For a more controlled approach, you may use proc_open(). That might look something like this:
$process = proc_open(
    'ssh -f -L 3400:127.0.0.1:3306 user@example.com sleep 1',
    [/* stdin */ 0 => ['pipe', 'r'], /* stdout */ 1 => ['pipe', 'w'], /* stderr */ 2 => ['pipe', 'w']],
    $pipes
);
// Set the streams to non-blocking.
// This is required since any unread output on the pipes may result in the
// process still being marked as running.
// Note that this does not work on Windows due to restrictions in the
// Windows API (https://bugs.php.net/bug.php?id=47918)
stream_set_blocking($pipes[1], false);
stream_set_blocking($pipes[2], false);
// Wait a little - in practice you would loop here and check regularly.
// Also note that you may need to read from stdout and stderr from time to
// time to allow the process to finish.
sleep(2);
// The process should now be running as a background task.
// You can check whether it has finished like this:
$status = proc_get_status($process);
if (!$status["running"] || $status["signaled"] || $status["stopped"]) {
    // Process should have stopped - read the output
    $stdout = stream_get_contents($pipes[1]) ?: "";
    $stderr = stream_get_contents($pipes[2]) ?: "";
    // Close everything
    @fclose($pipes[1]);
    @fclose($pipes[2]);
    proc_close($process);
}
You can find more details in the PHP manual entry for proc_open.
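Tying this back to the original problem, here is a minimal sketch built on the code above (the helper name openTunnel() is hypothetical, and the 'Address already in use' string is taken from the question's error output) that retries on the next port whenever the bind fails:
function openTunnel(int $startPort, int $maxTries = 5): ?int
{
    for ($port = $startPort; $port < $startPort + $maxTries; $port++) {
        // Same proc_open() call as above, with the port parameterized
        $process = proc_open(
            "ssh -f -L {$port}:127.0.0.1:3306 user@example.com sleep 1",
            [0 => ['pipe', 'r'], 1 => ['pipe', 'w'], 2 => ['pipe', 'w']],
            $pipes
        );
        stream_set_blocking($pipes[1], false);
        stream_set_blocking($pipes[2], false);
        sleep(2); // crude wait, as above
        $stderr = stream_get_contents($pipes[2]) ?: '';
        array_map('fclose', $pipes);
        proc_close($process);
        if (strpos($stderr, 'Address already in use') === false) {
            return $port; // no bind error seen - the tunnel should be up
        }
    }
    return null; // every port in the range was taken
}
$port = openTunnel(3400);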

If I add 2>&1 to my command the error output just goes nowhere since stdout is already being piped to /dev/null.
You can redirect stdout to /dev/null and stderr to stdout. That seems to me the simpler way of doing what you want (minimal modification).
So instead of
>temp.log 2>&1
do:
2>&1 1>/dev/null
Note that the order of the redirects is important: 2>&1 first duplicates stderr onto stdout's current destination (the stream exec() reads), and only then is stdout itself re-pointed at /dev/null.
Test
First we exec without redirection, then we redirect as above to capture stderr.
<?php
$me = $argv[0];
$out = exec("ls -la no-such-file {$me}");
print("The output is '{$out}'\n");
print("\n----\n");
$out = exec("ls -la no-such-file {$me} 2>&1 1>/dev/null");
print("The output is '{$out}'\n");
print("\n");
$ php -q tmp.php
ls: cannot access 'no-such-file': No such file or directory
The output is '-rw-r--r-- 1 lserni users 265 Oct 25 22:48 tmp.php'
----
The output is 'ls: cannot access 'no-such-file': No such file or directory'
Update
This requirement was not clear initially: the process must detach (go into the background). Now, any redirection of the original stream via exec() will prevent the process from detaching, because at the time the detachment would happen, the process has not completed and its output has not been delivered.
That is also why exec() reports a zero error code - there was no error in spawning. If you want the final result, someone must wait for the process to finish. So you have to redirect locally (that way it is the local file that does the waiting), then reconnect with whatever it was that waited for the process to finish, and read the results.
For what you want, exec will never work. You ought to use the proc_* functions.
You might, however, force a detach even so using nohup (you have no control over the spawned PID, so this is less than optimal):
if (file_exists('nohup.out')) { unlink('nohup.out'); }
$out = shell_exec('nohup ssh ... 2>&1 1>/dev/null &');
// ...still have to wait for the connection to be established...
// ...read nohup.out to verify...
// ...
// ...do your thing...
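To make that pseudo-code concrete, here is a sketch reusing the ssh command and error string from the question; stderr is pointed at nohup.out explicitly rather than relying on nohup's default (which only kicks in when stdout is a terminal):
if (file_exists('nohup.out')) {
    unlink('nohup.out');
}
// stdout is discarded, stderr lands in nohup.out, the shell detaches
shell_exec('nohup ssh -f -L 3400:127.0.0.1:3306 user@example.com sleep 1 >/dev/null 2>nohup.out &');
sleep(1); // give ssh a moment to attempt the bind
$errors = is_readable('nohup.out') ? file_get_contents('nohup.out') : '';
if (strpos($errors, 'Address already in use') !== false) {
    // the port was taken - pick another one and retry
}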
As I said, this is less than optimal. Using proc_*, while undoubtedly more complicated, would allow you to start the ssh connection in tunnel mode without a terminal, and terminate it as soon as you don't need it anymore.
Actually, however, no offense intended, this is an "X-Y problem": what you want to do is open an SSH tunnel for MySQL, so I'd look into doing just that.

Related

How to make a non-blocking php exec call?

I need to echo text to a named pipe (FIFO) in Linux. Even though I'm running in the background with '&' and redirecting all output to /dev/null, the shell_exec call always blocks.
There are tons of answers to pretty much exactly this question all over the internet, and they all basically point to the following php manual section:
If a program is started with this function, in order for it to continue running in the background, the output of the program must be redirected to a file or another output stream. Failing to do so will cause PHP to hang until the execution of the program ends.
And sure enough, when I try the non-blocking approach (backgrounding and redirecting to /dev/null) with other commands like sleep, php successfully executes without hanging. But when echo-ing to the FIFO, php hangs, even though running the same command in bash produces no visible output and immediately returns to the shell.
In bash, I can run:
bash$ { echo yay > fifo & } &> /dev/null
bash$ cat fifo
yay
[1]+ Done echo yay > fifo
but when running the following php file with php echo.php:
<?php
shell_exec("{ echo yay > fifo & } &> /dev/null");
?>
it hangs, unless I first open fifo for reading.
So my question is: why does this block when sleep doesn't? In addition, I want to know what is happening behind the scenes. When I put the '&' in the php call, even though the shell_exec call blocks, the echo call clearly doesn't block whatever bash session php invoked it on: when I Ctrl+C out of php, I can read 'yay' from the FIFO (if I don't background the echo command, the FIFO contains no text after Ctrl+C). This suggests that perhaps php is waiting on the pid of the echo command before going to the next instruction. Is this true?
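For what it's worth, the blocking appears tied to the FIFO itself: opening a FIFO for writing blocks until a reader opens the other end, a dependency that sleep doesn't have. A minimal demonstration, assuming fifo was created beforehand with mkfifo:
// fopen() blocks here until some other process opens fifo for
// reading, e.g. `cat fifo` in a shell.
$start = microtime(true);
$fh = fopen('fifo', 'w');
printf("writer unblocked after %.1f s\n", microtime(true) - $start);
fwrite($fh, "yay\n");
fclose($fh);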
I've been trying something similar and in the end came up with this solution:
/**
 * Runs a shell command on the server as the current PHP user (in CLI mode,
 * that is the user you are logged in with).
 * If the command is run in the background, the method returns the location
 * of the temp file that captures the output; in that case you have to
 * remove the temporary file manually.
 */
static public function command($cmd, $show_output = true, $escape_command = false, $run_in_background = false)
{
    if ($escape_command) {
        $cmd = escapeshellcmd($cmd);
    }
    $f = trim(`mktemp`);
    passthru($cmd . ($show_output ? " | tee $f" : " > $f") . ($run_in_background ? ' &' : ''));
    return $run_in_background ? $f : trim(`cat $f ; rm -rf $f`);
}
The trick is to write the output to a temporary file and return its contents when the command has finished (blocking behavior), or just return the file path (non-blocking behavior). Also, I'm using passthru rather than shell_exec because interactive sessions are not possible with the latter due to its blocking behavior.
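Usage might look like this (the enclosing class name Shell is hypothetical):
// Blocking call: returns the command's output once it has finished
$listing = Shell::command('ls -la');
// Background call: returns the temp file path immediately; the caller
// reads and removes the file later
$logFile = Shell::command('echo yay > fifo', false, false, true);
// ...once the FIFO has a reader and the command has completed...
$output = file_get_contents($logFile);
unlink($logFile);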

Know if a specific line is output with exec

I'm writing a small PHP script running on a Linux nginx server and I need to execute a jar file. I managed to do this with the function exec() like this
exec("java -Xmx1G -jar /path/otp-1.3.0-SNAPSHOT-shaded.jar --build /path/graphs/3r-REF --inMemory --port 22222 --securePort 22223 > /dev/null &");
Since that program takes quite some time to load, I would like to be able to notify the user when it is loaded so they can start using it (the program is OpenTripPlanner and it has a user interface accessible from a browser).
This particular program outputs a whole lot of info about the process and all, but when the program is done loading, it outputs a specific line, which looks like this
14:31:52.863 INFO (GrizzlyServer.java:130) Grizzly server running.
Since that line means that the program is ready to use, I figured I could check the output and when I read a line that contains "Grizzly server running" I could notify the user.
The thing is that I don't know how I could do that. I know exec() outputs the last line of the process, but that "Grizzly server running" line isn't the last one since the process doesn't stop after it is outputted (it only stops if we manually kill it). I also know that shell_exec() returns the whole output, but again, the whole output isn't there since the process isn't done yet.
Do you guys have any idea on how to do that or an alternative I could use?
Thank you
EDIT
Based on AbraCadaver's answer, here's how I did
$cmd = "java -Xmx1G -jar /path/otp-1.3.0-SNAPSHOT-shaded.jar --build /path/graphs/3r-REF --inMemory --port 22222 --securePort 22223"
exec($cmd . " > .grizzly &");
$ready = false;
while (!$ready) {
if (strpos(file_get_contents('.grizzly'), 'Grizzly server running') !== false) {
$ready = true;
} else {
sleep(5);
}
}
An issue I had was that (I think) strpos was asked to scan the output too often, which is why I added the 5-second sleep (the whole process takes about a minute, so I thought this was a fair wait). Now the output is only checked every 5 seconds and I get the expected result.
Thanks a lot!
There may be a better way, but this should work. Redirect to .grizzly and then continuously check the file for 'Grizzly server running.':
echo "Please wait...";
exec("java -Xmx1G -jar /path/otp-1.3.0-SNAPSHOT-shaded.jar --build /path/graphs/3r-REF --inMemory --port 22222 --securePort 22223 > .grizzly &");
while(strpos(file_get_contents('.grizzly'), 'Grizzly server running.') === false){}
echo "Grizzly server running.";
popen() in 'r' mode allows you to read the stdout of the process you run, in chunks of the size you pass to fread(), or until EOF occurs.
$hndl = popen("java -Xmx1G -jar /path/otp-1.3.0-SNAPSHOT-shaded.jar --build /path/graphs/3r-REF --inMemory --port 22222 --securePort 22223 > /dev/null &", 'r');
while ( $data = fread($hndl, 4096) ) {
echo $data;
}
pclose($hndl);
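Building on that, here is a sketch that stops reading as soon as the marker line appears. This assumes the 'Grizzly server running' line reaches stdout (the 2>&1 covers it if the logger writes to stderr); note that pclose() would still wait for the child, so for a true detach you would fall back to the file-based approach above:
$hndl = popen("java -Xmx1G -jar /path/otp-1.3.0-SNAPSHOT-shaded.jar --build /path/graphs/3r-REF --inMemory --port 22222 --securePort 22223 2>&1", 'r');
$ready = false;
while (($line = fgets($hndl)) !== false) {
    if (strpos($line, 'Grizzly server running') !== false) {
        $ready = true; // server is up - notify the user
        break;
    }
}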

Starting a daemon from PHP

For a website, I need to be able to start and stop a daemon process. What I am currently doing is
exec("sudo /etc/init.d/daemonToStart start");
The daemon process is started, but Apache/PHP hangs. Doing a ps aux revealed that sudo itself had turned into a zombie process, effectively killing all further progress. Is this normal behavior when trying to start a daemon from PHP?
And yes, Apache has the right to execute the /etc/init.d/daemonToStart command. I altered the /etc/sudoers file to allow it to do so. No, I have not allowed Apache to execute any kind of command, just the limited few needed for the website to work.
Anyway, going back to my question: is there a way to allow PHP to start daemons such that no zombie process is created? I ask because the reverse - stopping an already started daemon - works just fine.
Try appending > /dev/null 2>&1 & to the command.
So this:
exec("sudo /etc/init.d/daemonToStart > /dev/null 2>&1 &");
Just in case you want to know what it does/why:
> /dev/null - redirect STDOUT to /dev/null (blackhole it, in other words)
2>&1 - redirect STDERR to STDOUT (blackhole it as well)
& detach process and run in the background
I had the same problem.
I agree with DaveRandom: you have to suppress all output (stdout and stderr). But there is no need to launch in another process with a trailing '&': then exec() can't check the return code any more and returns OK even when there is an error...
And I prefer to store the output in a temporary file, instead of 'blackholing' it.
Working solution:
$temp = tempnam(sys_get_temp_dir(), 'php');
exec('sudo /etc/init.d/daemonToStart >'.$temp.' 2>&1');
Just read the file contents afterwards, and delete the temporary file:
$output = explode("\n", file_get_contents($temp));
@unlink($temp);
I have never tried starting a daemon from PHP, but I have tried running other shell commands, with much trouble. Here are a few things I have tried, in the past:
As per DaveRandom's answer, append > /dev/null 2>&1 & to the end of your command. You can also point the redirect at a file instead of /dev/null and use that output to debug.
Make sure your webserver's user's PATH contains all binaries referenced inside your daemon script. You can check it by calling exec('echo $PATH; whoami;'). This will tell you the user PHP is running under and its current PATH variable.
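For instance, a small sketch of capturing that information:
// exec() fills $output with one array element per line
exec('echo $PATH; whoami', $output);
print_r($output); // e.g. ["/usr/local/sbin:/usr/bin:/bin", "www-data"]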

Does PHP proc_open block the Web request?

By default, on Linux, does creating a process via proc_open() make the PHP script not terminate until the spawned process terminates? I don't want it to, and I close the process handle right away.
proc_open itself does not block, that's clear enough. But what about the overall HTTP request execution?
I had some time on the weekend, so I did a little research on proc_open() on *nix systems.
While proc_open() doesn't block the PHP script's execution even if the shell script is not run in the background, PHP automatically invokes proc_close() after the script has completely executed if you don't invoke it yourself. So we can imagine that we always have a proc_close() line at the end of the script.
The problem lies with the unobvious, but logical, proc_close() behaviour. Let's imagine we have a script like:
$proc = proc_open('top -b -n 10000',
    array(
        array('pipe', 'r'),
        array('pipe', 'w')),
    $pipes);
// Process some of the data output by our script, but not all of it
echo fread($pipes[1], 100);
// Don't wait till script execution has ended - exit.
// Close the pipes
array_map('fclose', $pipes);
// Close the process
proc_close($proc);
Strange: proc_close() should wait until the shell script has ended, but our script terminated quickly. This happens because we closed the pipes (it seems PHP does this silently if we forget), so as soon as the script tries to write something to the now non-existent pipe, it gets an error and terminates.
Now, let's try without pipes (well, there will be some, but they will use the current tty without any link to PHP):
$proc = proc_open("top -b -n 10000", array(), $pipes);
proc_close($proc);
Now our PHP script is waiting for our shell script to end. Can we avoid that? Luckily, PHP spawns shell scripts with
sh -c 'shell_script'
so we can just kill our sh process and leave the script running:
$proc = proc_open("top -b -n 10000", array(), $pipes);
$proc_status=proc_get_status($proc);
exec('kill -9 '.$proc_status['pid']);
proc_close($proc);
Of course, we could just have run the process in the background, like:
$proc = proc_open("top -b -n 10000 &", array(), $pipes);
proc_close($proc);
and not have any problems, but this feature leads us to the most complex question: can we run a process with proc_open(), read some output, and then force the process into the background? Well, in a way - yes.
The main problem here is the pipes: we can't close them or our process will die, but we need them to read some useful data from that process. It turns out we can use a magic trick here - gdb.
First, create a file somewhere (/usr/share/gdb_null_descr in my example) with the following contents:
p dup2(open("/dev/null",0),1)
p dup2(open("/dev/null",0),2)
It tells gdb to change descriptors 1 and 2 (well, they are usually stdout and stderr) to new file handles (/dev/null in this example, but you can change it; note the second argument of open() is 1, i.e. O_WRONLY, so the descriptors are writable).
Now, the last thing: make sure gdb can attach to other running processes - this is the default on some systems, but on Ubuntu 10.10, for example, you have to set /proc/sys/kernel/yama/ptrace_scope to 0 if you don't run it as root.
Enjoy:
$proc = proc_open('top -b -n 10000',
    array(
        array('pipe', 'r'),
        array('pipe', 'w'),
        array('pipe', 'w')),
    $pipes);
// Process some of the data output by our script, but not all of it
echo fread($pipes[1], 100);
$proc_status = proc_get_status($proc);
// Find the real pid of our process (we need to go down one step in the process tree)
$pid = trim(exec('ps h -o pid --ppid ' . $proc_status['pid']));
// Kill the parent sh process
exec('kill -s 9 ' . $proc_status['pid']);
// Change the stdout/stderr handles in our process
exec('gdb -p ' . $pid . ' --batch -x /usr/share/gdb_null_descr');
array_map('fclose', $pipes);
proc_close($proc);
edit: I forgot to mention that PHP doesn't run your shell script instantly, so you have to wait a bit before executing the other shell commands. Usually it is fast enough (or PHP is slow enough), but I was too lazy to add those checks to my examples.
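One way to add such a check (a sketch, reusing $proc from the examples above) is to poll until the child of the sh wrapper actually exists before killing the wrapper:
$proc_status = proc_get_status($proc);
// proc_get_status() reports a pid as soon as proc_open() returns,
// but the command spawned by `sh -c` may not exist yet - poll for it
do {
    usleep(100000); // 100 ms
    $pid = trim(exec('ps h -o pid --ppid ' . $proc_status['pid']));
} while ($pid === '');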
I ran into a similar problem and wrote a small script to handle it:
https://github.com/peeter-tomberg/php-shell-executer
What I did was background the process while still having access to its result (both stderr and stdout).

How can I capture the PID and output of a backgrounded PHP process at the same time?

I have one PHP script that has to start a second script in the background using exec. The parent script needs to capture the PID of this new process, but I also need to capture any potential output of the child script so it can be logged in the database. I can't just have the child script log to the database because I also need to capture fatal errors or anything that might indicate a problem with the script. The PID is used because the parent process needs to be able to check on the child and see when it finished. The child processes are sometimes run as crons, so forking isn't an option here either. I don't want two execution paths to debug if there are problems.
This was my first solution. It captures the output, but fails to get the correct PID.
// RedirectToLog.php just reads stdin and logs it to the databse.
$cmd="php child.php </dev/null 2>&1 | php RedirectToLog.php >/dev/null 2>&1 & echo $!";
The problem here is that $! is the PID of the last process started in the background, which ends up being RedirectToLog.php instead of child.php.
My other idea was to attempt to use a FIFO file (pipe).
$cmd1="php RedirectToLog.php </tmp/myFIFO >/dev/null 2>&1 &"
$cmd2="php child.php </dev/null >/tmp/myFIFO 2>&1 & echo $!"
This one didn't work because I couldn't get RedirectToLog.php to reliably consume the FIFO, and when it did, sometimes child.php failed to write EOF to the pipe, which left both ends waiting on the other; both processes would hang until one was killed.
Use proc_open to get full fd connectivity when starting a process. Take the resource returned by proc_open and pass it to proc_get_status to get the pid.
$descriptors = array( ... ); // I suggest you read php.net on this
$pipes = array();
$res = proc_open($cmd,$descriptors,$pipes);
$info = proc_get_status($res);
echo $info['pid'];
I haven't fully grasped all your problems, but this should get you started.
I think you need to use proc_open and proc_get_status.
I'm guessing the only reason you really want to capture the child's PID is to monitor it. If so, it might be easier to use proc_open. That way you can open file handles for stdin, stdout and stderr and monitor everything from the parent.
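A minimal sketch of that approach (logToDatabase() is a hypothetical stand-in for whatever writes to your database):
$descriptors = array(
    0 => array('file', '/dev/null', 'r'), // child gets no stdin
    1 => array('pipe', 'w'),              // capture stdout
    2 => array('pipe', 'w'),              // capture stderr
);
$proc = proc_open('php child.php', $descriptors, $pipes);
$pid = proc_get_status($proc)['pid'];
// Blocks until the child closes its streams, i.e. until it has finished
$stdout = stream_get_contents($pipes[1]);
$stderr = stream_get_contents($pipes[2]);
fclose($pipes[1]);
fclose($pipes[2]);
$exitCode = proc_close($proc);
logToDatabase($pid, $stdout, $stderr, $exitCode); // hypothetical helper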
