I'm writing a small PHP script running on a Linux nginx server, and I need to execute a jar file. I managed to do this with the exec() function, like this:
exec("java -Xmx1G -jar /path/otp-1.3.0-SNAPSHOT-shaded.jar --build /path/graphs/3r-REF --inMemory --port 22222 --securePort 22223 > /dev/null &");
Since that program takes quite some time to load, I would like to be able to notify the user when it is loaded so they can start using it (the program is OpenTripPlanner and it has a user interface accessible from a browser).
This particular program outputs a whole lot of info about the process, but when it is done loading it outputs a specific line, which looks like this:
14:31:52.863 INFO (GrizzlyServer.java:130) Grizzly server running.
Since that line means that the program is ready to use, I figured I could check the output and when I read a line that contains "Grizzly server running" I could notify the user.
The thing is that I don't know how I could do that. I know exec() outputs the last line of the process, but that "Grizzly server running" line isn't the last one since the process doesn't stop after it is outputted (it only stops if we manually kill it). I also know that shell_exec() returns the whole output, but again, the whole output isn't there since the process isn't done yet.
Do you guys have any idea on how to do that or an alternative I could use?
Thank you
EDIT
Based on AbraCadaver's answer, here's how I did it:
$cmd = "java -Xmx1G -jar /path/otp-1.3.0-SNAPSHOT-shaded.jar --build /path/graphs/3r-REF --inMemory --port 22222 --securePort 22223";
exec($cmd . " > .grizzly &");
$ready = false;
while (!$ready) {
    if (strpos(file_get_contents('.grizzly'), 'Grizzly server running') !== false) {
        $ready = true;
    } else {
        sleep(5);
    }
}
An issue I had was that (I think) strpos took too long and was asked to scan the output too often, which is why I added the 5-second sleep (the whole process takes about a minute, so I thought that was a fair interval). Now the output is only checked every 5 seconds and I get the expected result.
Thanks a lot!
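The same polling idea can be sketched in shell, with a timeout added so a failed launch cannot leave the page waiting forever. This is only a sketch: the mktemp log and the simulated 2-second delay are stand-ins for the real OTP process, and the 30-second budget is an assumption.

```shell
log=$(mktemp)
# stand-in for the real exec(): after 2 seconds the "ready" line appears in the log
( sleep 2; echo "14:31:52.863 INFO (GrizzlyServer.java:130) Grizzly server running." >> "$log" ) &
deadline=30
status=timeout
while [ "$deadline" -gt 0 ]; do
    if grep -q "Grizzly server running" "$log"; then
        status=ready
        break
    fi
    sleep 1
    deadline=$((deadline - 1))
done
echo "$status"   # prints: ready
wait
rm -f "$log"
```

With a timeout in place, the notifying script can report "server failed to start" instead of looping forever if the jar crashes during graph loading.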
There may be a better way, but this should work. Redirect to .grizzly and then continuously check the file for "Grizzly server running":
echo "Please wait...";
exec("java -Xmx1G -jar /path/otp-1.3.0-SNAPSHOT-shaded.jar --build /path/graphs/3r-REF --inMemory --port 22222 --securePort 22223 > .grizzly &");
while (strpos(file_get_contents('.grizzly'), 'Grizzly server running.') === false) {
    sleep(1); // poll instead of busy-waiting on the file
}
echo "Grizzly server running.";
popen() in 'r' mode allows you to read the stdout of the process you run, in chunks of whatever size you pass to fread(), until EOF occurs.
$hndl = popen("java -Xmx1G -jar /path/otp-1.3.0-SNAPSHOT-shaded.jar --build /path/graphs/3r-REF --inMemory --port 22222 --securePort 22223 2>&1", 'r');
while ($data = fread($hndl, 4096)) {
    echo $data;
}
pclose($hndl);
(Note that the command must not redirect its output to /dev/null here, or there is nothing for fread() to read.)
Related
I'm using the following command to open a temporary ssh tunnel for making a mysql connection:
exec('ssh -f -L 3400:127.0.0.1:3306 user@example.com sleep 1 > /dev/null');
$connection = @new \mysqli('127.0.0.1', $username, $password, $database, 3400);
This works splendidly. However, once in a while there may be another process using that port in which case it fails.
bind [127.0.0.1]:3400: Address already in use
channel_setup_fwd_listener_tcpip: cannot listen to port: 3401
Could not request local forwarding.
What I'd like to do is capture the error output of exec() so that I can retry using a different port. If I add 2>&1 to my command the error output just goes nowhere since stdout is already being piped to /dev/null.
One solution I've come up with is to pipe output to a file instead of /dev/null:
exec('ssh -f -L 3400:127.0.0.1:3306 user#example.com sleep 1 >temp.log 2>&1');
$output = file_get_contents('temp.log');
This works, but it feels messy. I'd prefer not to use the filesystem just to get the error response. Is there a way to capture the error output of this command without piping it to a file?
UPDATE: For the sake of clarity:
(a) Capturing the result code using exec()'s third argument does not work in this case. Don't ask me why, but it will always return 0 (success).
(b) stdout must be redirected somewhere or php will not treat it as a background process and script execution will stop until it completes. (https://www.php.net/manual/en/function.exec.php#refsect1-function.exec-notes)
If a program is started with this function, in order for it to continue running in the background, the output of the program must be redirected to a file or another output stream. Failing to do so will cause PHP to hang until the execution of the program ends.
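Point (a) above is reproducible in a plain shell, independent of PHP: the exit status of a backgrounded command is the status of the & operator itself, which is always 0, so a result-code argument can only ever see success. A minimal sketch:

```shell
# a backgrounded command's exit status is that of the '&' operator,
# which is always 0 - even though 'false' itself fails
sh -c 'false >/dev/null 2>&1 &'
code=$?
echo "exit status: $code"   # prints: exit status: 0
```

This is why a redirect-and-background command line always hands exec() a zero result code, regardless of what the command does.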
As far as I can tell, exec() is not the right tool. For a more controlled approach, you can use proc_open(). That might look something like this:
$process = proc_open(
    'ssh -f -L 3400:127.0.0.1:3306 user@example.com sleep 1',
    [/* stdin */ 0 => ["pipe", "r"], /* stdout */ 1 => ["pipe", "w"], /* stderr */ 2 => ["pipe", "w"]],
    $pipes
);
// Set the streams to non-blocking.
// This is required since any unread output on the pipes may result in the process still being marked as running.
// Note that this does not work on Windows due to restrictions in the Windows API (https://bugs.php.net/bug.php?id=47918)
stream_set_blocking($pipes[1], 0);
stream_set_blocking($pipes[2], 0);
// Wait a little bit - you would probably have to loop here and check regularly.
// Also note that you may need to read from stdout and stderr from time to time to allow the process to finish.
sleep(2);
// The process should now be running as a background task.
// You can check whether the process has finished like this:
$status = proc_get_status($process);
if (!$status["running"] || $status["signaled"] || $status["stopped"]) {
    // Process should have stopped - read the output
    $stdout = stream_get_contents($pipes[1]) ?: "";
    $stderr = stream_get_contents($pipes[2]) ?: "";
    // Close everything
    @fclose($pipes[1]);
    @fclose($pipes[2]);
    proc_close($process);
}
You can find more details in the manual page for proc_open.
If I add 2>&1 to my command the error output just goes nowhere since stdout is already being piped to /dev/null.
You can redirect stdout to null and stderr to stdout. That would seem to me as the simpler way of doing what you want (minimal modification).
So instead of
>temp.log 2>&1
do:
2>&1 1>/dev/null
Note that the order of the redirects is important.
Test
First we exec without redirection, then we redirect as above to capture stderr.
<?php
$me = $argv[0];
$out = exec("ls -la no-such-file {$me}");
print("The output is '{$out}'\n");
print("\n----\n");
$out = exec("ls -la no-such-file {$me} 2>&1 1>/dev/null");
print("The output is '{$out}'\n");
print("\n");
$ php -q tmp.php
ls: cannot access 'no-such-file': No such file or directory
The output is '-rw-r--r-- 1 lserni users 265 Oct 25 22:48 tmp.php'
----
The output is 'ls: cannot access 'no-such-file': No such file or directory'
Update
This requirement was not clear initially: "process must detach" (as if it went into the background). Now, the fact is, any redirection you apply to the original stream via exec() will prevent the process from detaching, because at the time the detachment would happen the process has not completed and its output has not yet been delivered.
That is also why exec() reports a zero error code - there was no error in spawning. If you want the final result, someone must wait for the process to finish. So you have to redirect locally (that way it is the local file that does the waiting), then later reconnect with whatever waited for the process to finish and read the results.
For what you want, exec will never work. You ought to use the proc_* functions.
You might, however, force a detach even so using nohup (you have no control over the spawned PID, so this is less than optimal):
if (file_exists('nohup.out')) { unlink('nohup.out'); }
$out = shell_exec('nohup ssh ... 2>&1 1>/dev/null &');
...still have to wait for connection to be established...
...read nohup.out to verify...
...
...do your thing...
As I said, this is less than optimal. Using proc_*, while undoubtedly more complicated, would allow you to start the ssh connection in tunnel mode without a terminal, and terminate it as soon as you don't need it anymore.
Actually, however, no offense intended, but this is a "X-Y problem". What you want to do is open a SSH tunnel for MySQL. So I'd look into doing just that.
I need to echo text to a named pipe (FIFO) in Linux. Even though I'm running the command in the background with '&' and redirecting all output to /dev/null, the shell_exec call always blocks.
There are tons of answers to pretty much exactly this question all over the internet, and they all basically point to the following php manual section:
If a program is started with this function, in order for it to continue running in the background, the output of the program must be redirected to a file or another output stream. Failing to do so will cause PHP to hang until the execution of the program ends.
And sure enough, when I try the non-blocking approach (of backgrounding and redirecting to /dev/null) with other commands like sleep, php successfully executes without hanging. But for the case of echo-ing to the FIFO, php hangs even though running the same command with bash produces no visible output and immediately returns to the shell.
In bash, I can run:
bash$ { echo yay > fifo & } &> /dev/null
bash$ cat fifo
yay
[1]+ Done echo yay > fifo
but when running the following php file with php echo.php:
<?php
shell_exec("{ echo yay > fifo & } &> /dev/null");
?>
it hangs, unless I first open fifo for reading.
So my question is: why does this block when sleep doesn't? I'd also like to know what is happening behind the scenes. When I put the '&' in the php call, even though the shell_exec call blocks, the echo call clearly doesn't block whatever shell session php invoked it on: when I Ctrl+C out of php, I can read 'yay' from the FIFO (if I don't background the echo command, the FIFO contains no text after Ctrl+C). This suggests that perhaps php is waiting on the pid of the echo command before going to the next instruction. Is this true?
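For what it's worth, the blocking can be reproduced entirely outside PHP. open(2) on a FIFO for writing does not return until some process opens it for reading, so even the backgrounded writer sits parked inside the redirect. A small shell sketch (the mktemp directory is a placeholder):

```shell
dir=$(mktemp -d)
mkfifo "$dir/fifo"
# the writer parks inside open(2) on the FIFO, before echo even runs
( echo yay > "$dir/fifo" ) &
writer=$!
sleep 1
kill -0 "$writer" 2>/dev/null && echo "writer still blocked in open()"
# the first reader releases the writer
got=$(cat "$dir/fifo")
wait "$writer"
echo "read: $got"
rm -rf "$dir"
```

Anything holding the shell's output descriptors open while blocked like this keeps shell_exec waiting for EOF, which matches the observed hang.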
I've been trying something similar and in the end came up with this solution:
/**
 * Runs a shell command on the server under the current PHP user; in CLI mode that is the user you are logged in with.
 * If a command is run in the background the method will return the location of the temp file that captures the output.
 * In that case you will have to remove the temporary file manually.
 */
static public function command($cmd, $show_output = true, $escape_command = false, $run_in_background = false)
{
    if ($escape_command) {
        $cmd = escapeshellcmd($cmd);
    }
    $f = trim(`mktemp`);
    passthru($cmd . ($show_output ? " | tee $f" : " > $f") . ($run_in_background ? ' &' : ''));
    return $run_in_background ? $f : trim(`cat $f ; rm -rf $f`);
}
The trick is to write the output to a temporary file and return its contents when the command has finished (blocking behavior), or just return the file path (non-blocking behavior). Also, I'm using passthru rather than shell_exec because interactive sessions are not possible with the latter due to its blocking behavior.
I am wanting to execute a large, database intensive script, but do not need to wait for the process to finish. I would simply like to call the script, let it run in the background and then redirect to another page.
EDIT:
I am working on a local Zend community server, on Windows 7.
I have access to remote Linux servers where the project also resides, so I can do this on Linux or Windows.
I have this:
public function createInstanceAction()
{
    //calls a separate php process which creates the instance
    exec('php -f /path/to/file/createInstance.php');
    Mage::getSingleton('adminhtml/session')->addSuccess(Mage::helper('adminhtml')->__('Instance creation process started. This may take up to a few minutes.'));
    $this->_redirect('instances/adminhtml_instances/');
    return;
}
This works perfectly, but the Magento application hangs around for the process to finish. It does everything I expect, logging to file from time to time, and I am happy with how it's running. Now all I would like is for this script to start and for the controller action to redirect immediately instead of hanging around. From what I have learnt about exec(), you can do so by changing the call above to:
exec('php -f /path/to/file/createInstance.php > /dev/null 2>&1 &');
which I took from here.
If I add "> /dev/null 2>&1 &" to the exec call, it doesn't wait around as expected, but it no longer executes the script. Could someone tell me why, and how I can get this to work?
Could this be a permission related issue?
thanks
EDIT: I'm assuming it would be an issue to have any output logged to file if I call exec() with "> /dev/null 2>&1 &", since that would discard the output. Is that correct?
After taking the time to fully understand my own question and the ways it could be answered, I have prepared my solution.
Thanks to all for your suggestions, and for excusing my casual unpreparedness when asking the question.
The answer to the above question depends on a number of things, such as the operating system you are referring to, which PHP modules you are running, and even what webserver you are running. So if I had to ask the question again, the first thing I would do is state my setup.
I wanted to achieve this on two environments :
1.) Windows 7 running Zend server community edition.
2.) Linux (my OS is Linux odysseus 2.6.32-5-xen-amd64 #1 SMP Fri Sep 9 22:23:19 UTC 2011 x86_64)
To get this right, I wanted it to work either way when deploying to Windows or Linux, so I used PHP to determine what the operating system was.
public function createInstanceAction()
{
    //determine what operating system is being used
    if (strtoupper(substr(PHP_OS, 0, 3)) === 'WIN') {
        //this is a Windows server
        //call a separate php process to run independently from the browser action
        pclose(popen("start php /path/to/script/script.php", "r"));
    } else {
        //assuming it's Linux, though in fact it simply means it's not Windows
        //to check for Linux specifically use (strtoupper(substr(PHP_OS, 0, 3)) === 'LIN')
        exec('php -f /path/to/file/script.php >/dev/null 2>&1 &');
    }
    //the browser will not hang around for this process to complete, and you can continue with whatever actions you want
    //my script logs any output so I can capture info as it runs
}
In short, ask questions once you understand them. There are many ways to achieve the above, and this is just one solution that works for my development and production environments.
thanks for the help all.
PHP popen
From the docs (this should help you do other stuff while that process is working; I'm not sure whether closing the current PHP process will kill the opened process):
/* Add redirection so we can get stderr. */
$handle = popen('/path/to/executable 2>&1', 'r');
echo "'$handle'; " . gettype($handle) . "\n";
$read = fread($handle, 2096);
echo $read;
pclose($handle);
Solution 2:
Trick the browser to close the connection (assuming there is a browser involved):
ob_start();
?><html><!--example html body--></html><?php
$strContents=ob_get_clean();
header("Connection: Close");
header("Content-encoding: none");//doesn't work without this, I don't know why:(
ignore_user_abort(true);
header("Content-type: text/html");
header("Content-Length: ".strlen($strContents));
echo $strContents;
flush();
//at this point a real browser would close the connection and finish rendering;
//crappy http clients like some curl implementations (and not only) would wait for the server to close the connection, then finish rendering/serving results...:(
//TODO: add long running operations here, exec, or whatever you have.
You could write a wrapper-script, say createInstance.sh like
#! /bin/bash
trap "" SIGHUP
php -f "$1" > logfile.txt 2>&1 &
Then you call the script from within PHP:
exec('bash "/path/to/file/createInstance.sh"');
which should detach the new php process almost instantly from the script. If that doesn't help, you might try SIGABRT, SIGTERM or SIGINT instead of SIGHUP; I don't know exactly which signal is sent.
I've been able to use:
shell_exec("nohup $command > /dev/null & echo $!")
Where $command is for example:
php script.php --parameter 1
I've noticed some strange behavior with this. For example running mysql command line doesn't work, only php scripts seem to work.
Also, running "cd /path/to/dir && nohup php $command ..." doesn't work either; I had to chdir() within the PHP script and then run the command for it to work.
The PHP executable included with Zend Server seems to be what's causing attempts to run a script in the background (using the ampersand & operator in the exec) to fail.
We tested this using our standard PHP executable and it worked fine. It's something to do with the version shipped with Zend Server though our limited attempts to figure out what that was going on have not turned anything up.
I maintain a game server and unruly players frequently crash the application. My moderation team needs the ability to restart the server process, but allowing SSH access would be impractical/insecure, so I'm using shell_exec to pass the needed commands to restart the server process from a web-based interface. The problem is, the shell session doesn't detach properly and thus PHP maintains its session until it finally times out and closes the session/stops the server process.
Here's how I'm calling shell_exec:
$command='nohup java -jar foobar_server.jar';
shell_exec($command);
shell_exec will wait until the command you've executed returns (e.g. drops back to a shell prompt). If you want to run it as a background task, so shell_exec returns immediately, then do:
$command='nohup java -jar foobar_server.jar &';
^--- run in background
Of course, that assumes you're doing this on a unix/linux host. For windows, it'd be somewhat different.
If you try this you'd see it won't work. To fully detach in PHP you must also do stdout redirection else shell_exec will hang even with '&'.
This is what you'd really want:
shell_exec('java -jar foobar_server.jar >/dev/null 2>&1 &');
But to take this one step further, I would get rid of the web interface and make this a one-minute interval cronjob which first checks if the process is running, and if it's not start a new instance:
#!/bin/bash
# pgrep -f matches against the full command line, so it can find "java -jar foobar_server.jar"
if ! pgrep -f foobar_server.jar > /dev/null; then
    java -jar foobar_server.jar >/tmp/foobar_server.log 2>&1 &
fi
And have that run every minute: if it finds a running process it does nothing, else it starts a new instance. The worst-case scenario after a server crash is 59 seconds of downtime.
Cheers
I'm looking for the best, or any way really to start a process from php in the background so I can kill it later in the script.
Right now, I'm using: shell_exec($Command);
The problem with this is it waits for the program to close.
I want something that will have the same effect as nohup when I execute the shell command. This will allow me to run the process in the background, so that later in the script it can be closed. I need to close it because this script will run on a regular basis and the program can't be open when this runs.
I've thought of generating a .bat file to run the command in the background, but even then, how do I kill the process later?
The code I've seen for linux is:
$PID = shell_exec("nohup $Command > /dev/null & echo $!");
// Later on to kill it
exec("kill -KILL $PID");
EDIT: Turns out I don't need to kill the process
shell_exec('start "" /B "C:\Path\to\program.exe"'); // the empty "" is the window title; without it, start treats the quoted path as the title
The /B parameter is key here.
I can't seem to find where I found this anymore. But this works for me.
Will this function from the PHP Manual help?
function runAsynchronously($path, $arguments) {
    $WshShell = new COM("WScript.Shell");
    $oShellLink = $WshShell->CreateShortcut("temp.lnk");
    $oShellLink->TargetPath = $path;
    $oShellLink->Arguments = $arguments;
    $oShellLink->WorkingDirectory = dirname($path);
    $oShellLink->WindowStyle = 1;
    $oShellLink->Save();
    $oExec = $WshShell->Run("temp.lnk", 7, false);
    unset($WshShell, $oShellLink, $oExec);
    unlink("temp.lnk");
}
Tried to achieve the same on a Windows 2000 server with PHP 5.2.8.
None of the solutions worked for me. PHP kept waiting for the response.
Found the solution to be :
$cmd = "E:\PHP_folder_path\php.exe E:\some_folder_path\backgroundProcess.php";
pclose(popen("start /B ". $cmd, "a")); // mode = "a" since I had some logs to edit
From the php manual for exec:
If a program is started with this function, in order for it to continue running in the background, the output of the program must be redirected to a file or another output stream. Failing to do so will cause PHP to hang until the execution of the program ends.
i.e. pipe the output into a file (and append & to background the command) and php won't wait for it:
exec('myprog > output.txt &');
From memory, I believe there is a control character that you can prepend (like you do with @) to the exec family of commands that also prevents execution from pausing - can't remember what it is though.
Edit: Found it! On Unix, programs executed with & appended will run in the background. Sorry, doesn't help you much.
On my Windows 10 and Windows Server 2012 machines, the only solution that worked reliably within pclose/popen was to invoke powershell's Start-Process command, as in:
pclose(popen('powershell.exe "Start-Process foo.bat -WindowStyle Hidden"','r'));
Or more verbosely if you want to supply arguments and redirect outputs:
pclose(popen('powershell.exe "Start-Process foo.bat
-ArgumentList \'bar\',\'bat\'
-WindowStyle Hidden
-RedirectStandardOutput \'.\\console.out\'
-RedirectStandardError \'.\\console.err\'"','r'));