In this code:
session_write_close();
echo "reload";
flush();
// exec("/etc/init.d/streaminit stop");
// sleep(2);
// session_write_close();
// exec("/etc/init.d/streaminit start");
// //all we have to do is copy currentView into nextView to trigger a page reload
// sleep(2);
the echo of "reload" works, but if the lines below it are uncommented, nothing is echoed. I have tried many permutations of this and the conclusion is that the exec command is preventing the echo from working.
I found some discussion of exec causing problems with Apache2, and one person said that session_write_close() might prevent the problem. Evidently in this case it doesn't. Are there any known fixes for this? Am I doing something wrong?
(streaminit is a shell script that starts and stops mjpeg_streamer; the shell commands it runs are asynchronous, with & at the end.)
I finally found this in the documentation for PHP's exec: "If a program is started with this function, in order for it to continue running in the background (my emphasis), the output of the program must be redirected to a file or another output stream. Failing to do so will cause PHP to hang until the execution of the program ends." The fix:
exec("/etc/init.d/streaminit stop > /dev/null 2>&1 &”);
For those unfamiliar (like me until a minute ago): > /dev/null redirects stdout to /dev/null, and 2>&1 means "send stderr output to the same place as stdout". Finally, the & means "run this command in the background". Works!
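As a small sketch, the same pattern can be wrapped in a helper so every call gets the redirection automatically (exec_async is a hypothetical name, not a PHP built-in):

function exec_async($cmd)
{
    // > /dev/null  discards stdout
    // 2>&1         sends stderr to the same place as stdout
    // &            backgrounds the command so exec() returns immediately
    exec($cmd . ' > /dev/null 2>&1 &');
}

exec_async('/etc/init.d/streaminit stop');
echo "reload"; // now reaches the browser
flush();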
Related
I need to echo text to a named pipe (FIFO) in Linux. Even though I'm running it in the background with '&' and redirecting all output to /dev/null, the shell_exec call always blocks.
There are tons of answers to pretty much exactly this question all over the internet, and they all basically point to the following php manual section:
If a program is started with this function, in order for it to continue running in the background, the output of the program must be redirected to a file or another output stream. Failing to do so will cause PHP to hang until the execution of the program ends.
And sure enough, when I try the non-blocking approach (of backgrounding and redirecting to /dev/null) with other commands like sleep, php successfully executes without hanging. But for the case of echo-ing to the FIFO, php hangs even though running the same command with bash produces no visible output and immediately returns to the shell.
In bash, I can run:
bash$ { echo yay > fifo & } &> /dev/null
bash$ cat fifo
yay
[1]+ Done echo yay > fifo
but when running the following php file with php echo.php:
<?php
shell_exec("{ echo yay > fifo & } &> /dev/null");
?>
it hangs, unless I first open fifo for reading.
So my question is: why does this block, while sleep doesn't? I also want to know what is happening behind the scenes. When I put the '&' in the php call, the echo clearly doesn't block whatever shell session php invoked it in, even though the shell_exec call itself blocks: when I CTRL+C out of php, I can read 'yay' from the FIFO (whereas if I don't background the echo command, the FIFO contains no text after CTRL+C). This suggests that php is perhaps waiting on the pid of the echo command before going to the next instruction. Is this true?
I've been trying something similar and in the end came up with this solution:
/**
 * Runs a shell command on the server under the current PHP user; in CLI mode
 * that is the user you are logged in with.
 * If the command is run in the background, the method returns the location of
 * the temp file that captures the output. In that case you will have to remove
 * the temporary file manually.
 */
static public function command($cmd, $show_output = true, $escape_command = false, $run_in_background = false)
{
    if ($escape_command)
        $cmd = escapeshellcmd($cmd);
    $f = trim(`mktemp`);
    passthru($cmd . ($show_output ? " | tee $f" : " > $f") . ($run_in_background ? ' &' : ''));
    return $run_in_background ? $f : trim(`cat $f ; rm -rf $f`);
}
The trick is to write the output to a temporary file and return it when the command has finished (blocking behavior), or just return the file path (non-blocking behavior). Also, I'm using passthru rather than shell_exec, because the latter's blocking behavior makes interactive sessions impossible.
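For illustration, a hypothetical call site (the class name Shell is my assumption; the answer doesn't name one):

// Blocking: streams the output live via tee and returns it when done.
$out = Shell::command('ls -la');

// Non-blocking: returns the temp file path immediately.
$tmp = Shell::command('/etc/init.d/streaminit stop', false, false, true);
// ...later, collect the captured output and clean up manually:
$output = file_get_contents($tmp);
unlink($tmp);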
The following code is executed in PHP:
shell_exec('wget -q -T 0 -b -O "'.dirname(__FILE__).'/logs/log.txt" "'.$_SERVER['SERVER_NAME'].'/my/script.php" > /dev/null 2>&1');
After a few hours the script just stops for no apparent reason. There is no error in log.txt, nor in the Apache error logs.
These lines should prevent such behaviour:
session_write_close();
ini_set('max_execution_time',0);
set_time_limit(0);
And I also added echo(' '); flush(); from time to time, to make sure Apache doesn't kill the script for producing no output (that error had appeared in the Apache error logs).
An interesting hint: when I add ignore_user_abort(1); at the beginning, the script doesn't stop.
I know there are other ways to finish the script and collect the output, but what I'm looking for is the reason why wget abandons downloading even when the timeout is set to 0. Or maybe the reason lies elsewhere?
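For reference, here is how the pieces described above fit together in one place (a sketch: the loop bound and sleep interval are placeholders, not from the question):

session_write_close();
ini_set('max_execution_time', 0);
set_time_limit(0);
ignore_user_abort(1); // the hint that kept the script alive

shell_exec('wget -q -T 0 -b -O "' . dirname(__FILE__) . '/logs/log.txt" "'
    . $_SERVER['SERVER_NAME'] . '/my/script.php" > /dev/null 2>&1');

// Emit a byte from time to time so Apache doesn't kill the request for
// producing no output.
for ($i = 0; $i < 120; $i++) { // placeholder bound (~1 hour at 30s steps)
    echo ' ';
    flush();
    sleep(30);
}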
For a website, I need to be able to start and stop a daemon process. What I am currently doing is
exec("sudo /etc/init.d/daemonToStart start");
The daemon process is started, but Apache/PHP hangs. Doing a ps aux revealed that sudo itself changed into a zombie process, effectively killing all further progress. Is this normal behavior when trying to start a daemon from PHP?
And yes, Apache has the right to execute the /etc/init.d/daemonToStart command. I altered the /etc/sudoers file to allow it to do so. No, I have not allowed Apache to be able to execute any kind of command, just a limited few to allow the website to work.
Anyway, going back to my question: is there a way to allow PHP to start daemons without creating a zombie process? I ask because the reverse, stopping an already started daemon, works just fine.
Try appending > /dev/null 2>&1 & to the command.
So this:
exec("sudo /etc/init.d/daemonToStart > /dev/null 2>&1 &");
Just in case you want to know what it does/why:
> /dev/null - redirect STDOUT to /dev/null (blackhole it, in other words)
2>&1 - redirect STDERR to STDOUT (blackhole it as well)
& detach process and run in the background
I had the same problem.
I agree with DaveRandom that you have to suppress all output (stdout and stderr). But there is no need to launch another process with a trailing '&': if you do, the exec() function can no longer check the return code, and reports success even when there is an error...
And I prefer to store the output in a temporary file, instead of 'blackholing' it.
Working solution:
$temp = tempnam(sys_get_temp_dir(), 'php');
exec('sudo /etc/init.d/daemonToStart >'.$temp.' 2>&1');
Just read file content after, and delete temporary file:
$output = explode("\n", file_get_contents($temp));
@unlink($temp);
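Building on that, a sketch that also captures exec()'s return code, which is exactly what a trailing '&' would have thrown away (the $rc handling is illustrative):

$temp = tempnam(sys_get_temp_dir(), 'php');
exec('sudo /etc/init.d/daemonToStart start > ' . $temp . ' 2>&1', $lines, $rc);
$output = explode("\n", file_get_contents($temp));
@unlink($temp);
if ($rc !== 0) {
    // non-zero exit status: the daemon failed to start; $output has the details
}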
I have never tried starting a daemon from PHP, but I have tried running other shell commands, with much trouble. Here are a few things I have tried, in the past:
As per DaveRandom's answer, append > /dev/null 2>&1 & to the end of your command. The 2>&1 redirects errors to standard output; if you redirect to a log file instead of /dev/null, you can then use that output to debug.
Make sure your webserver user's PATH contains all the binaries referenced inside your daemon script. You can check this by calling exec('echo $PATH; whoami'). This will tell you the user PHP is running under and its current PATH variable.
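A sketch of that check (the output shown is illustrative, not from a real run):

exec('echo $PATH; whoami', $out);
print_r($out);
// e.g. Array ( [0] => /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
//              [1] => www-data )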
I thought this would run without waiting for output:
php /scripts/htdocs/summaries.live/app/scripts/generate-pdfs.php live 1 > /dev/null 2>&1
But that's not happening: PHP's exec() function waits for the output. How can I work around this?
you're missing & on the end of command
php /scripts/htdocs/summaries.live/app/scripts/generate-pdfs.php live 1 > /dev/null 2>&1 &
If running something with exec, the documentation states:

Note: If a program is started with this function, in order for it to continue running in the background, the output of the program must be redirected to a file or another output stream. Failing to do so will cause PHP to hang until the execution of the program ends.
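For completeness, the same fix wrapped in the PHP call would look like this (a sketch; the path is copied from the question):

exec('php /scripts/htdocs/summaries.live/app/scripts/generate-pdfs.php live 1'
    . ' > /dev/null 2>&1 &');
// exec() now returns immediately while the PDF generation keeps running.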
I have this in one PHP file:
echo shell_exec('nohup /usr/bin/php -f '.CRON_DIRECTORY.'testjob.php > /dev/null 2>&1 &');
and in testjob.php I have:
file_put_contents('test.txt',time()); exit;
And it all runs just dandy. However, if I look at the process list, testjob.php is not terminating after it runs.
(Having to post this as an answer instead of comment as stackoverflow still won't let me post comments...)
Works for me. I made testjob.php exactly as described, and another file test.php with just the given line (except I removed CRON_DIRECTORY, because testjob.php was in the same directory for me).
To be sure I was measuring correctly, I added "sleep(5)" at the top of testjob.php, and in another window I have:
watch 'ps a |grep php'
running. This happens:
I run test.php
test.php exits immediately but testjob.php appears in my list
After 5 seconds it disappears.
I wondered if shell might matter, so I switched from bash to sh. Same result.
I also wondered if it might be because your outer script is long-running. So I put "sleep(10)" at the bottom of test.php. Same result (i.e. testjob.php finishes after 5 seconds, test.php finishes 5 seconds after that).
So, unhelpfully, your problem is somewhere other than the code you've posted.
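For anyone who wants to repeat the test, a sketch of the two files as described (sleep placement per the description above):

// test.php
echo shell_exec('nohup /usr/bin/php -f testjob.php > /dev/null 2>&1 &');

// testjob.php
sleep(5); // keeps the process visible in ps for a few seconds
file_put_contents('test.txt', time());
exit;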
Remove & from the end of your command. This symbol says nohup to continue running in background, thus shell_exec is waiting for task to complete... and waiting... and waiting... till the end of times ;)
I don't even understand why you would run this command with nohup.
echo shell_exec('/usr/bin/php -f '.CRON_DIRECTORY.'testjob.php > /dev/null 2>&1');
should be enough.
You're executing PHP and making that execution a background task. That means it will run in the background until it is finished. shell_exec will not kill that process or anything similar.
You might want to set an execution limit: the PHP CLI defaults to unlimited execution time. See also set_time_limit in the PHP Manual.
So if you wonder why the PHP process does not terminate, you need to debug the script. If that's too complicated and you're unable to find out why the script runs that long, you might just terminate the process after some time, e.g. after 1 minute.
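A minimal sketch of that safeguard inside testjob.php (the 60-second limit is an arbitrary example):

set_time_limit(60); // abort the CLI script after ~60 seconds of execution
file_put_contents('test.txt', time());
exit;

Note that on Unix, time spent in sleep() or waiting on external commands does not count toward this limit; only actual PHP execution time does.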