I'm piping lines into a PHP script (see contrived example below). Alas, the pipe unintentionally flows into the shell command in the script, so nano doesn't run because it chokes on STDIN.
I want the shell command to run completely independently of the STDIN piped to the main script. In other words, the PHP script should "eat" the STDIN so it doesn't reach the sub-shell. How do I fix this?
Note that exec(), system() and passthru() all give the same result.
$ echo -e "World\nEverybody" | php script.php
Hello World
Received SIGHUP or SIGTERM
Hello Everybody
Received SIGHUP or SIGTERM
script.php:
<?php
foreach (file("php://stdin") as $name) {
    echo "Hello $name";
    passthru("nano");
}
?>
Environment:
PHP 7.1.14 / PHP 5.6.30
GNU bash, version 3.2.57
GNU nano version 2.0.6
The pipe isn't really flowing into the sub-shells. In fact, nothing is flowing in. In order to connect nano's STDIN to the terminal, redirect the controlling terminal (always /dev/tty) into nano, like this:
passthru("nano </dev/tty");
Here's an answer to your follow-up question. (Very good question IMO. My previous answer was slightly wrong in fact. STDIN does flow into the child processes.)
If the script consists of just passthru("nano") and you don't pipe anything into PHP, then nano works without </dev/tty. Why is this?
Linux behavior
In fact, child processes do inherit STDIN from their parent processes, but because of buffering this isn't always obvious. And since they share the same STDIN, once it reaches EOF the child does whatever it normally does on EOF (to see what nano does in that case, see below).
Let's take PHP out of the equation and see what we get when we turn buffering on or off. Here's some C code that will read from STDIN, call system(), and read from STDIN again:
#include <stdio.h>
#include <stdlib.h>

int main() {
    // setvbuf(stdin, NULL, _IONBF, 0 );
    char buffer[32];
    gets(buffer);
    printf("Hello %s\n", buffer);
    system("bash -c 'read FOO; echo This is bash, got $FOO'");
    gets(buffer);
    printf("Hello2 %s\n", buffer);
}
Compile (ignore the warnings about gets) and run:
$ cc -o script script.c
$ echo -e "Foo\nBar\nCar" | ./script
Hello Foo
This is bash, got
Hello2 Bar
bash didn't get anything; the gets after system() got that input instead. Now uncomment the setvbuf line:
- // setvbuf(stdin, NULL, _IONBF, 0 );
+ setvbuf(stdin, NULL, _IONBF, 0 );
And we get:
$ cc -o script script.c
$ echo -e "Foo\nBar\nCar" | ./script
Hello Foo
This is bash, got Bar
Hello2 Car
This time bash got the second input. "Too long; didn't read": We do in fact have the same STDIN.
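The same experiment can be reproduced in PHP; a small sketch, assuming you run it as echo -e "Foo\nBar" | php shared_stdin.php (the file name is arbitrary):
<?php
// The child's `read` inherits the same STDIN as PHP. Because PHP has not
// read (and therefore not buffered) anything yet, the child consumes "Foo".
passthru('bash -c \'read FOO; echo "child got: $FOO"\'');
// PHP then reads from the same descriptor and gets the next line, "Bar".
echo "parent got: " . fgets(STDIN);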
`nano` internals
First of all, you will find that nano's behavior is the same even if you take PHP out of the equation:
$ echo foo | nano
Received SIGHUP or SIGTERM
nano could in principle detect whether it has a terminal and, if not, attempt to open /dev/tty itself (it's just a regular open call). In fact, nano does this if you execute it like this:
echo foo | nano -
The scoop_stdin function in src/nano.c takes care of this in version 2.9.4: http://git.savannah.gnu.org/cgit/nano.git/tree/src/nano.c?h=v2.9.4#n1122
And the finish_stdin_pager function in version 2.7.4: http://git.savannah.gnu.org/cgit/nano.git/tree/src/nano.c?h=v2.7.4#n1116
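As an aside, the same /dev/tty fallback can be done from PHP itself rather than on the shell command line; a rough sketch (assumes the posix extension for posix_isatty(); on PHP 7.2+ stream_isatty() works too):
<?php
// If STDIN is a pipe rather than a terminal, fall back to the controlling
// terminal, roughly what `nano -` arranges for itself internally.
$in = STDIN;
if (!posix_isatty($in)) {
    $tty = fopen('/dev/tty', 'r');
    if ($tty !== false) {
        $in = $tty;
    }
}
// ... read interactive input from $in ...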
So what happens when nano gets EOF? EOF in key input is handled like this:
Version 2.7.4: get_key_buffer() calls handle_hupterm(0) directly.
http://git.savannah.gnu.org/cgit/nano.git/tree/src/winio.c?h=v2.7.4#n207
Version 2.9.4: die(_("Too many errors from stdin"));
http://git.savannah.gnu.org/cgit/nano.git/tree/src/winio.c?h=v2.9.4#n207
(The reason I'm linking both is because the message changed at some point.)
I hope that sort of makes things clearer.
Related
I use OpenBSD and PHP for my private homepage. For the purpose of education (don't use this in production), I have tried to execute a simple Hello World program in a chroot. I linked the binary statically, but I always get the result 127 (command not found).
How I execute the command in PHP:
<?php
$output = null;
$result = null;
echo getcwd();
exec("./foo", $output, $result);
var_dump($output);
var_dump($result);
?>
The program ./foo definitely resides in the current working directory. Also the file permissions are correct.
The program in C:
#include <stdio.h>

int main(int argc, char *argv[])
{
    fprintf(stdout, "foo: stdout");
    fprintf(stderr, "foo: stderr");
    return 42;
}
Compiled with:
$ cc -static -o foo foo.c
The output from PHP:
htdocs/example.org/www
array(0) {
}
int(127)
I would understand this behavior if the program were linked dynamically (missing shared libraries).
Is there a specific security feature enabled in OpenBSD's default configuration that doesn't allow PHP to execute binaries, or can somebody explain why this isn't working?
I also haven't disabled the exec() function in /etc/php-8.0.ini.
I have found the solution.
In OpenBSD's default configuration, httpd and PHP run in a chroot. The default directory /var/www is used, and there are only a few applications in /var/www/bin. The default shell is also missing; unfortunately, PHP's exec() requires a shell, hence the error 127 (command not found).
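You can confirm this from inside the chrooted PHP before calling exec(); a small diagnostic sketch (remember that inside the chroot, /bin/sh really means /var/www/bin/sh):
<?php
// Inside the httpd/PHP chroot, "/" is /var/www, so this checks /var/www/bin/sh.
if (!file_exists('/bin/sh')) {
    echo "no /bin/sh inside the chroot - exec() will fail with 127\n";
} else {
    exec('./foo', $output, $result);
    var_dump($output, $result);
}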
A quick and dirty solution is to copy /bin/sh (which is statically linked) into the chroot:
$ cp /bin/sh /var/www/bin
Now PHP's exec command should work on OpenBSD.
I have a small script I tested on the command line using php test.php.
test.php
<?php
exec('ps -acux | grep test', $testvar);
print_r($testvar);
?>
This works fine. I am able to run the script and get the desired result. However, when I add the code to a file being run by my PHP server, the result is empty.
My OS is FreeBSD. Looking at the man page for ps the only restriction I see is on the -a option. It states:
If the security.bsd.see_other_uids sysctl is set to zero, this option
is honored only if the UID of the user is 0.
My security.bsd.see_other_uids is set to 1.
$ sysctl security.bsd.see_other_uids
security.bsd.see_other_uids: 1
The only thing I can think of is that the command is being run by my user when I run it via the command line whereas when run by PHP, it's being run by www. I'm not seeing anything in the manual of ps that indicates www shouldn't be able to run the command.
As per the comments, you need to redirect stderr to stdout. This is achieved by appending 2>&1 to the end of your command. So an example of the PHP exec() is as follows:
exec('ps -acux | grep test 2>&1', $testvar);
This allows you to see potential errors. In my case there was no error, just likely a bug of some sort. However, this is useful whenever you execute a command: if an error occurs, you can retrieve it to see what is happening with your command.
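One detail worth noting: with a pipe in the command, the position of 2>&1 matters. Appended at the end it only captures grep's stderr; if you also want error output from ps itself, redirect before the pipe. A sketch of just that variation, assuming the same command as above:
// Capture stderr from ps as well as from grep.
exec('ps -acux 2>&1 | grep test', $testvar);
print_r($testvar);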
I'm running this command to run Drush, which is basically a PHP CLI for Drupal, inside the running container:
docker-compose -f ../docker-compose.test.yml exec php scripts/bin/vendor/drush.phar -r public_html status-report
The output of this command is fine; it's the list of status information about a specific Drupal instance in the container. I won't be pasting it here as it's long and irrelevant.
Now let's filter this information by piping it into grep:
docker-compose -f ../docker-compose.test.yml exec php scripts/bin/vendor/drush.phar -r public_html status-report | grep -e Warning -e Error
The result is:
Cro Error L
Gra Warning P
HTT Error F
HTT Warning T
Dru Warning N
XML Error L
Which is wrong: it looks like it has been cut to pieces, and most of it is missing.
Now, if we disable allocation of a pseudo-TTY by adding the -T flag:
docker-compose -f ../docker-compose.test.yml exec -T php scripts/bin/vendor/drush.phar -r public_html status-report | grep -e Warning -e Error
The output is correct:
Cron maintenance Error Last run 3 weeks 1 day ago
Gravatar Warning Potential issues
HTTP request status Error Fails
HTTPRL - Non Warning This server does not handle hanging
Drupal core update Warning No update data available
XML sitemap Error Last attempted generation on Tue, 04/18/2017
Why is that?
Bonus question, which probably will be answered by the answer to the previous one: Are there any important side effects of using -T?
Docker version 18.06.1-ce, build e68fc7a215
docker-compose version 1.22.0
UPDATE #1:
To simplify things, I saved the correct output of the whole scripts/bin/vendor/drush.phar -r public_html status-report into a file test.txt and tried:
docker-compose -f ../docker-compose.test.yml exec php cat test.txt | grep -e Warning -e Error
Interestingly, the output is now correct both with and without -T, so it has to have something to do with Drush/PHP, although I'm still interested in what the cause of this can be.
PHP 7.1.12 (cli) (built: Dec 1 2017 04:07:00) ( NTS )
Copyright (c) 1997-2017 The PHP Group
Zend Engine v3.1.0, Copyright (c) 1998-2017 Zend Technologies
with Zend OPcache v7.1.12, Copyright (c) 1999-2017, by Zend Technologies
with Xdebug v2.5.5, Copyright (c) 2002-2017, by Derick Rethans
Drush 8.1.17
UPDATE #2:
To isolate the problem further, I put all the content into a PHP file that simply prints it, and after:
docker-compose -f ../docker-compose.test.yml exec php php php.php | grep -e Warning -e Error
I'm getting a correct output!
So it has to have something to do with how Drush is printing its messages, but I fail to see what it can be. That could be pretty interesting if we could figure this out.
UPDATE #3:
Ok guys, that's actual magic. The problem also happens when running drush without any command, which lists all available ones. The list of commands is broken when the output is being piped, so this can be tested without an actual Drupal instance.
Now I want to present you magic.
In Drush, the output for the list of available commands is generated in commands/core/help.drush.php, in the function drush_core_help(). There is this call: drush_help_listing_print($command_categories); I looked into it. Inside is a call to drush_print_table($rows, FALSE, array('name' => 20)); that is responsible for generating the part of the output that's getting broken.
So inside of it, I decided to intercept the output, just before the last call to drush_print(), by adding a simple file_put_contents('/var/www/html/data.txt', $output);
And now it's time for the absolutely magical part for me.
When I execute:
docker-compose -f ../docker-compose.test.yml exec php scripts/bin/vendor/drush/drush -r public_html
The last group of commands can be checked in this file, and in my case it's:
adminrole-update Update the administrator role permissions.
elysia-cron Run all cron tasks in all active modules for specified site using elysia cron system. This replaces the standard "core-cron" drush handler.
generate-redirects Create redirects.
libraries-download Download library files of registered libraries.
(ldl, lib-download)
libraries-list (lls, Show a list of registered libraries.
lib-list)
BUT, if I execute the same command with the output piped or redirected, for example:
docker-compose -f ../docker-compose.test.yml exec php scripts/bin/vendor/drush/drush -r public_html | cat
SOMETHING DIFFERENT WILL BE SAVED INTO A FILE:
adminrole-update U
p
d
a
t
e
t
h
e
a
d
m
i
n
i
s
t
r
a
t
o
r
r
(and the rest of the broken output)
So the mere fact of piping/redirecting the output influences the execution of the command, before the pipe/redirection actually happens.
How is that even possible? O_o
It's not uncommon for a command-line program to change its output presentation based on whether its output is a terminal, or not. For example, ls by itself, with no options, displays files in a columnar format. When piped, the output changes to a list of one-file-per-line. You can see this in the source code for GNU ls:
case LS_LS:
  /* This is for the 'ls' program.  */
  if (isatty (STDOUT_FILENO))
    {
      format = many_per_line;
      set_quoting_style (NULL, shell_escape_quoting_style);
      /* See description of qmark_funny_chars, above.  */
      qmark_funny_chars = true;
    }
  else
    {
      format = one_per_line;
      qmark_funny_chars = false;
    }
  break;
You can emulate the behavior of ls | ... with the explicit argument ls -1, and this too is not uncommon: programs that implicitly change their output presentation often provide a way to explicitly engage that alternate presentation.
Support for this isn't just a convention: it's actually a requirement for ls in POSIX:
The default format shall be to list one entry per line to standard output; the exceptions are to terminals or when one of the -C, -m, or -x options is specified. If the output is to a terminal, the format is implementation-defined.
This all may seem magical: how does ls know it's got a pipe following it since it comes before the pipe? The answer is quite simple, really: the shell parses the whole command line, sets up the pipes, and then forks the respective programs with the input/output wired to pipes appropriately.
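Drush is PHP, and a PHP CLI tool can make the same decision. A minimal sketch of that check (assumes the posix extension; stream_isatty() is the PHP 7.2+ alternative):
<?php
// Choose a presentation based on whether STDOUT is a terminal,
// analogous to the isatty(STDOUT_FILENO) check in GNU ls above.
if (posix_isatty(STDOUT)) {
    echo "columns, colours, wrapping to the terminal width\n";
} else {
    echo "plain one-record-per-line output for pipes\n";
}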
So, what part of the command is doing the alternate presentation? I suspect it's an interaction between the environment of your exec and the column width calculation in drush. On my local environment, drush help | ... doesn't produce any unusual results. You might try piping to (or through) cat -vet to discover any unusual characters in the output.
That said, regarding docker-compose specifically: based on this thread, you're not the only one who has encountered this or a similar issue. I've not trawled the docker source code, but - generally - not allocating a pseudo-tty will make the other end act like a non-interactive shell, which means things like your .bash_profile won't run and you won't be able to read stdin in the run command. This can give the appearance of things not working.
The thread linked above mentions a work around of this form:
docker exec -i $(docker-compose ...) < input-file
which seems reasonable given the meaning of -i, but it also seems rather convoluted for basic scripting.
The fact that -T makes it work for you suggests to me that you have something in your .bash_profile (or similar login-shell-specific start up file) that's changing certain values (maybe COLUMNS) or altering the values in such a way as to have the observed deleterious effect. You might try removing everything from those files, then adding them back to see if any particular one causes the issue.
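If column width is indeed the culprit, it is easy to see what width the process inside the container believes it has. A small diagnostic sketch, run the same way you run drush, with and without -T (COLUMNS and tput are the usual sources a CLI tool consults; exact values will vary):
<?php
// Print the values a column-formatting tool is likely to look at.
echo "COLUMNS env: ", var_export(getenv('COLUMNS'), true), "\n";
echo "tput cols:   ", trim((string) shell_exec('tput cols 2>/dev/null')), "\n";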
I didn't read that very detailed question, but from glancing over it, I'd say the -T option to the exec subcommand is essential if you want to process stdout and stderr in the environment where you execute docker-compose.
I'm trying to use PHP to trigger a bash script that should never stop running. It's not just that the command needs to run and I don't need to wait for output; it needs to continue running after PHP is finished. This has worked other times (and the question has been asked already); the difference seems to be that my bash script has a trap for when it's closed.
Here is my bash script:
#!/bin/bash
set -e
WAIT=5
FILE_LOCK="$1"
echo "Daemon started (PID $$)..."
echo "$$" > "$FILE_LOCK"
trap cleanup 0 1 2 3 6 15
cleanup()
{
    echo "Caught signal..."
    rm -rf "$FILE_LOCK"
    exit 1
}

while true; do
    # do things
    sleep "$WAIT"
done
And here is my PHP:
$command = '/path/to/script.sh /tmp/script.lock >> /tmp/script.log 2>&1 &';
$lastLine = exec($command, $output, $returnVal);
I see the script run, the lock file get created, then it exits, and the trap removes the lock file. In my /tmp/script.log I see:
Daemon started (PID 55963)...
Caught signal...
What's odd is that this only happens when running the PHP via Apache. From command line it keeps running as expected.
The signal on the trap that's being caught is 0.
I've tried wrapping my command in a bash environment, like $command = '/bin/bash -c "' . addslashes($command) . '"';, and also tried adding nohup at the beginning. Nothing seems to work. Is this possible to do for a never-ending script?
Found the problem thanks to @lxg.
My # do things command was giving errors, which was causing the script to exit. For some reason they were suppressed.
After removing set -e from the beginning of my bash script, I started seeing the errors output to my log file. Not sure why they didn't show up before.
The issue was that my bash loop was running PHP commands. Even though my bash user and the Apache user are the same, for some reason they had different $PATHs. This meant that when running on the command line I was using a PHP 7 binary, but when Apache triggered the bash commands it was using a PHP 5 binary (even though Apache itself is configured to use PHP 7). So the application errored out, and that is what caused the script to die.
The solution was to explicitly set the PHP binary path in my bash loop.
I was doing this with
BIN_PHP=$(which php)
But on a true command line it would return one value (/path/to/php7/bin/php) versus the command line initiated by Apache (/path/to/php5/bin/php). Despite Apache running as the same user as my command line, it didn't load the ~/.bashrc that specified my correct PHP path.
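A quick way to see such a mismatch is to log which interpreter and PATH each environment actually gets; a small diagnostic sketch (the log path is arbitrary):
<?php
// Compare this output between a command-line run and an Apache-triggered run.
file_put_contents('/tmp/php-env.log',
    'binary: ' . PHP_BINARY . "\n" .
    'path:   ' . getenv('PATH') . "\n",
    FILE_APPEND
);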
I have a PHP script that listens on a queue. Theoretically, it's never supposed to die. Is there something to check if it's still running? Something like Ruby's God ( http://god.rubyforge.org/ ) for PHP?
God is language-agnostic, but it would be nice to have a solution that works on Windows as well.
I had the same issue - wanting to check if a script is running. So I came up with this, and I run it as a cron job. It grabs the running processes as an array, cycles through each line, and checks for the file name. Seems to work fine. Replace #user# with your script user.
exec("ps -U #user# -u #user# u", $output, $result);
foreach ($output as $line) if (strpos($line, "test.php") !== false) echo "found";
In Linux, run ps as follows:
ps -C php -f
You could then do in a php script:
$output = shell_exec('ps -C php -f');
if (strpos($output, "php my_script.php") === false) {
    shell_exec('php my_script.php > /dev/null 2>&1 &');
}
The above code lists all running php processes in full, then checks whether "my_script.php" is in the list of running processes; if not, it starts the process in the background, without waiting for it to terminate, and carries on doing what it was doing.
Just append a second command after the script. When/if it stops, the second command is invoked. Eg.:
php daemon.php 2>&1 | mail -s "Daemon stopped" you@example.org
Edit:
Technically, this invokes the mailer right away, but the command only completes when the php script ends. Doing this captures the output of the php script and includes it in the mail body, which can be useful for debugging what caused the script to halt.
Simple bash script
#!/bin/bash
while true; do
    if ! pidof -x script.php; then
        php script.php &
    fi
done
Not for windows, but...
I've got a couple of long-running PHP scripts that have a shell script wrapping them. You can optionally return a value from the PHP script that will be checked in the shell script, to either exit, restart immediately, or sleep for a few seconds and then restart.
Here's a simple one that just keeps running the PHP script till it's manually stopped.
#!/bin/bash
clear
date
php -f cli-SCRIPT.php
echo "wait a little while ..."; sleep 10
exec $0
The "exec $0" restarts the script, without creating a sub-process that will have to unravel later (and take up resources in the meantime). This bash script wraps a mail-sender, so it's not a problem if it exits and pauses for a moment.
Here is what I did to combat a similar issue. This helps in the event anyone else has a parameterized php script that you want cron to execute frequently, but you only want one execution running at any time. Add this to the top of your php script, or create a common method.
$runningScripts = shell_exec('ps -ef | grep '.strtolower($parameter).' | grep '.dirname(__FILE__).' | grep '.basename(__FILE__).' | grep -v grep | wc -l');
if ($runningScripts > 1) {
    die();
}
You can write in your crontab something like this:
0 3 * * * /usr/bin/php -f /home/test/test.php my_special_cron
Your test.php file should look like this:
<?php
php_sapi_name() == 'cli' || exit;
if (isset($argv[1])) {
    substr_count(shell_exec('ps -ax'), $argv[1]) < 3 || exit;
}
// your code here
That way you will have only one active instance of the cron job with my_special_cron as the process key. So you can add more jobs within the same php file, for example:
test.php system_send_emails sendEmails
test.php system_create_orders orderExport
Inspired by Justin Levene's answer and improved, since ps -C doesn't work on Mac, which I needed in my case. You can use this in a php script (maybe just before you need the daemon alive); tested on both Mac OS X 10.11.4 & Ubuntu 14.04:
$daemonPath = "FULL_PATH_TO_DAEMON";
$runningPhpProcessesOfDaemon = (int) shell_exec("ps aux | grep -c '[p]hp ".$daemonPath."'");
if ($runningPhpProcessesOfDaemon === 0) {
    shell_exec('php ' . $daemonPath . ' > /dev/null 2>&1 &');
}
Small but useful detail: why grep -c '[p]hp ...' instead of grep -c 'php ...'?
Because while counting processes, the grep -c 'php ...' process itself would be counted as a match for our pattern. Using a bracket expression for the first letter of php makes the grep command itself no longer match the pattern it searches for.
One possible solution is to have it listen on a port using the socket functions. You can check that the socket is still listening with a simple script. Even a monitoring service like pingdom could monitor its status. If it dies, the socket is no longer listening.
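A sketch of such a check, assuming the queue worker listens on localhost port 11300 (the port is made up for the example): the monitor simply tries to connect and treats a refused connection as "worker is down".
<?php
// Probe the port the queue worker is supposed to be listening on.
$conn = @fsockopen('127.0.0.1', 11300, $errno, $errstr, 2);
if ($conn === false) {
    echo "worker not listening: $errstr ($errno)\n";
    // restart it, send an alert, etc.
} else {
    fclose($conn);
    echo "worker alive\n";
}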
Plenty of solutions.. Good luck.
If you have your hands on the script, you can just have it write a timestamp to the database every X iterations, and then let a cron job check whether that value is up to date.
troelskn wrote:
Just append a second command after the script. When/if it stops, the second command is invoked. Eg.:
php daemon.php | mail -s "Daemon stopped" you@example.org
This will call mail each time a line is printed in daemon.php (which should be never, but still.)
Instead, use the double ampersand operator to separate the commands, i.e.
php daemon.php && mail -s "Daemon stopped" you@example.org
If you're having trouble checking for the PHP script directly, you can make a trivial wrapper and check for that. I'm not sufficiently familiar with Windows scripting to put how it's done here, but in Bash, it'd look like...
wrapper_for_test_php.sh
#!/bin/bash
php test.php
Then you'd just check for the wrapper like you'd check for any other bash script: pidof -x wrapper_for_test_php.sh
I have used cmder for Windows, and based on this script I came up with the one below, which I managed to deploy on Linux later.
#!/bin/bash
clear
date
while true
do
    php -f processEmails.php
    echo "wait a little while for 5 seconds...";
    sleep 5
done