The following script monitors /dev/shm/test for new files and outputs information about them in real time.
The problem is that when the user closes the browser, an inotifywait process remains running, and another one is added with every new request.
Is there any way to avoid this?
<?php
$descriptorspec = array(
0 => array("pipe", "r"), // stdin is a pipe that the child will read from
1 => array("pipe", "w"), // stdout is a pipe that the child will write to
2 => array("pipe", "w") // stderr is a pipe that the child will write to
);
$process = proc_open('inotifywait -mc -e create /dev/shm/test/', $descriptorspec, $pipes);
if (is_resource($process)) {
header("Content-type: text/html;charset=utf-8;");
ob_end_flush(); //ends the automatic ob started by PHP
while ($s = fgets($pipes[1])) {
print $s;
flush();
}
fclose($pipes[1]);
fclose($pipes[0]);
fclose($pipes[2]);
// It is important that you close any pipes before calling
// proc_close in order to avoid a deadlock
$return_value = proc_close($process);
echo "command returned $return_value\n";
}
?>
That's because inotifywait will wait until changes happen under /dev/shm/test/, then output diagnostic information on standard error and event information on standard output, and fgets() will wait until it can read a line: reading ends when $length - 1 bytes (the second parameter) have been read, when a newline is encountered (it is included in the return value), or on EOF, whichever comes first. If no length is specified, it keeps reading from the stream until it reaches the end of the line.
So basically, you should either read data from the child process' stdout pipe in non-blocking mode, enabled with stream_set_blocking($pipes[1], 0), or check manually whether there is data on that pipe with stream_select().
Also, you need to ignore user abort with ignore_user_abort(true).
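A minimal sketch of that non-blocking approach (a short shell command stands in for inotifywait here, and all variable names are my own):

```php
<?php
// Sketch: avoid blocking forever in fgets() by combining stream_select()
// with a non-blocking pipe. 'sh -c ...' is a stand-in for inotifywait.
$descriptorspec = [1 => ['pipe', 'w']];
$process = proc_open("sh -c 'echo one; echo two'", $descriptorspec, $pipes);

stream_set_blocking($pipes[1], false);

$lines = [];
while (true) {
    $read = [$pipes[1]];
    $write = $except = null;
    // Wait up to 200 ms for data; stream_select() returns 0 on timeout.
    $n = stream_select($read, $write, $except, 0, 200000);
    if ($n > 0) {
        $s = fgets($pipes[1]);
        if ($s !== false) {
            $lines[] = rtrim($s);
        } elseif (feof($pipes[1])) {
            break; // the child closed its stdout
        }
    }
    // In a web context, this is where you would test connection_aborted()
    // and call proc_terminate($process) if the browser went away.
}
fclose($pipes[1]);
proc_close($process);
```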
As inotifywait runs as its own process that essentially never ends, you need to send it a KILL signal. If you run the script on the CLI, the Ctrl+C signal is sent to the inotifywait process too; but you don't get that when running under the web server.
You send the signal in a function registered with register_shutdown_function, or in the __destruct method of a class.
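A minimal sketch of the shutdown-function variant ('sleep 30' stands in for the long-running inotifywait; the names are my own):

```php
<?php
// Sketch: terminate the child in a shutdown hook so no process lingers
// after the request ends. 'sleep 30' stands in for inotifywait.
$proc = proc_open('sleep 30', [1 => ['pipe', 'w']], $pipes);

$cleanup = function () use ($proc, $pipes) {
    if (is_resource($proc) && proc_get_status($proc)['running']) {
        fclose($pipes[1]);
        proc_terminate($proc, 15); // SIGTERM; use 9 (SIGKILL) if it is ignored
        proc_close($proc);         // reaps the child, so no zombie remains
    }
};
register_shutdown_function($cleanup);
```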
This simple wrapper around proc_open could help:
class Proc
{
private $_process;
private $_pipes;
public function __construct($cmd, $descriptorspec, $cwd = null, $env = null)
{
$this->_process = proc_open($cmd, $descriptorspec, $this->_pipes, $cwd, $env);
if (!is_resource($this->_process)) {
throw new Exception("Command failed: $cmd");
}
}
public function __destruct()
{
if ($this->isRunning()) {
$this->terminate();
}
}
public function pipe($nr)
{
return $this->_pipes[$nr];
}
public function terminate($signal = 15)
{
$ret = proc_terminate($this->_process, $signal);
if (!$ret) {
throw new Exception("terminate failed");
}
}
public function close()
{
return proc_close($this->_process);
}
public function getStatus()
{
return proc_get_status($this->_process);
}
public function isRunning()
{
$st = $this->getStatus();
return $st['running'];
}
}
$descriptorspec = array(
0 => array("pipe", "r"), // stdin is a pipe that the child will read from
1 => array("pipe", "w"), // stdout is a pipe that the child will write to
2 => array("pipe", "w") // stderr is a pipe that the child will write to
);
$proc = new Proc('inotifywait -mc -e create /dev/shm/test/', $descriptorspec);
header("Content-type: text/html;charset=utf-8;");
ob_end_flush(); //ends the automatic ob started by PHP
$pipe = $proc->pipe(1);
while ($s = fgets($pipe)) {
print $s;
flush();
}
fclose($pipe);
$return_value = $proc->close();
echo "command returned $return_value\n";
Or you could use the Symfony Process component, which does exactly the same (plus other useful things).
You can use ignore_user_abort to specify that the script should not stop executing when the user closes the browser window. That solves half of the problem; you also need to check inside your loop, with connection_aborted, whether the window was closed, so you know when to shut everything down in an orderly manner:
header("Content-type: text/html;charset=utf-8;");
ignore_user_abort(true);
ob_end_flush(); //ends the automatic ob started by PHP
while ($s = fgets($pipes[1])) {
print $s;
flush();
if (connection_aborted()) {
proc_terminate($process);
break;
}
}
Does this help?
$proc_info = proc_get_status($process);
pcntl_waitpid($proc_info['pid'], $status);
Related
I'm using this Mozilla SSE example
I added inside the loop a sample PHP proc_open example.
Run from browser, everything works fine.
The only problem is that proc_open() executes a command that can take more than 2 minutes to finish, which makes the browser time out after 2 minutes. And our server uses non-threaded PHP.
Question:
How can I make the PHP script send something to the browser while waiting for proc_open() to finish, in non-threaded PHP?
Code:
date_default_timezone_set("America/New_York");
header("Cache-Control: no-store");
header("Content-Type: text/event-stream");
$counter = rand(1, 10);
while (true) {
// Run a local command
$descriptorspec = array(
0 => array("pipe", "r"), // stdin is a pipe that the child will read from
1 => array("pipe", "w"), // stdout is a pipe that the child will write to
2 => array("file", "/tmp/error-output.txt", "a") // stderr is a file to write to
);
$cwd = '/tmp';
$env = array('some_option' => 'aeiou');
$process = proc_open('HelloWorldProgram', $descriptorspec, $pipes, $cwd, $env);
if (is_resource($process)) {
// $pipes now looks like this:
// 0 => writeable handle connected to child stdin
// 1 => readable handle connected to child stdout
// Any error output will be appended to /tmp/error-output.txt
fwrite($pipes[0], '<?php print_r($_ENV); ?>');
fclose($pipes[0]);
echo stream_get_contents($pipes[1]);
fclose($pipes[1]);
// It is important that you close any pipes before calling
// proc_close in order to avoid a deadlock
$return_value = proc_close($process);
echo "command returned $return_value\n";
}
// Every second, send a "ping" event.
echo "event: ping\n";
$curDate = date(DATE_ISO8601);
echo 'data: {"time": "' . $curDate . '"}';
echo "\n\n";
// Send a simple message at random intervals.
$counter--;
if (!$counter) {
echo 'data: This is a message at time ' . $curDate . "\n\n";
$counter = rand(1, 10);
}
ob_end_flush();
flush();
// Break the loop if the client aborted the connection (closed the page)
if ( connection_aborted() ) break;
sleep(1);
}
I had a problem like this in one of my projects; use set_time_limit(0); at the start of your script, like this:
date_default_timezone_set("America/New_York");
header("Cache-Control: no-store");
header("Content-Type: text/event-stream");
set_time_limit(0); // this prevents the script from being stopped
Workaround
If someone has the same issue: I made this workaround, which sends a "Ping" to the browser while your real command is running and waiting to finish.
MyScript.sh:
#!/bin/bash
for task in "$@"; do
$task &
done
while true; do
echo "Ping"
sleep 30
done
Use:
$ sh MyScript.sh "Your Command Here With All Arguments"
Ping
Ping
Ping
If you have set up the SSE correctly, your browser will receive "Ping" every 30 seconds while your command is running, so it will never time out.
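On the PHP side, the wrapper's output can be relayed to the browser as SSE events as each line arrives. Roughly (a short shell command stands in for MyScript.sh here; the headers assume nothing has been sent yet):

```php
<?php
// Sketch: forward each line a child prints as an SSE event immediately.
// 'sh -c ...' stands in for `sh MyScript.sh "command"`.
header('Content-Type: text/event-stream');
header('Cache-Control: no-store');

$proc = proc_open("sh -c 'echo step1; echo step2'", [1 => ['pipe', 'w']], $pipes);

$events = [];
while (($line = fgets($pipes[1])) !== false) {
    $event = 'data: ' . rtrim($line) . "\n\n";
    $events[] = $event;
    echo $event;
    flush();                    // push the event out right away
    if (connection_aborted()) { // the page was closed
        proc_terminate($proc);
        break;
    }
}
fclose($pipes[1]);
proc_close($proc);
```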
Say, in PHP, I have bunch of unit tests.
Say they require some service to be running.
Ideally I want my bootstrap script to:
start up this service
wait for the service to attain a desired state
hand control to the unit-testing framework of choice to run the tests
clean up when the tests end, gracefully terminating the service as appropriate
set up some way of capturing all output from the service along the way for logging and debugging
I'm currently using proc_open() to initialize my service, capturing the output using the pipe mechanism, checking that the service is getting to the state I need by examining the output.
However, at this point I'm stumped: how can I capture the rest of the output (including STDERR) for the remainder of the script's duration, while still allowing my unit tests to run?
I can think of a few potentially long-winded solutions, but before investing the time in investigating them, I would like to know if anyone else has come up against this problem and what solutions they found, if any, without influencing the response.
Edit:
Here is a cutdown version of the class I am initializing in my bootstrap script (with new ServiceRunner), for reference:
<?php
namespace Tests;
class ServiceRunner
{
/**
* @var resource[]
*/
private $servicePipes;
/**
* @var resource
*/
private $serviceProc;
/**
* @var resource
*/
private $temp;
public function __construct()
{
// Open my log output buffer
$this->temp = fopen('php://temp', 'r+');
fputs(STDERR,"Launching Service.\n");
$this->serviceProc = proc_open('/path/to/service', [
0 => array("pipe", "r"),
1 => array("pipe", "w"),
2 => array("pipe", "w"),
], $this->servicePipes);
// Set the streams to non-blocking, so stream_select() works
stream_set_blocking($this->servicePipes[1], false);
stream_set_blocking($this->servicePipes[2], false);
// Set up array of pipes to select on
$readables = [$this->servicePipes[1], $this->servicePipes[2]];
while(false !== ($streams = stream_select($read = $readables, $w = [], $e = [], 1))) {
// Iterate over pipes that can be read from
foreach($read as $stream) {
// Fetch a line of input, and append to my output buffer
if($line = stream_get_line($stream, 8192, "\n")) {
fputs($this->temp, $line."\n");
}
// Break out of both loops if the service has attained the desired state
if(strstr($line, 'The Service is Listening' ) !== false) {
break 2;
}
// If the service has closed one of its output pipes, remove them from those we're selecting on
if($line === false && feof($stream)) {
$readables = array_diff($readables, [$stream]);
}
}
}
/* SOLUTION REQUIRED SOLUTION REQUIRED SOLUTION REQUIRED SOLUTION REQUIRED */
/* Set up the pipes to be redirected to $this->temp here */
register_shutdown_function([$this, 'shutDown']);
}
public function shutDown()
{
fputs(STDERR,"Closing...\n");
fclose($this->servicePipes[0]);
proc_terminate($this->serviceProc, SIGINT);
fclose($this->servicePipes[1]);
fclose($this->servicePipes[2]);
proc_close($this->serviceProc);
fputs(STDERR,"Closed service\n");
$logFile = fopen('log.txt', 'w');
rewind($this->temp);
stream_copy_to_stream($this->temp, $logFile);
fclose($this->temp);
fclose($logFile);
}
}
Suppose the service is implemented as service.sh shell script with the following contents:
#!/bin/bash -
for i in {1..4} ; do
printf 'Step %d\n' $i
printf 'Step Error %d\n' $i >&2
sleep 0.7
done
printf '%s\n' 'The service is listening'
for i in {1..4} ; do
printf 'Output %d\n' $i
printf 'Output Error %d\n' $i >&2
sleep 0.2
done
echo 'Done'
The script emulates startup process, prints the message indicating that the service is ready, and prints some output after startup.
Since you are not proceeding with the unit tests until the "service-ready marker" is read, I see no special reason to do this asynchronously. If you want to run some process (updating UI etc.) while waiting for the service, I would suggest using an extension featuring asynchronous functions (pthreads, ev, event etc.).
However, if there are only two things to be done asynchronously, then why not fork a process? The service can run in the parent process, and the unit tests can be launched in the child process:
<?php
$cmd = './service.sh';
$desc = [
1 => [ 'pipe', 'w' ],
2 => [ 'pipe', 'w' ],
];
$proc = proc_open($cmd, $desc, $pipes);
if (!is_resource($proc)) {
die("Failed to open process for command $cmd");
}
$service_ready_marker = 'The service is listening';
$got_service_ready_marker = false;
// Wait until service is ready
for (;;) {
$output_line = stream_get_line($pipes[1], PHP_INT_MAX, PHP_EOL);
echo "Read line: $output_line\n";
if ($output_line === false) {
break;
}
if ($output_line == $service_ready_marker) {
$got_service_ready_marker = true;
break;
}
if ($error_line = stream_get_line($pipes[2], PHP_INT_MAX, PHP_EOL)) {
$startup_errors []= $error_line;
}
}
if (!empty($startup_errors)) {
fprintf(STDERR, "Startup Errors: <<<\n%s\n>>>\n", implode(PHP_EOL, $startup_errors));
}
if ($got_service_ready_marker) {
echo "Got service ready marker\n";
$pid = pcntl_fork();
if ($pid == -1) {
fprintf(STDERR, "failed to fork a process\n");
fclose($pipes[1]);
fclose($pipes[2]);
proc_close($proc);
} elseif ($pid) {
// parent process
// capture the output from the service
$output = stream_get_contents($pipes[1]);
$errors = stream_get_contents($pipes[2]);
fclose($pipes[1]);
fclose($pipes[2]);
proc_close($proc);
// Use the captured output
if ($output) {
file_put_contents('/tmp/service.output', $output);
}
if ($errors) {
file_put_contents('/tmp/service.errors', $errors);
}
echo "Parent: waiting for child processes to finish...\n";
pcntl_wait($status);
echo "Parent: done\n";
} else {
// child process
// Cleanup
fclose($pipes[1]);
fclose($pipes[2]);
proc_close($proc);
// Run unit tests
echo "Child: running unit tests...\n";
usleep(5e6);
echo "Child: done\n";
}
}
Sample Output
Read line: Step 1
Read line: Step 2
Read line: Step 3
Read line: Step 4
Read line: The service is listening
Startup Errors: <<<
Step Error 1
Step Error 2
Step Error 3
Step Error 4
>>>
Got service ready marker
Child: running unit tests...
Parent: waiting for child processes to finish...
Child: done
Parent: done
You can use the pcntl_fork() function to fork the current process to do both tasks and wait for the tests to finish:
<?php
// [launch service here]
$pid = pcntl_fork();
if ($pid == -1) {
die('error');
} else if ($pid) {
// [read output here]
// then wait for the unit tests to end (see below)
pcntl_wait($status);
// [gracefully finishing service]
} else {
// [unit tests here]
}
?>
What I ended up doing, having reached the point where the service had been initialized correctly, was to redirect the pipes from the already-opened process into the standard input of one cat process per pipe, each also opened by proc_open() (helped by this answer).
This wasn't the whole story, as I got to this point and realised that the async process was hanging after a while due to the stream buffer filling up.
The key part that I needed (having set the streams to non-blocking previously) was to revert the streams to blocking mode, so that the buffer would drain into the receiving cat processes correctly.
To complete the code from my question:
// Iterate over the streams that are still open
foreach(array_reverse($readables) as $stream) {
// Revert the blocking mode
stream_set_blocking($stream, true);
$cmd = 'cat';
// Receive input from an output stream for the previous process,
// Send output into the internal unified output buffer
$pipes = [
0 => $stream,
1 => $this->temp,
2 => array("file", "/dev/null", 'w'),
];
// Launch the process
$this->cats[] = proc_open($cmd, $pipes, $outputPipes = []);
}
I am executing a C application from a web interface in PHP. The output of the C application is displayed in the browser only after its execution completes. I want the output to be displayed in the browser as soon as the C application's printf prints it.
I tried using flush(), ob_flush(), ob_end_flush() and setting headers in PHP, but it didn't work.
Then I added fflush(stdout) to my C application, and it immediately updates the output in the browser.
The problem is that I don't want to add anything to the C application; I want to achieve this in PHP. My C code and PHP script are given below.
hello.c
#include<stdio.h>
void main(void)
{
int i = 0;
for(i = 0; i < 5; i++)
{
printf("hello world\n");
//fflush(stdout);//it immediately updates the browser output if uncommented
sleep(1);
}
}
PHP
<?php
execute_prog('/var/www/html/test/./hello3');
function execute_prog($exe)
{
set_time_limit(1800);
$exe_command = escapeshellcmd($exe);
$descriptorspec = array(
0 => array("pipe", "r"), // stdin -> for execution
1 => array("pipe", "w"), // stdout -> for execution
2 => array("pipe", "w") // stderr
);
$process = proc_open($exe_command, $descriptorspec, $pipes);//creating child process
if (is_resource($process))
{
while(1)
{
$write = NULL;
$read = array($pipes[1]);
$err = NULL;
$except = NULL;
if (false === ($num_changed_streams = stream_select($read, $write, $except, 0)))
{
/* Error handling */
echo "Errors\n";
}
else if ($num_changed_streams > 0)
{
/* At least on one of the streams something interesting happened */
//echo "Data on ".$num_changed_streams." descriptor\n";
if($read)
{
echo "Data on child process STDOUT\n";
$s = fgets($pipes[1]);
print $s."</br>";
ob_flush();
flush();
}
else if($write)
{
echo "Data on child process STDIN\n";
}
else if($err)
{
echo "Data on child process STDERR\n";
}
$num_changed_streams = 0;
}
}
fclose($pipes[0]);
fclose($pipes[1]);
fclose($pipes[2]);
echo "exitcode: ".proc_close($process)."\n";
}
return $ret;
}
?>
Any assistance with this would be greatly appreciated.
This has to do with stdio stream buffering. When you run hello3 in a terminal you observe immediate results only because stdout is connected to a terminal, where it is line buffered by default.
The stdio library is clever enough to detect that, when run on a pipe, no terminal is connected to stdout, and switches it to fully buffered (for performance reasons). This is why adding fflush(stdout); updates your browser immediately.
If you want your browser to receive results immediately, then either call fflush(stdout); every time you want an update, or change the buffering to line buffered or unbuffered:
setvbuf(stdout, NULL, _IONBF, 0); // no buffering
setvbuf(stdout, NULL, _IOLBF, 0); // line buffered
setvbuf(stdout, NULL, _IOFBF, 0); // fully buffered
Edit:
If you really can't modify the C executable, you can inject a library that sets this option for you. Here is a minimal example:
// unbuf.c
#include <stdio.h>
void _nobuff_initialiser(void) {
setvbuf(stdout, NULL, _IONBF, 0); // no buffering
}
Compile with
cc -o unbuf.so -fPIC -shared -init __nobuff_initialiser unbuf.c # Linux
cc -o unbuf.dylib -dynamiclib -init __nobuff_initialiser unbuf.c # OS X
Run with environment variables set
LD_PRELOAD=<path to unbuf.so> ./hello3 # Linux
DYLD_INSERT_LIBRARIES=<path to unbuf.dylib> ./hello3 # OS X
Or from PHP:
putenv("LD_PRELOAD=<path to unbuf.so>"); # Linux
putenv("DYLD_INSERT_LIBRARIES=<path to unbuf.dylib>"); # OS X
You can use the unbuffer command in front of your program:
execute_prog('unbuffer /var/www/html/test/./hello3');
It will open a pseudo TTY and the libC will switch to line buffering instead of full buffering.
Detailed info available in man: http://linuxcommand.org/man_pages/unbuffer1.html
I want to read the STDOUT of, and write to the STDIN of, a C application from a web interface in PHP. As a test I have a C application, hello.c, that prints a string after a certain sleep; I execute it from PHP. I use stream_select() in PHP to detect when there is a change on the stdout of the C application. I can read the application's output, but on the PHP side the read descriptor keeps reporting a status change continuously, even after the C application's output is complete. My C code, PHP code, and browser output are given below.
hello.c
#include<stdio.h>
void main(void)
{
int i = 0;
for(i = 0; i < 5; i++)
{
printf("hello world\n");
sleep(1);
}
}
PHP
<?php
execute_prog('/var/www/html/test/./hello3');
function execute_prog($exe)
{
set_time_limit(1800);
$exe_command = escapeshellcmd($exe);
$descriptorspec = array(
0 => array("pipe", "r"), // stdin -> for execution
1 => array("pipe", "w"), // stdout -> for execution
2 => array("pipe", "w") // stderr
);
$process = proc_open($exe_command, $descriptorspec, $pipes);//creating child process
if (is_resource($process))
{
while(1)
{
$write = NULL;
$read = array($pipes[1]);
$err = NULL;
$except = NULL;
if (false === ($num_changed_streams = stream_select($read, $write, $except, 0)))
{
/* Error handling */
echo "Errors\n";
}
else if ($num_changed_streams > 0)
{
/* At least on one of the streams something interesting happened */
//echo "Data on ".$num_changed_streams." descriptor\n";
if($read)
{
echo "Data on child process STDOUT\n";
$s = fgets($pipes[1]);
print $s."</br>";
flush();
}
else if($write)
{
echo "Data on child process STDIN\n";
}
else if($err)
{
echo "Data on child process STDERR\n";
}
$num_changed_streams = 0;
}
}
fclose($pipes[0]);
fclose($pipes[1]);
fclose($pipes[2]);
echo "exitcode: ".proc_close($process)."\n";
}
return $ret;
}
?>
Result Of PS Command
ps -eaf | grep hello
apache 24157 22641 0 10:01 ? 00:00:00 [hello3] <defunct>
Browser Output
Data on child process STDOUT hello world
Data on child process STDOUT hello world
Data on child process STDOUT hello world
Data on child process STDOUT hello world
Data on child process STDOUT hello world
Data on child process STDOUT
Data on child process STDOUT
Data on child process STDOUT
Any idea why I am continuously getting "Data on child process STDOUT"? While this text is continuously being displayed, the ps result remains as shown above.
Please guide.
EDIT
I added a tweak: I break the while loop when the result of fgets is empty
$s = fgets($pipes[1]);
if(empty($s))
{
echo "Empty";
break;
}
now the continuous "Data on child process STDOUT" is no longer displayed. As I said, it's a tweak; I still don't know why the read descriptor kept reporting ready even after the C application stopped sending data. Anyone, please?
You miss the moment when the hello program stops its output and terminates (its process becomes "defunct"). Once the child's stdout pipe reaches EOF, stream_select() will always report it as readable, which is why your loop spins. Generally, fgets( $some_file ) returns a string if there is output and returns false if $some_file has reached its end. Thus, if you add this check:
$s = fgets($pipes[1]);
if( $s === false ) {
// Hello program has finished.
echo 'Finished', PHP_EOL;
// Close all descriptors and return...
}
you should terminate successfully.
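Put into a complete loop, the check might look like this (sketch; a short shell command stands in for the hello program):

```php
<?php
// Sketch: read the child's stdout until fgets() returns false (EOF),
// then reap the process. 'sh -c ...' stands in for ./hello3.
$descriptorspec = [1 => ['pipe', 'w']];
$process = proc_open("sh -c 'echo hello; echo world'", $descriptorspec, $pipes);

$lines = [];
while (($s = fgets($pipes[1])) !== false) { // a string while output lasts,
    $lines[] = rtrim($s);                   // false once stdout hits EOF
}
fclose($pipes[1]);
$exitcode = proc_close($process); // reaps the child: no <defunct> entry remains
```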
I am trying to make a website where people can compile and run their code online, so we need an interactive way for users to send instructions.
Actually, what first comes to mind is exec() or system(), but when users want to input something, those won't work. So we have to use proc_open().
For instance, the following code
int main()
{
int a;
printf("please input a integer\n");
scanf("%d", &a);
printf("Hello World %d!\n", a);
return 0;
}
When I used proc_open(), like this
$descriptorspec = array(
0 => array( 'pipe' , 'r' ) ,
1 => array( 'pipe' , 'w' ) ,
2 => array( 'file' , 'errors' , 'w' )
);
$run_string = "cd ".$addr_base."; ./a.out 2>&1";
$process = proc_open($run_string, $descriptorspec, $pipes);
if (is_resource($process)) {
//echo fgets($pipes[1])."<br/>";
fwrite($pipes[0], '12');
fclose($pipes[0]);
while (!feof($pipes[1]))
echo fgets($pipes[1])."<br/>";
fclose($pipes[1]);
proc_close($process);
}
When running the C code, I want to get the first chunk of STDOUT, then input the number, then get the second chunk. But if I uncomment the commented line, the page blocks.
Is there a way to solve the problem? How can I read from the pipe while not all data has been put there? Or is there a better way to write this kind of interactive program?
It is more of a C (or glibc) problem. You'll have to use fflush(stdout).
Why? And what's the difference between running a.out in a terminal and calling it from PHP?
Answer: if you run a.out in a terminal (stdout being a tty), glibc uses line-buffered IO for stdout. But if you run it from another program (PHP in this case) and its stdout is a pipe (or anything else that is not a tty), glibc switches to full internal buffering. That's why the first fgets() blocks if uncommented. For more info check this article.
Good news: You can control this buffering using the stdbuf command. Change $run_string to:
$run_string = "cd ".$addr_base.";stdbuf -o0 ./a.out 2>&1";
Here comes a working example. It works even if the C code doesn't care about fflush(), as it uses the stdbuf command:
Starting subprocess
$cmd = 'stdbuf -o0 ./a.out 2>&1';
// what pipes should be used for STDIN, STDOUT and STDERR of the child
$descriptorspec = array (
0 => array("pipe", "r"),
1 => array("pipe", "w"),
2 => array("pipe", "w")
);
// open the child
$proc = proc_open (
$cmd, $descriptorspec, $pipes, getcwd()
);
Set all streams to non-blocking mode
// set all streams to non-blocking mode
stream_set_blocking($pipes[1], 0);
stream_set_blocking($pipes[2], 0);
stream_set_blocking(STDIN, 0);
// check if opening has succeed
if($proc === FALSE){
throw new Exception('Cannot execute child process');
}
Get the child PID; we need it later
// get PID via get_status call
$status = proc_get_status($proc);
if($status === FALSE) {
throw new Exception (sprintf(
'Failed to obtain status information '
));
}
$pid = $status['pid'];
Poll until the child terminates
// now, poll for childs termination
while(true) {
// detect if the child has terminated - the php way
$status = proc_get_status($proc);
// check retval
if($status === FALSE) {
throw new Exception ("Failed to obtain status information for $pid");
}
if($status['running'] === FALSE) {
$exitcode = $status['exitcode'];
$pid = -1;
echo "child exited with code: $exitcode\n";
exit($exitcode);
}
// read from childs stdout and stderr
// avoid *forever* blocking through using a time out (50000usec)
foreach(array(1, 2) as $desc) {
// check stdout for data
$read = array($pipes[$desc]);
$write = NULL;
$except = NULL;
$tv = 0;
$utv = 50000;
$n = stream_select($read, $write, $except, $tv, $utv);
if($n > 0) {
do {
$data = fread($pipes[$desc], 8092);
fwrite(STDOUT, $data);
} while (strlen($data) > 0);
}
}
$read = array(STDIN);
$n = stream_select($read, $write, $except, $tv, $utv);
if($n > 0) {
$input = fread(STDIN, 8092);
// input to the program
fwrite($pipes[0], $input);
}
}
The answer is surprisingly simple: leave $descriptorspec empty. If you do so, the child process will simply use the STDIN/STDOUT/STDERR streams of the parent.
➜ ~ ✗ cat stdout_is_atty.php
<?php
var_dump(stream_isatty(STDOUT));
➜ ~ ✗ php -r 'proc_close(proc_open("php stdout_is_atty.php", [], $pipes));'
/home/chx/stdout_is_atty.php:3:
bool(true)
➜ ~ ✗ php -r 'passthru("php stdout_is_atty.php");'
/home/chx/stdout_is_atty.php:3:
bool(false)
➜ ~ ✗ php -r 'exec("php stdout_is_atty.php", $output); print_r($output);'
Array
(
[0] => /home/chx/stdout_is_atty.php:3:
[1] => bool(false)
)
Credit goes to John Stevenson, one of the maintainers of composer.
If you are interested why this happens: PHP does nothing for empty descriptors and uses the C / OS defaults which just happens to be the desired one.
So the C code responsible for proc_open always merely iterates the descriptors. If there are no descriptors specified then all that code does nothing. After that, the actual execution of the child -- at least on POSIX systems -- happens via calling fork(2) which makes the child inherit file descriptors (see this answer). And then the child calls one of execvp(3) / execle(3) / execl(3) . And as the manual says
The exec() family of functions replaces the current process image with a new process image.
Perhaps it's more understandable to say the memory region containing the parent is replaced by the new program. This is accessible as /proc/$pid/mem, see this answer for more. However, the system keeps a tally of the opened files outside of this region. You can see them in /proc/$pid/fd/ -- and STDIN/STDOUT/STDERR are just shorthands for file descriptors 0/1/2. So when the child replaces the memory, the file descriptors just stay in place.
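A quick way to observe that inheritance from PHP (sketch; note the child's echo lands directly on our own stdout, since it writes to the inherited fd 1 rather than to a pipe PHP controls):

```php
<?php
// Sketch: with an empty $descriptorspec the child shares the parent's
// stdin/stdout/stderr, so there are no $pipes to read; its output
// goes straight to our own stdout.
$proc = proc_open('echo inherited', [], $pipes);
$exitcode = proc_close($proc); // waits for the child to finish

// $pipes stays empty: nothing was redirected.
```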