Reading from STDIN pipe when using proc_open - php

I am trying to build a website where people can compile and run their code online, so we need an interactive way for users to send input to their programs.
What first comes to mind is exec() or system(), but those won't work when the user's program needs input, so we have to use proc_open().
For instance, take the following C program:
#include <stdio.h>

int main()
{
    int a;
    printf("please input an integer\n");
    scanf("%d", &a);
    printf("Hello World %d!\n", a);
    return 0;
}
I used proc_open() like this:
$descriptorspec = array(
    0 => array('pipe', 'r'),
    1 => array('pipe', 'w'),
    2 => array('file', 'errors', 'w')
);
$run_string = "cd ".$addr_base."; ./a.out 2>&1";
$process = proc_open($run_string, $descriptorspec, $pipes);
if (is_resource($process)) {
    //echo fgets($pipes[1])."<br/>";
    fwrite($pipes[0], '12');
    fclose($pipes[0]);
    while (!feof($pipes[1]))
        echo fgets($pipes[1])."<br/>";
    fclose($pipes[1]);
    proc_close($process);
}
When running the C code, I want to read the first chunk of STDOUT, write the number to STDIN, and then read the second chunk of STDOUT. But if I uncomment the commented line, the page blocks forever.
Is there a way to solve this? How can I read from the pipe before all of the data has been written to it? Or is there a better way to write this kind of interactive program?

It is more a C or a glibc problem. You'll have to use fflush(stdout).
Why? And what's the difference between running a.out in a terminal and calling it from PHP?
Answer: If you run a.out in a terminal (stdin being a tty), glibc uses line-buffered IO for stdout. But if you run it from another program (PHP in this case) and its stdin is a pipe (or anything else that is not a tty), glibc uses full internal IO buffering instead. That's why the first fgets() blocks if uncommented. For more info check this article.
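The distinction can be checked from C itself: glibc consults isatty(3) when it first writes to stdout. A minimal sketch (the helper names are made up for illustration), showing both the check and the portable in-program fix of flushing after every prompt:

```c
/* Sketch, assuming a glibc-like stdio: stdout is line buffered on a
 * tty, fully buffered (typically ~4 KiB chunks) on a pipe or file. */
#include <stdio.h>
#include <unistd.h>

/* 1 if stdout is a tty (stdio will line-buffer it), 0 otherwise
 * (stdio will fully buffer it, so prompts sit in the buffer). */
int stdout_line_buffered(void)
{
    return isatty(fileno(stdout));
}

/* The fix inside the child program: flush after each prompt so the
 * reader on the other end of the pipe sees it immediately. */
void prompt(const char *msg)
{
    fputs(msg, stdout);
    fflush(stdout); /* push the data into the pipe right now */
}
```

With this, the PHP side can fgets() the prompt before writing the answer, no matter how stdout is connected.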
Good news: You can control this buffering using the stdbuf command. Change $run_string to:
$run_string = "cd ".$addr_base."; stdbuf -o0 ./a.out 2>&1";
Here comes a working example. It works even if the C code doesn't care about fflush(), as it is using the stdbuf command:
Starting the subprocess:
$cmd = 'stdbuf -o0 ./a.out 2>&1';
// what pipes should be used for STDIN, STDOUT and STDERR of the child
$descriptorspec = array(
    0 => array("pipe", "r"),
    1 => array("pipe", "w"),
    2 => array("pipe", "w")
);
// open the child
$proc = proc_open($cmd, $descriptorspec, $pipes, getcwd());
Check that the open succeeded, then set all streams to non-blocking mode:
// check if opening succeeded
if ($proc === FALSE) {
    throw new Exception('Cannot execute child process');
}
// set all streams to non-blocking mode
stream_set_blocking($pipes[1], 0);
stream_set_blocking($pipes[2], 0);
stream_set_blocking(STDIN, 0);
Get the child's PID; we need it later:
// get PID via proc_get_status call
$status = proc_get_status($proc);
if ($status === FALSE) {
    throw new Exception('Failed to obtain status information');
}
$pid = $status['pid'];
Poll until the child terminates:
// now, poll for the child's termination
while (true) {
    // detect if the child has terminated - the PHP way
    $status = proc_get_status($proc);
    // check retval
    if ($status === FALSE) {
        throw new Exception("Failed to obtain status information for $pid");
    }
    if ($status['running'] === FALSE) {
        $exitcode = $status['exitcode'];
        $pid = -1;
        echo "child exited with code: $exitcode\n";
        exit($exitcode);
    }
    // read from the child's stdout and stderr
    // avoid blocking *forever* by using a timeout (50000 usec)
    foreach (array(1, 2) as $desc) {
        // check the pipe for data
        $read = array($pipes[$desc]);
        $write = NULL;
        $except = NULL;
        $tv = 0;
        $utv = 50000;
        $n = stream_select($read, $write, $except, $tv, $utv);
        if ($n > 0) {
            do {
                $data = fread($pipes[$desc], 8192);
                fwrite(STDOUT, $data);
            } while (strlen($data) > 0);
        }
    }
    // forward our own STDIN to the child
    $read = array(STDIN);
    $n = stream_select($read, $write, $except, $tv, $utv);
    if ($n > 0) {
        $input = fread(STDIN, 8192);
        // pass input to the program
        fwrite($pipes[0], $input);
    }
}
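The stream_select() calls above map directly onto POSIX select(2). A minimal C sketch of the same 50000-usec poll-then-read step (the helper name is made up; assumes a POSIX system):

```c
/* Sketch: select with a timeout, then a single read -- the same
 * pattern the PHP loop uses to avoid blocking forever on a silent
 * child process. */
#include <stdio.h>
#include <sys/select.h>
#include <unistd.h>

/* Wait up to 50000 usec for fd to become readable, then read once.
 * Returns the number of bytes read, or 0 on timeout/error. */
ssize_t poll_read(int fd, char *buf, size_t cap)
{
    fd_set rfds;
    struct timeval tv = { 0, 50000 }; /* tv_sec = 0, tv_usec = 50000 */

    FD_ZERO(&rfds);
    FD_SET(fd, &rfds);
    if (select(fd + 1, &rfds, NULL, NULL, &tv) <= 0)
        return 0; /* timed out or failed: no data this round */
    return read(fd, buf, cap); /* fd is readable, so this won't block */
}
```

If nothing arrives within the timeout the loop simply comes around again, which is what keeps the PHP version responsive to both the child and its own STDIN.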

The answer is surprisingly simple: leave $descriptorspec empty. If you do so, the child process will simply use the STDIN/STDOUT/STDERR streams of the parent.
➜ ~ ✗ cat stdout_is_atty.php
<?php
var_dump(stream_isatty(STDOUT));
➜ ~ ✗ php -r 'proc_close(proc_open("php stdout_is_atty.php", [], $pipes));'
/home/chx/stdout_is_atty.php:3:
bool(true)
➜ ~ ✗ php -r 'passthru("php stdout_is_atty.php");'
/home/chx/stdout_is_atty.php:3:
bool(false)
➜ ~ ✗ php -r 'exec("php stdout_is_atty.php", $output); print_r($output);'
Array
(
[0] => /home/chx/stdout_is_atty.php:3:
[1] => bool(false)
)
Credit goes to John Stevenson, one of the maintainers of composer.
If you are interested why this happens: PHP does nothing for empty descriptors and uses the C / OS defaults, which happen to be the desired ones.
The C code responsible for proc_open merely iterates over the descriptors, so if none are specified it does nothing. After that, the actual execution of the child -- at least on POSIX systems -- happens via fork(2), which makes the child inherit the parent's file descriptors (see this answer). The child then calls one of execvp(3) / execle(3) / execl(3), and as the manual says:
The exec() family of functions replaces the current process image with a new process image.
Perhaps it's more understandable to say that the memory region containing the parent's program is replaced by the new program. This region is accessible as /proc/$pid/mem; see this answer for more. However, the system keeps its tally of open files outside of this region. You can see them in /proc/$pid/fd/ -- STDIN/STDOUT/STDERR are just shorthands for file descriptors 0/1/2. So when the child replaces the memory, the file descriptors stay in place.
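The fork-then-exec inheritance described above fits in a few lines of C (assumption: POSIX; the helper name is made up). The child gets a copy of the parent's descriptor table, and exec replaces only the memory image:

```c
/* Sketch: run a command with the parent's own stdin/stdout/stderr --
 * the C-level equivalent of proc_open() with an empty $descriptorspec. */
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

/* Returns the child's exit status, or -1 on failure. */
int run_inheriting_stdio(const char *cmd)
{
    pid_t pid = fork();
    if (pid < 0)
        return -1;
    if (pid == 0) {
        /* child: fds 0/1/2 were duplicated by fork(), and exec()
         * leaves the descriptor table untouched */
        execl("/bin/sh", "sh", "-c", cmd, (char *)NULL);
        _exit(127); /* only reached if exec failed */
    }
    int status;
    if (waitpid(pid, &status, 0) < 0)
        return -1;
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

Nothing in the child has to know, or care, where those descriptors originally pointed -- a tty, a pipe, or a file all behave the same way.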

Related

Redirect pipe acquired by proc_open() to file for remainder of process duration

Say, in PHP, I have bunch of unit tests.
Say they require some service to be running.
Ideally I want my bootstrap script to:
start up this service
wait for the service to attain a desired state
hand control to the unit-testing framework of choice to run the tests
clean up when the tests end, gracefully terminating the service as appropriate
set up some way of capturing all output from the service along the way for logging and debugging
I'm currently using proc_open() to initialize my service, capturing the output using the pipe mechanism, checking that the service is getting to the state I need by examining the output.
However at this point I'm stumped - how can I capture the rest of the output (including STDERR) for the rest of the duration of the script, while still allowing my unit tests to run?
I can think of a few potentially long-winded solutions, but before investing the time in investigating them, I would like to know if anyone else has come up against this problem and what solutions they found, if any, without influencing the response.
Edit:
Here is a cut-down version of the class I am initializing in my bootstrap script (with new ServiceRunner), for reference:
<?php
namespace Tests;
class ServiceRunner
{
    /**
     * @var resource[]
     */
    private $servicePipes;

    /**
     * @var resource
     */
    private $serviceProc;

    /**
     * @var resource
     */
    private $temp;
    public function __construct()
    {
        // Open my log output buffer
        $this->temp = fopen('php://temp', 'r+');
        fputs(STDERR, "Launching Service.\n");
        $this->serviceProc = proc_open('/path/to/service', [
            0 => array("pipe", "r"),
            1 => array("pipe", "w"),
            2 => array("pipe", "w"),
        ], $this->servicePipes);
        // Set the streams to non-blocking, so stream_select() works
        stream_set_blocking($this->servicePipes[1], false);
        stream_set_blocking($this->servicePipes[2], false);
        // Set up array of pipes to select on
        $readables = [$this->servicePipes[1], $this->servicePipes[2]];
        while (false !== ($streams = stream_select($read = $readables, $w = [], $e = [], 1))) {
            // Iterate over pipes that can be read from
            foreach ($read as $stream) {
                // Fetch a line of input, and append to my output buffer
                if ($line = stream_get_line($stream, 8192, "\n")) {
                    fputs($this->temp, $line."\n");
                }
                // Break out of both loops if the service has attained the desired state
                if (strstr($line, 'The Service is Listening') !== false) {
                    break 2;
                }
                // If the service has closed one of its output pipes, remove it from those we're selecting on
                if ($line === false && feof($stream)) {
                    $readables = array_diff($readables, [$stream]);
                }
            }
        }
        /* SOLUTION REQUIRED SOLUTION REQUIRED SOLUTION REQUIRED SOLUTION REQUIRED */
        /* Set up the pipes to be redirected to $this->temp here */
        register_shutdown_function([$this, 'shutDown']);
    }
    public function shutDown()
    {
        fputs(STDERR, "Closing...\n");
        fclose($this->servicePipes[0]);
        proc_terminate($this->serviceProc, SIGINT);
        fclose($this->servicePipes[1]);
        fclose($this->servicePipes[2]);
        proc_close($this->serviceProc);
        fputs(STDERR, "Closed service\n");
        $logFile = fopen('log.txt', 'w');
        rewind($this->temp);
        stream_copy_to_stream($this->temp, $logFile);
        fclose($this->temp);
        fclose($logFile);
    }
}
Suppose the service is implemented as a service.sh shell script with the following contents:
#!/bin/bash -
for i in {1..4} ; do
    printf 'Step %d\n' $i
    printf 'Step Error %d\n' $i >&2
    sleep 0.7
done
printf '%s\n' 'The service is listening'
for i in {1..4} ; do
    printf 'Output %d\n' $i
    printf 'Output Error %d\n' $i >&2
    sleep 0.2
done
echo 'Done'
The script emulates startup process, prints the message indicating that the service is ready, and prints some output after startup.
Since you are not proceeding with the unit tests until the service-ready marker is read, I see no special reason to do this asynchronously. If you want to run something else (updating a UI etc.) while waiting for the service, I would suggest using an extension featuring asynchronous functions (pthreads, ev, event etc.).
However, if there are only two things to be done asynchronously, then why not fork a process? The service can run in the parent process, and the unit tests can be launched in the child process:
<?php
$cmd = './service.sh';
$desc = [
    1 => ['pipe', 'w'],
    2 => ['pipe', 'w'],
];
$proc = proc_open($cmd, $desc, $pipes);
if (!is_resource($proc)) {
    die("Failed to open process for command $cmd");
}
$service_ready_marker = 'The service is listening';
$got_service_ready_marker = false;
// Wait until the service is ready
for (;;) {
    $output_line = stream_get_line($pipes[1], PHP_INT_MAX, PHP_EOL);
    echo "Read line: $output_line\n";
    if ($output_line === false) {
        break;
    }
    if ($output_line == $service_ready_marker) {
        $got_service_ready_marker = true;
        break;
    }
    if ($error_line = stream_get_line($pipes[2], PHP_INT_MAX, PHP_EOL)) {
        $startup_errors[] = $error_line;
    }
}
if (!empty($startup_errors)) {
    fprintf(STDERR, "Startup Errors: <<<\n%s\n>>>\n", implode(PHP_EOL, $startup_errors));
}
if ($got_service_ready_marker) {
    echo "Got service ready marker\n";
    $pid = pcntl_fork();
    if ($pid == -1) {
        fprintf(STDERR, "failed to fork a process\n");
        fclose($pipes[1]);
        fclose($pipes[2]);
        proc_close($proc);
    } elseif ($pid) {
        // parent process
        // capture the output from the service
        $output = stream_get_contents($pipes[1]);
        $errors = stream_get_contents($pipes[2]);
        fclose($pipes[1]);
        fclose($pipes[2]);
        proc_close($proc);
        // Use the captured output
        if ($output) {
            file_put_contents('/tmp/service.output', $output);
        }
        if ($errors) {
            file_put_contents('/tmp/service.errors', $errors);
        }
        echo "Parent: waiting for child processes to finish...\n";
        pcntl_wait($status);
        echo "Parent: done\n";
    } else {
        // child process
        // Cleanup
        fclose($pipes[1]);
        fclose($pipes[2]);
        proc_close($proc);
        // Run unit tests
        echo "Child: running unit tests...\n";
        usleep(5e6);
        echo "Child: done\n";
    }
}
Sample Output
Read line: Step 1
Read line: Step 2
Read line: Step 3
Read line: Step 4
Read line: The service is listening
Startup Errors: <<<
Step Error 1
Step Error 2
Step Error 3
Step Error 4
>>>
Got service ready marker
Child: running unit tests...
Parent: waiting for child processes to finish...
Child: done
Parent: done
You can use the pcntl_fork() command to fork the current process to do both tasks and wait for the tests to finish:
<?php
// [launch service here]
$pid = pcntl_fork();
if ($pid == -1) {
    die('error');
} else if ($pid) {
    // [read output here]
    // then wait for the unit tests to end (see below)
    pcntl_wait($status);
    // [gracefully finish the service]
} else {
    // [unit tests here]
}
?>
What I ended up doing, having reached the point where the service had been initialized correctly, was to redirect the pipes from the already-opened process into the standard input of one cat process per pipe, each also opened by proc_open() (helped by this answer).
This wasn't the whole story, as I got to this point and realised that the async process was hanging after a while due to the stream buffer filling up.
The key part that I needed (having set the streams to non-blocking previously) was to revert the streams to blocking mode, so that the buffer would drain into the receiving cat processes correctly.
To complete the code from my question:
// Iterate over the streams that are still open
foreach (array_reverse($readables) as $stream) {
    // Revert the blocking mode
    stream_set_blocking($stream, true);
    $cmd = 'cat';
    // Receive input from an output stream of the previous process,
    // send output into the internal unified output buffer
    $pipes = [
        0 => $stream,
        1 => $this->temp,
        2 => array("file", "/dev/null", 'w'),
    ];
    // Launch the process
    $this->cats[] = proc_open($cmd, $pipes, $outputPipes = []);
}

Why does PHP hang after writing 4096 bytes to a process started with proc_open?

For anyone wondering, after leaving it all for a couple hours it now works perfectly.
I'm trying to pass a video file to VLC using PHP as a proof of concept for an upcoming project proposal for someone.
I've managed to show it works by creating a file < 4KB (Gray for 10 seconds) and testing my script but I'm curious as to the reason why this is happening in the first place.
Here's an example script to see what I mean:
$filepath = 'Path/to/your/video';
$vlcpath = 'Path/to/your/VLC executable';
$descriptorspec = array(
    0 => array("pipe", "r"), // stdin
    1 => array("pipe", "w"), // stdout
    2 => array("pipe", "w")  // stderr
);
$vlc = proc_open($vlcpath . ' -', $descriptorspec, $pipes, null, null, ['bypass_shell' => true]);
$file = fopen($filepath, 'r');
stream_copy_to_stream($file, $pipes[0]);
fclose($file);
proc_close($vlc);
I'm on Windows 10 and using PHP 5.5.31. I've seen a few bug reports on the PHP site about this kind of thing but they suggest the latest version has fixed it. I don't quite understand the concepts of blocking a stream but I've already tried PHP v7.0.3 to no avail.
I'm running this script using the command line: php file.php
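A likely mechanism behind the hang (assumption: the classic pipe-buffer deadlock): stream_copy_to_stream() blocks once the OS buffer for the child's stdin pipe fills up, and since nothing is draining the child's stdout either, both sides wait forever. The buffer is finite and small; a C sketch (made-up helper name, POSIX assumed) that measures it with a non-blocking write:

```c
/* Sketch: fill a fresh pipe with non-blocking writes and return how
 * many bytes fit before EAGAIN -- the point at which a blocking
 * writer, like the fwrite/stream_copy_to_stream above, would hang. */
#include <fcntl.h>
#include <unistd.h>

long pipe_capacity(void)
{
    int fds[2];
    char chunk[4096] = {0};
    long total = 0;

    if (pipe(fds) != 0)
        return -1;
    fcntl(fds[1], F_SETFL, O_NONBLOCK);
    for (;;) {
        ssize_t n = write(fds[1], chunk, sizeof chunk);
        if (n <= 0)
            break; /* buffer full (EAGAIN): a blocking writer stops here */
        total += n;
    }
    close(fds[0]);
    close(fds[1]);
    return total;
}
```

Exactly where the limit sits varies by platform (4 KiB was the historical size, 64 KiB on modern Linux), but once it is reached the only way forward is for the reader to consume data.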
I ran into the exact same issue trying to do WAV to MP3 conversion using LAME on Windows and was unable to find a workable solution.
I tried dozens of things, including blocking/non-blocking writes, writing small (< 1k) chunks of data, and sleeping between attempts, but it was never able to write all the data. The most I could ever write before it failed was around 40kb; failure meaning fwrite would always return 0 and never write more data to the stream, no matter how long I waited, regardless of the sizes of the chunks written before. I even tried waiting seconds between writes, and they would always succeed up to about 30-40kb and never write more.
Ultimately I gave up and luckily LAME could read input from a file instead of STDIN, so I just opted to write the data to a temp file, call LAME, and remove the temp file.
Here's the relevant code:
// file descriptors for reading and writing to the LAME process
$descriptors = array(
    0 => array('pipe', 'r'), // stdin
    1 => array('pipe', 'w'), // stdout
    2 => array('pipe', 'a'), // stderr
);
if (strtoupper(substr(PHP_OS, 0, 3)) === 'WIN') {
    // workaround for Windows conversion
    // writing to STDIN seems to hang indefinitely after writing approximately 0xC400 bytes
    $wavinput = tempnam(sys_get_temp_dir(), 'wav');
    if (!$wavinput) {
        throw new Exception('Failed to create temporary file for WAV to MP3 conversion');
    }
    file_put_contents($wavinput, $data);
    $size = 0;
} else {
    $wavinput = '-'; // stdin
}
// Mono, variable bit rate, 32 kHz sampling rate, read WAV from stdin, write MP3 to stdout
$cmd = sprintf("%s -m m -v -b 32 %s -", self::$lame_binary_path, $wavinput);
$proc = proc_open($cmd, $descriptors, $pipes);
if (!is_resource($proc)) {
    throw new Exception('Failed to open process for MP3 encoding');
}
stream_set_blocking($pipes[0], 0); // set stdin to be non-blocking
for ($written = 0; $written < $size; $written += $len) {
    // write to stdin until all WAV data is written
    $len = fwrite($pipes[0], substr($data, $written, 0x20000));
    if ($len === false) {
        // fwrite failed, should not happen
        break;
    } else if ($len === 0) {
        // fwrite wrote no data, make sure process is still alive, otherwise wait for it to process
        $status = proc_get_status($proc);
        if ($status['running'] === false) break;
        usleep(25000);
    } else if ($written < $size) {
        // couldn't write all data, small pause and try again
        usleep(10000);
    }
}
fclose($pipes[0]);
$data = stream_get_contents($pipes[1]);
$err = trim(stream_get_contents($pipes[2]));
fclose($pipes[1]);
fclose($pipes[2]);
$return = proc_close($proc);
if ($wavinput != '-') unlink($wavinput); // delete temp file on Windows
if ($return !== 0) {
    throw new Exception("Failed to convert WAV to MP3. Shell returned ({$return}): {$err}");
} else if ($written < $size) {
    throw new Exception('Failed to convert WAV to MP3. Failed to write all data to encoder');
}
return $data;

PHP: Need to close STDIN in order to read STDOUT?

I recently tried to communicate with a binary on my Ubuntu webserver [1] using the PHP function proc_open. I can establish a connection and define the pipes STDIN, STDOUT, and STDERR. Nice.
Now the binary I am talking to is an interactive computer algebra system - therefore I would like to keep both STDOUT and STDIN alive after the first command, so that I can still use the application a few lines later in an interactive manner (direct user input from a web front-end).
However, as it turns out, the PHP functions for reading the binary's STDOUT (either stream_get_contents or fgets) need a closed STDIN before they can return anything. Otherwise the program deadlocks.
This is a severe drawback, since I cannot reopen STDIN after closing it. So my question is: why does my script deadlock when I read from STDOUT while STDIN is still open?
Thanks
Jens
[1] proc_open returns false but does not write in error file - permissions issue?
My source:
$descriptorspec = array(
    0 => array("pipe", "r"),
    1 => array("pipe", "w"),
    2 => array("file", "./error.log", "a")
);
// define current working directory where files would be stored
$cwd = './';
// open Reduce
$process = proc_open('./reduce/reduce', $descriptorspec, $pipes, $cwd);
if (is_resource($process)) {
    // some valid Reduce commands
    fwrite($pipes[0], 'load excalc; operator x; x(0) := t; x(1) := r;');
    // if the following line is removed, the script deadlocks
    fclose($pipes[0]);
    echo "output: " . stream_get_contents($pipes[1]);
    // close pipes & close process
    fclose($pipes[1]);
    proc_close($process);
}
EDIT:
This code kind of works. "Kind of", because it uses usleep() calls to wait for the non-blocking STDOUT to be filled with data. How do I do that more elegantly?
@Elias: By polling the $status['running'] entry you can only determine whether the overall process is still running, not whether the process is busy or idling... That is why I have to include these usleep() calls.
define('TIMEOUT_IN_MS', '100');
define('TIMEOUT_STEPS', '100');

function getOutput($pipes) {
    $result = "";
    $stage = 0;
    $buffer = 0;
    do {
        $char = fgets($pipes[1], 4096);
        if ($char != null) {
            $buffer = 0;
            $stage = 1;
            $result .= $char;
        } else if ($stage == 1) {
            usleep(TIMEOUT_IN_MS / TIMEOUT_STEPS);
            $buffer++;
            if ($buffer > TIMEOUT_STEPS) {
                $stage++;
            }
        }
    } while ($stage < 2);
    return $result;
}
$descriptorspec = array(0 => array("pipe", "r"), 1 => array("pipe", "w"));
// define current working directory where files would be stored
$cwd = './';
// open Reduce
$process = proc_open('./reduce/reduce', $descriptorspec, $pipes, $cwd);
if (is_resource($process)) {
    stream_set_blocking($pipes[1], 0);
    echo "startup output:<br><pre>" . getOutput($pipes) . "</pre>";
    fwrite($pipes[0], 'on output; load excalc; operator x; x(0) := t; x(1) := r;' . PHP_EOL);
    echo "output 1:<br><pre>" . getOutput($pipes) . "</pre>";
    fwrite($pipes[0], 'coframe o(t) = sqrt(1-2m/r) * d t, o(r) = 1/sqrt(1-2m/r) * d r with metric g = -o(t)*o(t) + o(r)*o(r); displayframe;' . PHP_EOL);
    echo "output 2:<br><pre>" . getOutput($pipes) . "</pre>";
    // close pipes & close process
    fclose($pipes[0]);
    fclose($pipes[1]);
    proc_close($process);
}
This reminds me of a script I wrote a while back. While it might serve as inspiration to you (or others), it doesn't do what you need. What it does contain is an example of how you can read the output of a stream, without having to close any of the streams.
Perhaps you can apply the same logic to your situation:
$allInput = array(
    'load excalc; operator x; x(0) := t; x(1) := r;'
); // array with strings to pass to proc
if (is_resource($process))
{
    $output = '';
    $input = array_shift($allInput);
    do
    {
        usleep(200); // make sure the running process is ready
        fwrite(
            $pipes[0],
            $input.PHP_EOL, // add EOL
            strlen($input)+1
        );
        fflush($pipes[0]); // flush buffered data, write to stream
        usleep(200);
        $status = proc_get_status($process);
        while (($out = fread($pipes[1], 1024)) && !feof($pipes[1]))
            $output .= $out;
    } while ($status['running'] && ($input = array_shift($allInput)));
    // proc_close & fclose calls here
}
Now, seeing as I don't know what it is exactly you are trying to do, this code will need to be tweaked quite a bit. You may, for example, find yourself having to set the STDIN and STDOUT pipes as non-blocking.
It's a simple matter of adding this, right after calling proc_open, though:
stream_set_blocking($pipes[0], 0);
stream_set_blocking($pipes[1], 0);
Play around, have fun, and perhaps let me know if this answer was helpful in any way...
My guess would be that you're doing everything correctly, except that the binary is never notified that it has received all the input and can start to work. By closing STDIN, you're kicking off the work process, because it's clear that there will be no more input. If you're not closing STDIN, the binary is waiting for more input, while your side is waiting for its output.
You probably need to end your input with a newline, or whatever other protocol action is expected of you. Or perhaps closing STDIN is the action that's expected. Unless the process is specifically created to stay open and keep streaming input, you can't make it do so. If the process reads all input, processes it, returns output and then quits, there's no way you can make it stay alive to process more input later. If the process explicitly supports that behaviour, there should be a definition of how you need to delimit your input.
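The newline-delimited request/response pattern suggested above can be sketched in C (assumption: POSIX; /bin/cat stands in for the algebra system, since it answers each line as it arrives, and the helper name is made up). Note that the child's stdin stays open for the whole exchange:

```c
/* Sketch: one round trip with a line-oriented child without closing
 * its stdin.  The newline tells the child the request is complete,
 * so it can answer immediately and stay alive for the next request. */
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int roundtrip(const char *msg, char *reply, size_t cap)
{
    int in[2], out[2];
    if (pipe(in) != 0 || pipe(out) != 0)
        return -1;
    pid_t pid = fork();
    if (pid < 0)
        return -1;
    if (pid == 0) { /* child: wire the pipes to fds 0 and 1 */
        dup2(in[0], 0);
        dup2(out[1], 1);
        close(in[1]);
        close(out[0]);
        execl("/bin/cat", "cat", (char *)NULL);
        _exit(127);
    }
    close(in[0]);
    close(out[1]);
    dprintf(in[1], "%s\n", msg); /* newline-terminated request */
    ssize_t n = read(out[0], reply, cap - 1); /* child has already answered */
    reply[n > 0 ? n : 0] = '\0';
    close(in[1]); /* only now, when the conversation is over */
    close(out[0]);
    waitpid(pid, NULL, 0);
    return 0;
}
```

A child that only acts on EOF, by contrast, would leave that read() blocked forever, which is exactly the deadlock described in the question.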

C application that is executed by PHP does not show output on browser until its execution is completed

I am executing a C application from a web interface in PHP. The output of the C application is displayed in the browser only after its execution completes. I want the output of each printf to be displayed in the browser as soon as it is printed.
I tried using flush(), ob_flush(), ob_end_flush(), and setting headers in PHP, but it didn't work.
Then I added fflush(stdout) to my C application and it immediately updates the output in the browser.
The problem is that I don't want to add anything to the C application; I want to achieve this in PHP. My C code and PHP script are given below.
hello.c
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int i = 0;
    for (i = 0; i < 5; i++)
    {
        printf("hello world\n");
        //fflush(stdout); // it immediately updates the browser output if uncommented
        sleep(1);
    }
    return 0;
}
PHP
<?php
execute_prog('/var/www/html/test/./hello3');

function execute_prog($exe)
{
    set_time_limit(1800);
    $exe_command = escapeshellcmd($exe);
    $descriptorspec = array(
        0 => array("pipe", "r"), // stdin -> for execution
        1 => array("pipe", "w"), // stdout -> for execution
        2 => array("pipe", "w")  // stderr
    );
    $process = proc_open($exe_command, $descriptorspec, $pipes); // creating child process
    if (is_resource($process))
    {
        while (1)
        {
            $write = NULL;
            $read = array($pipes[1]);
            $err = NULL;
            $except = NULL;
            if (false === ($num_changed_streams = stream_select($read, $write, $except, 0)))
            {
                /* Error handling */
                echo "Errors\n";
            }
            else if ($num_changed_streams > 0)
            {
                /* At least on one of the streams something interesting happened */
                //echo "Data on ".$num_changed_streams." descriptor\n";
                if ($read)
                {
                    echo "Data on child process STDOUT\n";
                    $s = fgets($pipes[1]);
                    print $s."</br>";
                    ob_flush();
                    flush();
                }
                else if ($write)
                {
                    echo "Data on child process STDIN\n";
                }
                else if ($err)
                {
                    echo "Data on child process STDERR\n";
                }
                $num_changed_streams = 0;
            }
        }
        fclose($pipes[0]);
        fclose($pipes[1]);
        fclose($pipes[2]);
        echo "exitcode: ".proc_close($process)."\n";
    }
    return $ret;
}
?>
Any assistance with this would be greatly appreciated.
This has to do with stdio stream buffering. When you run hello3 on the terminal you only observe immediate results because stdout is connected to a terminal and in terminals it is by default line buffered.
The stdio library is clever enough to detect that when run on a pipe no terminal is connected to stdout and turns it to fully buffered (for performance reasons). This is why adding fflush(stdout); updates your browser immediately.
If you want your browser to receive results immediately, then either fflush(stdout) each time you want an update, or change the buffering to line-buffered or unbuffered:
setvbuf(stdout, NULL, _IONBF, 0); // no buffering
setvbuf(stdout, NULL, _IOLBF, 0); // line buffered
setvbuf(stdout, NULL, _IOFBF, 0); // fully buffered
Edit:
If you really can't modify the C executable, you can inject a library that sets this option for you. Here is a minimal example:
// unbuf.c
#include <stdio.h>
void _nobuff_initialiser(void) {
setvbuf(stdout, NULL, _IONBF, 0); // no buffering
}
Compile with
cc -o unbuf.so -fPIC -shared -init __nobuff_initialiser unbuf.c # Linux
cc -o unbuf.dylib -dynamiclib -init __nobuff_initialiser unbuf.c # OS X
Run with environment variables set
LD_PRELOAD=<path to unbuf.so> ./hello3 # Linux
DYLD_INSERT_LIBRARIES=<path to unbuf.dylib> ./hello3 # OS X
Or from PHP:
putenv("LD_PRELOAD=<path to unbuf.so>"); # Linux
putenv("DYLD_INSERT_LIBRARIES=<path to unbuf.dylib>"); # OS X
You can use the unbuffer command in front of your program:
execute_prog('unbuffer /var/www/html/test/./hello3');
It will open a pseudo TTY and the libC will switch to line buffering instead of full buffering.
Detailed info available in man: http://linuxcommand.org/man_pages/unbuffer1.html

How to capture and feed telnet using php and shell scripting?

This is what I want to accomplish using PHP (possibly using exec()?):
telnet to a whois registrar using a program called proxychains:
proxychains telnet whois.someregistrar 43
if it failed -> try step 1 again
feed a domain name to the connection:
somedomainname.com
capture the data returned by the registrar in PHP
I have no experience with shell scripting, so how do I detect the event
in which telnet has connected and is waiting for input, and how do I "feed" it?
Am I totally off here, or is this the right way to go about it?
EDIT: I see Python has a good way to handle this using expect
Here is a basic working example.
<?php
$whois = 'whois.isoc.org.il'; // server to connect to for whois
$data = 'drew.co.il'; // query to send to whois server
$errFile = '/tmp/error-output.txt'; // where stderr gets written to
$command = "proxychains telnet $whois 43"; // command to run for making query

// variables to pass to proc_open
$cwd = '/tmp';
$env = null;
$descriptorspec = array(
    0 => array("pipe", "r"), // stdin is a pipe that the child will read from
    1 => array("pipe", "w"), // stdout is a pipe that the child will write to
    2 => array("file", "/tmp/error-output.txt", "a") // stderr is a file to write to
);

// process output goes here
$output = '';
// store return value on failure
$return_value = null;

// open the process
$process = proc_open($command, $descriptorspec, $pipes, $cwd, $env);
if (is_resource($process)) {
    echo "Opened process...\n";
    $readBuf = '';
    // infinite loop until the process returns
    for (;;) {
        usleep(100000); // don't consume too many resources
        // TODO: implement a timeout
        $stat = proc_get_status($process); // get info on the process
        if ($stat['running']) { // still running
            $read = fread($pipes[1], 4096);
            if ($read) {
                $readBuf .= $read;
            }
            // read output to determine if telnet connected successfully
            if (strpos($readBuf, "Connected to $whois") !== false) {
                // write our query to the process and append a newline to initiate
                fwrite($pipes[0], $data . "\n");
                // read the output of the process
                $output = stream_get_contents($pipes[1]);
                break;
            }
        } else {
            // process finished before we could do anything
            $output = stream_get_contents($pipes[1]); // get output of command
            $return_value = $stat['exitcode']; // set exit code
            break;
        }
    }
    echo "Execution completed.\n";
    if ($return_value != null) {
        var_dump($return_value, file_get_contents($errFile));
    } else {
        var_dump($output);
    }
    // close pipes
    fclose($pipes[1]);
    fclose($pipes[0]);
    // close process
    proc_close($process);
} else {
    echo 'Failed to open process.';
}
This is meant to be run from the command line, but it doesn't have to be. I tried to comment it fairly well. Basically at the beginning you can set the whois server, and the domain to query.
The script uses proc_open to open a proxychains process that calls telnet. It checks whether the process was opened successfully and, if so, checks that its status is running. While it's running, it reads the output from telnet into a buffer and looks for the string telnet outputs to indicate we are connected.
Once it detects that telnet has connected, it writes the data to the process followed by a newline (\n), and then reads the data from the pipe where the telnet output goes. Once that happens, it breaks out of the loop and closes the process and handles.
You can view the output from proxychains from the file specified by $errFile. This contains the connection information as well as debug information in the event of a connection failure.
There is probably some additional error checking or process management that may need to be done to make it more robust, but if you put this into a function you should be able to easily call it and check the return value to see if the query was successful.
Hope that gives you a good starting point.
Also check out this answer of mine for another working example of proc_open, this example implements a timeout check so you can bail if the command hasn't completed in a certain amount of time: Creating a PHP Online Grading System on Linux: exec Behavior, Process IDs, and grep
