I'm attempting to pipe content from a Node process into a PHP script, but for some reason it hangs in PHP: the while loop in test-stdin.php never exits, so the final echo statement, echo('Total input from stdin: ' . $text), is never run.
run.js
const { spawn } = require('child_process');
const php = spawn('php', ['test-stdin.php'], {});
php.stdin.write('some input');
php.stdin.write("\n"); // As I understand, EOL is needed to stop processing
// Also tried the below, didn't work.
// ls.stdin.write(require('os').EOL);
php.stdout.on('data', (data) => {
console.log(`stdout: ${data}`);
});
php.on('close', (code) => {
console.log(`child process exited with code ${code}`);
});
test-stdin.php
$input_stream = fopen("php://stdin","r");
stream_set_blocking($input_stream, 0); // Also tried: stream_set_blocking(STDIN, 0);
$text="";
// It never exits this loop, for some reason?
while(($line = fgets($input_stream,4096)) !== false) {
var_dump('Read from fgets: ', $line); // This dumps successfully "some input"
$text .= $line;
}
// The below code is never reached, as it seems it's hanging in the loop above.
fclose($input_stream);
echo('Total input from stdin: ' . $text);
Any ideas why it's hanging inside that loop and never reaching the final echo? I tried setting the stream to non-blocking mode and it didn't seem to have any effect.
This only hangs for me if I set the PHP stdin stream to blocking instead of non-blocking as your example has, e.g. stream_set_blocking($input_stream, 1);.
With that set it hangs forever, as I would expect, since nothing on the Node.js side is ending the stdin stream.
Calling .end() on stdin from Node.js seems to be all that's missing, e.g.:
const { spawn } = require('child_process');
const php = spawn('php', ['test-stdin.php'], {});
php.stdin.write('some input');
php.stdin.write("\n"); // As I understand, EOL is needed to stop processing
// Also tried the below, didn't work.
// ls.stdin.write(require('os').EOL);
php.stdin.end();
php.stdout.on('data', (data) => {
console.log(`stdout: ${data}`);
});
php.on('close', (code) => {
console.log(`child process exited with code ${code}`);
});
Related
I am trying to write a native PHP CLI application which reads data (log data) from stdin and does some processing afterwards.
I got a first working version with a simple while loop:
while ($line = fgets(STDIN)) {
// Do my stuff here
}
When installing signal handling via
function signal_handler(int $signo, mixed $siginfo) {
// ...
}
pcntl_async_signals(TRUE);
pcntl_signal(SIGHUP, 'signal_handler');
this only works partly: the signals are processed only after each fgets().
I tried using stream_select() with NULL as the timeout and some other approaches, but this led to a massive system load :-)
Is there a best practice for combining stream_select() and fgets() on stdin so that the script reads data when it is ready, waits/pauses indefinitely otherwise, but still lets signals be processed?
You can use stream_set_blocking(STDIN, 0); to make STDIN non-blocking.
Example:
function signal_handler(int $signo, $siginfo) {
exit("signal caught\n");
}
stream_set_blocking(STDIN, 0);
pcntl_async_signals(TRUE);
pcntl_signal(SIGHUP, 'signal_handler');
echo "pid=" . posix_getpid() . "\n";
echo "Listening input...\n";
while (true) {
while ($line = fgets(STDIN)) {
echo "Input: '$line'\n";
}
}
Possible output:
pid=46834
Listening input...
Input: 'test'
signal caught
See stream_set_blocking()
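If the busy loop above burns too much CPU for your use case, one alternative (a sketch of my own, not tested against the original setup) is to block in stream_select() with a NULL timeout: the process sleeps until stdin has data, and with pcntl_async_signals() enabled an incoming signal interrupts the select() call so the handler runs immediately.
<?php
// Sketch: sleep in stream_select() until stdin has data or a signal arrives.
function signal_handler(int $signo, $siginfo) {
    exit("signal caught\n");
}
pcntl_async_signals(TRUE);
pcntl_signal(SIGHUP, 'signal_handler');

while (true) {
    $read = [STDIN];
    $write = $except = [];
    // NULL timeout: block indefinitely; a signal interrupts the call and it returns false
    if (@stream_select($read, $write, $except, null) === false) {
        continue; // interrupted by a signal; the async handler has already run
    }
    $line = fgets(STDIN);
    if ($line === false) {
        break; // EOF: the writer closed stdin
    }
    echo "Input: '" . rtrim($line, "\n") . "'\n";
}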
I have a small PHP-written CLI script which works as a front-end to the CLI-based calc program on Linux. The script gets mathematical expressions from the user and passes them to calc. When the user wants to quit, they simply enter stop; in that case the script sends exit to calc. The problem with this script is that it only displays output at the end, when the user sends stop, but I need the output of each mathematical expression as it is entered. The script is below:
<?php
define('BUFSIZ', 1024);
define('EXIT_CMD', 'stop');
function printOutput(&$fd) {
while (!feof($fd)) {
echo fgets($fd, BUFSIZ);
}
}
function &getDescriptorSpec()
{
$spec = array(
0 => array("pty"), // stdin
1 => array("pty"), // stdout
2 => array("pty") // stderr
);
return $spec;
}
function readInputLine(&$fd)
{
echo "Enter your input\n";
$line = trim(fgets($fd));
return $line;
}
function sendCmd(&$fd, $cmd)
{
fwrite($fd, "${cmd}\n");
}
function main() {
$spec = getDescriptorSpec();
$process = proc_open("calc", $spec, $pipes);
if (is_resource($process)) {
$procstdin = &$pipes[0];
$procstdout = &$pipes[1];
$fp = fopen('php://stdin', 'r');
while (TRUE) {
$line = readInputLine($fp);
if (0 === strcmp($line, EXIT_CMD)) {
break;
}
sendCmd($procstdin, $line);
}
sendCmd($procstdin, "exit");
fclose($procstdin);
printOutput($procstdout);
fclose($procstdout);
$retval = proc_close($process);
echo "retval = $retval\n";
fclose($fp);
}
}
main();
When using the CLI version of PHP, output is still buffered, so it is usually only sent to the user at the end of the script.
As with any version of PHP, calling flush() will force the output to be sent to the user immediately.
Also, you should use PHP_EOL: it outputs the correct newline for whatever platform you're on (Linux and Windows use different characters, \n versus \r\n), so PHP_EOL is a safe way of creating a new line.
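A minimal, self-contained sketch of the pattern (not the calc script itself): print each line with PHP_EOL and flush it right away instead of letting output accumulate until the script ends.
<?php
// Sketch: emit each line of CLI output immediately.
for ($i = 1; $i <= 3; $i++) {
    echo "result $i" . PHP_EOL; // PHP_EOL is \n on Linux, \r\n on Windows
    flush();                    // push any buffered CLI output out now
    sleep(1);
}
Applied to the script above, that would roughly mean reading calc's reply from $procstdout and echoing it (followed by flush()) inside the interactive loop, instead of only calling printOutput() after the user enters stop.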
Say, in PHP, I have a bunch of unit tests.
Say they require some service to be running.
Ideally I want my bootstrap script to:
start up this service
wait for the service to attain a desired state
hand control to the unit-testing framework of choice to run the tests
clean up when the tests end, gracefully terminating the service as appropriate
set up some way of capturing all output from the service along the way for logging and debugging
I'm currently using proc_open() to initialize my service, capturing the output using the pipe mechanism, checking that the service is getting to the state I need by examining the output.
However, at this point I'm stumped: how can I capture the remaining output (including STDERR) for the rest of the script's duration, while still allowing my unit tests to run?
I can think of a few potentially long-winded solutions, but before investing the time in investigating them, I would like to know if anyone else has come up against this problem and what solutions they found, if any, without influencing the response.
Edit:
Here is a cutdown version of the class I am initializing in my bootstrap script (with new ServiceRunner), for reference:
<?php
namespace Tests;
class ServiceRunner
{
/**
 * @var resource[]
 */
private $servicePipes;
/**
 * @var resource
 */
private $serviceProc;
/**
 * @var resource
 */
private $temp;
public function __construct()
{
// Open my log output buffer
$this->temp = fopen('php://temp', 'r+');
fputs(STDERR,"Launching Service.\n");
$this->serviceProc = proc_open('/path/to/service', [
0 => array("pipe", "r"),
1 => array("pipe", "w"),
2 => array("pipe", "w"),
], $this->servicePipes);
// Set the streams to non-blocking, so stream_select() works
stream_set_blocking($this->servicePipes[1], false);
stream_set_blocking($this->servicePipes[2], false);
// Set up array of pipes to select on
$readables = [$this->servicePipes[1], $this->servicePipes[2]];
while(false !== ($streams = stream_select($read = $readables, $w = [], $e = [], 1))) {
// Iterate over pipes that can be read from
foreach($read as $stream) {
// Fetch a line of input, and append to my output buffer
if($line = stream_get_line($stream, 8192, "\n")) {
fputs($this->temp, $line."\n");
}
// Break out of both loops if the service has attained the desired state
if(strstr($line, 'The Service is Listening' ) !== false) {
break 2;
}
// If the service has closed one of its output pipes, remove them from those we're selecting on
if($line === false && feof($stream)) {
$readables = array_diff($readables, [$stream]);
}
}
}
/* SOLUTION REQUIRED SOLUTION REQUIRED SOLUTION REQUIRED SOLUTION REQUIRED */
/* Set up the pipes to be redirected to $this->temp here */
register_shutdown_function([$this, 'shutDown']);
}
public function shutDown()
{
fputs(STDERR,"Closing...\n");
fclose($this->servicePipes[0]);
proc_terminate($this->serviceProc, SIGINT);
fclose($this->servicePipes[1]);
fclose($this->servicePipes[2]);
proc_close($this->serviceProc);
fputs(STDERR,"Closed service\n");
$logFile = fopen('log.txt', 'w');
rewind($this->temp);
stream_copy_to_stream($this->temp, $logFile);
fclose($this->temp);
fclose($logFile);
}
}
Suppose the service is implemented as a service.sh shell script with the following contents:
#!/bin/bash -
for i in {1..4} ; do
printf 'Step %d\n' $i
printf 'Step Error %d\n' $i >&2
sleep 0.7
done
printf '%s\n' 'The service is listening'
for i in {1..4} ; do
printf 'Output %d\n' $i
printf 'Output Error %d\n' $i >&2
sleep 0.2
done
echo 'Done'
The script emulates a startup process, prints a message indicating that the service is ready, and then prints some output after startup.
Since you are not proceeding with the unit tests until the "service-ready marker" is read, I see no special reason to do this asynchronously. If you want to run some process (updating UI etc.) while waiting for the service, I would suggest using an extension featuring asynchronous functions (pthreads, ev, event etc.).
However, if there are only two things to be done asynchronously, then why not fork a process? The service can run in the parent process, and the unit tests can be launched in the child process:
<?php
$cmd = './service.sh';
$desc = [
1 => [ 'pipe', 'w' ],
2 => [ 'pipe', 'w' ],
];
$proc = proc_open($cmd, $desc, $pipes);
if (!is_resource($proc)) {
die("Failed to open process for command $cmd");
}
$service_ready_marker = 'The service is listening';
$got_service_ready_marker = false;
// Wait until service is ready
for (;;) {
$output_line = stream_get_line($pipes[1], PHP_INT_MAX, PHP_EOL);
echo "Read line: $output_line\n";
if ($output_line === false) {
break;
}
if ($output_line == $service_ready_marker) {
$got_service_ready_marker = true;
break;
}
if ($error_line = stream_get_line($pipes[2], PHP_INT_MAX, PHP_EOL)) {
$startup_errors []= $error_line;
}
}
if (!empty($startup_errors)) {
fprintf(STDERR, "Startup Errors: <<<\n%s\n>>>\n", implode(PHP_EOL, $startup_errors));
}
if ($got_service_ready_marker) {
echo "Got service ready marker\n";
$pid = pcntl_fork();
if ($pid == -1) {
fprintf(STDERR, "failed to fork a process\n");
fclose($pipes[1]);
fclose($pipes[2]);
proc_close($proc);
} elseif ($pid) {
// parent process
// capture the output from the service
$output = stream_get_contents($pipes[1]);
$errors = stream_get_contents($pipes[2]);
fclose($pipes[1]);
fclose($pipes[2]);
proc_close($proc);
// Use the captured output
if ($output) {
file_put_contents('/tmp/service.output', $output);
}
if ($errors) {
file_put_contents('/tmp/service.errors', $errors);
}
echo "Parent: waiting for child processes to finish...\n";
pcntl_wait($status);
echo "Parent: done\n";
} else {
// child process
// Cleanup
fclose($pipes[1]);
fclose($pipes[2]);
proc_close($proc);
// Run unit tests
echo "Child: running unit tests...\n";
usleep(5e6);
echo "Child: done\n";
}
}
Sample Output
Read line: Step 1
Read line: Step 2
Read line: Step 3
Read line: Step 4
Read line: The service is listening
Startup Errors: <<<
Step Error 1
Step Error 2
Step Error 3
Step Error 4
>>>
Got service ready marker
Child: running unit tests...
Parent: waiting for child processes to finish...
Child: done
Parent: done
You can use the pcntl_fork() command to fork the current process to do both tasks and wait for the tests to finish:
<?php
// [launch service here]
$pid = pcntl_fork();
if ($pid == -1) {
die('error');
} else if ($pid) {
// [read output here]
// then wait for the unit tests to end (see below)
pcntl_wait($status);
// [gracefully finishing service]
} else {
// [unit tests here]
}
?>
What I ended up doing, once the service had initialized correctly, was to redirect the pipes from the already-opened process into the standard input of one cat process per pipe, each also opened with proc_open() (helped by this answer).
That wasn't the whole story: having got that far, I realised the async process was hanging after a while because the stream buffer was filling up.
The key part I was missing (having previously set the streams to non-blocking) was to revert the streams to blocking mode, so that the buffer would drain into the receiving cat processes correctly.
To complete the code from my question:
// Iterate over the streams that are still open
foreach(array_reverse($readables) as $stream) {
// Revert the blocking mode
stream_set_blocking($stream, true);
$cmd = 'cat';
// Receive input from an output stream for the previous process,
// Send output into the internal unified output buffer
$pipes = [
0 => $stream,
1 => $this->temp,
2 => array("file", "/dev/null", 'w'),
];
// Launch the process
$this->cats[] = proc_open($cmd, $pipes, $outputPipes = []);
}
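One detail the snippet above leaves out (so this is an assumption on my part, not something from the original post): the cat processes collected in $this->cats presumably also need to be reaped during shutdown, along these lines:
// Assumed addition to shutDown(), after the service pipes are closed:
foreach ($this->cats as $cat) {
    proc_close($cat); // blocks until cat sees EOF on the service pipe and exits
}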
I'm not an expert with PHP. I have a function which uses exec() to run WINRS, which then runs commands on remote servers. The problem is that this function is called dozens of times in a loop. Sometimes the WINRS command gets stuck or takes longer than expected, causing the PHP script to time out and throw a 500 error.
Temporarily I've lowered the timeout value in PHP and created a custom 500 page in IIS: if the referring page equals the script name, the page is reloaded (otherwise an error is thrown). But this is messy, and since the timeout is global it doesn't apply to each call of the function; it only avoids the page stopping at the HTTP 500 error.
What I'd really like is a timeout of 5 seconds on the function itself. I've searched quite a bit, including on Stack Overflow, and while there are similar questions, none relate to my function. Perhaps there's a way to do this when executing the command, such as an alternative to exec()? Ideally the function would time out after 5 seconds and return $servicestate as 0.
Code is commented to explain my spaghetti mess. And I'm sorry you have to see it...
function getservicestatus($servername, $servicename, $username, $password)
{
//define start so that if an invalid result is reached the function can be restarted using goto.
start:
//Define command to use to get service status.
$command = 'winrs /r:' . $servername . ' /u:' . $username . ' /p:' . $password . ' sc query ' . $servicename . ' 2>&1';
exec($command, $output);
//Defines the server status as $servicestate which is stored in the fourth part of the command array.
//Then the string "STATE" and any number is stripped from $servicestate. This will leave only the status of the service (e.g. RUNNING or STOPPED).
$servicestate = $output[3];
$strremove = array('/STATE/','/:/','/[0-9]+/','/\s+/');
$servicestate = preg_replace($strremove, '', $servicestate);
//Define an invalid output. Sometimes the array is invalid. Catch this issue and restart the function for valid output.
//Typically this can be caught when the string "SERVICE_NAME" is found in $output[3].
$badservicestate = "SERVICE_NAME" . $servicename;
if($servicestate == $badservicestate) {
goto start;
}
//Service status (e.g. Running, Stopped Disabled) is returned as $servicestate.
return $servicestate;
}
The most straightforward solution, since you are calling an external process, and you actually need its output in your script, is to rewrite exec in terms of proc_open and non-blocking I/O:
function exec_timeout($cmd, $timeout, &$output = '') {
$fdSpec = [
0 => ['file', '/dev/null', 'r'], //nothing to send to child process
1 => ['pipe', 'w'], //child process's stdout
2 => ['file', '/dev/null', 'a'], //don't care about child process stderr
];
$pipes = [];
$proc = proc_open($cmd, $fdSpec, $pipes);
stream_set_blocking($pipes[1], false);
$stop = time() + $timeout;
while(1) {
$in = [$pipes[1]];
$out = [];
$err = [];
stream_select($in, $out, $err, min(1, $stop - time()));
if($in) {
while(!feof($in[0])) {
$output .= stream_get_contents($in[0]);
break;
}
if(feof($in[0])) {
break;
}
} else if($stop <= time()) {
break;
}
}
fclose($pipes[1]); //close process's stdout, since we're done with it
$status = proc_get_status($proc);
if($status['running']) {
proc_terminate($proc); //terminate, since close will block until the process exits itself
return -1;
} else {
proc_close($proc);
return $status['exitcode'];
}
}
$returnValue = exec_timeout('YOUR COMMAND HERE', $timeout, $output);
This code:
uses proc_open to open a child process. We only specify a pipe for the child's stdout, since we have nothing to send to it and don't care about its stderr output. If you do, you'll have to adjust the following code accordingly.
Loops on stream_select(), which will block for a period up to the $timeout set ($stop - time()).
If there is input, it appends the contents of the pipe to $output. This won't block, because we called stream_set_blocking($pipes[1], false) on the pipe.
When we have read the entire file, or we have exceeded our timeout, stop.
Cleanup by closing the process we have opened.
Output is stored in the pass-by-reference string $output. The process's exit code is returned, or -1 in the case of a timeout.
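Tying this back to the question (the names and parsing below are illustrative, not tested against WINRS): the service-status helper could call exec_timeout() with a 5-second cap and return 0 when that cap is hit.
// Illustrative sketch only: wrap the WINRS query with a 5-second timeout.
function getservicestatus_with_timeout($servername, $servicename, $username, $password)
{
    $command = 'winrs /r:' . $servername . ' /u:' . $username . ' /p:' . $password
             . ' sc query ' . $servicename . ' 2>&1';
    $output = '';
    if (exec_timeout($command, 5, $output) === -1) {
        return 0; // timed out: treat the service state as unknown
    }
    // parse $output for the service state as in the original getservicestatus()
    return $output;
}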
This is what I want to accomplish using PHP (possibly using exec()?):
1. telnet to a whois registrar using a program called proxychains:
proxychains telnet whois.someregistrar 43
(if it fails -> try 1 again)
2. feed a domain name to the connection:
somedomainname.com
3. capture the data returned by the registrar in PHP
I have no experience with shell scripting, so how do I detect that telnet has connected and is waiting for input, and how do I "feed" it?
Am I totally off here, or is this the right way to go about it?
EDIT: I see Python has a good way to handle this using expect.
Here is a basic working example.
<?php
$whois = 'whois.isoc.org.il'; // server to connect to for whois
$data = 'drew.co.il'; // query to send to whois server
$errFile = '/tmp/error-output.txt'; // where stderr gets written to
$command = "proxychains telnet $whois 43"; // command to run for making query
// variables to pass to proc_open
$cwd = '/tmp';
$env = null;
$descriptorspec = array(
0 => array("pipe", "r"), // stdin is a pipe that the child will read from
1 => array("pipe", "w"), // stdout is a pipe that the child will write to
2 => array("file", "/tmp/error-output.txt", "a") // stderr is a file to write to
);
// process output goes here
$output = '';
// store return value on failure
$return_value = null;
// open the process
$process = proc_open($command, $descriptorspec, $pipes, $cwd, $env);
if (is_resource($process)) {
echo "Opened process...\n";
$readBuf = '';
// infinite loop until process returns
for(;;) {
usleep(100000); // dont consume too many resources
// TODO: implement a timeout
$stat = proc_get_status($process); // get info on process
if ($stat['running']) { // still running
$read = fread($pipes[1], 4096);
if ($read) {
$readBuf .= $read;
}
// read output to determine if telnet connected successfully
if (strpos($readBuf, "Connected to $whois") !== false) {
// write our query to process and append newline to initiate
fwrite($pipes[0], $data . "\n");
// read the output of the process
$output = stream_get_contents($pipes[1]);
break;
}
} else {
// process finished before we could do anything
$output = stream_get_contents($pipes[1]); // get output of command
$return_value = $stat['exitcode']; // set exit code
break;
}
}
echo "Execution completed.\n";
if ($return_value != null) {
var_dump($return_value, file_get_contents($errFile));
} else {
var_dump($output);
}
// close pipes
fclose($pipes[1]);
fclose($pipes[0]);
// close process
proc_close($process);
} else {
echo 'Failed to open process.';
}
This is meant to be run from the command line, but it doesn't have to be. I tried to comment it fairly well. Basically at the beginning you can set the whois server, and the domain to query.
The script uses proc_open to open a proxychains process that calls telnet. It checks whether the process was opened successfully, and if so, checks that its status is running. While it's running, it reads the output from telnet into a buffer and looks for the string telnet outputs to indicate we are connected.
Once it detects that telnet has connected, it writes the data to the process followed by a newline (\n), and then reads the output from the pipe where the telnet data goes. Once that happens, it breaks out of the loop and closes the pipes and the process.
You can view the output from proxychains from the file specified by $errFile. This contains the connection information as well as debug information in the event of a connection failure.
There is probably some additional error checking or process management that may need to be done to make it more robust, but if you put this into a function you should be able to easily call it and check the return value to see if the query was successful.
Hope that gives you a good starting point.
Also check out this answer of mine for another working example of proc_open, this example implements a timeout check so you can bail if the command hasn't completed in a certain amount of time: Creating a PHP Online Grading System on Linux: exec Behavior, Process IDs, and grep
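For completeness, the TODO timeout in the for(;;) loop above could be filled in roughly like this (a hedged sketch of my own, with an arbitrary 10-second limit):
// Before the loop:
$start = time();
$timeoutSeconds = 10; // arbitrary limit for illustration
// At the top of each iteration:
if (time() - $start > $timeoutSeconds) {
    proc_terminate($process); // give up and kill proxychains/telnet
    $output = 'Timed out waiting for telnet.';
    break;
}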