This is what I want to accomplish using PHP (possibly using exec()?):
telnet to a whois registrar using a program called proxychains:
proxychains telnet whois.someregistrar 43
if that fails -> try step 1 again
feed a domain name to the connection:
somedomainname.com
capture the data returned by the registrar in PHP
I have no experience with shell scripting, so how do I detect the event
in which telnet is connected and waiting for input, and how do I "feed" it?
Am I totally off here, or is this the right way to go about it?
EDIT: I see Python has a good way to handle this using expect.
Here is a basic working example.
<?php
$whois   = 'whois.isoc.org.il';            // server to connect to for whois
$data    = 'drew.co.il';                   // query to send to the whois server
$errFile = '/tmp/error-output.txt';        // where stderr gets written to
$command = "proxychains telnet $whois 43"; // command to run for making the query

// variables to pass to proc_open
$cwd = '/tmp';
$env = null;
$descriptorspec = array(
    0 => array("pipe", "r"),          // stdin is a pipe that the child will read from
    1 => array("pipe", "w"),          // stdout is a pipe that the child will write to
    2 => array("file", $errFile, "a") // stderr is a file to write to
);

// process output goes here
$output = '';
// store return value on failure
$return_value = null;

// open the process
$process = proc_open($command, $descriptorspec, $pipes, $cwd, $env);

if (is_resource($process)) {
    echo "Opened process...\n";
    $readBuf = '';
    // loop until the process returns
    for (;;) {
        usleep(100000); // don't consume too many resources
        // TODO: implement a timeout
        $stat = proc_get_status($process); // get info on the process
        if ($stat['running']) { // still running
            $read = fread($pipes[1], 4096);
            if ($read) {
                $readBuf .= $read;
            }
            // read the output to determine if telnet connected successfully
            if (strpos($readBuf, "Connected to $whois") !== false) {
                // write our query to the process, with a newline appended to submit it
                fwrite($pipes[0], $data . "\n");
                // read the output of the process
                $output = stream_get_contents($pipes[1]);
                break;
            }
        } else {
            // process finished before we could do anything
            $output = stream_get_contents($pipes[1]); // get output of the command
            $return_value = $stat['exitcode'];        // set exit code
            break;
        }
    }
    echo "Execution completed.\n";
    if ($return_value != null) {
        var_dump($return_value, file_get_contents($errFile));
    } else {
        var_dump($output);
    }
    // close pipes
    fclose($pipes[1]);
    fclose($pipes[0]);
    // close the process
    proc_close($process);
} else {
    echo 'Failed to open process.';
}
This is meant to be run from the command line, but it doesn't have to be. I tried to comment it fairly well. At the beginning you can set the whois server and the domain to query.
The script uses proc_open to open a proxychains process that calls telnet. It checks whether the process was opened successfully, and if so, checks that its status is running. While it's running, it reads the output from telnet into a buffer and looks for the string telnet prints to indicate we are connected.
Once it detects that telnet has connected, it writes the data to the process followed by a newline (\n), and then reads the data from the pipe where the telnet output goes. Once that happens it breaks out of the loop, then closes the process and its handles.
You can view the output from proxychains from the file specified by $errFile. This contains the connection information as well as debug information in the event of a connection failure.
There is probably some additional error checking or process management that may need to be done to make it more robust, but if you put this into a function you should be able to easily call it and check the return value to see if the query was successful.
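As a rough sketch of what such a function could look like (hedged: the name and the direct-socket shortcut below are mine, not from the script above; whois is just "send the query plus CRLF on TCP port 43 and read until EOF", which is all telnet provides here, though this skips proxychains):

```php
<?php
// Illustrative sketch, assuming a direct TCP connection is acceptable
// when no proxy is needed. Returns the raw whois response, or null on
// connection failure so the caller can retry.
function whoisQuery(string $server, string $domain, int $timeout = 10): ?string
{
    $fp = @fsockopen($server, 43, $errno, $errstr, $timeout);
    if (!$fp) {
        return null; // connection failed; retry or report $errstr
    }
    fwrite($fp, $domain . "\r\n");          // whois expects the query plus CRLF
    $response = stream_get_contents($fp);   // read until the server closes
    fclose($fp);
    return $response;
}
```

Calling `whoisQuery('whois.isoc.org.il', 'drew.co.il')` and checking for null mirrors the "if failed, try again" step from the question.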
Hope that gives you a good starting point.
Also check out this answer of mine for another working example of proc_open, this example implements a timeout check so you can bail if the command hasn't completed in a certain amount of time: Creating a PHP Online Grading System on Linux: exec Behavior, Process IDs, and grep
Related
Say, in PHP, I have a bunch of unit tests.
Say they require some service to be running.
Ideally I want my bootstrap script to:
start up this service
wait for the service to attain a desired state
hand control to the unit-testing framework of choice to run the tests
clean up when the tests end, gracefully terminating the service as appropriate
set up some way of capturing all output from the service along the way for logging and debugging
I'm currently using proc_open() to initialize my service, capturing the output using the pipe mechanism, checking that the service is getting to the state I need by examining the output.
However at this point I'm stumped - how can I capture the rest of the output (including STDERR) for the rest of the duration of the script, while still allowing my unit tests to run?
I can think of a few potentially long-winded solutions, but before investing the time in investigating them, I would like to know if anyone else has come up against this problem and what solutions they found, if any, without influencing the response.
Edit:
Here is a cutdown version of the class I am initializing in my bootstrap script (with new ServiceRunner), for reference:
<?php
namespace Tests;

class ServiceRunner
{
    /**
     * @var resource[]
     */
    private $servicePipes;

    /**
     * @var resource
     */
    private $serviceProc;

    /**
     * @var resource
     */
    private $temp;

    public function __construct()
    {
        // Open my log output buffer
        $this->temp = fopen('php://temp', 'r+');

        fputs(STDERR, "Launching Service.\n");
        $this->serviceProc = proc_open('/path/to/service', [
            0 => array("pipe", "r"),
            1 => array("pipe", "w"),
            2 => array("pipe", "w"),
        ], $this->servicePipes);

        // Set the streams to non-blocking, so stream_select() works
        stream_set_blocking($this->servicePipes[1], false);
        stream_set_blocking($this->servicePipes[2], false);

        // Set up array of pipes to select on
        $readables = [$this->servicePipes[1], $this->servicePipes[2]];
        while (false !== ($streams = stream_select($read = $readables, $w = [], $e = [], 1))) {
            // Iterate over pipes that can be read from
            foreach ($read as $stream) {
                // Fetch a line of input, and append to my output buffer
                if ($line = stream_get_line($stream, 8192, "\n")) {
                    fputs($this->temp, $line . "\n");
                }

                // Break out of both loops if the service has attained the desired state
                if (strstr($line, 'The Service is Listening') !== false) {
                    break 2;
                }

                // If the service has closed one of its output pipes, remove it from those we're selecting on
                if ($line === false && feof($stream)) {
                    $readables = array_diff($readables, [$stream]);
                }
            }
        }

        /* SOLUTION REQUIRED SOLUTION REQUIRED SOLUTION REQUIRED SOLUTION REQUIRED */
        /* Set up the pipes to be redirected to $this->temp here */

        register_shutdown_function([$this, 'shutDown']);
    }

    public function shutDown()
    {
        fputs(STDERR, "Closing...\n");
        fclose($this->servicePipes[0]);
        proc_terminate($this->serviceProc, SIGINT);
        fclose($this->servicePipes[1]);
        fclose($this->servicePipes[2]);
        proc_close($this->serviceProc);
        fputs(STDERR, "Closed service\n");

        $logFile = fopen('log.txt', 'w');
        rewind($this->temp);
        stream_copy_to_stream($this->temp, $logFile);
        fclose($this->temp);
        fclose($logFile);
    }
}
Suppose the service is implemented as service.sh shell script with the following contents:
#!/bin/bash -
for i in {1..4} ; do
    printf 'Step %d\n' $i
    printf 'Step Error %d\n' $i >&2
    sleep 0.7
done

printf '%s\n' 'The service is listening'

for i in {1..4} ; do
    printf 'Output %d\n' $i
    printf 'Output Error %d\n' $i >&2
    sleep 0.2
done

echo 'Done'
The script emulates startup process, prints the message indicating that the service is ready, and prints some output after startup.
Since you are not proceeding with the unit tests until the "service-ready marker" is read, I see no special reason to do this asynchronously. If you want to run some process (updating UI etc.) while waiting for the service, I would suggest using an extension featuring asynchronous functions (pthreads, ev, event etc.).
However, if there are only two things to be done asynchronously, then why not fork a process? The service can run in the parent process, and the unit tests can be launched in the child process:
<?php
$cmd = './service.sh';

$desc = [
    1 => [ 'pipe', 'w' ],
    2 => [ 'pipe', 'w' ],
];

$proc = proc_open($cmd, $desc, $pipes);
if (!is_resource($proc)) {
    die("Failed to open process for command $cmd");
}

$service_ready_marker = 'The service is listening';
$got_service_ready_marker = false;
$startup_errors = [];

// Wait until the service is ready
for (;;) {
    $output_line = stream_get_line($pipes[1], PHP_INT_MAX, PHP_EOL);
    echo "Read line: $output_line\n";

    if ($output_line === false) {
        break;
    }
    if ($output_line == $service_ready_marker) {
        $got_service_ready_marker = true;
        break;
    }
    if ($error_line = stream_get_line($pipes[2], PHP_INT_MAX, PHP_EOL)) {
        $startup_errors[] = $error_line;
    }
}

if (!empty($startup_errors)) {
    fprintf(STDERR, "Startup Errors: <<<\n%s\n>>>\n", implode(PHP_EOL, $startup_errors));
}

if ($got_service_ready_marker) {
    echo "Got service ready marker\n";

    $pid = pcntl_fork();
    if ($pid == -1) {
        fprintf(STDERR, "failed to fork a process\n");
        fclose($pipes[1]);
        fclose($pipes[2]);
        proc_close($proc);
    } elseif ($pid) {
        // parent process
        // capture the output from the service
        $output = stream_get_contents($pipes[1]);
        $errors = stream_get_contents($pipes[2]);
        fclose($pipes[1]);
        fclose($pipes[2]);
        proc_close($proc);

        // Use the captured output
        if ($output) {
            file_put_contents('/tmp/service.output', $output);
        }
        if ($errors) {
            file_put_contents('/tmp/service.errors', $errors);
        }

        echo "Parent: waiting for child processes to finish...\n";
        pcntl_wait($status);
        echo "Parent: done\n";
    } else {
        // child process
        // Cleanup
        fclose($pipes[1]);
        fclose($pipes[2]);
        proc_close($proc);

        // Run the unit tests
        echo "Child: running unit tests...\n";
        usleep(5e6);
        echo "Child: done\n";
    }
}
Sample Output
Read line: Step 1
Read line: Step 2
Read line: Step 3
Read line: Step 4
Read line: The service is listening
Startup Errors: <<<
Step Error 1
Step Error 2
Step Error 3
Step Error 4
>>>
Got service ready marker
Child: running unit tests...
Parent: waiting for child processes to finish...
Child: done
Parent: done
You can use the pcntl_fork() function to fork the current process to do both tasks, and wait for the tests to finish:

<?php
// [launch service here]
$pid = pcntl_fork();
if ($pid == -1) {
    die('error');
} else if ($pid) {
    // [read output here]
    // then wait for the unit tests to end (see below)
    pcntl_wait($status);
    // [gracefully finish the service]
} else {
    // [unit tests here]
}
?>
What I ended up doing, having reached the point where the service had been initialized correctly, was to redirect the pipes from the already-opened process to the standard input of a cat process per pipe, each also opened by proc_open() (helped by this answer).
This wasn't the whole story, as I got to this point and realised that the async process was hanging after a while due to the stream buffer filling up.
The key part that I needed (having set the streams to non-blocking previously) was to revert the streams to blocking mode, so that the buffer would drain into the receiving cat processes correctly.
To complete the code from my question:
// Iterate over the streams that are still open
foreach (array_reverse($readables) as $stream) {
    // Revert the blocking mode
    stream_set_blocking($stream, true);

    $cmd = 'cat';
    // Receive input from an output stream of the previous process,
    // send output into the internal unified output buffer
    $pipes = [
        0 => $stream,
        1 => $this->temp,
        2 => array("file", "/dev/null", 'w'),
    ];

    // Launch the process
    $this->cats[] = proc_open($cmd, $pipes, $outputPipes = []);
}
I'm not an expert with PHP. I have a function which uses exec to run WINRS, which then runs commands on remote servers. The problem is this function is placed in a loop which calls the getservicestatus function dozens of times. Sometimes the WINRS command can get stuck or take longer than expected, causing the PHP script to time out and throw a 500 error.
Temporarily, I've lowered the timeout value in PHP and created a custom 500 page in IIS: if the referring page is equal to the script name, the page reloads (otherwise it throws an error). But this is messy, and obviously it doesn't apply to each individual call of the function, since it's global. It only avoids the page stopping at the HTTP 500 error.
What I'd really like to do is set a timeout of 5 seconds on the function itself. I've been searching quite a bit and have been unable to find an answer, even on Stack Overflow. Yes, there are similar questions, but I have not been able to find any that relate to my function. Perhaps there's a way to do this when executing the command, such as an alternative to exec()? I don't know. Ideally, I'd like the function to time out after 5 seconds and return $servicestate as 0.
Code is commented to explain my spaghetti mess. And I'm sorry you have to see it...
function getservicestatus($servername, $servicename, $username, $password)
{
    // Define start so that if an invalid result is reached the function can be restarted using goto.
    start:

    // Define the command used to get the service status.
    $command = 'winrs /r:' . $servername . ' /u:' . $username . ' /p:' . $password . ' sc query ' . $servicename . ' 2>&1';
    exec($command, $output);

    // The service status is stored as $servicestate, taken from the fourth element of the output array.
    // Then the string "STATE", any numbers, and whitespace are stripped from $servicestate.
    // This leaves only the status of the service (e.g. RUNNING or STOPPED).
    $servicestate = $output[3];
    $strremove = array('/STATE/', '/:/', '/[0-9]+/', '/\s+/');
    $servicestate = preg_replace($strremove, '', $servicestate);

    // Define an invalid output. Sometimes the array is invalid. Catch this issue and restart the function for valid output.
    // Typically this can be caught when the string "SERVICE_NAME" is found in $output[3].
    $badservicestate = "SERVICE_NAME" . $servicename;
    if ($servicestate == $badservicestate) {
        goto start;
    }

    // The service status (e.g. RUNNING, STOPPED, DISABLED) is returned as $servicestate.
    return $servicestate;
}
The most straightforward solution, since you are calling an external process, and you actually need its output in your script, is to rewrite exec in terms of proc_open and non-blocking I/O:
function exec_timeout($cmd, $timeout, &$output = '') {
    $fdSpec = [
        0 => ['file', '/dev/null', 'r'], // nothing to send to the child process
        1 => ['pipe', 'w'],              // child process's stdout
        2 => ['file', '/dev/null', 'a'], // don't care about the child process's stderr
    ];
    $pipes = [];
    $proc = proc_open($cmd, $fdSpec, $pipes);
    stream_set_blocking($pipes[1], false);

    $stop = time() + $timeout;
    while (1) {
        $in = [$pipes[1]];
        $out = [];
        $err = [];
        stream_select($in, $out, $err, min(1, max(0, $stop - time())));
        if ($in) {
            if (!feof($in[0])) {
                $output .= stream_get_contents($in[0]);
            }
            if (feof($in[0])) {
                break;
            }
        } else if ($stop <= time()) {
            break;
        }
    }

    fclose($pipes[1]); // close the process's stdout, since we're done with it

    $status = proc_get_status($proc);
    if ($status['running']) {
        proc_terminate($proc); // terminate, since proc_close() would block until the process exits on its own
        return -1;
    } else {
        proc_close($proc);
        return $status['exitcode'];
    }
}
$returnValue = exec_timeout('YOUR COMMAND HERE', $timeout, $output);
This code:
uses proc_open to open a child process. We only specify a pipe for the child's stdout, since we have nothing to send to it and don't care about its stderr output. If you do, you'll have to adjust the following code accordingly.
Loops on stream_select(), which blocks for a period of up to the remaining timeout ($stop - time()).
If there is input, it appends the contents of the input buffer to $output. This won't block, because we called stream_set_blocking($pipes[1], false) on the pipe.
When we have read the entire stream, or we have exceeded our timeout, we stop.
Cleans up by closing the process we opened.
Output is stored in the pass-by-reference string $output. The process's exit code is returned, or -1 in the case of a timeout.
I recently tried to communicate with a binary on my Ubuntu web server [1] using the PHP function proc_open. I can establish a connection and define the pipes STDIN, STDOUT, and STDERR. Nice.
Now, the binary I am talking to is an interactive computer algebra system, so I would like to keep both STDOUT and STDIN alive after the first command, such that I can still use the application a few lines later in an interactive manner (direct user input from a web front-end).
However, as it turns out, the PHP functions for reading the binary's STDOUT (either stream_get_contents or fgets) need a closed STDIN before they can work. Otherwise the program deadlocks.
This is a severe drawback, since I cannot just reopen STDIN after closing it. So my question is: why does my script deadlock if I want to read STDOUT while my STDIN is still alive?
Thanks
Jens
[1] proc_open returns false but does not write in error file - permissions issue?
my source:
$descriptorspec = array(
    0 => array("pipe", "r"),
    1 => array("pipe", "w"),
    2 => array("file", "./error.log", "a")
);

// define the current working directory where files will be stored
$cwd = './';

// open reduce
$process = proc_open('./reduce/reduce', $descriptorspec, $pipes, $cwd);

if (is_resource($process)) {
    // some valid Reduce commands
    fwrite($pipes[0], 'load excalc; operator x; x(0) := t; x(1) := r;');
    // if the following line is removed, the script deadlocks
    fclose($pipes[0]);
    echo "output: " . stream_get_contents($pipes[1]);

    // close the remaining pipe & close the process
    // (stderr goes to a file here, so there is no $pipes[2] to close)
    fclose($pipes[1]);
    proc_close($process);
}
EDIT:
This code kind of works. Kind of, because it uses usleep() calls to wait for the non-blocking STDOUT to be filled with data. How do I do that more elegantly?
@Elias: By polling the $status['running'] entry you can only determine whether the overall process is still running, but not whether the process is busy or idling... That is why I have to include these usleep() calls.
define('TIMEOUT_IN_MS', '100');
define('TIMEOUT_STEPS', '100');

function getOutput($pipes) {
    $result = "";
    $stage = 0;
    $buffer = 0;
    do {
        $char = fgets($pipes[1], 4096);
        if ($char != null) {
            $buffer = 0;
            $stage = 1;
            $result .= $char;
        } else if ($stage == "1") {
            usleep(TIMEOUT_IN_MS / TIMEOUT_STEPS);
            $buffer++;
            if ($buffer > TIMEOUT_STEPS) {
                $stage++;
            }
        }
    } while ($stage < 2);
    return $result;
}

$descriptorspec = array(0 => array("pipe", "r"), 1 => array("pipe", "w"));

// define the current working directory where files will be stored
$cwd = './';

// open reduce
$process = proc_open('./reduce/reduce', $descriptorspec, $pipes, $cwd);

if (is_resource($process)) {
    stream_set_blocking($pipes[1], 0);
    echo "startup output:<br><pre>" . getOutput($pipes) . "</pre>";
    fwrite($pipes[0], 'on output; load excalc; operator x; x(0) := t; x(1) := r;' . PHP_EOL);
    echo "output 1:<br><pre>" . getOutput($pipes) . "</pre>";
    fwrite($pipes[0], 'coframe o(t) = sqrt(1-2m/r) * d t, o(r) = 1/sqrt(1-2m/r) * d r with metric g = -o(t)*o(t) + o(r)*o(r); displayframe;' . PHP_EOL);
    echo "output 2:<br><pre>" . getOutput($pipes) . "</pre>";

    // close pipes & close the process
    // (only two pipes were opened in $descriptorspec, so there is no $pipes[2])
    fclose($pipes[0]);
    fclose($pipes[1]);
    proc_close($process);
}
This reminds me of a script I wrote a while back. While it might serve as inspiration to you (or others), it doesn't do what you need. What it does contain is an example of how you can read the output of a stream, without having to close any of the streams.
Perhaps you can apply the same logic to your situation:
$allInput = array(
    'load excalc; operator x; x(0) := t; x(1) := r;'
); // array with strings to pass to the proc

if (is_resource($process))
{
    $output = '';
    $input = array_shift($allInput);
    do
    {
        usleep(200); // make sure the running process is ready
        fwrite(
            $pipes[0],
            $input . PHP_EOL, // add EOL
            strlen($input) + 1
        );
        fflush($pipes[0]); // flush buffered data, write to stream
        usleep(200);
        $status = proc_get_status($process);
        while (($out = fread($pipes[1], 1024)) && !feof($pipes[1]))
            $output .= $out;
    } while ($status['running'] && ($input = array_shift($allInput)));
    // proc_close & fclose calls here
}
Now, seeing as I don't know what it is exactly you are trying to do, this code will need to be tweaked quite a bit. You may, for example, find yourself having to set the STDIN and STDOUT pipes as non-blocking.
It's a simple matter of adding this, right after calling proc_open, though:
stream_set_blocking($pipes[0], 0);
stream_set_blocking($pipes[1], 0);
Play around, have fun, and perhaps let me know if this answer was helpful in any way...
My guess would be that you're doing everything correctly, except that the binary is never notified that it has received all the input and can start to work. By closing STDIN, you're kicking off the work process, because it's clear that there will be no more input. If you're not closing STDIN, the binary is waiting for more input, while your side is waiting for its output.
You probably need to end your input with a newline or whatever other protocol action is expected of you. Or perhaps closing STDIN is the action that's expected of you. Unless the process is specifically created to stay open and continue to stream input, you can't make it do it. If the process reads all input, processes it, returns output and then quits, there's no way you can make it stay alive to process more input later. If the process explicitly supports that behaviour, there should be a definition on how you need to delimit your input.
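The newline-delimited request/response pattern described above can be sketched with a trivial line-oriented child; here `cat` stands in for the real binary (an illustrative sketch, not the Reduce setup from the question):

```php
<?php
// Sketch: each write ends with "\n" so the child knows a unit of input is
// complete, and STDIN stays open between requests.
$spec = [0 => ['pipe', 'r'], 1 => ['pipe', 'w']];
$proc = proc_open('cat', $spec, $pipes);
if (!is_resource($proc)) {
    exit("could not start child\n");
}

fwrite($pipes[0], "first request\n");  // the newline marks a complete unit
$reply = fgets($pipes[1]);             // returns once the child echoes the line

// STDIN is still open, so the conversation can continue
fwrite($pipes[0], "second request\n");
$reply2 = fgets($pipes[1]);

fclose($pipes[0]);
fclose($pipes[1]);
proc_close($proc);
```

With a real program this only works if the child flushes its output per line, which is exactly the buffering issue discussed later in this thread.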
I am trying to make a website where people can compile and run their code online, so we need to find an interactive way for users to send instructions.
Actually, what first comes to mind is exec() or system(), but when users want to input something, this approach won't work. So we have to use proc_open().
For instance, the following code
int main()
{
    int a;
    printf("please input a integer\n");
    scanf("%d", &a);
    printf("Hello World %d!\n", a);
    return 0;
}
When I used proc_open(), like this
$descriptorspec = array(
    0 => array( 'pipe' , 'r' ),
    1 => array( 'pipe' , 'w' ),
    2 => array( 'file' , 'errors' , 'w' )
);
$run_string = "cd " . $addr_base . "; ./a.out 2>&1";
$process = proc_open($run_string, $descriptorspec, $pipes);

if (is_resource($process)) {
    //echo fgets($pipes[1]) . "<br/>";
    fwrite($pipes[0], '12');
    fclose($pipes[0]);
    while (!feof($pipes[1]))
        echo fgets($pipes[1]) . "<br/>";
    fclose($pipes[1]);
    proc_close($process);
}
When running the C code, I want to get the first STDOUT output, enter the number, and then get the second STDOUT output. But if I uncomment the commented line, the page blocks.
Is there a way to solve the problem? How can I read from the pipe while not all data has been put there? Or is there a better way to write this kind of interactive program?
It's more of a C or glibc problem: you'll have to use fflush(stdout).
Why? And what's the difference between running a.out in a terminal and calling it from PHP?
Answer: if you run a.out in a terminal (stdin being a tty), glibc will use line-buffered IO. But if you run it from another program (PHP in this case) and its stdin is a pipe (or whatever, but not a tty), glibc will use internal IO buffering. That's why the first fgets() blocks if uncommented. For more info, check this article.
Good news: You can control this buffering using the stdbuf command. Change $run_string to:
$run_string = "cd ".$addr_base.";stdbuf -o0 ./a.out 2>&1";
Here comes a working example. It works even if the C code doesn't care about fflush(), since it uses the stdbuf command:
Starting subprocess
$cmd = 'stdbuf -o0 ./a.out 2>&1';

// what pipes should be used for STDIN, STDOUT and STDERR of the child
$descriptorspec = array (
    0 => array("pipe", "r"),
    1 => array("pipe", "w"),
    2 => array("pipe", "w")
);

// open the child
$proc = proc_open(
    $cmd, $descriptorspec, $pipes, getcwd()
);
set all streams to non blocking mode
// set all streams to non-blocking mode
stream_set_blocking($pipes[1], 0);
stream_set_blocking($pipes[2], 0);
stream_set_blocking(STDIN, 0);

// check if opening succeeded
if ($proc === FALSE) {
    throw new Exception('Cannot execute child process');
}
get the child PID; we need it later
// get the PID via a get_status call
$status = proc_get_status($proc);
if ($status === FALSE) {
    throw new Exception(sprintf(
        'Failed to obtain status information'
    ));
}
$pid = $status['pid'];
poll until child terminates
// now, poll for the child's termination
while (true) {
    // detect if the child has terminated - the php way
    $status = proc_get_status($proc);
    // check retval
    if ($status === FALSE) {
        throw new Exception("Failed to obtain status information for $pid");
    }
    if ($status['running'] === FALSE) {
        $exitcode = $status['exitcode'];
        $pid = -1;
        echo "child exited with code: $exitcode\n";
        exit($exitcode);
    }

    // read from the child's stdout and stderr,
    // avoiding blocking *forever* by using a timeout (50000 usec)
    foreach (array(1, 2) as $desc) {
        // check the pipe for data
        $read = array($pipes[$desc]);
        $write = NULL;
        $except = NULL;
        $tv = 0;
        $utv = 50000;

        $n = stream_select($read, $write, $except, $tv, $utv);
        if ($n > 0) {
            do {
                $data = fread($pipes[$desc], 8092);
                fwrite(STDOUT, $data);
            } while (strlen($data) > 0);
        }
    }

    $read = array(STDIN);
    $n = stream_select($read, $write, $except, $tv, $utv);
    if ($n > 0) {
        $input = fread(STDIN, 8092);
        // input to the program
        fwrite($pipes[0], $input);
    }
}
The answer is surprisingly simple: leave $descriptorspec empty. If you do so, the child process will simply use the STDIN/STDOUT/STDERR streams of the parent.
➜ ~ ✗ cat stdout_is_atty.php
<?php
var_dump(stream_isatty(STDOUT));
➜ ~ ✗ php -r 'proc_close(proc_open("php stdout_is_atty.php", [], $pipes));'
/home/chx/stdout_is_atty.php:3:
bool(true)
➜ ~ ✗ php -r 'passthru("php stdout_is_atty.php");'
/home/chx/stdout_is_atty.php:3:
bool(false)
➜ ~ ✗ php -r 'exec("php stdout_is_atty.php", $output); print_r($output);'
Array
(
[0] => /home/chx/stdout_is_atty.php:3:
[1] => bool(false)
)
Credit goes to John Stevenson, one of the maintainers of composer.
If you are interested in why this happens: PHP does nothing for empty descriptors and uses the C / OS defaults, which just happen to be the desired ones.
The C code responsible for proc_open merely iterates over the descriptors, so if no descriptors are specified, that code does nothing. After that, the actual execution of the child -- at least on POSIX systems -- happens via calling fork(2), which makes the child inherit file descriptors (see this answer). Then the child calls one of execvp(3) / execle(3) / execl(3). And as the manual says:
The exec() family of functions replaces the current process image with a new process image.
Perhaps it's more understandable to say the memory region containing the parent is replaced by the new program. This is accessible as /proc/$pid/mem, see this answer for more. However, the system keeps a tally of the opened files outside of this region. You can see them in /proc/$pid/fd/ -- and STDIN/STDOUT/STDERR are just shorthands for file descriptors 0/1/2. So when the child replaces the memory, the file descriptors just stay in place.
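As an aside (Linux-specific, and an illustration of the /proc mechanism rather than anything from the answer above), you can list a process's open descriptors from PHP itself:

```php
<?php
// Linux-specific illustration: a process's open file descriptors are visible
// under /proc/self/fd. Entries 0, 1 and 2 (stdin/stdout/stderr) are the ones
// a child inherits across fork()/exec().
$fds = array_values(array_diff(scandir('/proc/self/fd'), ['.', '..']));
print_r($fds); // includes at least "0", "1", "2"
```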
Question: Is it possible to use php://memory in an exec or passthru command?
I can use PHP variables in exec or passthru with no problem, but I am having trouble with php://memory.
background:
I am trying to eliminate all of my temporary pdf file writing with PDFTK.
1) I am writing a temporary fdf file
2) form-fill a temporary pdf file using #1
3) repeat #1 and #2 for all the pdfs
4) merge all pdfs together.
This currently works - but it creates a lot of files, and is the bottleneck.
I would like to speed things up with pdftk by making use of the virtual file php://memory
First, I am trying to just virtualize the fdf file used in #1. Answering this alone is enough for a 'correct answer'. :)
The code is as follows:
$fdf = 'fdf file contents here';

$tempFdfVirtual = fopen("php://memory", 'r+');
if ($tempFdfVirtual) {
    fwrite($tempFdfVirtual, $fdf);
} else {
    echo "Failure to open temporary fdf file";
    exit;
}
rewind($tempFdfVirtual);

$url = "unfilled.pdf";
$temppdf_fn = "output.pdf";
$command = "pdftk $url fill_form $tempFdfVirtual output $temppdf_fn flatten";

$error = "";
exec($command, $error);
if ($error != "") {
    $_SESSION['err'] = $error;
} else {
    $_SESSION['err'] = 0;
}
I am getting an errorcode #1. If I do a stream_get_contents($tempFdfVirtual), it shows the contents.
Thanks for looking!
php://memory and php://temp (and in fact any file descriptor) are only available to the currently running PHP process. Besides, $tempFdfVirtual is a resource handle, so it makes no sense to put it in a string.
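To make that concrete (illustrative sketch, not the original command): a stream resource interpolated into a string degrades to its internal id, so the shell never sees the stream's contents:

```php
<?php
// Interpolating a stream resource into a command string yields only
// "Resource id #N", never the data that was written to the stream.
$tempFdfVirtual = fopen('php://memory', 'r+');
fwrite($tempFdfVirtual, 'fdf file contents here');
$command = "pdftk unfilled.pdf fill_form $tempFdfVirtual output output.pdf flatten";
echo $command, "\n"; // e.g. pdftk unfilled.pdf fill_form Resource id #5 output output.pdf flatten
```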
You should pass the data from your resource handle to the process through its standard input. You can do this with proc_open, which gives you more control over input and output to the child process than exec does.
Note that for some reason, you can't pass a 'php://memory' file descriptor to a process. PHP will complain:
Warning: proc_open(): cannot represent a stream of type MEMORY as a File Descriptor
Use php://temp instead, which is supposed to be exactly the same except it will use a temporary file once the stream gets big enough.
This is a tested example that illustrates the general pattern of code that uses proc_open(). This should be wrapped up in a function or other abstraction:
$testinput = "THIS IS A TEST STRING\n";

$fp = fopen('php://temp', 'r+');
fwrite($fp, $testinput);
rewind($fp);

$cmd = 'cat';
$dspec = array(
    0 => $fp,
    1 => array('pipe', 'w'),
);

$pp = proc_open($cmd, $dspec, $pipes);

// busy-wait until the process has finished running
do {
    usleep(10000);
    $stat = proc_get_status($pp);
} while ($stat and $stat['running']);

if ($stat['exitcode'] === 0) {
    // the index in $pipes will match the index in $dspec;
    // note only descriptors created by proc_open will be in $pipes,
    // i.e. $dspec indexes with an array value.
    $output = stream_get_contents($pipes[1]);
    if ($output == $testinput) {
        echo "TEST PASSED!!";
    } else {
        echo "TEST FAILED!! Output does not match input.";
    }
} else {
    echo "TEST FAILED!! Process has non-zero exit status.";
}

// cleanup
// close pipes first, THEN close the process handle.
foreach ($pipes as $pipe) {
    fclose($pipe);
}

// Only file descriptors created by proc_open() will be in $pipes.
// We still need to close file descriptors we created ourselves and
// passed to it. We can do this before or after proc_close().
fclose($fp);
proc_close($pp);
Untested Example specific to your use of PDFTK:
// The command takes its input from STDIN
$command = "pdftk unfilled.pdf fill_form - output tempfile.pdf flatten";

$descriptorspec = array(
    0 => $tempFdfVirtual, // feed stdin of the process from this file descriptor
    // 1 => array('pipe', 'w'), // note you can also grab stdout from a pipe, no need for a temp file
);

$prochandle = proc_open($command, $descriptorspec, $pipes);

// busy-wait until it finishes running
do {
    usleep(10000);
    $stat = proc_get_status($prochandle);
} while ($stat and $stat['running']);

if ($stat['exitcode'] === 0) {
    // ran successfully
    // the output is in that filename,
    // or in the file handle in $pipes if you told the command to write to stdout
}

// cleanup
foreach ($pipes as $pipe) {
    fclose($pipe);
}
proc_close($prochandle);
It's not just that you're using php://memory; it's any file handle. File handles only exist for the current process. For all intents and purposes, the handle you get back from fopen cannot be transferred anywhere outside of your script.
As long as you're working with an outside application, you're pretty much stuck using temporary files. Your only other option is to try to pass the data to pdftk on stdin and retrieve the output on stdout (if it supports that). As far as I know, the only way to invoke an external process with that kind of access to its descriptors (stdin/stdout) is using the proc_ family of functions, specifically proc_open.