call python script from PHP script - php

I need to call a Python script from a PHP script and return the result back to PHP.
I am playing with the proc_open function, but it does not work. Do you know why?
This is the PHP script:
$msg = "this is a new message \n bla ble !##$%^&*%(*))(_+=-";
$descriptorspec = array(
0 => array("pipe","r"),
1 => array("pipe","w"),
2 => array("file","./error.log","a")
) ;
$cwd = './' ;
$command = 'python ./upper.py ';
$proc = proc_open($command, $descriptorspec, $pipes, $cwd) ;
if ( is_resource( $proc ) ) {
fwrite( $pipes[0], $msg );
fclose( $pipes[0] );
fclose( $pipes[1] );
proc_close($proc);
echo "proc is closed\n";
}
else {
echo 'proc is not a resource';
}
This is the Python upper.py script:
import sys
print 'in python script'
data = sys.stdin.readlines()
print data
The output is:
in php script
proc is closed
I have this error in error.log:
close failed in file object destructor:
sys.excepthook is missing
lost sys.stderr

Just in case the answer still matters to anyone:
The error occurs because you're closing $pipes[1] immediately, before the Python script has a chance to write to it, and quite likely even before it has started running. The error on my system (a Mac) is:
close failed: [Errno 32] Broken pipe
(Just out of curiosity, what type of system are you running on that's giving you that strange message?)
Anyway, you can avoid the error by reading the pipe before closing it, e.g.,
stream_get_contents($pipes[1]);
which guarantees that the Python script will get a chance to write to the pipe (at least once) while the pipe is still open.
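For example, a minimal corrected sketch of the if block from the question (reusing the $command, $descriptorspec, $msg and $cwd variables from above) would be:
$proc = proc_open($command, $descriptorspec, $pipes, $cwd);
if (is_resource($proc)) {
    fwrite($pipes[0], $msg);
    fclose($pipes[0]);
    // Read stdout before closing it, so the Python script
    // gets a chance to write while the pipe is still open
    $output = stream_get_contents($pipes[1]);
    fclose($pipes[1]);
    proc_close($proc);
    echo "python said: $output\n";
}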
If you are the author of the python script, there's not much point in having it write output unless your PHP script is going to read what it writes.
On the other hand, if the python script isn't of your making, and it writes output that you don't care about, you might be able to avoid reading the output by playing some games with pcntl_wait() or pcntl_waitpid() -- though I wouldn't bet on it. A safer option is probably to just read the output streams and throw them on the floor.
(Kind of raises an interesting question, whether or not you care about the output: if you don't know a priori what the end of the output will look like, and the subprocess won't exit until you close the pipes and call proc_close(), how will you know when the subprocess has actually finished doing what you called it to do?)
Another option (untested by me so far) might be to connect any output streams you don't care about to a descriptor like:
array("file", "/dev/null", "w")
or whatever the equivalent of /dev/null is on your system.
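For instance, a sketch of the question's descriptor spec with the child's stdout discarded (assuming a Unix-like system with /dev/null):
$descriptorspec = array(
    0 => array("pipe", "r"),              // stdin: we still write the message here
    1 => array("file", "/dev/null", "w"), // stdout: discarded by the OS, nothing to read or close
    2 => array("file", "./error.log", "a")
);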

Related

How to pipe to PHP process using the proc_open() function?

In this case there are two PHP files, stdin.php (the child process component) and proc_open.php (the parent process component), both stored in the same folder of the public_html of a domain. There is also Program X, which pipes data into stdin.php.
stdin.php (this component works)
It is a process that should not be executed through a browser, because it is intended to receive input from Program X and back it all up in a file (named stdin_backup). This process is working, because every time Program X pipes input, the process backs it up entirely. If this process is executed without input being passed (as is the case when executing it from a browser), the process creates a file (named stdin_error) with the text "ERROR".
Below, part of the code is omitted because the process works (as mentioned above). The code shown is just to illustrate:
#!/usr/bin/php -q
<?php
// Get the input
$fh_stdin = fopen('php://stdin', 'rb');
if (is_resource($fh_stdin)) {
    // ... Code that backs up STDIN in a file ...
} else {
    // ... Code that logs "ERROR" in a file ...
}
?>
proc_open.php (this component isn't working)
It is a process that must be executed through a browser, and is intended to pass input to stdin.php, as Program X does. This process is failing: every time it is executed, there is no sign of stdin.php being executed (no stdin_error file, no stdin_backup file, not even a PHP error_log file).
Code:
// Execute a PHP process and pipe input to it
// - Specify process command
$process_path = __DIR__ . '/stdin.php';
$process_execution_command = 'php ' . $process_path;
// - Specify process descriptor
$process_descriptor = array(
    0 => array('pipe', 'r') // To child's STDIN
);
// - Specify pipes container
$pipes = [];
// - Open process
$process = proc_open($process_execution_command, $process_descriptor, $pipes);
if (is_resource($process)) {
    // - Send data to the process STDIN
    $process_STDIN = 'Data to be received by the process STDIN';
    fwrite($pipes[0], $process_STDIN);
    fclose($pipes[0]);
    // - Close process
    $process_termination_status = proc_close($process);
}
I am not sure the command passed to proc_open() is correct, because I have not found examples for this case, and as mentioned above, this script is failing. I don't know what else could be incorrect in proc_open.php.
Also, when I execute the proc_open.php process, an infinite loop occurs, printing the following string over and over:
X-Powered-By: PHP/5.5.20 Content-type: text/html; charset=UTF-8
I tried popen('php ' . __DIR__ . '/stdin.php', 'w') instead, and had exactly the same result: the same infinite loop printing the string above, no errors, no logs, and no sign of stdin.php executing.
If I understand your question correctly, you want to open a process and write data into that process' STDIN stream. You can use the proc_open function for that:
$descriptors = array(
    0 => array("pipe", "r"), // STDIN
    1 => array("pipe", "w"), // STDOUT
    2 => array("pipe", "w")  // STDERR
);
$proc = proc_open("php script2.php", $descriptors, $pipes);

fwrite($pipes[0], "Your data here...");
fclose($pipes[0]);

$stdout = stream_get_contents($pipes[1]);
$stderr = stream_get_contents($pipes[2]);
fclose($pipes[1]);
fclose($pipes[2]);

$exitCode = proc_close($proc);
If you simply want to test your 2nd script, it would probably be easier to use a shell command:
$ echo "your data" | php script2.php
Or alternatively,
$ php script2.php < datafile.txt
Update, accounting for your edit to the question
When using the popen function, you can either open the process for reading or writing. That allows you to read the process' STDOUT stream or write into the STDIN stream (but not both; if that's a requirement, you'll need to use proc_open). If you want to write into the STDIN stream, specify "w" as 2nd parameter to popen to open the process for writing:
$fh_pipe = popen(
    'php script1.php',
    'w' // <- "w", not "r"!
);
fwrite($fh_pipe, 'EMAIL TEXT');
pclose($fh_pipe);

proc_open: Extending file descriptor numbers to enable "status" feedback from a Perl script

PHP's proc_open manual states:
The file descriptor numbers are not limited to 0, 1 and 2 - you may specify any valid file descriptor number and it will be passed to the child process. This allows your script to interoperate with other scripts that run as "co-processes". In particular, this is useful for passing passphrases to programs like PGP, GPG and openssl in a more secure manner. It is also useful for reading status information provided by those programs on auxiliary file descriptors.
What Happens: I call a Perl script in a PHP-based web application and pass parameters in the call. I have no future need to send data to the script. Through stdout [1] I receive json_encoded data from the Perl script that I use in my PHP application.
What I would like to add: The Perl script is progressing through a website, collecting information depending on the parameters passed in its initial call. I would like to send back to the PHP application a text string that I could use to display as a sort of progress bar.
How I think I should do it: I would expect to poll (every 1-2 seconds) the channel that has been set up for that "progression" update. I would use Javascript / jQuery to write into an HTML div container for the user to view. I do not think I should mix the "progress" channel with the more critical "json_encode(data)" channel, as I would then need to decipher the stdout stream. (Is this thought logical, practical?)
My Main Question: How do you use additional "file descriptors"? I would imagine the setup of additional channels to be straightforward, such as the 3 => ... in the code below:
$tunnels = array(
    0 => array('pipe', 'r'),
    1 => array('pipe', 'w'),
    2 => array('pipe', 'w'),
    3 => array('pipe', 'w')
);
$io = array();

$resource = proc_open("perl file/tomy/perl/code.pl $param1 $param2 $param3", $tunnels, $io);
if (!is_resource($resource)) {
    $error = "No Resource";
}

fclose($io[0]);

$perlOutput = stream_get_contents($io[1]);
$output = json_decode($perlOutput);
$errors = stream_get_contents($io[2]);
print "$errors<p>";

fclose($io[1]);
fclose($io[2]);

$result = proc_close($resource);
if ($result != 0) {
    echo "you returned a $result result on proc_close";
}
But in the Perl script I simply write to stdout, like:
my $json_terms = encode_json(\@terms);
print $json_terms;
If my understanding of setting up an additional channel is correct (above, the 3 => ...), then how would I write to it within the Perl script?
Thanks
Say you want to monitor the progress of a hello-world program, where each step is a dot written to the designated file descriptor.
#! /usr/bin/env perl

use warnings;
use strict;

die "Usage: $0 progress-fd\n" unless @ARGV == 1;
my $fd = shift;
open my $progress, ">&=", $fd or die "$0: dup $fd: $!";

# disable buffering on both handles
for ($progress, *STDOUT) {
    select $_;
    $| = 1;
}

my $output = "Hello, world!\n";
while ($output =~ s/^(.)(.*)\z/$2/s) {
    my $next = $1;
    print $next;
    print $progress ".";
    sleep 1;
}
Using bash syntax to open fd 3 on /tmp/progress and connect it to the program is
$ (exec 3>/tmp/progress; ./hello-world 3)
Hello, world!
$ cat /tmp/progress
..............
(It’s more amusing to watch live.)
To also see the dots on your terminal as they emerge, you could open your progress descriptor and effectively dup2 it onto the standard error—again using bash syntax and more fun in real time.
$ (exec 17>/dev/null; exec 17>&2; ./hello-world 17)
H.e.l.l.o.,. .w.o.r.l.d.!.
.
You could of course skip the extra step with
$ (exec 17>&2; ./hello-world 17)
to get the same effect.
If your Perl program dies with an error such as
$ ./hello-world 333
./hello-world: dup 333: Bad file descriptor at ./hello-world line 9.
then the write end of your pipe on the PHP side probably has its close-on-exec flag set.
You open a new filehandle and dup it to file descriptor 3:
open STD3, '>&3';
print STDERR "foo\n";
print STD3 "bar\n";
$ perl script.pl 2> file2 3> file3
$ cat file2
foo
$ cat file3
bar
Edit: per Greg Bacon's comment, open STD3, '>&=', 3 or open STD3, '>&=3' opens the file descriptor directly, like C's fdopen call, avoiding a dup call and saving you a file descriptor.
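On the PHP side, here is an untested sketch of how the extra descriptor might be polled (my assumption, reusing the $tunnels/$io setup from the question, so the progress pipe is $io[3]):
// After the proc_open() call from the question:
stream_set_blocking($io[3], false); // don't block if no progress has arrived yet

$status = proc_get_status($resource);
while ($status['running']) {
    $dots = fread($io[3], 8192); // whatever the Perl script printed to fd 3 so far
    if ($dots !== false && $dots !== '') {
        echo $dots; // or store it somewhere an AJAX poll can pick it up
        flush();
    }
    usleep(500000); // poll twice per second
    $status = proc_get_status($resource);
}
fclose($io[3]);
Note that if the Perl script writes a lot to stdout or stderr before exiting, you may also need to drain $io[1] and $io[2] inside the loop so the child doesn't block on a full pipe buffer.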

how to handle infinite loop process when using proc_open

I use proc_open to execute a program written in C.
I am using a file for stdout.
$descriptorspec = array(
    0 => array("pipe", "r"),
    1 => array("file", "/tmp/example.output", "w"),
    2 => array("file", "/tmp/example.error", "a")
);
Everything is fine when I execute a well-behaved program, but a problem occurs when I execute a program with an infinite loop, like the code below:
#include "stdio.h"
int main(){
while(1){
printf("Example");
}
return 0
}
The file example.output will fill my hard disk, so I need to delete the file and restart my computer. My question is: how do I handle something like this?
Thanks :)
The only thing you can do is slaughter the offending process without prejudice using proc_terminate (but you can be polite and allow it to run for a while first, in effect imposing a time limit for it to complete).
For example:
$proc = proc_open(...);
sleep(20); // give the process some time to run
$status = proc_get_status($proc); // see what it's doing
if ($status['running']) {
    proc_terminate($proc); // kill it forcefully
}
Don't forget to clean up any handles you still have in your hands afterwards.
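For instance, a sketch of that idea that polls instead of sleeping for a fixed time (the 20-second deadline is an arbitrary assumption, and $command/$descriptorspec are the question's):
$proc = proc_open($command, $descriptorspec, $pipes);

$deadline = time() + 20; // arbitrary time limit
do {
    usleep(250000); // check four times per second
    $status = proc_get_status($proc);
} while ($status['running'] && time() < $deadline);

if ($status['running']) {
    proc_terminate($proc); // still running past the deadline: kill it
}

fclose($pipes[0]); // clean up remaining handles
proc_close($proc);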

PHP - Blocking File Read

I have a file that is being appended to remotely (file.txt). From SSH, I can call tail -f file.txt, which will display the updated contents of the file. I'd like to be able to make a blocking call to this file that returns the last appended line. A polling loop simply isn't an option. Here's what I'd like:
$cmd = "tail -f file.txt";
$str = exec($cmd);
The problem with this code is that tail will never return. Is there any kind of wrapper function for tail that will kill it once it has returned content? Is there a better way to do this with low overhead?
The only solution I've found is somewhat dirty:
<?php
$descriptorspec = array(
    0 => array("pipe", "r"), // stdin
    1 => array("pipe", "w"), // stdout
    2 => array("pipe", "w")  // stderr
);
$process = proc_open('tail -f -n 0 /tmp/file.txt', $descriptorspec, $pipes);

fclose($pipes[0]);
stream_set_blocking($pipes[1], 1);
$read = fgets($pipes[1]);
fclose($pipes[1]);
fclose($pipes[2]);

// If I try to call proc_close($process); here, it fails / hangs until a second
// line is passed to the file. Hence an inelegant kill in the next 2 lines:
$status = proc_get_status($process);
exec('kill ' . $status['pid']);
proc_close($process);

echo $read;
tail -n 1 file.txt will always return the last line in the file, but I'm almost sure what you want instead is for PHP to know when file.txt has a new line, and display it, all without polling in a loop.
You will need a long-running process anyway if it is to check for new content, be it with a polling loop that checks the file modification time and compares it to the last modification time saved somewhere else, or any other way.
You can even have PHP run via cron to do the check if you don't want it running in a PHP loop (probably best), or via a shell script that does the loop and calls the PHP file if you need runs more frequent than cron's one-minute limit.
Another idea, though I haven't tried it, would be to open the file in a non-blocking stream and then use the quite efficient stream_select on it to have the system poll for changes.
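For what it's worth, here is an untested sketch combining that stream_select idea with the tail -f pipe from the answer above, so the read times out instead of blocking forever in fgets (the 30-second timeout is an arbitrary assumption):
$descriptorspec = array(
    0 => array("pipe", "r"),
    1 => array("pipe", "w"),
    2 => array("pipe", "w")
);
$process = proc_open('tail -f -n 0 /tmp/file.txt', $descriptorspec, $pipes);
fclose($pipes[0]);

$read = array($pipes[1]);
$write = null;
$except = null;
// Wait up to 30 seconds for tail to produce a new line
if (stream_select($read, $write, $except, 30) > 0) {
    echo fgets($pipes[1]);
}

// Kill tail as in the answer above, then close everything
$status = proc_get_status($process);
exec('kill ' . $status['pid']);
fclose($pipes[1]);
fclose($pipes[2]);
proc_close($process);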

how to redirect STDOUT to a file in PHP?

The code below almost works, but it's not what I really meant:
ob_start();
echo 'xxx';
$contents = ob_get_contents();
ob_end_clean();
file_put_contents($file,$contents);
Is there a more natural way?
It is possible to write STDOUT directly to a file in PHP, which is much easier and more straightforward than using output buffering.
Do this in the very beginning of your script:
fclose(STDIN);
fclose(STDOUT);
fclose(STDERR);
$STDIN = fopen('/dev/null', 'r');
$STDOUT = fopen('application.log', 'wb');
$STDERR = fopen('error.log', 'wb');
Why at the very beginning, you may ask? No file descriptors should be open yet, because when you close the standard input, output and error file descriptors, the first three new descriptors will become the NEW standard input, output and error file descriptors.
In my example here I redirected standard input to /dev/null and the output and error file descriptors to log files. This is common practice when making a daemon script in PHP.
To write to the application.log file, this would suffice:
echo "Hello world\n";
To write to the error.log, one would have to do:
fwrite($STDERR, "Something went wrong\n");
Please note that when you change the input, output and error descriptors, the built-in PHP constants STDIN, STDOUT and STDERR will be rendered unusable. PHP will not update these constants to the new descriptors, and it is not allowed to redefine these constants (they are called constants for a reason, after all).
Here's a way to divert output, which appears to be the original problem:
$ob_file = fopen('test.txt', 'w');

function ob_file_callback($buffer)
{
    global $ob_file;
    fwrite($ob_file, $buffer);
}

ob_start('ob_file_callback');
more info here:
http://my.opera.com/zomg/blog/2007/10/03/how-to-easily-redirect-php-output-to-a-file
None of the answers worked for my particular case, where I needed a cross-platform way of redirecting the output as soon as it was echoed out, so that I could follow the logs with tail -f log.txt or another log-viewing app.
I came up with the following solution:
$logFp = fopen('log.txt', 'w');
ob_start(function($buffer) use($logFp){
    fwrite($logFp, $buffer);
}, 1); // notice the use of chunk_size == 1

echo "first output\n";
sleep(10);
echo "second output\n";

ob_end_clean();
I haven't noticed any performance issues but if you do, you can change chunk_size to greater values.
Now just tail -f the log file:
tail -f log.txt
No, output buffering is as good as it gets. Though it's slightly nicer to just do
ob_start();
echo 'xxx';
$contents = ob_get_clean();
file_put_contents($file,$contents);
Using the eio PECL module is very easy, and you can also capture PHP internal errors, var_dump, echo, etc. In this code, you can find examples of different situations.
$fdout = fopen('/tmp/stdout.log', 'wb');
$fderr = fopen('/tmp/stderr.log', 'wb');
eio_dup2($fdout, STDOUT);
eio_dup2($fderr, STDERR);
eio_event_loop();
fclose($fdout);
fclose($fderr);
// output examples
echo "message to stdout\n";
$v2dump = array(10, "graphinux");
var_dump($v2dump);
// php internal error/warning
$div0 = 10/0;
// user errors messages
fwrite(STDERR, "user controlled error\n");
The call to eio_event_loop is used to make sure that previous eio requests have been processed. If you need to append to the log, use mode 'ab' instead of 'wb' in the fopen call.
Installing the eio module is very easy (http://php.net/manual/es/eio.installation.php). I tested this example with version 1.2.6 of the eio module.
You can install the Eio extension
pecl install eio
and duplicate a file descriptor:
$temp = fopen('/tmp/my_stdout', 'a');
$my_data = 'my something';
$foo = eio_dup2($temp, STDOUT, EIO_PRI_MAX, function($data, $result, $request) {
    var_dump($data, $result, $request);
    var_dump(eio_get_last_error($request));
}, $my_data);
eio_event_loop();

echo "something to stdout\n";
fclose($temp);
This creates a new file descriptor and rewrites the target stream of STDOUT. This can be done with STDERR as well, and the constants STDOUT and STDERR remain usable.
I understand that this question is ancient, but people trying to do what this question asks will likely end up here... Both of you.
If you are running under a particular environment:
Running under Linux (probably most other Unix-like operating systems, untested)
Running via CLI (untested on web servers)
then you can actually close all of your file descriptors (yes, all of them, which means it's probably best to do this at the very beginning of execution, for example just after a pcntl_fork() call to background the process as a daemon, which seems like the most common need for something like this):
fclose( STDIN );  // fd 0
fclose( STDOUT ); // fd 1
fclose( STDERR ); // fd 2
And then re-open the file descriptors, assigning them to a variable that will not fall out of scope and thus be garbage collected. Linux will predictably reassign them in order, lowest free descriptor first.
$kept_in_scope_variable_fd0 = fopen( '/dev/null', 'r' ); // becomes fd 0 (STDIN)
$kept_in_scope_variable_fd1 = fopen(...); // becomes fd 1 (STDOUT)
$kept_in_scope_variable_fd2 = fopen(...); // becomes fd 2 (STDERR)
You can use whatever files or devices you want for this. I gave /dev/null as the example for STDIN (fd 0) because that's probably the most common case for this kind of code.
Once this is done you should be able to do normal things like echo, print_r, var_dump, etc without specifically needing to write to a file with a function. Which is useful when you're trying to background code that you do not want to, or aren't able to, rewrite to be file-pointer-output-friendly.
YMMV for other environments and things like having other FD's open, etc. My advice is to start with a small test script to prove that it works, or doesn't, in your environment and then move on to integration from there.
Good luck.
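As a starting point, a minimal test script might look like this (the log file paths are placeholders of my choosing):
// test_redirect.php -- run from the CLI: php test_redirect.php
fclose(STDIN);
fclose(STDOUT);
fclose(STDERR);

// Linux reassigns the lowest free descriptor numbers in order: 0, 1, 2
$stdin  = fopen('/dev/null', 'r');     // becomes fd 0 (STDIN)
$stdout = fopen('/tmp/out.log', 'wb'); // becomes fd 1 (STDOUT)
$stderr = fopen('/tmp/err.log', 'wb'); // becomes fd 2 (STDERR)

echo "this should land in /tmp/out.log\n";
fwrite($stderr, "this should land in /tmp/err.log\n");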
Here is an ugly solution that was useful for a problem I had (a need to debug):
if (file_get_contents("out.txt") != "in progress")
{
    file_put_contents("out.txt", "in progress");
    $content = file_get_contents('http://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI']);
    file_put_contents("out.txt", $content);
}
The main drawback is that you'd better not use the $_POST variables.
But you don't have to put it at the very beginning.
