PHP proc_open() and Stockfish chess engine anomaly: depth 1 - php

I am trying to integrate the Stockfish chess engine into a PHP CLI script.
There is an unexpected behavior: when called from PHP, the stockfish program quits immediately without "thinking" and returns only the result at depth 1.
For comparison, here is the expected behavior when running the stockfish program from the command line (gif):
From PHP, the following runs (starting position, White to play, asking for depth 50), but it returns the move a2a3, the depth-1 result, which is a pretty bad move!
The answer is instantaneous, whereas searching through all the depth levels should take at least several seconds.
The behavior is identical with any FEN position: it always returns the depth-1 move.
$descr = array(
    0 => array("pipe", "r"),
    1 => array("pipe", "w"),
    2 => array("pipe", "w")
);
$pipes = array();
$process = proc_open("stockfish", $descr, $pipes);
if (is_resource($process)) {
    fwrite($pipes[0], "uci\n");
    fwrite($pipes[0], "ucinewgame\n");
    fwrite($pipes[0], "position fen rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1\n");
    fwrite($pipes[0], "go depth 50\n");
    fclose($pipes[0]);
    // Read all output from the pipe
    while (!feof($pipes[1])) {
        echo fgets($pipes[1]);
    }
    fclose($pipes[1]);
    proc_close($process);
}
// RETURN last line: bestmove a2a3
Stockfish versions 8, 9 and 10 have all been tested, with the same result.
I tried many options and different ways to run shell commands from PHP, including posix_mkfifo() piping, but none work as expected; they always return a move at depth 1.
Another example with the same behavior; it always returns "a2a3".
file_put_contents(".COMFISH", "uci\nucinewgame\nsetoption name Threads value 1\nposition fen rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1\ngo depth 50\n");
$COM = explode(" ",system("stockfish < .COMFISH"))[1];
var_dump($COM);
// RETURN a2a3
This might be directly linked to how the stockfish binary is written (multi-threading) rather than to PHP's behavior, but I am looking for an explanation here.
From wiki:
Stockfish can use up to 512 CPU threads in multiprocessor systems.

Well, this was fairly simple: the stdin pipe was closed too early. Stockfish stops its search as soon as its standard input reaches end-of-file and reports the best move found so far, which is why the depth-1 move comes back instantly.
Leaving the base of a working code here for future readers.
$descr = [0 => ["pipe", "r"], 1 => ["pipe", "w"]];
$pipes = [];
$process = proc_open("stockfish", $descr, $pipes);
if (is_resource($process)) {
    fwrite($pipes[0], "uci\n");
    fwrite($pipes[0], "ucinewgame\n");
    fwrite($pipes[0], "position fen rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1\n");
    fwrite($pipes[0], "setoption name Skill Level value 17\n"); // Set level between 0 and 20
    fwrite($pipes[0], "go movetime 5000\n"); // Return bestmove after 5 seconds
    // Keep stdin open while reading, and tell the engine to quit once it
    // has reported its move, so the loop terminates on EOF
    while (!feof($pipes[1])) {
        $line = fgets($pipes[1]);
        if ($line === false) break;
        echo $line;
        if (strpos($line, "bestmove") === 0) {
            fwrite($pipes[0], "quit\n");
        }
    }
    fclose($pipes[0]);
    fclose($pipes[1]);
    proc_close($process);
}
// RETURN last line: bestmove e2e4 ponder e7e6
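For callers that want progress information rather than just the final move, the engine's intermediate info lines (defined by the UCI protocol) can be parsed as they arrive. A minimal sketch along the same lines as the code above, with error handling omitted:
$descr = [0 => ["pipe", "r"], 1 => ["pipe", "w"]];
$process = proc_open("stockfish", $descr, $pipes);
if (is_resource($process)) {
    fwrite($pipes[0], "uci\nucinewgame\n");
    fwrite($pipes[0], "position startpos\ngo movetime 5000\n");
    // Collect the last-reported depth, centipawn score and principal
    // variation from the UCI "info" lines while waiting for "bestmove"
    $depth = $score = $pv = null;
    while (($line = fgets($pipes[1])) !== false) {
        if (preg_match('/\bdepth (\d+)/', $line, $m))      $depth = (int)$m[1];
        if (preg_match('/\bscore cp (-?\d+)/', $line, $m)) $score = (int)$m[1];
        if (preg_match('/\bpv (.+)/', $line, $m))          $pv    = trim($m[1]);
        if (strpos($line, "bestmove") === 0) {
            echo "depth=$depth score(cp)=$score pv=$pv\n";
            echo $line;
            fwrite($pipes[0], "quit\n");
            break;
        }
    }
    fclose($pipes[0]);
    fclose($pipes[1]);
    proc_close($process);
}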

Related

Run Node.js script from PHP - output is truncated to 512 characters

We run a Node.js CLI script from PHP with Symfony Process.
The script always prints the whole response as JSON on one line.
The response is somehow truncated at 512 characters.
The only possibly related setting I found is xdebug.var_display_max_data => 512 in php.ini, but I don't see how this is related.
Adapter > Symfony Process > node script.js
A) Test Node script
From the terminal, $ node user-update.js parameters returns the full result in all cases - e.g. 629 chars.
From Symfony Process, the node script's response is truncated to 512 chars.
B) Test Symfony Process
$process = new Process($cmd);
try {
    $process->mustRun();
    $response = $process->getOutput();
} catch (ProcessFailedException $e) {
    $response = $e->getMessage();
}
echo $response;
echo PHP_EOL;
echo strlen($response);
$cmd = 'node user-update.js parameters'; - truncated to 512.
$cmd = 'php -r \'for($i=0; $i<520; $i++){ echo "."; }\''; - does not truncate.
$cmd = 'cat long_one_line.txt'; - print full file. 1650 chars in one line.
C) Try with PHP shell functions
$response = shell_exec($cmd); // response is truncated to 512
system($cmd, $returnVal); // prints directly to STDOUT, truncated to 512
What could be the cause and solution?
node v7.6.0
PHP 7.1.2
I suspect your process is ending before the buffer can be read by PHP.
As a work-around you can add something like this:
// The `| cat` at the end of this line means we wait for
// cat's process to end instead of node's process.
$process = new Process('node user-update.js parameters | cat');
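Another possible work-around, sketched with plain proc_open instead of Symfony Process (untested against this exact setup): read the child's stdout until EOF, so PHP drains the whole buffer even after node exits.
$descr = array(1 => array("pipe", "w")); // capture stdout only
$proc = proc_open('node user-update.js parameters', $descr, $pipes);
if (is_resource($proc)) {
    // stream_get_contents() blocks until EOF, i.e. until node has
    // exited and its output has been read in full
    $response = stream_get_contents($pipes[1]);
    fclose($pipes[1]);
    proc_close($proc);
    echo strlen($response), PHP_EOL;
}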

php - Piping input to perl process automatically decodes url-encoded string

I'm using proc_open to pipe some text over to a perl script for faster processing. The text includes url-encoded strings as well as literal spaces. When a url-encoded space appears in the raw text, it seems to be decoded into a literal space by the time it reaches the perl script. In the perl script, I rely on the positioning of the literal spaces, so these unwanted spaces mess up my output.
Why is this happening, and is there a way to prevent it from happening?
Relevant code snippet:
$descriptorspec = array(
    0 => array("pipe", "r"),
    1 => array("pipe", "w"),
);
$cmd = "perl script.pl";
$process = proc_open($cmd, $descriptorspec, $pipes);
$output = "";
if (is_resource($process)) {
    fwrite($pipes[0], $raw_string);
    fclose($pipes[0]);
    while (!feof($pipes[1])) {
        $output .= fgets($pipes[1]);
    }
    fclose($pipes[1]);
    proc_close($process);
}
and a line of raw text input looks something like this:
key url\tvalue1\tvalue2\tvalue3
I might be able to avoid the issue by converting the formatting of my input, but for various reasons that is undesirable, and it circumvents, rather than solves, the key issue.
Furthermore, I know that the issue is occurring somewhere between the PHP script and the perl script, because I have examined the raw text (with an echo) immediately before writing it to the perl script's STDIN pipe, and I have tested my perl script directly on url-encoded raw strings.
I've now added the perl script below. It basically boils down to a mini map-reduce job.
use strict;
my %rows;
while (<STDIN>) {
    chomp;
    my @line = split(/\t/);
    my $key = $line[0];
    if (defined $rows{$key}) {
        for my $i (1..$#line) {
            $rows{$key}->[$i-1] += $line[$i];
        }
    } else {
        my @new_row;
        for my $i (1..$#line) {
            push(@new_row, $line[$i]);
        }
        $rows{$key} = [ @new_row ];
    }
}
my %newrows;
for my $key (keys %rows) {
    my @temparray = split(/ /, $key);
    pop(@temparray);
    my $newkey = join(" ", @temparray);
    if (defined $newrows{$newkey}) {
        for my $i (0..$#{ $rows{$key} }) {
            $newrows{$newkey}->[$i] += $rows{$key}->[$i] > 0 ? 1 : 0;
        }
    } else {
        my @new_row;
        for my $i (0..$#{ $rows{$key} }) {
            push(@new_row, $rows{$key}->[$i] > 0 ? 1 : 0);
        }
        $newrows{$newkey} = [ @new_row ];
    }
}
for my $key (keys %newrows) {
    print "$key\t", join("\t", @{ $newrows{$key} }), "\n";
}
Note to self: always check your assumptions. It turns out that somewhere in my hundreds of millions of lines of input there were, in fact, literal spaces where there should have been url-encoded spaces. It took a while to find them, since there were hundreds of millions of correct literal spaces, but there they were.
Sorry guys!
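For anyone hunting a similar needle: a hypothetical checker (the input file name is made up; the rule that the key field "key url" contains exactly one literal space follows from the format shown above) that would flag the rogue lines:
// Flag lines whose first tab-separated field ("key url") does not contain
// exactly one literal space - extra spaces mean a url was not encoded
$fh = fopen('input.tsv', 'r'); // hypothetical input file
$n = 0;
while (($line = fgets($fh)) !== false) {
    $n++;
    $keyField = strtok($line, "\t");
    if (substr_count($keyField, ' ') !== 1) {
        echo "suspect line $n: $keyField\n";
    }
}
fclose($fh);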

How to parse Ruby's YAML in PHP (sfYaml and Spyc won't do it right)

I have values like the following in MySQL (created by ChiliProject):
---
author_id:
- 0
- 1
status_id:
- 0
- 1
subject:
- ""
- !binary |
    0KHQtNC10LvQsNGC0Ywg0LPRgNCw0LzQvtGC0L3Ri9C5INCy0L3QtdGI0L3Q
    uNC5INCy0LjQtCDQtNC70Y8g0LjQvNC10Y7RidC10LPQvtGB0Y8=
start_date:
-
- 2012-04-30
priority_id:
- 0
- 4
tracker_id:
- 0
- 2
description:
-
- ""
project_id:
- 0
- 2
created_on:
-
- 2012-04-30 17:51:08.596410 +04:00
sfYaml says: Unable to parse at line 11 (near " 0KHQtNC10LvQsNGC0Ywg0LPRgNCw0LzQvtGC0L3Ri9C5INCy0L3QtdGI0L3Q").
Spyc adds the "-" items at the same level as author_id, status_id and so on. That looks reasonable (because there is no indentation), but Ruby's YAML interprets it correctly. Spyc also ignores the base64.
Aren't sfYaml and Spyc reliable enough?
Any suggestions what to do? Which parser or trick could I use to work with this database from PHP?
Here's my solution:
RubyYaml.php:
<?php
class RubyYaml
{
    public static function parse($data)
    {
        $descriptorSpec = array(
            0 => array("pipe", "r"), // stdin
            1 => array("pipe", "w"), // stdout
            //2 => array("pipe", "w"), // stderr
        );
        $process = proc_open('ruby '.__DIR__.'/yaml2json.rb', $descriptorSpec, $pipes);
        if (!is_resource($process))
            throw new CException('Cannot start YAML parser');
        fwrite($pipes[0], $data);
        fclose($pipes[0]);
        $json = stream_get_contents($pipes[1]);
        fclose($pipes[1]);
        proc_close($process);
        $result = json_decode($json, true);
        if ($result === null) // Don't your YAMLs contain plain NULL ever?
            throw new CException('YAML parsing failed: '.$json);
        return $result;
    }
}
yaml2json.rb:
require "json"
require 'yaml'
def recursion(v)
if v.class == String
v.force_encoding('utf-8')
elsif v.class == Array
v.each do |vv|
recursion(vv)
end
elsif v.class == Hash
v.each do |k, vv|
recursion(vv)
end
end
end
thing = YAML.load(STDIN.read)
recursion(thing)
puts thing.to_json
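A minimal usage sketch, assuming RubyYaml.php and yaml2json.rb sit in the same directory, ruby is on the PATH, and a CException class is available (it comes from Yii here):
require __DIR__ . '/RubyYaml.php';

// A document shaped like the ChiliProject rows above
$yaml = "---\nauthor_id:\n- 0\n- 1\n";
$data = RubyYaml::parse($yaml);
var_dump($data); // array('author_id' => array(0, 1))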

Using zlib filter with a socket pair

For some reason, the zlib.deflate filter doesn't seem to work with socket pairs generated by stream_socket_pair(). All that can be read from the second socket is the two-byte zlib header; everything after that is NULL.
Example:
<?php
list($in, $out) = stream_socket_pair(STREAM_PF_UNIX,
                                     STREAM_SOCK_STREAM,
                                     STREAM_IPPROTO_IP);
$params = array('level' => 6, 'window' => 15, 'memory' => 9);
stream_filter_append($in, 'zlib.deflate', STREAM_FILTER_WRITE, $params);
stream_set_blocking($in, 0);
stream_set_blocking($out, 0);

fwrite($in, 'Some big long string.');
$compressed = fread($out, 1024);
var_dump($compressed);

fwrite($in, 'Some big long string, take two.');
$compressed = fread($out, 1024);
var_dump($compressed);

fwrite($in, 'Some big long string - third time is the charm?');
$compressed = fread($out, 1024);
var_dump($compressed);
Output:
string(2) "x�"
string(0) ""
string(0) ""
If I comment out the call to stream_filter_append(), the stream writing/reading functions correctly, with the data being dumped in its entirety all three times, and if I direct the zlib filtered stream into a file instead of through the socket pair, the compressed data is written correctly. So both parts function correctly separately, but not together. Is this a PHP bug that I should report, or an error on my part?
This question is branched from a solution to this related question.
I worked on the PHP source code and found a fix.
To understand what happens, I traced the code during a loop like
....
for ($i = 0; $i < 3; $i++) {
    fwrite($s[0], ...);
    fread($s[1], ...);
    fflush($s[0]);
    fread($s[1], ...);
}
and found that the deflate function is never called with the Z_SYNC_FLUSH flag set, because no new data are present in the buckets_in brigade.
My fix handles the case where the PSFS_FLAG_FLUSH_INC flag is set AND no iterations are performed on the deflate function, by extending
if (flags & PSFS_FLAG_FLUSH_CLOSE) {
to manage FLUSH_INC too:
if (flags & PSFS_FLAG_FLUSH_CLOSE || (flags & PSFS_FLAG_FLUSH_INC && to_be_flushed)) {
This downloadable patch is for the Debian squeeze version of PHP, but the current git version of the file is close to it, so I suppose porting the fix is simple (a few lines).
If any side effects arise, please contact me.
Looking through the C source code, the problem is that the filter always lets zlib's deflate() function decide how much data to accumulate before producing compressed output. The deflate filter does not create a new data bucket to pass on unless deflate() outputs some data (see line 235) or the PSFS_FLAG_FLUSH_CLOSE flag bit is set (line 250). That's why you only see the header bytes until you close $in; the first call to deflate() outputs the two header bytes, so data->strm.avail_out is 2 and a new bucket is created for these two bytes to pass on.
Note that fflush() does not work because of a known issue with the zlib filter. See: Bug #48725 Support for flushing in zlib stream.
Unfortunately, there does not appear to be a nice work-around to this. I started writing a filter in PHP by extending php_user_filter, but quickly ran into the problem that php_user_filter does not expose the flag bits, only whether flags & PSFS_FLAG_FLUSH_CLOSE (the fourth parameter to the filter() method, a boolean argument commonly named $closing). You would need to modify the C sources yourself to fix Bug #48725. Alternatively, re-write it.
Personally I would consider re-writing it, because there seem to be a few eyebrow-raising issues with the code:
status = deflate(&(data->strm), flags & PSFS_FLAG_FLUSH_CLOSE ? Z_FULL_FLUSH : (flags & PSFS_FLAG_FLUSH_INC ? Z_SYNC_FLUSH : Z_NO_FLUSH)); seems odd because when writing, I don't know why flags would be anything other than PSFS_FLAG_NORMAL. Is it possible to write & flush at the same time? In any case, handling the flags should be done outside of the while loop through the "in" bucket brigade, like how PSFS_FLAG_FLUSH_CLOSE is handled outside of this loop.
Line 221, the memcpy to data->strm.next_in seems to ignore the fact that data->strm.avail_in may be non-zero, so the compressed output might skip some data of a write. See, for example, the following text from the zlib manual:
If not all input can be processed (because there is not enough room in the output buffer), next_in and avail_in are updated and processing will resume at this point for the next call of deflate().
In other words, it is possible that avail_in is non-zero.
The if statement on line 235, if (data->strm.avail_out < data->outbuf_len) should probably be if (data->strm.avail_out) or perhaps if (data->strm.avail_out > 2).
I'm not sure why *bytes_consumed = consumed; isn't *bytes_consumed += consumed;. The example streams at http://www.php.net/manual/en/function.stream-filter-register.php all use += to update $consumed.
EDIT: *bytes_consumed = consumed; is correct. The standard filter implementations all use = rather than += to update the size_t value pointed to by the fifth parameter. Also, even though $consumed += ... on the PHP side effectively translates to += on the size_t (see lines 206 and 231 of ext/standard/user_filters.c), the native filter function is called with either a NULL pointer or a pointer to a size_t set to 0 for the fifth argument (see lines 361 and 452 of main/streams/filter.c).
You need to close the stream after the write so that it flushes before the data comes in from the read.
list($in, $out) = stream_socket_pair(STREAM_PF_UNIX,
                                     STREAM_SOCK_STREAM,
                                     STREAM_IPPROTO_IP);
$params = array('level' => 6, 'window' => 15, 'memory' => 9);
stream_filter_append($out, 'zlib.deflate', STREAM_FILTER_WRITE, $params);
stream_set_blocking($out, 0);
stream_set_blocking($in, 0);
fwrite($out, 'Some big long string.');
fclose($out);
$compressed = fread($in, 1024);
echo "Compressed:" . bin2hex($compressed) . "<br>\n";

list($in, $out) = stream_socket_pair(STREAM_PF_UNIX,
                                     STREAM_SOCK_STREAM,
                                     STREAM_IPPROTO_IP);
$params = array('level' => 6, 'window' => 15, 'memory' => 9);
stream_filter_append($out, 'zlib.deflate', STREAM_FILTER_WRITE, $params);
stream_set_blocking($out, 0);
stream_set_blocking($in, 0);
fwrite($out, 'Some big long string, take two.');
fclose($out);
$compressed = fread($in, 1024);
echo "Compressed:" . bin2hex($compressed) . "<br>\n";

list($in, $out) = stream_socket_pair(STREAM_PF_UNIX,
                                     STREAM_SOCK_STREAM,
                                     STREAM_IPPROTO_IP);
$params = array('level' => 6, 'window' => 15, 'memory' => 9);
stream_filter_append($out, 'zlib.deflate', STREAM_FILTER_WRITE, $params);
stream_set_blocking($out, 0);
stream_set_blocking($in, 0);
fwrite($out, 'Some big long string - third time is the charm?');
fclose($out);
$compressed = fread($in, 1024);
echo "Compressed:" . bin2hex($compressed) . "<br>\n";
That produces:
Compressed:789c0bcecf4d5548ca4c57c8c9cf4b57282e29cacc4bd70300532b079c
Compressed:789c0bcecf4d5548ca4c57c8c9cf4b57282e29cacc4bd7512849cc4e552829cfd70300b1b50b07
Compressed:789c0bcecf4d5548ca4c57c8c9cf4b57282e29ca0452ba0a25199945290a259940c9cc62202f55213923b128d71e008e4c108c
Also I switched the $in and $out because writing to $in confused me.
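For readers on PHP 7 or later: the incremental zlib API supports sync flushing directly, so the stream filter (and the close-per-message trick) can be avoided altogether. A minimal sketch producing one independently decodable chunk per message:
// Incremental compression with explicit sync flushes (PHP >= 7.0)
$ctx = deflate_init(ZLIB_ENCODING_DEFLATE, array('level' => 6));

foreach (array('Some big long string.',
               'Some big long string, take two.',
               'Some big long string - third time is the charm?') as $msg) {
    // ZLIB_SYNC_FLUSH makes each chunk decodable as soon as it is produced
    $chunk = deflate_add($ctx, $msg, ZLIB_SYNC_FLUSH);
    echo "Compressed:" . bin2hex($chunk) . "<br>\n";
}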

Proc_open read and parse output - real-time

When I run proc_open with a file.txt path for the stderr output, that file is updated every couple of seconds with the output of the program proc_open is running, which is ffmpeg. I want to redirect that output to a PHP file where it can be read and parsed while the process is still running (so it sends info every couple of seconds), so I can update a database. Is this possible? I have been googling for hours, so if there are any experts out there who actually know how to use this function or have experience with it, I would greatly appreciate any help.
This puts the output in a text file. If I change it to 2 => array("pipe","w") and echo the output, it only appears on my screen at the end of the process.
$descriptorspec = array(
    0 => array("pipe", "r"),
    1 => array("pipe", "w"),
    2 => array("file", "/home/g/Desktop/test.txt", "a")
);
$cwd = './';
// open process /bin/sh
$process = proc_open("/usr/local/bin/ffmpeg -i /home/g/Desktop/vid.wmv /home/g/Desktop/vid.flv", $descriptorspec, $pipes, $cwd);
You could run ffmpeg like this:
ffmpeg ...options... 2>&1 | php script.php
Then your script receives both STDERR and STDOUT of ffmpeg on its standard input.
And a second, more conventional solution:
<?php
$cmd = 'ffmpeg -i /home/azat/Desktop/src.avi -y /home/azat/Desktop/dst.avi';
$pipes = array();
$descriptors = array(2 => array('file', '/tmp/atest.log', 'a'));
$p = proc_open($cmd, $descriptors, $pipes);
while (true) {
    sleep(1);
    $status = proc_get_status($p);
    if (!$status['running']) break;
    echo "STILL RUNNING\n";
    // some manipulations with "/tmp/atest.log"
}
You can also look at this class; it is a wrapper around exec'd processes.
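And for future readers, a sketch of the pipe-based variant the asker wanted: stderr as a non-blocking pipe, polled with stream_select() (the paths are the ones from the question; the progress regex is an assumption about ffmpeg's "time=" stderr output):
$descriptorspec = array(
    0 => array("pipe", "r"),
    1 => array("pipe", "w"),
    2 => array("pipe", "w"), // read ffmpeg's progress from stderr ourselves
);
$process = proc_open(
    "/usr/local/bin/ffmpeg -i /home/g/Desktop/vid.wmv /home/g/Desktop/vid.flv",
    $descriptorspec, $pipes
);
if (is_resource($process)) {
    stream_set_blocking($pipes[2], false);
    while (true) {
        $read = array($pipes[2]);
        $write = $except = null;
        // Wait up to 1 second for new stderr output
        if (stream_select($read, $write, $except, 1) > 0) {
            $chunk = fread($pipes[2], 8192);
            // ffmpeg reports progress like "time=00:00:12.34" on stderr
            if (preg_match('/time=(\S+)/', $chunk, $m)) {
                echo "progress: {$m[1]}\n"; // e.g. update the database here
            }
        }
        $status = proc_get_status($process);
        if (!$status['running']) break;
    }
    fclose($pipes[0]);
    fclose($pipes[1]);
    fclose($pipes[2]);
    proc_close($process);
}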
