Capture/suppress all output from php exec including stderr - php

I want to run several commands via exec() but I don't want any output to the screen. I do, however, want to hold on to the output so that I can control verbosity as my script runs.
Here is my class:
<?php
class System
{
    public function exec($command, array &$output = [])
    {
        $returnVar = null;
        exec($command, $output, $returnVar);
        return $returnVar;
    }
}
The problem is, most applications put an irritating amount of irrelevant stuff into stderr, which I don't seem to be able to block. For example, here's the output from running git clone through it:
Cloning into '/tmp/directory'...
remote: Counting objects: 649, done.
remote: Compressing objects: 100% (119/119), done.
remote: Total 649 (delta 64), reused 0 (delta 0), pack-reused 506
Receiving objects: 100% (649/649), 136.33 KiB | 0 bytes/s, done.
Resolving deltas: 100% (288/288), done.
Checking connectivity... done.
I've seen other questions claim that using the output buffer can work; however, it doesn't seem to work for me:
<?php
class System
{
    public function exec($command, array &$output = [])
    {
        $returnVar = null;
        ob_start();
        exec($command, $output, $returnVar);
        ob_end_clean();
        return $returnVar;
    }
}
This still produces the same results. I can fix the problem by routing stderr to stdout in the command; however, not only does this prevent me from differentiating between stdout and stderr, the application is also designed to run on both Windows and Linux, so this is now an unholy mess.
<?php
class System
{
    public function exec($command, array &$output = [])
    {
        $returnVar = null;
        // Is Windows
        if (strtoupper(substr(PHP_OS, 0, 3)) === 'WIN') {
            exec($command, $output, $returnVar);
            return $returnVar;
        }
        // Is not Windows
        exec("({$command}) 2>&1", $output, $returnVar);
        return $returnVar;
    }
}
Is there a way to capture and suppress both stderr and stdout separately?
Update / Answer example
On the advice of @hexasoft in the comments, I updated my method to look like this:
<?php
class System
{
    public function exec($command, &$stdOutput = '', &$stdError = '')
    {
        $process = proc_open(
            $command,
            [
                0 => ['pipe', 'r'],
                1 => ['pipe', 'w'],
                2 => ['pipe', 'w'],
            ],
            $pipes
        );
        if (!is_resource($process)) {
            throw new \RuntimeException('Could not create a valid process');
        }
        // This will prevent the program from continuing until the process is complete
        // Note: exitcode is captured on the final loop iteration here
        $status = proc_get_status($process);
        while ($status['running']) {
            $status = proc_get_status($process);
        }
        $stdOutput = stream_get_contents($pipes[1]);
        $stdError = stream_get_contents($pipes[2]);
        proc_close($process);
        return $status['exitcode'];
    }
}
This technique opens up much more advanced options, including asynchronous processes.
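For illustration, here is a minimal sketch (not part of the original answer) of one such option: reading both pipes without blocking while the process is still running. The command is only an example, and stream_set_blocking() makes the reads return immediately with whatever is available:
<?php
$process = proc_open('ping -c 3 127.0.0.1', [
    1 => ['pipe', 'w'],
    2 => ['pipe', 'w'],
], $pipes);
// Switch both pipes to non-blocking mode so reads return whatever is available.
stream_set_blocking($pipes[1], false);
stream_set_blocking($pipes[2], false);
$stdOutput = $stdError = '';
do {
    $status = proc_get_status($process);
    // Drain what is available right now; other work could be done in between.
    $stdOutput .= stream_get_contents($pipes[1]);
    $stdError  .= stream_get_contents($pipes[2]);
    usleep(10000); // avoid a tight busy loop
} while ($status['running']);
// Final drain after the process has exited.
$stdOutput .= stream_get_contents($pipes[1]);
$stdError  .= stream_get_contents($pipes[2]);
fclose($pipes[1]);
fclose($pipes[2]);
proc_close($process);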

The proc_open() function allows you to deal with the file descriptors of the exec'd command, using pipes (or files).
The signature is: proc_open(string $cmd, array $descriptorspec, array &$pipes […optional parameters]): resource
You give your command in $cmd (as in exec()) and you pass an array describing the file descriptors to "install" for the command. This array is indexed by file descriptor number (0 = stdin, 1 = stdout, …) and each entry contains the type (file, pipe) and the mode (r/w, …), plus the filename for the file type.
You then get in $pipes an array of file descriptors, which can be used to read or write (depending on what was requested).
You should not forget to close these descriptors after usage.
Please refer to the PHP manual page (and in particular the examples): https://php.net/manual/en/function.proc-open.php
Note that read/write is relative to the spawned command, not the PHP script.
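As an illustration (not from the original answer), here is a minimal sketch mixing the two descriptor types: stdout is captured through a pipe while stderr is appended straight to a log file. The command and file name are only examples:
<?php
$descriptorspec = [
    0 => ['pipe', 'r'],                             // stdin: we could write to the command here
    1 => ['pipe', 'w'],                             // stdout: we read the command's output
    2 => ['file', '/tmp/command-stderr.log', 'a'],  // stderr: appended to a file
];
$process = proc_open('git clone https://example.com/repo.git /tmp/directory', $descriptorspec, $pipes);
if (is_resource($process)) {
    fclose($pipes[0]);                        // nothing to send on stdin
    $stdout = stream_get_contents($pipes[1]); // blocks until the command closes stdout
    fclose($pipes[1]);
    $exitCode = proc_close($process);         // valid here because proc_get_status() was never called
}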

Related

PHP shell_exec returns empty string after a while

I made a Symfony console command which uses pngquant to process and compress a long list of images. The images are read from a CSV file.
The batch works fine to the end in my local environment, but in the staging environment it works for about 5 minutes and then starts returning an empty result from the shell_exec command. I even made a retry system, but it always returns an empty result:
// escapeshellarg() makes this safe to use with any path
// errors are redirected to standard output
$command = sprintf(
    '%s --quality %d-%d --output %s --force %s 2>&1',
    $this->pngquantBinary,
    $minQuality,
    $maxQuality,
    escapeshellarg($tempPath),
    $path
);
// tries a few times
$data = null;
$attempt = 0;
do {
    // command result
    $data = shell_exec($command);
    // error
    if (null !== $data) {
        $this->logger->warning('An error occurred while compressing the image with pngquant', [
            'command' => $command,
            'output' => $data,
            'cpu' => sys_getloadavg(),
            'attempt' => $attempt + 1,
            'sleep' => self::SLEEP_BETWEEN_ATTEMPTS,
        ]);
        sleep(self::SLEEP_BETWEEN_ATTEMPTS);
    }
    ++$attempt;
} while ($attempt < self::MAX_NUMBER_OF_ATTEMPTS && null !== $data);
// verifies that the command has finished successfully
if (null !== $data) {
    throw new \Exception(sprintf('There was an error compressing the file with command "%s": %s.', $command, $data));
}
The problem is that the same command executed in another shell in the same environment works fine! I mean, when I log the error, if I run the exact same command in another shell on the same server, it works fine.
Even in the Symfony logs I can't find any error; where should I look for a more detailed error?
What could be causing this? Memory and CPU are fine during execution!
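(A diagnostic sketch, not part of the original post: one way to get a more detailed error is to run the same command through proc_open() so pngquant's stderr can be read separately, assuming the command is built without the trailing 2>&1:)
$process = proc_open($command, [
    1 => ['pipe', 'w'],
    2 => ['pipe', 'w'],
], $pipes);
if (is_resource($process)) {
    $stdout = stream_get_contents($pipes[1]);
    $stderr = stream_get_contents($pipes[2]);
    fclose($pipes[1]);
    fclose($pipes[2]);
    $exitCode = proc_close($process);
    // A non-zero exit code plus the stderr text usually points at the real failure
    // (for example, "unable to create pipe"-style resource exhaustion).
}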
After many attempts I read this question:
Symfony2 Process component - unable to create pipe and launch a new process
The solution was to add a call to gc_collect_cycles() after the flush inside the loop!
if ($flush || 0 === ($index % self::BATCH_SIZE)) {
    $this->em->flush();
    $this->em->clear();
    // clears the temp directory after flushing
    foreach ($this->tempImages as $tempImage) {
        unlink($tempImage);
    }
    $this->tempImages = [];
    // forces collection of any existing garbage cycles
    gc_collect_cycles();
}
IMPORTANT: also keep an eye on the number of disk inodes.
df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
udev 125193 392 124801 1% /dev
tmpfs 127004 964 126040 1% /run
/dev/vda2 5886720 5831604 55116 100% /

Autoloading Classes Into PHP Interactive Shell

I'm trying to run the PHP interactive shell from a PHP script. To be more specific, I want to be able to call my classes from the interactive shell.
I managed to find this:
# custom_interactive_shell.php
function proc($command, &$return_var = null, array &$stderr_output = null)
{
    $return_var = null;
    $stderr_output = [];
    $descriptorspec = [
        // Must use php://stdin(out) in order to allow display of command output
        // and the user to interact with the process.
        0 => ['file', 'php://stdin', 'r'],
        1 => ['file', 'php://stdout', 'w'],
        2 => ['pipe', 'w'],
    ];
    $pipes = [];
    $process = @proc_open($command, $descriptorspec, $pipes);
    if (is_resource($process)) {
        // Loop on process until it exits normally.
        do {
            $status = proc_get_status($process);
            // If our stderr pipe has data, grab it for use later.
            if (!feof($pipes[2])) {
                // We're acting like passthru would and displaying errors as they come in.
                $error_line = fgets($pipes[2]);
                echo $error_line;
                $stderr_output[] = $error_line;
            }
        } while ($status['running']);
        // According to documentation, the exit code is only valid the first call
        // after a process is finished. We can't rely on the return value of
        // proc_close because proc_get_status will read the exit code first.
        $return_var = $status['exitcode'];
        proc_close($process);
    }
}
proc('php -a -d auto_prepend_file=./vendor/autoload.php');
proc('php -a -d auto_prepend_file=./vendor/autoload.php');
But it's just not working: it tries to be interactive but freezes up a lot, and even after the lag it doesn't really execute commands properly.
Example:
> php custom_interactive_shell.php
Interactive shell
php > echo 1;
Warning: Use of undefined constant eo - assumed 'eo' (this will throw an Error in a future version of PHP) in php shell code on line 1
If you want to be able to run your PHP classes from an interactive shell, you can use the default one that ships with PHP. From the terminal just type:
php -a
Then, in the example below, I created a file called Agency.php that had class Agency in it. I was able to require_once() it into the active shell and then call the class and its methods:
Interactive shell
php > require_once('Agency.php');
php > $a = new Agency();
php > $a->setname("some random name");
php > echo $a->getname();
some random name
You can also use the following in the interactive shell to autoload the files / classes in the current directory:
spl_autoload_register(function ($class_name) {
    include $class_name . '.php';
});
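Along the same lines as the question's own php -a -d auto_prepend_file=… attempt, a sketch (the bootstrap file name is hypothetical) is to put that autoloader into a small file and prepend it when starting the interactive shell directly, instead of wrapping php -a in proc_open():
<?php
// shell_bootstrap.php (hypothetical file name)
spl_autoload_register(function ($class_name) {
    include $class_name . '.php';
});
// Then start the shell yourself with:
//   php -a -d auto_prepend_file=shell_bootstrap.php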

proc_terminate() does not terminate process even after lots of time

I open a process using proc_open(), then usleep() for some time and after that check the status of the process. If the process is still running, I proc_terminate() it.
The problem is that when I use proc_terminate() the script continues without waiting for the process to terminate (which is normal), but the process is not terminated even after a long time.
The process is an .exe file which first prints hello to stdout, then enters an infinite loop.
My PHP script:
<pre>
<?php
$descriptorspec = array(
    1 => array('pipe', 'w')
);
$process = proc_open("C:/wamp/www/my-project/run.exe", $descriptorspec, $pipes);
if (is_resource($process)) {
    usleep(2.5 * 1000000); // wait 2.5 secs
    $status = proc_get_status($process);
    if ($status['running'] == true) {
        proc_terminate($process);
        echo "Terminated\n";
    } else {
        echo "Exited in time\n";
        echo "EXIT CODE : {$status['exitcode']}\n";
        echo "OUTPUT :\n";
        while (!feof($pipes[1]))
            echo fgets($pipes[1]);
        fclose($pipes[1]);
        proc_close($process);
    }
}
?>
</pre>
I compile this C++ file and get the .exe:
#include <iostream>
using namespace std;

int main() {
    cout << "hello";
    while (1);
    return 0;
}
Does anybody know why this happens? :(
proc_terminate() doesn't work well on Windows.
A good workaround is to call the taskkill command.
function kill($process) {
    if (strncasecmp(PHP_OS, 'WIN', 3) == 0) {
        $status = proc_get_status($process);
        return exec('taskkill /F /T /PID ' . $status['pid']);
    } else {
        return proc_terminate($process);
    }
}
proc_terminate() doesn't actually terminate a process; it sends a SIGTERM signal to the process, asking it to terminate itself.
I think that the problem is that your test executable is not listening for the SIGTERM signal, so it is just ignored.
On POSIX systems, you can use the second parameter to send a SIGKILL, which will essentially ask the OS to terminate the process, so that might work better. On Windows, I don't know what, if anything, this would do.
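For example, a one-line sketch of that second parameter on a POSIX system (signal 9 is SIGKILL):
proc_terminate($process, 9); // SIGKILL instead of the default SIGTERM (15)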
But the process should be handling signals anyway. You can easily add a signal handler to your exe for testing:
#include <csignal>
#include <iostream>
#include <thread>
using namespace std;

void signal_handler(int signal)
{
    cout << "received SIGTERM\n";
    exit(0);
}

int main()
{
    // Install a signal handler
    std::signal(SIGTERM, signal_handler);
    cout << "starting\n";
    while (1)
        std::this_thread::sleep_for(std::chrono::seconds(1));
    return 0;
}
Note the addition of sleep_for also, so that the exe doesn't take 100% of the CPU.
There is also a discussion in the comments here about using posix_kill() to kill a process and its children, if the above does not work for you.
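A minimal sketch of that posix_kill() approach (assuming the posix extension is available; it will not work on Windows, and the reported PID may belong to a wrapping shell rather than the command itself):
$status = proc_get_status($process);
posix_kill($status['pid'], 9); // 9 = SIGKILL; use 15 (SIGTERM) for a gentler request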

Running a php script via ajax, but only if it is not already running

My intention is this.
My client.html calls a php script check.php via ajax. I want check.php to check if another script, task.php, is already running. If it is, I do nothing. If it is not, I need to run it in the background.
I have an idea of what I want to do, but I'm unsure how to do it.
Part A. I know how to call check.php via ajax.
Part B. In check.php I might need to run task.php. I think I need something like:
$PID = shell_exec("php task.php > /dev/null & echo $!");
I think the "> /dev/null &" bit tells it to run in the background, but am unsure what the "$!" does.
Part C. I need the $PID as a tag for the process. I need to write this number (or whatever) to a file in the same directory, and read it on every call to check.php. I can't work out how to do that. Could someone show me how to read/write a file containing a single number in the same directory?
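(A minimal sketch of the read/write asked about in Part C, with a hypothetical file name:)
$pidFile = __DIR__ . '/task.pid';              // hypothetical file name
file_put_contents($pidFile, $PID);             // write the PID
$storedPid = file_exists($pidFile)
    ? (int) trim(file_get_contents($pidFile))  // read it back on the next call to check.php
    : null;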
Part D. Then to check if the last launched task.php is still running I am going to use the function:
function is_process_running($PID)
{
    exec("ps $PID", $ProcessState);
    return (count($ProcessState) >= 2);
}
I think that is all the bits I need, but as you can see I am unsure how to do a few of them.
I would use a flock()-based mechanism to make sure that task.php runs only once.
Use a code like this:
<?php
$fd = fopen('lock.file', 'w+');
// Try to get an exclusive lock. LOCK_NB makes the operation non-blocking
// if a process instance is already running; in that case, the else
// block will be entered.
if (flock($fd, LOCK_EX | LOCK_NB)) {
    // run your code
    sleep(10);
    // ...
    flock($fd, LOCK_UN);
} else {
    echo 'already running';
}
fclose($fd);
Also note that flock() is, as the PHP documentation points out, portable across all supported operating systems.
$!
gives you the PID of the last program executed in the background in bash. Like this:
command &
pid=$!
echo $pid
Note that you will have to make sure your PHP code runs on a system with bash support (not Windows).
Update (after a comment from the OP).
flock() will work on all operating systems (as I mentioned). The problem I see in your code when working with Windows is the $! (as I also mentioned ;)).
To obtain the PID of task.php you should use proc_open() to start it. I've prepared two example scripts:
task.php
$fd = fopen('lock.file', 'w+');
// Try to get an exclusive lock. LOCK_NB makes the operation non-blocking
// if a process instance is already running; in that case, the else
// block will be entered.
if (flock($fd, LOCK_EX | LOCK_NB)) {
    // your task's code comes here
    sleep(10);
    // ...
    flock($fd, LOCK_UN);
    echo 'success';
    $exitcode = 0;
} else {
    echo 'already running';
    // return 2 to let check.php know that
    // task.php is already running
    $exitcode = 2;
}
fclose($fd);
exit($exitcode);
check.php
$cmd = 'php task.php';
$descriptorspec = array(
    0 => array('pipe', 'r'), // STDIN
    1 => array('pipe', 'w'), // STDOUT
    2 => array('pipe', 'w')  // STDERR
);
$pipes = array(); // will be set by proc_open()
// start task.php
$process = proc_open($cmd, $descriptorspec, $pipes);
if (!is_resource($process)) {
    die('failed to start task.php');
}
// get output (stdout and stderr)
$output = stream_get_contents($pipes[1]);
$errors = stream_get_contents($pipes[2]);
do {
    // get the pid of the child process and its exit code
    $status = proc_get_status($process);
} while ($status['running'] !== FALSE);
// close the process
proc_close($process);
// get pid and exitcode
$pid = $status['pid'];
$exitcode = $status['exitcode'];
// handle exit code
switch ($exitcode) {
    case 0:
        echo 'Task.php has been executed with PID: ' . $pid
            . '. The output was: ' . $output;
        break;
    case 1:
        echo 'Task.php has been executed with errors: ' . $output;
        break;
    case 2:
        echo 'Cannot execute task.php. Another instance is running';
        break;
    default:
        echo 'Unknown error: ' . $output;
}
You asked me why my flock() solution is the best. It's simply because the other answer will not reliably make sure that task.php runs only once, due to the race condition I've mentioned in the comments below that answer.
You can achieve this using a lock file:
if (is_file(__DIR__.'/work.lock'))
{
    die('Script already run.');
}
else
{
    file_put_contents(__DIR__.'/work.lock', '');
    // YOUR CODE
    unlink(__DIR__.'/work.lock');
}
Too bad I didn't see this before it was accepted.
I have written a class to do just this (using file locking and PID / process ID checking) on both Windows and Linux.
https://github.com/ArtisticPhoenix/MISC/blob/master/ProcLock.php
I think you are really overdoing it with all the processes and background checks. If you run a PHP script without a session, then you are already essentially running it in the background, because it will not block any other request from the user. So make sure you don't call session_start();
Then the next step would be to run it even when the user cancels the request, which is a basic function in PHP: ignore_user_abort().
The last check is to make sure it only runs once, which can easily be done by creating a file, since PHP doesn't have an easy application scope.
Combined:
<?php
// Ignore user aborts and allow the script
// to run forever
ignore_user_abort(true);
set_time_limit(0);
$checkfile = "./runningtask.tmp";
// LOCK_EX basically uses flock() to prevent a race condition, in contrast to a regular file open.
if (file_put_contents($checkfile, "running", LOCK_EX) === false) {
    exit();
}
function Cleanup() {
    global $checkfile;
    unlink($checkfile);
}
/*
actual code for task.php
*/
// run Cleanup() when you're done; make sure you also call it if you exit the code anywhere else
Cleanup();
?>
In your javascript you can now call the task.php directly and cancel the request when the connection to the server has been established.
<script>
function Request(url) {
    if (window.XMLHttpRequest) { // Mozilla, Safari, ...
        httpRequest = new XMLHttpRequest();
    } else if (window.ActiveXObject) { // IE
        httpRequest = new ActiveXObject("Microsoft.XMLHTTP");
    } else {
        return false;
    }
    httpRequest.onreadystatechange = function() {
        if (httpRequest.readyState == 1) {
            // task started, exit
            httpRequest.abort();
        }
    };
    httpRequest.open('GET', url, true);
    httpRequest.send(null);
}
// call Request("task.php"); whenever you want.
</script>
Bonus points: You can have the actual code for task.php write occasional updates to $checkfile to have a sense of what is going on. Then you can have another ajax file read the content and show the status to the user.
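A minimal sketch of that bonus idea (the status script name is hypothetical):
// inside task.php, between work steps
file_put_contents($checkfile, "processed 50 of 200 images");

// status.php (hypothetical), polled by a second ajax call
echo file_exists("./runningtask.tmp")
    ? file_get_contents("./runningtask.tmp")
    : "not running";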
Let's make the whole process from B to D simple.
Step B-D:
$rslt = array();   // output from first exec
$output = array(); // output of task.php execution
// Check if any process by the name 'task.php' is running
exec("ps -auxf | grep 'task.php' | grep -v 'grep'", $rslt);
if (count($rslt) == 0)             // if none,
    exec('php task.php', $output); // run the task
Explanation:
ps -auxf --> gets all running processes with details
grep 'task.php' --> filter the process by 'task.php' keyword
grep -v 'grep' --> filters the grep process out
NB:
It's also advisable to put the same check in the task.php file itself.
If task.php is executed directly via httpd (the web server), it will only show up as an httpd process and cannot be identified by the ps command.
It wouldn't work in a load-balanced environment. [Edited: 17Jul17]
You can get an exclusive lock on the script itself for as long as the script is running.
Any other attempts to run it will end as soon as the lock() function is invoked.
// Try to set a global exclusive lock on the file invoking this function, and die if not successful.
function lock() {
    $file = isset($_SERVER['SCRIPT_FILENAME'])
        ? realpath($_SERVER['SCRIPT_FILENAME'])
        : (isset($_SERVER['PHP_SELF']) ? realpath($_SERVER['PHP_SELF']) : false);
    if ($file && file_exists($file)) {
        // The global handle stays alive for the duration of this script's run.
        global $$file;
        if (!isset($$file)) { $$file = fopen($file, 'r'); }
        if (!flock($$file, LOCK_EX | LOCK_NB)) {
            echo 'This script is already running.' . "\n";
            die;
        }
    }
}
Test
Run this in one shell and try to run it in another while it is waiting for input.
lock();
// this will pause execution until you press enter
echo '...continue? [enter]';
$handle = fopen("php://stdin", "r");
$line = fgets($handle);
fclose($handle);

getting the real exit code after proc_open

I'm using proc_open in php to launch a subprocess and send data back and forth.
At some point I'd like to wait for the process to end and retrieve the exit code.
The problem is that if the process has already finished, my call to proc_close returns -1. There is apparently much confusion over what proc_close does actually return and I haven't found a way to reliably determine the exit code of a process opened with proc_open.
I've tried using proc_get_status, but it seems to also return -1 when the process has already exited.
Update
I can't get proc_get_status to ever give me a valid exit code, no matter how or when it is called. Is it broken completely?
My understanding is that proc_close will never give you a legit exit code.
You can only grab the legit exit code the first time you run proc_get_status after the process has ended. Here's a process class that I stole off the php.net user contributed notes. The answer to your question is in the is_running() method:
<?php
class process {
    public $cmd = '';
    private $descriptors = array(
        0 => array('pipe', 'r'),
        1 => array('pipe', 'w'),
        2 => array('pipe', 'w')
    );
    public $pipes = NULL;
    public $desc = '';
    private $strt_tm = 0;
    public $resource = NULL;
    private $exitcode = NULL;

    function __construct($cmd = '', $desc = '')
    {
        $this->cmd = $cmd;
        $this->desc = $desc;
        $this->resource = proc_open($this->cmd, $this->descriptors, $this->pipes, NULL, $_ENV);
        $this->strt_tm = microtime(TRUE);
    }

    public function is_running()
    {
        $status = proc_get_status($this->resource);
        /**
         * proc_get_status will only pull valid exitcode one
         * time after process has ended, so cache the exitcode
         * if the process is finished and $exitcode is uninitialized
         */
        if ($status['running'] === FALSE && $this->exitcode === NULL)
            $this->exitcode = $status['exitcode'];
        return $status['running'];
    }

    public function get_exitcode()
    {
        return $this->exitcode;
    }

    public function get_elapsed()
    {
        return microtime(TRUE) - $this->strt_tm;
    }
}
Hope this helps.
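A short usage sketch of the class above (the command is just an example):
$p = new process('ls -la', 'list current directory');
while ($p->is_running()) {
    usleep(100000); // poll every 0.1 s
}
echo 'exit code: ' . $p->get_exitcode() . ' after ' . $p->get_elapsed() . " seconds\n";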
I also was getting unexpected results trying to get the return code via proc_get_status, until I realized I was getting the return code of the last command I had executed (I was passing a series of commands to proc_open, separated by ;).
Once I broke the commands into individual proc_open calls, I used the following loop to get the correct return code. Note that normally the code executes proc_get_status twice, and the correct return code is returned on the second execution. Also, the code below could be dangerous if the process never terminates; I'm just using it as an example:
$status = proc_get_status($process);
while ($status["running"]) {
    sleep(1);
    $status = proc_get_status($process);
}
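Since that loop could be dangerous if the process never terminates, here is a hedged variant (the timeout value is arbitrary) that gives up and terminates the process after a fixed wait:
$timeout = 30; // seconds, arbitrary
$start = time();
$status = proc_get_status($process);
while ($status['running']) {
    if (time() - $start > $timeout) {
        proc_terminate($process); // ask the process to stop
        break;
    }
    sleep(1);
    $status = proc_get_status($process);
}
$exitcode = $status['running'] ? -1 : $status['exitcode'];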
