Reading STDOUT from aria2c in PHP script - php

Here is a simple PHP script:
<?php
is_resource($process = proc_open('aria2c http://speedtest.rinet.ru/file100m.dat', [
['pipe', 'r'],
['pipe', 'w'],
['file', '/tmp/error-output.txt', 'a']
], $pipes, '/tmp', [])) || die();
fclose($pipes[0]);
fpassthru($pipes[1]);
fclose($pipes[1]);
proc_close($process);
The problem is that the progress data in the output is "stalled" until aria2c finishes. When the aria2c process ends, it immediately bursts all of its output into my script. It is not related to fpassthru(); I've tried a plain fread() with the same result.
The flow:
[NOTICE] File already exists. Renamed to /tmp/file100m.dat.4.
<...huge delay and then burst...>
[#edb1dc 70MiB/100MiB(70%) CN:1 DL:8.4MiB ETA:3s]
[#edb1dc 81MiB/100MiB(81%) CN:1 DL:9.7MiB ETA:1s]
[#edb1dc 92MiB/100MiB(92%) CN:1 DL:10MiB]
I need to get lines like "[#edb1dc 92MiB/100MiB(92%) CN:1 DL:10MiB]" without having to wait for aria2c to finish in order to gather information about the current progress.
Not to mention that if I run the exact same command in a console, aria2c works fine.

I ran into the same issue while trying to monitor the Aria2 download progress (readout) from PHP.
The issue is caused by a missing fflush(stdout) in the Aria2 code, but luckily there is a workaround you can apply from your PHP code: launch aria2c through a wrapper that changes the STDOUT buffering mode. You can use the stdbuf utility under Linux or the unbuffer utility under OSX to disable buffering, like so:
proc_open('stdbuf -o0 aria2c ...'); // on Linux
or:
proc_open('unbuffer aria2c ...'); // on OSX
(Thanks to @hek2mgl for this answer on another, similar question.)
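For completeness, here is a minimal sketch of the original script with the stdbuf wrapper applied, reading the progress output as it arrives (this assumes stdbuf is available on the system):
<?php
// stdbuf -o0 makes aria2c's stdout unbuffered so progress lines arrive immediately
is_resource($process = proc_open('stdbuf -o0 aria2c http://speedtest.rinet.ru/file100m.dat', [
['pipe', 'r'],
['pipe', 'w'],
['file', '/tmp/error-output.txt', 'a']
], $pipes, '/tmp', [])) || die();
fclose($pipes[0]);
// read and echo each chunk of progress output as soon as it is produced
while (!feof($pipes[1])) {
echo fread($pipes[1], 8192);
flush();
}
fclose($pipes[1]);
proc_close($process);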

Related

Capture exec() error output of background process

I'm using the following command to open a temporary ssh tunnel for making a mysql connection:
exec('ssh -f -L 3400:127.0.0.1:3306 user@example.com sleep 1 > /dev/null');
$connection = @new \mysqli('127.0.0.1', $username, $password, $database, 3400);
This works splendidly. However, once in a while there may be another process using that port in which case it fails.
bind [127.0.0.1]:3400: Address already in use
channel_setup_fwd_listener_tcpip: cannot listen to port: 3401
Could not request local forwarding.
What I'd like to do is capture the error output of exec() so that I can retry using a different port. If I add 2>&1 to my command the error output just goes nowhere since stdout is already being piped to /dev/null.
One solution I've come up with is to pipe output to a file instead of /dev/null:
exec('ssh -f -L 3400:127.0.0.1:3306 user@example.com sleep 1 >temp.log 2>&1');
$output = file_get_contents('temp.log');
This works, but it feels messy. I'd prefer not to use the filesystem just to get the error response. Is there a way to capture the error output of this command without piping it to a file?
UPDATE: For the sake of clarity:
(a) Capturing the result code using the third argument of exec() does not work in this case. Don't ask me why - it will always return 0 (success).
(b) stdout must be redirected somewhere, or PHP will not treat it as a background process and script execution will block until it completes. (https://www.php.net/manual/en/function.exec.php#refsect1-function.exec-notes)
If a program is started with this function, in order for it to continue running in the background, the output of the program must be redirected to a file or another output stream. Failing to do so will cause PHP to hang until the execution of the program ends.
As far as I can tell, exec is not the right tool. For a more controlled approach, you may use proc_open. That might look something like this:
$process = proc_open(
'ssh -f -L 3400:127.0.0.1:3306 user@example.com sleep 1',
[/*stdin*/ 0 => ["pipe", "r"], /*stdout*/ 1 => ["pipe", "w"], /*stderr*/2 => ["pipe", "w"]],
$pipes
);
// Set the streams to non-blocking
// This is required since any unread output on the pipes may result in the process still being marked as running
// Note that this does not work on windows due to restrictions in the windows api (https://bugs.php.net/bug.php?id=47918)
stream_set_blocking($pipes[1], 0);
stream_set_blocking($pipes[2], 0);
// Wait a little bit - you would probably have to loop here and check regularly
// Also note that you may need to read from stdout and stderr from time to time to allow the process to finish
sleep(2);
// The process should now be running as background task
// You can check if the process has finished like this
$status = proc_get_status($process);
if (
!$status["running"] ||
$status["signaled"] ||
$status["stopped"]
) {
// Process should have stopped - read the output
$stdout = stream_get_contents($pipes[1]) ?: "";
$stderr = stream_get_contents($pipes[2]) ?: "";
// Close everything
@fclose($pipes[1]);
@fclose($pipes[2]);
proc_close($process);
}
You can find more details in the manual for proc_open.
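The comments above mention looping and checking regularly; as a rough sketch (continuing from the snippet above, with an arbitrary ~10 second cap), such a polling loop could look like this:
// poll the process while draining the pipes, so the child can never block on a full pipe buffer
$stdout = "";
$stderr = "";
for ($i = 0; $i < 100; $i++) {
$stdout .= stream_get_contents($pipes[1]) ?: "";
$stderr .= stream_get_contents($pipes[2]) ?: "";
if (!proc_get_status($process)["running"]) {
break;
}
usleep(100000); // wait 100 ms between checks
}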
If I add 2>&1 to my command the error output just goes nowhere since stdout is already being piped to /dev/null.
You can redirect stderr to stdout and stdout to /dev/null. That seems to me the simpler way of doing what you want (minimal modification).
So instead of
>temp.log 2>&1
do:
2>&1 1>/dev/null
Note that the order of the redirects is important.
Test
First we exec without redirection, then we redirect as above to capture stderr.
<?php
$me = $argv[0];
$out = exec("ls -la no-such-file {$me}");
print("The output is '{$out}'\n");
print("\n----\n");
$out = exec("ls -la no-such-file {$me} 2>&1 1>/dev/null");
print("The output is '{$out}'\n");
print("\n");
$ php -q tmp.php
ls: cannot access 'no-such-file': No such file or directory
The output is '-rw-r--r-- 1 lserni users 265 Oct 25 22:48 tmp.php'
----
The output is 'ls: cannot access 'no-such-file': No such file or directory'
Update
This requirement was not clear initially: the process must detach (as if it went into the background). Now, the fact is, whatever redirection you apply to the original stream via exec() will prevent the process from detaching, because at the time the detachment would happen the process has not completed and its output has not been delivered.
That is also why exec() reports a zero error code - there was no error in spawning. If you want the final result, someone must wait for the process to finish. So you have to redirect locally (that way it is the local file that does the waiting), then reconnect with whatever waited for the process to finish, and read the results.
For what you want, exec will never work. You ought to use the proc_* functions.
You might, however, force a detach anyway using nohup (you have no control over the spawned pid, so this is less than optimal):
if (file_exists('nohup.out')) { unlink('nohup.out'); }
$out = shell_exec('nohup ssh ... 2>&1 1>/dev/null &');
...still have to wait for connection to be established...
...read nohup.out to verify...
...
...do your thing...
As I said, this is less than optimal. Using proc_*, while undoubtedly more complicated, would allow you to start the ssh connection in tunnel mode without a terminal, and terminate it as soon as you don't need it anymore.
Actually, however, no offense intended, this is an "X-Y problem": what you want is to open an SSH tunnel for MySQL, so I'd look into doing just that.
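As a sketch of that more direct approach (same host and ports as above; -N keeps ssh in the foreground without running a remote command, so the PHP script itself owns the tunnel and can tear it down):
$tunnel = proc_open(
'ssh -N -L 3400:127.0.0.1:3306 user@example.com',
[0 => ['pipe', 'r'], 1 => ['pipe', 'w'], 2 => ['pipe', 'w']],
$pipes
);
stream_set_blocking($pipes[2], false);
sleep(1); // crude: give the tunnel a moment to come up; a real script would poll
$errors = stream_get_contents($pipes[2]) ?: '';
if (strpos($errors, 'Address already in use') !== false) {
// the local port is taken - pick another one and retry
}
$connection = @new \mysqli('127.0.0.1', $username, $password, $database, 3400);
// ... use the connection ...
proc_terminate($tunnel); // close the tunnel as soon as it is no longer needed
proc_close($tunnel);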

php - exec() run in background [duplicate]

I have a process intensive task that I would like to run in the background.
The user clicks on a page, the PHP script runs, and finally, based on some conditions, if required, it has to run a shell command, e.g.:
shell_exec('php measurePerformance.php 47 844 email@yahoo.com');
Currently I use shell_exec, but this requires the script to wait for an output. Is there any way to execute the command I want without waiting for it to complete?
How about appending the following to the command?
"> /dev/null 2>/dev/null &"
shell_exec('php measurePerformance.php 47 844 email@yahoo.com > /dev/null 2>/dev/null &');
Note this also discards stdout and stderr.
This will execute a command and disconnect from the running process. Of course, it can be any command you want. But for a test, you can create a PHP file with a sleep(20) command in it.
exec("nohup /usr/bin/php -f sleep.php > /dev/null 2>&1 &");
You can also give your output back to the client instantly and continue processing your PHP code afterwards.
This is the method I am using for long-running Ajax calls whose results do not need to reach the client side:
ob_end_clean();
ignore_user_abort();
ob_start();
header("Connection: close");
echo json_encode($out);
header("Content-Length: " . ob_get_length());
ob_end_flush();
flush();
// execute your command here. client will not wait for response, it already has one above.
You can find the detailed explanation here: http://oytun.co/response-now-process-later
On Windows 2003, to call another script without waiting, I used this:
$commandString = "start /b c:\\php\\php.EXE C:\\Inetpub\\wwwroot\\mysite.com\\phpforktest.php --passmsg=$testmsg";
pclose(popen($commandString, 'r'));
This only works AFTER changing permissions on cmd.exe - add Read and Execute for IUSR_YOURMACHINE (I also set Write to Deny).
Use PHP's popen command, e.g.:
pclose(popen("start c:\wamp\bin\php.exe c:\wamp\www\script.php","r"));
This will create a child process and the script will execute in the background without waiting for output.
Sure, for windows you can use:
$WshShell = new COM("WScript.Shell");
$oExec = $WshShell->Run("C:/path/to/php-win.exe -f C:/path/to/script.php", 0, false);
Note:
If you get a COM error, add the extension to your php.ini and restart apache:
[COM_DOT_NET]
extension=php_com_dotnet.dll
If this is triggered from a web page, I recommend generating a signal of some kind (dropping a file in a directory, perhaps) and having a cron job pick up the work that needs to be done, as sketched below. Otherwise, we're likely to get into the territory of using pcntl_fork() and exec() from inside an Apache process, and that's just bad mojo.
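A rough sketch of that signal-file idea (the spool directory and job format here are made up for illustration; the second half would live in a separate worker script run from cron):
// web request: drop a "job" file and return immediately
$job = ['task' => 'measurePerformance', 'args' => [47, 844, 'email@yahoo.com']];
file_put_contents('/var/spool/myapp/' . uniqid('job_', true) . '.json', json_encode($job));
// cron worker (separate script, run e.g. every minute): pick up pending jobs
foreach (glob('/var/spool/myapp/job_*.json') as $jobFile) {
$data = json_decode(file_get_contents($jobFile), true);
// ... do the heavy work here ...
unlink($jobFile); // remove the job once it has been handled
}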
The background-process approach will work, but you have to be careful not to overload your server, because it creates a new process running in the background every time you call this function. If there is only ever one concurrent call at a time, this workaround will do the job.
If not, I would advise running a message queue such as beanstalkd, gearman or Amazon SQS.

redirects in php exec are broken in apache + windows7, were working on windowsXP

I have been using PHP to execute a legacy script on an Apache server. The legacy script writes debug data to STDERR, and I have been redirecting that to a black hole or to STDOUT depending on the debug settings.
The PHP looks a bit like this:
exec('perl -e "print 10; print STDERR 20" 2>&1', $output);
That was reliably working on XP. I got new hardware which now runs Windows 7, and coming back to this code it is broken: zero output, return code 255, no idea why.
The only way I was able to get it going again was to remove the redirection. Oh, and redirection still works perfectly in a terminal box.
Now I have to retrieve my debug data from the Apache error log (where all STDERR output goes by default), which is inconvenient but not a problem.
I just want to understand why the redirect stopped working all of a sudden (and maybe help others running into the same problem). The Apache is the same; in fact, I just copied the XAMPP dir over from the old box. A bug? A system limitation? Forbidden by OS policy?
Instead of using exec with file-handle redirection, use proc_open and actually capture the output of stdout and stderr. Unlike some of the process-related functions, the proc_* family is built into all versions of PHP and works fine on Windows.
A copy & paste of the manual's example, for completeness:
$descriptorspec = array(
0 => array("pipe", "r"), // stdin is a pipe that the child will read from
1 => array("pipe", "w"), // stdout is a pipe that the child will write to
2 => array("file", "/tmp/error-output.txt", "a") // stderr is a file to write to
);
$cwd = '/tmp';
$env = array('some_option' => 'aeiou');
$process = proc_open('php', $descriptorspec, $pipes, $cwd, $env);
if (is_resource($process)) {
// $pipes now looks like this:
// 0 => writeable handle connected to child stdin
// 1 => readable handle connected to child stdout
// Any error output will be appended to /tmp/error-output.txt
fwrite($pipes[0], '<?php print_r($_ENV); ?>');
fclose($pipes[0]);
echo stream_get_contents($pipes[1]);
fclose($pipes[1]);
// It is important that you close any pipes before calling
// proc_close in order to avoid a deadlock
$return_value = proc_close($process);
echo "command returned $return_value\n";
}
Be sure to browse the upvoted user-contributed notes on the documentation page for possible caveats.
Okay, I got the (well, maybe at least my) solution:
use proc_open as Charles suggested
go back to the original principle of IO redirection
Dumping stuff directly to STDERR and retrieving it from there via pipes apparently does not work under (my) Windows 7 + PHP with my code. Simple examples work, but that was it for me.
So while using 2>&1 broke my exec() - the initial problem - it works wonderfully with proc_open(). Problem solved.
I wonder if I will now find something broken on our Linux servers running the new code.
Small caveat: if you don't want your code to print to STDERR and you use redirect-to-null (e.g. for production), on Windows it's 2>nul.
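For reference, a sketch of the working setup for the original perl one-liner, with the pipe layout borrowed from the manual example above:
$process = proc_open('perl -e "print 10; print STDERR 20" 2>&1',
[0 => ['pipe', 'r'], 1 => ['pipe', 'w'], 2 => ['pipe', 'w']],
$pipes);
if (is_resource($process)) {
fclose($pipes[0]);
echo stream_get_contents($pipes[1]); // both STDOUT and STDERR arrive here thanks to 2>&1
fclose($pipes[1]);
fclose($pipes[2]);
proc_close($process);
}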

Does PHP proc_open block the Web request?

By default, on Linux, does creating a process via proc_open() make the PHP script not terminate until the spawned process terminates? I don't want it to, and I close the process handle right away.
proc_open itself does not block, that's clear enough. But what about the overall HTTP request execution?
I had some time on the weekend, so I did a little research on proc_open() on *nix systems.
While proc_open() doesn't block PHP's script execution even if the shell command is not run in the background, PHP automatically invokes proc_close() after the script has finished executing if you don't invoke it yourself. So we can imagine that there is always a proc_close() line at the end of the script.
The problem lies with the unobvious, but logical, behaviour of proc_close(). Let's imagine we have a script like:
$proc = proc_open('top -b -n 10000',
array(
array('pipe', 'r'),
array('pipe', 'w')),
$pipes);
//Process some data outputted by our script, but not all data
echo fread($pipes[1],100);
//Don't wait till script execution has ended - exit
//close pipes
array_map('fclose',$pipes);
//close process
proc_close($proc);
Strange - proc_close() would wait until the shell command's execution has ended, yet our script terminated quickly. It happens because we closed the pipes (it seems that PHP does this silently if we forget), so as soon as the command tries to write something to an already non-existent pipe, it gets an error and terminates.
Now, let's try without pipes (well, there will be some, but they will use the current tty without any link to PHP):
$proc = proc_open("top -b -n 10000", array(), $pipes);
proc_close($proc);
Now our PHP script is waiting for the shell command to end. Can we avoid it? Luckily, PHP spawns shell commands with
sh -c 'shell_script'
so we can just kill our sh process and leave the command running:
$proc = proc_open("top -b -n 10000", array(), $pipes);
$proc_status=proc_get_status($proc);
exec('kill -9 '.$proc_status['pid']);
proc_close($proc);
Of course, we could just have run the process in the background, like:
$proc = proc_open("top -b -n 10000 &", array(), $pipes);
proc_close($proc);
and not have any problems, but this feature leads us to the most complex question: can we run a process with proc_open(), read some output, and then force the process into the background? Well, in a way - yes.
The main problem here is the pipes: we can't close them or our process will die, but we need them to read some useful data from that process. It turns out that we can use a magic trick here - gdb.
First, create a file somewhere (/usr/share/gdb_null_descr in my example) with the following contents:
p dup2(open("/dev/null",0),1)
p dup2(open("/dev/null",0),2)
It tells gdb to change descriptors 1 and 2 (well, they are usually stdout and stderr) to new file handles (/dev/null in this example, but you can change it).
Now, one last thing: make sure gdb can connect to other running processes - it is the default on some systems, but, for example, on Ubuntu 10.10 you have to set /proc/sys/kernel/yama/ptrace_scope to 0 if you don't run it as root.
Enjoy:
$proc = proc_open('top -b -n 10000',
array(
array('pipe', 'r'),
array('pipe', 'w'),
array('pipe', 'w')),
$pipes);
//Process some data outputted by our script, but not all data
echo fread($pipes[1],100);
$proc_status=proc_get_status($proc);
//Find real pid of our process(we need to go down one step in process tree)
$pid=trim(exec('ps h -o pid --ppid '.$proc_status['pid']));
//Kill parent sh process
exec('kill -s 9 '.$proc_status['pid']);
//Change stdout/stderr handles in our process
exec('gdb -p '.$pid.' --batch -x /usr/share/gdb_null_descr');
array_map('fclose',$pipes);
proc_close($proc);
edit: I forgot to mention that PHP doesn't run your shell command instantly, so you have to wait a bit before executing the other shell commands; usually it is fast enough (or PHP is slow enough), and I'm too lazy to add those checks to my examples.
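A sketch of such a check: simply poll until the sh wrapper has actually spawned its child before you try to kill or re-plumb it (the ps call mirrors the one used above, and the timeout is arbitrary):
$proc_status = proc_get_status($proc);
$pid = '';
for ($i = 0; $i < 50 && $pid === ''; $i++) {
$pid = trim(exec('ps h -o pid --ppid ' . $proc_status['pid']));
if ($pid === '') {
usleep(100000); // wait 100 ms and check again
}
}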
I ran into a similar problem and wrote a small script to handle it:
https://github.com/peeter-tomberg/php-shell-executer
What I did was background the process and still have access to the result of the backgrounded process (both stderr and stdout).
