Does PHP proc_open block the web request?

By default, on Linux, does creating a process via proc_open() make the PHP script not terminate until the spawned process terminates? I don't want it to, and I close the process handle right away.
proc_open itself does not block, that's clear enough. But what about the overall HTTP request execution?

I had some time on the weekend, so I did a little research on proc_open() on *nix systems.
While proc_open() doesn't block the PHP script's execution even if the shell command is not run in the background, PHP automatically invokes proc_close() when the script finishes if you haven't invoked it yourself. So we can imagine that every script implicitly ends with a proc_close() call.
The problem lies with the unobvious, but logical, behaviour of proc_close(). Let's imagine we have a script like:
$proc = proc_open('top -b -n 10000',
    array(
        array('pipe', 'r'),
        array('pipe', 'w')),
    $pipes);
// Process some of the data output by our command, but not all of it
echo fread($pipes[1], 100);
// Don't wait until the command finishes - exit early:
// close pipes
array_map('fclose', $pipes);
// close process
proc_close($proc);
Strange: proc_close() should wait until the shell command finishes, yet our script terminated quickly. It happens because we closed the pipes (it seems PHP does this silently if we forget): as soon as the child tries to write to the now-closed pipe, it gets an error (SIGPIPE) and terminates.
Now, let's try without pipes (well, there will be, but they will use current tty without any link to PHP):
$proc = proc_open("top -b -n 10000", array(), $pipes);
proc_close($proc);
Now our PHP script waits for the shell command to end. Can we avoid that? Luckily, PHP spawns shell commands with
sh -c 'shell_script'
so we can just kill that sh process and leave our command running:
$proc = proc_open("top -b -n 10000", array(), $pipes);
$proc_status=proc_get_status($proc);
exec('kill -9 '.$proc_status['pid']);
proc_close($proc);
Of course, we could simply have run the process in the background, like:
$proc = proc_open("top -b -n 10000 &", array(), $pipes);
proc_close($proc);
and not have any problems, but this feature leads us to the most complex question: can we run a process with proc_open(), read some output, and then force the process into the background? Well, in a way - yes.
The main problem here is the pipes: we can't close them or our process will die, but we need them to read useful data from that process. It turns out we can use a magic trick here - gdb.
First, create a file somewhere (/usr/share/gdb_null_descr in my example) with the following contents:
p dup2(open("/dev/null", 1), 1)
p dup2(open("/dev/null", 1), 2)
It tells gdb to replace descriptors 1 and 2 (usually stdout and stderr) with new file handles (/dev/null here, opened for writing, but you can change it).
Now, one last thing: make sure gdb can attach to other running processes - this is the default on some systems, but, for example, on Ubuntu 10.10 you have to set /proc/sys/kernel/yama/ptrace_scope to 0 if you don't run it as root.
Enjoy:
$proc = proc_open('top -b -n 10000',
    array(
        array('pipe', 'r'),
        array('pipe', 'w'),
        array('pipe', 'w')),
    $pipes);
// Process some of the data output by our command, but not all of it
echo fread($pipes[1], 100);
$proc_status = proc_get_status($proc);
// Find the real pid of our process (we need to go down one step in the process tree)
$pid = trim(exec('ps h -o pid --ppid ' . $proc_status['pid']));
// Kill the parent sh process
exec('kill -s 9 ' . $proc_status['pid']);
// Replace the stdout/stderr handles in our process
exec('gdb -p ' . $pid . ' --batch -x /usr/share/gdb_null_descr');
array_map('fclose', $pipes);
proc_close($proc);
edit: I forgot to mention that PHP doesn't run your shell command instantly, so you have to wait a bit before executing other shell commands. Usually it is fast enough (or PHP is slow enough), but I was too lazy to add those checks to my examples.
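If you do want that check, here is a minimal sketch (wait_for_child() is my own hypothetical helper, not a PHP built-in): poll the process table until the sh -c wrapper has actually spawned its child, then proceed as above.
function wait_for_child($shPid, $timeout = 2.0) {
    $deadline = microtime(true) + $timeout;
    do {
        // Same ps invocation as in the examples above
        $child = trim(exec('ps h -o pid --ppid ' . (int)$shPid));
        if ($child !== '') {
            return (int)$child; // real pid of the spawned process
        }
        usleep(50000); // poll every 50 ms
    } while (microtime(true) < $deadline);
    return null; // child never appeared within the timeout
}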

I ran into a similar problem and wrote a small script to handle it:
https://github.com/peeter-tomberg/php-shell-executer
What I did was background the process while still having access to its result (both stderr and stdout).

Related

Capture exec() error output of background process

I'm using the following command to open a temporary SSH tunnel for making a MySQL connection:
exec('ssh -f -L 3400:127.0.0.1:3306 user@example.com sleep 1 > /dev/null');
$connection = @new \mysqli('127.0.0.1', $username, $password, $database, 3400);
This works splendidly. However, once in a while there may be another process using that port, in which case it fails:
bind [127.0.0.1]:3400: Address already in use
channel_setup_fwd_listener_tcpip: cannot listen to port: 3401
Could not request local forwarding.
What I'd like to do is capture the error output of exec() so that I can retry using a different port. If I add 2>&1 to my command the error output just goes nowhere since stdout is already being piped to /dev/null.
One solution I've come up with is to pipe output to a file instead of /dev/null:
exec('ssh -f -L 3400:127.0.0.1:3306 user@example.com sleep 1 >temp.log 2>&1');
$output = file_get_contents('temp.log');
This works, but it feels messy. I'd prefer not to use the filesystem just to get the error response. Is there a way to capture the error output of this command without piping it to a file?
UPDATE: For the sake of clarity:
(a) Capturing the result code via exec()'s $result_code parameter does not work in this case. Don't ask me why - but it will always return 0 (success).
(b) stdout must be redirected somewhere or PHP will not treat it as a background process, and script execution will stop until it completes. (https://www.php.net/manual/en/function.exec.php#refsect1-function.exec-notes)
If a program is started with this function, in order for it to continue running in the background, the output of the program must be redirected to a file or another output stream. Failing to do so will cause PHP to hang until the execution of the program ends.
As far as I can tell, exec() is not the right tool. For a more controlled approach, you can use proc_open(). That might look something like this:
$process = proc_open(
    'ssh -f -L 3400:127.0.0.1:3306 user@example.com sleep 1',
    [/*stdin*/ 0 => ["pipe", "r"], /*stdout*/ 1 => ["pipe", "w"], /*stderr*/ 2 => ["pipe", "w"]],
    $pipes
);
// Set the streams to non-blocking
// This is required since any unread output on the pipes may result in the process still being marked as running
// Note that this does not work on Windows due to restrictions in the Windows API (https://bugs.php.net/bug.php?id=47918)
stream_set_blocking($pipes[1], 0);
stream_set_blocking($pipes[2], 0);
// Wait a little bit - you would probably have to loop here and check regularly
// Also note that you may need to read from stdout and stderr from time to time to allow the process to finish
sleep(2);
// The process should now be running as a background task
// You can check if the process has finished like this
if (
    !proc_get_status($process)["running"] ||
    proc_get_status($process)["signaled"] ||
    proc_get_status($process)["stopped"]
) {
    // Process should have stopped - read the output
    $stdout = stream_get_contents($pipes[1]) ?: "";
    $stderr = stream_get_contents($pipes[2]) ?: "";
    // Close everything
    @fclose($pipes[1]);
    @fclose($pipes[2]);
    proc_close($process);
}
You can find more details in the PHP manual on proc_open.
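Since the fixed sleep(2) above is only a guess, a more robust variant (a sketch assuming the same descriptor layout) polls proc_get_status() and keeps draining the pipes so the child can never stall on a full pipe buffer:
$stdout = $stderr = "";
do {
    $stdout .= (string) stream_get_contents($pipes[1]);
    $stderr .= (string) stream_get_contents($pipes[2]);
    $status = proc_get_status($process);
    if ($status["running"]) {
        usleep(100000); // check again in 100 ms
    }
} while ($status["running"]);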
If I add 2>&1 to my command the error output just goes nowhere since stdout is already being piped to /dev/null.
You can redirect stdout to /dev/null and stderr to stdout. That seems to me the simpler way of doing what you want (minimal modification).
So instead of
>temp.log 2>&1
do:
2>&1 1>/dev/null
Note that the order of the redirects is important.
Test
First we exec without redirection, then we redirect as above to capture stderr.
<?php
$me = $argv[0];
$out = exec("ls -la no-such-file {$me}");
print("The output is '{$out}'\n");
print("\n----\n");
$out = exec("ls -la no-such-file {$me} 2>&1 1>/dev/null");
print("The output is '{$out}'\n");
print("\n");
$ php -q tmp.php
ls: cannot access 'no-such-file': No such file or directory
The output is '-rw-r--r-- 1 lserni users 265 Oct 25 22:48 tmp.php'
----
The output is 'ls: cannot access 'no-such-file': No such file or directory'
Update
This requirement was not clear initially: the process must detach (as if it went into the background). Now, the fact is, whatever redirection you do of the original stream via exec() will prevent the process from detaching, because at the moment detachment should happen, the process has not completed and its output has not been delivered.
That is also why exec() reports a zero error code - there was no error in spawning. If you want the final result, someone must wait for the process to finish. So you have to redirect locally (that way it is the local file that does the waiting), then reconnect with whatever it was that waited for the process to finish, and read the results.
For what you want, exec() will never work. You ought to use the proc_* functions.
You might, however, force a detach even so by using nohup (you have no control over the spawned pid, so this is less than optimal):
if (file_exists('nohup.out')) { unlink('nohup.out'); }
$out = shell_exec('nohup ssh ... 2>&1 1>/dev/null &');
...still have to wait for connection to be established...
...read nohup.out to verify...
...
...do your thing...
As I said, this is less than optimal. Using proc_*, while undoubtedly more complicated, would allow you to start the ssh connection in tunnel mode without a terminal, and terminate it as soon as you don't need it anymore.
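To illustrate, a minimal sketch under the same tunnel parameters as the question (the fixed usleep() is a deliberate simplification; a real check would retry the MySQL connect):
// Start the tunnel without -f so we keep the process handle,
// and capture stderr to detect "Address already in use".
$proc = proc_open(
    'ssh -N -L 3400:127.0.0.1:3306 user@example.com',
    [2 => ['pipe', 'w']],
    $pipes
);
stream_set_blocking($pipes[2], false); // don't hang while ssh keeps running
usleep(500000); // crude wait for the tunnel to come up
$err = stream_get_contents($pipes[2]);
if ($err !== false && $err !== '') {
    // port collision etc. - pick another port and retry
}
// ... use the tunnel via 127.0.0.1:3400 ...
proc_terminate($proc); // drop the tunnel as soon as you are done
proc_close($proc);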
Actually, however, no offense intended, but this is an "X-Y problem": what you want to do is open an SSH tunnel for MySQL, so I'd look into doing just that.

exec() run in background [duplicate]

I have a process intensive task that I would like to run in the background.
The user clicks on a page, the PHP script runs, and finally, based on some conditions, if required, it has to run a shell command, e.g.:
shell_exec('php measurePerformance.php 47 844 email@yahoo.com');
Currently I use shell_exec, but this requires the script to wait for an output. Is there any way to execute the command I want without waiting for it to complete?
How about appending:
"> /dev/null 2>/dev/null &"
shell_exec('php measurePerformance.php 47 844 email@yahoo.com > /dev/null 2>/dev/null &');
Note this also discards both stdout and stderr.
This will execute a command and disconnect from the running process. Of course, it can be any command you want. But for a test, you can create a php file with a sleep(20) command in it.
exec("nohup /usr/bin/php -f sleep.php > /dev/null 2>&1 &");
You can also send your output back to the client instantly and continue processing your PHP code afterwards.
This is the method I am using for long-running Ajax calls which would not have any effect on the client side:
ob_end_clean();
ignore_user_abort();
ob_start();
header("Connection: close");
echo json_encode($out);
header("Content-Length: " . ob_get_length());
ob_end_flush();
flush();
// execute your command here. client will not wait for response, it already has one above.
You can find the detailed explanation here: http://oytun.co/response-now-process-later
On Windows 2003, to call another script without waiting, I used this:
$commandString = "start /b c:\\php\\php.EXE C:\\Inetpub\\wwwroot\\mysite.com\\phpforktest.php --passmsg=$testmsg";
pclose(popen($commandString, 'r'));
This only works AFTER changing permissions on cmd.exe - add Read and Execute for IUSR_YOURMACHINE (I also set Write to Deny).
Use PHP's popen command, e.g.:
pclose(popen("start c:\wamp\bin\php.exe c:\wamp\www\script.php","r"));
This will create a child process and the script will execute in the background without waiting for output.
Sure, for windows you can use:
$WshShell = new COM("WScript.Shell");
$oExec = $WshShell->Run("C:/path/to/php-win.exe -f C:/path/to/script.php", 0, false);
Note:
If you get a COM error, add the extension to your php.ini and restart apache:
[COM_DOT_NET]
extension=php_com_dotnet.dll
If it's off of a web page, I recommend generating a signal of some kind (dropping a file in a directory, perhaps) and having a cron job pick up the work that needs to be done. Otherwise, we're likely to get into the territory of using pcntl_fork() and exec() from inside an Apache process, and that's just bad mojo.
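A minimal sketch of that signal-file approach (the spool directory and job format are my assumptions):
// Web request side: just drop a job file and return immediately.
file_put_contents(
    '/var/spool/myapp/' . uniqid('job_', true) . '.json', // assumed spool dir
    json_encode(array('script' => 'measurePerformance.php', 'args' => array(47, 844)))
);
// Cron side (runs every minute): pick up and process pending jobs.
foreach (glob('/var/spool/myapp/job_*.json') as $file) {
    $job = json_decode(file_get_contents($file), true);
    // ... run the work described in $job ...
    unlink($file);
}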
Backgrounding the process like that will work, but you have to be careful not to overload your server, because every call creates a new process running in the background. If there is only one concurrent call at a time, this workaround will do the job.
If not, I would advise running a message queue, for instance beanstalkd/gearman/Amazon SQS.

Reading STDOUT from aria2c in PHP script

Here is a simple PHP script:
<?php
is_resource($process = proc_open('aria2c http://speedtest.rinet.ru/file100m.dat', [
    ['pipe', 'r'],
    ['pipe', 'w'],
    ['file', '/tmp/error-output.txt', 'a']
], $pipes, '/tmp', [])) || die();
fclose($pipes[0]);
fpassthru($pipes[1]);
fclose($pipes[1]);
proc_close($process);
The problem is that the progress data in the output is "stalled" until aria2c finishes. When the aria2c process ends, it immediately bursts all of its output to my script. It is not related to fpassthru(); I've tried plain fread() with the same result.
The flow:
[NOTICE] File already exists. Renamed to /tmp/file100m.dat.4.
<...huge delay and then burst...>
[#edb1dc 70MiB/100MiB(70%) CN:1 DL:8.4MiB ETA:3s]
[#edb1dc 81MiB/100MiB(81%) CN:1 DL:9.7MiB ETA:1s]
[#edb1dc 92MiB/100MiB(92%) CN:1 DL:10MiB]
I need to get lines like "[#edb1dc 92MiB/100MiB(92%) CN:1 DL:10MiB]" without having to wait for aria2c to end in order to gather information about the current progress.
Not to mention, if I run the exact same command in a console, aria2c works fine.
I ran into the same issue while trying to monitor Aria2 download progress from PHP.
The issue is caused by a missing fflush(stdout) in the Aria2 code, but luckily there's a fix you can use in your PHP code to reconfigure the stdout buffering mode. You can use the stdbuf utility under Linux or the unbuffer utility under OS X to disable buffering, like so:
proc_open('stdbuf -o0 aria2c ...'); // on Linux
or:
proc_open('unbuffer aria2c ...'); // on OSX
(Thanks to @hek2mgl for this answer on another similar question.)
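Putting it together with the script from the question (Linux variant; an untested sketch that reads line by line instead of using fpassthru()):
<?php
// stdbuf -o0 forces aria2c's stdout to be unbuffered even through a pipe.
is_resource($process = proc_open('stdbuf -o0 aria2c http://speedtest.rinet.ru/file100m.dat', [
    ['pipe', 'r'],
    ['pipe', 'w'],
    ['file', '/tmp/error-output.txt', 'a']
], $pipes, '/tmp', [])) || die();
fclose($pipes[0]);
while (($line = fgets($pipes[1])) !== false) {
    echo $line; // progress lines now arrive as aria2c prints them
}
fclose($pipes[1]);
proc_close($process);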

How can I capture the PID and output of a backgrounded PHP process at the same time?

I have one PHP script that has to start a second script in the background using exec. The parent script needs to capture the PID of this new process, but I also need to capture any potential output of the child script so it can be logged in the database. I can't just have the child script log to the database because I also need to capture fatal errors or anything that might indicate a problem with the script. The PID is used because the parent process needs to be able to check on the child and see when it finished. The child processes are sometimes run as crons, so forking isn't an option here either. I don't want two execution paths to debug if there are problems.
This was my first solution; it can capture the output, but fails to get the correct PID.
// RedirectToLog.php just reads stdin and logs it to the database.
$cmd="php child.php </dev/null 2>&1 | php RedirectToLog.php >/dev/null 2>&1 & echo $!";
The problem here is that $! is the PID of the last process started in the background, which ends up being RedirectToLog.php instead of child.php.
My other idea was to attempt to use a FIFO file (named pipe).
$cmd1="php RedirectToLog.php </tmp/myFIFO >/dev/null 2>&1 &"
$cmd2="php child.php </dev/null >/tmp/myFIFO 2>&1 & echo $!"
This one didn't work because I couldn't get RedirectToLog to reliably consume the FIFO, and when it did, sometimes child.php failed to write EOF to the pipe, which left both ends waiting on the other, and both processes would hang until one was killed.
Use proc_open to get full fd connectivity when starting a process. Take the resource returned by proc_open and use proc_get_status to get the pid.
$descriptors = array( ... ); // I suggest you read the php.net docs on this
$pipes = array();
$res = proc_open($cmd,$descriptors,$pipes);
$info = proc_get_status($res);
echo $info['pid'];
I haven't fully grasped all your problems, but this should get you started.
I think you need to use
proc_open &
proc_get_status
I'm guessing the only reason you really want to capture the child's PID is to monitor it. If so, it might be easier to use proc_open. That way you can basically open file handles for stdin, stdout and stderr and monitor everything from the parent.
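A minimal sketch of that idea (note the pid reported may be that of the sh -c wrapper rather than child.php itself, depending on how PHP spawns the command):
// Start the child, remember its pid, and log its output from the parent.
$proc = proc_open('php child.php', [
    1 => ['pipe', 'w'], // stdout
    2 => ['pipe', 'w'], // stderr
], $pipes);
$pid = proc_get_status($proc)['pid'];
// ... store $pid so you can check on the child later ...
$output = stream_get_contents($pipes[1]); // blocks until the child closes stdout
$errors = stream_get_contents($pipes[2]);
// ... write $output and $errors to the database log ...
fclose($pipes[1]);
fclose($pipes[2]);
proc_close($proc);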

How to terminate a process

I am creating a process using proc_open in one PHP script.
How do I terminate it from another script? I am not able to pass the resource returned by proc_open.
I also tried using proc_get_status(); it returns the ppid, but I don't get the pid of the children.
Development environment: WAMP
Any input is appreciated.
I recommend that you re-examine your model to make certain that you actually have to kill the process from somewhere else. Your code will get increasingly difficult to debug and maintain in all but the most trivial circumstances.
To keep it encapsulated, you can signal the process you wish to terminate and have it exit gracefully. Otherwise, you can use normal IPC to send a message that says: "hey, buddy, shut down, please."
edit: for the 2nd paragraph, you may still end up launching a script to do this. That's fine. What you want to avoid is a kill -9 type of thing; instead, let the process exit gracefully.
To do that in pure PHP, here is the solution:
posix_kill($pid, 15); // SIGTERM = 15
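Since the pid has to cross a script boundary here, one approach is to persist it (a sketch; the pid-file path and worker.php are my assumptions, and posix_kill requires the POSIX extension, so this is for *nix rather than WAMP):
// Script A: start the worker and record its pid.
$proc = proc_open('php worker.php', array(), $pipes); // worker.php is hypothetical
file_put_contents('/tmp/worker.pid', proc_get_status($proc)['pid']);

// Script B: terminate it later.
posix_kill((int) file_get_contents('/tmp/worker.pid'), 15); // SIGTERM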
You can use a method of creating the process that returns the PID of the new process.
Does this work for you?
$process = proc_open('php', $descriptorspec, $pipes, $cwd, $env);
$return_value = proc_close($process);
You're best off using something like this to launch your other process:
$pid = shell_exec("nohup $Command > /dev/null 2>&1 & echo $!");
That would execute the process and give you its running process ID.
exec("ps $pid", $pState);
$running = (count($pState) >= 2);
To terminate it, you can always use:
exec("kill $pid");
However, you can't kill processes not owned by the user PHP runs as - if it runs as nobody, you'll start the new process as nobody and will only be able to kill processes running under the user nobody.
