php exec() never ends

I'm using PHP's exec() to run an executable file, and it never seems to end, although running the same executable from a shell works fine.
Here are the main things the executable does:
fork();
The child process does some time-consuming work, with a CPU time limit applied via setrlimit().
The parent process listens for signals and kills the child once the calculated used_time exceeds the limit.
What can I do to make php exec() work?
Update:
Because the full code is too long, I've only included the relevant parts.
main function
child_pid = fork();
if (child_pid == 0)
{
    compile();
    exit(0);
}
else
{
    int res = watch();
    if (res)
        puts("YES");
    else
        puts("NO");
}
child process
LIM.rlim_cur = LIM.rlim_max = COMPILE_TIME;
setrlimit(RLIMIT_CPU, &LIM);
alarm(0);
alarm(LIM.rlim_cur * 10);
switch (language)
{
    // ..... here is execl() to call a compiler like gcc, g++, javac
}
parent process
int status = 0;
int used_time = 0;
struct timeval case_startv, case_nowv;
struct timezone case_startz, case_nowz;

gettimeofday(&case_startv, &case_startz);

while (1)
{
    usleep(50000);
    kill(child_pid, SIGKILL);
    gettimeofday(&case_nowv, &case_nowz);
    used_time = case_nowv.tv_sec - case_startv.tv_sec;

    if (waitpid(child_pid, &status, WNOHANG) == 0) // still running
    {
        if (used_time > COMPILE_TIME)
        {
            report_log("Compile time limit exceed");
            kill(child_pid, SIGKILL);
            return 0;
        }
    }
    else
    {
        // handle signals
    }
}
For testing, the PHP file contains nothing but the exec() call.
The situation I described only occurs when php exec() runs the executable to compile user code like:
#include "/dev/random"
//....

A PHP script on a server has a limited time to execute, so it is generally not a good idea to run long-running work this way; it is recommended to run it as a background job.
This limit is defined in php.ini, and it differs between Apache and the shell.
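For reference, a quick way to see the limit in question (the 30-second figure below is just the stock php.ini default and may differ on your host):
// The CLI binary ships with max_execution_time = 0 (unlimited), while a web
// SAPI typically uses the php.ini default of 30 seconds, which is one reason
// the same script can behave differently in a browser and in a shell.
echo php_sapi_name() . ': max_execution_time = ' . ini_get('max_execution_time') . PHP_EOL;

// If a long-running request is unavoidable, the limit can be lifted at runtime.
// Note that on Linux, time spent inside exec() itself is not counted against it.
set_time_limit(0);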

At last, I found out why this happened:
I was only killing child_pid, but not the other processes spawned by child_pid,
so php exec() kept running forever.
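If anyone else runs into this, here is a rough sketch of the idea from the PHP side: start the binary in its own process group and signal the whole group instead of a single PID. It assumes the posix extension and the setsid utility are available, and ./judge is just a placeholder for the executable:
// Start the executable detached in its own session/process group so no
// grandchild can keep exec()'s output pipe open ("./judge" is a placeholder).
$pid = (int) shell_exec('setsid ./judge > /dev/null 2>&1 < /dev/null & echo $!');

// Later, to stop it together with everything it forked, signal the group:
// posix_kill() hands the value to kill(2), so a negative PID targets the
// whole process group; 9 is SIGKILL.
$pgid = posix_getpgid($pid);
if ($pgid !== false) {
    posix_kill(-$pgid, 9);
}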

How to implement one PHP scheduled job to run separate process in XAMPP? [duplicate]

Is there a realistic way of implementing a multi-threaded model in PHP, whether truly or just by simulating it? Some time back it was suggested that you could force the operating system to load another instance of the PHP executable and handle other simultaneous processes.
The problem with this is that when the PHP code finishes executing, the PHP instance remains in memory because there is no way to kill it from within PHP. So if you are simulating several threads you can imagine what's going to happen. So I am still looking for a way that multi-threading can be done or simulated effectively from within PHP. Any ideas?
Warning: This extension is considered unmaintained and dead.
Warning: The pthreads extension cannot be used in a web server environment. Threading in PHP is therefore restricted to CLI-based applications only.
Warning: pthreads (v3) can only be used with PHP 7.2+: This is due to ZTS mode being unsafe in 7.0 and 7.1.
https://www.php.net/manual/en/intro.pthreads.php
Multi-threading is possible in PHP
Yes, you can do multi-threading in PHP with pthreads.
From the PHP documentation:
pthreads is an object-orientated API that provides all of the tools needed for multi-threading in PHP. PHP applications can create, read, write, execute and synchronize with Threads, Workers and Threaded objects.
Warning:
The pthreads extension cannot be used in a web server environment. Threading in PHP should therefore remain to CLI-based applications only.
Simple Test
#!/usr/bin/php
<?php
class AsyncOperation extends Thread {

    public function __construct($arg) {
        $this->arg = $arg;
    }

    public function run() {
        if ($this->arg) {
            $sleep = mt_rand(1, 10);
            printf('%s: %s -start -sleeps %d' . "\n", date("g:i:sa"), $this->arg, $sleep);
            sleep($sleep);
            printf('%s: %s -finish' . "\n", date("g:i:sa"), $this->arg);
        }
    }
}

// Create an array
$stack = array();

// Initiate multiple threads
foreach ( range("A", "D") as $i ) {
    $stack[] = new AsyncOperation($i);
}

// Start the threads
foreach ( $stack as $t ) {
    $t->start();
}
?>
First Run
12:00:06pm: A -start -sleeps 5
12:00:06pm: B -start -sleeps 3
12:00:06pm: C -start -sleeps 10
12:00:06pm: D -start -sleeps 2
12:00:08pm: D -finish
12:00:09pm: B -finish
12:00:11pm: A -finish
12:00:16pm: C -finish
Second Run
12:01:36pm: A -start -sleeps 6
12:01:36pm: B -start -sleeps 1
12:01:36pm: C -start -sleeps 2
12:01:36pm: D -start -sleeps 1
12:01:37pm: B -finish
12:01:37pm: D -finish
12:01:38pm: C -finish
12:01:42pm: A -finish
Real World Example
error_reporting(E_ALL);

class AsyncWebRequest extends Thread {
    public $url;
    public $data;

    public function __construct($url) {
        $this->url = $url;
    }

    public function run() {
        if (($url = $this->url)) {
            /*
             * If a large amount of data is being requested, you might want to
             * fsockopen and read using usleep in between reads
             */
            $this->data = file_get_contents($url);
        } else
            printf("Thread #%lu was not provided a URL\n", $this->getThreadId());
    }
}

$t = microtime(true);
$g = new AsyncWebRequest(sprintf("http://www.google.com/?q=%s", rand() * 10));

/* starting synchronization */
if ($g->start()) {
    printf("Request took %f seconds to start ", microtime(true) - $t);
    while ( $g->isRunning() ) {
        echo ".";
        usleep(100);
    }
    if ($g->join()) {
        printf(" and %f seconds to finish receiving %d bytes\n", microtime(true) - $t, strlen($g->data));
    } else
        printf(" and %f seconds to finish, request failed\n", microtime(true) - $t);
}
Why don't you use popen?

for ($i = 0; $i < 10; $i++) {
    // open ten processes
    for ($j = 0; $j < 10; $j++) {
        $pipe[$j] = popen('php script2.php', 'w'); // run the worker through the PHP binary
    }

    // wait for them to finish
    for ($j = 0; $j < 10; ++$j) {
        pclose($pipe[$j]);
    }
}
Threading isn't available in stock PHP, but concurrent programming is possible by using HTTP requests as asynchronous calls.
With curl's timeout setting set to 1 and the same session_id used for the processes you want associated with each other, you can communicate via session variables, as in my example below. With this method you can even close your browser and the concurrent process will still exist on the server.
Don't forget to verify the correct session ID like this:
http://localhost/test/verifysession.php?sessionid=[the correct id]
startprocess.php
$request = "http://localhost/test/process1.php?sessionid=".$_REQUEST["PHPSESSID"];
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $request);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_TIMEOUT, 1);
curl_exec($ch);
curl_close($ch);
echo $_REQUEST["PHPSESSID"];
process1.php
set_time_limit(0);
if ($_REQUEST["sessionid"])
    session_id($_REQUEST["sessionid"]);

function checkclose()
{
    global $_SESSION;
    if ($_SESSION["closesession"])
    {
        unset($_SESSION["closesession"]);
        die();
    }
}

while (!$close)
{
    session_start();
    $_SESSION["test"] = rand();
    checkclose();
    session_write_close();
    sleep(5);
}
verifysession.php
if ($_REQUEST["sessionid"])
    session_id($_REQUEST["sessionid"]);
session_start();
var_dump($_SESSION);
closeprocess.php
if ($_REQUEST["sessionid"])
    session_id($_REQUEST["sessionid"]);
session_start();
$_SESSION["closesession"] = true;
var_dump($_SESSION);
While you can't thread, you do have some degree of process control in php. The two function sets that are useful here are:
Process control functions
http://www.php.net/manual/en/ref.pcntl.php
POSIX functions
http://www.php.net/manual/en/ref.posix.php
You could fork your process with pcntl_fork, which returns the PID of the child. Then you can use posix_kill to dispose of that PID.
That said, if you kill a parent process a signal should be sent to the child process telling it to die. If php itself isn't recognising this you could register a function to manage it and do a clean exit using pcntl_signal.
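A compact sketch of that combination, assuming the pcntl and posix extensions are loaded and the script runs under the CLI SAPI:
<?php
// Sketch: fork a worker, then dispose of it by PID and let it exit cleanly.
pcntl_async_signals(true); // deliver signals without declare(ticks=1)

$pid = pcntl_fork();
if ($pid === -1) {
    die("could not fork\n");
}

if ($pid === 0) {
    // Child: register a handler so SIGTERM leads to a clean exit.
    pcntl_signal(SIGTERM, function () {
        // ... close files, flush logs ...
        exit(0);
    });
    sleep(60); // stand-in for real work
    exit(0);
}

// Parent: at some point, dispose of the child by PID ...
sleep(2);
posix_kill($pid, SIGTERM);

// ... and reap it so it doesn't linger as a zombie.
pcntl_waitpid($pid, $status);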
Using threads is made possible by the pthreads PECL extension:
http://www.php.net/manual/en/book.pthreads.php
I know this is an old question, but for people searching: there is now a PECL extension written in C that gives PHP multi-threading capability; it's located here: https://github.com/krakjoe/pthreads
You can use exec() to run a command line script (such as command line PHP), and if you redirect the output to a file and background the command, your script won't wait for it to finish.
I can't quite remember the PHP CLI syntax, but you'd want something like:
exec("/path/to/php -f '/path/to/file.php' > '/path/to/output.txt' 2>&1 &");
I think quite a few shared hosting servers have exec() disabled by default for security reasons, but it might be worth a try.
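If in doubt, it's easy to check before relying on it; a small sketch:
// Detect whether the host has blacklisted exec() via disable_functions.
$disabled = array_map('trim', explode(',', (string) ini_get('disable_functions')));

if (!function_exists('exec') || in_array('exec', $disabled, true)) {
    echo "exec() is not available on this host\n";
} else {
    exec('echo ok', $out);
    echo $out[0] . PHP_EOL; // "ok"
}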
You could simulate threading. PHP can run background processes via popen (or proc_open). Those processes can be communicated with via stdin and stdout. Of course those processes can themselves be a php program. That is probably as close as you'll get.
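A minimal sketch of that pattern with proc_open(); the inline strtoupper worker is only a stand-in for a real worker script:
// Talk to a background PHP process over its stdin and stdout.
$worker = 'php -r "echo strtoupper(stream_get_contents(STDIN));"';
$spec = [
    0 => ['pipe', 'r'], // child's stdin  (we write to it)
    1 => ['pipe', 'w'], // child's stdout (we read from it)
    2 => ['pipe', 'w'], // child's stderr
];
$proc = proc_open($worker, $spec, $pipes);

fwrite($pipes[0], "hello from the parent\n");
fclose($pipes[0]);                     // EOF tells the worker its input is complete

echo stream_get_contents($pipes[1]);   // prints: HELLO FROM THE PARENT
fclose($pipes[1]);
fclose($pipes[2]);
proc_close($proc);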
You have a few options:
multi_curl
You can use the system command for the same purpose.
The ideal scenario is to write a threading function in C and compile/configure it into PHP, so that the function becomes available as a PHP function.
How about pcntl_fork?
check out the manual page for examples: PHP pcntl_fork
<?php

$pid = pcntl_fork();
if ($pid == -1) {
    die('could not fork');
} else if ($pid) {
    // we are the parent
    pcntl_wait($status); // Protect against Zombie children
} else {
    // we are the child
}
?>
If you are using a Linux server, you can use
exec("nohup $php_path path/script.php > /dev/null 2>/dev/null &")
If you need to pass some args:
exec("nohup $php_path path/script.php $args > /dev/null 2>/dev/null &")
In script.php
$args = $argv[1];
Or use Symfony
https://symfony.com/doc/current/components/process.html
$process = Process::fromShellCommandline("php ".base_path('script.php'));
$process->setTimeout(0);
$process->disableOutput();
$process->start();
Depending on what you're trying to do you could also use curl_multi to achieve it.
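For completeness, the curl_multi flavour looks roughly like this (the URLs are placeholders):
// Issue two HTTP requests concurrently with curl_multi.
$urls = ['https://example.com/a', 'https://example.com/b']; // placeholders
$mh = curl_multi_init();
$handles = [];
foreach ($urls as $url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_multi_add_handle($mh, $ch);
    $handles[] = $ch;
}

// Drive all transfers until none are still running.
do {
    curl_multi_exec($mh, $running);
    curl_multi_select($mh); // block briefly instead of busy-looping
} while ($running > 0);

foreach ($handles as $ch) {
    echo strlen(curl_multi_getcontent($ch)) . " bytes\n";
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);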
pcntl_fork won't work in a web server environment if it has safe mode turned on. In this case, it will only work in the CLI version of PHP.
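So it's worth guarding against that up front; a tiny sketch:
// Bail out early when forking is not going to work in this environment.
if (php_sapi_name() !== 'cli' || !function_exists('pcntl_fork')) {
    die("pcntl_fork() needs the CLI SAPI with the pcntl extension loaded\n");
}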
I know this is an old question, but this will undoubtedly be useful to many: PHPThreads
Code Example:
function threadproc($thread, $param) {
    echo "\tI'm a PHPThread. In this example, I was given only one parameter: \"". print_r($param, true) ."\" to work with, but I can accept as many as you'd like!\n";
    for ($i = 0; $i < 10; $i++) {
        usleep(1000000);
        echo "\tPHPThread working, very busy...\n";
    }
    return "I'm a return value!";
}

$thread_id = phpthread_create($thread, array(), "threadproc", null, array("123456"));

echo "I'm the main thread doing very important work!\n";

for ($n = 0; $n < 5; $n++) {
    usleep(1000000);
    echo "Main thread...working!\n";
}

echo "\nMain thread done working. Waiting on our PHPThread...\n";

phpthread_join($thread_id, $retval);

echo "\n\nOur PHPThread returned: " . print_r($retval, true) . "!\n";
Requires PHP extensions:
posix
pcntl
sockets
I've been using this library in production now for months. I put a LOT of effort into making it feel like using POSIX pthreads. If you're comfortable with pthreads, you can pick this up and use it very effectively in no time.
Computationally, the inner workings are quite different, but practically, the functionality is nearly the same including semantics and syntax.
I've used it to write an extremely efficient WebSocket server that supports high throughput rates. Sorry, I'm rambling. I'm just excited that I finally got it released and I want to see who it will help!
popen()/proc_open() runs processes in parallel, even on Windows.
The most common pitfall is calling fread()/stream_get_contents() without a while loop: once you fread() from a running process, it blocks the output of processes started after it (because fread() waits until at least one byte arrives).
The fix is stream_select(). The closest analogy is a "foreach with a timeout, but for streams": you pass it arrays of streams to read and to write, and on each call one or more streams get selected. The function updates the original arrays by reference, so don't forget to rebuild them with all streams before the next call. It gives the streams some time to be read or written; if there is no content, control returns so we can retry the cycle.
// sleep.php
set_error_handler(function ($severity, $error, $file, $line) {
    throw new ErrorException($error, -1, $severity, $file, $line);
});
$sleep = $argv[ 1 ];
sleep($sleep);
echo $sleep . PHP_EOL;
exit(0);
// run.php
<?php

$procs = [];
$pipes = [];

$cmd = 'php %cd%/sleep.php';

$desc = [
    0 => [ 'pipe', 'r' ], // child stdin  (we write to it)
    1 => [ 'pipe', 'w' ], // child stdout (we read from it)
    2 => [ 'pipe', 'w' ], // child stderr (pipes are opened 'r'/'w'; 'a' is only for files)
];

for ( $i = 0; $i < 10; $i++ ) {
    $iCmd = $cmd . ' ' . ( 10 - $i ); // add SLEEP argument to each command 10, 9, ... etc.

    $proc = proc_open($iCmd, $desc, $pipes[ $i ], __DIR__);

    // non-blocking reads, so stream_get_contents() returns whatever is buffered
    // instead of waiting for EOF
    stream_set_blocking($pipes[ $i ][ 1 ], false);
    stream_set_blocking($pipes[ $i ][ 2 ], false);

    $procs[ $i ] = $proc;
}

$stdins = array_column($pipes, 0);
$stdouts = array_column($pipes, 1);
$stderrs = array_column($pipes, 2);

while ( $procs ) {
    foreach ( $procs as $i => $proc ) {
        // #gzhegow > [OR] you can output while script is running (if child never finishes)
        // watch the child's stdout/stderr for readable data
        $read = [ $stdouts[ $i ], $stderrs[ $i ] ];
        $write = [];
        $except = [];

        if (stream_select($read, $write, $except, $seconds = 0, $microseconds = 1000)) {
            foreach ( $read as $stream ) {
                echo stream_get_contents($stream);
            }
        }

        $status = proc_get_status($proc);

        if (false === $status[ 'running' ]) {
            $status = proc_close($proc);
            unset($procs[ $i ]);

            echo 'STATUS: ' . $status . PHP_EOL;
        }

        // #gzhegow > [OR] you can output once command finishes
        // $status = proc_get_status($proc);
        //
        // if (false === $status[ 'running' ]) {
        //     if ($content = stream_get_contents($stderrs[ $i ])) {
        //         echo '[ERROR]' . $content . PHP_EOL;
        //     }
        //
        //     echo stream_get_contents($stdouts[ $i ]) . PHP_EOL;
        //
        //     $status = proc_close($proc);
        //     unset($procs[ $i ]);
        //
        //     echo 'STATUS: ' . $status . PHP_EOL;
        // }
    }

    usleep(1); // give your computer one tick to decide what thread should be used
}

// ensure you receive 1,2,3... but you've just run it 10,9,8...
exit(0);
Multithreading means performing multiple tasks or processes simultaneously. There is no direct way to achieve multithreading in PHP, but we can get almost the same result with the following approach.
chdir(dirname(__FILE__)); // if you want to run this file as a cron job

for ($i = 0; $i < 2; $i += 1) {
    exec("php test_1.php $i > test.txt &");
    // This executes test_1.php in the background and moves on to the next
    // iteration of the loop immediately, without waiting for the script in
    // test_1.php to finish. $i is passed as an argument.
}
test_1.php
$conn = mysql_connect($host, $user, $pass);
$db = mysql_select_db($db);

$i = $argv[1]; // this is the argument passed from the index.php file

for ($j = 0; $j < 5000; $j++)
{
    mysql_query("insert into test set
        id = '$i',
        comment = 'test',
        datetime = NOW()");
}
This will execute test_1.php two times, and both processes will run in the background simultaneously, so in this way you can achieve multithreading in PHP.
This guy has done really good work on this: Multithreading in PHP
As of the writing of my current comment, I don't know about PHP threads. I came here looking for the answer myself, but one workaround is this: the PHP program that receives the request from the web server delegates the whole answer formulation to a console application, which stores its output (the answer to the request) in a binary file, and the PHP program then returns that binary file byte-by-byte as the answer to the received request. The console application can be written in any programming language that runs on the server, including languages with proper threading support, such as C++ programs that use OpenMP.
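A rough sketch of that delegation pattern; ./worker and its calling convention are made up for illustration, standing in for whatever natively threaded program (for example C++ with OpenMP) has been compiled for the server:
// The web-facing PHP script hands the real work to a native console program,
// which writes the complete response into a temporary file.
$tmp = tempnam(sys_get_temp_dir(), 'resp_');
exec('./worker ' . escapeshellarg($tmp), $ignored, $exitCode); // hypothetical worker binary

if ($exitCode === 0) {
    header('Content-Type: application/octet-stream');
    readfile($tmp); // return the prepared answer byte-by-byte
}
unlink($tmp);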
One unreliable, dirty, trick is to use PHP for executing a console application, "uname",
uname -a
and print the output of that console command to the HTML output to find out the exact version of the server software. Then install the exact same version of the software in a VirtualBox instance, compile/assemble whatever fully self-contained, preferably static, binaries you want, and upload those to the server. From that point onwards the PHP application can use those binaries in the role of the console application that has proper multi-threading. It's a dirty, unreliable workaround for a situation where the server administrator has not installed all the needed programming language implementations on the server. The thing to watch out for is that on every request the PHP application receives, the console application(s) must terminate/exit/get killed.
As to what hosting service administrators think of such usage patterns, I guess it boils down to culture. In Northern Europe the service provider HAS TO DELIVER WHAT WAS ADVERTISED, and if execution of console commands is allowed, uploading of non-malware files is allowed, and the service provider has the right to kill any server process after a few minutes or even after 30 seconds, then the hosting service administrators lack any arguments for forming a proper complaint. In the United States and Western Europe the situation and culture are very different, and I believe there is a great chance that a U.S. or Western European hosting provider will refuse to serve clients that use the trick described above. That's just my guess, given my personal experience with U.S. hosting services and what I have heard from others about Western European hosting services. As of the writing of my current comment (2018-09-01) I do not know anything about the cultural norms of Southern European hosting service providers and network administrators.

dynamically get results from exec

I have a PHP script that calls a Go script. It gets results every 1-2 seconds and prints them. Using PHP's exec() and its output argument, I only get the results when the program finishes. Is there a way I can check the output to see when it changes, and output that while it's still running?
Something like this, but pausing the execution?:
$return_status = 0;
$output = [];
$old_output = ["SOMETHING ELSE"];
while ($return_status == 0) {
    exec($my_program, $output, $return_status); # somehow pause this?
    if ($output != $old_output) {
        print_r($output);
        $old_output = $output;
    }
}
Yes. Use the popen() function to get a file handle for the command's output, then read from it a line at a time.
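Something along these lines, reusing $my_program from the question:
// Stream the Go program's output as it is produced instead of waiting for
// exec() to return everything at the end.
$fp = popen($my_program, 'r');
if ($fp === false) {
    die("failed to start process\n");
}
while (($line = fgets($fp)) !== false) {
    echo $line; // or compare it with the previous line, as in the question
    flush();    // push it to the browser/terminal right away
}
pclose($fp);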

Why PHP repeatedly being executed while in cronjob?

I set up a cronjob on a CentOS 6.4 (Final) server (with PHP 5.5.9 and Apache httpd 2.4.4 installed):
30 15 * * * wget "http://10.15.1.2/calc.php" -O /dev/null
calc.php FTPs to several servers to download several log files (using PHP's built-in FTP functions), inserts the log records from those files into a single temp table, then counts log records by date, and lastly inserts the counted results into another summary table. It's a very simple program.
calc.php starts at 15:30 and ends at 15:46 (calc.php writes the time to a log). When I check the log, I find calc.php being executed again at 15:45 (almost when the first run is about to end). I've double-checked my calc.php: in the main logic I don't use any loop statements such as while, do-while, or for. All the tasks mentioned above are written as functions in the same program file. I've run the same URL in a browser many times and it always works normally.
So what could be the reason that causes repeated execution while running in cron job?
Here's the main logic part (Note: the myxxxx() calls are my own simple display-message functions; USEARRAY_MODE and TESTINSTMP_MODE aren't defined at runtime):
myassess("Being-calcuated logfile list=\n".xpr($srclog_fn_list));
if (defined('USEARRAY_MODE')) {
// Insert outer srclog into temp array
if (false === ($srclog_calc_date = insert_srclog_to_array($srclog_get_date, $srclog_dir, $srclog_delta_days, $srclog_fn_list, $logtmp_sfile, $logtmp_dfile, $logtmp_result))) {
myerror("Insert outer srclog to temp array has problem!");
exit;
}
myassess("Actually calculated date = $srclog_calc_date.");
// If testinstemp mode, we stop here without doing statistics
if (defined('TESTINSTMP_MODE')) {
myinfo("Skipping statistics in testinstmp mode.");
exit;
}
// Do statistics and then insert into summary table
$total_cnt_list = do_statistics_from_array($_cnt_g2s_array, $userlist_tempfile, $rule_file, $srclog_calc_date, $logtmp_result);
} else { // use table
// Insert outer srclog 入 temp array
if (false === ($srclog_calc_date = insert_srclog_to_table($srclog_get_date, $srclog_dir, $srclog_delta_days, $srclog_fn_list))) {
myerror("Insert outer srclog to temp table has problem!");
exit;
}
myassess("Actually calculated date = $srclog_calc_date.");
// If testinstemp mode, we stop here without doing statistics
if (defined('TESTINSTMP_MODE')) {
myinfo("Skipping statistics in testinstmp mode.");
exit;
}
// Do statistics and then insert into summary table
$total_cnt_list = do_statistics_from_table($_cnt_g2s_array, $userlist_tempfile, $rule_file, $srclog_calc_date);
}
if (false === $total_cnt_list)
myerror("Calculate/Write outer summary has problem!");
else {
myinfo("Outer srclog actually calculated date = $srclog_calc_date.");
myinfo("Total summary count = ".array_sum($total_cnt_list));
myinfo("Insert-ok summary count = ".$total_cnt_list[0]);
myinfo("!!!Insert-fail summary count = ".$total_cnt_list[1]);
}
#mysql_close($wk_dbconn);
#oci_close($uodb_conn);
myinfo("### Running end at ".date(LOG_DATEFORMAT1).".");
myinfo("### total exectution time:".elapsed_time($_my_start_time, microtime(true)));
myinfo("############ END program ############");
exit;

waiting for all pids to exit in php

My issue is this: I am forking a process so that I can speed up access to files on disk. I store any data from these files in a tmp file on local disk. Ideally, after all processes have finished, I need to access that tmp file, get the data into an array, and then unlink the tmp file since it is no longer needed. My problem is that pcntl_wait() does not actually seem to wait until all child processes are done before moving on to the final set of operations, so I end up unlinking the file before some random process can finish up.
I can't seem to find a solid way to wait for all processes to exit cleanly and then access my data.
$numChild = 0;
$maxChild = 20; // max number of forked processes

// get a list of "availableCabs"
foreach ($availableCabs as $cab) {
    // fork the process
    $pids[$numChild] = pcntl_fork();
    if (!$pids[$numChild]) {
        // do some work
        exit(0);
    } else {
        $numChild++;
        if ($numChild == $maxChild) {
            pcntl_wait($status);
            $numChild--;
        }
    }
} // end fork

// Below is where things fall apart. I need to be able to print the complete
// serialized data, but several child processes don't actually exit before I
// unlink the file.
$dataFile = fopen($pid, 'r');
while (($values = fgetcsv($dataFile, 0, ',')) !== FALSE) {
    $fvalues[] = $values;
}
print serialize($fvalues);
fclose($dataFile);
unlink($file);
Please note that I'm leaving out a lot of code regarding what I'm actually doing; if it needs to be posted, that's not an issue.
Try restructuring your code so that you have two loops: one that spawns processes and one that waits for them to finish. You should also use pcntl_waitpid() to check for specific process IDs, rather than the simple child-counting approach you are currently using.
Something like this:
<?php

$maxChildren = 20; // Max number of forked processes
$pids = array();   // Child process tracking array

// Get a list of "availableCabs"
foreach ($availableCabs as $cab) {

    // Limit the number of child processes
    // If $maxChildren or more processes exist, wait until one exits
    if (count($pids) >= $maxChildren) {
        $pid = pcntl_waitpid(-1, $status);
        unset($pids[$pid]); // Remove PID that exited from the list
    }

    // Fork the process
    $pid = pcntl_fork();

    if ($pid) { // Parent
        if ($pid < 0) {
            // Unable to fork process, handle error here
            continue;
        } else {
            // Add child PID to tracker array
            // Use PID as key for easy use of unset()
            $pids[$pid] = $pid;
        }
    } else { // Child
        // If you aren't doing this already, consider using include() here - it
        // will keep the code in the parent script more readable and separate
        // the logic for the parent and children
        exit(0);
    }
}

// Now wait for the child processes to exit. This approach may seem overly
// simple, but because of the way it works it will have the effect of
// waiting until the last process exits and pretty much no longer than that
foreach ($pids as $pid) {
    pcntl_waitpid($pid, $status);
    unset($pids[$pid]);
}

// Now the parent process can do its cleanup of the results

Creating a PHP Online Grading System on Linux: exec Behavior, Process IDs, and grep

Background
I am writing a simple online judge (a code grading system) using PHP and MySQL. It takes submitted code in C++ and Java, compiles it, and tests it.
This is Apache running PHP 5.2 on an old version of Ubuntu.
What I am currently doing
I have a php program that loops infinitely, calling another php program by
//for(infinity)
exec("php -f grade.php");
//...
every tenth of a second. Let's call the first one looper.php and the second one grade.php. (Checkpoint: grade.php should completely finish running before the "for" loop continues, correct?)
grade.php pulls the earliest submitted code that needs to be graded from the MySQL database, puts that code in a file (test.[cpp/java]), and calls 2 other php programs in succession, named compile.php and test.php, like so:
//...
exec("php -f compile.php");
//...
//for([all tests])
exec("php -f test.php");
//...
(Checkpoint: compile.php should completely finish running before the "for" loop calling test.php even starts, correct?)
compile.php then compiles the program in test.[cpp/java] as a background process. For now, let's assume that it's compiling a Java program and that test.java is located in a subdirectory. I now have
//...
//$dir = "./sub/" or some other subdirectory; this may be an absolute path
$start_time = microtime(true); //to get elapsed compilation time later
exec("javac ".$dir."test.java -d ".$dir." 2> ".$dir
."compileError.txt 1> ".$dir."compileText.txt & echo $!", $out);
//...
in compile.php. It's redirecting the output from javac, so javac should be running as a background process... and it seems like it works. The $out should be grabbing the process id of javac in $out[0].
The real problem
I want to stop compiling if for some reason it takes more than 10 seconds, and I want to end compile.php as soon as compilation finishes if that happens within 10 seconds. Since the exec("javac... call above is a background process (or is it?), I have no way of knowing when it has completed without looking at the process ID, which should have been stored in $out earlier. Right after that, in compile.php, I check it with a 10-second loop calling exec("ps ax | grep [pid].*javac"); and seeing if the PID still exists:
//...
$pid = (int)$out[0];
$done_compile = false;

while ((microtime(true) - $start_time < 10) && !$done_compile) {
    usleep(20000); // only sleep 0.02 seconds between checks
    unset($grep);
    exec("ps ax | grep ".$pid.".*javac", $grep);
    $found_process = false;

    // loop through the results from grep
    while (!$found_process && list(, $proc) = each($grep)) {
        $boom = explode(" ", $proc);
        $npid = (int)$boom[0];
        if ($npid == $pid)
            $found_process = true;
    }
    $done_compile = !$found_process;
}

if(!done_compile)
    exec("kill -9 ".$pid);
//...
... which doesn't seem to be working. At least some of the time. Often, what happens is test.php starts running before the javac even stops, resulting in test.php not being able to find the main class when it tries to run the java program. I think that the loop is bypassed for some reason, though this may not be the case. At other times, the entire grading system works as intended.
Meanwhile, test.php also uses the same strategy (with the X-second loop and the grep) in running a program in a certain time limit, and it has a similar bug.
I think the bug lies in the grep not finding javac's pid even when javac is still running, resulting in the 10 second loop breaking early. Can you spot an obvious bug? A more discreet bug? Is there a problem with my usage of exec? Is there a problem with $out? Or is something entirely different happening?
Thank you for reading my long question. All help is appreciated.
I just came up with this code that will run a process, and terminate it if it runs longer than $timeout seconds. If it terminates before the timeout, it will have the program output in $output and the exit status in $return_value.
I have tested it and it seems to work well. Hopefully you can adapt it to your needs.
<?php

$command = 'echo Hello; sleep 30'; // the command to execute
$timeout = 5;   // terminate process if it goes longer than this time in seconds
$cwd = '/tmp';  // working directory of executing process
$env = null;    // environment variables to set, null to use same as PHP

$descriptorspec = array(
    0 => array("pipe", "r"),                          // stdin is a pipe that the child will read from
    1 => array("pipe", "w"),                          // stdout is a pipe that the child will write to
    2 => array("file", "/tmp/error-output.txt", "a")  // stderr is a file to write to
);

// start the process
$process = proc_open($command, $descriptorspec, $pipes, $cwd, $env);
$startTime = time();
$terminated = false;
$output = '';

if (is_resource($process)) {
    // process was started
    // $pipes now looks like this:
    // 0 => writeable handle connected to child stdin
    // 1 => readable handle connected to child stdout
    // Any error output will be appended to /tmp/error-output.txt

    // loop infinitely until timeout, or process finishes
    for (;;) {
        usleep(100000); // don't consume too many resources

        $stat = proc_get_status($process); // get info on process

        if ($stat['running']) { // still running
            if (time() - $startTime > $timeout) { // check for timeout
                // close descriptors
                fclose($pipes[1]);
                fclose($pipes[0]);

                proc_terminate($process);             // terminate process
                $return_value = proc_close($process); // get return value
                $terminated = true;
                break;
            }
        } else {
            // process finished before timeout
            $output = stream_get_contents($pipes[1]); // get output of command

            // close descriptors
            fclose($pipes[1]);
            fclose($pipes[0]);

            proc_close($process); // close process

            $return_value = $stat['exitcode']; // set exit code
            break;
        }
    }

    if (!$terminated) {
        echo $output;
    }

    echo "command returned $return_value\n";

    if ($terminated) echo "Process was terminated due to long execution\n";
} else {
    echo "Failed to start process!\n";
}
References: proc_open(), proc_close(), proc_get_status(), proc_terminate()
