I'm using proc_open in php to launch a subprocess and send data back and forth.
At some point I'd like to wait for the process to end and retrieve the exit code.
The problem is that if the process has already finished, my call to proc_close returns -1. There is apparently much confusion over what proc_close actually returns, and I haven't found a way to reliably determine the exit code of a process opened with proc_open.
I've tried using proc_get_status, but it seems to also return -1 when the process has already exited.
Update
I can't get proc_get_status to ever give me a valid exit code, no matter how or when it is called. Is it completely broken?
My understanding is that proc_close will never give you a legit exit code.
You can only grab the legit exit code the first time you run proc_get_status after the process has ended. Here's a process class that I stole from the php.net user-contributed notes. The answer to your question is in the is_running() method:
<?php
class process {
public $cmd = '';
private $descriptors = array(
0 => array('pipe', 'r'),
1 => array('pipe', 'w'),
2 => array('pipe', 'w')
);
public $pipes = NULL;
public $desc = '';
private $strt_tm = 0;
public $resource = NULL;
private $exitcode = NULL;
function __construct($cmd = '', $desc = '')
{
$this->cmd = $cmd;
$this->desc = $desc;
$this->resource = proc_open($this->cmd, $this->descriptors, $this->pipes, NULL, $_ENV);
$this->strt_tm = microtime(TRUE);
}
public function is_running()
{
$status = proc_get_status($this->resource);
/**
* proc_get_status will only pull valid exitcode one
* time after process has ended, so cache the exitcode
* if the process is finished and $exitcode is uninitialized
*/
if ($status['running'] === FALSE && $this->exitcode === NULL)
$this->exitcode = $status['exitcode'];
return $status['running'];
}
public function get_exitcode()
{
return $this->exitcode;
}
public function get_elapsed()
{
return microtime(TRUE) - $this->strt_tm;
}
}
Hope this helps.
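A minimal usage sketch of the class above (the command and the polling interval are arbitrary examples):
<?php
// Hypothetical usage: poll until the command exits, then read the cached exit code.
$p = new process('ls -la /tmp');
while ($p->is_running()) {
    usleep(100000); // 100 ms between polls
}
echo 'exit code: ' . $p->get_exitcode() . PHP_EOL;
echo 'elapsed: ' . $p->get_elapsed() . ' seconds' . PHP_EOL;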
I also was getting unexpected results trying to get the return code via proc_get_status, until I realized I was getting the return code of the last command I had executed (I was passing a series of commands to proc_open, separated by ;).
Once I broke the commands into individual proc_open calls, I used the following loop to get the correct return code. Note that normally the code is executing proc_get_status twice, and the correct return code is being returned on the second execution. Also, the code below could be dangerous if the process never terminates. I'm just using it as an example:
$status = proc_get_status($process);
while ($status["running"]) {
sleep(1);
$status = proc_get_status($process);
}
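Once that loop exits, the last proc_get_status() call was the first one made after the process terminated, so (under that assumption) the exit code can be taken from the same $status array:
// $status came from the first call made after termination,
// so its exitcode field is the real one; proc_close()'s return value can be ignored.
$exitCode = $status["exitcode"];
proc_close($process);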
Related
I want to run several commands via exec() but I don't want any output to the screen. I do, however, want to hold on to the output so that I can control verbosity as my script runs.
Here is my class:
<?php
class System
{
public function exec($command, array &$output = [])
{
$returnVar = null;
exec($command, $output, $returnVar);
return $returnVar;
}
}
The problem is, most applications put an irritating amount of irrelevant stuff into stderr, which I don't seem to be able to block. For example, here's the output from running git clone through it:
Cloning into '/tmp/directory'...
remote: Counting objects: 649, done.
remote: Compressing objects: 100% (119/119), done.
remote: Total 649 (delta 64), reused 0 (delta 0), pack-reused 506
Receiving objects: 100% (649/649), 136.33 KiB | 0 bytes/s, done.
Resolving deltas: 100% (288/288), done.
Checking connectivity... done.
I've seen other questions claim that using the output buffer can work; however, it doesn't seem to:
<?php
class System
{
public function exec($command, array &$output = [])
{
$returnVar = null;
ob_start();
exec($command, $output, $returnVar);
ob_end_clean();
return $returnVar;
}
}
This still produces the same results. I can fix the problem by routing stderr to stdout in the command; however, not only does that prevent me from differentiating between stdout and stderr, this application is designed to run on both Windows and Linux, so it quickly becomes an unholy mess.
<?php
class System
{
public function exec($command, array &$output = [])
{
$returnVar = null;
// Is Windows
if (strtoupper(substr(PHP_OS, 0, 3)) === 'WIN') {
exec($command, $output, $returnVar);
return $returnVar;
}
// Is not windows
exec("({$command}) 2>&1", $output, $returnVar);
return $returnVar;
}
}
Is there a way to capture and suppress both stderr and stdout separately?
Update / Answer example
On the advice of @hexasoft in the comments, I updated my method to look like this:
<?php
class System
{
public function exec($command, &$stdOutput = '', &$stdError = '')
{
$process = proc_open(
$command,
[
0 => ['pipe', 'r'],
1 => ['pipe', 'w'],
2 => ['pipe', 'w'],
],
$pipes
);
if (!is_resource($process)) {
throw new \RuntimeException('Could not create a valid process');
}
// This will prevent the program from continuing until the process is complete
// Note: the exit code is captured on the final loop iteration here
$status = proc_get_status($process);
while($status['running']) {
$status = proc_get_status($process);
}
$stdOutput = stream_get_contents($pipes[1]);
$stdError = stream_get_contents($pipes[2]);
proc_close($process);
return $status['exitcode'];
}
}
This technique opens up much more advanced options, including asynchronous processes.
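As a rough sketch of that asynchronous side (not part of the original method; the ping command is just a stand-in), you can set the pipes non-blocking and keep doing other work while the child runs:
<?php
// Sketch: read a child's stdout without blocking the parent.
$process = proc_open(
    'ping -c 3 127.0.0.1', // example command
    [1 => ['pipe', 'w'], 2 => ['pipe', 'w']],
    $pipes
);
stream_set_blocking($pipes[1], false);
stream_set_blocking($pipes[2], false);
$stdout = '';
do {
    $status = proc_get_status($process);
    $stdout .= stream_get_contents($pipes[1]); // returns whatever is available right now
    usleep(50000); // do other useful work here instead of sleeping
} while ($status['running']);
$stdout .= stream_get_contents($pipes[1]); // drain anything left in the pipe
proc_close($process);
echo $stdout;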
The proc_open() function lets you deal with the file descriptors of the executed command, using pipes.
The function is: proc_open ( string $cmd , array $descriptorspec , array &$pipes […optional parameters] ) : resource
You give your command in $cmd (as with exec) and an array describing the file descriptors to "install" for the command. This array is indexed by file descriptor number (0=stdin, 1=stdout…) and each entry contains the type (file, pipe) and the mode (r/w…), plus the filename for the file type.
You then get in $pipes an array of file descriptors, which can be used to read or write (depending on what was requested).
You should not forget to close these descriptors after usage.
Please refer to PHP manual page (and in particular the examples): https://php.net/manual/en/function.proc-open.php
Note that read/write is related to the spawned command, not the PHP script.
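For instance, a descriptor entry of type file sends that stream straight into a file instead of a pipe (the command and file name below are just examples):
<?php
// Sketch: stdout comes back through a pipe, stderr is appended to a log file.
$descriptors = [
    0 => ['pipe', 'r'],                         // stdin of the child
    1 => ['pipe', 'w'],                         // stdout back to us
    2 => ['file', '/tmp/cmd-errors.log', 'a'],  // stderr goes to a file
];
$proc = proc_open('ls /tmp /nonexistent', $descriptors, $pipes);
echo stream_get_contents($pipes[1]);
fclose($pipes[0]);
fclose($pipes[1]);
proc_close($proc);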
I'm currently developing a deployment framework in PHP and have run into some problems regarding threads and streams.
I want to start a process, read its stdout and stderr (separately!), echo it and return the complete content of the streams when the process is terminated.
To get that functionality I'm using two threads, each reading a different stream (stdout|stderr). The problem is that PHP crashes when fgets is called a second time (error code 0x5, error offset 0x000610e7).
After a lot of trial and error I figured out that when I add a dummy array to the run function the crash does not always happen and it works as expected.
Has anyone an idea why this is happening?
I'm using Windows 7, PHP 5.4.22, MSVC9, pthreads 2.0.9
private static $pipeSpecsSilent = array(
0 => array("pipe", "r"), // stdin
1 => array("pipe", "w"), // stdout
2 => array("pipe", "w")); // stderr
public function start()
{
$this->procID = proc_open($this->executable, $this::$pipeSpecsSilent, $this->pipes);
if (is_resource($this->procID))
{
$stdoutThread = new POutputThread($this->pipes[1]);
$stderrThread = new POutputThread($this->pipes[2]);
$stderrThread->start();
$stdoutThread->start();
$stdoutThread->join();
$stderrThread->join();
$stdout = trim($stdoutThread->getStreamValue());
$stderr = trim($stderrThread->getStreamValue());
$this->stop();
return array('stdout' => $stdout, 'stderr' => $stderr);
}
return null;
}
/**
* Closes all pipes and the process handle
*/
private function stop()
{
for ($x = 0; $x < count($this->pipes); $x++)
{
fclose($this->pipes[$x]);
}
$this->resultValue = proc_close($this->procID);
}
class POutputThread extends Thread
{
private $pipe;
private $content;
public function __construct($pipe)
{
$this->pipe = $pipe;
}
public function run()
{
$content = '';
// This line is required, as we get a crash without it.
// It seems like something odd is happening here.
$stackDummy = array('', '');
while (($line = fgets($this->pipe)))
{
PLog::i($line);
$content .= $line;
}
$this->content = $content;
}
/**
* Returns the value of the stream that was read
*
* @return string
*/
public function getStreamValue()
{
return $this->content;
}
}
I found the problem:
Although I'm closing the streams after all threads have terminated, it seems that the stream must be closed inside the thread that was reading from it.
So I replaced the $this->stop(); call with a fclose($this->pipe); inside the run function and everything works just fine.
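A sketch of how run() might look after that change (whether the dummy array is still needed is unclear, so it is left out here):
public function run()
{
    $content = '';
    while (($line = fgets($this->pipe)))
    {
        PLog::i($line);
        $content .= $line;
    }
    // Close the pipe in the same thread that read from it.
    fclose($this->pipe);
    $this->content = $content;
}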
My intention is this.
My client.html calls a php script check.php via ajax. I want check.php to check if another script task.php is already being run. If it is, I do nothing. If it is not, I need to run it in the background.
I have an idea what I want to do, but am unsure how to do it.
Part A. I know how to call check.php via ajax.
Part B. In check.php I might need to run task.php. I think I need something like:
$PID = shell_exec("php task.php > /dev/null & echo $!");
I think the "> /dev/null &" bit tells it to run in the background, but am unsure what the "$!" does.
Part C. The $PID I need as a tag of the process. I need to write this number (or whatever) to a file in the same directory, and need to read it on every call to check.php. I can't work out how to do that. Could someone show me how to read/write a file with a single number into the same directory?
Part D. Then to check if the last launched task.php is still running I am going to use the function:
function is_process_running($PID)
{
exec("ps $PID", $ProcessState);
return(count($ProcessState) >= 2);
}
I think that is all the bits I need, but as you can see I am unsure on how to do a few of them.
I would use an flock() based mechanism to make sure that task.php runs only once.
Use a code like this:
<?php
$fd = fopen('lock.file', 'w+');
// Try to get an exclusive lock. LOCK_NB makes the operation non-blocking;
// if another instance is already running, the else block
// will be entered.
if(flock($fd, LOCK_EX | LOCK_NB )) {
// run your code
sleep(10);
// ...
flock($fd, LOCK_UN);
} else {
echo 'already running';
}
fclose($fd);
Also note that flock() is, as the PHP documentation points out, portable across all supported operating systems.
$!
gives you the PID of the last program started in the background in bash. Like this:
command &
pid=$!
echo $pid
Note that you will have to make sure your php code runs on a system with bash support. (Not windows)
Update (after comment of opener).
flock() will work on all operating systems (as I mentioned). The problem I see in your code when working with Windows is the $! (as I mentioned ;) ..
To obtain the pid of the task.php you should use proc_open() to start task.php. I've prepared two example scripts:
task.php
$fd = fopen('lock.file', 'w+');
// Try to get an exclusive lock. LOCK_NB makes the operation non-blocking;
// if another instance is already running, the else block
// will be entered.
if(flock($fd, LOCK_EX | LOCK_NB )) {
// your task's code comes here
sleep(10);
// ...
flock($fd, LOCK_UN);
echo 'success';
$exitcode = 0;
} else {
echo 'already running';
// return 2 to let check.php know about that
// task.php is already running
$exitcode = 2;
}
fclose($fd);
exit($exitcode);
check.php
$cmd = 'php task.php';
$descriptorspec = array(
0 => array('pipe', 'r'), // STDIN
1 => array('pipe', 'w'), // STDOUT
2 => array('pipe', 'w') // STDERR
);
$pipes = array(); // will be set by proc_open()
// start task.php
$process = proc_open($cmd, $descriptorspec, $pipes);
if(!is_resource($process)) {
die('failed to start task.php');
}
// get output (stdout and stderr)
$output = stream_get_contents($pipes[1]);
$errors = stream_get_contents($pipes[2]);
do {
// get the pid of the child process and its exit code
$status = proc_get_status($process);
} while($status['running'] !== FALSE);
// close the process
proc_close($process);
// get pid and exitcode
$pid = $status['pid'];
$exitcode = $status['exitcode'];
// handle exit code
switch($exitcode) {
case 0:
echo 'Task.php has been executed with PID: ' . $pid
. '. The output was: ' . $output;
break;
case 1:
echo 'Task.php has been executed with errors: ' . $output;
break;
case 2:
echo 'Cannot execute task.php. Another instance is running';
break;
default:
echo 'Unknown error: ' . $output;
}
You asked me why my flock() solution is the best. It's just because the other answer will not reliably make sure that task.php runs only once, due to the race condition I've mentioned in the comments below that answer: between the is_file() check and the file_put_contents() call, a second process can pass the same check.
You can realize it, using lock file:
if(is_file(__DIR__.'/work.lock'))
{
die('Script already run.');
}
else
{
file_put_contents(__DIR__.'/work.lock', '');
// YOUR CODE
unlink(__DIR__.'/work.lock');
}
Too bad I didn't see this before it was accepted..
I have written a class to do just this (using file locking and PID / process ID checking), on both Windows and Linux.
https://github.com/ArtisticPhoenix/MISC/blob/master/ProcLock.php
I think you are really overdoing it with all the processes and background checks. If you run a PHP script without a session, then you are already essentially running it in the background, because it will not block any other request from the user. So make sure you don't call session_start();
Then the next step would be to run it even when the user cancels the request, which is a basic function in PHP. ignore_user_abort
The last check is to make sure it only runs once, which can easily be done by creating a file, since PHP doesn't have an easy application scope.
Combined:
<?php
// Ignore user aborts and allow the script
// to run forever
ignore_user_abort(true);
set_time_limit(0);
$checkfile = "./runningtask.tmp";
//LOCK_EX basically uses flock() to prevent a race condition, in contrast to a regular file open.
if(file_put_contents($checkfile, "running", LOCK_EX)===false) {
exit();
}
function Cleanup() {
global $checkfile;
unlink($checkfile);
}
/*
actual code for task.php
*/
//run cleanup when you're done; make sure you also call it if you exit the code anywhere else
Cleanup();
?>
In your javascript you can now call the task.php directly and cancel the request when the connection to the server has been established.
<script>
function Request(url){
if (window.XMLHttpRequest) { // Mozilla, Safari, ...
httpRequest = new XMLHttpRequest();
} else if (window.ActiveXObject) { // IE
httpRequest = new ActiveXObject("Microsoft.XMLHTTP");
} else{
return false;
}
httpRequest.onreadystatechange = function(){
if (httpRequest.readyState == 1) {
//task started, exit
httpRequest.abort();
}
};
httpRequest.open('GET', url, true);
httpRequest.send(null);
}
//call Request("task.php"); whenever you want.
</script>
Bonus points: You can have the actual code for task.php write occasional updates to $checkfile to have a sense of what is going on. Then you can have another ajax file read the content and show the status to the user.
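For instance, inside the long-running part of task.php you could periodically overwrite the file ($items and process_item() are made-up placeholders for the real work):
foreach ($items as $i => $item) {
    process_item($item); // one unit of the actual work
    if ($i % 100 === 0) {
        // replace the file contents with a short progress note
        file_put_contents($checkfile, "processed {$i} items", LOCK_EX);
    }
}
A separate status.php polled by ajax could then simply echo file_get_contents("./runningtask.tmp");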
Let's make the whole process from B to D simple.
Step B-D:
$rslt =array(); // output from first exec
$output = array(); // output of task.php execution
//Check if any process by the name 'task.php' is running
exec("ps -auxf | grep 'task.php' | grep -v 'grep'",$rslt);
if(count($rslt)==0) // if none,
exec('php task.php',$output); // run the task,
Explanation:
ps -auxf --> gets all running processes with details
grep 'task.php' --> filter the process by 'task.php' keyword
grep -v 'grep' --> filters the grep process out
NB:
It's also advisable to put the same check in the task.php file.
If task.php is executed directly via httpd (the web server), it will only show up as an httpd process and cannot be identified by the 'ps' command.
It wouldn't work in a load-balanced environment. [Edited: 17Jul17]
You can get an exclusive lock on the script itself for the duration of the script running
Any other attempts to run it will end as soon as the lock() function is invoked.
//try to set a global exclusive lock on the file invoking this function and die if not successful
function lock(){
$file = isset($_SERVER['SCRIPT_FILENAME'])?
realpath($_SERVER['SCRIPT_FILENAME']):
(isset($_SERVER['PHP_SELF'])?realpath($_SERVER['PHP_SELF']):false);
if($file && file_exists($file)){
//global handle stays alive for the duration if this script running
global $$file;
if(!isset($$file)){$$file = fopen($file,'r');}
if(!flock($$file, LOCK_EX|LOCK_NB)){
echo 'This script is already running.'."\n";
die;
}
}
}
Test
Run this in one shell and try to run it in another while it is waiting for input.
lock();
//this will pause execution until you press enter
echo '...continue? [enter]';
$handle = fopen("php://stdin","r");
$line = fgets($handle);
fclose($handle);
I have a function that needs to go over around 20K rows from an array, and apply an external script to each. This is a slow process, as PHP is waiting for the script to be executed before continuing with the next row.
In order to make this process faster I was thinking on running the function in different parts, at the same time. So, for example, rows 0 to 2000 as one function, 2001 to 4000 on another one, and so on. How can I do this in a neat way? I could make different cron jobs, one for each function with different params: myFunction(0, 2000), then another cron job with myFunction(2001, 4000), etc. but that doesn't seem too clean. What's a good way of doing this?
If you'd like to execute parallel tasks in PHP, I would consider using Gearman. Another approach would be to use pcntl_fork(), but I'd prefer actual workers when it's task based.
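A rough sketch of the Gearman approach (the server address, function name, chunk size and $allRows are all placeholders), with the worker started in as many copies as you want parallel jobs:
<?php
// worker.php - run several instances of this in parallel
$worker = new GearmanWorker();
$worker->addServer('127.0.0.1');
$worker->addFunction('process_rows', function (GearmanJob $job) {
    $rows = json_decode($job->workload(), true);
    foreach ($rows as $row) {
        // run the external script for this row here
    }
});
while ($worker->work());

// client.php - split the ~20K rows into chunks and hand them out
$client = new GearmanClient();
$client->addServer('127.0.0.1');
foreach (array_chunk($allRows, 2000) as $chunk) {
    $client->addTaskBackground('process_rows', json_encode($chunk));
}
$client->runTasks();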
The only waiting time you suffer is between getting the data and processing the data. Processing the data is actually completely blocking anyway (you just simply have to wait for it). You will not likely gain any benefits past increasing the number of processes to the number of cores that you have. Basically I think this means the number of processes is small so scheduling the execution of 2-8 processes doesn't sound that hideous. If you are worried about not being able to process data while retrieving data, you could in theory get your data from the database in small blocks, and then distribute the processing load between a few processes, one for each core.
I think I align more with the forking child processes approach for actually running the processing threads. There is a brilliant demonstration in the comments on the pcntl_fork doc page showing an implementation of a job daemon class
http://php.net/manual/en/function.pcntl-fork.php
<?php
declare(ticks=1);
//A very basic job daemon that you can extend to your needs.
class JobDaemon{
public $maxProcesses = 25;
protected $jobsStarted = 0;
protected $currentJobs = array();
protected $signalQueue=array();
protected $parentPID;
public function __construct(){
echo "constructed \n";
$this->parentPID = getmypid();
pcntl_signal(SIGCHLD, array($this, "childSignalHandler"));
}
/**
* Run the Daemon
*/
public function run(){
echo "Running \n";
for($i=0; $i<10000; $i++){
$jobID = rand(0,10000000000000);
while(count($this->currentJobs) >= $this->maxProcesses){
echo "Maximum children allowed, waiting...\n";
sleep(1);
}
$launched = $this->launchJob($jobID);
}
//Wait for child processes to finish before exiting here
while(count($this->currentJobs)){
echo "Waiting for current jobs to finish... \n";
sleep(1);
}
}
/**
* Launch a job from the job queue
*/
protected function launchJob($jobID){
$pid = pcntl_fork();
if($pid == -1){
//Problem launching the job
error_log('Could not launch new job, exiting');
return false;
}
else if ($pid){
// Parent process
// Sometimes you can receive a signal to the childSignalHandler function before this code executes if
// the child script executes quickly enough!
//
$this->currentJobs[$pid] = $jobID;
// In the event that a signal for this pid was caught before we get here, it will be in our signalQueue array
// So let's go ahead and process it now as if we'd just received the signal
if(isset($this->signalQueue[$pid])){
echo "found $pid in the signal queue, processing it now \n";
$this->childSignalHandler(SIGCHLD, $pid, $this->signalQueue[$pid]);
unset($this->signalQueue[$pid]);
}
}
else{
//Forked child, do your deeds....
$exitStatus = 0; //Error code if you need to or whatever
echo "Doing something fun in pid ".getmypid()."\n";
exit($exitStatus);
}
return true;
}
public function childSignalHandler($signo, $pid=null, $status=null){
//If no pid is provided, that means we're getting the signal from the system. Let's figure out
//which child process ended
if(!$pid){
$pid = pcntl_waitpid(-1, $status, WNOHANG);
}
//Make sure we get all of the exited children
while($pid > 0){
if($pid && isset($this->currentJobs[$pid])){
$exitCode = pcntl_wexitstatus($status);
if($exitCode != 0){
echo "$pid exited with status ".$exitCode."\n";
}
unset($this->currentJobs[$pid]);
}
else if($pid){
//Oh no, our job has finished before this parent process could even note that it had been launched!
//Let's make note of it and handle it when the parent process is ready for it
echo "..... Adding $pid to the signal queue ..... \n";
$this->signalQueue[$pid] = $status;
}
$pid = pcntl_waitpid(-1, $status, WNOHANG);
}
return true;
}
}
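If it helps, driving the class is then just a matter of instantiating it and calling run(); the per-row work from the question would go where the "Doing something fun" line is:
$daemon = new JobDaemon();
$daemon->maxProcesses = 8; // e.g. roughly the number of CPU cores
$daemon->run();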
You can use "pthreads".
It is very easy to install and works great on Windows.
download from here -> http://windows.php.net/downloads/pecl/releases/pthreads/2.0.4/
Extract the zip file and then
move the file 'php_pthreads.dll' to php\ext\ directory.
move the file 'pthreadVC2.dll' to php\ directory.
then add this line in your 'php.ini' file:
extension=php_pthreads.dll
save the file.
you're done :-)
now lets see example of how to use it:
class ChildThread extends Thread {
public $data;
public function run() {
/* Do some expensive work */
$this->data = 'result of expensive work';
}
}
$thread = new ChildThread();
if ($thread->start()) {
/*
* Do some expensive work, while already doing other
* work in the child thread.
*/
// wait until thread is finished
$thread->join();
// we can now even access $thread->data
}
for more information about PTHREADS read php docs here:
PHP DOCS PTHREADS
If you're using WAMP like me, then you should add 'pthreadVC2.dll' into
\wamp\bin\apache\ApacheX.X.X\bin
and also edit the 'php.ini' file (same path) and add the same line as before
extension=php_pthreads.dll
GOOD LUCK!
What you are looking for is parallel, which is a succinct concurrency API for PHP 7.2+.
$runtime = new \parallel\Runtime();
$future = $runtime->run(function() {
for ($i = 0; $i < 500; $i++) {
echo "*";
}
return "easy";
});
for ($i = 0; $i < 500; $i++) {
echo ".";
}
printf("\nUsing \\parallel\\Runtime is %s\n", $future->value());
Output:
.*.*.*.*.*.*.*.*.*.*.*.*.*.*.*.*.*..*.*.*.*.*.*.*.*.*.*.*.*.*.*.*.*.*.*.*.*.*.*.*.*
Using \parallel\Runtime is easy
Have a look at pcntl_fork. This allows you to spawn child processes which can then do the separate work that you need.
Not sure if this is a solution for your situation, but you can redirect the output of system calls to a file, and then PHP will not wait until the program is finished. Although this may result in overloading your server.
http://www.php.net/manual/en/function.exec.php - If a program is started with this function, in order for it to continue running in the background, the output of the program must be redirected to a file or another output stream. Failing to do so will cause PHP to hang until the execution of the program ends.
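So something along these lines (the paths are placeholders) returns immediately while task.php keeps running:
// Fire and forget: redirect both streams so exec() does not wait.
exec('php task.php > /tmp/task.log 2>&1 &');
// or discard the output entirely:
exec('php task.php > /dev/null 2>&1 &');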
There's Guzzle with its concurrent requests
use GuzzleHttp\Client;
use GuzzleHttp\Promise;
$client = new Client(['base_uri' => 'http://httpbin.org/']);
$promises = [
'image' => $client->getAsync('/image'),
'png' => $client->getAsync('/image/png'),
'jpeg' => $client->getAsync('/image/jpeg'),
'webp' => $client->getAsync('/image/webp')
];
$responses = Promise\Utils::unwrap($promises);
There's the overhead of promises; but more importantly Guzzle only works with HTTP requests and it works with version 7+ and frameworks like Laravel.
Is there a way you can abort a block of code if it's taking too long in PHP? Perhaps something like:
//Set the max time to 2 seconds
$time = new TimeOut(2);
$time->startTime();
sleep(3);
$time->endTime();
if ($time->timeExpired()){
echo 'This function took too long to execute and was aborted.';
}
It doesn't have to be exactly like above, but are there any native PHP functions or classes that do something like this?
Edit: Ben Lee's answer with pcntl_fork would be the perfect solution except that it's not available for Windows. Is there any other way to accomplish this with PHP that works for Windows and Linux, but doesn't require an external library?
Edit 2: XzKto's solution works in some cases, but not consistently and I can't seem to catch the exception, no matter what I try. The use case is detecting a timeout for a unit test. If the test times out, I want to terminate it and then move on to the next test.
You can do this by forking the process, and then using the parent process to monitor the child process. pcntl_fork is a method that forks the process, so you have two nearly identical programs in memory running in parallel. The only difference is that in one process, the parent, pcntl_fork returns a positive integer which corresponds to the process id of the child process. And in the other process, the child, pcntl_fork returns 0.
Here's an example:
$pid = pcntl_fork();
if ($pid == 0) {
// this is the child process
} else {
// this is the parent process, and we know the child process id is in $pid
}
That's the basic structure. Next step is to add a process expiration. Your stuff will run in the child process, and the parent process will be responsible only for monitoring and timing the child process. But in order for one process (the parent) to kill another (the child), there needs to be a signal. Signals are how processes communicate, and the signal that means "you should end immediately" is SIGKILL. You can send this signal using posix_kill. So the parent should just wait 2 seconds then kill the child, like so:
$pid = pcntl_fork();
if ($pid == 0) {
// this is the child process
// run your potentially time-consuming method
} else {
// this is the parent process, and we know the child process id is in $pid
sleep(2); // wait 2 seconds
posix_kill($pid, SIGKILL); // then kill the child
}
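One possible refinement (not part of the snippet above): instead of always sleeping the full two seconds, the parent can poll pcntl_waitpid() with WNOHANG and only kill the child if it is still alive when the deadline passes:
$pid = pcntl_fork();
if ($pid == 0) {
    // child: run the potentially slow code
} else {
    // parent: poll for up to 2 seconds, then kill only if still running
    $deadline = time() + 2;
    do {
        $res = pcntl_waitpid($pid, $status, WNOHANG); // 0 means the child is still running
        if ($res === $pid) {
            break; // child finished on its own
        }
        usleep(100000);
    } while (time() < $deadline);
    if ($res !== $pid) {
        posix_kill($pid, SIGKILL);    // deadline passed, kill it
        pcntl_waitpid($pid, $status); // reap the child
    }
}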
You can't really do that if your script pauses on one command (for example sleep()), short of forking, but there are a lot of workarounds for special cases: asynchronous queries if your program pauses on a DB query, proc_open if your program pauses at some external execution, etc. Unfortunately they are all different, so there is no general solution.
If your script waits on a long loop/many lines of code you can do a dirty trick like this:
declare(ticks=1);
class Timouter {
private static $start_time = false,
$timeout;
public static function start($timeout) {
self::$start_time = microtime(true);
self::$timeout = (float) $timeout;
register_tick_function(array('Timouter', 'tick'));
}
public static function end() {
unregister_tick_function(array('Timouter', 'tick'));
}
public static function tick() {
if ((microtime(true) - self::$start_time) > self::$timeout)
throw new Exception;
}
}
//Main code
try {
//Start timeout
Timouter::start(3);
//Some long code to execute that you want to set timeout for.
while (1);
} catch (Exception $e) {
Timouter::end();
echo "Timeouted!";
}
but I don't think it is very good. If you specify the exact case I think we can help you better.
This is an old question, and has probably been solved many times by now, but for people looking for an easy way to solve this problem, there is a library now: PHP Invoker.
You can use the declare construct together with a tick function to check whether the execution time has exceeded the limit. http://www.php.net/manual/en/control-structures.declare.php
Here is a code example of how to use it:
define("MAX_EXECUTION_TIME", 2); # seconds
$timeline = time() + MAX_EXECUTION_TIME;
function check_timeout()
{
if( time() < $GLOBALS['timeline'] ) return;
# timeout reached:
print "Timeout!".PHP_EOL;
exit;
}
register_tick_function("check_timeout");
$data = "";
declare( ticks=1 ){
# here the process that might require long execution time
sleep(5); // Comment this line to see this data text
$data = "Long process result".PHP_EOL;
}
# Ok, process completed, output the result:
print $data;
With this code you will see the timeout message.
If you want to get the Long process result inside the declare block you can just remove the sleep(5) line or increase the Max Execution Time declared at the start of the script
What about set_time_limit(), if you are not in safe mode?
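That is, something like the line below; keep in mind that on Linux, time spent in sleep() or waiting on external programs does not count towards this limit:
set_time_limit(2); // fatal error once the script itself has executed for ~2 seconds
// ...code that might run away...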
Cooked this up in about two minutes, I forgot to call $time->startTime(); so I don't really know exactly how long it took ;)
class TimeOut{
public function __construct($time=0)
{
$this->limit = $time;
}
public function startTime()
{
$this->old = microtime(true);
}
public function checkTime()
{
$this->new = microtime(true);
}
public function timeExpired()
{
$this->checkTime();
return ($this->new - $this->old > $this->limit);
}
}
And the demo.
I don't really get what your endTime() call does, so I made checkTime() instead, which serves no real purpose other than to update the internal values. timeExpired() calls it automatically, because it would sure stink if you forgot to call checkTime() and it was using the old times.
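For what it's worth, a quick demo call (mirroring the pseudo-code from the question) would be:
$time = new TimeOut(2); // max 2 seconds
$time->startTime();
sleep(3);
if ($time->timeExpired()) {
    echo 'This function took too long to execute.';
}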
You can also put the pausing code into a second script and execute it via a curl call with a timeout set. The other obvious solution is to fix the cause of the pause.
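A rough sketch of that curl variant (the URL is a placeholder; the slow code would live in that second script):
// Call the second script over HTTP and give up after 2 seconds.
$ch = curl_init('http://localhost/slow.php'); // hypothetical endpoint
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_TIMEOUT, 2);         // hard limit in seconds
$result = curl_exec($ch);
if ($result === false) {
    // curl_errno() will be 28 (operation timed out) if the limit was hit
    echo 'The code took too long to execute and was aborted.';
}
curl_close($ch);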
Here is my way to do that. Thanks to others answers:
<?php
class Timeouter
{
private static $start_time = FALSE, $timeout;
/**
* @param integer $seconds Time in seconds
* @param null $error_msg
*/
public static function limit($seconds, $error_msg = NULL)
: void
{
self::$start_time = microtime(TRUE);
self::$timeout = (float) $seconds;
register_tick_function([ self::class, 'tick' ], $error_msg);
}
public static function end()
: void
{
unregister_tick_function([ self::class, 'tick' ]);
}
public static function tick($error)
: void
{
if ((microtime(TRUE) - self::$start_time) > self::$timeout) {
throw new \RuntimeException($error ?? 'Your code took too much time.');
}
}
public static function step()
: void
{
usleep(1);
}
}
Then you can try like this:
<?php
try {
//Start timeout
Timeouter::limit(2, 'Your code is heavy. Sorry.');
//Some long code to execute that you want to set timeout for.
declare(ticks=1) {
foreach (range(1, 100000) as $x) {
Timeouter::step(); // Not always necessary
echo $x . "-";
}
}
Timeouter::end();
} catch (Exception $e) {
Timeouter::end();
echo $e->getMessage(); // 'Your code is heavy. Sorry.'
}
I made a script in PHP using pcntl_fork and a lock file to control the execution of external calls, killing them after the timeout.
#!/usr/bin/env php
<?php
if(count($argv)<4){
print "\n\n\n";
print "./fork.php PATH \"COMMAND\" TIMEOUT\n"; // TIMEOUT IN SECS
print "Example:\n";
print "./fork.php /root/ \"php run.php\" 20";
print "\n\n\n";
die;
}
$PATH = $argv[1];
$LOCKFILE = $argv[1].$argv[2].".lock";
$TIMEOUT = (int)$argv[3];
$RUN = $argv[2];
chdir($PATH);
$fp = fopen($LOCKFILE,"w");
if (!flock($fp, LOCK_EX | LOCK_NB)) {
print "Already Running\n";
exit();
}
$tasks = [
"kill",
"run",
];
function killChilds($pid,$signal) {
exec("ps -ef| awk '\$3 == '$pid' { print \$2 }'", $output, $ret);
if($ret) return 'you need ps, grep, and awk';
foreach($output as $t) {
if ( $t != $pid && $t != posix_getpid()) {
posix_kill($t, $signal);
}
}
}
$pidmaster = getmypid();
print "Add PID: ".(string)$pidmaster." MASTER\n";
foreach ($tasks as $task) {
$pid = pcntl_fork();
$pidslave = posix_getpid();
if($pidslave != $pidmaster){
print "Add PID: ".(string)$pidslave." ".strtoupper($task)."\n";
}
if ($pid == -1) {
exit("Error forking...\n");
}
else if ($pid == 0) {
execute_task($task);
exit();
}
}
while(pcntl_waitpid(0, $status) != -1);
echo "Do stuff after all parallel execution is complete.\n";
unlink($LOCKFILE);
function execute_task($task_id) {
global $pidmaster;
global $TIMEOUT;
global $RUN;
if($task_id=='kill'){
print("SET TIMEOUT = ". (string)$TIMEOUT."\n");
sleep($TIMEOUT);
print("FINISHED BY TIMEOUT: ". (string)$TIMEOUT."\n");
killChilds($pidmaster,SIGTERM);
die;
}elseif($task_id=='run'){
###############################################
### START EXECUTION CODE OR EXTERNAL SCRIPT ###
###############################################
system($RUN);
################################
### END ###
################################
killChilds($pidmaster,SIGTERM);
die;
}
}
Test Script run.php
<?php
$i=0;
while($i<25){
print "test... $i\n";
$i++;
sleep(1);
}