Get all user processes via script run by cron job - php

I have a cron job that runs a PHP script every minute and executes processchecker.php. The script processchecker.php then checks the user's processes for one whose command line contains the filename backgroundprocess.php.
This works perfectly if I trigger these files manually by visiting their respective URLs.
The problem comes when I automate the process as a cron job, which for some reason does not see the processes I am looking for. Cron jobs seem to run with no user account, and I suspect I need a way of listing all processes, especially those started by the cron job itself.
processchecker.php
<?php
exec("ps aux", $output, $result);
$found = 0;
foreach ($output as $line) {
    // strpos() can return 0 (match at position zero), so compare against false
    if (strpos($line, "backgroundprocess.php") !== false) {
        $found++;
    }
}
if ($found == 0) {
    // service not running, start it all over again
    if (!$pid = shell_exec("nohup php backgroundprocess.php > /dev/null 2>&1 & echo $!")) {
        return false;
    }
} else {
    // service is already running
}
?>
From what I am seeing, exec("ps aux", $output, $result); does not fetch the processes started outside the cron job's own session, so my background process gets started over and over. (The output below, with PIDs starting at 1 under jailshell, suggests cPanel runs the cron job in a jail with a restricted view of /proc, so only that jail's processes are listed.)
Please note, all this is on a remote VPS and I am using cPanel.
EDIT
Result is 0
Output is
Array
(
    [0] => USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
    [1] => rycjptbb 1 0.0 0.0 6324 600 ? SN 14:51 0:00 jailshell (rycjptbb) [init] ell -c nohup php public_html/processchecker.php > /dev/null & echo $!
    [2] => rycjptbb 3 1.0 0.0 248940 12308 ? SN 14:51 0:00 php public_html/processchecker.php
    [3] => rycjptbb 4 0.0 0.0 110236 1112 ? RN 14:51 0:00 ps aux
)
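If the jailed ps output can't be trusted, a lock-based check sidesteps process-list parsing altogether. A minimal sketch using util-linux's flock(1); the lock path is an example, and `sleep 300` stands in for the long-running backgroundprocess.php:

```shell
#!/bin/sh
# flock -n exits non-zero immediately if the lock is already held, so
# running the worker under the lock guarantees a single instance even
# when ps output is jailed.
flock -n /tmp/bgprocess.lock sleep 300 &
WORKER=$!
sleep 1
if flock -n /tmp/bgprocess.lock true; then
    echo "not running; safe to start"
else
    echo "already running"
fi
kill "$WORKER"
```

The cron entry would then run the worker through flock instead of checking first and starting second (which is racy anyway).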

From what I understand, you just want to check that no other instance of the same process is currently running.
For example, if process1.php runs every minute but a run can take, say, 3 minutes, you don't want another instance to start in the second minute.
If that is your case, then you can check whether another process with the same name exists like this:
function is_other_process_exists()
{
    $my_name = 'xxx'; // your process name, e.g. process1.php
    $ps = `ps gax`;
    $psLines = explode("\n", $ps);
    array_pop($psLines);
    $myLines = array();
    foreach ($psLines as $psLine) {
        if (strstr($psLine, $my_name)) {
            $myLines[] = $psLine;
        }
    }
    // More than one match means another instance besides this one
    if (count($myLines) > 1) {
        $myPid = posix_getpid();
        echo "process is already running with process id {$myPid}";
        exit;
    }
}
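The same check can be done without parsing `ps gax` by hand: pgrep's -f flag matches the pattern against the full command line and -c prints the match count. A sketch, with `sleep 300` standing in for the PHP script name:

```shell
#!/bin/sh
# The [0] bracket keeps the pattern from matching this script's own
# command line (the same trick as "grep [f]oo").
sleep 300 &
BG=$!
COUNT=$(pgrep -c -f 'sleep 30[0]')
if [ "$COUNT" -ge 1 ]; then
    echo "already running ($COUNT instance(s))"
fi
kill "$BG"
```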

Related

nohup process keeps shutting down

I am trying to run 10,000 processes to create Asterisk phone accounts.
This is to stress-test our Asterisk server.
From PHP I call exec() to run a Linux command:
nohup /usr/src/pjproject-2.3/pjsip-apps/bin/pjsua-x86_64-unknown-linux-gnu --id=sip:%s@13.113.163.3 --registrar=sip:127.0.0.1:25060 --realm=* --username=%s --password=123456 --local-port=%s --null-audio --no-vad --max-calls=32 --no-tcp >>/dev/null 2>>/dev/null & $(echo -ne \'\r\')"
Everything works perfectly and the script does exactly what I expect.
But here comes the next problem: after creating the 10,000 accounts, the processes suddenly all get killed.
Why is this?
Isn't nohup supposed to keep the processes alive?
After calling nohup I am also calling disown.
Thank you for the help
[edit]
I also tried this project with screen. The screen approach works like a charm, but the problem is CPU usage: creating 10,000 screens makes a Linux server go nuts, which is why I chose nohup.
The full php code:
<?php
# screen counter
$count_screens = 0;
# starting local port
$port_count = 30000;
# starting account/extension number
$ext_number = 1000;
# how many times the loop should run
$min_accounts = 0;
$max_accounts = 1000;

class shell {
    const CREATE_SESSION = 'screen -dmS stress[%s]';
    const RUN_PJSUA = 'screen -S stress[%s] -p 0 -rX stuff "nohup /usr/src/pjproject-2.3/pjsip-apps/bin/pjsua-x86_64-unknown-linux-gnu --id=sip:%s@13.113.163.3 --registrar=sip:127.0.0.1:25060 --realm=* --username=%s --password=123456 --local-port=%s --null-audio --no-vad --max-calls=32 --no-tcp >>/dev/null 2>>/dev/null &"';
    const DISOWN_PJSUA = 'screen -S stress[%s] -p 0 -rX stuff "disown -l $(echo -ne \'\r\')"';

    public function openShell($count_screens) {
        # create a new screen session to make the next call in
        $command = sprintf(static::CREATE_SESSION, $count_screens);
        $string = exec($command);
        return var_dump($string);
    }

    public function runPJSUA($count_screens, $ext_number, $port_count) {
        # register a new pjsua client (the extension number fills both --id and --username)
        $command = sprintf(static::RUN_PJSUA, $count_screens, $ext_number, $ext_number, $port_count);
        $string = exec($command);
        usleep(20000);
        return var_dump($string);
    }

    public function disownPJSUA($count_screens) {
        # detach the pjsua client from the screen session
        $command = sprintf(static::DISOWN_PJSUA, $count_screens);
        $string = exec($command);
        return var_dump($string);
    }
}

while ($min_accounts < $max_accounts) {
    $shell = new shell();
    if ($count_screens == 0) {
        $count_screens++;
        echo $shell->openShell($count_screens);
    } else {
        $count_screens = 1;
    }
    $port_count++;
    $ext_number++;
    $min_accounts++;
    echo $shell->runPJSUA($count_screens, $ext_number, $port_count);
    echo $shell->disownPJSUA($count_screens);
}
?>
pjsua is a relatively heavy application, definitely too heavy to run 10,000 instances of, and not intended for this kind of testing. As you are configuring it for 32 calls, even running out of ports would be a problem (two RTP ports are reserved per call, plus a port for SIP). If you want to stay with pjsua, you might at least optimize the test by configuring multiple accounts per pjsua instance. Command-line length is a limit, but ~30 accounts per instance might work.

PHP running multiple scripts concurrently

I have an array of server objects like this:
Array
(
    [0] => (
        [id] => 1
        [version] => 1
        [server_addr] => 192.168.5.210
        [server_name] => server1
    )
    [1] => (
        [id] => 2
        [server_addr] => 192.168.5.211
        [server_name] => server2
    )
)
By running the code below, I'm able to get the desired output
foreach ($model as $server) {
    $cpu_usage = shell_exec('sudo path/to/total_cpu_usage.sh '.$server->server_addr);
    $memory_usage = shell_exec('sudo path/to/total_memory_usage.sh '.$server->server_addr);
    $disk_space = shell_exec('sudo path/to/disk_space.sh '.$server->server_addr);
    $inode_space = shell_exec('sudo path/to/inode_space.sh '.$server->server_addr);
    $network = shell_exec('sudo path/to/network.sh '.$server->server_addr);
    exec('sudo path/to/process.sh '.$server->server_addr, $processString);
    $processArray = array();
    foreach ($processString as $i) {
        $row = explode(" ", preg_replace('/\s+/', ' ', $i));
        array_push($processArray, $row);
    }
    $datetime = shell_exec('sudo path/to/datetime.sh '.$server->server_addr);
    echo $cpu_usage;
    echo $memory_usage;
    echo $disk_space;
    ......
}
My scripts all look similar to this:
#!/bin/bash
if [ "$1" == "" ]
then
    echo "To start monitor, please provide the server ip:"
    read IP
else
    IP=$1
fi
ssh root@$IP "date"
But the whole process takes about 10 seconds for 5 servers, compared to less than 2 seconds for 1 server. Why is that? Is there any way to lessen the time? My guess is that exec waits for the output to be assigned to the variable before moving to the next loop iteration. I tried to google a bit, but most answers are for commands that don't return any output at all... I need the output, though.
You can run your scripts simultaneously with popen() and grab the output later with fread().
//execute
foreach ($model as $server) {
    $server->handles = [
        popen('sudo path/to/total_cpu_usage.sh '.$server->server_addr, 'r'),
        popen('sudo path/to/total_memory_usage.sh '.$server->server_addr, 'r'),
        popen('sudo path/to/disk_space.sh '.$server->server_addr, 'r'),
        popen('sudo path/to/inode_space.sh '.$server->server_addr, 'r'),
        popen('sudo path/to/network.sh '.$server->server_addr, 'r'),
    ];
}
//grab and store the output, then close the handles
foreach ($model as $server) {
    $server->cpu_usage = fread($server->handles[0], 4096);
    $server->mem_usage = fread($server->handles[1], 4096);
    $server->disk_space = fread($server->handles[2], 4096);
    $server->inode_space = fread($server->handles[3], 4096);
    $server->network = fread($server->handles[4], 4096);
    foreach ($server->handles as $h) pclose($h);
}
//print everything
print_r($model);
I tested similar code that executes 5 scripts which each sleep for 2 seconds, and the whole thing took only 2.12 seconds instead of 10.49 seconds with shell_exec().
Update 1: Big thanks to Markus AO for pointing out an optimization potential.
Update 2: Modified the code to remove the possibility of overwrite.
The results are now inside $model.
This can also show which server refused the connection, in case that issue about sshd is affecting you.
All you need to do on Linux is add > /dev/null & at the end. You won't get the output, though, but it will run as a background (async) process.
shell_exec('sudo path/to/datetime.sh '.$server->server_addr.' > /dev/null &');
See also this background-process script from my GitHub (it has Windows-compatible background processes):
https://github.com/ArtisticPhoenix/MISC/blob/master/BgProcess.php
Cheers!
I don't know how to make your logic faster, but I can tell you how I track run time in my scripts. At the beginning of the script put $start = date('c'); and at the end simply echo ' start='.$start; echo ' end='.date('c');
Yes you're correct: your PHP script is waiting for each response before moving onward.
I presume you're hoping to query all servers simultaneously instead of waiting for each one to respond. In that case, assuming you're running a thread-safe build of PHP, look into pthreads. One option is to use cURL multi-exec for making asynchronous requests. There's also pcntl_fork that may help you out. Also see this & this thread for possible thread/async approaches.
Aside from that, test and benchmark the shell scripts individually to see where the bottlenecks are and whether you can speed them up. That may be easier than thread/async setups in PHP. If your issue is network latency, write an aggregator shell script that executes the other scripts and returns the results in one request, and call only that from your PHP script.
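The aggregator idea can be sketched in shell: launch each check in the background and wait for all of them, so total wall time is roughly the slowest check rather than the sum. The sleep/echo pairs below are placeholders for the real monitoring scripts:

```shell
#!/bin/sh
# Two stand-in "checks" that each take ~2s run concurrently; the
# aggregate completes in ~2s instead of ~4s.
start=$(date +%s)
cpu_out=$(mktemp); mem_out=$(mktemp)
( sleep 2; echo "cpu: 42%"  ) > "$cpu_out" &
( sleep 2; echo "mem: 1.2G" ) > "$mem_out" &
wait
cat "$cpu_out" "$mem_out"
rm -f "$cpu_out" "$mem_out"
echo "elapsed: $(( $(date +%s) - start ))s"
```

Writing the results to temp files and printing only after `wait` keeps the concurrent output from interleaving.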

How to limit 1 instance of a PHP script, what's wrong with my solution

<?php
define('TEMPPATH', '/tmp/');
$fp = fopen(TEMPPATH.'abc.php.lock', 'a+');
if (!flock($fp, LOCK_EX | LOCK_NB)) {
    exit;
}
I use the above code to keep the script from running multiple instances, and it was OK when I tested it, but today I logged in to my server (CentOS 6) and saw two instances running. How could that happen?
[root@server user]# ps aux | grep abc
root 21061 0.0 0.1 103284 2016 pts/0 S+ 15:45 0:00 grep abc
user 22560 0.0 1.2 154608 12788 ? Ss Nov10 1:35 /usr/bin/php /path/abc.php
user 25106 0.0 1.3 154896 13336 ? Ss Nov06 2:51 /usr/bin/php /path/abc.php
The script is started from a crontab job that tries to run it every minute. Is there a better way to do this?
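One plausible way two instances can coexist despite LOCK_EX | LOCK_NB: flock locks live on the inode, so if the lock file is deleted while held (CentOS ships tmpwatch, which periodically cleans /tmp), a later run creates a fresh file with a fresh inode and locks it successfully. A shell demonstration of the effect (file name is an example):

```shell
#!/bin/sh
# The holder locks the file and sleeps; the file is then removed (as a
# /tmp cleaner might do); a second locker succeeds on the new inode.
LOCK=/tmp/abc.php.lock.demo
( flock -n 9 && sleep 3 ) 9>"$LOCK" &
sleep 1
rm -f "$LOCK"
( flock -n 9 && echo "second instance got the lock" ) 9>"$LOCK"
wait
```

Keeping the lock file somewhere that is not periodically cleaned (e.g. /var/lock) avoids this failure mode.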
Very elaborate instance control :).
Take a look at mine:
function par_processes($name = null, $ownPids = array(), $dbg = false)
{
    exec("ps x | grep $name", $ProcessState);
    $modifier = 0;
    foreach ($ProcessState as $idp => $procs) {
        if (preg_match("/grep/i", $procs)) {
            // drop the grep process itself
            unset($ProcessState[$idp]);
        } elseif (!empty($ownPids)) {
            // drop our own PID(s)
            if (in_array(strstr(trim($procs), " ", true), $ownPids)) {
                unset($ProcessState[$idp]);
            }
        }
    }
    if (!$dbg) {
        return (count($ProcessState) - $modifier);
    }
    return array(0 => (count($ProcessState) - $modifier), 1 => $ProcessState, 2 => $ownPids);
}
And in the php daemon before I go any further I use this:
# INSTANCE CHECK
if (par_processes("MYSOFTWARENAME",array(getmypid()))){
error_log("SOME ERROR"); die();
}
Quite frankly it can probably be done quicker and more effectively, but it does the job.
Of course you need to be able to run exec() for this to work.
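Rather than counting ps lines from inside the script, cron itself can enforce the single instance with util-linux's flock(1). A sketch of the crontab entry (paths are examples):

```shell
# m h dom mon dow  command
* * * * * /usr/bin/flock -n /var/lock/abc.php.lock /usr/bin/php /path/abc.php
```

With -n, the second invocation exits immediately if the previous one is still holding the lock; using /var/lock rather than /tmp also keeps tmp cleaners from deleting the lock file under a running instance.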

PHP - How to check if process is running with multiple parameters

I know how to check whether one instance of a process is running, but how do I check for a particular process running with different parameters? For example:
/usr/local/bin/foo --config /home/config1.txt
/usr/local/bin/foo --config /home/config2.txt
The following code checks only the process name; how do I check whether a process is running with a particular parameter?
function is_process_running($process_name) {
    $result = array();
    exec("/sbin/pidof {$process_name}", $result);
    if (is_array($result) && isset($result[0]) && $result[0] >= 1) {
        return true;
    }
    return false;
}
is_process_running('/usr/local/bin/foo --config /home/config1.txt') should return true
is_process_running('/usr/local/bin/foo --config /home/config3.txt') should return false
function is_process_running($process_name) {
    $result = array();
    exec("ps -Af | grep " . escapeshellarg($process_name), $result);
    foreach ($result as $line) {
        // Skip the grep command itself, which also matches the pattern:
        // "ps -ax | grep firefox asdfasd"
        // returns "grep --color=auto firefox asdfasd"
        if (strpos($line, 'grep') !== false) {
            continue;
        }
        return true;
    }
    return false;
}
Give it a try. The 'f' flag makes ps include the full command line in its output.
Try this bash command to get more details about the processes:
ps ax | grep YourProcessName
I know that at least Java processes show their parameters there.
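The grep loop above does by hand what pgrep -f does natively: -f matches the pattern against the full command line, so the same binary with different arguments can be told apart. A sketch, with `sleep 300`/`sleep 301` standing in for the two foo --config invocations:

```shell
#!/bin/sh
# The [0]/[1] brackets keep each pattern from matching this script's
# own command line (the same trick as "grep [f]oo").
sleep 300 &
BG=$!
pgrep -f 'sleep 30[0]' >/dev/null && echo "sleep 300 is running"
pgrep -f 'sleep 30[1]' >/dev/null || echo "sleep 301 is not running"
kill "$BG"
```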

Does standard output impact SIGKILL?

I have a script that limits the execution time of commands.
limit.php
<?php
declare(ticks = 1);
if ($argc < 2) die("Wrong parameter\n");
$cmd = $argv[1];
$tl = isset($argv[2]) ? intval($argv[2]) : 3;
$pid = pcntl_fork();
if (-1 == $pid) {
    die('FORK_FAILED');
} elseif ($pid == 0) {
    exec($cmd);
    posix_kill(posix_getppid(), SIGALRM);
} else {
    pcntl_signal(SIGALRM, create_function('$signo', "die('EXECUTE_ENDED');"));
    sleep($tl);
    posix_kill($pid, SIGKILL);
    die("TIMEOUT_KILLED : $pid");
}
Then I test this script with some commands.
TEST A
php limit.php "php -r 'while(1){sleep(1);echo PHP_OS;}'" 3
After 3 s, we can see that the processes were killed as expected.
TEST B
Remove the output code and run again.
php limit.php "php -r 'while(1){sleep(1);}'" 3
The result does not look good: the process created by exec() was not killed as in TEST A.
[alix@s4 tmp]$ ps aux | grep whil[e]
alix 4433 0.0 0.1 139644 6860 pts/0 S 10:32 0:00 php -r while(1){sleep(1);}
System info
[alix@s4 tmp]$ uname -a
Linux s4 2.6.18-308.1.1.el5 #1 SMP Wed Mar 7 04:16:51 EST 2012 x86_64 x86_64 x86_64 GNU/Linux
[alix@s4 tmp]$ php -v
PHP 5.3.9 (cli) (built: Feb 15 2012 11:54:46)
Copyright (c) 1997-2012 The PHP Group
Zend Engine v2.3.0, Copyright (c) 1998-2012 Zend Technologies
Why were the processes killed in TEST A but not in TEST B? Does the output affect the SIGKILL?
Any suggestions?
There is a pipe between php -r 'while(1){sleep(1);echo PHP_OS;}' (process C) and its parent (process B). posix_kill($pid, SIGKILL) sends the KILL signal to process B, and process B is terminated, but process C knows nothing about the signal and continues to run, eventually writing to the now-broken pipe. When process C then receives the SIGPIPE signal it has no handler for it, so it exits.
You can verify it with strace (run php limit.php "strace php -r 'while(1){sleep(1); echo PHP_OS;};'" 1), and you will see something like this:
14:43:49.254809 write(1, "Linux", 5) = -1 EPIPE (Broken pipe)
14:43:49.254952 --- SIGPIPE (Broken pipe) @ 0 (0) ---
14:43:49.255110 close(2) = 0
14:43:49.255212 close(1) = 0
14:43:49.255307 close(0) = 0
14:43:49.255402 munmap(0x7fb0762f2000, 4096) = 0
14:43:49.257781 munmap(0x7fb076342000, 1052672) = 0
14:43:49.258100 munmap(0x7fb076443000, 266240) = 0
14:43:49.258268 munmap(0x7fb0762f3000, 323584) = 0
14:43:49.258555 exit_group(0) = ?
As for php -r 'while(1){sleep(1);}', no broken pipe occurs after its parent dies, so it continues to run as expected.
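The broken-pipe death is easy to reproduce without PHP. A bash sketch (PIPESTATUS is a bash feature; 141 = 128 + SIGPIPE, signal 13):

```shell
#!/bin/sh
# The writer loops forever; once head has read its one line and exited,
# the writer's next echo hits the closed pipe and dies from SIGPIPE.
bash -c '
  bash -c "while :; do echo alive; sleep 0.1; done" | head -n 1 >/dev/null
  echo "writer exit status: ${PIPESTATUS[0]}"
'
```

An exit status above 128 means the process was killed by a signal (status minus 128 is the signal number).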
Generally speaking, you should kill the whole process group, not just the process itself, if you want to kill its children too. With PHP you can put process B into its own process group and then kill the whole group; here is the diff against your code:
--- limit.php 2012-08-11 20:50:22.000000000 +0800
+++ limit-new.php 2012-08-11 20:50:39.000000000 +0800
@@ -9,11 +9,13 @@
if (-1 == $pid) {
die('FORK_FAILED');
} elseif ($pid == 0) {
+ $_pid = posix_getpid();
+ posix_setpgid($_pid, $_pid);
exec($cmd);
posix_kill(posix_getppid(), SIGALRM);
} else {
pcntl_signal(SIGALRM, create_function('$signo',"die('EXECUTE_ENDED');"));
sleep($tl);
- posix_kill($pid, SIGKILL);
+ posix_kill(-$pid, SIGKILL);
die("TIMEOUT_KILLED : $pid");
}
You send the kill signal to your forked process, but it does not propagate to its children or grandchildren. As such they are orphaned and continue running until something stops them. (In this case, any attempt to write to stdout causes an error that forces them to exit. Redirecting output would probably result in indefinitely-running orphans.)
You want to send a kill signal to the process and all its children. Unfortunately I lack the knowledge to tell you a good way to do that; I'm not very familiar with PHP's process control functionality. You could parse the output of ps.
One simple way I found that works, though, is to send a kill signal to the whole process group with the kill command. It's messy, and it adds an extra "Killed" message to the output on my machine, but it seems to work.
<?php
declare(ticks = 1);
if ($argc < 2) die("Wrong parameter\n");
$cmd = $argv[1];
$tl = isset($argv[2]) ? intval($argv[2]) : 3;
$pid = pcntl_fork();
if (-1 == $pid) {
    die('FORK_FAILED');
} elseif ($pid == 0) {
    exec($cmd);
    posix_kill(posix_getppid(), SIGALRM);
} else {
    pcntl_signal(SIGALRM, create_function('$signo', "die('EXECUTE_ENDED');"));
    sleep($tl);
    $gpid = posix_getpgid($pid);
    echo("TIMEOUT_KILLED : $pid");
    exec("kill -KILL -{$gpid}"); // This will also cause the script to kill itself.
}
For more information see: Best way to kill all child processes
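The group-kill idea can also be sketched in plain shell with setsid(1), which starts a command in a fresh session and process group so one signal to the negative PGID takes out the parent and all its children together. The `sleep 300` processes stand in for the exec'd command tree:

```shell
#!/bin/sh
# Parent sh plus two sleeps form one process group under setsid;
# killing -PGID removes them all, without the caller killing itself.
setsid sh -c 'sleep 300 & sleep 300 & wait' &
sleep 1
PID=$(pgrep -f 'sleep 30[0]' | head -n 1)
PGID=$(ps -o pgid= -p "$PID" | tr -d ' ')
kill -s KILL -- "-$PGID"
sleep 1
pgrep -f 'sleep 30[0]' >/dev/null || echo "whole group killed"
```

This is the shell equivalent of the posix_setpgid/posix_kill(-$pid, SIGKILL) diff above, but because setsid detaches the group from the caller's own, the caller survives (no stray "Killed" message).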
