I have a script to get server health from multiple servers like this:
#!/bin/bash
for ip
do
ssh 192.168.1.209 ssh root@$ip cat /proc/loadavg | awk '{print $1}' #CPU Usage
free | grep Mem | awk '{print $3/$2 * 100.0}' #Memory Usage
df -khP | awk '{print $3 "/" $2}' | awk 'FNR == 2' #Disk Space
df -kihP | awk '{print $3 "/" $2}' | awk 'FNR == 2' #Inode Space
date +'%d %b %Y %r %Z' #Datetime
ps -eo user,pid,pcpu,pmem,args|sort -nr -k3|head -5 #Process
done
The 209 machine acts as a gateway on my network, so I have to SSH to it first in order to reach the other servers. I run the script from the terminal like this:
./my_script.sh 192.168.1.210 192.168.1.211 192.168.1.212
I would like to get the output of each command (ps, date, etc.) from each server. The expected output for 2 servers would look like this:
0.11 #health from server 1
4.82577
1.7G/49G
46K/49M
27 Dec 2016 05:34:57 PM HKT
root 93 0.0 0.0 [kauditd]
root 9 0.0 0.0 [rcuob/0]
root 8740 0.0 0.0 ifstat --scan=100
root 829 0.0 0.0 /usr/sbin/wpa_supplicant -u -f /var/log/wpa_supplicant.log -c /etc/wpa_supplicant/wpa_supplicant.conf -u -f /var/log/wpa_supplicant.log -P /var/run/wpa_supplicant.pid
0.00 #health from server 2
4.82422
1.7G/49G
46K/49M
27 Dec 2016 05:34:57 PM HKT
root 93 0.0 0.0 [kauditd]
root 9 0.0 0.0 [rcuob/0]
root 8740 0.0 0.0 ifstat --scan=100
root 829 0.0 0.0 /usr/sbin/wpa_supplicant -u -f /var/log/wpa_supplicant.log -c /etc/wpa_supplicant/wpa_supplicant.conf -u -f /var/log/wpa_supplicant.log -P /var/run/wpa_supplicant.pid
The problem I'm facing is that it seems to be getting the health info from only one server. Why is that? Is it because I cannot do SSH like this? By the way, I'm using PHP's exec() function to execute the script, so I can format the results further and display them on my local page.
The best way to do that in bash is to use a here document (<<) and loop over the arguments ("$@") passed to the script:
for ip in "$@"
do
ssh 192.168.1.209 ssh root@"$ip" <<-'EOF'
cat /proc/loadavg | awk '{print $1}'
free | grep Mem | awk '{print $3/$2 * 100.0}'
df -khP | awk '{print $3 "/" $2}' | awk 'FNR == 2'
df -kihP | awk '{print $3 "/" $2}' | awk 'FNR == 2'
date +'%d %b %Y %r %Z'
ps -eo user,pid,pcpu,pmem,args|sort -nr -k3|head -5
EOF
done
Remember that the terminating EOF must not have any trailing whitespace, and because the here document uses <<- it may only be indented with tabs (leading tabs are stripped). The delimiter is quoted ('EOF') so that the local shell does not expand $1 and $3/$2 inside the awk programs before the commands reach the remote host.
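If the quoting is unclear, here is a quick sketch (with a hypothetical host called remote) of the difference the quoted delimiter makes:
# Unquoted delimiter: the local shell substitutes $1 before anything is
# sent, so the remote awk never sees {print $1}.
ssh remote <<-EOF
cat /proc/loadavg | awk '{print $1}'
EOF
# Quoted delimiter: the body is passed through verbatim and awk runs
# with the intended {print $1} on the remote host.
ssh remote <<-'EOF'
cat /proc/loadavg | awk '{print $1}'
EOF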
You can run the script now as
./my_script.sh 192.168.1.210 192.168.1.211 192.168.1.212
Alternatively, you can put the remote commands into a small standalone bash script and run them in one shot:
#!/bin/bash
cat /proc/loadavg | awk '{print $1}'
free | grep Mem | awk '{print $3/$2 * 100.0}'
df -khP | awk '{print $3 "/" $2}' | awk 'FNR == 2'
df -kihP | awk '{print $3 "/" $2}' | awk 'FNR == 2'
date +'%d %b %Y %r %Z'
ps -eo user,pid,pcpu,pmem,args|sort -nr -k3|head -5
Save the above script as commandlist.sh and invoke it inside the for loop as
ssh 192.168.1.209 ssh root@"$ip" 'bash -s' < /path-to/commandlist.sh
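Putting it together, a minimal wrapper around that idea could look like this (a sketch only, keeping the /path-to/ placeholder from above):
#!/bin/bash
# Sketch: run commandlist.sh on every IP passed as an argument,
# hopping through 192.168.1.209 as before.
for ip in "$@"
do
ssh 192.168.1.209 ssh root@"$ip" 'bash -s' < /path-to/commandlist.sh
done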
Well, I’d do something completely different.
First I’d write a commandlist.sh script as follows (not forgetting to make it executable...):
#!/bin/bash
echo "# Health from $(hostname)"
cat /proc/loadavg | cut -d' ' -f1 #CPU Usage
free | grep Mem | awk '{print $3/$2 * 100.0}' #Memory Usage
df -khP | awk 'FNR == 2 {print $3 "/" $2}' #Disk Space
df -kihP | awk 'FNR == 2 {print $3 "/" $2}' #Inode Space
date +'%d %b %Y %r %Z' #Datetime
ps -eo user,pid,pcpu,pmem,args|sort -nr -k3|head -5 #Process
(mmh.. I would produce the same output with very different code, but since it’s your script I’ve only edited out a couple of superfluous awk invocations.)
Then, I’d place it on the 209, where I’d also install GNU parallel.
Separately, I’d write the file ~/.parallel/sshloginfile as follows:
1/192.168.1.210
1/192.168.1.211
1/192.168.1.212
and I’d run the command
ssh 192.168.1.209 parallel -S .. --nonall --bf commandlist.sh ./commandlist.sh
Have a good look at man parallel for more possibilities.
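(As a side note, getting the two files onto the 209 could be as simple as the following sketch, assuming you prepare commandlist.sh and a local copy of the login file, here called sshloginfile, on your own machine first:)
# Sketch: copy the health script and the login file to the jump host.
scp commandlist.sh 192.168.1.209:commandlist.sh
ssh 192.168.1.209 'chmod +x commandlist.sh && mkdir -p .parallel'
scp sshloginfile 192.168.1.209:.parallel/sshloginfile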
Related
I'm running a PHP Yii2 console command from a cron job through an sh file every minute, so that if the previous job hasn't completed within a minute it won't be started a second time. My problem is that it sometimes gets stuck and I have to kill the process manually. How can I kill the cron job if it has been running for more than 30 minutes?
Currently my sh file looks like this:
if ps -ef | grep -v grep | grep cron/my-cron-action; then
exit 0
else
php /home/public_html/full-path-to-project/yii cron/my-cron-action
exit 0
fi
I wrote a test first. check_test_demo.sh is used to watch test_demo.sh:
#test_demo.sh
#!/usr/bin/env bash
echo $(date +%s) > ~/temp_timestamp.log
echo process start
while [ 1 ]; do
date
sleep 1
done
#check_test_demo.sh
#!/usr/bin/env bash
process_num=$(ps -ef | grep -v grep | grep test_demo.sh | wc -l)
if [ $process_num -gt 0 ]; then
now_time=$(date +%s)
start_time=$(cat ~/temp_timestamp.log)
run_time=$((now_time - $start_time))
if [ $run_time -gt 18 ]; then
ps aux | grep -v grep | grep test_demo.sh | awk '{print "kill -9 " $2}' | sh
nohup ~/dev-project/wwwroot/remain/file_temp/test_demo.sh > /dev/null 2>&1 &
echo "restart success"
exit
fi
echo running time $run_time seconds
else
nohup ~/dev-project/wwwroot/remain/file_temp/test_demo.sh > /dev/null 2>&1 &
echo "start success "
fi
Here is my answer; I don't know if it's right, but you can try it.
I think you should record the start time of the process first:
#!/usr/bin/env bash
process_num=$(ps -ef | grep -v grep | grep cron/my-cron-action | wc -l)
if [ $process_num -gt 0 ]; then
now_time=$(date +%s)
start_time=$(cat ~/temp_timestamp.log)
run_time=$((now_time - $start_time))
if [ $run_time -gt 1800 ]; then
ps aux | grep -v grep | grep cron/my-cron-action | awk '{print "kill -9 " $2}' | sh
echo $(date +%s) > ~/temp_timestamp.log
php /home/public_html/full-path-to-project/yii cron/my-cron-action
echo "restart success"
exit
fi
echo "process running time : $run_time seconds ."
else
echo $(date +%s) > ~/temp_timestamp.log
php /home/public_html/full-path-to-project/yii cron/my-cron-action
echo "process start success"
fi
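A hypothetical way to wire this up (the script name and log path below are assumptions, not taken from the question) is to let cron run the checker itself every minute:
# crontab entry (sketch): run the checker once a minute and keep a log.
* * * * * /bin/bash /home/public_html/check_cron_action.sh >> /tmp/check_cron_action.log 2>&1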
I'm trying to wipe sensitive commands from my server's shell history daily via the Laravel scheduler.
When I run this code:
$schedule->exec(`for h in $(history | grep mysql | awk '{print $1-N}' | tac) ; do history -d $h ; done; history -d $(history | tail -1 | awk '{print $1}') ; history`)->{$interval}()->emailOutputTo($email)->environments('prod');
I get this error:
⚡️ app2020 php artisan schedule:run
ErrorException
Undefined variable: h
at app/Console/Kernel.php:44
Note that this command works perfectly when run directly in the shell:
for h in $(history | grep mysql | awk '{print $1-N}' | tac) ; do history -d $h ; done; history -d $(history | tail -1 | awk '{print $1}') ; history
Note:
I know I can achieve this by adding the command above to the crontab, but the goal is to keep all cron activities organized in the Laravel project.
Since bash and PHP both use $ for their variables, how would one go about making a bash command like this work?
I'll take any hints.
You're wrapping your command in backticks, which will both interpolate variables and execute the value as a command. You want neither of these things.
Use single quotes to build the command, and then pass it to exec().
$cmd = 'for h in $(history | grep mysql | awk \'{print $1-N}\' | tac) ;';
$cmd .= ' do history -d $h ;';
$cmd .= 'done;';
$cmd .= 'history -d $(history | tail -1 | awk \'{print $1}\') ;';
$cmd .= 'history';
$schedule
->exec($cmd)
->{$interval}()
->emailOutputTo($email)
->environments('prod');
A more reliable approach is to edit the history file directly; the scheduled command runs in its own non-interactive shell, which has no interactive history loaded, so the history builtin above won't actually touch your saved history.
$cmd = 'sed -i /mysql/d "$HOME/.bash_history"';
There's no need to erase the last entry in the file, since it only records interactive shell commands and the scheduled command never appears in it.
Obviously, the best thing to do would be to not put "sensitive" things on the command line in the first place. For MySQL, that can be done by putting the credentials in a .my.cnf file.
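For example, a minimal ~/.my.cnf might look like this (placeholder values; keep the file private with chmod 600):
[client]
user=dbuser
password=s3cret
With that in place, mysql picks the credentials up automatically and they never appear in the shell history or the process list.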
I'm trying to get the size of my VG (volume group) disk space.
When I run the command in my shell it returns the correct value without any problem, but when I include it in a PHP file it returns NULL.
I have seen a lot of threads discussing this, but none of the answers solved my problem :(
Which command works? This one:
$variable = shell_exec("df -h | grep \"username\" | awk '{print $3}'");
I don't think it's a permissions problem because that command works.
For the test, the script has 777 and root permissions.
The code that has the problem:
$test = "VG Size";
$tailletotaleduserveur = shell_exec("vgdisplay vghome | grep ".$test." | awk '{print $3}'");
echo json_encode($tailletotaleduserveur);
The actual result in the network tab is null -> http://prntscr.com/nehsmb
It should return "5.85"
Thanks for help :)
Your command fails in a shell as well:
bash$ vgdisplay vghome | grep VG Size | awk '{print $3}'
grep: Size: No such file or directory
You have to quote 'VG Size':
$tailletotaleduserveur = shell_exec("vgdisplay vghome | grep '".$test."' | awk '{print $3}'");
Final answer for my problem :) :
$test = 'VG Size';
$tailletotaleduserveur = shell_exec("/usr/bin/sudo vgdisplay vghome | grep '".$test."' | awk '{print $3}'");
So that other guy ;) was partly right ;) I will confirm your answer, thanks.
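(One practical note: for that sudo call to work non-interactively from PHP, the account PHP runs as usually needs a passwordless sudoers rule for vgdisplay. A sketch, with the user name and binary path as assumptions to adjust for your setup:)
# Hypothetical /etc/sudoers.d/vgdisplay entry; replace www-data and the
# path with your actual PHP user and the output of "command -v vgdisplay".
www-data ALL=(root) NOPASSWD: /usr/sbin/vgdisplay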
I have a php script that runs via a cron job.
I have an exec command in the script like so:
exec("ps -u bob -o user:20,%cpu,cmd | awk 'NR>1' | grep vlc | tr -s ' ' | cut -d ' ' -f 2", $cpu, $return);
This gets me the CPU usage of a process run by a specific user, if the process exists. When run from the command line I get, say, 21, or nothing at all, depending on whether the process is running. However, when it runs via the PHP script, I get the following:
[0] => bob 0.0 /bin/sh -c php /home/bob/restart.php bob
[1] => bob 0.0 php /home/bob/restartStream.php bob
[2] => bob 0.0 sh -c ps -u bob -o user:20,%cpu,cmd | awk NR
It seems to be returning all the recent commands executed as opposed to the result of the command executed.
I have seen some posts which show the use of 2>&1, which I believe redirects stderr to stdout or something similar. However, I have tried adding this to my command like so:
ps -u bob -o user:20,%cpu,cmd | awk 'NR>1' | grep vlc | tr -s ' ' | cut -d ' ' -f 2 2>&1
But it does not seem to make a difference. Can anyone give me any pointers as to why this is occurring and what can be done to resolve it?
You need to clear out $cpu before you call exec(); it appends the new output to the end of the array, it doesn't overwrite it.
You can also get rid of grep, tr, and cut and do all the processing of the output in awk:
$cpu = array();
exec("ps -u bob -o user:20,%cpu,cmd | awk 'NR>1 && /vlc/ && !/awk/ {print $2}'",$cpu,$return);
The !/awk/ keeps it from matching the awk line, since that contains vlc.
I have a button in my program that, when pressed, runs this line of code:
ps -eo pid,command | grep "test-usb20X" | grep -v grep | awk '{print $1}'
It gives me the correct pid output. For example, "3243".
What I want to do now is kill that pid. I would just write
exec("kill -9 3232");
But the pid changes, so how do I save that number to a variable and then use it in the kill line?
You can capture the output of your first command in a variable
$pid = exec("ps -eo pid,command | grep \"test-usb20X\" | grep -v grep | awk '{print $1}'");
and use this variable for the next command
exec("kill -9 ".$pid);
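If pkill is available on the server, the matching and killing can also be collapsed into a single shell command, which avoids parsing the ps output at all (a sketch; -f matches against the full command line):
# One-liner to pass to exec() instead of the two-step approach above;
# pgrep/pkill never match their own process.
pkill -9 -f test-usb20X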