I have a button in my program that, when pressed, runs this line of code:
ps -eo pid,command | grep "test-usb20X" | grep -v grep | awk '{print $1}'
It gives me the correct pid output. For example, "3243".
What I want to do now is kill that pid. I would just write
exec("kill -9 3243");
But the pid changes, so how do I save that number to a variable and then use it in the kill line?
You can capture the output of your first command in a variable:
$pid = exec("ps -eo pid,command | grep \"test-usb20X\" | grep -v grep | awk '{print $1}'");
and use this variable for the next command
exec("kill -9 ".$pid);
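For what it's worth, if the procps `pgrep`/`pkill` tools are available (an assumption; the `ps | grep | awk` pipeline works anywhere `ps` does), the capture-and-kill can be sketched more simply. Here a throwaway `sleep` stands in for the real `test-usb20X` process:

```shell
# A throwaway background process stands in for test-usb20X
sleep 31449 &

pid=$(pgrep -f "sleep 31449")   # -f matches against the full command line
echo "pid: $pid"

pkill -9 -f "sleep 31449"       # or skip the variable and kill by pattern
```

From PHP the one-step version would be `exec('pkill -9 -f '.escapeshellarg('test-usb20X'));`, with `escapeshellarg()` guarding against shell metacharacters.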
I'm trying to wipe sensitive history commands daily off my server via Laravel scheduler
When I run this code:
$schedule->exec(`for h in $(history | grep mysql | awk '{print $1-N}' | tac) ; do history -d $h ; done; history -d $(history | tail -1 | awk '{print $1}') ; history`)->{$interval}()->emailOutputTo($email)->environments('prod');
I get this error:
⚡️ app2020 php artisan schedule:run
ErrorException
Undefined variable: h
at app/Console/Kernel.php:44
Note that this command works perfectly when run directly in a shell:
for h in $(history | grep mysql | awk '{print $1-N}' | tac) ; do history -d $h ; done; history -d $(history | tail -1 | awk '{print $1}') ; history
Note:
I know I can achieve this by adding the command above to the crontab, but the goal is to keep all cron activity organized in the Laravel project.
Since bash and PHP both use $ for their variables, how would one go about making a similar bash command work?
Any hints appreciated.
You're wrapping your command in backticks, which will both interpolate variables and execute the value as a command. You want neither of these things.
Use single quotes to build the command, so PHP leaves the $ signs alone, and then pass it to exec().
$cmd = 'for h in $(history | grep mysql | awk \'{print $1-N}\' | tac) ;';
$cmd .= ' do history -d $h ;';
$cmd .= ' done;';
$cmd .= ' history -d $(history | tail -1 | awk \'{print $1}\') ;';
$cmd .= ' history';
$schedule
->exec($cmd)
->{$interval}()
->emailOutputTo($email)
->environments('prod');
For a more efficient approach, and one that actually works from the scheduler's non-interactive shell (where the history builtin has no interactive history loaded anyway), you might just edit the history file directly.
$cmd = 'sed -i /mysql/d "$HOME/.bash_history"';
There's no need to erase the last entry in the file, as it's a history of interactive shell commands.
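A quick way to convince yourself of what that sed call does, using a throwaway file in place of `~/.bash_history`:

```shell
# Toy run of the sed deletion (GNU sed's -i) against a stand-in history file
printf 'ls\nmysql -u root -psecret\ncd /tmp\n' > /tmp/hist_demo
sed -i '/mysql/d' /tmp/hist_demo
cat /tmp/hist_demo
# → ls
# → cd /tmp
```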
Obviously the best thing to do would be not putting "sensitive" things in the command line. For MySQL, this can be done by adding credentials to the .my.cnf file.
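For example, a minimal ~/.my.cnf (readable only by you, chmod 600) keeps the password off the command line entirely; the user/password values here are placeholders:

```ini
[client]
user=dbuser
password=s3cret
```

mysql and mysqldump read the [client] section of this file automatically.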
What I want to achieve
I want to execute a script if its process is not already started on the server. For that, I am preparing the command in a shell script and executing it in a single line.
Command with PHP variables
$cmd = "if [[ `ps auxww | grep -v grep | grep ".$process_file." | grep '".$find."'` == '' ]] ; then ".$cmd2." fi";
echo $cmd."\n";
Executed command, once variables are replaced (what will actually run on bash):
if [[ `ps auxww | grep -v grep | grep /home/new_jig.php | grep 'test_51 1714052'` == '' ]] ; then php /home/new_jig.php test_51 1714052 & fi;
Executing the command:
exec($cmd,$out,$res);
Please note that I have also split the problem into two statements and executed those, but it is time consuming: when there are more than 2000 entries in the list and the command runs for each of them, it takes a minute or more to reach the last one.
I want to achieve this within 10 seconds. Please help me reach an optimal solution.
Thanks
Jignesh
Somehow I was able to make it execute with the following command:
$process_file = "/home/new_jig.php"; // PHP file that runs the functionality
$cmd2 = " php ".$process_file." 1212 >/dev/null 2>/dev/null & ";
$cmd11 = "if ! ps auxww | grep -v grep | grep '".$process_file."' | grep '".$find."' >/dev/null 2>&1 ; then ".$cmd2." fi";
shell_exec($cmd11." >/dev/null 2>/dev/null &");
Before this, for 1100 requests the process took 60+ seconds.
After this, it completes in 20 to 30 seconds.
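One likely reason the loop is still slow is that every iteration forks a fresh ps plus several greps. A hedged sketch of an alternative: take one ps snapshot up front and check all 2000 candidates against it in-process (the file name and argument string below are the ones from the question):

```shell
# One ps snapshot for all checks, instead of one ps per entry
snapshot=$(ps auxww)

# Returns 0 if a command line containing both the script path ($1)
# and the argument string ($2) appears in the snapshot
is_running() {
    printf '%s\n' "$snapshot" | grep -F -- "$1" | grep -Fq -- "$2"
}

if ! is_running /home/new_jig.php "test_51 1714052"; then
    php /home/new_jig.php test_51 1714052 >/dev/null 2>&1 &
fi
```

The snapshot is taken once, so a process started during the loop won't be seen until the next run; for this start-if-absent use case that is usually acceptable.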
I have a script to get server health from multiple servers like this:
#!/bin/bash
for ip
do
ssh 192.168.1.209 ssh root@"$ip" cat /proc/loadavg | awk '{print $1}' #CPU Usage
free | grep Mem | awk '{print $3/$2 * 100.0}' #Memory Usage
df -khP | awk '{print $3 "/" $2}' | awk 'FNR == 2' #Disk Space
df -kihP | awk '{print $3 "/" $2}' | awk 'FNR == 2' #Inode Space
date +'%d %b %Y %r %Z' #Datetime
ps -eo user,pid,pcpu,pmem,args|sort -nr -k3|head -5 #Process
done
The 209 is acting as a gateway on my network, so I have to ssh to it first in order to access the other servers. I run the script from a terminal like this:
./my_script.sh 192.168.1.210 192.168.1.211 192.168.1.212
I would like to get each command's output (ps, date, etc.) from each server. The expected output for 2 servers would look like:
0.11 #health from server 1
4.82577
1.7G/49G
46K/49M
27 Dec 2016 05:34:57 PM HKT
root 93 0.0 0.0 [kauditd]
root 9 0.0 0.0 [rcuob/0]
root 8740 0.0 0.0 ifstat --scan=100
root 829 0.0 0.0 /usr/sbin/wpa_supplicant -u -f /var/log/wpa_supplicant.log -c /etc/wpa_supplicant/wpa_supplicant.conf -u -f /var/log/wpa_supplicant.log -P /var/run/wpa_supplicant.pid
0.00 #health from server 2
4.82422
1.7G/49G
46K/49M
27 Dec 2016 05:34:57 PM HKT
root 93 0.0 0.0 [kauditd]
root 9 0.0 0.0 [rcuob/0]
root 8740 0.0 0.0 ifstat --scan=100
root 829 0.0 0.0 /usr/sbin/wpa_supplicant -u -f /var/log/wpa_supplicant.log -c /etc/wpa_supplicant/wpa_supplicant.conf -u -f /var/log/wpa_supplicant.log -P /var/run/wpa_supplicant.pid
The problem I'm facing is that it seems to be getting the health info from only one server. Why is that? Is it because I cannot do SSH like this? I'm using PHP's exec() function to execute the script, by the way, to further format and display it on my local page.
The best way to do that in bash is to use a here document (<<) and loop over each of the arguments ("$@") passed to the script:
for ip in "$@"
do
ssh 192.168.1.209 ssh root@"$ip" <<-'EOF'
cat /proc/loadavg | awk '{print $1}'
free | grep Mem | awk '{print $3/$2 * 100.0}'
df -khP | awk '{print $3 "/" $2}' | awk 'FNR == 2'
df -kihP | awk '{print $3 "/" $2}' | awk 'FNR == 2'
date +'%d %b %Y %r %Z'
ps -eo user,pid,pcpu,pmem,args|sort -nr -k3|head -5
EOF
done
The delimiter is quoted ('EOF') so that $1 and the other awk fields are expanded on the remote side rather than locally. Also remember that <<- strips only leading tab characters: if you indent the body or the terminating EOF, use tabs, never spaces.
You can run the script now as
./my_script.sh 192.168.1.210 192.168.1.211 192.168.1.212
Alternatively, you can put the commands in a standalone bash script and run it in one shot:
#!/bin/bash
cat /proc/loadavg | awk '{print $1}'
free | grep Mem | awk '{print $3/$2 * 100.0}'
df -khP | awk '{print $3 "/" $2}' | awk 'FNR == 2'
df -kihP | awk '{print $3 "/" $2}' | awk 'FNR == 2'
date +'%d %b %Y %r %Z'
ps -eo user,pid,pcpu,pmem,args|sort -nr -k3|head -5
Save the script above as commandlist.sh and invoke it inside the for loop as
ssh 192.168.1.209 ssh root@"$ip" 'bash -s' < /path-to/commandlist.sh
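The `bash -s` trick works because the script body travels over stdin, so nothing has to be copied to the remote hosts first. A local toy run (no ssh involved) shows the mechanics; the file name and argument are invented for the demo:

```shell
# Stand-in for commandlist.sh; "hello" plays the role of an argument
printf 'echo "got arg: $1"\n' > /tmp/commandlist_demo.sh
bash -s hello < /tmp/commandlist_demo.sh
# → got arg: hello
```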
Well, I’d do something completely different.
First I’d write a commandlist.sh script as follows (not forgetting to make it executable...):
#!/bin/bash
echo "# Health from $(hostname)"
cat /proc/loadavg | cut -d' ' -f1 #CPU Usage
free | grep Mem | awk '{print $3/$2 * 100.0}' #Memory Usage
df -khP | awk 'FNR == 2 {print $3 "/" $2}' #Disk Space
df -kihP | awk 'FNR == 2 {print $3 "/" $2}' #Inode Space
date +'%d %b %Y %r %Z' #Datetime
ps -eo user,pid,pcpu,pmem,args|sort -nr -k3|head -5 #Process
(mmh.. I would produce the same output with very different code, but since it’s your script I’ve only edited out a couple of superfluous awk invocations.)
Then, I’d place it on the 209, where I’d also install GNU parallel.
Separately, I’d write the file ~/.parallel/sshloginfile as follows:
1/192.168.1.210
1/192.168.1.211
1/192.168.1.212
and I’d run the command
ssh 192.168.1.209 parallel -S .. --nonall --bf commandlist.sh ./commandlist.sh
Have a good look at man parallel for more possibilities.
I have a php script that runs via a cron job.
I have an exec command in the script like so:
exec("ps -u bob -o user:20,%cpu,cmd | awk 'NR>1' | grep vlc | tr -s ' ' | cut -d ' ' -f 2",$cpu,$return)
This gets me the CPU usage of a process run by a specific user, if the process exists. When run from the command line I get, say, 21, or nothing at all, depending on whether the process is running. However, when running via the PHP script, I get the following:
[0] => bob 0.0 /bin/sh -c php /home/bob/restart.php bob
[1] => bob 0.0 php /home/bob/restartStream.php bob
[2] => bob 0.0 sh -c ps -u bob -o user:20,%cpu,cmd | awk NR
It seems to be returning the recently executed commands rather than the result of the command I ran.
I have seen some posts that show the use of 2>&1, which I believe redirects stderr into stdout. I have tried this in my command like so:
ps -u bob -o user:20,%cpu,cmd | awk 'NR>1' | grep vlc | tr -s ' ' | cut -d ' ' -f 2 2>&1
But it does not seem to make a difference. Can anyone give me pointers as to why this is occurring and what can be done to resolve it?
You need to clear out $cpu before you call exec(). It appends the new output lines to the end of the array; it doesn't overwrite them.
You can also get rid of grep, tr, and cut, and do all the processing of the output in awk:
$cpu = array();
exec("ps -u bob -o user:20,%cpu,cmd | awk 'NR>1 && /vlc/ && !/awk/ {print $2}'",$cpu,$return);
The !/awk/ keeps it from matching the awk line, since that contains vlc.
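You can check the consolidated awk filter against canned ps output; the sample lines below are invented for illustration:

```shell
# Fake ps output: a header, a vlc process, and the sh -c wrapper
# (which contains both "vlc" and "awk" and must be excluded)
ps_sample='USER                 %CPU CMD
bob                  21.0 vlc video.mp4
bob                   0.0 sh -c ps ... | awk ... vlc'

printf '%s\n' "$ps_sample" | awk 'NR>1 && /vlc/ && !/awk/ {print $2}'
# → 21.0
```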
I know this is simple but I just can't figure it out.
I have a bunch of files output by "svn st" that I want PHP to syntax-check on the command line.
This outputs the list of files: svn st | awk '{print $2}'
And this checks a php script: php -l somefile.php
But this, or variants of it, doesn't work: svn st | php -l '{print $2}'
Any ideas? Thanks!
Use xargs:
svn st | awk '{print $2}' | xargs -L 1 php -l
The xargs -L 1 command reads items from standard input, one per line, and runs the given command for each item separately. See the xargs(1) man page for more info.
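A toy run (using echo in place of php -l, so nothing is actually linted) makes the per-line behaviour visible:

```shell
# Each input line becomes one separate "php -l <file>" invocation;
# echo stands in for php so the commands are just printed
printf 'a.php\nb.php\n' | xargs -L 1 echo php -l
# → php -l a.php
# → php -l b.php
```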