I have a simple script used to recreate and format partitions on a disk. It runs fine from the command line, but when called from PHP it fails with "device busy" at the first command that changes the partition table. I've done a lot of reading on Stack Overflow and other sources, and in most similar cases the problem is a relative path and/or permissions that keep the script from running at all; in my case the script runs, but it isn't able to do the partitioning/formatting.
Things I've checked/tried:
Paths: I use absolute paths/chdir and the script executes.
Permissions: all set to rwx to be on the safe side while testing.
Sudoers file: granted all permissions for testing.
Relevant (imho) php.ini directives: (also for testing purposes)
disable_functions =
safe_mode =
fuser command reports nothing.
I've used parted/fdisk/sfdisk and a mix of them just in case, same results.
The script runs as expected with sudo from the command line.
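For reference, these are the kinds of checks that can show what is still holding the disk when "device busy" comes up (a rough sketch only; the device names are the ones from my setup, and not every tool may be installed):
# anything from /dev/sda still mounted?
grep sda /proc/mounts
# open file handles on the disk or its partitions
lsof /dev/sda /dev/sda1 /dev/sda2 /dev/sda3
# swap or device-mapper/md holders that keep the kernel from re-reading the table
swapon --show
ls /sys/block/sda/holders/
lsblk /dev/sda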
The php code is this:
$path="/usr/share/test";
$logFile=$path . "/partition.log";
chdir($path);
if (!empty($_GET['run'])) {
    $command = "sudo ./partition.sh 2>&1 | tee $logFile";
    shell_exec($command);
}
<a href="?run=true">Run script</a>
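For reference, one way to reproduce outside the browser exactly what the web server runs (a sketch; paths are the ones from the snippet above, and it assumes the sudoers entry for www-data is in place):
# run the same command the web server user would run
cd /usr/share/test
sudo -u www-data sudo ./partition.sh 2>&1 | tee /usr/share/test/partition.log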
And the partitioning script goes like this:
umount -v -f -l /dev/sda1
umount -v -f -l /dev/sda2
umount -v -f -l /dev/sda3
sfdisk --delete /dev/sda
sleep 5
/sbin/partprobe
/sbin/parted -a optimal -s /dev/sda mkpart primary ext4 32.3kB 409600MB
/sbin/parted -a optimal -s /dev/sda mkpart primary ext4 409600MB 614400MB
/sbin/parted -a optimal -s /dev/sda mkpart primary ext4 614400MB 1024GB
sleep 5
/sbin/partprobe
umount -v -f -l /dev/sda1
umount -v -f -l /dev/sda2
umount -v -f -l /dev/sda3
echo "***formating partitions"
/sbin/mkfs.ext4 -F /dev/sda1
/sbin/mkfs.ext4 -F /dev/sda2
/sbin/mkfs.ext4 -F /dev/sda3
echo "***mounting"
mount /dev/sda1 /media/vms
mount /dev/sda2 /media/apps
mount /dev/sda3 /media/novos
I'm unmounting my partitions twice because I discovered the hard way that parted mounts them automatically, and occasionally umount has reported them as still mounted (ref: https://unix.stackexchange.com/questions/432850/is-parted-auto-mounting-new-partitions).
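For reference, a more defensive version of that unmount step might look like this (just a sketch; the devices are the ones from the script above):
# let udev finish processing the new partition events, then unmount only what is actually mounted
udevadm settle
for part in /dev/sda1 /dev/sda2 /dev/sda3; do
    if findmnt "$part" > /dev/null; then
        umount -v "$part"
    fi
done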
Sample output (please ignore the warning messages; optimization comes later!):
+ umount -v -l /dev/sda1
umount: /media/vms (/dev/sda1) unmounted
+ umount -v -l /dev/sda2
umount: /media/apps (/dev/sda2) unmounted
+ umount -v -l /dev/sda3
umount: /media/novos (/dev/sda3) unmounted
+ sfdisk --delete /dev/sda
The partition table has been altered.
Calling ioctl() to re-read partition table.
Re-reading the partition table failed.: Device or resource busy
The kernel still uses the old table. The new table will be used at the next reboot or after you run partprobe(8) or kpartx(8).
Syncing disks.
+ sleep 5
+ /sbin/partprobe
Error: Partition(s) 1, 2, 3 on /dev/sda have been written, but we have been unable to inform the kernel of the change, probably because it/they are in use. As a result, the old partition(s) will remain in use. You should reboot now before making further changes.
+ /sbin/parted -a optimal -s /dev/sda mkpart primary ext4 32.3kB 409600MB
Warning: The resulting partition is not properly aligned for best performance.
Error: Partition(s) 2, 3 on /dev/sda have been written, but we have been unable to inform the kernel of the change, probably because it/they are in use. As a result, the old partition(s) will remain in use. You should reboot now before making further changes.
+ /sbin/parted -a optimal -s /dev/sda mkpart primary ext4 409600MB 614400MB
Warning: The resulting partition is not properly aligned for best performance.
Error: Partition(s) 3 on /dev/sda have been written, but we have been unable to inform the kernel of the change, probably because it/they are in use. As a result, the old partition(s) will remain in use. You should reboot now before making further changes.
+ /sbin/parted -a optimal -s /dev/sda mkpart primary ext4 614400MB 1024GB
Warning: The resulting partition is not properly aligned for best performance.
+ sleep 5
+ /sbin/partprobe
+ umount -v -l /dev/sda1
umount: /dev/sda1: not mounted
+ umount -v -l /dev/sda2
umount: /dev/sda2: not mounted
+ umount -v -l /dev/sda3
umount: /dev/sda3: not mounted
+ echo '***formating partitions'
***formating partitions
+ /sbin/mkfs.ext4 -F /dev/sda1
mke2fs 1.43.4 (31-Jan-2017)
/dev/sda1 is apparently in use by the system; will not make a filesystem here!
+ /sbin/mkfs.ext4 -F /dev/sda2
mke2fs 1.43.4 (31-Jan-2017)
/dev/sda2 is apparently in use by the system; will not make a filesystem here!
+ /sbin/mkfs.ext4 -F /dev/sda3
mke2fs 1.43.4 (31-Jan-2017)
/dev/sda3 is apparently in use by the system; will not make a filesystem here!
+ echo '***mounting'
***mounting
+ mount /dev/sda1 /media/vms
+ mount /dev/sda2 /media/apps
+ mount /dev/sda3 /media/novos
My system runs from a CFast card mounted on /dev/sdc, and /dev/sda is used for logging. All tests were made over ssh to the box.
Linux test 4.9.0-12-rt-amd64 #1 SMP PREEMPT RT Debian 4.9.210-1 (2020-01-20) x86_64 GNU/Linux
PHP 7.0.33-0+deb9u7 (cli) (built: Feb 16 2020 15:11:40) ( NTS )
parted (GNU parted) 3.2
util-linux 2.29.2 (fdisk, sfdisk)
Any pointers/comments will be much appreciated.
Thanks in advance.
Related
I can use both ini_set('memory_limit', '512M'); in the file and php -d memory_limit=512M from the command line, but can I also trace memory usage from the terminal?
I know I can use memory_get_usage() inside a PHP file, but how do I trace it from the command line?
Try:
$ watch -n 5 'php -r "var_dump(memory_get_usage());"'
This will check the memory state every 5 seconds.
Or maybe you can use the ps tool:
$ ps -F -C php-cgi
Output:
UID PID PPID C SZ RSS PSR STIME TTY TIME CMD
http 10794 10786 0 4073 228 0 Jun09 ? 00:00:00 /usr/bin/php-cgi
RSS is the Real-memory (resident set) size in kilobytes of the process.
The solution I was looking for, with simple output, is:
watch -n 5 'php -r "echo (string) memory_get_usage(true)/pow(10, 6);"'
It will return how many MB the PHP process is using.
2.097152
NOTICE: A solution like ps -F -C php-cgi will fail on macOS machines with
ps: illegal option -- F
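On macOS, BSD ps spells the options differently; a rough equivalent (sketch) would be:
# RSS is the resident set size in kilobytes, as on Linux
ps -axo pid,rss,comm | grep php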
I'm trying to start a bash script later from PHP, so I allowed it in visudo.
www-data ALL = (root) NOPASSWD: /sbin/iptables
www-data ALL = (root) NOPASSWD: /usr/bin/at
The script removeuserIP just does sudo iptables ... and it works:
#!/bin/bash
sudo iptables -t nat -D PREROUTING -s $1 -j ACCEPT;
sudo iptables -D FORWARD -s $1 -j ACCEPT;
and in the PHP code, I put this line:
$msg=exec("echo /var/www/scripts/removeuserIP $ipaddress | at now + 1 minutes");
but the issue is that it starts the script right away. I checked /var/log/auth.log and indeed, it runs the command immediately.
I tried it directly in a terminal and there was no issue; it starts later (with an argument, of course):
echo /var/www/scripts/removeuserIP $ipaddress | at now + 1 minutes
I also tried to do it like this in a terminal, but this one doesn't work either, because at doesn't understand that there is an argument after the file:
sudo at now +1 minutes -f /var/www/scripts/removeuserIP 172.24.1.115
I really don't understand why it starts right now when it should start 1 minute later.
Would it be acceptable to put a time delay in the removeuserIP script?
#!/bin/bash
sleep 1m
sudo iptables -t nat -D PREROUTING -s $1 -j ACCEPT;
sudo iptables -D FORWARD -s $1 -j ACCEPT;
Solution: Finally, after checking /var/log/apache2/error.log, I saw that www-data doesn't have permission to use at.
In fact, you have to edit /etc/at.deny and remove the www-data line. There is probably a security reason why it's forbidden by default, and a better way to do this, but at least it's working.
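For anyone hitting the same thing, a quick way to confirm the problem and test the fix might be (sketch; Debian-style paths, IP taken from the example above):
# if www-data is listed here, at refuses to queue jobs for it
grep www-data /etc/at.deny
# after removing the line, check that the web user can schedule a job
sudo -u www-data sh -c 'echo "/var/www/scripts/removeuserIP 172.24.1.115" | at now + 1 minutes'
# list the jobs queued for that user
sudo -u www-data atq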
I have a nodejs script named script.js.
var util = require('util');
var net = require("net");
process.on("uncaughtException", function(e) {
console.log(e);
});
var proxyPort = "40000";
var serviceHost = "1.2.3.4";
var servicePort = "50000";
net.createServer(function (proxySocket) {
var connected = false;
var buffers = new Array();
var serviceSocket = new net.Socket();
serviceSocket.connect(parseInt(servicePort), serviceHost);
serviceSocket.pipe(proxySocket).pipe(serviceSocket);
proxySocket.on("error", function (e) {
serviceSocket.end();
});
serviceSocket.on("error", function (e) {
console.log("Could not connect to service at host "
+ serviceHost + ', port ' + servicePort);
proxySocket.end();
});
proxySocket.on("close", function(had_error) {
serviceSocket.end();
});
serviceSocket.on("close", function(had_error) {
proxySocket.end();
});
}).listen(proxyPort);
I am running it normally with nodejs script.js, but now I want to include forever or pm2 functionality as well. When I am root, everything works smoothly:
chmod -R 777 /home/nodejs/forever/;
-- give rights
watch -n 0.1 'ps ax | grep forever | grep -v grep'
-- watch forwarders (where I can see if a forever instance is running)
/usr/local/bin/forever -d -v --pidFile "/home/nodejs/forever/file.pid" --uid 'file' -p '/home/nodejs/forever/' -l '/home/nodejs/forever/file.log' -o '/home/nodejs/forever/file.log' -e '/home/nodejs/forever/file.log' -a start /etc/dynamic_ip/nodejs/proxy.js 41789 1.2.3.4:44481 414 file
-- open with forever
forever list
-- it is there, I can see it
forever stopall
-- kill them all
The problem is when I want to run the script from a PHP script with the system or exec functions:
sudo -u www-data /usr/local/bin/forever -d -v --pidFile "/home/nodejs/forever/file.pid" --uid 'file' -p '/home/nodejs/forever/' -l '/home/nodejs/forever/file.log' -o '/home/nodejs/forever/file.log' -e '/home/nodejs/forever/file.log' -a start /etc/dynamic_ip/nodejs/proxy.js 41789 1.2.3.4:44481 414 file
-- open as www-data (or I can do this just by accessing `http://1.2.3.4/test.php`, it is the same thing)
forever list
-- see if it is there, and it is not (I see it in watch)
forever stopall
-- says no forever processes are running
kill PID_ID
-- the only way is to kill it by PID ... and on another server all of this works very well; I can create and kill forever processes from a PHP script accessed from the web ... I don't know why
-- everything is in /etc/sudoers including /usr/local/bin/forever
Why is that? How can I solve this?
I also tried a trick: I created a user 'forever2' and a script.sh with this content:
sudo su forever2 user123; /usr/local/bin/forever -d -v --pidFile "/home/nodejs/forever/file.pid" --uid 'file' -p '/home/nodejs/forever/' -l '/home/nodejs/forever/file.log' -o '/home/nodejs/forever/file.log' -e '/home/nodejs/forever/file.log' -a start /etc/dynamic_ip/nodejs/proxy.js 41789 1.2.3.4:44481 414 file;
where user123 does not exist; it's just a trick to exit the shell after execution. The script works and forever runs, and I can close all forever processes with forever stopall from root. When I try the same thing by running http://1.2.3.4/test.php or as the www-data user, I cannot close it from root or www-data, so not even this works.
I tried this on Ubuntu 14.04.3 LTS, Ubuntu 14.04 LTS, and Debian GNU/Linux 8 ... still the same thing.
Any ideas?
Thanks.
If you are starting the process from within Apache or the web server, you are already running as the www-data user, so doing a sudo su to the user context you already have is likely not necessary.
When you start this forever task you may also need to close the terminal/input streams and send it directly to the background. Something like this:
// Assemble command
$cmd = '/usr/bin/forever';
$cmd.= ' -d -v --pidfile /tmp/my.pid'; // add other options
$cmd.= ' start';
$cmd.= ' /etc/dynamic_ip/nodejs/proxy.js';
// "magic" to get details
$cmd.= ' > /tmp/output.log 2>&1'; // Route STDOUT to the log file, then STDERR to STDOUT (both end up in the file)
$cmd.= ' &'; // Send whole task to background.
system($cmd);
Now, there won't be any output here but you should have something in /tmp/output.log which could show why forever failed, or the script crashed.
If you've been running the script sometimes as root and then trying the same command as www-data, you may also be running into permission problems on one or more files/directories created during the runs as root, which now conflict when running as www-data.
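A quick way to check for that kind of left-over root ownership might be (sketch; paths taken from the question):
# look for root-owned files in the forever working directory
ls -la /home/nodejs/forever/
# forever keeps per-user state, so list/stop as the same user that started the process
sudo -u www-data forever list
# if root-owned files are in the way, hand the directory back to the web user
chown -R www-data:www-data /home/nodejs/forever/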
This is partly about PHP security: you say you're running it from a PHP script, but you're not; you're running it from Apache via a PHP script.
PHP web scripts should not have root access; as such, they run under the same permissions as the Apache user, www-data.
There are ways to keep PHP itself from running as root but still run a task as root. It's a little hacky, and I'm not going to share the code, but I will explain so you can look into it. Here is where to start:
http://php.net/manual/en/function.proc-open.php
With a process opened like this you can execute a command, such as your script.js via Node.js using sudo, then read stdout and stderr, wait for the password request, and provide the password by writing to stdin of that process.
Don't forget that, for this to work, the user www-data has to have a password and be in the sudoers list.
Per the OP's comment:
Due to the way sudo works, the PATH does not appear to contain the path to the Node executables (npm, node), so your best bet is building a .sh (bash script) and using sudo to run that.
You still need to monitor this process, as it will still ask for a password.
#!/bin/bash
# run the pm2 check as ec2-user; change the cd target to the path you want to run from
sudo -u ec2-user -i bash -c 'cd ~ && /usr/local/bin/pm2 -v'
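One way to wire this up (a sketch under my own assumptions: the wrapper above is saved as start-pm2.sh, a name I made up) is to install it somewhere fixed and let the web user run only that file via sudo:
# install the wrapper where sudo can be pointed at it
install -m 755 start-pm2.sh /usr/local/bin/start-pm2.sh
# allow www-data to run only this wrapper as root, without a password
echo 'www-data ALL=(root) NOPASSWD: /usr/local/bin/start-pm2.sh' > /etc/sudoers.d/start-pm2
chmod 440 /etc/sudoers.d/start-pm2
visudo -c   # sanity-check the sudoers files before relying on them
PHP would then call it with something like shell_exec('sudo /usr/local/bin/start-pm2.sh');.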
This is pretty weird, and I searched and tried everything, but I think I'm just making a dumb syntax error here.
I'm trying to run a stress test on the CPU, then immediately limit its CPU usage to 30%, all via PHP. The test is also run under another user and with a specified name so it can be limited. The stress test starts fine, but I can see the PHP file still loading, and it ends the second the stress test ends.
Here are some of the ways I tried doing it:
$output = exec('sudo runuser -l test -c "exec -a MyUniqueProcessName stress -c 1 -t 60s & cpulimit -e MyUniqueProcessName -l 30"');
$output = exec('sudo runuser -l test -c "exec -a MyUniqueProcessName stress -c 1 -t 60s > /dev/null & cpulimit -e MyUniqueProcessName -l 30"');
The whole purpose of this is that I am writing a script for a game hosting website, and I want to limit the resource consumption of each server to improve quality and keep anyone from hogging all the resources.
Basically, instead of the stress test, a game server will run.
Edit:
Here's what I have now:
I need to run stress under the user "test", but cpulimit under either the Apache user (via sudo) or root, because it requires special permissions. The stress test still starts fine, but it eats 99.9% of the CPU.
passthru('sudo runuser -l test -c "exec -a MyUniqueProcessName stress -c 1 -t 60s &" & sudo cpulimit -e MyUniqueProcessName -l 30 -i -z');
I can't see cpulimit in the process list after doing this: http://i.imgur.com/iK2nL43.png
Unfortunately, the && does more or less the opposite of what you want. :-) When you do A && B in Bash, that means, "Run command A and wait until it's done; if it succeeded, then run command B."
By contrast, A & B means, "Run command A and then immediately run command B."
So you're close to right in your command, but just getting tripped up by using two bash commands (should only need one) and the &&.
Also, did you try running each command separately, outside PHP, in two terminals? I just downloaded and built both stress and cpulimit (I assume these are the ones you're using?), ran the commands separately, and spotted a problem: cpulimit still isn't limiting the percentage.
Looking at the docs for stress, I see it works by forking child processes, so the one you're trying to CPU-limit is the parent, but that's not the one using the CPU. cpulimit --help reveals there's option -i, which includes child processes in what is limited.
This gets me to the point of being able to enter this in one terminal (first line shows input at the prompt; subsequent show output):
$> exec -a MyUniqueProcessName stress -c 1 -t 60s & cpulimit -e MyUniqueProcessName -l 30 -i
[1] 20229
MyUniqueProcessName: info: [20229] dispatching hogs: 1 cpu, 0 io, 0 vm, 0 hdd
Process 20229 found
Then, in another terminal running top, I see:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
20237 stackov+ 20 0 7164 100 0 R 30.2 0.0 0:04.38 stress
Much better. (Notice that outside the Bash shell where you aliased it with exec -a, you will see the process name as stress.) Unfortunately, I also see another issue, which is cpulimit remaining "listening" for more processes with that name. Back to cpulimit --help, which reveals the -z option.
Just to reduce the complexity a bit, you could leave the alias off and use the PID of the stress process, via the special Bash variable $!, which refers to the PID of the last process launched. Running the following in a terminal seems to do everything you want:
stress -c 1 -t 60s & cpulimit -p $! -l 30 -i -z
So now, just change the PHP script with what we've learned:
exec('bash -c "exec -a MyUniqueProcessName stress -c 1 -t 60s & cpulimit -e MyUniqueProcessName -l 30 -i -z"');
...or, simpler version:
exec('bash -c "stress -c 1 -t 60s & cpulimit -p \$! -l 30 -i -z"');
(Notice the $ in the $! had to be escaped with a backslash, \$!, because of the way it's quoted when passed to bash -c.)
Final Answer:
Based on the last example you added to your question, you'll want something like this:
passthru('bash -c "sudo -u test stress -c 1 -t 60s & sudo -u root cpulimit -p \$! -l 30 -i -z"');
When I run this with php stackoverflow-question.php, it outputs the following:
stress: info: [3374] dispatching hogs: 1 cpu, 0 io, 0 vm, 0 hdd
stress: info: [3374] successful run completed in 60s
Process 3371 found
(The last two lines only appear after the PHP script finishes, so don't be misled. Use top to check.)
Running top in another terminal during the 60 seconds the PHP script is running, I see:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
3472 test 20 0 7160 92 0 R 29.5 0.0 0:07.50 stress
3470 root 9 -11 4236 712 580 S 9.0 0.0 0:02.28 cpulimit
This is exactly what you've described wanting: stress is running under the user test, and cpulimit is running under the user root (both of which you can change in the command, if desired). stress is limited to around 30%.
I'm not familiar with runuser and don't see the applicability, since sudo is the standard way to run a process as another user. To get this to work, you may have to adjust your settings in /etc/sudoers (which will require root access, but you obviously already have that). That's entirely outside the scope of this discussion, but as an example, I added the following rules:
my-user ALL=(test) NOPASSWD:NOEXEC: /home/my-user/development/stackoverflow/stress
my-user ALL=(root) NOPASSWD:NOEXEC: /home/my-user/development/stackoverflow/cpulimit
OK, so I have an ssh connection open to a remote server. I'm running a tail on the logs, and if an ID shows up in the logs I need to do an insert into the database.
So I have my ssh tail working, and I have it piping into my grep, which is giving me the IDs I need. The next step is that, as those IDs are found, it needs to immediately kick off a PHP script.
What I thought it would look like is:
ssh -t <user>#<host> "tail -f /APP/logs/foo.log" | grep -oh "'[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]'" | php myscript.php <grep result>
And yes, my regex is horrible; I wanted to use [0-9]{8}, but what I have gets the job done.
Any suggestions? I tried looking at -exec, awk, and other tools. I could write the result to its own file and then read the new file, but that doesn't catch the streaming IDs.
Edit:
So here is what I'm using:
ssh -t <user>#<host> "tail -f /APP/logs/foo.log" |grep "^javax.ejb.ObjectNotFoundException" |awk '/[0-9]/ { system("php myscript.php "$6) }'
And if I use tail with a fixed number of lines it works, or if I hit Ctrl-C after a while, it works then too. But the behavior I wanted was for the script to kick off as soon as tail saw a bad ID, not to wait for an EOF or some tail buffer...
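If the delay comes from grep buffering its output when writing into a pipe (it only line-buffers when talking to a terminal), a line-buffered variant of the same command would look like this (untested sketch, GNU grep only):
ssh -t <user>@<host> "tail -f /APP/logs/foo.log" | grep --line-buffered "^javax.ejb.ObjectNotFoundException" | awk '/[0-9]/ { system("php myscript.php "$6) }'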
I'm having a similar problem. There's something funny with the tail -f and grep -o combination over ssh.
So on the local server, if you do
tail -f myfile.log |grep -o keyword
it greps just fine.
But if you do it from a remote server...
ssh user@server 'tail -f myfile.log |grep -o keyword'
it doesn't work.
But if you remove -f from tail or -o from grep, it works just fine... weird :-/
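If it's the same output-buffering issue (grep only flushes per line when writing to a terminal, not to the pipe ssh sets up), forcing line buffering may help; untested sketch:
ssh user@server 'tail -f myfile.log | grep --line-buffered -o keyword'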