Can't run Linux "awk" command in script from PHP - php

I have the shell script "test.sh":
#!/system/bin/sh
PID=$(ps | grep logcat | grep root |grep -v grep | awk '{print $2}')
echo "Using awk: $PID"
PID=$(ps | grep logcat | grep root |grep -v grep | cut -d " " -f 7 )
echo "Using cut: $PID"
When I run the script from PHP:
exec("su -c sh /path/to/my/script/test.sh");
I got this output:
Using awk:
Using cut: 6512
So "cut" command is work but "awk" command doesn't when I run the script from PHP, but when I run it from terminal:
# sh test.sh
both awk and cut work fine! This is what the output of "ps" looks like:
USER PID PPID VSIZE RSS WCHAN PC NAME
root 6512 5115 3044 1108 poll_sched b6e4bb0c S logcat
Did I miss something?

You should learn how to debug first
You said
So the "cut" command works but the "awk" command doesn't when I run the script from PHP, but when I run it from the terminal:
I wonder how?
It actually throws an error like the one below in the CLI:
$ php -r 'exec("su -c sh /path/to/my/script/test.sh");'
su: user /path/to/my/script/test.sh does not exist
First, use the syntax below while debugging:
// basic : stdin (0) stdout (1) stderr (2)
exec('your_command 2>&1', $output, $return_status);
// to see the response from your command
// su: user /path/to/my/script/test.sh does not exist
print_r($output);
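Applied to the command from the question, a minimal debugging sketch (using nothing beyond what the question already shows) would look like this:
// run the original command, capture everything it prints and its exit status
exec('su -c sh /path/to/my/script/test.sh 2>&1', $output, $return_status);
print_r($output);          // whatever su/sh printed, including error messages
var_dump($return_status);  // non-zero usually means the command failed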
Remember:
su gives you root permissions, but it does not change the PATH variable or the current working directory.
The operating system assumes that, in the absence of a username, the
user wants to change to a root session, and thus the user is prompted
for the root password
[akshay@localhost Desktop]$ su
Password:
[root@localhost Desktop]# pwd
/home/akshay/Desktop
[root@localhost Desktop]# exit
exit
[akshay@localhost Desktop]$ su -
Password:
[root@localhost ~]# pwd
/root
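Before chasing the awk problem further, it can help to see what environment the PHP process actually gets. A minimal sketch (which may not exist on a stripped-down Android shell, in which case try ls /system/bin/awk instead):
// print the PATH and the resolved awk binary as seen from PHP
exec('echo "PATH=$PATH"; which awk 2>&1', $output);
print_r($output); // if awk is not found here, the script cannot find it either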
Solution:
You should allow your script to be executed without a password prompt (don't use su, use sudo).
To allow the apache user to execute your script and the needed commands, add an entry like the one below to /etc/sudoers:
# which awk => gives you the awk path
# use the same path in your script as well, or else set the PATH variable
www-data ALL=NOPASSWD: /path/to/my/script/test.sh, /bin/cut, /usr/bin/awk
So it becomes:
// assuming your script is executable
exec("sudo /path/to/my/script/test.sh 2>&1", $output);
print_r($output);
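Since the underlying cause is usually the missing PATH rather than permissions, it can also help to make the search path explicit when invoking the script. A minimal sketch; the /system/bin and /system/xbin directories are assumptions for an Android shell, so substitute whatever which awk reports. If the script needs root, keep the sudo form above and hard-code the awk path inside test.sh instead:
// make the search path explicit for the invoked shell; the directory list is an assumption
exec('sh -c "export PATH=/system/bin:/system/xbin:$PATH; /path/to/my/script/test.sh" 2>&1', $output);
print_r($output);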

Related

Process php : Scanimage -L (usb)

I created a process in my PHP project to list all devices with the scanimage command line, but the output of the process is empty, unlike in a terminal:
Result in terminal :
scanimage -L | grep -v "ip="
device `fujitsu:fi-6130dj:105613' is a FUJITSU fi-6130dj scanner
Result in PHP process:
string(0) ""
And without the grep I have:
Result in terminal:
scanimage -L
device `fujitsu:fi-6130dj:105613' is a FUJITSU fi-6130dj scanner
device `hpaio:/net/HP_LaserJet_400_colorMFP_M475dw?ip=192.168.121.121' is a Hewlett-Packard HP_LaserJet_400_colorMFP_M475dw all-in-one
Result in process:
device `hpaio:/net/HP_LaserJet_400_colorMFP_M475dw?ip=192.168.121.121' is a Hewlett-Packard HP_LaserJet_400_colorMFP_M475dw all-in-one
The problem is just for USB devices.
Run sudo visudo and add: www-data ALL=(ALL) NOPASSWD: /usr/bin/scanimage
It works.
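Once the sudoers entry is in place, the PHP side could look like this minimal sketch (2>&1 is added so SANE or permission errors become visible in the output):
// run scanimage through sudo so the USB backend can reach the device; keep the ip= filter from the question
exec('sudo /usr/bin/scanimage -L 2>&1 | grep -v "ip="', $output, $status);
print_r($output);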

Selinux blocks the crontab command from php

Our server runs Fedora 25 and Apache.
I want a PHP script on our web site to be able to change crontab settings.
I created the following test php script:
<?php
system("echo '*/2 * * * * date > /var/www/logs/testlog.txt' | crontab - 2>&1");
But it did not work. I got the message:
/var/spool/cron/#tmp.mh203-95.XXXXG0KrFF: Permission denied
I looked at output of sealert -a /var/log/audit/audit.log
and found:
SELinux is preventing crontab from write access on the directory /var/spool/cron.
Okay. It sounds like apache is not allowed write access to /var/spool/cron because that directory does not have the httpd_sys_rw_content_t label.
So I executed the command:
chcon -v -R -t httpd_sys_rw_content_t /var/spool/cron
My PHP script began to work. The crontab -l command gave normal output.
But then a new problem appeared: the cron tasks were not executed.
In the /var/log/cron I saw the error:
Mar 23 18:05:01 mh203-95 crond[1653]: (apache) Unauthorized SELinux context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 file_context=system_u:object_r:httpd_sys_rw_content_t:s0 (/var/spool/cron/apache)
Mar 23 18:05:01 mh203-95 crond[1653]: (apache) FAILED (loading cron table)
After a lot of research I found that /var/spool/cron must have the user_cron_spool_t label, so I executed: chcon -v -R -t user_cron_spool_t /var/spool/cron.
The cron tasks began to work, but my PHP script stopped working again: the same problem as at the beginning.
sealert suggested the commands like:
ausearch -c 'crontab' --raw | audit2allow -M my-crontab
semodule -X 300 -i my-crontab.pp
But it did not help.
What am I missing?
How to solve the problem?
Can I somehow combine two labels user_cron_spool_t and httpd_sys_rw_content_t for /var/spool/cron directory?
I solved the problem.
The reason was this: sealert generates the same policy name, my-crontab, in all of its suggested commands, so each new policy overwrote the previous one.
You just need to change the name slightly each time.
So I executed:
ausearch -c 'crontab' --raw | audit2allow -M my-crontab
semodule -X 300 -i my-crontab.pp
ausearch -c 'crontab' --raw | audit2allow -M my-crontab2
semodule -X 300 -i my-crontab2.pp
ausearch -c 'crontab' --raw | audit2allow -M my-crontab3
semodule -X 300 -i my-crontab3.pp
...
Before every ausearch run I executed:
echo -n "" > /var/log/audit/audit.log
then ran my PHP script, and then:
sealert -a /var/log/audit/audit.log
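For completeness, a hedged sketch of checking from PHP whether the crontab call actually succeeded (exit status plus the resulting crontab), which makes each iteration of the loop above easier to verify:
// install the entry, then read back the crontab and the exit status
exec("echo '*/2 * * * * date > /var/www/logs/testlog.txt' | crontab - 2>&1", $out, $rc);
exec('crontab -l 2>&1', $list);
print_r(array($rc, $out, $list));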

Starting FOREVER or PM2 as WWW-DATA from a PHP script

I have a nodejs script named script.js.
var util = require('util');
var net = require("net");
process.on("uncaughtException", function(e) {
console.log(e);
});
var proxyPort = "40000";
var serviceHost = "1.2.3.4";
var servicePort = "50000";
net.createServer(function (proxySocket) {
var connected = false;
var buffers = new Array();
var serviceSocket = new net.Socket();
serviceSocket.connect(parseInt(servicePort), serviceHost);
serviceSocket.pipe(proxySocket).pipe(serviceSocket);
proxySocket.on("error", function (e) {
serviceSocket.end();
});
serviceSocket.on("error", function (e) {
console.log("Could not connect to service at host "
+ serviceHost + ', port ' + servicePort);
proxySocket.end();
});
proxySocket.on("close", function(had_error) {
serviceSocket.end();
});
serviceSocket.on("close", function(had_error) {
proxySocket.end();
});
}).listen(proxyPort);
I am running it normally with nodejs script.js, but now I want to include forever or pm2 functionality as well. When I am root, everything works smoothly:
chmod -R 777 /home/nodejs/forever/;
-- give rights
watch -n 0.1 'ps ax | grep forever | grep -v grep'
-- watch forwarders (where I see if a forever is open)
/usr/local/bin/forever -d -v --pidFile "/home/nodejs/forever/file.pid" --uid 'file' -p '/home/nodejs/forever/' -l '/home/nodejs/forever/file.log' -o '/home/nodejs/forever/file.log' -e '/home/nodejs/forever/file.log' -a start /etc/dynamic_ip/nodejs/proxy.js 41789 1.2.3.4:44481 414 file
-- open with forever
forever list
-- it is there, I can see it
forever stopall
-- kill them all
The problem is when I want to run the script from a PHP script with the system or exec functions:
sudo -u www-data /usr/local/bin/forever -d -v --pidFile "/home/nodejs/forever/file.pid" --uid 'file' -p '/home/nodejs/forever/' -l '/home/nodejs/forever/file.log' -o '/home/nodejs/forever/file.log' -e '/home/nodejs/forever/file.log' -a start /etc/dynamic_ip/nodejs/proxy.js 41789 1.2.3.4:44481 414 file
-- open as www-data (or I can do this just by accessing `http://1.2.3.4/test.php`, it is the same thing)
forever list
-- see if it is there, and it is not (I see it in watch)
forever stopall
-- says no forever is opened
kill PID_ID
-- the only way is to kill it by PID ... and on another server all of this works very well, I can create and kill forevers from a PHP script when accessing it from the web ... I don't know why
-- everything is in /etc/sudoers including /usr/local/bin/forever
Why is that? How can I solve this?
I also tried a trick: I created a user 'forever2' and a script.sh with this content:
sudo su forever2 user123; /usr/local/bin/forever -d -v --pidFile "/home/nodejs/forever/file.pid" --uid 'file' -p '/home/nodejs/forever/' -l '/home/nodejs/forever/file.log' -o '/home/nodejs/forever/file.log' -e '/home/nodejs/forever/file.log' -a start /etc/dynamic_ip/nodejs/proxy.js 41789 1.2.3.4:44481 414 file;
where user123 does not exist; it is just a trick to exit the shell after execution. The script works, forever runs, and I can close all forevers with forever stopall as root. When I try the same thing by running http://1.2.3.4/test.php or as the www-data user, I cannot close it from root or from www-data, so not even this works.
I tried from Ubuntu 14.04.3 LTS, Ubuntu 14.04 LTS , Debian GNU/Linux 8 ... still the same thing.
Any ideas?
Thanks.
If you are starting the process from within Apache or the web server, you are already running as the www-data user, so doing a sudo su to the user context you already have is likely not necessary.
When you start this forever task you may also need to close the terminals/inputs and send it directly to the background. Something like this:
// Assemble command
$cmd = '/usr/bin/forever';
$cmd.= ' -d -v --pidfile /tmp/my.pid'; // add other options
$cmd.= ' start';
$cmd.= ' /etc/dynamic_ip/nodejs/proxy.js';
// "magic" to get details
$cmd.= ' 1>/tmp/output.log 2>&1'; // STDOUT to file, then STDERR to the same place
$cmd.= ' &'; // Send whole task to background.
system($cmd);
Now, there won't be any output here but you should have something in /tmp/output.log which could show why forever failed, or the script crashed.
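A small sketch of reading that log back from the same PHP request, after a short delay so the background task has had time to write something:
// give the background task a moment, then show whatever it logged
sleep(1);
if (is_readable('/tmp/output.log')) {
    echo file_get_contents('/tmp/output.log');
}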
If you've sometimes run the script as root and then try the same command as www-data, you may also be running into permission problems on one or more files/directories created during the execution as root, which now conflict when running as www-data.
This is part of PHP security: you say you're running it from a PHP script, but you're not; you're running it from Apache via a PHP script.
PHP web scripts should not have root access; as such, they run under the same permissions as the Apache user, www-data.
There are ways to keep PHP from running as root while still running a task as root, but it's a little hacky, so I'm not going to share the code; I will explain it so you can look into it. Here is where to start:
http://php.net/manual/en/function.proc-open.php
With a process like this you can execute a proc (like your script.js via nodejs) using sudo, then read stdout and stderr, wait for the password request, and provide it by writing to stdin for that process.
Don't forget that for this to work the www-data user has to have a password and be in the sudoers list.
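A rough, generic sketch of that pattern (not the answerer's code; the command, user and password below are placeholders, and sudo's -S flag is used so the password can simply be written to stdin):
// descriptor spec: stdin (0), stdout (1), stderr (2)
$descriptors = array(
    0 => array('pipe', 'r'),  // we write the sudo password here
    1 => array('pipe', 'w'),
    2 => array('pipe', 'w'),  // sudo prompts on stderr
);
$proc = proc_open('sudo -S -u someuser /usr/local/bin/forever list', $descriptors, $pipes);
if (is_resource($proc)) {
    fwrite($pipes[0], "www-data-password\n"); // placeholder password
    fclose($pipes[0]);
    echo stream_get_contents($pipes[1]);
    echo stream_get_contents($pipes[2]);
    fclose($pipes[1]);
    fclose($pipes[2]);
    proc_close($proc);
}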
Per the OP's comment:
Due to the way sudo works, the PATH does not appear to contain the path to the node executables (npm, node), so you're best off building a .sh (bash script) and using sudo to run that.
You still need to monitor this process, as it will still ask for a password.
#!/bin/bash
sudo -u ec2-user -i
# change this to the path you want to run from
cd ~
/usr/local/bin/pm2 -v
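The PHP side of that would then be something along these lines (the wrapper path is a placeholder; it must also be listed for www-data in /etc/sudoers):
// call the wrapper through sudo; /path/to/pm2-wrapper.sh is a placeholder name
exec('sudo /path/to/pm2-wrapper.sh 2>&1', $output, $status);
print_r($output);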

Tail -f | Grep <regex> | php script.php <grep result>

Ok, so I have an ssh connection open to a remote server. I'm running a tail on the logs, and if an ID shows up in the logs I need to do an insert into the database.
So I have my ssh tail working and piping into my grep, which is giving me the IDs I need. The next step is that, as those IDs are found, it needs to immediately kick off a PHP script.
What I thought it would look like is:
ssh -t <user>@<host> "tail -f /APP/logs/foo.log" | grep -oh "'[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]'" | php myscript.php <grep result>
And yes, my regex is horrible; I wanted to use [0-9]{8}, but what I have gets the job done.
Any suggestions? I tried looking at -exec, awk, and other tools. I could write the result to its own file and then read the new file, but that doesn't catch the streaming IDs.
-=-=-=-=-EDIT-=-=-=-=-=-
So here is what I'm using:
ssh -t <user>@<host> "tail -f /APP/logs/foo.log" | grep "^javax.ejb.ObjectNotFoundException" | awk '/[0-9]/ { system("php myscript.php "$6) }'
And if I use tail -l #lines it works, or if I press Ctrl-C after a while, it then works. The behavior I wanted, though, was for the script to kick off and fix a bad ID as soon as the tail sees it, not to wait for an EOF or for some tail buffer to fill...
I'm having a similar problem. There's something funny with the tail -f and grep -o combination over ssh.
So on the local server, if you do
tail -f myfile.log | grep -o keyword
it greps just fine.
But if you do it from a remote server...
ssh user@server 'tail -f myfile.log | grep -o keyword'
doesn't work.
But if you remove -f from tail or -o from grep, it works just fine... weird :-/
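One hedged sketch that sidesteps both issues: run grep with --line-buffered (a GNU grep option) so matches are flushed as they arrive, and have myscript.php read the IDs from STDIN instead of expecting them as an argument. insertIntoDatabase() below is a placeholder for whatever insert logic the script already has:
// myscript.php -- consume streamed IDs from STDIN, one per line
// pipeline: ssh -t <user>@<host> "tail -f /APP/logs/foo.log" | grep --line-buffered -o "[0-9]\{8\}" | php myscript.php
while (($line = fgets(STDIN)) !== false) {
    $id = trim($line);
    if ($id !== '') {
        insertIntoDatabase($id); // placeholder for the real insert
    }
}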

Really daemonize a command when using exec()?

From the PHP pages of my Apache server, I run some commands using a line like:
exec("{$command} >> /tmp/test.log 2>&1 & echo -n \$!");
You can see an explanation of the arguments here.
But I don't understand something: if I restart or stop my Apache server, my command dies too.
root@web2:/sx/temp# ps ax | grep 0ff | grep -v grep
15957 ? S 0:38 /usr/bin/php /sx/site_web_php/fr_FR/app/console task:exec /sx/temp/task_inventaire/ 0ff79bf690dcfdf788fff26c259882e2d07426df 10800
root@web2:/sx/temp# /etc/init.d/apache2 restart
Restarting web server: apache2 ... waiting ..
root@web2:/sx/temp# ps ax | grep 0ff | grep -v grep
root@web2:/sx/temp#
After some research, I read some things about parent PIDs, but by using & inside my command line, I thought I was really detaching my child process from its parent.
I am using apache2 with libapache2-mod-php5 and apache2-mpm-prefork.
How can I really detach my child processes from Apache?
edit
You can reproduce it on Linux or Mac this way:
a) create an executed_script.php file that contains:
<?php
sleep(10);
b) create an execute_from_http.php file that contains:
<?php
exec("php executed_script.php > /tmp/test.log 2>&1 & echo -n \$!");
c) run http://localhost/path/execute_from_http.php
d) in a terminal, run the command:
ps axjf | grep execute | grep -v grep ; sudo /etc/init.d/apache2 restart ; ps axjf | grep execute | grep -v grep
If you run the command during the 10 seconds of the execute_from_http.php script, you'll get this output:
php@beast:/var/www/xxx/$ ps axjf | grep execute | grep -v grep ; sudo /etc/init.d/apache2 restart ; ps axjf | grep execute | grep -v grep
1 5257 5245 5245 ? -1 S 33 0:00 php executed_script.php
* Restarting web server apache2
... waiting ...done.
php@beast:/var/www/xxx/$
As you can see, the ps command produces output only once; this tells you that the executed script died when Apache restarted.
The "at" method
I found a working solution, but I don't know whether it is OK in terms of performance and security. It uses the at command, a kind of cron that runs a job only once.
Instead of:
exec("php executed_script.php > /dev/null 2>&1 & echo -n \$!");
Use:
exec("echo 'php executed_script.php > /dev/null 2>&1' | at now -M");
The key is that executed_script.php will be run by an external daemon (atd), so executed_script.php will be a child of atd and not a child of Apache.
php@beast:/var/www/xxx$ ps axjf | grep execute | grep -v grep ; sudo /etc/init.d/apache2 restart ; ps axjf | grep execute | grep -v grep
7032 7033 973 973 ? -1 SN 33 0:00 \_ php executed_script.php
* Restarting web server apache2
... waiting ...done.
7032 7033 973 973 ? -1 SN 33 0:00 \_ php executed_script.php
php@beast:/var/www/xxx$ ps ax | grep 973
973 ? Ss 0:00 atd
Note several things:
you can't access the PID of the app you ran; if you grab $! as in my previous snippets, you'll get the PID of at (a sketch of capturing the at job number instead follows these notes)
you need to remove www-data from /etc/at.deny, where it is listed by default (it is probably there for a reason, so take care)
I have serious doubts about performance: I think at writes to a file that atd reads to communicate
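If you still want some handle on the job, a sketch like this captures the at job number from its output (at prints a line like "job N at <date>" on stderr), which can later be cancelled with atrm; note it is not the PID of the PHP process:
// capture at's "job N at ..." message (printed on stderr) to get the job number
exec("echo 'php executed_script.php > /dev/null 2>&1' | at now -M 2>&1", $output);
print_r($output); // parse the job number from the "job N at ..." line if needed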
The fork / setsid method
As @hek2mgl wrote in his own answer, we can use pcntl_fork(), but it's not as simple as that. First, you can't run pcntl_fork() behind Apache, because if we look at the PHP manual's introduction to Process Control, we can see:
Process Control should not be enabled within a web server environment
and unexpected results may happen if any Process Control functions are
used within a web server environment.
When a fork is made, you get two exact copies of the parent process in memory. And because PHP behind Apache runs as a module, at the end of the PHP execution (even after a die()), you come back to Apache's module wrapper, and you can't control what goes on there.
So here is the scenario, with an intermediate command that daemonizes your execution:
1) From Apache, you run the intermediate command that will create your daemonized command:
$command = escapeshellarg("php executed_script.php");
exec("php run_as_daemon.php {$command} >> /dev/null 2>&1 &");
2) The intermediate command forks and uses posix_setsid() to really detach your command:
<?php
if (!isset($argv[1]))
{
    exit;
}
$command = $argv[1];
$pid = pcntl_fork();
if ($pid < 0) // error
    exit;
else if ($pid) // parent
    exit;
else // child
{
    $sid = posix_setsid(); // creates a daemon
    if ($sid < 0)
        exit;
    exec("{$command} >> /dev/null 2>&1 &");
}
3) Your executed command, of course, doesn't change:
<?php
sleep(10);
Result:
php@beast:/var/www/xxx/$ wget -qO- http://localhost/xxx/execute_from_http.php && sleep 1 && ps axjf | grep execute | grep -v grep ; sudo /etc/init.d/apache2 restart ; ps axjf | grep execute | grep -v grep
1 19958 19956 19956 ? -1 S 33 0:00 php executed_script.php
* Restarting web server apache2 ......done.
1 19958 19956 19956 ? -1 S 33 0:00 php executed_script.php
First, note that the '&' in your example is just a boolean AND that concatenates the command and the echo. If you want to start the command in the background, meaning that exec will return immediately, use the & at the very end of the command line:
exec("{$command} >> /tmp/test.log 2>&1 & echo -n \$! &");
If you want the process to keep running after Apache has finished, you'll have to daemonize it using pcntl_fork().
Here is an example:
$pid = pcntl_fork();
switch ($pid) {
    case -1:
        die('Error while forking');
    case 0: // daemon code
        posix_setsid(); // create new process group
        exec("{$command} >> /tmp/test.log 2>&1 & echo -n \$!");
        break;
    default:
        echo 'daemon started';
        break;
}
Now there is no code in the starting PHP script that handles the return value of exec or its output, so the current process can finish before exec has finished. The worker process will be owned by init after this.
You can also have a look at the PEAR package System_Daemon, which can help daemonize a script.
