Restart named/BIND service via cron job/bash in PHP

I simply want to restart named depending on whether a file exists. I've been stuck on this all day.
Command to create bash file:
$this->execute('echo -e "#!/bin/bash\nsudo /sbin/service named reload" >> /var/reload_named.sh');
Here is my cronjob:
*/1 * * * * cronjob: sudo sh /var/reload_named.sh; rm -f /var/reload_named.sh;
Here is what happens when the cronjob runs (/var/log/cron):
Jul 30 18:34:01 digitalocean CROND[24864]: (root) CMD (cronjob: sudo sh /var/reload_named.sh; rm -f /var/reload_named.sh )
Jul 30 18:34:01 digitalocean CROND[24862]: (root) UNSAFE ("example#digitalocean.com")
For some reason it says it is UNSAFE. I've tried running with and without sudo.
It manages to delete the file but not restart named. I have tried so many other methods to get this to work.
I've tried (over lots of Googling):
Running exec('service named restart') in php
Creating a .c file and adding a user that runs it from php
Running service named restart directly in crontab -e
Attempted different variations on running it with sudo
Tried adding apache user to sudo (Still fails)
Any help is much appreciated.
(I am on CentOS 6.7.)

I finally worked out a way to do this. Here is a method which SSHs back into the same server as root and runs the service command:
$this->root_execute('service named reload');
public function root_execute($command = '')
{
    set_include_path('/path/to/dir/ssh/');
    require_once('Net/SSH2.php');

    $ssh = new Net_SSH2(SSH_HOST);
    if (!$ssh->login(SSH_USER, SSH_PASS)) {
        exit('failed');
    }
    $res = $ssh->exec($command);
    $ssh = null;

    restore_include_path();
    return $res;
}
(Unfortunately this doesn't work for restarting httpd itself if the .php is being served over HTTP.)
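For reference, the more conventional route on CentOS is a sudoers rule rather than SSH-ing back into the box. Below is a minimal sketch, assuming a rule added with visudo such as apache ALL=(root) NOPASSWD: /sbin/service named reload and with the "Defaults requiretty" option disabled for that user; both of those are assumptions, not details taken from the question.

<?php
// Sketch only: assumes the sudoers rule above and that requiretty is off,
// otherwise sudo will refuse to run without a terminal.
$output = [];
$status = 1;
exec('sudo /sbin/service named reload 2>&1', $output, $status);
if ($status !== 0) {
    error_log('named reload failed: ' . implode("\n", $output));
}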

Related

Running cron on container startup prevents PHP script from showing webpage

I'm attempting to construct a monitoring web service using Docker & PHP.
I have a PHP script that, when accessed, performs a check to see if a number of services are running (by checking the HTTP headers of the services' endpoints).
I have a functions.inc.php file and an index.php file
Here is my index.php file
<?php
header("Access-Control-Allow-Origin: *");
header("Content-type: application/json");
require('functions.inc.php');
//connect to MySQL DB
include("conn.php");

//Log file - check if it exists - create one if not
$file = "/var/www/html/logs/log".date("Y-m-d").".txt";
if (!file_exists($file)) {
    echo "Log file for today does not exist... creating and writing logfile\n";
    $logdata = "Monitoring log for: ".date("Y/m/d")."\n";
    file_put_contents($file, $logdata, FILE_APPEND);
} else {
    echo "Log file for today exists\n";
}

//service.csv can be configured to add
//or remove services to the monitoring
//Open service config csv
//Check if service.CSV is openable
$services = fopen("/var/www/html/service.CSV", 'r');
if ($services) {
    //Read first line
    fgetcsv($services, 1000, ',');
    //loop through each service - perform check and write result to file
    while (($value = fgetcsv($services, 1000, ',')) !== FALSE) {
        $service = $value[0];
        if (servicecheck($service)) {
            $response = "[".$service." - is running]\n";
        } else {
            $response = "[".$service." - is not running]\n";
        }
        echo $response;
        file_put_contents($file, date("[h:i:sa ")."]".$response, FILE_APPEND);
    }
} else {
    die("Unable to open file");
}
I can currently run the script by building and running it through Docker, making it accessible on localhost port 80.
Here is my initial Dockerfile:
FROM php:7.2-apache
COPY src/ /var/www/html/
RUN docker-php-ext-install mysqli && docker-php-ext-enable mysqli
RUN chmod 777 /var/www/html
This all works as intended. I can access the service through localhost:80 on my browser.
However, I want these service checks to run even when the page isn't accessed.
I found that cron would be the appropriate solution for this, and I tried to implement it. My new Dockerfile looks like this:
FROM php:7.2-apache
COPY src/ /var/www/html/
RUN docker-php-ext-install mysqli && docker-php-ext-enable mysqli
RUN chmod 777 /var/www/html
RUN apt-get update && apt-get -y install cron
# Copy hello-cron file to the cron.d directory
COPY hello-cron /etc/cron.d/hello-cron
# Give execution rights on the cron job
RUN chmod 0644 /etc/cron.d/hello-cron
# Apply cron job
RUN crontab /etc/cron.d/hello-cron
RUN chmod 777 /etc/cron.d/hello-cron
RUN touch /var/log/cron.log
CMD cron && tail -f /var/log/cron.log
This does work. In the Docker terminal I can see the script running every minute; it outputs to the terminal what until now has been output to the browser. However, now I can't access it through localhost:80 to make the PHP script run manually.
Here's my cron file (hello-cron) for reference:
* * * * * /usr/local/bin/php /var/www/html/index.php >> /var/log/cron.log 2>&1
# An empty line is required at the end of this file for a valid cron file.
I believe the issue lies with cron, as it's the final line CMD cron && tail -f /var/log/cron.log that makes the page inaccessible.
Is there any solution for this? I'm relatively new to it all, so I'm sure there is something I'm missing, but I can't find anything that helps.
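A CMD in a Dockerfile replaces the base image's default command (apache2-foreground in php:7.2-apache), so with the cron/tail CMD Apache is never started. A minimal sketch of one way to run both, assuming the stock image, is to launch cron as a background daemon and keep Apache as the foreground process:

# replace the final line of the Dockerfile above:
# start the cron daemon, then hand the foreground over to Apache
CMD cron && apache2-foreground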

How to allow www-data to execute ping?

I'm trying to execute a ping command as the user www-data:
$command = 'ping -c 4 www.stackoverflow.com 2>&1';
$result = shell_exec($command);
But I always get ping: icmp open socket: Operation not permitted.
So I tried to allow the command by running visudo and adding this line:
www-data ALL = NOPASSWD: /bin/ping
Then I restarted apache2 and tried it again, but I still get Operation not permitted.
How can I solve this?
Using setuid, which causes ping to be executed as the owner of the binary (root) rather than as the user who launches the command (here www-data), is an old way to solve this problem, and not the best one today.
See this post. Recent Linux distributions use kernel capabilities to solve this. Run getcap /bin/ping; it should return: /bin/ping = cap_net_raw+ep.
If not, you can manually set the capabilities. Run, as root:
# setcap cap_net_raw+ep /bin/ping
Or, more elegantly, reinstall the appropriate package. On Debian distributions and derivatives, this is iputils-ping.
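From the PHP side, exec() exposes the command's exit status, which makes it easy to tell a permission failure apart from an unreachable host. A quick sketch (the target host is only illustrative):

<?php
// Sketch: run ping as the web-server user and inspect the exit code.
$output = [];
$status = 1;
exec('ping -c 4 www.stackoverflow.com 2>&1', $output, $status);
if ($status === 0) {
    echo "ping succeeded\n";
} else {
    // Without cap_net_raw on /bin/ping this prints "Operation not permitted"
    // rather than a timeout.
    echo "ping failed (exit $status): " . implode("\n", $output) . "\n";
}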

Starting FOREVER or PM2 as WWW-DATA from a PHP script

I have a nodejs script named script.js.
var util = require('util');
var net = require("net");

process.on("uncaughtException", function(e) {
    console.log(e);
});

var proxyPort = "40000";
var serviceHost = "1.2.3.4";
var servicePort = "50000";

net.createServer(function (proxySocket) {
    var connected = false;
    var buffers = new Array();
    var serviceSocket = new net.Socket();
    serviceSocket.connect(parseInt(servicePort), serviceHost);
    serviceSocket.pipe(proxySocket).pipe(serviceSocket);
    proxySocket.on("error", function (e) {
        serviceSocket.end();
    });
    serviceSocket.on("error", function (e) {
        console.log("Could not connect to service at host "
            + serviceHost + ', port ' + servicePort);
        proxySocket.end();
    });
    proxySocket.on("close", function(had_error) {
        serviceSocket.end();
    });
    serviceSocket.on("close", function(had_error) {
        proxySocket.end();
    });
}).listen(proxyPort);
I am running it normally with nodejs script.js, but now I want to include forever or pm2 functionality as well. When I am root everything works smoothly:
chmod -R 777 /home/nodejs/forever/;
-- give rights
watch -n 0.1 'ps ax | grep forever | grep -v grep'
-- watch forwarders (where I see if a forever is open)
/usr/local/bin/forever -d -v --pidFile "/home/nodejs/forever/file.pid" --uid 'file' -p '/home/nodejs/forever/' -l '/home/nodejs/forever/file.log' -o '/home/nodejs/forever/file.log' -e '/home/nodejs/forever/file.log' -a start /etc/dynamic_ip/nodejs/proxy.js 41789 1.2.3.4:44481 414 file
-- open with forever
forever list
-- it is there, I can see it
forever stopall
-- kill them all
The problem is when I want to run the script from a PHP script with the system or exec functions:
sudo -u www-data /usr/local/bin/forever -d -v --pidFile "/home/nodejs/forever/file.pid" --uid 'file' -p '/home/nodejs/forever/' -l '/home/nodejs/forever/file.log' -o '/home/nodejs/forever/file.log' -e '/home/nodejs/forever/file.log' -a start /etc/dynamic_ip/nodejs/proxy.js 41789 1.2.3.4:44481 414 file
-- open as www-data (or I can do this just by accessing `http://1.2.3.4/test.php`, it is the same thing)
forever list
-- see if it is there, and it is not (I see it in watch)
forever stopall
-- says no forever is opened
kill PID_ID
-- the only way is to kill it by PID ... and on another server all of this works very well, I can create and kill forevers from a PHP script when accessing it from the web ... I don't know why
-- everything is in /etc/sudoers including /usr/local/bin/forever
Why is that? How can I solve this?
I also tried a trick: I created a user 'forever2' and a script.sh with this content:
sudo su forever2 user123; /usr/local/bin/forever -d -v --pidFile "/home/nodejs/forever/file.pid" --uid 'file' -p '/home/nodejs/forever/' -l '/home/nodejs/forever/file.log' -o '/home/nodejs/forever/file.log' -e '/home/nodejs/forever/file.log' -a start /etc/dynamic_ip/nodejs/proxy.js 41789 1.2.3.4:44481 414 file;
where user123 does not exist; it is just a trick to exit the shell after execution. The script works and runs forever; I can close all forevers with the command forever stopall from root. When I try the same thing by running http://1.2.3.4/test.php or as the www-data user, I cannot close it from root or www-data, so not even this works.
I tried this on Ubuntu 14.04.3 LTS, Ubuntu 14.04 LTS, Debian GNU/Linux 8 ... still the same thing.
Any ideas?
Thanks.
If you are starting the process from within Apache or the web server, you are already running as the www-data user, so doing a sudo su to the user context you already have is likely not necessary.
When you start this forever task you may also need to close the terminal/input streams and send it directly to the background. Something like this:
// Assemble command
$cmd = '/usr/bin/forever';
$cmd.= ' -d -v --pidfile /tmp/my.pid'; // add other options
$cmd.= ' start';
$cmd.= ' /etc/dynamic_ip/nodejs/proxy.js';
// "magic" to get details
$cmd.= ' 1>/tmp/output.log 2>&1'; // STDOUT to the log file, then STDERR to the same place
$cmd.= ' &'; // Send whole task to background.
system($cmd);
Now, there won't be any output here, but you should have something in /tmp/output.log which could show why forever failed or why the script crashed.
If you've sometimes been running the script as root and then try the same command as www-data, you may also be running into permission problems on one or more files/directories created by the root runs, which now conflict when running as www-data.
This is part of PHP security: you say you're running it from a PHP script, but really you're running it from Apache via a PHP script.
PHP web scripts should not have root access; as such they run under the same permissions as the Apache user, www-data.
There are ways to keep PHP from running as root while still running a task as root, but it's a little hacky. I'm not going to share the code, but I will explain so you can look into it. Here is where to start:
http://php.net/manual/en/function.proc-open.php
With a process like this you can execute a command, such as your script.js via nodejs using sudo, then read stdout and stderr, wait for the password prompt, and provide the password by writing to stdin for that process.
Don't forget that for this to work the user www-data has to have a password and be in the sudoers list.
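A minimal sketch of that proc_open approach; the forever command and $sudoPassword are placeholders, not something from the question:

<?php
// Sketch only: $sudoPassword and the command are placeholders.
$descriptors = [
    0 => ['pipe', 'r'], // stdin  - the sudo password is written here
    1 => ['pipe', 'w'], // stdout
    2 => ['pipe', 'w'], // stderr
];
$cmd = 'sudo -S /usr/local/bin/forever list'; // -S makes sudo read the password from stdin
$proc = proc_open($cmd, $descriptors, $pipes);
if (is_resource($proc)) {
    fwrite($pipes[0], $sudoPassword . "\n"); // answer the password prompt
    fclose($pipes[0]);
    $stdout = stream_get_contents($pipes[1]);
    $stderr = stream_get_contents($pipes[2]);
    fclose($pipes[1]);
    fclose($pipes[2]);
    $exitCode = proc_close($proc);
}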
Per the OP's comment:
Due to the way sudo works, the PATH does not appear to contain the path to the node executables (npm, node), so you're best off building a .sh (bash script) and using sudo to run that.
You still need to monitor this process, as it will still ask for a password.
#!/bin/bash
# change the cd target to the path you want to run from, then run pm2
# as ec2-user via a login shell so its PATH picks up the node executables
sudo -u ec2-user -i bash -c 'cd ~ && /usr/local/bin/pm2 -v'

Execute php file from bash script as www-data using crontab

I am trying to run a PHP file every night at a certain time using crontab; however, the PHP needs to run as www-data because of the directory permissions. To run it as www-data I am using the root crontab and changing the user in there, like so:
* 20 * * * sudo -u www-data /usr/bin/env TERM=xterm /path/to/dailyProc.sh
dailyProc.sh is as follows:
today=`date +"%d%m%y"`
year=`date +"%y"`
dm=`date +"%m%d"`
`tar -zxf /path/to/input/$today.tgz -C /path/to/output`
echo "starting data proc"
`/usr/bin/php5 -f /path/to/dataproc.php date=$dm year=$year`
echo "data proc done"
All other commands in dailyProc.sh work, but the PHP doesn't run. The PHP uses an output buffer and writes it to a file, which works fine when called from the command line but doesn't work when called by cron.
I can also definitely run dailyProc.sh from the command line as www-data using
sudo -u www-data dailyProc.sh
and everything works as expected.
Is there any reason I would not be able to run this php file in dailyProc.sh using crontab when everything else in it works?
Cron can be run per user too.
crontab -u www-data -e
This works for me:
* 20 * * * su - www-data -c "/path/to/dailyProc.sh"
You do not need to use su or sudo in a crontab entry, because in the system crontab (/etc/crontab or /etc/cron.d) the 6th column is the user name anyway. And you don't need to start a terminal, because you won't see it anyway. Hence, the following should do:
* 20 * * * www-data /path/to/dailyProc.sh
The Syntax error: word unexpected… you mentioned in a comment appears to be inside your code. Try running the script from the command line and start debugging from there.
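For cron-side debugging it also helps to capture the job's own output somewhere you can read it; a sketch based on the entry above (the log path is just an example):

* 20 * * * www-data /path/to/dailyProc.sh >> /tmp/dailyProc.log 2>&1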
To do this I used curl inside dailyProc.sh
today=`date +"%d%m%y"`
year=`date +"%y"`
dm=`date +"%m%d"`
`tar -zxf /path/to/input/$today.tgz -C /path/to/output`
echo "starting data proc"
`/usr/bin/curl "myserver.com/dataproc.php?date=$dm&year=$year"`
echo "data proc done"

Running command-line application from PHP as specific user

I am running Apache on my localhost. From a PHP script run as www-user I would like to control Rhythmbox playback on my machine. So far I have a simple command in my PHP script:
exec('rhythmbox-client --pause');
This works great when I run it from the command-line as me, but if it runs as www-user I guess rhythmbox-client doesn't know/can't access my instance of Rhythmbox.
Is there an easy way for that PHP script to run as my user rather than www-user, or to tell rhythmbox-client which instance to control?
The overall application is that when my phone goes off-hook it calls my PHP script which pauses music, and resumes playback when the phone is on-hook. I love VoIP phones!
Solution:
Thanks to Carpetsmoker and Tarek I used sudo as the answer, but there were a couple of problems. To overcome them I did the following:
Created a bash script to call rhythmbox-client. This bash script was executed using sudo in PHP as described in the answer below. Unfortunately rhythmbox-client didn't know what environment to control, so the bash script looks like this:
#! /bin/bash
DBUS_ADDRESS=`grep -z DBUS_SESSION_BUS_ADDRESS /proc/*/environ 2> /dev/null| sed 's/DBUS/\nDBUS/g' | tail -n 1`
if [ "x$DBUS_ADDRESS" != "x" ]; then
export $DBUS_ADDRESS
/usr/bin/rhythmbox-client --pause
fi
Now that bash script can be executed by PHP and wwwuser, and my phone can pause/play my music!
One solution is using sudo(8):
exec('sudo -u myuser ls /');
You will, obviously, need to set up sudo(8) to allow the user running your web server to invoke it. Editing the sudoers file with visudo(8), you can use something like:
wwwuser ALL=/usr/bin/rhythmbox-client
This prevents Apache from being able to run anything other than the rhythmbox command.
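With an entry along those lines (adding the target user and NOPASSWD if needed, e.g. wwwuser ALL=(myuser) NOPASSWD: /usr/bin/rhythmbox-client, which is an assumption rather than something from the answer above), the PHP call might look like:

<?php
// Sketch: assumes the sudoers entry lets wwwuser run rhythmbox-client
// as myuser without a password.
exec('sudo -u myuser /usr/bin/rhythmbox-client --pause', $output, $status);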
In my case, the solution came this way:
Added these lines to the sudoers file:
myuser ALL=(ALL) NOPASSWD: /usr/bin/prlctl
_www ALL=(ALL) NOPASSWD: /usr/bin/prlctl # IMPORTANT!!!
The exec() call in PHP was changed to:
exec("sudo -u myuser prlctl list -a", $out, $r);
If a process can be run by any user, it can be run by PHP. An example is the fortune command:
-rwxr-xr-x 1 root root 18816 Oct 1 2009 /usr/games/fortune
Look at the x permission for every user. But this sometimes doesn't work at all, and you may have to let the user (www-data or apache, etc.) run the program. You can sudo to www-data and try to run the command; if it works, then Apache/PHP should be able to run it.
