I wrote my first bash script, which checks folders for changes using inotify and then starts some actions. The whole process is run by nohup as a background process.
The folder is the destination of several data loggers, which push files in zip format via FTP into different subfolders. The bash script unzips the files and afterwards starts a PHP script, which processes the content of the zip files.
My problem: sometimes the bash script gives me errors like the following:
- No zipfiles found.
- unzip: cannot find zipfile...
This shouldn't happen, because the files exist and I can run the same command in a terminal without errors. I had the same problem before, when I accidentally ran the script multiple times, so I guess this is somehow causing the problem.
I tried to manage the problem with a PID file, which is located in my home dir. For some reason, it still runs two instances of the bash script. If I try to run another instance, it shows the warning "Process already running" as it's supposed to (see program code). When I kill the process of the second instance manually (kill $$), it restarts after a while and again there are two instances of the process running.
#!/bin/bash
PIDFILE=/home/PIDs/myscript.pid

if [ -f $PIDFILE ]
then
  PID=$(cat $PIDFILE)
  ps -p $PID > /dev/null 2>&1
  if [ $? -eq 0 ]
  then
    echo "Process already running"
    exit 1
  else
    ## Process not found, assume not running
    echo $$ > $PIDFILE
    if [ $? -ne 0 ]
    then
      echo "Could not create PID file"
      exit 1
    fi
  fi
else
  echo $$ > $PIDFILE
  if [ $? -ne 0 ]
  then
    echo "Could not create PID file"
    exit 1
  fi
fi

while true
do
  inotifywait -q -r -e move -e create --format %w%f /var/somefolder | while read FILE
  do
    dir=$(dirname $FILE)
    filename=${FILE:$((${#dir}+1))}
    if [[ "$filename" == *.zip ]]
    then
      unzip $FILE
      php somephpscript $dir
    fi
  done
done
The output of ps -ef looks like this:
UID PID PPID C STIME TTY TIME CMD
root 1439 1433 0 11:19 pts/0 00:00:00 /bin/bash /.../my_script
root 3488 1439 0 15:10 pts/0 00:00:00 /bin/bash /.../my_script
As you can see, the second instance's parent PID is the script itself.
EDIT: I changed the bash script as recommended by Fred. The source code now looks like this:
#!/bin/bash
PIDFILE=/home/PIDs/myscript.pid

if [ -f $PIDFILE ]
then
  PID=$(cat $PIDFILE)
  ps -p $PID > /dev/null 2>&1
  if [ $? -eq 0 ]
  then
    echo "Process already running"
    exit 1
  else
    ## Process not found, assume not running
    echo $$ > $PIDFILE
    if [ $? -ne 0 ]
    then
      echo "Could not create PID file"
      exit 1
    fi
  fi
else
  echo $$ > $PIDFILE
  if [ $? -ne 0 ]
  then
    echo "Could not create PID file"
    exit 1
  fi
fi

while read -r FILE
do
  dir=$(dirname $FILE)
  filename=${FILE:$((${#dir}+1))}
  if [[ "$filename" == *.zip ]]
  then
    unzip $FILE
    php somephpscript $dir
  fi
done < <(inotifywait -q -m -r -e move -e create --format %w%f /var/somefolder)
Output of ps -ef still shows two instances:
UID PID PPID C STIME TTY TIME CMD
root 7550 7416 0 15:59 pts/0 00:00:00 /bin/bash /.../my_script
root 7553 7550 0 15:59 pts/0 00:00:00 /bin/bash /.../my_script
root 7554 7553 0 15:59 pts/0 00:00:00 inotifywait -q -m -r -e move -e create --format %w%f /var/somefolder
You are seeing two lines in the ps output and assume this means your script was launched twice, but that is not the case.
You pipe inotifywait into a while loop (which is OK). What you may not realize is that, by doing so, you cause Bash to create a subshell to execute the while loop. That subshell is not a full copy of the whole script.
If you kill that subshell, because of the while true loop, it gets recreated instantly. Please note that inotifywait has a --monitor option; I have not studied your script in enough detail, but maybe you could do away with the external while loop by using it.
There is another way to write the loop that will not eliminate the subshell but has other advantages. Try something like:
while IFS= read -r FILE
do
BODY OF THE LOOP
done < <(inotifywait --monitor OTHER ARGUMENTS)
The first < indicates input redirection, and the <( ) syntax indicates "execute this command, pipe its output to a FIFO, and give me the FIFO path so that I can redirect from this special file to feed its output to the loop".
You can get a feel for what I mean by doing:
echo <(cat </dev/null)
You will see that the argument that echo sees when using that syntax is a file name, probably something like /dev/fd/XX.
There is one MAJOR advantage to getting the loop out of the subshell: the loop executes in the main shell scope, so any change to variables you perform in the loop can be seen outside the loop once it terminates. It may not matter much here but, mark my words, you will come to appreciate the enormous difference it makes in many, many situations.
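To make this concrete, here is a minimal sketch (the variable name and sample data are made up) comparing the two loop forms; only the process-substitution version keeps the counter:

#!/bin/bash
count=0
# Loop fed by process substitution: the loop body runs in the main shell,
# so the increments are still visible after "done".
while IFS= read -r line
do
    count=$((count + 1))
done < <(printf '%s\n' one two three)
echo "process substitution: count=$count"   # prints 3

count=0
# Same loop fed by a pipe: the loop body runs in a subshell,
# so the increments are lost once the pipeline ends.
printf '%s\n' one two three | while IFS= read -r line
do
    count=$((count + 1))
done
echo "pipe: count=$count"                   # prints 0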
To illustrate what happens with the subshell, here is a small code snippet:
while IFS= read -r line
do
  echo Main $$ $BASHPID
  echo $line
done < <(echo Subshell $$ $BASHPID)
Special variable $$ contains the main shell PID, and special variable BASHPID contains the PID of the current subshell (or the main shell PID if no subshell was launched). You will see that the main shell PID is the same in the loop and in the process substitution, but BASHPID changes, illustrating that a subshell is launched. I do not think there is a way to get rid of that subshell.
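As an aside, if you also want a more robust single-instance guard than the hand-rolled PID file, flock(1) from util-linux is worth a look. A minimal sketch, assuming /var/lock is writable by the script's user (the lock file name is just an example):

#!/bin/bash
LOCKFILE=/var/lock/myscript.lock

# Open the lock file on descriptor 200 and try to take an exclusive,
# non-blocking lock; a second instance will fail here and exit.
exec 200>"$LOCKFILE"
if ! flock -n 200
then
    echo "Process already running"
    exit 1
fi

# ... rest of the script; the lock is released automatically when the script exits.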
Related
I am seeing very strange behavior with a script that is supposed to mount an encrypted partition when it is executed by PHP (5.6). (And the Fat-Free Framework, but that's not the problem.)
First, to prevent any permission problem, I've added the following line in visudo:
www-data ALL=(ALL) NOPASSWD:ALL
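As an aside, since the script only calls sudo for umount, cryptsetup and mount, a rule scoped to those commands also works and is less of a risk; the binary paths below are assumptions (check them with which on your system), and the drop-in should be validated with visudo -cf:

# /etc/sudoers.d/usb-mount (hypothetical drop-in file)
www-data ALL=(root) NOPASSWD: /bin/mount, /bin/umount, /sbin/cryptsetup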
My shell script is this one:
#!/bin/sh

# Verify the command line args number (only one is admitted).
if [ $# -ne 1 ]; then
  echo "usage: $0 <passphrase>" >&2
  exit 1
fi

# If we run as root, ignore 'sudo' prefix.
if [ $(id -u) -eq 0 ]
then
  SUDO=""
else
  SUDO="sudo"
fi
echo $SUDO

# Get the block device name.
block_dev=$(dmesg | grep 'Attached SCSI' | tail -n1 | tr -d '[]' | awk '{print $4}')
if [ "$block_dev" = "" ]
then
  echo "Unable to find a USB key" >&2
  echo "KO"
  exit 1
fi

# Umount already mounted USB mass storage partition if any.
${SUDO} umount "/dev/${block_dev}"? >/dev/null 2>&1

# Check the block device is really here.
if [ ! -b "/dev/$block_dev" ]
then
  echo "USB key removed." >&2
  echo "KO"
  exit 1
fi

# Check the second partition exists.
if [ ! -b "/dev/${block_dev}2" ]
then
  echo "Invalid USB key partitioning." >&2
  echo "KO"
  exit 1
fi

# Default value for the mapping name if not provided as environment parameter.
MAPPING_NAME="${MAPPING_NAME-cryptoblock}"
echo $MAPPING_NAME

# Umount already mounted mapper if any.
${SUDO} umount "/dev/mapper/$MAPPING_NAME" >/dev/null 2>&1

# Check if the mapper already exists.
if [ -b "/dev/mapper/$MAPPING_NAME" ]
then
  # Remove the crypto loopback.
  ${SUDO} cryptsetup close "$MAPPING_NAME"
fi

echo crypto lock

# Establish the crypto loopback.
printf "$1" | ${SUDO} cryptsetup --key-file=- --cipher aes-cbc-plain open "/dev/${block_dev}2" --type=plain "$MAPPING_NAME"
if [ $? -ne 0 ]
then
  echo "Invalid encryption" >&2
  echo "KO"
  exit 1
fi

# Default value for the mount point if not provided as environment parameter.
MOUNT_POINT="${MOUNT_POINT-/mnt/usb}"
echo $MOUNT_POINT

# Mount the second (encrypted) partition.
${SUDO} mount -t vfat /dev/mapper/"$MAPPING_NAME" "${MOUNT_POINT}" -o "rw,nosuid,nodev,shortname=mixed,uid=1000"
if [ $? -ne 0 ]
then
  ${SUDO} cryptsetup close "$MAPPING_NAME"
  echo "Invalid passphrase" >&2
  echo "KO"
  exit 1
fi

# Everything seems correct.
sleep 0.5
echo "OK"
exit 0
And it works perfectly when I execute it directly from a shell. ls /mnt/usb/ gives me the folder content, and I can read files and write to them.
Now I have a PHP script which contains the following:
$cmd = './mount_usb_lock.sh ' . $this->f3->get('badge.AES') .' 2>&1';
$ret = exec($cmd, $output);
$badge = file_get_contents('/mnt/usb/file.lck');
$this->console_log($badge);
And the result is my file's content.
Later, I try to update the file content from php:
$badge = '\rmember_id: '. $member_id;
file_put_contents('/mnt/usb/file.lck', $badge, FILE_APPEND);
And I get the following error:
Internal Server Error
file_put_contents(/mnt/usb/file.lck): failed to open stream: Permission denied
I also tried to write it from a shell script executed by PHP, but the result is the same.
What is strange is that when I execute ls /mnt/usb in a shell, I don't see anything, as if the partition were not mounted.
But file_get_contents() successfully got the file, so what's wrong here?
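For debugging, it might help to compare what the interactive shell and the web server's PHP process actually see, since they can run as different users (the mount above uses uid=1000) and, on some setups, even in different mount namespaces. A rough diagnostic sketch:

# From an interactive shell:
id
ls -ld /mnt/usb
grep /mnt/usb /proc/self/mounts

# The same three commands run from PHP, e.g. dropped in temporarily:
# echo shell_exec('id; ls -ld /mnt/usb; grep /mnt/usb /proc/self/mounts');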
My supervisor log is empty and I don't know what to do to populate it.
I have a simple test bash script
#!/usr/bin/env bash
echo "STARTING SCRIPT"
while read LINE
do
  echo ${LINE} 2>&1
done < "${1:-/dev/stdin}"
exit 0
And my supervisor.conf
[unix_http_server]
file=/tmp/supervisor.sock
[supervisord]
logfile=/var/log/supervisor/supervisord.log
logfile_maxbytes=50MB
logfile_backups=10
loglevel=info
pidfile=/var/run/supervisord.pid
nodaemon=false
minfds=1024
minprocs=200
umask=022
nocleanup=false
[rpcinterface:supervisor]
supervisor.rpcinterface_factory=supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=unix:///tmp/supervisor.sock
[program:print-lines]
command=gearman -h 127.0.0.1 -p 4730 -w -f print-lines -- /var/www/html/myapp/bash/printlines.sh 2>&1
process_name=print-lines-worker
numprocs=1
startsecs = 0
autorestart = false
startretries = 1
user=root
stdout_logfile=/var/www/html/myapp/var/log/supervisor_print_lines_out.log
stdout_logfile_maxbytes=10MB
stderr_logfile=/var/www/html/myapp/var/log/supervisor_print_lines_err.log
stderr_logfile_maxbytes=10MB
Then I execute this job via a PHP script:
$gmClient= new GearmanClient();
$gmClient->addServer();
# run reverse client in the background
$jobHandle = $gmClient->doBackground("print-lines", $domains);
var_dump($jobHandle);
So what happens is the following. The job gets executed:
myapp-dev: /var/www/html/myapp $ gearadmin --status
print-lines 1 1 1
But both log files are empty... I would at least expect "STARTING SCRIPT" to be written somewhere, but everything is empty.
What am I doing wrong? Am I checking the wrong log file?
If you need any additional information, please let me know and I will provide it. Thank you.
I found a solution, or at least a workaround. What I did is the following:
My supervisor.conf command now looks like this:
command=gearman -h 127.0.0.1 -p 4730 -w -f print-lines -- bash -c "/var/www/html/myApp/bash/printlines.sh > /var/www/html/myApp/var/log/printlines.log 2>&1"
So the shell now writes a log file when the shell script gets executed.
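An equivalent approach, if you would rather not change the supervisor command line, is to do the redirection inside the script itself; a sketch (the log path is an assumption):

#!/usr/bin/env bash
# Send everything this script writes (stdout and stderr) to a log file.
exec >> /var/www/html/myApp/var/log/printlines.log 2>&1

echo "STARTING SCRIPT"
while read -r LINE
do
    echo "${LINE}"
done < "${1:-/dev/stdin}"
exit 0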
I came across a couple of issues with my QNAP NAS TS-251+ whilst developing a new project. These are:
1) There is no php alias, and when I add one via the command line it is removed on NAS restart.
2) A similar thing happens for Composer, except that on restart Composer is also removed from the system.
How can I stop this from happening, or get around it, so that when my NAS restarts the php and composer aliases are already set?
I managed to resolve this issue by adding a new script that runs when my NAS starts up. QNAP have provided some basic instructions on how to add a startup script on their wiki page under Running Your Own Application at Startup. However, I added a couple more steps.
These steps are fairly basic:
1. Log in to your NAS server via SSH.
2. Run the following command: mount $(/sbin/hal_app --get_boot_pd port_id=0)6 /tmp/config (running ls /tmp/config will give you something similar to below).
3. Run vi /tmp/config/autorun.sh; this will allow you to edit/create a file called autorun.sh. (*)
4. For me, I wanted to keep this file as simple as possible so I didn't have to change it much, so the script is just called from within this shell script. So add the following to autorun.sh.
autorun.sh code example:
#!/bin/sh
# autorun script for Turbo NAS
/share/CACHEDEV1_DATA/.qpkg/autorun/autorun_startup.sh start
exit 0
You will notice the path /share/CACHEDEV1_DATA/.qpkg/autorun/; this is where the new script that I want to run is kept. You don't have to put yours here if you don't want to; however, I know the script will not be removed if placed here. autorun_startup.sh is the name of the script I want to run, and start is the command in the script I want to run.
5. Run chmod +x /tmp/config/autorun.sh to make sure that autorun.sh is actually runnable.
6. Save the file and run umount /tmp/config (important).
7. Navigate to the folder you have put in autorun.sh (in my case /share/CACHEDEV1_DATA/.qpkg/autorun/) and create any folders along the way that you need.
8. Create your new shell file using vi and call it whatever you want (again, in my case it is called autorun_startup.sh) and add your script to the file. The script I added is below, but you can add whatever you want to your startup script. (*)
autorun_startup.sh code example:
#!/bin/sh
RETVAL=0
QPKG_NAME="autorun"
APACHE_ROOT=`/sbin/getcfg SHARE_DEF defWeb -d Qweb -f /etc/config/def_share.info`
QPKG_DIR=$(/sbin/getcfg $QPKG_NAME Install_Path -f /etc/config/qpkg.conf)

addPHPAlias() {
  /bin/cat /etc/profile | /bin/grep "php" | /bin/grep "/usr/local/apache/bin/php" 1>>/dev/null 2>>/dev/null
  [ $? -ne 0 ] && /bin/echo "alias php='/usr/local/apache/bin/php'" >> /etc/profile
}

addComposerAlias() {
  /bin/cat /etc/profile | /bin/grep "composer" | /bin/grep "/usr/local/bin/composer" 1>>/dev/null 2>>/dev/null
  [ $? -ne 0 ] && /bin/echo "alias composer='/usr/local/bin/composer'" >> /etc/profile
}

addPHPComposerAlias() {
  /bin/cat /etc/profile | /bin/grep "php-composer" | /bin/grep "/usr/local/apache/bin/php /usr/local/bin/composer" 1>>/dev/null 2>>/dev/null
  [ $? -ne 0 ] && /bin/echo "alias php-composer='php /usr/local/bin/composer'" >> /etc/profile
}

download_composer() {
  curl -sS https://getcomposer.org/installer | /usr/local/apache/bin/php -- --install-dir=/usr/local/bin --filename=composer
}

case "$1" in
  start)
    /bin/echo "Enable PHP alias..."
    /sbin/log_tool -t 0 -a "Enable PHP alias..."
    addPHPAlias
    /bin/echo "Downloading Composer..."
    /sbin/log_tool -t 0 -a "Downloading Composer..."
    download_composer
    /bin/echo "Enable composer alias..."
    /sbin/log_tool -t 0 -a "Enable composer alias..."
    addComposerAlias
    /bin/echo "Adding php composer alias..."
    /sbin/log_tool -t 0 -a "Adding php composer alias..."
    addPHPComposerAlias
    /bin/echo "Use it: php-composer"
    /sbin/log_tool -t 0 -a "Use it: php-composer"
    ;;
  stop)
    ;;
  restart)
    ;;
  *)
    echo "Usage: $0 {start|stop|restart}"
    exit 1
    ;;
esac

exit $RETVAL
9. Run chmod +x /share/CACHEDEV1_DATA/.qpkg/autorun/autorun_startup.sh to make sure your script is runnable.
10. Restart your NAS system to make sure the script has been run. After the restart, for my script I just ran php -version via the terminal to make sure that the php alias worked, and it did (see the quick check sketched below).
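A slightly fuller post-restart check might look like this (just a sketch; it assumes the aliases were appended to /etc/profile as in the script above and that you are in a fresh SSH login shell):

# Confirm the alias lines made it into /etc/profile
grep "alias php=" /etc/profile
grep "alias composer=" /etc/profile

# Then exercise the aliases themselves
php -v
php-composer --version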
(*) For steps 3 and 8 you can either do this via something like WinSCP or continue doing it via the command line (SSH). I chose to do it via WinSCP, but the vi commands given in those steps are what you would run over SSH.
I am fairly new to server-related stuff, so if anyone has a better way, cool.
I've been at this for a few days now, on and off, while working on other sections of my project. Here is my PHP code:
echo "playing";
header("HTTP/1.1 200 OK");
exec('./omx-start.sh "' . $full . '" > /dev/null 2>&1 &');
die();
I've also put the exec in my die like:
die(exec('nohup ./omx-start.sh "' . $full . '" > /dev/null 2>&1 &'));
I've also tried adding nohup (like above)
Content of omx-start.sh:
ps cax | grep "omxplayer" > /dev/null
if [ $? -eq 0 ]; then
  sudo killall omxplayer && sudo killall omxplayer.bin
fi
echo $1
if [ -e "playing" ]
then
  rm "playing"
fi
mkfifo "playing"
nohup omxplayer -b -o hdmi "$1" > /dev/null 2>&1 &
I've also added nohup and & as the control operator.
It SHOULD fork off into a subshell.
I can do this easily with Python, or with any other language actually.
Am I really going to have to make my PHP script call a Python script that runs the omx-start.sh script? Or is there actually a good way to fork PHP scripts or force them to stop loading?
My die(); SOMETIMES triggers as well: if I do die("test"); I can see it, sometimes. And the page STILL hangs (keeps loading), even though the PHP process is freed up to take other requests at that time... but the page... still... hangs. What?
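For what it's worth, one thing that can keep a page spinning in this kind of setup is the child process holding on to a file descriptor inherited from the web server. A sketch of a more aggressively detached launch inside omx-start.sh (assuming setsid from util-linux is available) would be:

# Start omxplayer in its own session with no inherited stdin/stdout/stderr.
setsid nohup omxplayer -b -o hdmi "$1" </dev/null >/dev/null 2>&1 &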
I need to make a background script that is spawned by a PHP command-line script and that echoes to the SSH session. Essentially, I need to do this Linux command:
script/path 2>&1 &
If I just run this command in linux, it works great. Output is still displayed to the screen, but I can still use the same session for other commands. However, when I do this in PHP, it doesn't work the same way.
I've tried:
`script/path 2>&1 &`;
exec("script/path 2>&1 &");
system("script/path 2>&1 &")
...And none of these work. I need it to spawn the process, and then kill itself so that I can free up the session, but I still want the output from the child process to print to the screen.
(please comment if this is unclear... I had a hard time putting this into words :P)
I came up with a solution that works in this case.
I created a wrapper bash script that starts up the PHP script, which in turn spawns a child script that has its output redirected to a file, which the bash script wrapper tails.
Here is the bash script I came up with:
#!/bin/bash
# Run the PHP script, forwarding any arguments (e.g. -stop) to it.
php php_script.php "$@"
# Clean up any "tail" processes left over from previous runs.
ps -ef | grep php_script.log | grep -v grep | awk '{print $2}' | xargs kill > /dev/null 2>&1
if [ "$1" != "-stop" ]
then
  tail -f php_script.log -n 0 &
fi
(it also cleans up "tail" processes that are still running so that you don't get a gazillion processes when you run this bash script multiple times)
And then in your PHP script, you spawn the child script like this:
exec("php php_script.php >> php_script.log &");
This way the parent PHP script exits without killing the child script, you still get the output from the child script, and your command prompt is still available for other commands.
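Usage, assuming the wrapper above is saved as run_php_script.sh (the name is just for illustration):

chmod +x run_php_script.sh

# Start: runs php_script.php, then tails php_script.log in the background,
# so output shows up in the SSH session while the prompt stays usable.
./run_php_script.sh

# Stop: passes -stop through to the PHP script and skips starting a new tail;
# any leftover tail processes are killed either way.
./run_php_script.sh -stop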