is there another way to separate a process from php exec? - php

I've been at this on and off for a few days now, while working on other sections of my project. My PHP looks like this:
echo "playing";
header("HTTP/1.1 200 OK");
exec('./omx-start.sh "' . $full . '" > /dev/null 2>&1 &');
die();
I've also tried putting the exec inside my die, like:
die(exec('nohup ./omx-start.sh "' . $full . '" > /dev/null 2>&1 &'));
I've also tried adding nohup (as above).
Content of omx-start.sh:
#!/bin/bash
ps cax | grep "omxplayer" > /dev/null
if [ $? -eq 0 ]; then
sudo killall omxplayer && sudo killall omxplayer.bin
fi
echo $1
if [ -e "playing" ]
then
rm "playing"
fi
mkfifo "playing"
nohup omxplayer -b -o hdmi "$1" > /dev/null 2>&1 &
I've also tried adding nohup and the & control operator at the end; it SHOULD fork off into a subshell.
I can do this easily with Python, or with any other language, actually. Am I really going to have to make my PHP script call a Python script that runs omx-start.sh? Or is there actually a good way to fork PHP scripts, or force them to stop loading?
My die(); SOMETIMES triggers as well: if I do die("test"); I can sometimes see it, yet the page STILL hangs (loading). The PHP process is freed up to take other requests at that point, but the page still hangs. What?
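For what it's worth, one pattern that does detach reliably on Linux is to give the worker its own session with setsid (util-linux) and sever all three standard streams, so no pipe back to PHP/Apache stays open. A minimal sketch, with sleep standing in for ./omx-start.sh:

```shell
#!/bin/sh
# Detach a worker: setsid puts it in a new session, and redirecting
# stdin/stdout/stderr means nothing ties it back to the caller.
# "sleep 30" is a stand-in for ./omx-start.sh "$full".
setsid sh -c 'exec sleep 30' </dev/null >/dev/null 2>&1 &
echo "control returned immediately"
```

From PHP the equivalent would be passing the whole line through exec(); the `</dev/null` is the piece most snippets above omit, and an inherited stdin can be enough to keep the request hanging.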

Related

shell_exec won't stop even though add new shell_exec to stop the other one

I've got a PHP script that needs to run a .sh file using shell_exec:
echo shell_exec('sh /var/www/html/daloradius/start.sh > /dev/null 2>/dev/null &');
I just dump it into the background. This is my start.sh:
sudo tcpdump port 1812 -w testing.pcap
We know that tcpdump listens all the time, so I tried to resolve this (stop the tcpdump process) with a button that triggers another shell_exec, which runs stop.sh:
pid=$(ps aux | grep "sudo tcpdump" | head -1 | cut -d '.' -f 1 | cut -d ' ' -f 7)
sudo kill $pid
stop.sh works fine when I test it in the CLI, but when I click the button that triggers start.sh and then try to stop it with the button that triggers stop.sh, it doesn't work: tcpdump won't stop. When I try to stop it from the CLI using stop.sh, it works well. Can anybody give me a solution to force-stop tcpdump? Thank you.
You are trying to use bash when you should be orchestrating the process from PHP.
Here, we get the PID of the command and kill it from PHP. Replace the sleep statement with whatever code you have.
<?php
# Script must be run with sudo to start tcpdump
# Be security conscious when running ANY code here
$pcap_file = 'testing.pcap';
$filter = 'port 1812';
$command = "tcpdump $filter -w $pcap_file" . ' > /dev/null 2>&1 & echo $!;';
$pid = (int)shell_exec($command);
echo "[INFO] $pid tcpdump: Writing to $pcap_file\n";
# Some important code. Using sleep as a stand-in.
shell_exec("sleep 5");
echo "[INFO] $pid tcpdump: Ending capture\n";
shell_exec("kill -9 $pid");
Please note that tcpdump has the -c option to stop after n packets are received, and you can rotate files with -G. You may want to read tcpdump's manpage to get the most out of it.
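The `& echo $!` idiom that PHP snippet relies on can be exercised on its own; here sleep stands in for tcpdump, which needs root:

```shell
#!/bin/sh
# Background a long-running stand-in for tcpdump and capture its PID,
# which is exactly what shell_exec($command) hands back to PHP above.
pid=$(sh -c 'sleep 30 >/dev/null 2>&1 & echo $!')
kill -0 "$pid" && echo "capture running as $pid"
kill "$pid"   # the PHP script does this later via shell_exec("kill -9 $pid")
```

Capturing the PID at launch time avoids the fragile `ps aux | grep | cut` parsing in stop.sh entirely.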

Unix Bash Script starts multiple times even with PID File

I wrote my first bash script, which checks folders for changes with inotifywait and starts some actions. The whole thing is run via nohup as a background process.
The folder is the destination for several data loggers, which push files in zip format via FTP into different subfolders. The bash script unzips the files and afterwards starts a PHP script, which processes the content of the zip files.
My Problem: Sometimes the bash script gives me errors like the following:
- No zipfiles found.
- unzip: cannot find zipfile...
This shouldn't happen, because the files exist, and I can run the same command in a terminal without errors. I had the same problem before, when I accidentally ran the script multiple times, so I guess this is somehow causing the problem.
I tried to manage the problem with a PID file, which is located in my home dir. For some reason, it still runs two instances of the bash script. If I try to run another instance, it shows the warning "Process already running" as it's supposed to (see program code). When I kill the process of the second instance manually (kill $$), it restarts after a while, and again there are two instances of the process running.
#!/bin/bash
PIDFILE=/home/PIDs/myscript.pid
if [ -f $PIDFILE ]
then
PID=$(cat $PIDFILE)
ps -p $PID > /dev/null 2>&1
if [ $? -eq 0 ]
then
echo "Process already running"
exit 1
else
## Process not found assume not running
echo $$ > $PIDFILE
if [ $? -ne 0 ]
then
echo "Could not create PID file"
exit 1
fi
fi
else
echo $$ > $PIDFILE
if [ $? -ne 0 ]
then
echo "Could not create PID file"
exit 1
fi
fi
while true;
do inotifywait -q -r -e move -e create --format %w%f /var/somefolder | while read FILE
do
dir=$(dirname $FILE)
filename=${FILE:$((${#dir}+1))}
if [[ "$filename" == *.zip ]];
then
unzip $FILE
php somephpscript $dir
fi
done
done
The output of ps -ef looks like this:
UID PID PPID C STIME TTY TIME CMD
root 1439 1433 0 11:19 pts/0 00:00:00 /bin/bash /.../my_script
root 3488 1439 0 15:10 pts/0 00:00:00 /bin/bash /.../my_script
As you can see, the second instance's parent PID is the script itself.
EDIT: I changed the bash script as recommended by Fred. The source code now looks like this:
#!/bin/bash
PIDFILE=/home/PIDs/myscript.pid
if [ -f $PIDFILE ]
then
PID=$(cat $PIDFILE)
ps -p $PID > /dev/null 2>&1
if [ $? -eq 0 ]
then
echo "Process already running"
exit 1
else
## Process not found assume not running
echo $$ > $PIDFILE
if [ $? -ne 0 ]
then
echo "Could not create PID file"
exit 1
fi
fi
else
echo $$ > $PIDFILE
if [ $? -ne 0 ]
then
echo "Could not create PID file"
exit 1
fi
fi
while read -r FILE
do
dir=$(dirname $FILE)
filename=${FILE:$((${#dir}+1))}
if [[ "$filename" == *.zip ]];
then
unzip $FILE
php somephpscript $dir
fi
done < <(inotifywait -q -m -r -e move -e create --format %w%f /var/somefolder)
Output of ps -ef still shows two instances:
UID PID PPID C STIME TTY TIME CMD
root 7550 7416 0 15:59 pts/0 00:00:00 /bin/bash /.../my_script
root 7553 7550 0 15:59 pts/0 00:00:00 /bin/bash /.../my_script
root 7554 7553 0 15:59 pts/0 00:00:00 inotifywait -q -m -r -e move -e create --format %w%f /var/somefolder
You are seeing two lines in the ps output and assume this means your script was launched twice, but that is not the case.
You pipe inotifywait into a while loop (which is OK). What you may not realize is that, by doing so, you cause Bash to create a subshell to execute the while loop. That subshell is not a full copy of the whole script.
If you kill that subshell, because of the while true loop, it gets recreated instantly. Please note that inotifywait has a --monitor option; I have not studied your script in enough detail, but maybe you could do away with the outer while loop by using it.
There is another way to write the loop that will not eliminate the subshell but has other advantages. Try something like:
while IFS= read -r FILE
do
BODY OF THE LOOP
done < <(inotifywait --monitor OTHER ARGUMENTS)
The first < indicates input redirection, and the <( ) syntax indicates "execute this command, pipe its output to a FIFO, and give me the FIFO path so that I can redirect from this special file to feed its output to the loop".
You can get a feel for what I mean by doing:
echo <(cat </dev/null)
You will see that the argument that echo sees when using that syntax is a file name, probably something like /dev/fd/XX.
There is one MAJOR advantage to getting rid of the subshell : the loop executes in the main shell scope, so any change in variables you perform in the loop can be seen outside the loop once it terminates. It may not matter much here but, mark my words, you will come to appreciate the enormous difference it makes in many, many situations.
To illustrate what happens with the subshell, here is a small code snippet :
while IFS= read -r line
do
echo Main $$ $BASHPID
echo $line
done < <(echo Subshell $$ $BASHPID)
Special variable $$ contains the main shell PID, and special variable BASHPID contains the current subshell (or the main shell PID if no subshell was launched). You will see that the main shell PID is the same in the loop and in the process substitution, but BASHPID changes, illustrating that a subshell is launched. I do not think there is a way to get rid of that subshell.
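The scope difference described above is easy to verify: the same counting loop keeps its result with process substitution and loses it with a pipe.

```shell
#!/bin/bash
# Count lines two ways. With < <( ) the loop body runs in the main
# shell, so $count survives past the loop; with a pipe the loop runs
# in a subshell and the main shell's $count is never touched.
count=0
while IFS= read -r line; do
    count=$((count + 1))
done < <(printf 'a\nb\nc\n')
echo "process substitution: $count"    # 3

count=0
printf 'a\nb\nc\n' | while IFS= read -r line; do
    count=$((count + 1))
done
echo "pipe: $count"                    # still 0
```

(This requires bash; process substitution is not POSIX sh.)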

Getting a process to fork and be independent of parent in Apache + PHP + Bash

I have a bash script that calls liquidsoap like so
/bin/sh -c "echo \$\$ > \"${sdir}/pid/${sfile}.pid\" && exec liquidsoap \"${sdir}/liq/${sfile}.liq\" >/dev/null 2>&1 || rm \"${sdir}/pid/{$sfile}.pid\"" &
(For readability, it might look like this with variables filled in)
/bin/sh -c "echo \$\$ > \"/radio/pid/station.pid\" && exec liquidsoap \"/radio/liq/station.liq\" >/dev/null 2>&1 || rm \"/radio/pid/station.pid\"" &
In PHP, the script is called with
return shell_exec("{$this->streamBase}/scripts/{$this->streamName} start config {$stationConfig}");
My problem is, I just had to restart Apache, and when I did, it also killed the liquidsoap instances. I would like to get them to run fully independently of Apache, such that I could restart Apache and they would keep running.
I'm not sure how I can achieve that.
EDIT:
I've tried changing
/bin/sh -c "echo \$\$ > \"${sdir}/pid/${sfile}.pid\" && exec liquidsoap \"${sdir}/liq/${sfile}.liq\" >/dev/null 2>&1 || rm \"${sdir}/pid/{$sfile}.pid\"" &
to
(/bin/sh -c "echo \$\$ > \"${sdir}/pid/${sfile}.pid\" && exec liquidsoap \"${sdir}/liq/${sfile}.liq\" >/dev/null 2>&1 || rm \"${sdir}/pid/{$sfile}.pid\"" & ) &
and
nohup /bin/sh -c "echo \$\$ > \"${sdir}/pid/${sfile}.pid\" && exec liquidsoap \"${sdir}/liq/${sfile}.liq\" >/dev/null 2>&1 || rm \"${sdir}/pid/{$sfile}.pid\"" &
Neither keep liquidsoap running if I restart (or stop/start) Apache. When Apache stops, so do those processes.
For an exit code to be propagated up the chain, the parents and grandparents must exist; if you kill the grandparent, aka Apache, then yes, you kill the children and grandchildren too, unless they leave the family and become daemons.
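"Leaving the family" concretely means starting the worker in its own session and letting its immediate parent exit, so init adopts it and it no longer dies with Apache. A sketch, with sleep standing in for the liquidsoap launcher:

```shell
#!/bin/sh
# Double detach: the inner setsid gives the worker its own session;
# the surrounding ( ... & ) subshell exits at once, so the worker is
# re-parented to init. Restarting the original parent no longer
# affects it. "sleep 30" stands in for the liquidsoap command.
( setsid sh -c 'exec sleep 30' </dev/null >/dev/null 2>&1 & )
echo "launcher finished"
```

The stream redirections matter as much as setsid: any file descriptor still shared with Apache can keep the worker tied to its lifecycle.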

php - How to exec in background multiple bash command

I'm trying to run two bash commands in the background using the exec function.
$action[] = "/path/script1 par1 > log1 2>&1";
...
$action[] = "/path/script2 par2 > log2 2>&1";
...
$cmd = "( " .implode(' && ',$action). " ) &";
exec($cmd);
script1 and script2 are bash scripts.
Scripts are properly executed and their output is correctly saved in their log files, but my application hangs. It seems that the last & doesn't work.
I have already tried:
$cmd = "bash -c \"" .implode(' && ',$action). "\" &";
If I run a single command, it works in the background.
I captured $cmd and if I run:
( /path/script1 par1 > log1 2>&1 && /path/script2 par2 > log2 2>&1 ) &
from the command line, it works in the background.
I'm using Apache/2.0.52 and PHP/5.2.0 on Red Hat Enterprise Linux AS release 3 (Taroon Update 2)
The answer is hidden in the PHP exec documentation:
If a program is started with this function, in order for it to continue running in the background, the output of the program must be redirected to a file or another output stream. Failing to do so will cause PHP to hang until the execution of the program ends.
Add a redirection to the top level of your command line:
exec( "bash -c \"( ./script1 par1 > log1 2>&1 && ./script2 par2 > log2 2>&1 ) \" >/dev/null 2>&1 & " );
and without bash -c:
exec( "( ./script1 par1 > log1 2>&1 && ./script2 par2 > log2 2>&1 ) >/dev/null 2>&1 & " );
Tested with PHP 5.3.3-7 (command line invocation): Program hangs before adding the redirect operators, and terminates afterwards.
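The same wait can be reproduced in plain shell: command substitution, like PHP's exec(), reads the child's output pipe until EOF, so a backgrounded group that still holds the pipe keeps the caller blocked.

```shell
#!/bin/sh
# Without an outer redirect, the backgrounded group inherits the
# command substitution's pipe, so $( ) blocks until sleep exits --
# the same wait PHP's exec() performs on the child's output.
v=$( ( sleep 2 ) & echo fast )           # takes ~2 seconds

# Redirecting the group before backgrounding releases the pipe,
# so the caller gets EOF immediately.
v=$( ( sleep 2 ) >/dev/null 2>&1 & echo fast )
echo "$v"
```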

php exec command (or similar) to not wait for result

I have a command I want to run, but I do not want PHP to sit and wait for the result.
<?php
echo "Starting Script";
exec('run_baby_run');
echo "Thanks, Script is running in background";
?>
Is it possible to have PHP not wait for the result, i.e. just kick it off and move along to the next command?
I can't find anything, and I'm not sure it's even possible. The best I could find was someone making a cron job to start in a minute.
From the documentation:
In order to execute a command and have it not hang your PHP script while it runs, the program you run must not output back to PHP. To do this, redirect both stdout and stderr to /dev/null, then background it:
> /dev/null 2>&1 &
In order to execute a command and have it spawned off as another process that is not dependent on the Apache thread to keep running (it will not die if somebody cancels the page), run this:
exec('bash -c "exec nohup setsid your_command > /dev/null 2>&1 &"');
You can run the command in the background by adding a & at the end of it as:
exec('run_baby_run &');
But doing this alone will hang your script because:
If a program is started with exec function, in order for it to continue running in the background, the output of the program must be redirected to a file or another output stream. Failing to do so will cause PHP to hang until the execution of the program ends.
So you can redirect the stdout of the command to a file, if you want to see it later or to /dev/null if you want to discard it as:
exec('run_baby_run > /dev/null &');
This uses wget to notify a URL of something without waiting.
$command = 'wget -qO- http://test.com/data=data';
exec('nohup ' . $command . ' >> /dev/null 2>&1 & echo $!', $pid);
This uses ls to update a log without waiting.
$command = 'ls -la > content.log';
exec('nohup ' . $command . ' >> /dev/null 2>&1 & echo $!', $pid);
I know this question has been answered, but the answers I found here didn't work for my scenario (or for Windows).
I am using a Windows 10 laptop with PHP 7.2 in XAMPP v3.2.4.
$command = 'php Cron.php send_email "'. $id .'"';
if ( substr(php_uname(), 0, 7) == "Windows" )
{
//windows
pclose(popen("start /B " . $command . " 1> temp/update_log 2>&1 &", "r"));
}
else
{
//linux
shell_exec( $command . " > /dev/null 2>&1 &" );
}
This worked perfectly for me.
I hope it will help someone on Windows. Cheers.
There are two possible ways to implement it.
The easiest way is to direct the result to /dev/null:
exec("run_baby_run > /dev/null 2>&1 &");
But in case you have any other operations to be performed, you may consider ignore_user_abort.
In this case the script will be running even after you close connection.
"exec nohup setsid your_command"
nohup allows your_command to continue even though the process that launched it may terminate first. Without it, the SIGHUP signal would be sent to your_command, causing it to terminate (unless it catches that signal and ignores it).
On Windows, you may use the COM object:
if(class_exists('COM')) {
$shell = new COM('WScript.Shell');
$shell->Run($cmd, 1, false);
}
else {
exec('nohup ' . $cmd . ' 2>&1 &');
}