Start a process in the background on Windows - PHP

I have a problem running a command from a PHP script. The command I am trying to run is
echo y | plink -ssh -N -D 9999 admin@1.2.3.4 -pw admin -v
The thing is that the command runs, but the script freezes until the plink command finishes, which I don't want. I also tried running it in the background:
START /MIN "cmd.exe" /C "plink -ssh -N -D 9999 admin@1.2.3.4 -pw admin"
and I see plink running minimized, but as soon as I close it, the script continues.
I also tried:
START /B /MIN "cmd.exe" /C "plink -ssh -N -D 9999 admin@1.2.3.4 -pw admin"
and it does the same thing, but shows the output in the PHP script.
This is the function:
function create_tunnel($ip, $user, $pass, $port)
{
    exec('START /min cmd /c "echo y | plink -ssh -N -D '.$port.' '.$user.'@'.$ip.' -pw '.$pass.' -v" > nul');
}
What must I do to run this command and let the PHP script continue execution? On Linux this would be very simple: I would just use the screen command.
Thanks.

Try the Symfony Process component; start() launches the command without blocking (unlike run(), which waits for it to finish):
$process = new Process('ls -lsa');
$process->start();
while ($process->isRunning()) {
    // waiting for process to finish
}
echo $process->getOutput();

Related

shell_exec won't stop even though I added a new shell_exec to stop the other one

I've got a PHP script that needs to run a .sh file using shell_exec:
echo shell_exec('sh /var/www/html/daloradius/start.sh > /dev/null 2>/dev/null &');
I just dump it into the background. This is my start.sh:
sudo tcpdump port 1812 -w testing.pcap
Since tcpdump keeps listening indefinitely, I tried to stop the tcpdump process with a button that triggers another shell_exec running stop.sh:
pid=$(ps aux | grep "sudo tcpdump" | head -1 | cut -d '.' -f 1 | cut -d ' ' -f 7)
sudo kill $pid
stop.sh works fine when I test it in the CLI, but when I click the button that triggers start.sh and then try to stop it with the button that triggers stop.sh, it doesn't work: the tcpdump won't stop. Yet when I stop it from the CLI using stop.sh, it works well. Can anybody give me a solution to force-stop tcpdump? Thank you
You are trying to use bash when you should be orchestrating the process from PHP.
Here, we get the PID of the command and kill it from PHP. Replace the sleep statement with whatever code you have.
<?php
# Script must be run with sudo to start tcpdump
# Be security conscious when running ANY code here
$pcap_file = 'testing.pcap';
$filter = 'port 1812';
$command = "tcpdump $filter -w $pcap_file" . ' > /dev/null 2>&1 & echo $!;';
$pid = (int)shell_exec($command);
echo "[INFO] $pid tcpdump: Writing to $pcap_file\n";
# Some important code. Using sleep as a stand-in.
shell_exec("sleep 5");
echo "[INFO] $pid tcpdump: Ending capture\n";
shell_exec("kill -9 $pid");
Please note that tcpdump has the -c option to stop after n packets received, and you can rotate files with -G. You may want to read tcpdump's manpage to get the most out of it.
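If you do want to keep the start/stop logic in shell, a pidfile avoids the fragile ps | grep | head | cut pipeline entirely (that pipeline can match the grep process itself, and the column positions differ between environments). Below is a minimal sketch of the pattern; `sleep 60` stands in for the sudo tcpdump command (which needs root), and /tmp/capture.pid is an illustrative path:

```shell
#!/bin/sh
# start.sh equivalent: background the long-running capture and record its PID.
# `sleep 60` stands in for `sudo tcpdump port 1812 -w testing.pcap`.
sleep 60 >/dev/null 2>&1 &
echo $! > /tmp/capture.pid    # $! is the PID of the most recent background job

# stop.sh equivalent: kill exactly the process we started, nothing else
kill "$(cat /tmp/capture.pid)" 2>/dev/null
rm -f /tmp/capture.pid
```

Because the stop script kills the exact PID it recorded, it does not depend on how ps formats its output on the web server's account.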

Supervisor stdout log is empty when executing a shell script command

My supervisor log is empty and I don't know what to do to populate it.
I have a simple test bash script
#!/usr/bin/env bash
echo "STARTING SCRIPT"
while read LINE
do
    echo ${LINE} 2>&1
done < "${1:-/dev/stdin}"
exit 0
And my supervisor.conf
[unix_http_server]
file=/tmp/supervisor.sock
[supervisord]
logfile=/var/log/supervisor/supervisord.log
logfile_maxbytes=50MB
logfile_backups=10
loglevel=info
pidfile=/var/run/supervisord.pid
nodaemon=false
minfds=1024
minprocs=200
umask=022
nocleanup=false
[rpcinterface:supervisor]
supervisor.rpcinterface_factory=supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=unix:///tmp/supervisor.sock
[program:print-lines]
command=gearman -h 127.0.0.1 -p 4730 -w -f print-lines -- /var/www/html/myapp/bash/printlines.sh 2>&1
process_name=print-lines-worker
numprocs=1
startsecs = 0
autorestart = false
startretries = 1
user=root
stdout_logfile=/var/www/html/myapp/var/log/supervisor_print_lines_out.log
stdout_logfile_maxbytes=10MB
stderr_logfile=/var/www/html/myapp/var/log/supervisor_print_lines_err.log
stderr_logfile_maxbytes=10MB
Then I execute this job via a PHP script:
$gmClient= new GearmanClient();
$gmClient->addServer();
# run reverse client in the background
$jobHandle = $gmClient->doBackground("print-lines", $domains);
var_dump($jobHandle);
So what happens is the following. The job gets executed:
myapp-dev: /var/www/html/myapp $ gearadmin --status
print-lines 1 1 1
But both log files are empty... I would at least expect "STARTING SCRIPT" to be written somewhere, but everything is empty.
What am I doing wrong? Am I checking the wrong log file?
If you need any additional information, please let me know and I will provide it. Thank you
I found a solution, or at least a workaround. What I did is the following:
My supervisor.conf command now looks like this:
command=gearman -h 127.0.0.1 -p 4730 -w -f print-lines -- bash -c "/var/www/html/myApp/bash/printlines.sh > /var/www/html/myApp/var/log/printlines.log 2>&1"
So the shell now writes a log file when the shell script gets executed.
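The workaround boils down to one pattern: wrap the worker script in an explicit shell so the redirections are performed by that shell, rather than being passed along as literal arguments. A minimal sketch of the mechanism, with `echo` standing in for printlines.sh and /tmp/printlines.log as an illustrative path:

```shell
#!/bin/sh
# Without the wrapper, a trailing `2>&1` on the supervisor command line is just
# another literal argument handed to gearman; inside `sh -c "..."` it is a real
# redirection applied by the inner shell.
sh -c 'echo "STARTING SCRIPT" > /tmp/printlines.log 2>&1'
cat /tmp/printlines.log
```

This is why the log file only started filling up once the `bash -c "... > log 2>&1"` wrapper was added to the command= line.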

Open gnome-terminal from php script

I want to open gnome-terminal from a PHP script:
shell_exec('bash /data/manager/application/.shell/system.sh');
This script uses a function to check whether it is running in a terminal:
SCRIPTCURRENT=`readlink -m $0`
SCRIPTCURRENTPATH=$(dirname "$SCRIPTCURRENT")
runintoterminal () {
    if ! [ -t 1 ]; then
        gnome-terminal -e "bash $1"
        exit 0
    fi
}
runintoterminal $SCRIPTCURRENT
I've tried:
shell_exec('gnome-terminal');
But it doesn't work... (I know it's possible...) But how?
I use nginx and php-fpm with my own socket; nginx and the socket run as my user, not www-data. (I'm on Ubuntu 14.04 LTS.)
I've tried 0777 permissions...
My bash script runs fine from the NetBeans IDE and the terminal... but not from PHP...
The problem is most likely that gnome-terminal doesn't know where it should draw itself. Normally it shows up on the same display as the program that launched it (IDE, terminal), but web servers don't have displays so it doesn't know where to go. You can try DISPLAY=:0 gnome-terminal -e "bash $1" to show it on the current first local graphical login session, if it has permissions for that.
— that other guy
shell_exec('DISPLAY=:0 bash /data/manager/application/.shell/system.sh');
Or in the function:
SCRIPTCURRENT=`readlink -m $0`
SCRIPTCURRENTPATH=$(dirname "$SCRIPTCURRENT")
runintoterminal () {
    if ! [ -t 1 ]; then
        DISPLAY=:0 gnome-terminal -e "bash $1"
        exit 0
    fi
}
runintoterminal $SCRIPTCURRENT
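The key mechanism here is that a VAR=value prefix sets an environment variable for that single command only. A minimal demonstration, using `sh -c` in place of gnome-terminal so no X server is needed:

```shell
#!/bin/sh
# The DISPLAY=:0 assignment applies only to the command on the same line...
DISPLAY=:0 sh -c 'echo "child sees DISPLAY=$DISPLAY"'
# ...and does not leak into the current shell afterwards:
echo "parent sees DISPLAY=${DISPLAY:-unset}"
```

So prefixing the gnome-terminal call with DISPLAY=:0 tells just that one process which X display to draw on, without changing the environment of php-fpm itself.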

Tail -f | Grep <regex> | php script.php <grep result>

OK, so I have an SSH connection open to a remote server. I'm running a tail on the logs, and if an ID shows up in the logs I need to do an insert into the database.
So I have my SSH tail working and piping into grep, which gives me the IDs I need. The next step is that, as those IDs are found, it needs to immediately kick off a PHP script.
What I thought it would look like is:
ssh -t <user>@<host> "tail -f /APP/logs/foo.log" | grep -oh "'[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]'" | php myscript.php <grep result>
And yes, my regex is horrible; I wanted to use [0-9]{8}, but what I have gets the job done.
Any suggestions? I tried looking at -exec, awk, and other tools. I could write the result to its own file and then read the new file, but that doesn't catch the streaming IDs.
-=-=-=-=-EDIT-=-=-=-=-=-
So here is what I'm using:
ssh -t <user>@<host> "tail -f /APP/logs/foo.log" | grep "^javax.ejb.ObjectNotFoundException" | awk '/[0-9]/ { system("php myscript.php "$6) }'
And if I use tail with a fixed line count instead of -f it works, or if after a while I press Ctrl-C, it then works. The behavior I wanted, though, was to kick off the script to fix a bad ID as soon as the tail saw it, not to wait for an EOF or some tail buffer...
I'm having a similar problem. There's something funny with the tail -f and grep -o combination over ssh.
So on the local server, if you do
tail -f myfile.log | grep -o keyword
it greps just fine.
But if you do it from a remote server...
ssh user@server 'tail -f myfile.log | grep -o keyword'
it doesn't work.
But if you remove -f from tail or -o from grep, it works just fine... weird :-/
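The "weird" behavior is almost certainly output buffering: when grep's stdout is a pipe (or a non-interactive ssh channel) rather than a terminal, grep buffers output in blocks instead of per line, so matches from a never-ending `tail -f` sit in the buffer until it fills or the stream ends; that's why Ctrl-C or a finite tail makes them appear. GNU grep's --line-buffered flag (or `stdbuf -oL grep ...`) forces a flush after every line:

```shell
# Block-buffered by default when stdout is a pipe; --line-buffered flushes
# each match immediately so the next stage in the pipeline sees it right away.
printf 'abc keyword def\nno match here\n' | grep --line-buffered -o keyword
# prints: keyword
```

With `tail -f ... | grep --line-buffered ... | awk ...`, each matching line should reach awk (and thus fire the php script) as soon as it is logged.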

How can I force a PHP script to continue on with a script after calling exec()?

I'm using the exec function in PHP to run a command. The command I'm running can often take quite a bit of time, and I don't have any need to read its output. Is there a simple way of telling PHP not to wait for the exec command to finish before moving on with the rest of the script?
// nohup_test.php:
// a very long running process
$command = 'tail -f /dev/null';
exec("nohup $command >/dev/null 2>/dev/null &"); // here we go
printf('run command: %s'.PHP_EOL, $command);
echo 'Continuing to execute the rest of this script instructions'.PHP_EOL;
for ($x = 1000000; $x-- > 0;) {
    for ($y = 1000000; $y-- > 0;) {
        // this runs long enough that I can do `ps auwx | grep php` while it's
        // running and see whether $command runs in a separate process
    }
}
run nohup_test.php:
$ php nohup_test.php
run command: tail -f /dev/null
Continuing to execute the rest of this script instructions
Let's find out the PIDs of our processes:
$ ps auwx | grep tail
nemoden 3397 0.0 0.0 3252 636 pts/8 S+ 18:41 0:00 tail -f /dev/null
$ ps auwx | grep php
nemoden 3394 82.0 0.2 31208 6804 pts/8 R+ 18:41 0:04 php nohup_test.php
As you can see, the PIDs are different and my script is running without waiting for tail -f /dev/null.
Here is what I use (you can use exec or system instead of passthru):
passthru("/path/to/program args >> /path/to/logfile 2>&1 &");
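The essential parts of that line are the redirections and the trailing &: once nothing is connected back to PHP, the shell returns immediately instead of waiting for the child. A quick shell-level sketch of the same fire-and-forget pattern, with `sleep 5` standing in for the long-running program:

```shell
#!/bin/sh
# Redirecting all output and appending `&` is what lets the shell (and thus
# PHP's passthru/exec call) return control immediately.
start=$(date +%s)
sleep 5 >/dev/null 2>&1 &
end=$(date +%s)
echo "launched in $((end - start))s"   # well under the 5s the child sleeps
```

If either stdout or stderr were left attached to the pipe PHP reads from, exec would still block until the child closed it, which is why the redirections matter as much as the &.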
What you are looking for is called an asynchronous call, as answered here:
Asynchronous shell exec in PHP
php execute a background process
PHP exec() as Background Process (Windows Wampserver Environment)
