PHP only able to kill process when run in CLI

I have a PHP script, start_stream.php, which starts a process (ffmpeg) using proc_open() and then stays in a loop checking for a keep-alive signal.
Once the keep-alive signal is too old, the PHP script terminates the process by its PID with this command:
$output = system('taskkill /F /PID '.$this->pID);
echo 'TASKKILL OUTPUT: ';
var_dump($output);
The output from this is always like this:
TASKKILL OUTPUT: string(55) "SUCCESS: The process with PID 2272 has been terminated."
But under some circumstances the process does not terminate and keeps running, even though I still get the SUCCESS message.
If I run the start_stream.php script from cmd (Windows) with this command:
php start_stream.php --stream [STREAM-NAME] --url [STREAM-URL]
Then the process will terminate properly as expected.
But since this is a web application, I need to call the start_stream.php script from another PHP script. I do it like this:
//Set the command for streaming
$command = 'php start_stream.php --stream '.$stream.' --url '.$url;
//Add the environment variables for shell execution
putenv('PATH=' . $_SERVER['PATH']);
//Set the working directory to streaming directory
chdir("..\\streaming");
//Start the streaming process
define('STDIN',fopen("php://stdin","r"));
define('STDOUT',fopen("stdout-".$stream,"w"));
$descriptorspec = [STDIN, STDOUT, STDOUT];
$process = proc_open($command, $descriptorspec, $pipes);
When I do this, I still get the SUCCESS output, stating that the process has terminated, but the process keeps running in the background.
It works to kill the process with this:
shell_exec('taskkill /IM "ffmpeg.exe" /F');
But I cannot use that, since it will kill all ffmpeg processes, and I need to run concurrent processes, killing only the relevant ones.
As I said earlier, the script runs and terminates as intended if I use the following command on the command line:
php start_stream.php --stream [STREAM-NAME] --url [STREAM-URL]
But if I use php.exe instead of php like so:
php.exe start_stream.php --stream [STREAM-NAME] --url [STREAM-URL]
I will have the same problem with the ffmpeg process not terminating, but still giving a success message.
I am also using proc_terminate() in the start_stream.php script like so:
proc_terminate($this->process);
But this also gives me a "process terminated" status response while the process does not stop.
The PID I get for the process is the PID of the cmd instance running the command, not of the ffmpeg process itself. When I terminate it, the cmd process is killed in every case.
When calling the PHP script from cmd without using php.exe, the ffmpeg process is killed along with the cmd process. When using php.exe, only the cmd process is killed and the ffmpeg process keeps running.
Is there a way to either ensure that the ffmpeg process is killed along with the cmd process, or maybe some way to get the PID of the ffmpeg process itself in the context that I need?

Although I was able to fix this, I am still not exactly sure why and how the execution context changes between calling start_stream.php from cmd and calling it from another PHP script.
When running a command using PHP's proc_open(), the PID returned is by default that of the shell executing the command. So if we start a process via the shell, we get the PID of that shell, not the PID of the child process itself.
In some contexts, terminating the shell also terminates the child process; in others, the shell is terminated while the child process keeps running.
If we pass 'bypass_shell' => TRUE in the other_options argument to proc_open() (a Windows-only option), PHP will not create an intermediate shell process and will return the PID of the child process itself.
The following example returns the PID of the target process:
$command = "ffmpeg do something";
$descriptorspec = [STDIN, STDOUT, STDOUT];
$otherOptions = ['bypass_shell' => TRUE];
// note: $pipes must be a variable, since proc_open() takes it by reference
$this->process = proc_open($command, $descriptorspec, $pipes, NULL, NULL, $otherOptions);
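With the shell bypassed, proc_get_status() now reports the PID of ffmpeg itself, so the question's taskkill logic works unchanged. A minimal sketch of that follow-up, reusing the $this->pID property from the question:
$status = proc_get_status($this->process);
$this->pID = $status['pid']; // now the ffmpeg PID, not a cmd.exe wrapper
// later, once the keep-alive signal is too old:
system('taskkill /F /PID ' . $this->pID);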

Related

End Process Created by PHP exec in Ubuntu using Apache

I have an Ubuntu VM running in VirtualBox that hosts a server using Apache. The concept of the server is to accept HTTP POST requests, store them in a MySQL database and then execute a Python script with the relevant POST data to be displayed in a Discord channel.
The process itself is working, but each time the PHP script calls the Python script, a new process is created that never actually ends. After a few hours of receiving live data, the server runs out of available memory due to the number of lingering processes. The PHP script has the following exec call as its last line of code:
exec("python3 main.py $DATA");
I would like to come up with a way to actually kill the processes created from this exec command (using user www-data), either in the Python file after the script is executed or automatically with an Apache setting that I probably just do not know about.
When running the following command in a terminal I can see the different processes:
ps -o pid,user,%mem,command ax | sort -b -k3 -r
Three separate processes show up. One references the actual python3 command as written in the PHP exec call:
9903 www-data 0.4 python3 main.py DATADATADATADATADATADATA
Then another process shows the more common -k start command:
9907 www-data 0.1 /usr/sbin/apache2 -k start
And lastly, another process very similar to the PHP exec command:
9902 www-data 0.0 sh -c python3 main.py DATADATADATADATADATADATA
How can I ensure Apache cleans these processes up, or what do I need to add to my Python or PHP code to exec a Python script without leaving processes behind?
I didn't realize the exec command in PHP would wait for return output indefinitely. I added this to the end of the string I was using in my exec call: > /dev/null &
i.e.: exec("python3 main.py $DATA > /dev/null &");
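If $DATA comes straight from the POST request, it is also worth escaping it and redirecting stderr, so exec() returns immediately even when the Python script writes errors. A minimal variant of the same call, assuming $DATA is passed as a single argument:
exec("python3 main.py " . escapeshellarg($DATA) . " > /dev/null 2>&1 &");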

Use PHP over HTTP to run background bash with trap on exit

I'm trying to use PHP to trigger a bash script that should never stop running. It's not just that the command needs to run and I don't need to wait for output; it needs to continue running after PHP is finished. This has worked other times (and the question has been asked already); the difference seems to be that my bash script has a trap for when it's closed.
Here is my bash script:
#!/bin/bash
set -e
WAIT=5
FILE_LOCK="$1"
echo "Daemon started (PID $$)..."
echo "$$" > "$FILE_LOCK"
trap cleanup 0 1 2 3 6 15
cleanup()
{
echo "Caught signal..."
rm -rf "$FILE_LOCK"
exit 1
}
while true; do
# do things
sleep "$WAIT"
done
And here is my PHP:
$command = '/path/to/script.sh /tmp/script.lock >> /tmp/script.log 2>&1 &';
$lastLine = exec($command, $output, $returnVal);
I see the script run, the lock file get created, then it exits, and the trap removes the lock file. In my /tmp/script.log I see:
Daemon started (PID 55963)...
Caught signal...
What's odd is that this only happens when running the PHP via Apache. From the command line it keeps running as expected.
The signal on the trap that's being caught is 0.
I've tried wrapping my command in a bash environment, like $command = '/bin/bash -c "' . addslashes($command) . '"';, and also tried adding nohup to the beginning. Nothing seems to work. Is this possible to do for a never-ending script?
Found the problem thanks to @lxg.
My # do things command was giving errors, which was causing the script to exit. For some reason they were suppressed.
When removing set -e from the beginning of my bash script I started seeing the errors output to my log file. Not sure why they didn't show up before.
The issue was that my bash loop was running PHP commands. Even though my bash user and Apache user are the same, for some reason they had different $PATHs. This meant that when running on the command line I was using a PHP 7 binary, but when Apache triggered the bash commands it was using a PHP 5 binary (even though Apache itself is configured to use PHP 7). So the application errored out, and that is what caused the script to die.
The solution was to explicitly set the PHP binary path in my bash loop.
I was doing this with
BIN_PHP=$(which php)
But on a true command line it returned one value (/path/to/php7/bin/php) versus a command line initiated by Apache (/path/to/php5/bin/php). Despite Apache running as the same user as my command line, it didn't load the ~/.bashrc that specified my correct PHP path.
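An alternative is to pin the interpreter from the PHP side, in the spirit of the putenv('PATH=...') call from the first question, so the bash script inherits the right PATH. A minimal sketch, with a hypothetical PHP 7 install path:
// prepend the desired PHP binary directory (hypothetical path) before launching
putenv('PATH=/path/to/php7/bin:' . getenv('PATH'));
exec('/path/to/script.sh /tmp/script.lock >> /tmp/script.log 2>&1 &');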

Running a script in the background and returning its PID from PHP

I'm trying to invoke an app from inside PHP:
ob_start();
passthru("(cd /opt/server/TrackServer/release && ./TrackServer& ) && pidof TrackServer");
$pid = ob_get_clean();
var_dump($pid);
exit;
The goal is to run TrackServer from within its own path and to get its process ID so I can close it after I run some tests.
When I run the command in terminal:
(cd /opt/server/TrackServer/release && nohup ./TrackServer&) && pidof TrackServer
I get the correct PID returned, but in PHP the command blocks and doesn't go further. TrackServer is started and running, but I have to kill it from the terminal to unblock the PHP script; after killing the process, the PHP script prints the correct PID for the process I just closed from the terminal.
Why does the command block?
Is there a way to make it run in PHP the way I'm trying to run it (without forking to a new thread)?
From the passthru manual page: The passthru() function is similar to the exec() function in that it executes a command.
What this means is that you can't execute your command line directly, as this runs several commands and relies on the shell to implement backgrounding and subshells as needed.
Try this instead:
passthru("/bin/bash -c 'cd /opt/server/TrackServer/release && nohup ./TrackServer& && pidof TrackServer'");
EDIT:
I found a working solution:
ob_start();
passthru("/bin/bash -c 'cd /opt/server/TrackServer/release && nohup ./TrackServer&' > /dev/null 2>&1 &");
passthru("pidof TrackServer");
$pid = ob_get_clean();
The command was stopping because:
Run multiple exec commands at once (But wait for the last one to finish)
PHP's exec function will always wait for a response from your execution. However, you can send the stdout and stderr of the process to /dev/null (on Unix) and have all the scripts executed almost instantly.
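The same two-step idea can also be written without output buffering; a minimal sketch using shell_exec() with the paths from the question:
// start TrackServer detached, discarding output so PHP does not block
shell_exec("/bin/bash -c 'cd /opt/server/TrackServer/release && nohup ./TrackServer > /dev/null 2>&1 &'");
// then query the PID separately
$pid = (int) trim(shell_exec('pidof TrackServer'));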

Parallelizing PHP processes with a Bash Script?

I want to launch ~10 php processes from a bash script. When one of them finishes, I'd like the bash script to launch another php process, and continue on indefinitely, always having ~10 php processes running.
What is the simplest way to do this?
The PHP file launched will be the same every time, but the PHP process will know to pull new values from the database, so it's processing new data each time. The file I need to launch and all its classes are already written in PHP.
Seems like a good fit for supervisord. The following configuration will make sure that 10 processes are always running and deals with log rotation, which is also handy. All output, including stderr, will be written to /var/log/worker.log. With autorestart=true, supervisord will replace a child process as soon as it exits.
[program:worker]
command=php /path/to/worker.php
process_name=%(program_name)s_%(process_num)d
stdout_logfile=/var/log/%(program_name)s.log
redirect_stderr=true
stdout_capture_maxbytes=512MB
stdout_logfile_backups=3
numprocs=10
numprocs_start=0
autostart=true
autorestart=true
Once you have the supervisor config in place (usually /etc/supervisord/conf.d), you can use supervisorctl as a convenient way to start and stop the process group.
$ supervisorctl start worker
...
$ supervisorctl stop worker
...
$ supervisorctl status
worker:worker_0 RUNNING pid 8985, uptime 0:09:24
worker:worker_1 RUNNING pid 10157, uptime 0:08:52
...
worker:worker_9 RUNNING pid 12459, uptime 0:08:31
You could use GNU Parallel, piping the list of images to manage into parallel as described here.
You could use something like this. Use one launcher file to start the 10 workers (only run it once), and the bottom of each worker file can relaunch itself when it finishes (a usage sketch follows the function).
/**
 * Asynchronously execute/include a PHP file. Does not record the output of the file anywhere.
 *
 * @param string $filename file to execute, relative to calling script (or root?)
 * @param string $options (optional) arguments to pass to file via the command line
 */
function asyncInclude($filename, $options = '') {
    exec("/path/to/php -f {$filename} {$options} >> /dev/null &");
}
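Hypothetical usage matching the description above: one launcher file starts the ten workers, and each worker calls asyncInclude() on itself again as its last line.
// launcher.php: run this once (worker path is hypothetical)
for ($i = 0; $i < 10; $i++) {
    asyncInclude('/path/to/worker.php');
}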
jcomeau@intrepid:/tmp$ cat test.sh
#!/bin/sh
set -m # monitor mode
task="php-cgi /tmp/sleep.php"
function do_task {
    $task >/dev/null &
    echo -n spawned $! ' ' >&2
}
trap do_task SIGCHLD
for i in $(seq 1 10); do
    do_task
done
while true; do
    wait
done
jcomeau@intrepid:/tmp$ cat sleep.php
<?php sleep(3); ?>

How to check if a php script is still running

I have a PHP script that listens on a queue. Theoretically, it's never supposed to die. Is there something to check if it's still running? Something like Ruby's God ( http://god.rubyforge.org/ ) for PHP?
God is language-agnostic, but it would be nice to have a solution that works on Windows as well.
I had the same issue - wanting to check if a script is running. So I came up with this and run it as a cron job. It grabs the running processes as an array, cycles through each line, and checks for the file name. It seems to work fine. Replace #user# with your script user.
exec("ps -U #user# -u #user# u", $output, $result);
foreach ($output AS $line) if(strpos($line, "test.php")) echo "found";
In linux run ps as follows:
ps -C php -f
You could then do in a php script:
$output = shell_exec('ps -C php -f');
if (strpos($output, "php my_script.php")===false) {
shell_exec('php my_script.php > /dev/null 2>&1 &');
}
The above code lists all running php processes in full, then checks whether "my_script.php" is in the list of running processes; if not, it starts the process in the background without waiting for it to terminate.
Just append a second command after the script. When/if it stops, the second command is invoked. Eg.:
php daemon.php 2>&1 | mail -s "Daemon stopped" you@example.org
Edit:
Technically, this invokes the mailer right away, but only completes the command when the php script ends. Doing this captures the output of the php-script and includes in the mail body, which can be useful for debugging what caused the script to halt.
Simple bash script
#!/bin/bash
while true; do
    if ! pidof -x script.php > /dev/null; then
        php script.php &
    fi
    sleep 5 # avoid a busy loop while script.php is running
done
Not for windows, but...
I've got a couple of long-running PHP scripts that have a shell script wrapping them. You can optionally return a value from the PHP script that the shell script checks in order to exit, restart immediately, or sleep for a few seconds and then restart.
Here's a simple one that just keeps running the PHP script till it's manually stopped.
#!/bin/bash
clear
date
php -f cli-SCRIPT.php
echo "wait a little while ..."; sleep 10
exec $0
The "exec $0" restarts the script, without creating a sub-process that will have to unravel later (and take up resources in the meantime). This bash script wraps a mail-sender, so it's not a problem if it exits and pauses for a moment.
Here is what I did to combat a similar issue. It helps in the event anyone else has a parameterized PHP script that cron should execute frequently, but where only one execution should run at any time. Add this to the top of your PHP script, or create a common method.
$runningScripts = shell_exec('ps -ef | grep ' . strtolower($parameter) . ' | grep ' . dirname(__FILE__) . ' | grep ' . basename(__FILE__) . ' | grep -v grep | wc -l');
if ($runningScripts > 1) {
    die();
}
You can write in your crontab something like this:
0 3 * * * /usr/bin/php -f /home/test/test.php my_special_cron
Your test.php file should look like this:
<?php
php_sapi_name() == 'cli' || exit;
if ($argv[1]) {
    substr_count(shell_exec('ps -ax'), $argv[1]) < 3 || exit;
}
// your code here
That way you will have only one active instance of the cron job, with my_special_cron as the process key. So you can add more jobs within the same PHP file:
test.php system_send_emails sendEmails
test.php system_create_orders orderExport
Inspired by Justin Levene's answer and improved, since ps -C doesn't work on macOS, which I needed in my case. You can use this in a PHP script (perhaps just before you need the daemon alive); tested on both Mac OS X 10.11.4 and Ubuntu 14.04:
$daemonPath = "FULL_PATH_TO_DAEMON";
$runningPhpProcessesOfDaemon = (int) shell_exec("ps aux | grep -c '[p]hp " . $daemonPath . "'");
if ($runningPhpProcessesOfDaemon === 0) {
    shell_exec('php ' . $daemonPath . ' > /dev/null 2>&1 &');
}
Small but useful detail: why grep -c '[p]hp ...' instead of grep -c 'php ...'?
Because the grep process itself contains 'php ...' in its own command line and would be counted as a match. Using a character class for the first letter makes grep's command line no longer match the pattern it is searching for.
One possible solution is to have it listen on a port using the socket functions. You can check that the socket is still listening with a simple script. Even a monitoring service like pingdom could monitor its status. If it dies, the socket is no longer listening.
Plenty of solutions.. Good luck.
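A minimal sketch of that port check, assuming the daemon listens on a hypothetical local port 9000:
// fsockopen() returns a stream on success, or false if nothing is listening
$conn = @fsockopen('127.0.0.1', 9000, $errno, $errstr, 2);
if ($conn === false) {
    echo "Daemon is down: $errstr\n";
} else {
    fclose($conn);
    echo "Daemon is listening\n";
}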
If you have your hands on the script, you can just have it write a timestamp to the database every so often, and then let a cron job check whether that value is recent; a sketch follows.
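A minimal sketch of that heartbeat idea, with hypothetical table, column, and credential names:
// in the long-running script's main loop: record a heartbeat
$db = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass'); // hypothetical credentials
$db->exec("UPDATE heartbeat SET last_seen = NOW() WHERE job = 'queue_listener'");

// in a cron job: restart the listener if the heartbeat is older than 5 minutes
$age = (int) $db->query("SELECT TIMESTAMPDIFF(SECOND, last_seen, NOW()) FROM heartbeat WHERE job = 'queue_listener'")->fetchColumn();
if ($age > 300) {
    shell_exec('php /path/to/listener.php > /dev/null 2>&1 &');
}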
troelskn wrote:
Just append a second command after the script. When/if it stops, the second command is invoked. Eg.:
php daemon.php | mail -s "Daemon stopped" you@example.org
This will call mail each time a line is printed in daemon.php (which should be never, but still.)
Instead, use the double ampersand operator to separate the commands, i.e.
php daemon.php && mail -s "Daemon stopped" you@example.org
If you're having trouble checking for the PHP script directly, you can make a trivial wrapper and check for that. I'm not sufficiently familiar with Windows scripting to put how it's done here, but in Bash, it'd look like...
wrapper_for_test_php.sh
#!/bin/bash
php test.php
Then you'd just check for the wrapper like you'd check for any other bash script: pidof -x wrapper_for_test_php.sh
I have used cmder for Windows and, based on this script, came up with this one, which I managed to deploy on Linux later.
#!/bin/bash
clear
date
while true
do
    php -f processEmails.php
    echo "wait a little while for 5 seconds..."
    sleep 5
done
