I have some PHP files. Each of them starts a socket listener or runs an infinite loop. The scripts block when executed via the php command:
php sock_listener.php ...halt there
php listener2.php ... halt there
...
Currently I use the screen command to start all the listener PHP files every time the machine is rebooted. Is there a way I can start all the listener PHP files in a single shell line, so that I can write a shell script to make it easier to use?
Using screen
Create a detached screen session for the first script:
session='php-test'
screen -S "$session" -d -m -t A php a.php
where the -d -m combination causes screen to create a detached session.
Run the rest of the scripts in the same session in separate windows:
screen -S "$session" -X screen -t B php b.php
screen -S "$session" -X screen -t C php c.php
where
-X sends the built-in screen command to the running session;
-t sets the window title.
The session will be available in the output of the screen -ls command:
There is a screen on:
8951.php-test (Detached)
Connect to the session using the -r option, e.g.:
screen -r 8951.php-test
List the windows within the screen session with the Ctrl-a " shortcut, or the windowlist -b command.
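Putting the above together, a minimal startup script could look like this (a sketch; a.php, b.php and c.php stand in for your listener scripts):
#!/bin/bash
# Start all PHP listeners in one detached screen session.
session='php-test'
# Create the detached session with the first listener...
screen -S "$session" -d -m -t A php a.php
# ...and run the rest as extra windows in the same session.
screen -S "$session" -X screen -t B php b.php
screen -S "$session" -X screen -t C php c.php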
Forking Processes to Background
A less convenient way is to send the commands to the background by appending an ampersand to the end of each command:
nohup php a.php 2>a.php.err >a.php.out &
nohup php b.php 2>b.php.err >b.php.out &
nohup php c.php 2>c.php.err >c.php.out &
where
nohup prevents the commands from being terminated when the user logs out of the shell (see the nohup manual page for more information);
2>a.php.err redirects the standard error to a.php.err file;
>a.php.out redirects the standard output to a.php.out file.
Is there a way I can start all the listener PHP files in a single shell line so that I can write a shell script to make it easier to use?
You can put the above-mentioned commands into a shell script file, e.g.:
#!/bin/bash -
# Put the commands here
make it executable:
chmod +x /path/to/script
and call it when you need it:
/path/to/script
Modify the shebang as appropriate.
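For instance, with the background commands from the previous section, the script could be (a sketch; a.php, b.php and c.php are placeholders for your listener scripts):
#!/bin/bash -
# Start every listener detached from the terminal, logging output per script.
nohup php a.php 2>a.php.err >a.php.out &
nohup php b.php 2>b.php.err >b.php.out &
nohup php c.php 2>c.php.err >c.php.out &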
Just run them under circus. Circus lets you define a number of processes and how many instances of each to run, and it just keeps them running.
https://circus.readthedocs.io/en/latest/
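A minimal circus.ini sketch for two listeners (the watcher names and script paths are placeholders; see the documentation above for the full option list):
[watcher:listener1]
cmd = php /path/to/sock_listener.php
numprocesses = 1

[watcher:listener2]
cmd = php /path/to/listener2.php
numprocesses = 1
Start everything with circusd circus.ini.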
My Dockerfile:
FROM php:7.0-fpm
# Install dependencies, etc
RUN mkfifo /tmp/stdout \
 && chmod 777 /tmp/stdout
ADD docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"]
As you can see, I'm creating a named pipe at /tmp/stdout. And here is my docker-entrypoint.sh:
#!/usr/bin/env bash
# Some run-time configuration stuff...
exec "./my-app" "$#" | tail -f /tmp/stdout
My PHP application (an executable named my-app) writes its application logs to /tmp/stdout. I want those logs to then be captured by Docker so that I can do docker logs <container_id> and see the logs that the application wrote to /tmp/stdout. I am attempting to do this by running the my-app command and then tailing /tmp/stdout, which will then output the logs to stdout.
What I'm seeing happen is that when I run my application, it hangs when it writes the first log message. I believe this happens because there is nothing "reading" from the named pipe, and writing to a named pipe blocks until something reads from it. This is confirmed if I do docker exec -it <container_id> bash, and then do tail -f /tmp/stdout myself inside the container. Once I do that, the container immediately exits because the application has written its logs to the named pipe.
For reasons that I won't bloat this post with, it's not possible for my-app itself to write logs to stdout. It has to use /tmp/stdout.
Can anybody tell me why this isn't working, and what I need to change? I expect I have to change the exec call in docker-entrypoint.sh, but I'm not sure how. Thank you!
What I'm seeing happen is that when I run my application, it hangs when it writes the first log message. I believe this happens because there is nothing "reading" from the named pipe, and writing to a named pipe blocks until something reads from it.
This is correct, see fifo(7). But with your example code
exec "./my-app" "$#" | tail -f /tmp/stdout
this should actually work, since the pipe starts ./my-app and tail simultaneously, so that there is something reading from /tmp/stdout.
But one problem here is that tail -f will never terminate by itself, and so neither will your docker-entrypoint.sh/container. You could fix this with:
tail --pid=$$ -f /tmp/stdout &
exec "./my-app" "$#"
tail --pid will terminate as soon as the process with the given PID terminates, where $$ is the PID of the bash process (and, through exec, later the PID of ./my-app).
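Putting it together, a corrected docker-entrypoint.sh could look like this (a sketch based on the snippets above):
#!/usr/bin/env bash
# Some run-time configuration stuff...

# Read the FIFO in the background; tail exits when the process in --pid does.
tail --pid=$$ -f /tmp/stdout &
# exec keeps the same PID, so the $$ captured above now refers to my-app.
exec "./my-app" "$@"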
For reasons that I won't bloat this post with, it's not possible for my-app itself to write logs to stdout. It has to use /tmp/stdout.
Does this mean it has to write to a filesystem path, or is the path /tmp/stdout hardcoded?
If you can use any path, you can use /dev/stdout, /proc/self/fd/1 or /proc/$$/fd/1 as the logging path to let your application write to stdout.
If /tmp/stdout is hardcoded, try symlinking it to /dev/stdout:
ln -s /dev/stdout /tmp/stdout
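In the Dockerfile from the question, that would mean creating the symlink instead of the FIFO, e.g. (a sketch):
RUN ln -s /dev/stdout /tmp/stdout
Since /dev/stdout resolves per-process, anything my-app writes to /tmp/stdout then lands on its own stdout, which is what docker logs captures (provided my-app's stdout is not redirected elsewhere).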
My issue seems to have been asked before, but hold on, this one is a bit different.
I have 2 PHP files, and I run the following commands:
nohup php file_1.php >/dev/null 2>&1 &
nohup php file_2.php >/dev/null 2>&1 &
This basically creates 2 PHP processes.
My problem is that sometimes one of the files, or even both of them, gets killed by the server for an unknown reason, and I have to re-enter the commands all over again. I tried 'forever' but it doesn't help.
If the server is rebooted, I will have to enter those 2 commands again. I thought about a cron job, but I'm not sure whether it would launch them twice, which would create more confusion.
My question is: how do I restart the files automatically if one or both of them get killed? What is the best way to check that file_1.php or file_2.php is actually running?
There are a couple of ways you can do this. As @Chris Haas mentioned in the comments, supervisord can do this for you, or you can run a watchdog shell script from cron at regular intervals that checks whether your script is running and, if not, starts it. Here's one I use.
#!/bin/bash
# Find the PID of the running script; the awk test filters out the grep process itself.
FIND_PROC=$(ps -ef | grep "php queue_agent.php --env prod" | awk '{if ($8 !~ /grep/) print $2}')
# If FIND_PROC is empty, the process has died; restart it.
if [ -z "${FIND_PROC}" ]; then
    DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
    echo "queue_agent.php failed at $(date)"
    cd "${DIR}"
    nohup nice -n 10 php queue_agent.php --env prod -v > ../../sandbox/logs/queue_agent.log &
fi
exit 0
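To run the watchdog from cron, e.g. once a minute, a crontab entry could look like this (the script path is a placeholder):
* * * * * /path/to/watchdog.sh >> /var/log/watchdog.log 2>&1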
I think you can try to figure out why these two PHP scripts shut down as a first step. To help with that, you can use this PHP function:
void register_shutdown_function(callable $callback [, mixed $parameter [, mixed $... ]])
which registers a callback to be executed after script execution finishes or exit() is called. So you can log some info when the PHP files shut down, like this:
function myLogFunction() {
    // Log some info here, e.g. the last error and a timestamp.
    error_log('Script shut down: ' . print_r(error_get_last(), true));
}
// The callback is passed by name, as a string.
register_shutdown_function('myLogFunction');
Instead of sending the standard output and error output to /dev/null, you can send them to a log file (since we might get some helpful info from the output). So instead of using:
nohup php file_1.php >/dev/null 2>&1 &
nohup php file_2.php >/dev/null 2>&1 &
try:
nohup php file_1.php 2>file_1.log &
nohup php file_2.php 2>file_2.log &
(Give each script its own log file; if both redirect stderr to the same file, their output will clobber each other.)
If you want to autorun these two PHP files when the server boots, try editing /etc/rc.local (which runs automatically when the OS starts up) and adding your PHP CLI command lines to that file, as sketched below.
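A minimal sketch of such an rc.local (paths are placeholders; note that the file must be executable, and some distributions no longer enable it by default):
#!/bin/sh -e
# /etc/rc.local -- executed at the end of the boot process
nohup php /path/to/file_1.php 2>/var/log/file_1.log &
nohup php /path/to/file_2.php 2>/var/log/file_2.log &
exit 0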
If you can't figure out why the PHP processes get shut down, try supervisord, as @Chris Haas mentioned; a config sketch follows.
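A minimal supervisord configuration sketch for the two scripts (paths and program names are placeholders; on Debian-like systems it would live under /etc/supervisor/conf.d/):
[program:file_1]
command=php /path/to/file_1.php
autostart=true
autorestart=true
stderr_logfile=/var/log/file_1.err.log

[program:file_2]
command=php /path/to/file_2.php
autostart=true
autorestart=true
stderr_logfile=/var/log/file_2.err.log
With autorestart=true, supervisord restarts either script automatically if it dies, which also addresses the original problem.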
I have a PHP script that executes an external bash script to make an SSH connection, but even though I am using ssh's move-to-background option (-f) as well as an '&', my PHP script hangs.
Problem line in PHP script
system('/usr/local/bin/startProxy 172.16.0.5 9051');
I have also tried:
system('/usr/local/bin/startProxy 172.16.0.5 9051 &');
And the startProxy script is simply:
#!/bin/bash
#
# startProxy <IP_Address> <Proxy_Port>
#
# Starts an ssh proxy connection using -D <port> to remote system
#
ssh -o ConnectTimeout=5 -f -N -D $2 $1 &
Calling the startProxy script from the command line works fine and the script returns immediately.
When the system() command is run, the called script does run (I can see ssh running via ps); it just never appears to return to the PHP script.
I have run into the same issue when trying to call ssh directly via the system() method as well.
Thanks @Martin.
For future self and others: I had to change the system call to
system('/usr/local/bin/startProxy 172.16.0.5 9051 2>&1 >/dev/null');
and to change the startProxy script to :
ssh -o ConnectTimeout=5 -f -N -D $2 $1 2>&1 >/dev/null
before PHP would return to the rest of the script!
Can anyone explain why PHP stalls like this if output is not redirected (I presume PHP isn't scanning the command and seeing the redirection part), and also why PHP hangs if I don't include the redirection in the ssh command within startProxy, despite the fact that both the PHP system() call and the ssh command in startProxy were using background options (-f and '&')?
So I'm using a PuTTY session to run a script in the background.
My script is located in /var/www/listingapp, so I change to that directory:
cd /var/www/listingapp
Then I will run the command:
php /var/www/listingapp/public/index.php batch/GetPrefecture init > /dev/null &
So I'm expecting the script to run in the background, but after I close the PuTTY CLI and re-open it, the process can no longer be found when entering the command:
ps aux | grep /var/www/listingapp
So this means it stopped. How do I make it keep running in the background after the session is terminated?
nohup php /var/www/listingapp/public/index.php batch/GetPrefecture init &
When the terminal disconnects, the shell sends SIGHUP to its child processes, which terminates them; nohup makes the command ignore SIGHUP, so it survives the logout.
I'm trying to create a PHP wrapper class around the Linux Screen utility.
I need to be able to start a screen session by name and get back the PID of the screen that was created.
I cannot assume the session name is unique, so I can't parse the screen -ls output to find the PID.
Based on examples and suggestions around the internet, I have tried the following approach using PHP's exec() function:
screen -D -m -S 'screen_name' 2>&1 &
The -D parameter tells screen to not fork the process, so it has the same PID as the background process.
So then I can parse the PID from the output of the & background shell operator, which is in this format:
[<number>] <pid>
This command works from the Linux shell (terminates immediately), but when I run it from PHP it freezes and the browser never loads the page.
I have tried executing it with both PHP's exec() and shell_exec() and it's the same result.
I have also tried lots of different combinations of redirections at the end of the command.
Edit:
If I use this command:
screen -D -m -S 'screen_name' 1> /dev/null 2> /dev/null & 2>&1
Then the PHP exec() works. It starts the screen session, but then I don't get any output from the & background operator. I'm assuming that's because I'm redirecting stdout and stderr to /dev/null, but as far as I know it should only be doing that for the screen command, not the & operator.
ALMOST SOLUTION:
screen -D -m -S 'screen_name' 1> /dev/null 2> /dev/null & echo $!
I realized that the text showing in the terminal from the & background operator wasn't actually coming from the command's stdout or stderr; it's the shell displaying it. So I added echo $! after the & operator, which prints the PID of the process created by the last command.
This works in the terminal; it prints the correct PID. But when it's executed from PHP, the actual screen PID value is 4 more than the one returned by the echo.
It's like the echo is being executed before the & operator. Is there any way to make it wait?
SOLUTION:
I was wrapping the command in sudo, and the background operator was acting on the sudo command instead of the screen command. Make sure you escape your command arguments! :)
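To illustrate the pitfall (a sketch; 'screen_name' is a placeholder):
# Wrong: '&' backgrounds sudo itself, so $! is sudo's PID, not screen's.
sudo screen -D -m -S 'screen_name' 1>/dev/null 2>/dev/null & echo $!

# One way around it: background screen inside the sudo'd shell,
# so that shell's $! refers to the screen process itself.
sudo sh -c "screen -D -m -S 'screen_name' 1>/dev/null 2>/dev/null & echo \$!"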