Tailing named pipe in Docker to write to stdout - php

My Dockerfile:
FROM php:7.0-fpm
# Install dependencies, etc
RUN mkfifo /tmp/stdout \
    && chmod 777 /tmp/stdout
ADD docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"]
as you can see I'm creating a named pipe at /tmp/stdout. And my docker-entrypoint.sh:
#!/usr/bin/env bash
# Some run-time configuration stuff...
exec "./my-app" "$#" | tail -f /tmp/stdout
My PHP application (an executable named my-app) writes its application logs to /tmp/stdout. I want those logs to then be captured by Docker so that I can do docker logs <container_id> and see the logs that the application wrote to /tmp/stdout. I am attempting to do this by running the my-app command and then tailing /tmp/stdout, which will then output the logs to stdout.
What I'm seeing happen is that when I run my application, it hangs when it writes the first log message. I believe this happens because there is nothing "reading" from the named pipe, and writing to a named pipe blocks until something reads from it. This is confirmed if I do docker exec -it <container_id> bash, and then do tail -f /tmp/stdout myself inside the container. Once I do that, the container immediately exits because the application has written its logs to the named pipe.
For reasons that I won't bloat this post with, it's not possible for my-app itself to write logs to stdout. It has to use /tmp/stdout.
Can anybody tell me why this isn't working, and what I need to change? I expect I have to change the exec call in docker-entrypoint.sh, but I'm not sure how. Thank you!

What I'm seeing happen is that when I run my application, it hangs when it writes the first log message. I believe this happens because there is nothing "reading" from the named pipe, and writing to a named pipe blocks until something reads from it.
This is correct, see fifo(7). But with your example code
exec "./my-app" "$#" | tail -f /tmp/stdout
this should actually work, since the pipeline starts ./my-app and tail simultaneously, so there is something reading from /tmp/stdout.
But one problem here is that tail -f will never terminate by itself, and so neither will your docker-entrypoint.sh/container. You could fix this with:
tail --pid=$$ -f /tmp/stdout &
exec "./my-app" "$#"
tail --pid will terminate as soon as the process with the given PID terminates; here $$ is the PID of the bash process (and, through exec, later the PID of ./my-app).
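Putting that together, a revised docker-entrypoint.sh could look like this (a sketch; the run-time configuration part stands in for whatever you already do there):
#!/usr/bin/env bash
# Some run-time configuration stuff...

# Start a reader on the pipe in the background so writes to /tmp/stdout never block;
# --pid=$$ makes tail exit once the main process (which replaces this shell via exec) exits.
tail --pid=$$ -f /tmp/stdout &

# Replace the shell with the application; its own stdout/stderr stay attached to Docker,
# and anything it writes to /tmp/stdout is forwarded by tail.
exec "./my-app" "$@"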
For reasons that I won't bloat this post with, it's not possible for my-app itself to write logs to stdout. It has to use /tmp/stdout.
Does this mean it has to write to a filesystem path or is the path /tmp/stdout hardcoded?
If you can use any path, you can use /dev/stdout, /proc/self/fd/1, or /proc/$$/fd/1 as the logging path to let your application write to stdout.
If /tmp/stdout is hardcoded try symlinking it to /dev/stdout:
ln -s /dev/stdout /tmp/stdout
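For example, you could do this in the entrypoint before starting the app (a sketch; it replaces the pipe created in the Dockerfile with a symlink):
rm -f /tmp/stdout
ln -s /dev/stdout /tmp/stdout
exec "./my-app" "$@"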

Related

How run PHP script file in the background forever

My issue seems to be asked before but hold on, this one is a bit different.
I have 2 php files, I run the following commands:
nohup php file_1.php >/dev/null 2>&1 &
nohup php file_2.php >/dev/null 2>&1 &
This basically creates 2 PHP processes.
My problem is that sometimes one of the files, or even both of them, gets killed by the server for an unknown reason and I have to re-enter the commands again. I tried 'forever' but it doesn't help.
If the server is rebooted I will have to enter those 2 commands again. I thought about a cron job, but I'm not sure whether it would launch them twice, which would create more confusion.
My question is: how can I automatically restart the files if one or both of them get killed? What is the best way to achieve this, i.e. to check that file_1.php and file_2.php are indeed running?
There are a couple of ways you can do this. As @Chris Haas mentioned in the comments, supervisord can do this for you, or you can run a watchdog shell script from cron at regular intervals that checks whether your script is running and, if not, starts it. Here's one I use.
#!/bin/bash
FIND_PROC=`ps -ef | grep "php queue_agent.php --env prod" | awk '{if ($8 !~ /grep/) print $2}'`
# if FIND_PROC is empty, the process has died; restart it
if [ -z "${FIND_PROC}" ]; then
    DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
    echo queue_agent.php failed at `date`
    cd ${DIR}
    nohup nice -n 10 php queue_agent.php --env prod -v > ../../sandbox/logs/queue_agent.log &
fi
exit 0
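You would then run the watchdog from cron at a short interval, for example (the script path and log file here are hypothetical):
* * * * * /path/to/watchdog.sh >> /var/log/watchdog.log 2>&1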
I think the first step is to try to figure out why these two PHP scripts shut down. To help with this, you can use this PHP function:
void register_shutdown_function ( callable $callback [, mixed $parameter [, mixed $... ]] )
which registers a callback to be executed after script execution finishes or exit() is called. So you can log some info when the PHP files get shut down, like this:
function myLogFunction() {
    // log some info here
}
register_shutdown_function('myLogFunction');
Instead of sending the standard output and error output to /dev/null, you can put them into a log file (since we might get some helpful info from the output). So instead of using:
nohup php file_1.php >/dev/null 2>&1 &
nohup php file_2.php >/dev/null 2>&1 &
try:
nohup php file_1.php > file_1.log 2>&1 &
nohup php file_2.php > file_2.log 2>&1 &
If you want to run these two PHP files automatically when the server boots, try editing /etc/rc.local (which runs automatically when the OS starts up). Add your PHP CLI command lines to this file.
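For example (a sketch; the paths are illustrative, and rc.local must remain executable and end with exit 0):
#!/bin/sh
nohup php /path/to/file_1.php > /var/log/file_1.log 2>&1 &
nohup php /path/to/file_2.php > /var/log/file_2.log 2>&1 &
exit 0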
If you can't figure out why the PHP processes get shut down, try supervisord as @Chris Haas mentioned.

PHP script hangs when calling external command from CLI

I have a PHP script that executes an external bash script to make an SSH connection, but even though I am using ssh's move-to-background option (-f) as well as an '&', my PHP script hangs.
Problem line in PHP script
system('/usr/local/bin/startProxy 172.16.0.5 9051');
I have also tried :
system('/usr/local/bin/startProxy 172.16.0.5 9051 &');
And the startProxy script is simply :
#!/bin/bash
#
# startProxy <IP_Address> <Proxy_Port>
#
# Starts an ssh proxy connection using -D <port> to remote system
#
ssh -o ConnectTimeout=5 -f -N -D $2 $1 &
Calling the startProxy script from the command line works fine and the script returns immediately.
When the system() command is run, the called script does run (I can see ssh running via ps), it just never appears to return to the PHP script.
I have run into the same issue when trying to call ssh directly via the system() method as well.
Thanks @Martin,
For future self and others, I had to change the system call to
system('/usr/local/bin/startProxy 172.16.0.5 9051 2>&1 >/dev/null');
and to change the startProxy script to :
ssh -o ConnectTimeout=5 -f -N -D $2 $1 2>&1 >/dev/null
before PHP returned to the rest of the script!
Can anyone explain why PHP stops like this if the output is not redirected (I presume PHP isn't scanning the command and seeing the redirection part), and also why PHP hangs if I don't include the redirection in the ssh command within startProxy, despite the fact that both the PHP system() call and the ssh command in startProxy were using background options (-f and '&')?
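A likely explanation (an assumption on my part, not confirmed in this thread): system() keeps reading the command's output until it reaches end-of-file, and the backgrounded ssh inherits that output pipe and keeps it open, so PHP blocks even though the shell and the wrapper script have already exited. Redirecting the output away from PHP's pipe is what lets system() return, e.g.:
// Send both streams to /dev/null so no descendant process keeps PHP's pipe open
system('/usr/local/bin/startProxy 172.16.0.5 9051 >/dev/null 2>&1');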

How can I check if a program is running in the background using a cron, and start it if needed?

I have the task of running a daemon in the background on a production server. However, I want to be sure that this daemon always runs. The daemon is a PHP process.
I tried to approach this by checking if the daemon is running, and if not, starting it. So I have a command like:
if [ $(ps ax | grep -c "akeneo:batch:job-queue-consumer-daemon") -lt 3 ]; then php /home/sibo/www/bin/console akeneo:batch:job-queue-consumer-daemon & fi
I first do an if with ps and grep -c to check if there are processes running with the given name, and if not, I start the command, ending it with an & so it runs in the background.
The above command works: if I execute it from the command line the process gets started, and I can see that it is running when I execute a simple ps ax command.
However, as soon as I try to do this using the crontab it doesn't get started:
* * * * * if [ $(ps ax | grep -c "akeneo:batch:job-queue-consumer-daemon") -lt 3 ]; then php /home/sibo/www/bin/console akeneo:batch:job-queue-consumer-daemon & fi
I also set the MAILTO header in the crontab, but I'm not getting any e-mails either.
Can anyone tell me what's wrong with my approach? And how I can get it started?
An easy and old-style approach is to create a bash script where you basically check if the process is running, and start it otherwise.
Here the content of the bash file:
#!/bin/bash
if [ $(ps -efa | grep -v grep | grep -c job-queue-consumer-daemon) -gt 0 ];
then
    echo "Process running ...";
else
    php /home/sibo/www/bin/console akeneo:batch:job-queue-consumer-daemon
fi;
Then in the crontab file you run the bash file.
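For example, to run the check every minute (the script path and log file are hypothetical):
* * * * * /home/sibo/bin/check-daemon.sh >> /var/log/check-daemon.log 2>&1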
There are special services for such tasks. For example http://supervisord.org/
Supervisor is a client/server system that allows its users to monitor and control a number of processes on UNIX-like operating systems.
And you can manage it via, for example, https://github.com/supervisorphp/supervisor
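A minimal program entry for this case might look like the following (a sketch; the program name and log paths are assumptions):
[program:job-queue-consumer]
command=php /home/sibo/www/bin/console akeneo:batch:job-queue-consumer-daemon
autostart=true
autorestart=true
stdout_logfile=/var/log/job-queue-consumer.out.log
stderr_logfile=/var/log/job-queue-consumer.err.log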
A command working on the command line but not working in cron: this happened to me, and here is what solved my problem.
Run echo $PATH in your terminal and copy the entire output.
Then type crontab -e and, at the top of the file, write this:
PATH=WHATEVER_YOU_COPIED_FROM_LAST_COMMAND_OUTPUT
PS: (more suggestions)
I think you need to install postfix (apt-get install postfix on Ubuntu) to be able to send emails.
You can also check the cron logs with:
grep CRON /var/log/syslog
I would recommend using supervisord; it handles these kinds of issues with automatic restarts of failed services. Additionally, you can try to set the akeneo command up as a service.
Otherwise, if you would like to do it using cron jobs, you may have an issue with the php binary: you need to use its absolute path,
e.g. /usr/bin/php
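Applied to the crontab line from the question, that would look something like this (a sketch):
* * * * * if [ $(ps ax | grep -c "akeneo:batch:job-queue-consumer-daemon") -lt 3 ]; then /usr/bin/php /home/sibo/www/bin/console akeneo:batch:job-queue-consumer-daemon & fi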
If you use a cron job, I would also recommend:
Checking the cron logs for additional issues:
grep CRON /var/log/syslog
Cleaning it up into a standalone bash script (don't forget to chmod +x it)

PHP Docker write to stderr through file

Is it possible to redirect a plain-text log file to stderr within a Docker container, while keeping it writable by PHP?
I have a PHP application that writes to a file, and we're trying to move it into a Docker container without changing any code. I've tried symlinking, but this results in permission errors.
Not sure what type of symlinking you tried but you should be able to do this with something like
RUN ln -sf /dev/stderr /path/to/your/text.log
In your Dockerfile (where the text.log path is the one inside the container).
I ended up finding this solution, which uses a FIFO (named pipe) to pipe the data to stderr.
# Allow `nobody` user to write to /dev/stderr
mkfifo -m 600 /tmp/logpipe
chown nobody:nobody /tmp/logpipe
# Keep the pipe open for reading and writing so writers never block; forward everything to stderr
cat <> /tmp/logpipe 1>&2 &
https://github.com/moby/moby/issues/6880#issuecomment-344114520
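In a Docker entrypoint this could look something like the following (a sketch; php-fpm is the base image's default command, and the PHP application would be configured to write its log file to /tmp/logpipe):
#!/usr/bin/env bash
# Create the pipe and let the unprivileged user write to it
mkfifo -m 600 /tmp/logpipe
chown nobody:nobody /tmp/logpipe
# Keep a reader attached and forward everything written to the pipe to stderr
cat <> /tmp/logpipe 1>&2 &
exec php-fpm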

How do I execute multiple PHP files in single shell line?

I have some PHP files. Each of them starts a socket listener or runs an infinite loop. The scripts block (never return) when executed via the php command:
php sock_listener.php ...halt there
php listener2.php ... halt there
...
Currently I use the screen command to start all the listener PHP files every time the machine is rebooted. Is there a way I can start all the listener PHP files in a single shell line so that I can write a shell script to make it easier to use?
Using screen
Create a detached screen session for the first script:
session='php-test'
screen -S "$session" -d -m -t A php a.php
where -d -m combination causes screen to create a detached session.
Run the rest of the scripts in the same session in separate windows:
screen -S "$session" -X screen -t B php b.php
screen -S "$session" -X screen -t C php c.php
where
-X sends the built-in screen command to the running session;
-t sets the window title.
The session will be available in the output of the screen -ls command:
There is a screen on:
8951.php-test (Detached)
Connect to the session using the -r option, e.g.:
screen -r 8951.php-test
List the windows within the screen session with the Ctrl-a " shortcut, or the windowlist -b command.
Forking Processes to Background
A less convenient way is to send the commands to the background by appending an ampersand to the end of each command:
nohup php a.php 2>a.php.err >a.php.out &
nohup php b.php 2>b.php.err >b.php.out &
nohup php c.php 2>c.php.err >c.php.out &
where
nohup prevents termination of the commands if the user logs out of the shell. Read this tutorial for more information;
2>a.php.err redirects the standard error to a.php.err file;
>a.php.out redirects the standard output to a.php.out file.
Is there a way I can start all the listener PHP files in a single shell line so that I can write a shell script to make it easier to use?
You can put the above-mentioned commands into a shell script file, e.g.:
#!/bin/bash -
# Put the commands here
make it executable:
chmod +x /path/to/script
and call it when you need it:
/path/to/script
Modify the shebang as appropriate.
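For example, the script could simply contain the screen commands from above (a sketch using the file names from the question; the session and window names are illustrative):
#!/bin/bash -
session='php-listeners'
screen -S "$session" -d -m -t listener1 php sock_listener.php
screen -S "$session" -X screen -t listener2 php listener2.php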
Just run them under circus. Circus will let you define a number of processes and how many instances you want to run, and just keep them running.
https://circus.readthedocs.io/en/latest/
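A minimal circus.ini for the two listeners might look like this (a sketch; the watcher names are illustrative):
[watcher:sock_listener]
cmd = php sock_listener.php
numprocesses = 1

[watcher:listener2]
cmd = php listener2.php
numprocesses = 1

You would then start everything with circusd circus.ini.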
