My Dockerfile:
FROM php:7.0-fpm
# Install dependencies, etc
RUN mkfifo /tmp/stdout \
 && chmod 777 /tmp/stdout
ADD docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"]
As you can see, I'm creating a named pipe at /tmp/stdout. And my docker-entrypoint.sh:
#!/usr/bin/env bash
# Some run-time configuration stuff...
exec "./my-app" "$@" | tail -f /tmp/stdout
My PHP application (an executable named my-app) writes its application logs to /tmp/stdout. I want those logs to then be captured by Docker so that I can do docker logs <container_id> and see the logs that the application wrote to /tmp/stdout. I am attempting to do this by running the my-app command and then tailing /tmp/stdout, which will then output the logs to stdout.
What I'm seeing happen is that when I run my application, it hangs when it writes the first log message. I believe this happens because there is nothing "reading" from the named pipe, and writing to a named pipe blocks until something reads from it. This is confirmed if I do docker exec -it <container_id> bash, and then do tail -f /tmp/stdout myself inside the container. Once I do that, the container immediately exits because the application has written its logs to the named pipe.
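For reference, this blocking behavior can be reproduced in a plain shell with two terminals (a minimal sketch; /tmp/demo is just an illustrative path):
mkfifo /tmp/demo
echo hello > /tmp/demo    # terminal 1: blocks until a reader opens the pipe
cat /tmp/demo             # terminal 2: opening the read end unblocks the writer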
For reasons that I won't bloat this post with, it's not possible for my-app itself to write logs to stdout. It has to use /tmp/stdout.
Can anybody tell me why this isn't working, and what I need to change? I expect I have to change the exec call in docker-entrypoint.sh, but I'm not sure how. Thank you!
What I'm seeing happen is that when I run my application, it hangs when it writes the first log message. I believe this happens because there is nothing "reading" from the named pipe, and writing to a named pipe blocks until something reads from it.
This is correct, see fifo(7). But with your example code
exec "./my-app" "$@" | tail -f /tmp/stdout
this should actually work, since the pipeline starts ./my-app and tail simultaneously, so there is something reading from /tmp/stdout.
But one problem here is that tail -f will never terminate by itself, and so neither will your docker-entrypoint.sh or your container. You could fix this with:
tail --pid=$$ -f /tmp/stdout &
exec "./my-app" "$@"
tail --pid terminates as soon as the process with the given pid terminates; here $$ is the pid of the bash process (and, through exec, later the pid of ./my-app).
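Putting the two lines into the entrypoint, the whole script might look like this (a sketch; the run-time configuration is elided as in the original):
#!/usr/bin/env bash
# Some run-time configuration stuff...

# Open the fifo for reading in the background; tail exits as soon as
# the process with pid $$ (which exec turns into ./my-app) terminates.
tail --pid=$$ -f /tmp/stdout &
exec "./my-app" "$@"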
For reasons that I won't bloat this post with, it's not possible for my-app itself to write logs to stdout. It has to use /tmp/stdout.
Does this mean it has to write to a filesystem path or is the path /tmp/stdout hardcoded?
If you can use any path, you can use /dev/stdout, /proc/self/fd/1, or /proc/$$/fd/1 as the logging path to let your application write to stdout.
If /tmp/stdout is hardcoded, try symlinking it to /dev/stdout:
ln -s /dev/stdout /tmp/stdout
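With that approach the mkfifo in the Dockerfile is no longer needed; a sketch of the adjusted entrypoint (assuming my-app keeps its stdout open):
#!/usr/bin/env bash
# /dev/stdout resolves to the calling process's own fd 1, so after the
# exec below it points at the container's log stream.
ln -sf /dev/stdout /tmp/stdout
exec "./my-app" "$@"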
I'm trying to use PHP to trigger a bash script that should never stop running. It's not just that the command needs to run without my waiting for its output; it needs to continue running after PHP is finished. This has worked other times (and the question has been asked already); the difference seems to be that my bash script has a trap for when it's closed.
Here is my bash script:
#!/bin/bash
set -e
WAIT=5
FILE_LOCK="$1"
echo "Daemon started (PID $$)..."
echo "$$" > "$FILE_LOCK"
trap cleanup 0 1 2 3 6 15
cleanup()
{
    echo "Caught signal..."
    rm -rf "$FILE_LOCK"
    exit 1
}
while true; do
    # do things
    sleep "$WAIT"
done
And here is my PHP:
$command = '/path/to/script.sh /tmp/script.lock >> /tmp/script.log 2>&1 &';
$lastLine = exec($command, $output, $returnVal);
I see the script run, the lock file get created, then it exits, and the trap removes the lock file. In my /tmp/script.log I see:
Daemon started (PID 55963)...
Caught signal...
What's odd is that this only happens when running the PHP via Apache. From command line it keeps running as expected.
The signal on the trap that's being caught is 0.
I've tried wrapping my command in a bash environment, like $command = '/bin/bash -c "' . addslashes($command) . '"';, and also tried adding nohup to the beginning. Nothing seems to be working. Is this possible to do for a never-ending script?
Found the problem thanks to @lxg.
My # do things commands were giving errors, which caused the script to exit. For some reason the errors were suppressed.
When I removed set -e from the beginning of my bash script, I started seeing the errors output to my log file. Not sure why they didn't show up before.
The issue was that my bash loop was running PHP commands. Even though my bash user and Apache user are the same, for some reason they had different $PATHs. This meant that when running on the command line I was using a PHP7 binary, but when Apache triggered bash commands it was using a PHP5 binary (even though Apache itself is configured to use PHP7). So the application errored out, and that is what caused the script to die.
The solution was to explicitly set the PHP binary path in my bash loop.
I was doing this with
BIN_PHP=$(which php)
But on a true command line it would return one value (/path/to/php7/bin/php) vs a command line initiated by Apache (/path/to/php5/bin/php). Despite Apache running as the same user as my command line, it didn't load the ~/.bashrc that specified my correct PHP path.
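So instead of resolving the binary with which, the loop now pins it explicitly, along these lines (the script path is illustrative):
# Hardcode the interpreter; $PATH differs between a login shell and Apache.
BIN_PHP="/path/to/php7/bin/php"
"$BIN_PHP" /path/to/my-task.php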
I need to make a background script that is spawned by a PHP command line script and echoes to the SSH session. Essentially, I need to do this Linux command:
script/path 2>&1 &
If I just run this command in linux, it works great. Output is still displayed to the screen, but I can still use the same session for other commands. However, when I do this in PHP, it doesn't work the same way.
I've tried:
`script/path 2>&1 &`;
exec("script/path 2>&1 &");
system("script/path 2>&1 &");
...And none of these work. I need it to spawn the process, and then kill itself so that I can free up the session, but I still want the output from the child process to print to the screen.
(please comment if this is unclear... I had a hard time putting this into words :P)
I came up with a solution that works in this case.
I created a wrapper bash script that starts up the PHP script, which in turn spawns a child script whose output is redirected to a file; the bash wrapper then tails that file.
Here is the bash script I came up with:
php php_script.php "$@"
ps -ef | grep php_script.log | grep -v grep | awk '{print $2}' | xargs kill > /dev/null 2>&1
if [ "$1" != "-stop" ]; then
    tail -f php_script.log -n 0 &
fi
(it also cleans up "tail" processes that are still running so that you don't get a gazillion processes when you run this bash script multiple times)
And then in your PHP script, you spawn the external child script like this:
exec("php php_script.php >> php_script.log &");
This way the parent PHP script exits without killing the child script, you still get the output from the child script, and your command prompt is still available for other commands.
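Usage then looks like this (assuming the wrapper above is saved as wrapper.sh):
./wrapper.sh          # runs php_script.php, then tails php_script.log to the terminal
./wrapper.sh -stop    # passes -stop through to the PHP script and skips the tail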
I have 80 PHP scripts that I want to run. I want to start Script1, wait until it finishes, then start Script2, and so on.
Now, if I add the following :
start C:\xampp\php\php.exe -f C:\xampp\Script1.php
start C:\xampp\php\php.exe -f C:\xampp\Script2.php
start C:\xampp\php\php.exe -f C:\xampp\Script3.php
When I execute my batch file, it runs all the scripts at the same time.
Is there any command I could use in my bat file to tell the system to wait until one script is done before executing the next?
Any way to set a time interval between scripts?
I could run all 80 scripts in the crontab; however, I'd rather configure one batch file to handle all the scripts and then assign that file to the crontab.
Thanks
start fires off the tasks asynchronously, thus they will all run at the same time. Use the /wait flag to start the task and wait for it to complete.
e.g.
start /wait C:\xampp\php\php.exe -f C:\xampp\Script1.php
start /wait C:\xampp\php\php.exe -f C:\xampp\Script2.php
start /wait C:\xampp\php\php.exe -f C:\xampp\Script3.php
Alternatively, as Marc B states, removing the start call should give you sequential execution as well.
As Joseph Sibler suggests, you could craft a single PHP file that calls them sequentially. You could even generate that PHP file using something like xargs (available on Windows with mingw32, Cygwin, and probably other sources).
Something like this ought to do it:
del single.php
dir /b Script*.php | xargs cat >> single.php
start c:\xampp\php\php.exe -f single.php
I have a PHP script that listens on a queue. Theoretically, it's never supposed to die. Is there something to check if it's still running? Something like Ruby's God ( http://god.rubyforge.org/ ) for PHP?
God is language agnostic, but it would be nice to have a solution that works on Windows as well.
I had the same issue - wanting to check if a script is running. So I came up with this, and I run it as a cron job. It grabs the running processes as an array and cycles through each line, checking for the file name. Seems to work fine. Replace #user# with your script user.
exec("ps -U #user# -u #user# u", $output, $result);
foreach ($output as $line) {
    if (strpos($line, "test.php") !== false) echo "found";
}
In linux run ps as follows:
ps -C php -f
You could then do in a php script:
$output = shell_exec('ps -C php -f');
if (strpos($output, "php my_script.php") === false) {
    shell_exec('php my_script.php > /dev/null 2>&1 &');
}
The above code lists all running php processes in full, then checks whether "my_script.php" is in the list; if not, it starts the process in the background without waiting for it to terminate.
Just append a second command after the script. When/if it stops, the second command is invoked. Eg.:
php daemon.php 2>&1 | mail -s "Daemon stopped" you@example.org
Edit:
Technically, this invokes the mailer right away, but only completes the command when the php script ends. Doing this captures the output of the php-script and includes in the mail body, which can be useful for debugging what caused the script to halt.
Simple bash script
#!/bin/bash
while true; do
    if ! pidof -x script.php > /dev/null; then
        php script.php &
    fi
    sleep 5   # avoid a busy loop between checks
done
Not for windows, but...
I've got a couple of long-running PHP scripts, each wrapped in a shell script. You can optionally return a value from the PHP script that the shell script checks in order to exit, restart immediately, or sleep for a few seconds and then restart.
Here's a simple one that just keeps running the PHP script till it's manually stopped.
#!/bin/bash
clear
date
php -f cli-SCRIPT.php
echo "wait a little while ..."; sleep 10
exec $0
The "exec $0" restarts the script, without creating a sub-process that will have to unravel later (and take up resources in the meantime). This bash script wraps a mail-sender, so it's not a problem if it exits and pauses for a moment.
Here is what I did to combat a similar issue. It helps if you have a parameterized php script that you want cron to execute frequently, but want only one execution running at any time. Add this to the top of your php script, or create a common method.
$runningScripts = shell_exec('ps -ef | grep '.strtolower($parameter).' | grep '.dirname(__FILE__).' | grep '.basename(__FILE__).' | grep -v grep | wc -l');
if ($runningScripts > 1) {
    die();
}
You can write in your crontab something like this:
0 3 * * * /usr/bin/php -f /home/test/test.php my_special_cron
Your test.php file should look like this:
<?php
php_sapi_name() == 'cli' || exit;
if (isset($argv[1])) {
    substr_count(shell_exec('ps -ax'), $argv[1]) < 3 || exit;
}
// your code here
That way you will have only one active instance of the cron job with my_special_cron as the process key. So you can add more jobs within the same php file.
test.php system_send_emails sendEmails
test.php system_create_orders orderExport
Inspired by Justin Levene's answer and improved on it, as ps -C doesn't work on Mac, which I needed in my case. You can use this in a php script (maybe just before you need the daemon alive). Tested on both Mac OS X 10.11.4 & Ubuntu 14.04:
$daemonPath = "FULL_PATH_TO_DAEMON";
$runningPhpProcessesOfDaemon = (int) shell_exec("ps aux | grep -c '[p]hp ".$daemonPath."'");
if ($runningPhpProcessesOfDaemon === 0) {
    shell_exec('php ' . $daemonPath . ' > /dev/null 2>&1 &');
}
Small but useful detail: why grep -c '[p]hp ...' instead of grep -c 'php ...'?
Because the grep process itself shows up in the ps output and matches the plain pattern, so it would be counted too. The character class [p]hp still matches "php" in other processes' command lines, but the grep command line (which contains the literal string [p]hp) no longer matches its own pattern.
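A quick way to see the difference (the daemon path is illustrative):
ps aux | grep -c 'php /path/to/daemon.php'     # usually counts the grep itself, so >= 1 even when the daemon is down
ps aux | grep -c '[p]hp /path/to/daemon.php'   # 0 when the daemon is down, 1 when it is running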
One possible solution is to have it listen on a port using the socket functions. You can check that the socket is still listening with a simple script. Even a monitoring service like pingdom could monitor its status. If it dies, the socket is no longer listening.
Plenty of solutions. Good luck.
If you have your hands on the script, you can just have it write a timestamp to the db every X iterations, and then let a cron job check whether that value is up to date.
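A file-based sketch of the same idea, using a heartbeat file's mtime instead of a db value (paths and threshold are assumptions):
# inside the long-running script's loop:
touch /tmp/daemon.heartbeat

# cron job: restart if the heartbeat is missing or older than 10 minutes
if [ ! -e /tmp/daemon.heartbeat ] || [ -n "$(find /tmp/daemon.heartbeat -mmin +10)" ]; then
    php /path/to/daemon.php > /dev/null 2>&1 &
fi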
troelskn wrote:
Just append a second command after the script. When/if it stops, the second command is invoked. Eg.:
php daemon.php | mail -s "Daemon stopped" you@example.org
This will call mail each time a line is printed in daemon.php (which should be never, but still.)
Instead, use the double ampersand operator to separate the commands, i.e.
php daemon.php && mail -s "Daemon stopped" you@example.org
If you're having trouble checking for the PHP script directly, you can make a trivial wrapper and check for that. I'm not sufficiently familiar with Windows scripting to put how it's done here, but in Bash, it'd look like...
wrapper_for_test_php.sh
#!/bin/bash
php test.php
Then you'd just check for the wrapper like you'd check for any other bash script: pidof -x wrapper_for_test_php.sh
I have used cmder for Windows and, based on this script, came up with this one, which I later managed to deploy on Linux.
#!/bin/bash
clear
date
while true; do
    php -f processEmails.php
    echo "wait a little while for 5 seconds..."
    sleep 5
done