I'm trying to create a bash script that starts two processes: PHP-FPM and Nginx.
PHP-FPM should start first, and only once it has finished starting up (port 9000 becoming reachable is one sign, for example, though there may be other ways to check) should the Nginx server be started.
My current script looks like this:
#!/usr/bin/env bash
set -e
php-fpm -F &
nginx &
wait -n
But sometimes, early on, Nginx will give me a 502 Bad Gateway error because PHP-FPM is not ready yet.
What's the cleanest/best way of getting this startup in order?
Regards,
Kees.
You can modify your script to poll port 9000 and start Nginx only once PHP-FPM actually accepts connections:
#!/usr/bin/env bash
set -e
php-fpm -F &
# poll until PHP-FPM accepts TCP connections on port 9000
until nc -z 127.0.0.1 9000; do sleep 0.1; done
nginx &
wait -n
Note that php-fpm -F && nginx would not achieve this: as explained in this answer on Unix&Linux, the && operator runs the second command only after the first has exited successfully, and since php-fpm -F stays in the foreground, Nginx would only start once PHP-FPM had terminated.
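If your FPM pool listens on a Unix socket rather than a TCP port, the same idea works by watching for the socket file instead. A minimal sketch, assuming a socket path of /run/php/php-fpm.sock (check the listen directive in your pool configuration):
#!/usr/bin/env bash
set -e
php-fpm -F &
# FPM creates the socket once it is ready to accept connections;
# the path below is an assumption taken from a typical pool config
until [ -S /run/php/php-fpm.sock ]; do sleep 0.1; done
nginx &
wait -n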
I have an Ubuntu VM running in VirtualBox that hosts a server using Apache. The concept of the server is to accept HTTP POST requests, store them in a MySQL database and then execute a Python script with the relevant POST data to be displayed in a Discord channel.
The process itself works, but each time the PHP script calls the Python script, a new process is created that never actually ends. After a few hours of receiving live data the server runs out of available memory due to the number of lingering processes. The PHP script has the following exec call as its last line of code:
exec("python3 main.py $DATA");
I would like to come up with a way to actually kill the processes created from this exec command (using user www-data), either in the Python file after the script is executed or automatically with an Apache setting that I probably just do not know about.
When running the following command in a terminal I can see the different processes:
ps -o pid,user,%mem,command ax | sort -b -k3 -r
There are three separate processes that show up, one referencing the actual python3 command from the PHP exec call:
9903 www-data 0.4 python3 main.py DATADATADATADATADATADATA
Then another process showing the more common -k start command:
9907 www-data 0.1 /usr/sbin/apache2 -k start
And lastly another process, very similar to the PHP exec command:
9902 www-data 0.0 sh -c python3 main.py DATADATADATADATADATADATA
How can I ensure Apache cleans these processes up - OR what do I need to add into my Python or PHP code to appropriately exec a Python script without leaving behind processes?
I didn't realize the exec command in PHP would wait for the command's output indefinitely. I added this to the end of the string I was using in my exec call: > /dev/null &
i.e.: exec("python3 main.py $DATA > /dev/null &");
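One more thing that may be worth doing: if the Python script ever writes to stderr, that open stream can also keep exec() waiting, so redirecting it too is the safer pattern. A minimal shell-level sketch of the fully detached form (setsid is optional and simply detaches the job from the calling session):
#!/bin/bash
# redirect both stdout and stderr and background the job, so the
# sh -c wrapper that PHP's exec() spawns can return immediately
setsid python3 main.py "$DATA" > /dev/null 2>&1 &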
We have a Docker server ('Docker version 17.03.0-ce, build 60ccb22') and a number of workers, around 10, each of which performs a really simple task that takes a few seconds to complete and exit. We decided that every one of them would start a Docker container, and when the script finishes, the container gets stopped and removed. What is more, crontabs deal with running them. So, we created a bash script for every worker that instantiates the container with the flags --rm and -d and also starts the script file in the bin/ folder:
#!/bin/sh
f=`basename $0`
# name of the bash script without the part after the "."
workerName=${f%.*}
# We link the worker's folder with the Docker host, plus a log file used for monitoring from outside the container.
docker run --rm -d --name $workerName -v `cat /mnt/volume-fra1-05/apps/pd-executioner/master/active_version`:/var/www/html -v /mnt/volume-fra1-06/apps/$workerName.log:/var/www/html/logs/$workerName.log iqucom/php-daemon-container php bin/$workerName
echo `date` $0 >> /var/log/crontab.log
All the workers are very similar in structure and code, and really simple; there are no big differences between them. However, we have experienced the following behaviour: some containers (random ones every time) refuse to stop and be removed even after many hours. Inside the container, the process php bin/$workerName is still running with PID 1. There is nothing like an infinite loop in the code that could stop the script from finishing. It happens randomly and we still cannot find a pattern. Do you have any idea why this might be happening?
This may be an issue with your PHP script getting stuck somehow. But since you are sure it is supposed to finish within, let's assume, 240 seconds, you can change your container command to:
docker run --rm -d --name $workerName -v `cat /mnt/volume-fra1-05/apps/pd-executioner/master/active_version`:/var/www/html -v /mnt/volume-fra1-06/apps/$workerName.log:/var/www/html/logs/$workerName.log iqucom/php-daemon-container timeout 240 php bin/$workerName
This will make sure that any stuck container exits after the timeout if it doesn't exit on its own.
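As a side note, GNU timeout sends SIGTERM when the limit is reached and exits with status 124, which makes timed-out runs easy to tell apart from normal ones in your crontab log. A small sketch to verify that behaviour, with sleep standing in for the PHP worker:
#!/bin/sh
# simulate a stuck worker: timeout kills it after 2 seconds
timeout 2 sleep 60
# 124 means the time limit was hit; otherwise it is the command's own exit status
echo "exit status: $?"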
I use nginx and php7.1-fpm. I want to run a background process using PHP and exec().
My short code:
<?php
exec('/usr/local/bin/program > /dev/null 2>&1');
Unfortunately, after systemctl restart php7.1-fpm, the program is killed.
I have tried to run with a different user than the one running the pool:
<?php
exec('sudo -u another_user /usr/local/bin/program > /dev/null 2>&1');
However, this does not solve the problem; the program is still killed.
I cannot use ssh2_connect(). How can I solve this problem?
It seems this is due to the php-fpm service being managed by systemd.
All processes launched from php-fpm belong to its control group, and when you restart the service, systemd sends SIGTERM to all processes in the control group, even if they are daemonized, detached and/or belong to another session.
You can check your control-groups with this command:
systemd-cgls
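To inspect just the php-fpm service rather than the whole tree, systemctl status prints that service's control-group subtree as well, including any exec()'d children still attached to it:
# the CGroup section of the output lists every process that
# systemd considers part of the service
systemctl status php7.0-fpm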
What I've done is change the KillMode of the php-fpm service to process.
Just edit its .service file:
vi /etc/systemd/system/multi-user.target.wants/php7.0-fpm.service
and change or add the line to the [Service] block:
KillMode=process
Then reload the configuration by executing:
systemctl daemon-reload
That worked for me.
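A possibly safer variant, assuming a reasonably recent systemd: instead of editing the unit file in place (a package upgrade may overwrite it), the same setting can live in a drop-in override:
# opens an editor and stores the override under
# /etc/systemd/system/php7.0-fpm.service.d/
systemctl edit php7.0-fpm
# then add:
#   [Service]
#   KillMode=process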
References:
Can't detach child process when main process is started from systemd
http://man7.org/linux/man-pages/man5/systemd.kill.5.html
What would be wonderful would be a command (similar to setsid) that allowed you to launch a process detached from its control group, but I haven't been able to find one.
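systemd-run comes close to that: it starts the command as a transient unit in its own control group, so restarting php-fpm no longer reaps it. A minimal sketch; the unit name here is made up, and invoking it from PHP would additionally need sudo rules for the pool user:
# run the program as a transient service in its own control group,
# outside php-fpm's cgroup (may require root or polkit authorization)
systemd-run --unit=my-background-job /usr/local/bin/program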
I have a command that runs normally in a terminal:
php -f /home/roshd-user/Symfony/app/console video:convert
I want to run this command as a service on my server, so I created a vconvertor.conf in /etc/init/.
The service itself starts and stops normally, but it does not execute my command. Run directly, outside the service, the command works and returns my result; run from the service, nothing is executed. Why?
vconvertor.conf contains this:
#info
description "Video Convertor PHP Worker"
author "Netroshd"
# Events
start on startup
stop on shutdown
# Automatically respawn
respawn
respawn limit 20 5
# Run the script!
# Note, in this example, if your PHP script returns
# the string "ERROR", the daemon will stop itself.
script
[ $( exec php -f /home/roshd-user/Symfony/app/console video:convert ) = 'ERROR' ] && ( stop; exit 1; )
end script
I would declare setuid and setgid in your config as the Apache user and group (i.e. www-data), and make your command run in the prod Symfony environment:
#info
description "Video Convertor PHP Worker"
author "Netroshd"
# Events
start on startup
stop on shutdown
# Automatically respawn
respawn
respawn limit 20 5
# Run as the www-data user and group (same as Apache is under in Ubuntu)
setuid www-data
setgid www-data
# Run the script!
exec php /home/roshd-user/Symfony/app/console video:convert -e prod --no-debug -q
If you still have issues, it might be worth installing the "wrep/daemonizable-command" package with Composer and making your video:convert command extend Wrep\Daemonizable\Command\EndlessContainerAwareCommand. The library also provides an example of how to use it.
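Once the job file is in place, the standard Upstart tooling can drive and debug it. A few commands that may help; the job name comes from your file name, and the log path is Ubuntu's usual Upstart location:
sudo start vconvertor
sudo status vconvertor
# Upstart usually captures the job's stdout/stderr here on Ubuntu
sudo tail -f /var/log/upstart/vconvertor.log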
I need to write some scripts for some automation work. I put a PHP file on a local Apache server:
test.php
<?php
system("bash inform.sh");
?>
the content of inform.sh is:
#!/bin/bash
proc_id=`ps -ef|grep "sleep"|grep -v "grep"|awk '{print $2}'`
kill -9 $proc_id
I run a sleep process in a shell, and open the PHP page in Firefox: localhost/test.php.
But it doesn't kill the sleep process. If I run the PHP script directly from a shell, it works.
What's wrong with this and how do I deal with it? Thanks.
The PHP script runs as the Apache user, which cannot signal processes owned by another account, so the kill failed for permission reasons. Starting the sleep as that same user makes it work; I use the following shell command and then it is OK:
sudo -u apache sleep 2000
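As a side note, the ps|grep|awk pipeline in inform.sh can be replaced with pkill, which matches and signals in one step. A minimal sketch, assuming the target sleep runs as the apache user:
#!/bin/bash
# -f matches the full command line; -u limits matches to processes
# owned by apache, so unrelated users' sleeps are left alone
pkill -9 -u apache -f "sleep"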