run symfony command as service in ubuntu - php

I have a command that runs normally in the terminal:
php -f /home/roshd-user/Symfony/app/console video:convert
I want to run this command as a service on my server, so I created a vconvertor.conf in /etc/init/.
The service starts and stops normally, but it does not execute my command. The command runs fine and returns my result when I run it directly, but not when it is run by the service.
vconvertor.conf contains this code:
#info
description "Video Convertor PHP Worker"
author "Netroshd"
# Events
start on startup
stop on shutdown
# Automatically respawn
respawn
respawn limit 20 5
# Run the script!
# Note, in this example, if your PHP script returns
# the string "ERROR", the daemon will stop itself.
script
    [ $(exec php -f /home/roshd-user/Symfony/app/console video:convert) = 'ERROR' ] && ( stop; exit 1; )
end script

I would declare setuid and setgid in your config as the Apache user and group (i.e. www-data), and make your command run in the prod Symfony environment:
#info
description "Video Convertor PHP Worker"
author "Netroshd"
# Events
start on startup
stop on shutdown
# Automatically respawn
respawn
respawn limit 20 5
# Run as the www-data user and group (same as Apache is under in Ubuntu)
setuid www-data
setgid www-data
# Run the script!
exec php /home/roshd-user/Symfony/app/console video:convert -e prod --no-debug -q
If you still have issues, it might be worth installing the wrep/daemonizable-command package with Composer and making your video:convert command extend Wrep\Daemonizable\Command\EndlessContainerAwareCommand. The library also provides an example of how to use it.
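With the job file in place, you can drive it with Upstart's tools and check its output; assuming Upstart's default job logging, the command's output ends up in /var/log/upstart/vconvertor.log:
sudo start vconvertor
sudo status vconvertor
sudo tail -f /var/log/upstart/vconvertor.log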

Related

End Process Created by PHP exec in Ubuntu using Apache

I have an Ubuntu VM running in VirtualBox that hosts a server using Apache. The concept of the server is to accept HTTP POST requests, store them in a MySQL database and then execute a Python script with the relevant POST data to be displayed in a Discord channel.
The process itself is working, but each time the PHP script calls the Python script, a new process is created that never actually ends. After a few hours of receiving live data the server runs out of available memory due to the number of lingering processes. The PHP script has the following exec call as the last line of code:
exec("python3 main.py $DATA");
I would like to come up with a way to actually kill the processes created from this exec command (using user www-data), either in the Python file after the script is executed or automatically with an Apache setting that I probably just do not know about.
When running the following command in a terminal I can see the different processes:
ps -o pid,user,%mem,command ax | sort -b -k3 -r
Three separate processes show up. One references the actual python3 command from the PHP exec call:
9903 www-data 0.4 python3 main.py DATADATADATADATADATADATA
Another shows the more common -k start command:
9907 www-data 0.1 /usr/sbin/apache2 -k start
And lastly there is a process very similar to the PHP exec command:
9902 www-data 0.0 sh -c python3 main.py DATADATADATADATADATADATA
How can I ensure Apache cleans these processes up, or what do I need to add to my Python or PHP code to exec a Python script without leaving processes behind?
I didn't realize that exec() in PHP would wait indefinitely for the command's output. I added this to the end of the string I was using in my exec call: > /dev/null &
i.e.: exec("python3 main.py $DATA > /dev/null &");
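A slightly more defensive variant of the same idea (a sketch; escapeshellarg() and the stderr redirect are additions to the original one-liner):
<?php
// Quote the data to avoid shell injection, discard stdout and stderr,
// and background the command so exec() returns immediately.
exec("python3 main.py " . escapeshellarg($DATA) . " > /dev/null 2>&1 &");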

Start process in bash script after other process has started up

I'm trying to create a bash script that starts two processes: PHP-FPM and Nginx.
First PHP-FPM should start, and once it has finished starting up (port 9000 will then be reachable, for example, though there may be other ways of checking that it is ready), the Nginx server should be started.
My current script looks like this:
#!/usr/bin/env bash
set -e
php-fpm -F &
nginx &
wait -n
But sometimes, early on, nginx gives me a 502 Bad Gateway error because php-fpm is not ready yet.
What's the cleanest/best way of getting this startup in order?
Regards,
Kees.
You can modify your script in this way:
#!/usr/bin/env bash
set -e
php-fpm -F && nginx
wait -n
As you can see in this answer on Unix&Linux, the && operator allows you to run the second command only if the first exited successfully.
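Note that with -F php-fpm stays in the foreground, so nginx in the && chain only starts once php-fpm exits. An alternative sketch keeps php-fpm backgrounded but polls for readiness before launching nginx; it assumes the pool listens on TCP 127.0.0.1:9000 and that netcat is installed (for a unix socket you could test for the socket file instead):
#!/usr/bin/env bash
set -e

# keep php-fpm non-daemonized (-F) but run it as a background job of this script
php-fpm -F &

# wait until php-fpm accepts connections before starting nginx
until nc -z 127.0.0.1 9000; do
    sleep 0.2
done

nginx &

# exit when either process dies so a supervisor/container can restart us
wait -n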

How to make sure supervisord worker (php) is running fine on GAE flex env?

I am using Symfony 4 with the App Engine flex environment. I wrote a Symfony console command that is meant to run long-term. The docs say that GAE has supervisord in place, so I can use it to manage the script. How can I make sure the worker is actually running?
I've created the file additional-supervisord.conf with content:
[program:custom-worker]
command = php bin/console app:my-console-command
stdout_logfile = /dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile = /dev/stderr
stderr_logfile_maxbytes=0
user = www-data
autostart = true
autorestart = true
priority = 5
stopwaitsecs = 20
But I can't see anything in the log and don't know whether the command is running properly or not. I also SSHed into the instance and checked the processes: supervisord is running, but I can't see my script among the PHP processes. So I assume it does not work.
How can I check the supervisord logs and track what's going on with the worker? I would appreciate any advice.
Assuming you have stored your additional-supervisord.conf file correctly in the root of your project, give the full path to the console executable in your command:
[program:custom-worker]
command = php %(ENV_APP_DIR)s/bin/console your:command
Further, you can log the command's stdout and stderr to a file. Note that supervisord does not run the command through a shell, so shell redirections need an explicit sh -c wrapper:
[program:custom-worker]
command = sh -c 'php %(ENV_APP_DIR)s/bin/console custom:command 1>%(ENV_APP_DIR)s/app/logs/custom.command.out.log 2>&1'
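To verify the worker from inside the instance, supervisord's own CLI is usually the quickest check. A sketch, assuming supervisorctl is available and pointed at the running supervisord's socket:
supervisorctl status custom-worker      # should show RUNNING with a pid and uptime
supervisorctl tail -f custom-worker     # stream the program's stdout
supervisorctl tail custom-worker stderr # dump recent stderr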

How can I check if a program is running in the background using a cron, and start it if needed?

I have the task of running a daemon in the background on a production server. However, I want to be sure that this daemon is always running. The daemon is a PHP process.
I tried to approach this by checking if the daemon is running, and if not: start it. So I have a command like:
if [ $(ps ax | grep -c "akeneo:batch:job-queue-consumer-daemon") -lt 3 ]; then php /home/sibo/www/bin/console akeneo:batch:job-queue-consumer-daemon & fi
I first do an if with ps and grep -c to check whether there are processes running with the given name, and if not, I start the command, ending it with an & so it runs in the background.
The above command works: if I execute it from the command line the process gets started, and I can see that it is running with a simple ps ax command.
However, as soon as I try to do this using the crontab it doesn't get started:
* * * * * if [ $(ps ax | grep -c "akeneo:batch:job-queue-consumer-daemon") -lt 3 ]; then php /home/sibo/www/bin/console akeneo:batch:job-queue-consumer-daemon & fi
I also set the MAILTO header in the crontab, but I'm not getting any e-mails either.
Can anyone tell me what's wrong with my approach? And how I can get it started?
An easy, old-school approach is to create a bash script that checks whether the process is running and starts it if it isn't.
Here the content of the bash file:
#!/bin/bash
if [ $(ps -efa | grep -v grep | grep -c job-queue-consumer-daemon) -gt 0 ];
then
    echo "Process running ...";
else
    php /home/sibo/www/bin/console akeneo:batch:job-queue-consumer-daemon
fi;
Then in the crontab file you run the bash file.
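For example, a crontab entry that calls the script every minute could look like this (the script path and log location are assumptions; adjust them to your setup):
* * * * * /home/sibo/bin/check-consumer.sh >> /home/sibo/logs/check-consumer.log 2>&1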
There are dedicated tools for such tasks, for example http://supervisord.org/:
Supervisor is a client/server system that allows its users to monitor and control a number of processes on UNIX-like operating systems.
You can manage it via e.g. https://github.com/supervisorphp/supervisor.
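Applied to this daemon, a minimal supervisord program block might look like the following (the program name, user and log path are assumptions; adjust the paths to your install):
[program:akeneo-job-consumer]
command=/usr/bin/php /home/sibo/www/bin/console akeneo:batch:job-queue-consumer-daemon
user=sibo
autostart=true
autorestart=true
stopwaitsecs=20
redirect_stderr=true
stdout_logfile=/home/sibo/logs/akeneo-job-consumer.log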
A command that works on the command line but not in cron: this happened to me, and here is what solved my problem.
Run echo $PATH in your terminal and copy the entire output.
Then type crontab -e and, at the top of the file, write this:
PATH=WHATEVER_YOU_COPIED_FROM_LAST_COMMAND_OUTPUT
PS (more suggestions):
I think you need to install postfix (apt-get install postfix) on Ubuntu to be able to send e-mails.
You can also check the cron logs with:
grep CRON /var/log/syslog
I would recommend using supervisord; it handles these kinds of issues by automatically restarting failed services. Additionally, you can try setting the Akeneo command up as a service.
Otherwise, if you would like to do it using cron jobs, you may have an issue with the php binary: you need to use its absolute path,
e.g. /usr/bin/php
If you do use a cron job, I would also recommend:
Checking the cron logs for additional issues:
grep CRON /var/log/syslog
Moving the logic into a standalone bash script (don't forget to chmod +x it)

PHP kill exec() background process after php-fpm restarted

I use nginx and php7.1-fpm. I want to run a background process using PHP and exec().
My short code:
<?php
exec('/usr/local/bin/program > /dev/null 2>&1');
Unfortunately, after systemd restarts php7.1-fpm (systemctl restart php7.1-fpm), the program is killed.
I have tried running it as a different user than the one running the pool:
<?php
exec('sudo -u another_user /usr/local/bin/program > /dev/null 2>&1');
However, this does not solve the problem; the program is still killed.
I cannot use ssh2_connect(). How can I solve this problem?
It seems this is due to the php-fpm service being managed by systemd.
All processes launched from php-fpm belong to its control group, and when you restart the service, systemd sends SIGTERM to all processes in that control group, even if they are daemonized, detached and/or belong to another session.
You can check your control-groups with this command:
systemd-cgls
What I've done is change the KillMode of the php-fpm service to process.
Just edit its .service file:
vi /etc/systemd/system/multi-user.target.wants/php7.0-fpm.service
and change or add this line in the [Service] block:
KillMode=process
Then reload the configuration by executing:
systemctl daemon-reload
That worked for me.
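If you prefer not to edit the unit file shipped by the package, the same setting can go in a drop-in override instead (a sketch; the service name may differ, e.g. php7.1-fpm on the asker's setup):
sudo mkdir -p /etc/systemd/system/php7.1-fpm.service.d
printf '[Service]\nKillMode=process\n' | sudo tee /etc/systemd/system/php7.1-fpm.service.d/killmode.conf
sudo systemctl daemon-reload
sudo systemctl restart php7.1-fpm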
References:
Can't detach child process when main process is started from systemd
http://man7.org/linux/man-pages/man5/systemd.kill.5.html
What would be wonderful would be a command (similar to setsid) that allowed you to launch a process and detach it from the control group, but I haven't been able to find one.
