I have a Docker container in which I need to run the following command:
php /var/www/html/artisan queue:work &
It starts a worker process that looks for jobs and executes them.
I can run it with docker exec -it while the container is running.
But I need to do it from the Dockerfile so that when my container redeploys, it starts the worker automatically. I have tried
RUN php /var/www/html/artisan queue:work
CMD ["php","/var/www/html/artisan","queue:work"]
ENTRYPOINT ["php","/var/www/html/artisan","queue:work"]
separately, of course, but none of them works. With CMD and ENTRYPOINT my container starts returning a 502 error and my service becomes inaccessible.
What am I doing wrong?
You could do this in multiple ways.
You could write a shell script that starts the background process first and then starts your API.
CMD ["./start_server.sh"]
Contents of ./start_server.sh
#!/bin/bash
# Start the queue worker in the background
php /var/www/html/artisan queue:work &
# Replace the shell with the API server so it runs as the container's main process
exec php-server-serving-api
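For the Laravel setup in the question, assuming the API is served by php-fpm (an assumption; substitute whatever actually serves your API), the script could be:
#!/bin/bash
# Hypothetical concrete version: queue worker in the background,
# php-fpm as the foreground main process
php /var/www/html/artisan queue:work &
exec php-fpm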
You could also do this through a docker entrypoint shell script
ENTRYPOINT ["./docker-entrypoint.sh"]
CMD ["php-server-serving-api"]
Contents of ./docker-entrypoint.sh
#!/bin/bash
# Start the queue worker in the background
php /var/www/html/artisan queue:work &
# Hand over to the CMD; note "$@" (all arguments), not $# (the argument count)
exec "$@"
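With this split, the CMD becomes the arguments of the entrypoint script, so "$@" expands to the main command, and you can still override it at run time, e.g. (image name hypothetical):
docker run my-image          # runs the default CMD; the worker starts first
docker run my-image bash     # debugging shell; the worker still starts first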
However, if they are separate types of workloads, what I recommend is to run them in separate containers. If the background processing task crashes, there is nothing to restart it; if you run it as its own container, you can use the orchestrator (a restart policy, for instance) to restart it.
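For example, a minimal docker-compose sketch (service and image names are made up) that runs the same image twice, once for the API and once for the worker:
version: "3"
services:
  app:
    image: my-laravel-app            # hypothetical image; serves the API
    ports:
      - "80:80"
  worker:
    image: my-laravel-app
    command: php /var/www/html/artisan queue:work
    restart: always                  # the worker is restarted if it crashes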
Related
I have this in my Dockerfile:
RUN apt-get -y install cron
RUN cron
COPY symfony.cron /var/spool/cron/crontabs/www-data
RUN chown www-data:crontab /var/spool/cron/crontabs/www-data
CMD ["start.sh"]
And in my start.sh:
echo "+ start cron service"
service cron start
All good, except the cron jobs are not running. This is strange, because when I go into the Docker container and run service cron status I get [ ok ] cron is running.
So all good there. And when I try to execute one of the commands by hand, it runs fine. And when I do crontab -u www-data -l, I get the list with all the cron jobs. So everything looks right up to this point, but the cron jobs are never executed. I don't understand the problem; maybe it is some permissions issue, I don't know. Please help! Thanks in advance, and sorry for my English.
Any ideas, please?
You should broadly assume that commands like service cron start don't work in Docker. Just run the program you're trying to run directly, and make sure it starts as a foreground process. On this Debian-based image the daemon binary is cron (crond is the Alpine/BusyBox name):
CMD ["cron", "-f"]
(If your main container script runs a service start command, that command exits, then the main container script exits, and then the container exits, which isn't what you want. And if service is backed by systemd or another init system, your container won't be running that.)
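Putting it together, a minimal sketch of the corrected Dockerfile from the question (base image omitted, as in the original):
RUN apt-get update && apt-get -y install cron
COPY symfony.cron /var/spool/cron/crontabs/www-data
# Debian requires per-user crontabs to be group crontab and mode 600
RUN chown www-data:crontab /var/spool/cron/crontabs/www-data \
 && chmod 600 /var/spool/cron/crontabs/www-data
# cron itself is the main, foreground process of the container
CMD ["cron", "-f"]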
Hi, I don't know how I can run a cron job inside this container.
I've found this: How to run a cron job inside a docker container
But that overrides the CMD, and I don't know how to keep php-fpm running.
When you need to run multiple processes in your Docker container, one solution is to use supervisord as the main command. Docker will start and monitor supervisord, which in turn will start your other processes.
Dockerfile example:
FROM debian:9
...
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/my.conf"]
Supervisord config example (/etc/supervisor/my.conf):
[supervisord]
nodaemon=true
[program:cron]
command=/usr/sbin/cron -f
stdout_logfile=/dev/stdout
stderr_logfile=/dev/stderr
stdout_logfile_maxbytes=0
stderr_logfile_maxbytes=0
autorestart=true
[program:php-fpm]
command=docker-php-entrypoint php-fpm
Note that it is desirable to configure supervisord to send the logs to /dev/stdout and /dev/stderr so that Docker can handle them. Otherwise you risk your container slowing down over time as the log files keep growing.
The main question here is how to make PHP run "in parallel" with cron. Another answer, besides using supervisor, is to use bash's job control. This general approach is described in Docker's own documentation on running multiple processes in a container.
For an Alpine PHP-FPM container plus cron, the setup would look like this:
Dockerfile:
FROM php:8.1-fpm-alpine
RUN apk --update add --no-cache bash
COPY ./crontasks /var/spool/cron/crontabs/root
COPY entrypoint.bash /usr/sbin
RUN chmod a+x /usr/sbin/entrypoint.bash
ENTRYPOINT /usr/sbin/entrypoint.bash
entrypoint.bash file (the magic is here)
#!/bin/bash
# turn on bash's job control
set -m
# Start the "main" PHP process and put it in the background
php-fpm &
# Start the helper crond process
crond
# now we bring the primary process back into the foreground
fg %1
It is important to keep in mind that the crontab syntax in Alpine differs from Debian's, and different directories are used for the task files.
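For illustration, the crontasks file copied in the Dockerfile above would use Alpine's (BusyBox) crontab format, something like this (the PHP script path is a made-up example):
# min   hour    day     month   weekday command
*/5     *       *       *       *       php /var/www/html/cron-task.php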
I am running the command
docker run php
and the terminal shows 'Interactive shell', then the container exits immediately. Here is the docker status:
docker ps -a
"docker-php-entrypoi…" Less than a second ago Exited (0) 3 seconds ago
Please try the following:
docker run -it --rm php bash
You need to tell docker run that it's an interactive process and allocate a tty for keyboard input, i.e.
$ docker run -it php
Interactive shell
php >
php needs -a to run in interactive mode; -it keeps a persistent session. To get an interactive shell directly, just run:
docker run -it --rm php php -a
I have an AMQP consumer (a RabbitMQ consumer) written in PHP that is always running in the background. This script runs on multiple nodes, 12 times per node: 12 Unix background processes running
php -f consumer.php &
If a new version of the code must be deployed, at the moment I always have to kill ALL these processes manually and launch them again one by one, on each node.
Is there a way to automate the deployment of these background scripts, i.e. put it in a deployment pipeline and have them reloaded, similar to using https://deployer.org?
Is there a way to avoid downtime?
Any way ReactPHP would help in this case?
Found the answer in the Laravel docs (the solution works for any always-running background process, not just PHP and Laravel): Supervisor!
Configuring Supervisor
Supervisor configuration files are typically stored in the /etc/supervisor/conf.d directory. Within this directory, you may create any number of configuration files that instruct supervisor how your processes should be monitored. For example, let's create a laravel-worker.conf file that starts and monitors a queue:work process:
Starting Supervisor
Once the configuration file has been created, you may update the Supervisor configuration and start the processes using the following commands:
sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl start laravel-worker:*
It will even let me start as many processes as I want with a single configuration file and a single command. Again, from the Laravel docs:
[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /home/forge/app.com/artisan queue:work sqs --sleep=3 --tries=3
autostart=true
autorestart=true
user=forge
numprocs=8
redirect_stderr=true
stdout_logfile=/home/forge/app.com/worker.log
Calling sudo supervisorctl start laravel-worker:* launches 8 background processes, which are also restarted automatically in case of error.
If I just want to restart the workers with a newly released version, I call the restart command directly:
supervisorctl restart laravel-worker:*
I'll just integrate this as a Deployer task in my CI/CD pipeline.
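A sketch of such a Deployer task (the task name and hook are my assumptions, not from the Laravel docs):
<?php
// deploy.php
namespace Deployer;

require 'recipe/laravel.php';

// restart the Supervisor-managed workers so they pick up the new release
task('workers:restart', function () {
    run('sudo supervisorctl restart laravel-worker:*');
});

// run it automatically at the end of a successful deploy
after('deploy:success', 'workers:restart');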
I am done setting up Homestead and all. Now I am trying to start the queue worker daemon after each provisioning / vagrant up. How can I accomplish that?
I have tried to add the following to my .homestead/after.sh file:
nohup php project_name/artisan queue:work local --daemon --sleep=3 --tries=3 >> /dev/null 2>&1
But no luck: my console hangs, and the vagrant/ruby processes must be killed.
Is this even the right place to put it? Or does it need to go somewhere else so that it is started on each vagrant up?
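For what it's worth, the hang is likely because the command is never backgrounded: without a trailing &, nohup still runs the process in the foreground, so the provisioning script waits on it indefinitely. A minimal sketch of the same line with the worker backgrounded (paths kept as in the question):
# the trailing & detaches the worker so after.sh can finish
nohup php project_name/artisan queue:work local --daemon --sleep=3 --tries=3 >> /dev/null 2>&1 &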