I have an application that works with RabbitMQ. There are two PHP scripts (one sends messages, one receives them), but I can only run one of them from the Dockerfile:
CMD ["php", "./send.php"]
But I have to run both scripts. My tutor asked me to create a separate container for each script:
version: "3"
services:
rabbit_mq:
image: rabbitmq:3-management-alpine
container_name: 'rabbitmq'
ports:
- 5672:5672
- 15672:15672
volumes:
- ./docker/rabbitmq/data/:/var/lib/rabbitmq/
- ./docker/rabbitmq/log/:/var/log/rabbitmq
- ./docker/rabbitmq/conf/:/var/conf/rabbitmq
environment:
- API_URL=Api:8000
send:
build:
context: './docker'
dockerfile: Dockerfile
image: php:7.4.cli
container_name: send
ports:
- 8000:8000
volumes:
- ./:/app
depends_on:
- rabbit_mq
receive:
image: php:7.4.cli
# build:
# context: './docker'
# dockerfile: Dockerfile
container_name: receive
ports:
- 8001:8001
volumes:
- ./:/app
depends_on:
- rabbit_mq
What can I do to run both scripts with a single "docker-compose up" command? I have searched a lot of web pages but couldn't find anything; I really need your help!
You did not specify whether those scripts terminate or keep running, but to run them you can write your docker-compose.yml like this:
version: "3"
services:
rabbit_mq:
# existing configuration
send:
# existing configuration
command: ["php", "./send.php"]
receive:
# existing configuration
command: ["php", "./receive.php"]
If you would like to run them as part of docker-compose, you can add these lines to the corresponding service blocks in the compose file:
command: php send.php
command: php receive.php
https://docs.docker.com/compose/compose-file/#command
If you need more complicated things like restarting on failure, take a look at using `supervisor`.
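If all you need is an automatic restart when a script exits with an error, a Compose restart policy may already be enough before reaching for supervisor. A minimal sketch, reusing the services above:
services:
  send:
    # existing configuration
    command: ["php", "./send.php"]
    restart: on-failure   # restart the container whenever the script exits with a non-zero code
  receive:
    # existing configuration
    command: ["php", "./receive.php"]
    restart: on-failure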
Related
I have the following docker-compose.yml file which runs nginx with PHP support:
version: '3'
services:
  nginx:
    container_name: my-app-nginx
    image: nginx:1.13.6
    ports:
      - 8080:80
    volumes:
      - ./nginx-default.conf:/etc/nginx/conf.d/default.conf
      - ./my-app:/var/www/my-app
    restart: always
    depends_on:
      - php
  php:
    container_name: my-app-php
    image: php:7.1-fpm
    volumes:
      - ./my-app:/var/www/my-app
    restart: always
The PHP application inside /var/www/my-app needs to communicate with a linux daemon (let's call it myappd).
The way I see it, I need to either:
Copy myappd into the nginx container to /usr/local/bin, make it executable with chmod +x, and run it in the background.
Create a different container, copy myappd to /usr/local/bin, make it executable with chmod +x, and run it in the foreground.
Now, I'm new to Docker and I'm still researching and learning about it, but my best guess, given that I'm using Docker Compose, is that option 2 is probably the recommended one? Given my limited knowledge of Docker, I'd have to guess that this container would require some sort of Linux-based image (like Ubuntu or something) to run this binary. So maybe option 1 is preferred? Or maybe option 2 is possible with a minimal Ubuntu image, or maybe it's possible without such an image?
Either way, I have no idea how I would implement that in the compose file. Especially option 2: how would the PHP application communicate with the daemon in a different container? Would just "sharing" a volume (where the binary is located), like I did for the nginx/php services, suffice? Or is something else required?
The simple answer is to add a command entry to the php service in docker-compose.yml.
Given that myappd is at ./my-app/ on the host machine and at /var/www/my-app/ in the container, the updated docker-compose.yml looks something like the following.
version: '3'
services:
  nginx:
    container_name: my-app-nginx
    image: nginx:1.13.6
    ports:
      - 8080:80
    volumes:
      - ./nginx-default.conf:/etc/nginx/conf.d/default.conf
      - ./my-app:/var/www/my-app
    restart: always
    depends_on:
      - php
  php:
    container_name: my-app-php
    image: php:7.1-fpm
    volumes:
      - ./my-app:/var/www/my-app
    restart: always
command: ["/bin/sh", "/var/www/my-app/mappd", "&&", "php-fpm"]
A better answer is to create a third container that runs the Linux daemon.
The new Dockerfile looks something like the following.
FROM debian:jessie
COPY ./myappd /usr/src/app/
EXPOSE 44444
ENTRYPOINT ["/bin/sh"]
CMD ["/usr/src/app/myappd"]
Build the image and name it myapp/myappd.
The updated docker-compose.yml then looks something like the following.
version: '3'
services:
  nginx:
    container_name: my-app-nginx
    image: nginx:1.13.6
    ports:
      - 8080:80
    volumes:
      - ./nginx-default.conf:/etc/nginx/conf.d/default.conf
      - ./my-app:/var/www/my-app
    restart: always
    depends_on:
      - php
  php:
    container_name: my-app-php
    image: php:7.1-fpm
    volumes:
      - ./my-app:/var/www/my-app
    restart: always
    networks:
      - network1
    depends_on:
      - daemon
  daemon:
    container_name: my-app-daemon
    image: myapp/myappd
    restart: always
    networks:
      - network1
networks:
  network1:
You can then send requests to the hostname daemon from inside PHP; a Docker container can resolve the hostname of any other container on the same network.
Since volumes_from disappeared when Docker Compose changed its compose file version, I am a bit lost as to how to share a volume between different containers.
See the example below where a PHP application is living in a PHP-FPM container and Nginx is living in a second one.
version: '3.3'
services:
  php:
    build:
      context: ./docker/php7-fpm
      args:
        TIMEZONE: ${TIMEZONE}
    env_file: .env
    volumes:
      - shared-volume:/var/www
  nginx:
    build: ./docker/nginx
    ports:
      - 81:80
    depends_on:
      - php
    volumes:
      - shared-volume:/var/www
volumes:
  shared-volume:
    driver_opts:
      type: none
      device: ~/sources/websocket
      o: bind
To make the application work, Nginx of course has to access the PHP files somehow, and that is where volumes_from used to help us a lot. Now that option is gone.
When I try the command docker-compose up it ends with the following message:
ERROR: for websocket_php_1 Cannot create container for service php:
error while mounting volume with options: type='none'
device='~/sources/websocket' o='bind': no such file or directory
How do I properly share the same host volume between the two containers?
Why would you not use a bind mount? This is just source code that each container needs to see, correct? I added the :ro (read-only) option, which assumes no code generation is happening.
services:
  php:
    build:
      context: ./docker/php7-fpm
      args:
        TIMEZONE: ${TIMEZONE}
    env_file: .env
    volumes:
      # User-relative path
      - ~/sources/websocket:/var/www:ro
  nginx:
    build: ./docker/nginx
    ports:
      - 81:80
    depends_on:
      - php
    volumes:
      # User-relative path
      - ~/sources/websocket:/var/www:ro
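If you do want to keep the named volume with bind driver_opts instead, the "no such file or directory" error in the question typically comes from ~ not being expanded in the device field. A sketch with an absolute path (the /home/user prefix is an assumption; replace it with your actual home directory):
volumes:
  shared-volume:
    driver_opts:
      type: none
      # must be an existing absolute path; ~ is not expanded here
      device: /home/user/sources/websocket
      o: bind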
I am trying to run a batch script in the official WordPress container.
In the container I want to run a very simple batch script like the one below:
./wordpress_batch.php:
<?php
define('BASEPATH', '/path/to/wordpress');
require_once(BASEPATH . '/wp-load.php');
# batch program start
echo "batch test";
var_dump($wpdb);
But no output is shown.
This code works in WordPress on the host machine.
What is wrong in the Docker container?
Any ideas on how to run this code?
Thanks.
./docker-compose.yml:
version: "2"
services:
wordpress:
build: containers/wordpress
ports:
- "9000:80"
depends_on:
- db
environment:
WORDPRESS_DB_HOST: "db:3306"
env_file: .env
volumes:
- ./wordpress_batch:/var/batch/
db:
build: containers/db
env_file: .env
volumes:
- db-data:/var/lib/mysql
volumes:
db-data:
driver: local
./containers/db/Dockerfile:
FROM mysql:latest
./containers/wordpress/Dockerfile:
FROM wordpress:latest
The way I ran the code:
$ docker-compose run wordpress bash
root@97658bd14387:/var/batch# php wordpress_batch.php
I use this to set up nginx for PHP:
nginx:
  image: nginx:latest
  ports:
    - 8080:80
  volumes:
    - ./code:/code
    - ./site.conf:/etc/nginx/conf.d/site.conf
  links:
    - php
php:
  image: php:7-fpm
  volumes:
    - ./code:/code
But how about Apache? How can I set up Apache + PHP in docker-compose.yml?
Following this guide:
version: '2'
services:
  php:
    build: php
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./php/www:/var/www/html
Error:
ERROR: In file './docker-compose.yml' service 'version' doesn't have any configuration options. All top level keys in your docker-compose.yml must map to a dictionary of configuration options.
Any ideas? I'm on Xubuntu 16.04.
EDIT:
After managing to upgrade docker-compose to 1.9, I tried with the file below:
version: '2'
services:
  php:
    build: php
    expose:
      - 9000
    volumes:
      - ./php/www:/var/www/html
  apache2:
    image: webdevops/apache:latest
    args:
      - PHP_SOCKET=php:9000
    volumes:
      - ./php/www:/var/www/html
    ports:
      - 80:80
      - 443:443
    links:
      - php
Error:
$ sudo docker-compose up -d
Building php
ERROR: Cannot locate specified Dockerfile: Dockerfile
Docker is such a pain!
Any ideas how to fix this?
I would choose the webdevops dockerized Apache, because it has a simple configuration:
version: '2'
services:
  php:
    build: php
    expose:
      - 9000
    volumes:
      - ./php/www:/var/www/html
  apache2:
    image: webdevops/apache:latest
    args:
      - PHP_SOCKET=php:9000
    volumes:
      - ./php/www:/var/www/html
    ports:
      - 80:80
      - 443:443
    links:
      - php
Since the example above does not work, here is a different approach:
docker-compose.yml
version: '3.1'
services:
  php:
    image: php:apache
    ports:
      - 80:80
    volumes:
      - ./php/www:/var/www/html/
Launch the server with
docker-compose up
We need to create new folders php/www in the current path.
Create a file under the php folder, saved as "Dockerfile", containing the two lines below:
FROM php:5.6-apache
RUN docker-php-ext-install mysqli
Copy your docker-compose.yml file into the current folder, the one that contains your "php" folder.
Create a sample file "index.php" under the www folder (php/www/index.php).
Run docker-compose up -d at the command prompt.
Open your browser at "localhost" and you will see the output of your sample file.
Note: the steps above correspond to the docker-compose.yml file mentioned above.
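If you want Compose to build and use that Dockerfile (rather than pulling the stock php:apache image shown earlier), the service can point at the php directory instead; a minimal sketch:
version: '3.1'
services:
  php:
    build: ./php          # builds php/Dockerfile (FROM php:5.6-apache + mysqli)
    ports:
      - 80:80
    volumes:
      - ./php/www:/var/www/html/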
You can check this question.
If you use build instead of image, then you need a "Dockerfile". The Dockerfile is used as the configuration file for building the image.
You may have missed the part of the guide where you should create a file named "Dockerfile" inside the "php" directory. The "php" directory must be in the same directory as your "docker-compose.yml". In "docker-compose.yml" you have this line:
build: php
This line means that the configuration file (by default "Dockerfile") is inside the "php" directory. So you should create the "php" directory and a "Dockerfile" inside it.
This is the "Dockerfile" from your guide:
FROM php:5.6-apache
RUN docker-php-ext-install mysqli
docker-compose.yml reference version 2
Dockerfile reference
I found an elegant way to dynamically configure the ports and other parameters: In apache2's configuration files you can reference environment variables.
#/etc/apache2/ports.conf
# If you just change the port or add more ports here, you will likely also
# have to change the VirtualHost statement in
# /etc/apache2/sites-enabled/000-default.conf
#APACHE_HTTP_PORT_NUMBER:80
#APACHE_HTTPS_PORT_NUMBER:443
Listen ${APACHE_HTTP_PORT_NUMBER}
<IfModule ssl_module>
Listen ${APACHE_HTTPS_PORT_NUMBER}
</IfModule>
<IfModule mod_gnutls.c>
Listen ${APACHE_HTTPS_PORT_NUMBER}
</IfModule>
You can set the variables in the Dockerfile or in docker-compose.yml.
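In docker-compose.yml that could look like the sketch below; the port numbers are placeholders, the variable names are the ones referenced in ports.conf above, and it assumes the Apache process inherits the container environment:
services:
  apache2:
    image: webdevops/apache:latest
    environment:
      - APACHE_HTTP_PORT_NUMBER=8080
      - APACHE_HTTPS_PORT_NUMBER=8443
    ports:
      - 8080:8080
      - 8443:8443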
You can keep a directory with different Dockerfiles and declare one in each service:
...
    image: php:custom
    build:
      context: .
      dockerfile: ./dockerfiles/Dockerfile-php
...
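The file referenced by dockerfile: is an ordinary Dockerfile. A hypothetical ./dockerfiles/Dockerfile-php might look like this:
# ./dockerfiles/Dockerfile-php (hypothetical example)
FROM php:7.4-apache
# add the mysqli extension, as in the other answers in this thread
RUN docker-php-ext-install mysqli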
I have created a working example with PHP, Apache, MySQL, and phpMyAdmin for PHP developers. You may find it useful if you need the original old-school working style. Please note that I am using port 8080 for my website and port 8081 for phpMyAdmin. You can change these as you like.
version: '3.8'
services:
  php-apache-environment:
    container_name: php-apache
    image: php:7.4-apache
    volumes:
      - ./php/src:/var/www/html/
    ports:
      - 8080:80
  db:
    image: mysql
    command: --default-authentication-plugin=mysql_native_password
    container_name: mysql
    environment:
      MYSQL_ROOT_PASSWORD: admin
      MYSQL_DATABASE: ezapi
      MYSQL_USER: root
      MYSQL_PASSWORD: password
    ports:
      - "6033:3306"
    volumes:
      - dbdata:/var/lib/mysql
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    container_name: phpmyadmin
    links:
      - db
    environment:
      PMA_HOST: mysql
      PMA_PORT: 3306
      PMA_ARBITRARY: 1
    restart: always
    ports:
      - 8081:80
volumes:
  dbdata:
I've got a database backup bundle (https://github.com/dizda/CloudBackupBundle) installed on a Symfony3 project using Docker, but I can't get it to work because it either can't find PHP or can't find MySQL.
When I run php app/console --env=prod dizda:backup:start via exec, run, or cron, I get a mysqldump command not found error from the PHP image, or a PHP not found error from the MySQL/db image.
How do I go about running a PHP command that then runs a mysqldump command?
My docker-compose file is as follows:
version: '2'
services:
  web:
    # image: nginx:latest
    build: .
    restart: always
    ports:
      - "80:80"
    links:
      - php
      - db
      - node
    volumes_from:
      - php
    volumes:
      - .:/usr/share/nginx/html
      - ./logs/nginx/:/var/log/nginx
  php:
    # image: php:fpm
    restart: always
    build: ./docker_setup/php
    links:
      - redis
    expose:
      - 9000
    volumes:
      - .:/usr/share/nginx/html
  db:
    image: mysql:5.7
    volumes:
      - "/var/lib/mysql"
    restart: always
    ports:
      - 8001:3306
    environment:
      MYSQL_ROOT_PASSWORD: gfxhae671
      MYSQL_DATABASE: boxstat_db_live
      MYSQL_USER: boxstat_live
      MYSQL_PASSWORD: GfXhAe^7!
  node:
    # image: //digitallyseamless/nodejs-bower-grunt:5
    build: ./docker_setup/node
    volumes_from:
      - php
  redis:
    image: redis:latest
I'm pretty new to Docker, so if there are any easy improvements you can see, feel free to flag them... I'm in the trial-and-error stage!
Your image that has your code should have all the dependencies needed for your code to run.
In this case, your code needs mysqldump installed locally for it to run. I would consider this to be a dependency of your code.
It might make sense to add a RUN line to your Dockerfile that will install the mysqldump command so that your code can use it.
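For example, in the PHP image's Dockerfile (./docker_setup/php in the compose file above), something like the following sketch would install the MySQL client tools; the exact package name depends on the Debian release of your base image (mysql-client on older releases, default-mysql-client on newer ones):
FROM php:fpm
# install mysqldump (shipped with the MySQL client package);
# on older Debian-based images use "mysql-client" instead
RUN apt-get update \
    && apt-get install -y --no-install-recommends default-mysql-client \
    && rm -rf /var/lib/apt/lists/*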
Another approach altogether would be to externalize the database backup process instead of leaving that up to your application. You could have some container that runs on a cron and does the mysqldump process that way.
I would consider both approaches to be clean.
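A sketch of that second approach, as an extra service in the compose file from the question (the ./backups path and the 24-hour interval are assumptions; host, credentials, and database name come from the db service above):
  backup:
    image: mysql:5.7          # reuse the MySQL image purely for its client tools
    depends_on:
      - db
    volumes:
      - ./backups:/backups    # assumed host directory for the dump files
    # note: depends_on does not wait for mysqld to be ready;
    # a real setup would add a healthcheck or wait loop
    command: >
      sh -c 'while true; do
      mysqldump -h db -u root -pgfxhae671 boxstat_db_live > /backups/boxstat_db_live.sql;
      sleep 86400;
      done'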