How do I share the code of a containerized PHP application, whose image is based on busybox and contains only the code, between separate NGINX and PHP-FPM containers? I am using version 3 of the Compose file format.
The Dockerfile of the image containing the code would be:
FROM busybox
#the app's code
RUN mkdir /app
VOLUME /app
#copy the app's code from the context into the image
COPY code /app
The docker-compose.yml file would be:
version: "3"
services:
#the application's code
#the volume is currently mounted from the host machine, but the code will be copied over into the image statically for production
app:
image: app
volumes:
- ../../code/cms/storage:/storage
networks:
- backend
#webserver
web:
image: web
depends_on:
- app
- php
networks:
- frontend
- backend
ports:
- '8080:80'
- '8081:443'
#php
php:
image: php:7-fpm
depends_on:
- app
networks:
- backend
networks:
cms-frontend:
driver: "bridge"
cms-backend:
driver: "bridge"
The solutions I have thought of, none of them appropriate:
1) Use the volume from the app's container in the PHP and NGINX containers, but Compose v3 dropped the volumes_from directive. Can't use it.
2) Place the code in a named volume and connect it to the containers. Going this way I can't containerize the code. Can't use it. (I'd also have to manually create this volume on every node in a swarm?)
3) Copy the code twice, directly into images based on NGINX and PHP-FPM. Bad idea, I'd have to maintain both to keep them in sync.
I'm stuck on this. Are there any other options? I might have misunderstood something, as I'm only beginning with Docker.
I too have been looking around to solve a similar issue, and it seems Nginx + PHP-FPM is one of those exceptions where it is better to have both services running in one container for production. In development you can bind-mount the project folder into both the nginx and php containers. See Bret Fisher's guide to good defaults for PHP: php-docker-good-defaults
So far, the Nginx + PHP-FPM combo is the only scenario that I recommend using multi-service containers for. It's a rather unique problem that doesn't always fit well in the model of "one container, one service". You could use two separate containers, one with nginx and one with php:fpm but I've tried that in production, and there are lots of downsides. A copy of the PHP code has to be in each container, they have to communicate over TCP which is much slower than Linux sockets used in a single container, and since you usually have a 1-to-1 relationship between them, the argument of individual service control is rather moot.
You can read more about setting up multiple service containers on the docker page here (it's also listed in the link above): Docker Running Multiple Services in a Container
The way I see it, you have two options:
(1) Using docker-compose (this is for a very simplistic development environment):
You will have to build two separate containers from the nginx and php-fpm images, and then have nginx serve the app folder that php-fpm interprets; a sketch of the matching nginx config follows the Compose snippet below.
# The Application
app:
  build:
    context: ./
    dockerfile: app.dev.dockerfile
  working_dir: /var/www
  volumes:
    - ./:/var/www
  expose:
    - 9000

# The Web Server
web:
  build:
    context: ./
    dockerfile: web.dev.dockerfile
  working_dir: /var/www
  volumes_from:
    - app
  links:
    - app:app
  ports:
    - 80:80
    - 443:443
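For reference, in this two-container setup nginx forwards PHP requests to the app container over FastCGI. A minimal sketch of what the matching nginx site config could look like (the file and its paths are assumptions, not part of the original answer):

server {
    listen 80;
    root /var/www;
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        # "app" is the php-fpm service name; port 9000 is exposed above
        fastcgi_pass app:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}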
(2) Use a single Dockerfile to build everything into one image:
Start with some flavor of Linux or a php image,
install nginx,
build your custom image,
and run the multi-service container under supervisord (see the sketch below).
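A minimal sketch of that single-image approach, assuming an Alpine-based php image and a supervisord.conf (with nodaemon=true) that starts both nginx and php-fpm; all file names and paths here are illustrative:

FROM php:7-fpm-alpine
# install nginx and supervisor next to php-fpm
RUN apk add --no-cache nginx supervisor
# application code and configuration (paths are assumptions)
COPY code/ /var/www/app/
COPY nginx/app.conf /etc/nginx/conf.d/app.conf
COPY supervisord.conf /etc/supervisord.conf
EXPOSE 80
# supervisord runs in the foreground and keeps nginx and php-fpm alive
CMD ["supervisord", "-c", "/etc/supervisord.conf"]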
I have multiple containers running separately, connected to each other by a network defined in docker-compose.yml, and my application runs perfectly. Now I want to create only one image for those multiple containers to deploy to my private repository (an image with tags), and I want to know the best practice for doing that.
docker-compose.yml
version: '3.1'

networks:
  lemp:

services:
  nginx:
    build:
      context: .
      dockerfile: Dockerfile
      target: webserver
    container_name: webserver
    volumes:
      - ./src/app:/var/www/html/app
    ports:
      - "80:80"
    networks:
      - lemp
  php:
    build:
      context: .
      dockerfile: Dockerfile
      target: app
    container_name: app
    volumes:
      - ./src/app:/var/www/html/app
    ports:
      - "9000:9000"
    networks:
      - lemp
Dockerfile
FROM nginx:1.21.6-alpine AS webserver
COPY ./src/ ./var/www/html
COPY ./nginx/conf.d/app.conf /etc/nginx/conf.d/app.conf
EXPOSE 80 443
FROM php:7.4-fpm-alpine AS app
EXPOSE 9000
You should plan to distribute your docker-compose.yml file, or perhaps a simplified version of it, as the standard way to run your combined application. If it requires two images, you'll need to push the two images separately to your repository; don't try to combine them. Do make sure the images are self-contained so you don't need the source code separately from the images to run them.
The docker-compose.yml file should roughly look like:
version: '3.8'
services:
  nginx:
    image: registry.example.com/nginx:${TAG:-latest}
    ports:
      - '80:80'
  php:
    image: registry.example.com/php:${TAG:-latest}
Calling out a couple of things here: I've removed the unnecessary networks: declarations (Compose provides a default network that works fine) and the unnecessary container_name: declarations. I've put in an image: line for each image in place of the build: block, and use an environment variable to inject the image tag. For the php container I've removed the ports: declaration since you probably don't want that externally accessible. Finally, for both containers I've removed the volumes: that override the image contents.
Next to this, put a docker-compose.override.yml file. This is not something you'd distribute. It can say:
version: '3.8'
services:
  nginx:
    build:
      context: .
      dockerfile: Dockerfile.nginx
  php:
    build:
      context: .
      dockerfile: Dockerfile.php
    ports:
      - '9000:9000'
If you have both files, Compose merges their settings. So for a developer this adds in the ports: to directly access the PHP-FPM service if required, and build: blocks to explain how to build both images. Since the combined Compose configuration has both build: and image:, docker-compose build will build images with the specified names tagged with your local registry name.
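If you want to verify the merge, docker-compose config prints the combined result of both files; the override file is read automatically:

docker-compose config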
You should have a separate Dockerfile for each image you're building. The Nginx image resembles what you already have; for the PHP-FPM container you need to make sure you COPY the code into the image.
# Dockerfile.nginx
FROM nginx:1.21.6-alpine
COPY ./src/ /var/www/html/
COPY ./nginx/conf.d/app.conf /etc/nginx/conf.d/app.conf
# Dockerfile.php
FROM php:7.4-fpm-alpine
COPY ./src/app/ /var/www/html/app/
Now you can build and run the application locally. Double-check that it works correctly, without volumes: overwriting the image code.
docker-compose build
docker-compose up -d
curl http://localhost/
If this works, then you're set to distribute this. Pick a tag (a date stamp or the current source control ID are good choices), build the images, and push them to a Docker registry.
export TAG=20220418
docker-compose build
docker-compose push
Now you can copy only the docker-compose.yml file, but none of the other files we've touched, to the remote system, or put it in a GitHub repository, or something else. On that system, set $TAG to match, and run docker-compose up as usual. Docker will automatically pull the images from the repository. Since the images are self-contained, the only thing you need is the docker-compose.yml file.
scp docker-compose.yml there:
ssh root@there
export TAG=20220418
docker-compose up -d
It's unclear what you really need. You can publish the individual images to your registry and provide a downloadable Compose file for anyone to use those containers together, which will pull each image separately.
Otherwise, you would need to copy all relevant steps from one Dockerfile into the other. Note: if each Dockerfile runs a unique entrypoint/command (process), combining them is considered bad practice.
UPDATE
Looking at your example, you could install php-fpm into the Nginx container, copy the PHP files in, and just serve the static content from there. However, I would definitely recommend keeping separate containers: Nginx should remain replaceable as a reverse proxy.
Also, you don't have a correct multi-stage Dockerfile (using FROM twice doesn't merge anything), and your Compose file is just running the same build context twice on two different ports.
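To illustrate what a real multi-stage build does: each later FROM starts from a fresh base, and files only move between stages via COPY --from. A sketch with illustrative stage names and paths:

# build stage: produce the artifacts
FROM php:7.4-cli-alpine AS builder
COPY ./src/ /src/
# ... dependency installation / asset building would go here ...

# final stage: explicitly pull the built files out of the builder stage
FROM php:7.4-fpm-alpine AS app
COPY --from=builder /src/app /var/www/html/app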
Looking at a common docker-compose setup for an Nginx / PHP-FPM combo like:
version: '3'
services:
  nginx-example:
    image: nginx:1.13-alpine
    ports:
      - "80:80"
    volumes:
      - ./www:/www
      - ./config/site.conf:/etc/nginx/conf.d/default.conf
  php-example:
    image: php-fpm
    volumes:
      - ./www:/www
You find many examples like that; they make sure that if you change something in your local www folder, it is immediately picked up by the running containers.
But what if I do not want that and instead copy the php files/content etc. into the containers:
Is it enough to create a volume of the same name for both containers and copy my files into that folder, e.g. in the Dockerfile?
Or is it even possible to not have a volume but create a directory in the container and copy the files there... and in that case: do I have to do it for both nginx and php-fpm with the same files?
Perhaps my misunderstanding is around how the php-fpm container works in that combination (of course fastcgi... in the conf points to the php-example:9000 standard).
My ideal solution would be to copy the code once while making sure that file permissions are handled.
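One way to cover the permissions part when baking the code into an image is Dockerfile's COPY --chown flag; a minimal sketch, assuming the php-fpm workers run as www-data and the paths from the Compose file above:

FROM php:7.4-fpm-alpine
# copy the code once, owned by the user the php-fpm workers run as
COPY --chown=www-data:www-data ./www/ /www/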
I am developing a Laravel application and am trying to use Laravel Websockets in it, https://docs.beyondco.de/laravel-websockets. I am using Docker / docker-compose.yml. Since Laravel Websockets runs locally on port 6001, I am having problems integrating it with docker-compose. Searching for a solution I found this link, https://github.com/laradock/laradock/issues/2002. I tried it, but it is not working. Here is what I did.
I created a folder called workspace under the project root directory. Inside that folder, I created a file called Dockerfile.
This is the content of Dockerfile
EXPOSE 6001
In the docker-compose.yml file, I added this content.
workspace:
  port:
    - 6001:6001
My docker-compose.yml file looks something like this
version: "3"
services:
workspace:
port:
- 6001:6001
apache:
container_name: web_one_apache
image: webdevops/apache:ubuntu-16.04
environment:
WEB_DOCUMENT_ROOT: /var/www/public
WEB_ALIAS_DOMAIN: web-one.localhost
WEB_PHP_SOCKET: php-fpm:9000
volumes: # Only shared dirs to apache (to be served)
- ./public:/var/www/public:cached
- ./storage:/var/www/storage:cached
networks:
- web-one-network
ports:
- "80:80"
- "443:443"
php-fpm:
container_name: web-one-php
image: php-fpm-laravel:7.2-minimal
volumes:
- ./:/var/www/
- ./ci:/var/www/ci:cached
- ./vendor:/var/www/vendor:delegated
- ./storage:/var/www/storage:delegated
- ./node_modules:/var/www/node_modules:cached
- ~/.ssh:/root/.ssh:cached
- ~/.composer/cache:/root/.composer/cache:delegated
networks:
- web-one-network
When I run "docker-compose up --build -d", it is giving me the following error.
ERROR: The Compose file './docker-compose.yml' is invalid because:
Unsupported config option for services.workspace: 'port' (did you mean 'ports'?)
What is wrong and how can I fix it? How can I use Laravel Web Socket with docker-compose?
I tried changing 'port' to 'ports', but then I got the following error message instead.
ERROR: The Compose file is invalid because:
Service workspace has neither an image nor a build context specified. At least one must be provided.
Your Dockerfile is wrong. A Dockerfile must start with a FROM <image> directive, as explained in the documentation. In your case it might be sufficient to run an existing php:<version>-cli image though, avoiding the whole Dockerfile:
workspace:
  image: php:7.3-cli
  command: ["php", "artisan", "websockets:serve"]
Of course you will also need to add a volume with the application code and a suitable network. If you add a reverse proxy like Nginx in front of your services, you don't need to export the ports on your workspace either. Services may access other services as long as they are in the same network.
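Put together, the workspace service might look roughly like this (the volume path and network name are assumptions based on the Compose file above):

workspace:
  image: php:7.3-cli
  command: ["php", "artisan", "websockets:serve"]
  working_dir: /var/www
  volumes:
    - ./:/var/www          # the Laravel application code
  networks:
    - web-one-network
  ports:
    - "6001:6001"          # only needed if no reverse proxy sits in front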
Currently, I have two containers php-fpm and NGINX where I run the PHP application.
Now my question: is there a way to "connect" two docker containers without using a volume?
Both containers need my application (NGINX to send static files, e.g. css/js, and php-fpm to interpret the PHP files).
Currently, my application is cloned from git into my NGINX container, and I had a volume so that the php-fpm container also had the files to interpret the PHP.
I'm searching for a solution where my application doesn't live on the host system.
I'm not sure what you're trying to achieve, but my docker-compose.yml looks like this:
php:
  container_name: custom_php
  build:
    context: php-fpm
    args:
      TIMEZONE: 'UTC'
  volumes:
    - ./website:/var/www/symfony
  networks:
    - app_net

nginx:
  build: nginx
  container_name: custom_nginx
  ports:
    - 80:80
  volumes:
    - ./website:/var/www/symfony
  networks:
    - app_net

networks:
  app_net:
    driver: bridge
Just make sure they are in one network; then you can talk from container to container by container name and port, as in the config sketch below. Hope that helps.
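For example, the nginx site config can then reach php-fpm through its Compose service name; a sketch (the config file itself is not part of this answer):

# inside the server block of the nginx site config
location ~ \.php$ {
    # "php" is resolved by Docker's embedded DNS on the app_net network
    fastcgi_pass php:9000;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME /var/www/symfony$fastcgi_script_name;
}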
I have a small theoretical problem with combination of php-fpm, nginx and app code in Docker.
I'm trying to stick to the model where a docker image does only one thing -> I have separate containers for php-fpm and nginx.
php:
  image: php:5-fpm-alpine
  expose:
    - 9000
  volumes:
    - ./:/var/www/app

nginx:
  image: nginx:alpine
  ports:
    - 3000:80
  links:
    - php
  volumes:
    - ./nginx/app.conf:/etc/nginx/conf.d/app.conf
    - ./:/var/www/app
NOTE: In app.conf is root /var/www/app;
Example schema from Symfony
This is great in development, but I don't know how to convert it to a production-ready state. Mounting the app directory in production is really bad practice (if I'm not wrong). Ideally I would copy the app source code into the container and use that prebuilt code (COPY . /var/www/app in the Dockerfile), but in this setup that seems impossible, or I don't know how.
I need to share the app source code between two containers (the nginx container and the php-fpm container) because both of them need it.
Of course I can make my own nginx and php-fpm images and add COPY . /var/www/app to both of them, but I think that is the wrong way, because I'd duplicate the code and the whole build process (install dependencies, build the source code, etc...) would have to run in both (nginx/php-fpm) images.
I've tried to search, but I haven't found any idea of how to solve this problem. A lot of articles show how to do this with a docker-compose file and mount the code with --volume, but I didn't find any example of how to use this in production (without a volume).
The only acceptable solution for me (at this time) is to make one container with nginx and php-fpm together, but I'm not sure whether that is a good way (I'm trying to find best practice).
Do you have any experiences with this or any idea how to solve it?
Thanks for any response!
I solved the problem by making a shared volume in the docker-compose file:
version: '3'

volumes:
  share_place:

services:
  php:
    image: php:5-fpm-alpine
    ports:
      - 9000:9000
    volumes:
      - share_place:/var/www/app
  nginx:
    image: nginx:alpine
    ports:
      - 3000:80
    volumes:
      - share_place:/var/www/app
This will create a volume share_place that will share the data between the two containers.
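One detail that makes this work for baked-in code: when an empty named volume is first mounted over a path that already contains files in the image, Docker copies the image's content into the volume. So a code-only image could seed share_place; a sketch, assuming a hypothetical image named app with the code baked into /var/www/app:

services:
  app:
    image: app   # hypothetical image containing the code at /var/www/app
    volumes:
      - share_place:/var/www/app   # an empty named volume is seeded from the image

Note that this seeding only happens while the volume is empty; after rebuilding the image you would have to remove the volume to pick up new code.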
At this time I use something like:
Dockerfile:
FROM php:fpm
COPY . /var/www/app/
WORKDIR /var/www/app/
RUN composer install
EXPOSE 9000
VOLUME /var/www/app/web
Dockerfile.nginx
FROM nginx
COPY default /etc/nginx/default
docker-compose.yml
app:
  build:
    context: .

web:
  build:
    context: .
    dockerfile: Dockerfile.nginx
  volumes_from:
    - app
But in a few days, with the 17.05 release, we will be able to do something like this in one Dockerfile:
FROM php:cli AS builder
COPY . /var/www/app/
WORKDIR /var/www/app/
RUN composer install && bin/console assets:dump
FROM php:fpm AS app
COPY --from=builder /var/www/app/src /var/www/app/vendor /var/www/app/
COPY --from=builder /var/www/app/web/app.php /var/www/app/vendor /var/www/app/web/
FROM nginx AS web
COPY default /etc/nginx/default
COPY --from=builder /var/www/app/web /var/www/app/web
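Each stage can then be built into its own image with docker build --target (the image names here are illustrative):

docker build --target app -t myapp/php .
docker build --target web -t myapp/nginx .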