Change user/group for PHP-FPM (Docker) - php

I ran a docker php-fpm container with the following config:
php-fpm:
  tty: true
  image: bitnami/php-fpm:latest
  volumes:
    - ./www:/www
php-fpm is running as daemon:daemon. How do I properly change the user/group for the container? For example, to run it as www:www...

Build this into your Docker image. In your Dockerfile:
# (Debian-based; Dockerfiles don't support trailing comments, so this goes on its own line)
FROM bitnami/php-fpm:latest
# Create the non-root runtime user. It does not need a
# specific uid, shell, home directory, or other settings.
RUN adduser --system --no-create-home --group www
# Copy the files in as root, so they don't accidentally get
# overwritten at runtime
# (The base image sets WORKDIR /app)
COPY www ./
# Then set the runtime user
USER www
# The base image provides a useful CMD; leave it as is
(Some of the details around the base image's WORKDIR and CMD come from looking up the bitnami/php-fpm image on Docker Hub, and in turn following the link to the image's Dockerfile.)
Then your docker-compose.yml file just needs to specify the details to use this Dockerfile. You do not need volumes:; the code is already built into the image.
version: '3.8'
services:
  php-fpm:
    build: .
    # ports: ['9000:9000']
    # no volumes:
In practice it usually doesn't matter much which specific user ID a container process runs as, so long as it isn't (or, depending on your needs, is) the special root user (user ID 0). There shouldn't be a practical difference between the container process running as daemon vs. www. Conversely, looking at the bitnami/php-fpm Dockerfile, it isn't obvious to me that anything would cause the container to not run as root.
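If you only want to change the runtime user without building a custom image, Compose's user: key is an alternative; a sketch (a named www user would have to already exist inside the image, so a numeric id is often safer):

```yaml
php-fpm:
  tty: true
  image: bitnami/php-fpm:latest
  user: "1000:1000"   # numeric uid:gid works even without a matching passwd entry
  volumes:
    - ./www:/www
```

Note this only overrides the user the main process starts as; file ownership inside bind mounts is still governed by the host.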

Instead of creating a Dockerfile, I have created a common.conf file:
user=www-data
group=www-data
listen.owner=www-data
listen.group=www-data
in docker-compose.yml:
php:
  image: bitnami/php-fpm:8.0 # or any other
  ...
  volumes:
    # path to common.conf may differ if using a different image
    - ./path-to/common.conf:/opt/bitnami/php/etc/common.conf:ro
To check the user, I have an index.php:
<?php
echo exec('whoami');

Related

Files owner inside Docker is root, service user is not root. Trouble with permissions

Welp, I invested three days into this, so I figured I'd ask.
I boiled my problem down to this:
The app I'm dockerizing is nothing special:
# docker-compose.yml
services:
  php:
    image: php:8.1.5-fpm-bullseye
    volumes:
      - ./:/var/www
  # this is the end goal: files writable by this image:
  nginx:
    image: "nginx:1.23-alpine"
    ports:
      - "8090:80"
    volumes:
      - .:/var/www
On my host machine the current user has: uid=1000(raveren) gid=1000(raveren)
But the files that end up in the mounted volume belong to root (id=0):
> docker compose exec php ls -l /var/www
total 3900
-rwxrwxr-x 1 root root 21848 Jul 19 11:52 Makefile
-rwxrwxr-x 1 root root 1153 Jul 18 07:03 README.md
# etc etc
How am I supposed to make some of the directories (i.e. cache, log, and potentially many more) writable for the www-data user that nginx runs as?
If the files belonged to a non-root user, I could do that by either changing the www-data id to match the owner, or by doing something along the lines of this nice guide.
However, the problem I can't get past is: the containerized files don't "admit" that their owner is actually id=1000 and not root (id=0).
I tried:
- All variations of the user: directive, in the yaml and in the Dockerfile
- userns_mode: "host" in the yaml
When I do docker compose exec php chown 1000 testfile, the owner on the host machine is reflected as 100999. That's why I suspected userns, because cat /etc/subuid gives raveren:100000:65536.
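For reference, the remap arithmetic I'd expect from that subuid entry (a sketch; the 100000 base is taken from the cat /etc/subuid output above):

```shell
# With a subuid entry "raveren:100000:65536", container uid 0 maps to host
# uid 100000, so container uid N should appear on the host as 100000 + N.
base=100000
container_uid=1000
host_uid=$((base + container_uid))
echo "$host_uid"   # 101000
```

By that arithmetic, chown 1000 should surface as host uid 101000, not the 100999 actually observed, which is one more reason to suspect something is off on this machine.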
Please advise!
I will answer my own question here: it turns out this was a bug in some software on my freshly installed test machine, most probably Docker. I spent too much time on it to care; it works everywhere but on this specific rig. <rant> so screw it, and actually screw docker. After two years with it (just using it for developer setups) I'm under the impression that each machine a dockerized app runs on needs some special tweaking. </rant>
On several other machines everything works as expected: the user: directive in the yaml correctly assigns the user that the container runs as. The guide linked in the question can help, or you can take the slightly different approach I used, which works as well:
# docker-compose.yml
services:
  php:
    build:
      context: ./docker/php
      args:
        DOCKER_UID: ${DOCKER_UID:-1000} # these values must be in the ENV, e.g. an .env file
    user: "${DOCKER_UID:-1000}:${DOCKER_GID:-1000}"
# Dockerfile
FROM php:8.1.5-fpm-bullseye
ARG DOCKER_UID
# lots of stuff here
# Create a user for provided UID from the host machine
RUN useradd -u ${DOCKER_UID} -m the-whale
RUN usermod -G www-data,the-whale,root,adm the-whale
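The host side of this can be a one-time step, e.g. (a sketch; it writes the .env file that the compose interpolation above assumes):

```shell
# Record the current host user's uid/gid where docker-compose will pick
# them up; `id -u` and `id -g` print the numeric ids of the invoking user.
echo "DOCKER_UID=$(id -u)"  > .env
echo "DOCKER_GID=$(id -g)" >> .env
cat .env
```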

Do Nginx and PHP container both need same php files?

Looking at a common docker-compose setup for an Nginx / PHP-FPM combo like:
version: '3'
services:
  nginx-example:
    image: nginx:1.13-alpine
    ports:
      - "80:80"
    volumes:
      - ./www:/www
      - ./config/site.conf:/etc/nginx/conf.d/default.conf
  php-example:
    image: php-fpm
    volumes:
      - ./www:/www
You find many examples like that to make sure that if you change something in your local www folder, it is immediately picked up by a running container.
But what if I do not want that and instead copy some php files/content etc. into the container:
Is it enough to create a volume of the same name for both containers and copy my files into that folder, e.g. in a Dockerfile?
Or is it even possible to have no volume at all, but to create a directory in the container and copy the files there... and in that case: do I have to do that for both nginx and php-fpm with the same files?
Perhaps my misunderstanding is around how the php-fpm container works in that combination (of course the fastcgi... setting in the conf points to php-example:9000 as standard).
My ideal solution would be to copy the files once while making sure that file permissions are handled.
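For reference, the fastcgi wiring mentioned above typically looks like this (a sketch; the /www root and the php-example service name come from the compose file, everything else is an assumption). The key point is that nginx only sends a file path over FastCGI: php-fpm opens the file itself, so the php sources must exist in the php container's filesystem, while nginx needs them only for existence checks and static assets.

```nginx
location ~ \.php$ {
    # this existence check is the reason nginx also mounts /www
    try_files $uri =404;
    fastcgi_pass php-example:9000;
    # php-fpm resolves this path inside its own container
    fastcgi_param SCRIPT_FILENAME /www$fastcgi_script_name;
    include fastcgi_params;
}
```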

docker compose environment variable not setting the variable inside Dockerfile

I'm building my app for local development using docker-compose.yaml and two Dockerfiles, one for the app (WordPress) and another for nginx. Since this is a specific app that is built using a Jenkins pipeline, I cannot change the Dockerfiles, but I would like to have the same environment to test locally as I have on the staging and production servers.
The php part works but nginx fails. The Dockerfile for nginx looks like this:
FROM nginx:latest
COPY scripts/docker-nginx-entrypoint.sh /docker-nginx-entrypoint.sh
COPY ./config/nginx.conf /opt/nginx.conf
COPY ./config/nginx.conf.prd /opt/nginx.conf.prd
COPY --from=DOCKER_IMAGE_NAME_PHP /var/www/html/ /var/www/html/
CMD ["/bin/bash","/docker-nginx-entrypoint.sh"]
The DOCKER_IMAGE_NAME_PHP part fails with
ERROR: Service 'nginx' failed to build: invalid from flag value DOCKER_IMAGE_NAME_PHP: invalid reference format: repository name must be lowercase
In my docker-compose.yaml for the nginx part I have
nginx:
  build:
    context: .
    dockerfile: Dockerfile.static
  container_name: web-service
  working_dir: /var/www
  volumes:
    - .:/var/www
  environment:
    - "DOCKER_IMAGE_NAME_PHP=app-admin"
    - "DOCKER_IMAGE_NAME_NGINX=web-service"
  depends_on:
    - app
  ports:
    - 8090:80
I thought that setting the environment in the compose file would be enough, and that it would be used (app-admin is the container_name of the php part with WordPress).
In my Jenkins pipeline scripts, these names are used to build the app and static images manually (using docker build -t $DOCKER_IMAGE_NAME_PHP -f Dockerfile.php), and then the names are set in env like
echo -e "DOCKER_IMAGE_NAME_PHP=$DOCKER_IMAGE_NAME_PHP" >>env
EDIT
Like the answer suggested, I've tried adding args under the build key:
build:
  context: .
  dockerfile: Dockerfile.static
  args:
    - "DOCKER_IMAGE_NAME_PHP=app-admin"
Then in my Dockerfile I've added
FROM nginx:latest
COPY scripts/docker-nginx-entrypoint.sh /docker-nginx-entrypoint.sh
COPY ./config/nginx.conf /opt/nginx.conf
COPY ./config/nginx.conf.prd /opt/nginx.conf.prd
ARG DOCKER_IMAGE_NAME_PHP
COPY --from=$DOCKER_IMAGE_NAME_PHP /var/www/html/ /var/www/html/
CMD ["/bin/bash","/docker-nginx-entrypoint.sh"]
But I still get the error. I've tried with ${DOCKER_IMAGE_NAME_PHP}, but that doesn't help.
The odd thing is that when I add RUN echo $DOCKER_IMAGE_NAME_PHP and run this, I can see
Step 6/8 : RUN echo $DOCKER_IMAGE_NAME_PHP
---> Running in 0801fcd5b77f
app-admin
But it's not recognized in the COPY command. How come?
EDIT 2
So it turns out I cannot do this:
https://github.com/moby/moby/issues/34482
This is because --from expects an image name (which I'm trying to pass in from the previously built service, but in my case it's dynamic). It works in Jenkins because there I run a docker build command directly, and the variables are available in the bash script...
COPY --from does not support variables, but FROM does.
The following example uses a multi-stage build to help you extract whatever you need from the first image.
ARG DOCKER_IMAGE_NAME_PHP=php:7.3
FROM ${DOCKER_IMAGE_NAME_PHP} as php-image
FROM nginx:latest
COPY scripts/docker-nginx-entrypoint.sh /docker-nginx-entrypoint.sh
COPY ./config/nginx.conf /opt/nginx.conf
COPY ./config/nginx.conf.prd /opt/nginx.conf.prd
COPY --from=php-image /var/www/html/ /var/www/html/
CMD ["/bin/bash","/docker-nginx-entrypoint.sh"]
An almost identical example sits in the docs.
For this you must use ARG in the Dockerfile:
https://docs.docker.com/engine/reference/builder/#arg
environment in docker-compose.yml is not available at build time:
https://docs.docker.com/compose/compose-file/#environment
You can set args in the docker-compose.yml too:
https://docs.docker.com/compose/compose-file/#args
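On the compose side, the build argument for the multi-stage approach can be supplied like this (a sketch; app-admin is the image name from the question, the rest mirrors the existing compose file):

```yaml
services:
  nginx:
    build:
      context: .
      dockerfile: Dockerfile.static
      args:
        DOCKER_IMAGE_NAME_PHP: app-admin
```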

Docker - deliver the code to nginx and php-fpm

How do I deliver the code of a containerized PHP application, whose image is based on busybox and contains only the code, between separate NGINX and PHP-FPM containers? I use version 3 of the docker compose file format.
The Dockerfile of the image containing the code would be:
FROM busybox
#the app's code
RUN mkdir /app
VOLUME /app
#copy the app's code from the context into the image
COPY code /app
The docker-compose.yml file would be:
version: "3"
services:
  # the application's code
  # the volume is currently mounted from the host machine, but the code will be
  # copied into the image statically for production
  app:
    image: app
    volumes:
      - ../../code/cms/storage:/storage
    networks:
      - backend
  # webserver
  web:
    image: web
    depends_on:
      - app
      - php
    networks:
      - frontend
      - backend
    ports:
      - '8080:80'
      - '8081:443'
  # php
  php:
    image: php:7-fpm
    depends_on:
      - app
    networks:
      - backend
networks:
  frontend:
    driver: "bridge"
  backend:
    driver: "bridge"
The solutions I thought of, none of them appropriate:
1) Use the volume from the app's container in the PHP and NGINX containers, but compose v3 doesn't allow it (the volumes_from directive). Can't use it.
2) Place the code in a named volume and connect it to the containers. Going this way I can't containerize the code. Can't use it. (I'd also have to manually create this volume on every node in a swarm?)
3) Copy the code twice, directly into images based on NGINX and PHP-FPM. Bad idea; I'd have to keep them in concert.
Got stuck with this. Any other options? I might have misunderstood something, only beginning with Docker.
I too have been looking around to solve a similar issue, and it seems Nginx + PHP-FPM is one of those exceptions where it is better to have both services running in one container for production. In development you can bind-mount the project folder into both the nginx and php containers. As per Bret Fisher's guide for good defaults for php: php-docker-good-defaults
So far, the Nginx + PHP-FPM combo is the only scenario that I recommend using multi-service containers for. It's a rather unique problem that doesn't always fit well in the model of "one container, one service". You could use two separate containers, one with nginx and one with php:fpm but I've tried that in production, and there are lots of downsides. A copy of the PHP code has to be in each container, they have to communicate over TCP which is much slower than Linux sockets used in a single container, and since you usually have a 1-to-1 relationship between them, the argument of individual service control is rather moot.
You can read more about setting up multiple service containers on the docker page here (it's also listed in the link above): Docker Running Multiple Services in a Container
The way I see it, you have two options:
(1) Using docker-compose (this is for a very simplistic development env):
You will have to build two separate containers from the nginx and php-fpm images, and then simply serve the app folder from php-fpm into a web folder on nginx.
# The Application
app:
  build:
    context: ./
    dockerfile: app.dev.dockerfile
  working_dir: /var/www
  volumes:
    - ./:/var/www
  expose:
    - 9000

# The Web Server
web:
  build:
    context: ./
    dockerfile: web.dev.dockerfile
  working_dir: /var/www
  volumes_from:
    - app
  links:
    - app:app
  ports:
    - 80:80
    - 443:443
(2) Use a single Dockerfile to build everything in it:
- Start with some flavor of linux or a php image
- Install nginx
- Build your custom image
- Serve the multi-service docker container using supervisord
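A minimal supervisord configuration for that single-container option might look like this (a sketch; the program names and flags are assumptions, not from the question):

```ini
[supervisord]
nodaemon=true                    ; keep supervisord in the foreground as PID 1

[program:php-fpm]
command=php-fpm -F               ; -F runs php-fpm in the foreground
autorestart=true

[program:nginx]
command=nginx -g "daemon off;"   ; likewise keep nginx in the foreground
autorestart=true
```

Both processes must stay in the foreground so supervisord can track them; if either daemonizes, supervisord will think it exited and restart it in a loop.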

How to deal with permissions using docker - nginx / php-fpm

I'm trying to deploy a very simple Symfony application using nginx & php-fpm via Docker.
Two docker services :
1. web : running nginx
2. php : running php-fpm; containing application source.
I want to build images that can be deployed without any external dependency.
That's why I'm copying source code within the php container.
During development, I'm overriding the /var/www/html volume with a local path.
# file: php-fpm/Dockerfile
FROM php:7.1-fpm-alpine
COPY ./vendor /var/www/html
COPY . /var/www/html
VOLUME /var/www/html
Now the docker-compose configuration file.
# file : docker-compose-prod.yml
version: '2'
services:
  web:
    image: "private/web"
    ports:
      - 80:80
    volumes_from:
      - php
  php:
    image: "private/php"
    ports:
      - 9000:9000
The problem is about permissions.
When accessing localhost, Symfony boots up, but the cache / logs / sessions folders are not writable.
nginx is using /var/www/html to serve static files.
php-fpm is using /var/www/html to execute php files.
I'm not sure about the cause, but how can I be sure of the following:
Does /var/www/html have to be readable by nginx?
Does /var/www/html have to be writable by php-fpm?
Note: I'm building the images from a MacBook Pro; cache / logs / sessions are 777.
docker-compose.yml supports a user: directive under services. The docs only mention it for the run command, but it works the same way here.
I have a similar setup and this is how I do it:
# file : docker-compose-prod.yml
version: '2'
services:
  web:
    image: "private/web"
    ports:
      - 80:80
    volumes_from:
      - php
  php:
    image: "private/php"
    ports:
      - 9000:9000
    user: "$UID"
I have to run export UID before running docker-compose and then that sets the default user to my current user. This allows logging / caching etc. to work as expected.
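If your shell is not bash (where $UID is predefined), a POSIX-safe variant is to export an explicitly named variable instead; a sketch (DOCKER_UID is my own name here, not from the compose file above, so the yaml would reference "${DOCKER_UID}"):

```shell
# `id -u` prints the numeric uid of the current user; exporting the
# variable makes it visible to docker-compose for interpolation.
DOCKER_UID=$(id -u)
export DOCKER_UID
echo "$DOCKER_UID"
```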
I am using this solution, "Docker for Symfony": https://github.com/StaffNowa/docker-symfony
New features:
./d4d start
./d4d stop
./d4d help
I've found a solution, but if someone can explain best practices, it would be appreciated!
The cache / logs / sessions folders from the docker context were not empty (on the host).
Now that those folders have been flushed, Symfony creates them with the right permissions.
I've found people using usermod to change the UID, i.e. to 1000, for www-data / nginx...
But it seems to be an ugly hack. What do you think about it?
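For completeness, the usermod approach mentioned above would look roughly like this in the php-fpm image (a sketch; the uid/gid of 1000 and the alpine shadow package are assumptions):

```dockerfile
FROM php:7.1-fpm-alpine
# usermod/groupmod live in the "shadow" package on alpine
RUN apk add --no-cache shadow \
 && usermod -u 1000 www-data \
 && groupmod -g 1000 www-data \
 && chown -R www-data:www-data /var/www/html
```

The appeal is that files created by php-fpm then match the host user's uid on bind mounts; the downside, as noted, is baking a host-specific id into the image.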
