Custom application directory in PHP-FPM container

I'm using the php:7-fpm Docker image but I cannot put my application in /var/www/html. Instead, I want to put it in /opt/foo. /opt/foo is a volume. How can I do this without replacing the whole PHP-FPM configuration?

PHP-FPM defaults to the working directory, but because the image sets its working directory before it sets the command, you can't customize it with WORKDIR. So the only neat way to do it seems to be appending a chdir directive to the PHP-FPM pool configuration file:
RUN echo 'chdir = /opt/foo' >> /usr/local/etc/php-fpm.d/www.conf
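For reference, here is how that line might fit into a complete Dockerfile (a minimal sketch; since /opt/foo is a volume, the application is mounted at run time rather than copied in, and the image name and mount source below are assumptions):
FROM php:7-fpm
# ensure the mount point exists for php-fpm's startup check
RUN mkdir -p /opt/foo
# point the www pool's working directory at the volume mount point
RUN echo 'chdir = /opt/foo' >> /usr/local/etc/php-fpm.d/www.conf
Then start it with the application mounted, e.g. docker run -v /path/to/app:/opt/foo my-php-fpm.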

Related

Docker Apache Container loses static site data after reboot

I have a Docker container that serves some PHP pages with Symfony. It also has a connection to the database that works perfectly fine.
The root directory is /var/www/html/public, and the rough structure of /var/www/html is as follows (-files and [directories]):
[/var/www/html]
-Dockerfile
-package.json
-vendor
-src
-someotherstuff
[public]
  -index.php
  -favicon.ico
  [CSS]
  [Javascript]
  [uploads -> this folder is a volume, users can upload their own data]
    -userFileA
    -userFileB
When I start the container with this command everything works great:
docker run -d --restart always --name myContainerName -e DATABASE_URL=mysql://mysql:${DB_PW}@mysql:3306/myDbName -e APP_ENV=${ENV_TYPE} -e APP_SECRET=******** -v myapp_data:/var/www/html/public/uploads -p 127.0.0.1:82:80 --net mysql_net myId/myRepo
After I reboot the server, though, all files inside public except the "uploads" folder get deleted, and the server files look as follows:
[/var/www/html]
-Dockerfile
-package.json
-vendor
-src
-someotherstuff
[public]
  [uploads -> this folder is a volume]
    -userFileA
    -userFileB
Now the website is completely broken because the index.php in the public folder is missing.
I don't understand why this happens, and only on reboot. If there were a conflict between the volume and the other folder, shouldn't that also happen on the initial run?
And why do only files inside public get deleted, and not outside? I am really confused; most info I can find on this topic concerns errors caused by an incorrectly set-up volume, but my volume works fine and is actually the only thing still properly populated after a reboot.
This is my Dockerfile and the commands I use to build it:
#docker build --tag=myId/myRepo .
#docker push myId/myRepo
FROM php:7.2-apache
ENV DB_HOST=mysql:3306 \
    DB_USER=myDbUser \
    DB_NAME=myDbName
COPY php.ini "$PHP_INI_DIR/php.ini"
RUN docker-php-ext-install pdo_mysql && a2enmod rewrite
COPY ./ /var/www/html/
COPY 000-default.conf /etc/apache2/sites-available/000-default.conf
VOLUME /var/www/html/public/uploads/
My mistake was to specify VOLUME /var/www/html/public/uploads/ in the Dockerfile.
After removing this line and making the folder a volume only via docker run -v ..., the error didn't occur anymore.
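For reference, the corrected Dockerfile is simply the one above minus the VOLUME instruction; the uploads folder is then made a volume only at run time:
FROM php:7.2-apache
ENV DB_HOST=mysql:3306 \
    DB_USER=myDbUser \
    DB_NAME=myDbName
COPY php.ini "$PHP_INI_DIR/php.ini"
RUN docker-php-ext-install pdo_mysql && a2enmod rewrite
COPY ./ /var/www/html/
COPY 000-default.conf /etc/apache2/sites-available/000-default.conf
# no VOLUME instruction; attach the named volume when starting instead:
# docker run ... -v myapp_data:/var/www/html/public/uploads ...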

Docker php-fpm running as www-data

I've recently been learning to build images and containers with Docker. I was getting fairly confident with it on a Mac, but I recently switched to Ubuntu, and I'm fairly new to this side of development.
I'm using a standard new Laravel project as my "code", and am currently just using a php container and nginx container.
I'm using a docker-compose.yml file to create my containers:
version: "3.1"
services:
nginx:
image: nginx:latest
volumes:
- ./code:/var/www
- ./nginx_conf.conf:/etc/nginx/conf.d/default.conf
ports:
- "80:80"
php:
image: php:7.3-fpm
ports:
- 9000
volumes:
- ./code:/var/www
There may or may not be a mistake in the code above because I've typed it out rather than copying and pasting, but it works on my machine.
The problem is:
php-fpm is configured with --with-fpm-user=www-data and --with-fpm-group=www-data, and that's set in the php:7.3-fpm Dockerfile (see here).
The files on my host machine are saved with my user name and group as owner/group.
When I go into the container, the files are owned by user 1000 and group 1000 (I assume a mapping to my user account and group on the host machine?).
However, when I access the application through the browser, I get a permission-denied error on startup (when Laravel tries to create an error log file in storage). I think this is because php-fpm is running as www-data, but the storage directory has permissions drwxr-xr-x for owner/group phil:phil - my host owner and group.
I've tried the following, after hours of googling and trials:
Recursively change the owner and group of the code directory on the host machine to www-data:www-data. This allows the Laravel application to work, but I now can't create or edit files on the host using PHPStorm, because the directory is read-only (I guess because PHPStorm is running as my user, and the directory is owned by a different user/group).
I've added my host user account to the www-data group and granted write permissions to the group using sudo chmod -R g+w ./code, which now allows the application to run and PHPStorm to write and execute files; but when I create or edit a file, the file's ownership and group change back to my host phil:phil, and I guess this would break the application again.
I've tried to create a php image and set the ENVs (as described in the link above) to configure with --with-fpm-user=phil --with-fpm-group=phil, but after building, it doesn't change anything - it's still running as www-data (after reading a GitHub issue I think this is because ENVs can't be changed until later, at which point PHP is already configured?) (see GitHub issue here).
I'm running out of ideas to try. The only other thing I can think of is to recursively set the owner and group of the code directory on my host to www-data and try running PHPStorm as www-data instead, but that feels weird. (Update: I tried to open PHPStorm as the www-data user, using sudo -u www-data phpstorm.sh, but I get a Java exception - something to do with graphics - so this approach is unfeasible as well.)
Now the only thing I can think of to try is to create a new php image from alpine base image and bypass php's images completely - which seems like an awful lot of inconvenience just because the maintainers want to use ENV instead of ARG?
I'm not sure of best practice for this scenario. Should I be trying to change how php-fpm is run (user/group)? should I be updating the directory owner/group on my host? should I be running phpstorm as a different user?
Literally any advice will be greatly appreciated.
@bnoeafk I'll just post this as a new answer, although it has basically been said already. I don't think this is hacky; it works basically like ntfsusermap and is certainly more elegant than changing all file permissions.
For the Dockerfile:
FROM php:7.4-apache
# do stuff...
ARG UNAME=www-data
ARG UGROUP=www-data
ARG UID=1000
ARG GID=1001
RUN usermod --uid $UID $UNAME
RUN groupmod --gid $GID $UGROUP
Everyone using this image can pass their own IDs in at build time: docker-compose build --build-arg UID=$(id -u) --build-arg GID=$(id -g)
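If the image is built through Docker Compose, the same arguments can also be written into the compose file so nobody has to remember the flags (a sketch; the service name and build context are assumptions):
services:
  php:
    build:
      context: .
      args:
        - UID=1000
        - GID=1000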
I ran into the same problem a few weeks ago.
What actually happens is that your host and your container share the same files via the volume; therefore, they also share the permissions.
In production everything is fine - your server (the www-data user) should be the owner of the files, so no problem there. Things get complicated in development, when you are trying to access those files from the host.
I know a few workarounds. The hackiest one seems to be to set the www-data uid in the container to 1000, so it matches your uid on the host.
Another simple one is to open full 777 permissions on the shared directory. This should never be done in production, but as mentioned before, in production you don't have any problem, so you must separate the two processes and do it only in development mode.
To me, the most elegant solution seems to be to allow all group members to access the files (set 770 permissions) and add www-data to your group:
usermod -a -G phil www-data  # add www-data to your group
chown -R phil ./code         # make yourself the owner; might need sudo
chmod -R 770 ./code          # grant permissions to all group members
You have many options depending on your system, but keep in mind you may have to restart your running process (php-fpm, for example) afterwards.
Some examples of how to achieve this (you can run the commands from outside the container with docker container exec ...):
Example 1:
usermod -u 1007 www-data
It will update the uid of the user www-data to 1007
Example 2:
deluser www-data
adduser -u 1007 -D -S -G www-data www-data
It will delete the user www-data and recreate it with the uid 1007
Get pid and restart process
To restart a running process, for example php-fpm, you can do it this way:
First, get the pid with one of the following commands:
pidof php-fpm
ps -ef | grep -v grep | grep php-fpm | awk '{print $2}'
find /proc -mindepth 2 -maxdepth 2 -name exe -lname '*/php-fpm' -printf %h\\n 2>/dev/null | sed s+^/proc/++
Then restart the process with the pid(s) you got just before (if your process supports the USR2 signal):
kill -USR2 pid <-- replace pid with the number you got before
I found that the easiest way is to update the host, or to build your container knowing the right uid (not always doable if you work with different environments).
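As a concrete example of driving this from the host (the container name my-php is an assumption; in the official php:fpm images the master process normally runs as PID 1):
docker container exec my-php pidof php-fpm   # the master pid is usually 1
docker container exec my-php kill -USR2 1    # gracefully reload the pool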
Let's assume that you want to set the user of your PHP container and the owner of your project files to www-data. This can be done inside the Dockerfile:
FROM php
.
.
.
RUN chown -R www-data:www-data /var/www
# any instruction after this line might face permission errors, so keep USER at the end of the Dockerfile
USER www-data
The important fact here is that the permissions on the Docker host correspond to the permissions inside the container. Thus, if you now add your current user to the www-data group (which probably needs a logout/reboot to take effect), you will have sufficient permission to edit the files outside the container (for instance in your IDE):
sudo usermod -aG www-data your_user
This way, the PHP code is permitted to run executables or write new files while you can edit the files on the host environment.

How to "dynamize" Dockerfile / Docker Compose?

I'm Dockerizing a legacy PHP project. I would like to have Xdebug enabled in the development environment, and my Dockerfile copies a pre-built php.ini into the container.
Due to some network issues we have to have xdebug.remote_connect_back = 0 on Mac OS X (and corresponding xdebug.remote_host = docker.for.mac.localhost) and xdebug.remote_connect_back = 1 on Linux.
Is it possible to grab the current OS type in the Dockerfile / Docker Compose to copy the php.ini corresponding to the host OS?
Use volumes, as described here, in docker-compose.yml. Create php.linux.ini and php.mac.ini in a config folder (or wherever) and map one of them into the container:
services:
  php:
    image: php
    volumes:
      - ./config/php.linux.ini:/etc/php.ini # or wherever the config is
Of course your users will have to manually swap php.linux.ini for php.mac.ini, but it's a one-time manual change.
That information isn't (and shouldn't be) available at image build time. The same Linux-based image could be run on native Linux, a Linux VM on a Mac (either the Docker Machine VM or the hidden VM provided by Docker for Mac), a Linux VM on Windows, or even a Linux VM on Linux, regardless of where it was originally built.
Configuration such as host names should be provided at container run time. Environment variables are a typical way to do this, or you can use the Docker volume mechanism to push in configuration files from the host.
If your issue is purely around debugging your application, you can also set up a full development environment on your host, and only build in to your image the things you need to run it in a more production-like environment.
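As a sketch of the environment-variable route mentioned above: Xdebug reads the XDEBUG_CONFIG variable at startup, so the Mac-specific host can be pushed in at run time without rebuilding the image (the service and image names here are assumptions):
services:
  php:
    image: php:7.2-fpm
    environment:
      XDEBUG_CONFIG: "remote_host=docker.for.mac.localhost"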
I decided to use Docker Compose's ability to read .env files. The whole workflow is as follows:
create a .env.sample file with all the lines commented out:
# OS=windows
# OS=linux
# OS=mac
ignore the .env file by adding a /.env line to the .gitignore file
copy the sample file with $ cp .env.sample .env and leave uncommented just the one line corresponding to your OS
move the OS-specific Xdebug-related section of php.ini into separate files with names like xdebug-mac.ini, xdebug-windows.ini, xdebug-linux.ini, etc.
add an args section to the chosen service in docker-compose.yml with a value like - OS=${OS} (see the sketch after this list)
in the corresponding Dockerfile add these lines:
ARG OS
COPY ./xdebug-${OS}.ini /usr/local/etc/php/conf.d/
the OS value mentioned in .env will be expanded at image build time
execute $ docker-compose up -d --build to build the image and start the container
commit all your changes on success so your colleagues get Xdebug set up properly on any platform; don't forget to tell them to make their own copy of the .env file from the template
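For reference, a minimal sketch of the compose side of the args step above (the service name is an assumption):
services:
  php:
    build:
      context: .
      args:
        - OS=${OS}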

Cannot use NGINX subdomain inside the Docker container

This question is kinda stupid; it's about using Docker's service names as hostnames, so here's the context:
I am running the following Docker containers: base, php-fpm and nginx. I also have a Laravel project, which is located in the /api folder of the root project. I also run haproxy on port 5000 to load-balance requests over the php-fpm containers.
The base container contains the Linux environment from which I can run phpunit and npm commands, and it literally has access to the other containers' files, which are shared via volumes in docker-compose.
The php-fpm container contains the environment for PHP to run.
The nginx container contains the NGINX server, which is configured to hold two websites: the root website (localhost) and the api subdomain (api.localhost). The api. subdomain points to the /api folder within the root project, and the root website (localhost) points to the /frontend folder within the root project.
The problem is that from within the base service container, I cannot run a curl command to reach the api.localhost website. I tried to curl nginx using its service name from docker-compose (which is nginx):
$ curl http://nginx
and it works perfectly, but it answers with code from the frontend folder. I have no idea how to use the service name to reach api.localhost from within the container.
I have tried:
$ curl http://api.nginx
$ curl http://api.localhost
Not even localhost answers the curl command:
$ curl http://localhost
Is there any way i can access the subdomain from a NGINX container using the service name as hostname?
I have found out that subdomains do not work well when combining NGINX with Docker service names as hostnames.
Instead, I had to change the structure of my project so that I don't use subdomains when accessing URLs with service names as hostnames.
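For what it's worth (this is not part of the original answer): nginx selects a server block by the Host header, so from another container you can usually reach the subdomain site through the service name by setting the header explicitly:
$ curl -H "Host: api.localhost" http://nginx
This still resolves the connection through the Docker service name, but presents the hostname the api server block listens for.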

Add php and nginx settings to homestead 2.0 provisioning

I would like to increase the default max POST size in php.ini and the max upload size in the nginx config. How can I add that to an .sh file so it gets executed when I provision the box?
Use provisioning tools such as Puppet, Chef, Salt, Ansible, etc.
For example, put the lines below in your Vagrantfile; it will automatically apply Puppet modules (such as php and nginx) with your changes.
config.vm.provision :puppet do |puppet|
  puppet.module_path = "modules"
  puppet.manifests_path = "manifests"
  puppet.manifest_file = "vagrant.pp"
  puppet.options = ['--verbose']
end
Take a look at these URLs:
https://docs.vagrantup.com/v2/provisioning/puppet_apply.html
https://docs.vagrantup.com/v2/provisioning/ansible.html
https://docs.vagrantup.com/v2/provisioning/chef_solo.html
The correct answer to the exact question would be (given the current version of Homestead):
after cloning, go to src/stubs and edit the after.sh file
launch init.sh from the root of the repository
vagrant up
after.sh is a file copied to the VM and launched after Homestead finishes its provisioning.
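As a rough sketch of what after.sh could contain for the original question (the PHP version and file paths are assumptions and differ between Homestead releases):
#!/usr/bin/env bash
# raise PHP's POST and upload limits
sudo sed -i 's/^post_max_size = .*/post_max_size = 100M/' /etc/php/7.1/fpm/php.ini
sudo sed -i 's/^upload_max_filesize = .*/upload_max_filesize = 100M/' /etc/php/7.1/fpm/php.ini
# raise nginx's request body limit
echo 'client_max_body_size 100M;' | sudo tee /etc/nginx/conf.d/uploads.conf
# restart the services so the new limits take effect
sudo service php7.1-fpm restart
sudo service nginx restart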
