Docker Apache container loses static site data after reboot - php

I have a Docker container that serves some PHP pages with Symfony. It also has a connection to the database that works perfectly fine.
The root directory is /var/www/html/public, and the rough structure of /var/www/html is as follows (files are prefixed with -, directories are in [brackets]):
[/var/www/html]
  -Dockerfile
  -package.json
  -vendor
  -src
  -someotherstuff
  [public]
    -index.php
    -favicon.ico
    [CSS]
    [Javascript]
    [uploads -> this folder is a volume, users can upload their own data]
      -userFileA
      -userFileB
When I start the container with this command everything works great:
docker run -d --restart always --name myContainerName -e DATABASE_URL=mysql://mysql:${DB_PW}@mysql:3306/myDbName -e APP_ENV=${ENV_TYPE} -e APP_SECRET=******** -v myapp_data:/var/www/html/public/uploads -p 127.0.0.1:82:80 --net mysql_net myId/myRepo
After I reboot the server, though, all files inside public except the "uploads" folder get deleted, and the server files look as follows:
[/var/www/html]
  -Dockerfile
  -package.json
  -vendor
  -src
  -someotherstuff
  [public]
    [uploads -> this folder is a volume]
      -userFileA
      -userFileB
Now the website is completely broken because the index.php in the public folder is missing.
I don't understand why this happens, and only on reboot. If there were a conflict between the volume and the rest of the folder, shouldn't that also happen on the initial run?
And why do only files inside public get deleted and not those outside it? I am really confused, and most information I can find on this topic concerns errors caused by an incorrectly set up volume, but my volume works fine and is actually the only thing still properly populated after a reboot.
This is my Dockerfile and the commands I use to build it:
#docker build --tag=myId/myRepo .
#docker push myId/myRepo
FROM php:7.2-apache
ENV DB_HOST=mysql:3306 \
DB_USER=myDbUser \
DB_NAME=myDbName
COPY php.ini "$PHP_INI_DIR/php.ini"
RUN docker-php-ext-install pdo_mysql && a2enmod rewrite
COPY ./ /var/www/html/
COPY 000-default.conf /etc/apache2/sites-available/000-default.conf
VOLUME /var/www/html/public/uploads/

My mistake was to specify VOLUME /var/www/html/public/uploads/ in the Dockerfile.
After removing this line and making the folder a volume only via docker run -v ..., the error didn't occur anymore.
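For reference, a minimal sketch of the corrected Dockerfile, assuming the only change is dropping the VOLUME instruction (the uploads directory is created explicitly so the runtime mount always has a mount point):
FROM php:7.2-apache
ENV DB_HOST=mysql:3306 \
    DB_USER=myDbUser \
    DB_NAME=myDbName
COPY php.ini "$PHP_INI_DIR/php.ini"
RUN docker-php-ext-install pdo_mysql && a2enmod rewrite
COPY ./ /var/www/html/
COPY 000-default.conf /etc/apache2/sites-available/000-default.conf
# no VOLUME instruction here -- the named volume is attached at runtime with
#   docker run -v myapp_data:/var/www/html/public/uploads ...
RUN mkdir -p /var/www/html/public/uploads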


Set AWS credentials folder for php apache container [duplicate]

I am running a Docker container on Amazon EC2. Currently I have added the AWS credentials to the Dockerfile. Could you please let me know the best way to do this?
A lot has changed in Docker since this question was asked, so here's an attempt at an updated answer.
First, specifically with AWS credentials on containers already running inside of the cloud, using IAM roles as Vor suggests is a really good option. If you can do that, then add one more plus one to his answer and skip the rest of this.
Once you start running things outside of the cloud, or have a different type of secret, there are two key places that I recommend against storing secrets:
Environment variables: when these are defined on a container, every process inside the container has access to them, they are visible via /proc, apps may dump their environment to stdout where it gets stored in the logs, and most importantly, they appear in clear text when you inspect the container.
In the image itself: images often get pushed to registries where many users have pull access, sometimes without any credentials required to pull the image. Even if you delete the secret from one layer, the image can be disassembled with common Linux utilities like tar and the secret can be found from the step where it was first added to the image.
So what other options are there for secrets in Docker containers?
Option A: If you need this secret only during the build of your image, cannot use the secret before the build starts, and do not have access to BuildKit yet, then a multi-stage build is the best of the bad options. You would add the secret to the initial stages of the build, use it there, then copy the output of that stage without the secret to your release stage, and only push that release stage to the registry servers. This secret is still in the image cache on the build server, so I tend to use this only as a last resort.
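A minimal sketch of that pattern (the credentials file, bucket name, and artifact paths below are placeholders, not part of the original answer):
# build stage: gets the secret, is never pushed anywhere
FROM python:3 AS build
RUN pip install awscli
COPY credentials /root/.aws/credentials    # the secret only exists in this stage's layers
RUN mkdir -p /opt/app \
 && aws s3 cp s3://example-bucket/app.tar.gz /tmp/app.tar.gz \
 && tar -xzf /tmp/app.tar.gz -C /opt/app \
 && rm /root/.aws/credentials

# release stage: only the build output is copied over, no credential layers
FROM python:3-slim
COPY --from=build /opt/app /opt/app
CMD ["python", "/opt/app/main.py"]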
Option B: Also during build time, if you can use BuildKit which was released in 18.09, there are currently experimental features to allow the injection of secrets as a volume mount for a single RUN line. That mount does not get written to the image layers, so you can access the secret during build without worrying it will be pushed to a public registry server. The resulting Dockerfile looks like:
# syntax = docker/dockerfile:experimental
FROM python:3
RUN pip install awscli
RUN --mount=type=secret,id=aws,target=/root/.aws/credentials aws s3 cp s3://... ...
And you build it with a command in 18.09 or newer like:
DOCKER_BUILDKIT=1 docker build -t your_image --secret id=aws,src=$HOME/.aws/credentials .
Option C: At runtime on a single node, without Swarm Mode or other orchestration, you can mount the credentials as a read only volume. Access to this credential requires the same access that you would have outside of docker to the same credentials file, so it's no better or worse than the scenario without docker. Most importantly, the contents of this file should not be visible when you inspect the container, view the logs, or push the image to a registry server, since the volume is outside of that in every scenario. This does require that you copy your credentials on the docker host, separate from the deploy of the container. (Note, anyone with the ability to run containers on that host can view your credential since access to the docker API is root on the host and root can view the files of any user. If you don't trust users with root on the host, then don't give them docker API access.)
For a docker run, this looks like:
docker run -v $HOME/.aws/credentials:/home/app/.aws/credentials:ro your_image
Or for a compose file, you'd have:
version: '3'
services:
  app:
    image: your_image
    volumes:
    - $HOME/.aws/credentials:/home/app/.aws/credentials:ro
Option D: With orchestration tools like Swarm Mode and Kubernetes, we now have secrets support that's better than a volume. With Swarm Mode, the file is encrypted on the manager filesystem (though the decryption key is often there too, allowing the manager to be restarted without an admin entering a decrypt key). More importantly, the secret is only sent to the workers that need the secret (running a container with that secret), it is only stored in memory on the worker, never disk, and it is injected as a file into the container with a tmpfs mount. Users on the host outside of swarm cannot mount that secret directly into their own container, however, with open access to the docker API, they could extract the secret from a running container on the node, so again, limit who has this access to the API. From compose, this secret injection looks like:
version: '3.7'

secrets:
  aws_creds:
    external: true

services:
  app:
    image: your_image
    secrets:
    - source: aws_creds
      target: /home/user/.aws/credentials
      uid: '1000'
      gid: '1000'
      mode: 0700
You turn on swarm mode with docker swarm init for a single node, then follow the directions for adding additional nodes. You can create the secret externally with docker secret create aws_creds $HOME/.aws/credentials. And you deploy the compose file with docker stack deploy -c docker-compose.yml stack_name.
I often version my secrets using a script from: https://github.com/sudo-bmitch/docker-config-update
Option E: Other tools exist to manage secrets, and my favorite is Vault because it gives the ability to create time limited secrets that automatically expire. Every application then gets its own set of tokens to request secrets, and those tokens give them the ability to request those time limited secrets for as long as they can reach the vault server. That reduces the risk if a secret is ever taken out of your network since it will either not work or be quick to expire. The functionality specific to AWS for Vault is documented at https://www.vaultproject.io/docs/secrets/aws/index.html
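As a rough illustration of that Vault flow (assuming the AWS secrets engine is enabled and a role named my-role exists; both names and the policy file are placeholders):
# one-time setup (simplified): enable the AWS secrets engine and define a role
vault secrets enable aws
vault write aws/roles/my-role credential_type=iam_user policy_document=@policy.json

# applications then use their own token to request short-lived credentials on demand
vault read aws/creds/my-role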
The best way is to use an IAM role and not deal with credentials at all. (see http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html )
Credentials can be retrieved from http://169.254.169.254..... Since this is a private IP address, it can be accessed only from EC2 instances.
All modern AWS client libraries "know" how to fetch, refresh and use credentials from there, so in most cases you don't even need to know about it. Just run EC2 with the correct IAM role and you're good to go.
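For example, a minimal check (assuming the instance has a role attached and the metadata endpoint is reachable from the container's network; the amazon/aws-cli image is just one convenient way to run the CLI):
# no keys are passed at all -- the CLI falls back to the EC2 instance metadata
# service and uses the attached role's temporary credentials
docker run --rm amazon/aws-cli sts get-caller-identity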
As an option you can pass them at runtime as environment variables (e.g. docker run -e AWS_ACCESS_KEY_ID=xyz -e AWS_SECRET_ACCESS_KEY=aaa myimage).
You can access these environment variables by running printenv at the terminal.
Yet another approach is to create a temporary read-only volume in docker-compose.yaml. The AWS CLI and SDKs (like boto3 or the AWS SDK for Java, etc.) look for the default profile in the ~/.aws/credentials file.
If you want to use other profiles, you just need to also export the AWS_PROFILE variable before running the docker-compose command.
export AWS_PROFILE=some_other_profile_name
version: '3'

services:
  service-name:
    image: docker-image-name:latest
    environment:
      - AWS_PROFILE=${AWS_PROFILE}
    volumes:
      - ~/.aws/:/root/.aws:ro
In this example, I used the root user inside the container. If you are using another user, just change /root/.aws to that user's home directory.
:ro stands for a read-only Docker volume.
This is very helpful when you have multiple profiles in the ~/.aws/credentials file and you are also using MFA. It is also helpful when you want to test the container locally before deploying it to ECS, where you have IAM roles, but locally you don't.
Another approach is to pass the keys from the host machine to the docker container. You may add the following lines to the docker-compose file.
services:
  web:
    build: .
    environment:
      - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
      - AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}
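This only works if the variables are already set in the shell that runs docker-compose; a quick sketch (the values are placeholders):
# export the keys in the host shell (placeholders -- use your real values)
export AWS_ACCESS_KEY_ID=AKIA_EXAMPLE
export AWS_SECRET_ACCESS_KEY=example_secret
export AWS_DEFAULT_REGION=eu-west-1

# compose substitutes ${...} from the current environment
docker-compose up -d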
The following one-liner works for me even when my credentials are set up by aws-okta or saml2aws:
$ docker run -v$HOME/.aws:/root/.aws:ro \
-e AWS_ACCESS_KEY_ID \
-e AWS_CA_BUNDLE \
-e AWS_CLI_FILE_ENCODING \
-e AWS_CONFIG_FILE \
-e AWS_DEFAULT_OUTPUT \
-e AWS_DEFAULT_REGION \
-e AWS_PAGER \
-e AWS_PROFILE \
-e AWS_ROLE_SESSION_NAME \
-e AWS_SECRET_ACCESS_KEY \
-e AWS_SESSION_TOKEN \
-e AWS_SHARED_CREDENTIALS_FILE \
-e AWS_STS_REGIONAL_ENDPOINTS \
amazon/aws-cli s3 ls
Please note that for advanced use cases you might need to allow rw (read-write) permissions, so omit the ro (read-only) limitation when mounting the .aws volume in -v$HOME/.aws:/root/.aws:ro
Volume mounting is noted in this thread, but as of the docker-compose v3.2+ file format you can also use a bind mount.
For example, if you have a file named .aws_creds in the root of your project:
In your service in the compose file, do this for volumes:
volumes:
  # normal volume mount, already shown in this thread
  - ./.aws_creds:/root/.aws/credentials
  # way 2, note this requires docker-compose v3.2+
  - type: bind
    source: .aws_creds              # from local
    target: /root/.aws/credentials  # to the container location
Using this idea, you can store your Docker images publicly on Docker Hub because your AWS credentials will not physically be in the image. To have them associated, you must have the correct directory structure locally wherever the container is started (e.g. after pulling from Git).
You could create ~/aws_env_creds containing:
touch ~/aws_env_creds
chmod 777 ~/aws_env_creds
vi ~/aws_env_creds
Add these values (replace the keys with your own):
AWS_ACCESS_KEY_ID=AK_FAKE_KEY_88RD3PNY
AWS_SECRET_ACCESS_KEY=BividQsWW_FAKE_KEY_MuB5VAAsQNJtSxQQyDY2C
Press "esc" to save the file.
Run and test the container:
my_service:
  build: .
  image: my_image
  env_file:
    - ~/aws_env_creds
If someone still faces the same issue after following the instructions in the accepted answer, make sure you are not passing environment variables from two different sources. In my case I was passing environment variables to docker run both via a file and as parameters, which caused the variables passed as parameters to have no effect.
So the following command did not work for me:
docker run --env-file ./env.list -e AWS_ACCESS_KEY_ID=ABCD -e AWS_SECRET_ACCESS_KEY=PQRST IMAGE_NAME:v1.0.1
Moving the aws credentials into the mentioned env.list file helped.
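For illustration, the working setup looks roughly like this (reusing the placeholder values from the command above):
# env.list -- keep all AWS variables here instead of mixing in -e flags
AWS_ACCESS_KEY_ID=ABCD
AWS_SECRET_ACCESS_KEY=PQRST

# then pass only the file
docker run --env-file ./env.list IMAGE_NAME:v1.0.1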
For a PHP Apache Docker image, the following command works:
docker run --rm -d -p 80:80 --name my-apache-php-app -v "$PWD":/var/www/html -v ~/.aws:/.aws --env AWS_PROFILE=mfa php:7.2-apache
Based on some of the previous answers, I built my own setup as follows.
My project structure:
├── Dockerfile
├── code
│   └── main.py
├── credentials
├── docker-compose.yml
└── requirements.txt
My docker-compose.yml file:
version: "3"
services:
app:
build:
context: .
volumes:
- ./credentials:/root/.aws/credentials
- ./code:/home/app
My Dockerfile:
FROM python:3.8-alpine
RUN pip3 --no-cache-dir install --upgrade awscli
RUN mkdir -p /home/app
WORKDIR /home/app
CMD python main.py
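To try it out, something along these lines should work (assuming the credentials file and code/main.py exist as in the tree above; the exec check is just a sanity test):
docker-compose up --build -d

# quick check that the mounted credentials are picked up inside the container
docker-compose exec app aws sts get-caller-identity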

Docker php-fpm running as www-data

I've recently been learning to build images and containers with Docker. I was getting fairly confident with it when using a Mac, but I recently switched to Ubuntu, and I'm fairly new to this side of development.
I'm using a standard new Laravel project as my "code", and am currently just using a php container and nginx container.
I'm using a docker-compose.yml file to create my containers:
version: "3.1"
services:
nginx:
image: nginx:latest
volumes:
- ./code:/var/www
- ./nginx_conf.conf:/etc/nginx/conf.d/default.conf
ports:
- "80:80"
php:
image: php:7.3-fpm
ports:
- 9000
volumes:
- ./code:/var/www
There may be a mistake in the code above because I typed it out rather than copying and pasting, but it works on my machine.
The problem is:
php-fpm is configured with --with-fpm-user=www-data and --with-fpm-group=www-data, and that's set in the php:7.3-fpm Dockerfile (see here).
The files on my host machine, are saved with my user name and group as owner / group.
When I go into the container, the files are owned by 1000 and group 1000 (I assume a mapping to my user account and group on the host machine?)
However, when I access the application through the browser, I get a permission denied error on start up (when Laravel tries to create an error log file in storage). I think this is because php-fpm is running as www-data, but the storage directory has permissions drwxr-xr-x for owner / group phil:phil - my host owner and group.
I've tried the following, after hours of googling and trials:
Recursively change the owner and group of the code directory on the host machine to www-data:www-data. This allows the Laravel application to work, but now I can't create or edit files on the host using PhpStorm, because the directory is read-only (I guess because PhpStorm is running as my user, and the directory is owned by a different user / group).
I've added my host user account to the www-data group and granted write permissions to the group using sudo chmod -R g+w ./code, which now allows the application to run and PhpStorm to write, execute, etc., but when I create or edit a file, the file's owner and group change back to my host phil:phil, and I guess this would break the application again.
I've tried to create a php image and set the ENV (as described in the link above) to configure with --with-fpm-user=phil --with-fpm-group=phil, but after building, it doesn't change anything - it's still running with www-data (after reading a GitHub issue I think this is because ENVs can't be changed until later, at which point php is already configured?) (see github issue here)
I'm running out of ideas to try. The only other thing I can think of is to recursively set the owner and group of the code directory on my host to www-data and try running PhpStorm as www-data instead, but that feels weird. (Update: I tried to open PhpStorm as the www-data user, using sudo -u www-data phpstorm.sh, but I get a Java exception - something to do with graphics - so this approach is unfeasible as well.)
Now the only thing I can think of to try is to create a new php image from an Alpine base image and bypass the official php images completely - which seems like an awful lot of inconvenience just because the maintainers want to use ENV instead of ARG?
I'm not sure of best practice for this scenario. Should I be trying to change how php-fpm is run (user/group)? should I be updating the directory owner/group on my host? should I be running phpstorm as a different user?
Literally any advice will be greatly appreciated.
@bnoeafk I'll just post this as a new answer, although it has basically been said already. I don't think this is hacky; it works basically like ntfsusermap and is certainly more elegant than changing all file permissions.
For the Dockerfile:
FROM php:7.4-apache
# do stuff...
ARG UNAME=www-data
ARG UGROUP=www-data
ARG UID=1000
ARG GID=1001
RUN usermod --uid $UID $UNAME
RUN groupmod --gid $GID $UGROUP
Every user building this image can pass their own IDs into it at build time: docker-compose build --build-arg UID=$(id -u) --build-arg GID=$(id -g)
I ran into the same problem a few weeks ago.
What actually happens is that your host and your container share the same files via the volume; therefore, they also share the permissions.
In production, everything is fine - your server (the www-data user) should be the owner of the files, so no problem there. Things get complicated in development, when you are trying to access those files from the host.
I know a few workarounds. The hackiest one seems to be to set the www-data UID in the container to 1000, so it will match your UID on the host.
Another simple one is to open full 777 permissions on the shared directory, since this is only needed in the development build (it should never be done in production, but as I mentioned before, in production you don't have any problem, so you must separate the two processes and do it only in development mode).
To me, the most elegant solution seems to be to allow all group members to access the files (set 770 permissions) and add www-data to your group:
usermod -a -G phil www-data  # add www-data to your group
chown -R phil ./code         # make yourself the owner; might need sudo
chmod -R 770 ./code          # grant permissions to all group members
You have many options depending on your system to do this, but keep in mind you may have to restart your running process (php-fpm for example)
Some examples of how to achieve this (you can run the commands from outside the container with docker container exec ...):
Example 1:
usermod -u 1007 www-data
It will update the uid of the user www-data to 1007
Example 2:
deluser www-data
adduser -u 1007 -D -S -G www-data www-data
It will delete the user www-data and recreate it with the uid 1007
Get pid and restart process
To restart a running process, for example php-fpm, you can do it this way:
First get the pid, with one of the following command:
pidof php-fpm
ps -ef | grep -v grep | grep php-fpm | awk '{print $2}'
find /proc -mindepth 2 -maxdepth 2 -name exe -lname '*/php-fpm' -printf %h\\n 2>/dev/null | sed s+^/proc/++
Then restart the process with the pid(s) you got just before (if your process support USR2 signal):
kill -USR2 pid <-- replace pid with the number you got before
I found that the easiest way is to update the host or to build your container knowing the right uid (not always doable if you work with different environments).
Let's assume that you want to set the user of your PHP container and the owner of your project files to www-data. This can be done inside Dockerfile:
FROM php
.
.
.
RUN chown -R www-data:www-data /var/www
# keep USER at the end of the Dockerfile -- any instruction after it runs as
# www-data and might face permission errors
USER www-data
The important fact here is that the original permissions on the Docker host correspond to the permissions inside the container. Thus, if you now add your current user to the www-data group (which probably needs a logout/reboot to take effect), you will have sufficient permission to edit the files outside the container (for instance in your IDE):
sudo usermod -aG www-data your_user
This way, the PHP code is permitted to run executables or write new files while you can edit the files on the host environment.

Custom application directory in PHP-FPM container

I'm using the php:7-fpm Docker image but I cannot put my application in /var/www/html. Instead, I want to put it in /opt/foo. /opt/foo is a volume. How can I do this without replacing the whole PHP-FPM configuration?
PHP-FPM defaults to the working directory, but because the image sets the working directory before it sets the command, you can't customize it with WORKDIR. So the only way to do it neatly seems to be appending to the PHP-FPM configuration file:
RUN echo 'chdir = /opt/foo' >> /usr/local/etc/php-fpm.d/www.conf
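Put together, a minimal Dockerfile sketch (the /opt/foo path is from the question; the COPY line is illustrative):
FROM php:7-fpm
# ship the application at the custom path instead of /var/www/html
COPY . /opt/foo
# append only the chdir setting; the rest of the stock PHP-FPM config stays untouched
RUN echo 'chdir = /opt/foo' >> /usr/local/etc/php-fpm.d/www.conf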

Run Docker PHP-apache: Forbidden You don't have permission to access / on this server

I have a folder: my-php-app and it contains a Dockerfile and a src/ folder.
The Dockerfile is very simple:
FROM php:5.6-apache
COPY config/php.ini /usr/local/etc/php/
COPY src/ /var/www/html/
My src/ contains an index.php
The index.php contains
<html>
<body>
<?php echo '<p>Hello World!</p>'; ?>
</body>
</html>
I did the following:
docker build -t my-php-app .
The new image was generated successfully.
Now I want to start a container from that image:
docker run -d -p 80:80 my-php-app
But when I visit localhost:80 I see:
Forbidden
You don't have permission to access / on this server.
So my question is:
How do I start my container properly? What am I doing wrong here?
You did not share your php.ini file, so I tried using the default production one provided by the PHP project, and with that config file I was able to run your project fine.
I suspect your issue lies there.
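A minimal sketch of that approach, assuming the image ships the stock php.ini-production under $PHP_INI_DIR (newer official php images do; adjust the path if your tag differs):
FROM php:5.6-apache
# use the stock production config instead of a custom php.ini
RUN cp "$PHP_INI_DIR/php.ini-production" "$PHP_INI_DIR/php.ini"
COPY src/ /var/www/html/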
You could also just "chown -R" your files to www-data instead of your local username!

How to host multiple Laravel sites/applications in a web root

I have 2 Laravel 4 applications.
I want one of them to be served from within a folder that is inside the other's root folder.
For example, let's say Application A is deployed to /var/www/ folder, and I want Application B to be deployed to /var/www/B/.
When I just naively put it there, I get a NotFoundHttpException error from Application A's RouteCollection.php.
Any idea how this can be achieved?
Thanks in advance!
I suppose you're using Apache 2. There is a file in /etc called hosts where you can configure virtual domains that point at different directories, like:
127.0.0.1 project1.com
127.0.0.1 project2.com
Then you have to configure the virtual host. Go to /etc/apache2/sites-available and copy the default config file 000-default.conf:
cd /etc/apache2/sites-available
sudo cp 000-default.conf 001-laravel1.conf
sudo nano 001-laravel1.conf
Inside the document you only have to change two things (as sketched below):
ServerName (put your virtual domain) -> project1.com
DocumentRoot (put the directory of your project) -> /var/www/A
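A minimal sketch of the resulting 001-laravel1.conf (the ServerName and path follow the example above; everything else is illustrative):
<VirtualHost *:80>
    ServerName project1.com
    # the answer points DocumentRoot at /var/www/A; for a Laravel app you would
    # typically point it at the public subfolder instead
    DocumentRoot /var/www/A/public

    <Directory /var/www/A/public>
        AllowOverride All
        Require all granted
    </Directory>
</VirtualHost>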
And the last thing is to create a symbolic link to this file in the /etc/apache2/sites-enabled directory:
cd /etc/apache2/sites-enabled
ln -s ../sites-available/001-laravel1.conf 001-laravel1.conf
