I am running a Docker container on Amazon EC2. Currently I have added the AWS credentials to the Dockerfile. Could you please let me know the best way to handle this?
A lot has changed in Docker since this question was asked, so here's an attempt at an updated answer.
First, specifically for AWS credentials on containers already running inside the cloud, using IAM roles as Vor suggests is a really good option. If you can do that, then give his answer another +1 and skip the rest of this.
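For completeness, here is a rough sketch of attaching an existing IAM role to a running instance with the AWS CLI; the profile, role, and instance IDs below are placeholders, not values from the question:

aws iam add-role-to-instance-profile \
    --instance-profile-name docker-host-profile \
    --role-name docker-host-role
aws ec2 associate-iam-instance-profile \
    --instance-id i-0123456789abcdef0 \
    --iam-instance-profile Name=docker-host-profile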
Once you start running things outside of the cloud, or have a different type of secret, there are two key places that I recommend against storing secrets:
Environment variables: when these are defined on a container, every process inside the container has access to them, they are visible via /proc, apps may dump their environment to stdout where it gets stored in the logs, and most importantly, they appear in clear text when you inspect the container (see the sketch after this list).
In the image itself: images often get pushed to registries where many users have pull access, sometimes without any credentials required to pull the image. Even if you delete the secret in a later layer, the image can be disassembled with common Linux utilities like tar, and the secret can be found in the layer where it was first added to the image (again, see the sketch after this list).
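A quick illustration of both points, using throwaway names (demo, your_image) rather than anything from the question:

# anything passed with -e is visible to anyone who can inspect the container
docker run -d --name demo -e AWS_SECRET_ACCESS_KEY=supersecret alpine sleep 300
docker inspect -f '{{.Config.Env}}' demo

# secrets baked into an image survive in earlier layers even if "deleted" later
docker save your_image -o image.tar
mkdir extracted && tar -xf image.tar -C extracted
grep -ra AWS_SECRET extracted/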
So what other options are there for secrets in Docker containers?
Option A: If you need this secret only during the build of your image, cannot use the secret before the build starts, and do not have access to BuildKit yet, then a multi-stage build is the best of the bad options. You would add the secret to the initial stages of the build, use it there, copy the output of that stage without the secret to your release stage, and only push that release stage to the registry servers. This secret is still in the image cache on the build server, so I tend to use this only as a last resort (a rough sketch follows below).
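A rough sketch of Option A as a paste-able shell snippet; the bucket, artifact path, and base image are made up for illustration:

cat > Dockerfile <<'EOF'
# build stage: has the secret, never gets pushed
FROM python:3 AS build
RUN pip install awscli
COPY credentials /root/.aws/credentials
RUN aws s3 cp s3://example-bucket/app.tar.gz /app.tar.gz

# release stage: only the artifact is copied over, not the credentials file
FROM python:3
COPY --from=build /app.tar.gz /app.tar.gz
EOF
docker build -t your_image .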
Option B: Also during build time, if you can use BuildKit which was released in 18.09, there are currently experimental features to allow the injection of secrets as a volume mount for a single RUN line. That mount does not get written to the image layers, so you can access the secret during build without worrying it will be pushed to a public registry server. The resulting Dockerfile looks like:
# syntax = docker/dockerfile:experimental
FROM python:3
RUN pip install awscli
RUN --mount=type=secret,id=aws,target=/root/.aws/credentials aws s3 cp s3://... ...
And you build it with a command in 18.09 or newer like:
DOCKER_BUILDKIT=1 docker build -t your_image --secret id=aws,src=$HOME/.aws/credentials .
Option C: At runtime on a single node, without Swarm Mode or other orchestration, you can mount the credentials as a read-only volume. Access to this credential requires the same access that you would have outside of docker to the same credentials file, so it's no better or worse than the scenario without docker. Most importantly, the contents of this file should not be visible when you inspect the container, view the logs, or push the image to a registry server, since the volume is outside of those in every scenario. This does require that you copy your credentials to the docker host, separate from the deploy of the container. (Note: anyone with the ability to run containers on that host can view your credential, since access to the docker API is root on the host and root can view the files of any user. If you don't trust users with root on the host, then don't give them docker API access.)
For a docker run, this looks like:
docker run -v $HOME/.aws/credentials:/home/app/.aws/credentials:ro your_image
Or for a compose file, you'd have:
version: '3'
services:
app:
image: your_image
volumes:
- $HOME/.aws/credentials:/home/app/.aws/credentials:ro
Option D: With orchestration tools like Swarm Mode and Kubernetes, we now have secrets support that's better than a volume. With Swarm Mode, the file is encrypted on the manager filesystem (though the decryption key is often there too, allowing the manager to be restarted without an admin entering a decrypt key). More importantly, the secret is only sent to the workers that need it (those running a container with that secret), it is only stored in memory on the worker, never on disk, and it is injected as a file into the container with a tmpfs mount. Users on the host outside of swarm cannot mount that secret directly into their own container; however, with open access to the docker API, they could extract the secret from a running container on the node, so again, limit who has access to that API. From compose, this secret injection looks like:
version: '3.7'
secrets:
aws_creds:
external: true
services:
app:
image: your_image
secrets:
- source: aws_creds
target: /home/user/.aws/credentials
uid: '1000'
gid: '1000'
mode: 0700
You turn on swarm mode with docker swarm init for a single node, then follow the directions for adding additional nodes. You can create the secret externally with docker secret create aws_creds $HOME/.aws/credentials. And you deploy the compose file with docker stack deploy -c docker-compose.yml stack_name.
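Put together, the commands described in the paragraph above are:

docker swarm init
docker secret create aws_creds $HOME/.aws/credentials
docker stack deploy -c docker-compose.yml stack_name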
I often version my secrets using a script from: https://github.com/sudo-bmitch/docker-config-update
Option E: Other tools exist to manage secrets, and my favorite is Vault because it gives you the ability to create time-limited secrets that automatically expire. Every application then gets its own set of tokens to request secrets, and those tokens give it the ability to request those time-limited secrets for as long as it can reach the Vault server. That reduces the risk if a secret is ever taken out of your network, since it will either not work or be quick to expire. The functionality specific to AWS for Vault is documented at https://www.vaultproject.io/docs/secrets/aws/index.html
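A minimal sketch of Vault's AWS secrets engine, assuming the engine's root credentials are already configured and using made-up role and policy names:

vault secrets enable aws
vault write aws/roles/deploy-role \
    credential_type=iam_user \
    policy_document=@policy.json
vault read aws/creds/deploy-role    # returns a short-lived access key / secret key pair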
The best way is to use an IAM role and not deal with credentials at all (see http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html).
Credentials can be retrieved from http://169.254.169.254..... Since that is a private IP address, it is accessible only from EC2 instances.
All modern AWS client libraries "know" how to fetch, refresh and use credentials from there, so in most cases you don't even need to know about it. Just run EC2 with the correct IAM role and you are good to go.
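If you want to see what the SDKs fetch for you, the instance metadata endpoint can also be queried by hand from the instance (or from a container on it). This is the classic IMDSv1 form; if IMDSv2 is enforced you first have to request a session token:

ROLE=$(curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/)
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/$ROLE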
As an option, you can pass them at runtime as environment variables (e.g. docker run -e AWS_ACCESS_KEY_ID=xyz -e AWS_SECRET_ACCESS_KEY=aaa myimage).
You can access these environment variables by running printenv at the terminal.
Yet another approach is to create a temporary read-only volume in docker-compose.yaml. The AWS CLI and SDKs (like boto3 or the AWS SDK for Java) look for the default profile in the ~/.aws/credentials file.
If you want to use other profiles, you just need to also export the AWS_PROFILE variable before running the docker-compose command:
export AWS_PROFILE=some_other_profile_name
version: '3'
services:
service-name:
image: docker-image-name:latest
environment:
- AWS_PROFILE=${AWS_PROFILE}
volumes:
- ~/.aws/:/root/.aws:ro
In this example, I used the root user inside the container. If you are using another user, just change /root/.aws to that user's home directory. :ro stands for a read-only docker volume.
This is very helpful when you have multiple profiles in the ~/.aws/credentials file and are also using MFA. It is also helpful when you want to test a container locally before deploying it on ECS, where you have IAM roles, but locally you don't.
Another approach is to pass the keys from the host machine to the docker container. You may add the following lines to the docker-compose file.
services:
web:
build: .
environment:
- AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
- AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
- AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}
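This only works if the variables are actually exported in the shell that runs docker-compose; a minimal sketch with placeholder values:

export AWS_ACCESS_KEY_ID=AKIAEXAMPLEKEY
export AWS_SECRET_ACCESS_KEY=example-secret-key
export AWS_DEFAULT_REGION=eu-west-1
docker-compose up -d web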
The following one-liner works for me even when my credentials are set up by aws-okta or saml2aws:
$ docker run -v$HOME/.aws:/root/.aws:ro \
-e AWS_ACCESS_KEY_ID \
-e AWS_CA_BUNDLE \
-e AWS_CLI_FILE_ENCODING \
-e AWS_CONFIG_FILE \
-e AWS_DEFAULT_OUTPUT \
-e AWS_DEFAULT_REGION \
-e AWS_PAGER \
-e AWS_PROFILE \
-e AWS_ROLE_SESSION_NAME \
-e AWS_SECRET_ACCESS_KEY \
-e AWS_SESSION_TOKEN \
-e AWS_SHARED_CREDENTIALS_FILE \
-e AWS_STS_REGIONAL_ENDPOINTS \
amazon/aws-cli s3 ls
Please note that for advanced use cases you might need to allow rw (read-write) permissions, so omit the ro (read-only) limitation when mounting the .aws volume in -v$HOME/.aws:/root/.aws:ro
Volume mounting is noted in this thread, but as of docker-compose v3.2+ you can use a bind mount.
For example, if you have a file named .aws_creds in the root of your project:
In your service for the compose file do this for volumes:
volumes:
# normal volume mount, already shown in thread
- ./.aws_creds:/root/.aws/credentials
# way 2, note this requires docker-compose v 3.2+
- type: bind
source: .aws_creds # from local
target: /root/.aws/credentials # to the container location
Using this idea, you can store your docker images publicly on Docker Hub because your AWS credentials will not physically be in the image. To have them associated, you must have the correct directory structure locally where the container is started (e.g. pulled from Git).
You could create ~/aws_env_creds containing:
touch ~/aws_env_creds
chmod 777 ~/aws_env_creds
vi ~/aws_env_creds
Add these values (replace the keys with your own):
AWS_ACCESS_KEY_ID=AK_FAKE_KEY_88RD3PNY
AWS_SECRET_ACCESS_KEY=BividQsWW_FAKE_KEY_MuB5VAAsQNJtSxQQyDY2C
Press Esc, then type :wq to save the file and exit vi.
Run and test the container:
my_service:
build: .
image: my_image
env_file:
- ~/aws_env_creds
If someone still faces the same issue after following the instructions in the accepted answer, make sure you are not passing environment variables from two different sources. In my case I was passing environment variables to docker run both via a file and as parameters, and the variables passed as parameters had no effect.
So the following command did not work for me:
docker run --env-file ./env.list -e AWS_ACCESS_KEY_ID=ABCD -e AWS_SECRET_ACCESS_KEY=PQRST IMAGE_NAME:v1.0.1
Moving the aws credentials into the mentioned env.list file helped.
For a PHP Apache docker container, the following command works:
docker run --rm -d -p 80:80 --name my-apache-php-app -v "$PWD":/var/www/html -v ~/.aws:/.aws --env AWS_PROFILE=mfa php:7.2-apache
Based on some of the previous answers, I built my own setup as follows.
My project structure:
├── Dockerfile
├── code
│ └── main.py
├── credentials
├── docker-compose.yml
└── requirements.txt
My docker-compose.yml file:
version: "3"
services:
app:
build:
context: .
volumes:
- ./credentials:/root/.aws/credentials
- ./code:/home/app
My Dockerfile:
FROM python:3.8-alpine
RUN pip3 --no-cache-dir install --upgrade awscli
RUN mkdir -p /home/app
WORKDIR /home/app
CMD python main.py
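To build and run this layout it is standard docker-compose usage, nothing beyond what the files above define:

docker-compose up --build
# optional sanity check that the mounted credentials are picked up:
docker-compose run --rm app aws s3 ls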
I have set up Mapbender on Ubuntu 20.04 on a VirtualBox machine. PostgreSQL, PostGIS and GeoServer are all installed on the VM. I created a map application and added a search router function (following the instructions in the documentation). The search is working like a charm in the dev environment, but in prod it is not. In the dev environment it gives results; hovering the mouse over a result highlights the feature, and clicking on a result moves and zooms the map to the feature.
In the prod environment, nothing seems to happen when typing the search string and pressing search. The devtools report an internal server error 500, which is not very helpful, although in Firefox the devtools show Referrer policy "strict-origin-when-cross-origin" in red.
I have already modified the PostgreSQL configuration files (listen_addresses = '*' in postgresql.conf and a host entry for 0.0.0.0/0 in pg_hba.conf) to make sure it is not a database access problem.
Host Machine: Windows 10 Pro 20H2
Guest Machine: Ubuntu 20.04
Mapbender 3.2.6
Database: PostgreSQL 12.8 with PostGIS 3.0
WMS served through GeoServer
PHP 7.2
While I am not sure I have provided all the information needed to properly diagnose the problem, any pointers on how to investigate and solve this issue are appreciated.
Update:
I modified php.ini to enable error logging by setting the following switches:
error_reporting = E_ALL
display_errors = Off
log_errors = On
ignore_repeated_errors = On
ignore_repeated_source = Off
error_log = /var/log/apache2/php_errors.log
But no errors are being logged so far, and the php_errors.log file is not being created. Even creating the file manually has no effect on the logging. I am not sure what I am missing. I want to reiterate, though, that the search is working in the dev environment, so I can't see how it can be an authentication issue. I am trying the search in the prod environment in a browser from within the VM, so I am using localhost to access the application.
On dev tools I get the following:
jquery.min.js:formatted:4210 POST http://localhost/mapbender1/application/bh_admin/element/337/0-ed10fcc5-57e7-1f83-8a76-c32030225b85/search 500 (Internal Server Error)
send # jquery.min.js:formatted:4210
ajax # jquery.min.js:formatted:3992
n.<computed> # jquery.min.js:formatted:4044
getJSON # jquery.min.js:formatted:4033
_search # js:14187
(anonymous) # jquery-ui.min.js:6
(anonymous) # js:13976
dispatch # jquery.min.js:formatted:2119
r.handle # jquery.min.js:formatted:1998
When clicking on jquery.min.js:4210, the following line is highlighted in the file:
g.send(b.hasContent && b.data || null),
Update 2
Following @IonBazan's suggestion, I found the prod.log file, albeit in a different folder, and the error indicates that the database service cannot be found. The log file was in:
/var/www/mapbender1/app/logs
And this is the message in the log file:
request.CRITICAL: Uncaught PHP Exception Symfony\Component\DependencyInjection\Exception\ServiceNotFoundException:
"You have requested a non-existent service "doctrine.dbal.mobh_data_connection". Did you mean this: "doctrine.dbal.default_connection"?"
at /var/www/mapbender1/vendor/symfony/symfony/src/Symfony/Component/DependencyInjection/Container.php line 348
{"exception":"[object] (Symfony\Component\DependencyInjection\Exception\ServiceNotFoundException(code: 0):
You have requested a non-existent service "doctrine.dbal.mobh_data_connection". Did you mean this: "doctrine.dbal.default_connection"?
at /var/www/mapbender1/vendor/symfony/symfony/src/Symfony/Component/DependencyInjection/Container.php:348)"} []
As I have mentioned before, the dev app is capable of accessing the service. This means, I suppose, that the DB connection parameters are correct in the parameters.yml and config.yml files. So I have a feeling there might be some cached item that needs updating, especially since the Mapbender documentation mentions this:
The cache-mechanism of the development-environment behaves differently: Not all files are cached, thus code changes are directly visible. Therefore the usage of the app_dev.php is always slower than the production-environment.
And
The directory app/cache contains the cache-files. It contains directories for each environment (prod and dev). But the mechanism of the dev-cache, as described before, behaves differently.
If changes of the Mapbender interface or the code are made, the cache-directory (app/cache) has to be cleared to see the changes in the application.
So this turned out to be a folder permission issue. The reason the dev environment was working is that dev caches fewer components than prod, so changes made to configuration files like parameters.yml and config.yml are reflected in dev but not in prod. At some point during the setup and configuration process, ownership of the cache/prod folder went to root, which left the www-data user without the proper access rights to the folder. So, bottom line, the prod cache was not being updated, which made the database connection service invisible to the prod environment even though parameters.yml and config.yml had the correct settings.
What I did was the following; note that some of the steps I performed might have been unnecessary, but at this stage I will not try to find out which ones.
First step, stop the running services (Apache and PHP server):
sudo app/console server:stop
sudo service apache2 stop
Clear the prod cache:
sudo app/console cache:clear --env=prod --no-debug
I also used the cache:clear command with the --no-warmup switch, which essentially leaves you with an almost empty cache folder. I issued this command since the previous one left some files in the folder.
sudo app/console cache:clear --env=prod --no-warmup
Install the assets:
sudo app/console assets:install web --env=prod
Give www-data user the proper folder permissions:
sudo chown -R www-data:www-data /var/www/mapbender/app/cache
sudo chmod -R ug+w /var/www/mapbender/app/cache
Start Apache and PHP server:
sudo service apache2 start
sudo app/console server:start
Note that app/console needs to be executed from the folder /var/www/mapbender
Like I mentioned earlier, there might be unnecessary steps but this is more or less what I did and now the app is working as expected.
Disclaimer: I am not a developer and the information presented here was assembled from more than one source, including the Mapbender documentation.
I have one Laravel docker container which is built with a custom nginx + php-fpm docker image.
I have deployed it successfully to a k8s cluster and can access it properly. Also, logging into the pod and running env, I can see all the environment variables being set successfully from my k8s ConfigMap.
In the laravel code I read the environment variables like this:
For example, SomeController.php has the following code:
$apiCode = env('API_CODE');
// also tried like this $apiCode = getenv('API_CODE'); still not successful in fetching
My problem, and this question, is that the env vars always come back empty inside the PHP code, even though inside the pod the env command shows them properly set; somehow the PHP code cannot find them.
(I am not caching the Laravel config, so we can exclude that case; I also tried the command php artisan config:clear beforehand, still with the same result: the env vars cannot be fetched within PHP.)
In the Kubernetes definition YAML I attach the ConfigMap as env variables like below, and I see them defined properly inside the pod:
apiVersion: apps/v1
kind: Deployment
metadata:
name: api
labels:
tier: api
spec:
replicas: 1
selector:
matchLabels:
tier: api
template:
metadata:
labels:
tier: api
spec:
containers:
- name: api
image: somenging-fpm-image:latest
ports:
- containerPort: 80
envFrom:
- configMapRef:
name: api-config-env-file
For the moment I am lost and have no idea why this might happen.
I thought initially that maybe the ConfigMap was created after the pod started (so the php-fpm process did not pick up the env at startup).
To verify and exclude that case, I destroyed the pod and recreated the deployment + pod so that it used the already existing ConfigMap, and still the result was the same: PHP did not pick up the env vars that were present in the pod.
I could log into the pod with kubectl exec -it [podnamehere] /bin/bash and run env there and see the env vars set properly from my ConfigMap api-config-env-file, but the code would always see them as empty and not be able to read them.
I have encountered this problem only when Redis or another caching system is involved. But you already stated that:
(I am not caching the Laravel config, so we can exclude that case; I also tried the command php artisan config:clear beforehand, still with the same result: the env vars cannot be fetched within PHP.)
So double-check whether you have run any command like php artisan optimize. It will write the configuration to the cache (in your setup possibly Redis), and the Laravel application will pick the settings up from there instead of the environment.
Just to make sure you don't miss anything.
Try:
php artisan optimize:clear
This command will do the following:
Cached events cleared!
Compiled views cleared!
Application cache cleared!
Route cache cleared!
Configuration cache cleared!
Compiled services and packages files removed!
Caches cleared successfully!
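If the caches are clear and PHP still sees nothing, a quick check of what PHP itself (rather than your shell) reads inside the pod, using a placeholder pod name:

kubectl exec -it api-pod -- php artisan optimize:clear
kubectl exec -it api-pod -- php -r 'var_dump(getenv("API_CODE"));'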
I have a php-fpm docker container. php-fpm runs inside the container; can I get php-fpm's log on the host machine? If I can, how do I do it?
The common approach is that applications inside a container don't log to a file, but output logs on stdout / stderr. Anything that's printed to stdout / stderr by the container's main process is collected by the built-in logging facilities of docker, and can be viewed using docker logs <container-name>.
By default, the logs are stored per-container using the json-file logging driver, and will be deleted when the container itself is deleted, but there are other logging drivers available (see Configure logging drivers) that allow you to send those logs to (e.g.) syslog, journald, gelf.
Also see
View a container's logs
docker logs
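For example, once the container's main process writes to stdout/stderr, viewing the logs or sending them elsewhere is just a matter of docker flags (the container and image names here are placeholders):

docker logs --follow --tail 100 my-php-fpm
docker run -d --name my-php-fpm \
    --log-driver syslog \
    --log-opt syslog-address=udp://logs.example.com:514 \
    my-php-fpm-image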
The standard for Docker containers is to log to stdout/stderr. However, this doesn't work well for some PHP runtimes, for example php-fpm, because of how logs get mangled in length and format.
Therefore, I switched my approach to writing logs to a volume and using a sidecar container to tail them to stderr and hence into Docker's log collection and/or your orchestrator.
Sample docker-compose.yml section:
cli:
build: .
volumes:
- logs:/srv/annotations/var/logs
logger:
image: busybox:1.27.2
volumes:
- logs:/logs
# be careful, this will only tail already existing files
command: tail -f /logs/all.json
depends_on:
- cli
I am using Symfony 2. I have gone through the entire installation: configuring parameters.yml, running composer install, and following the guide at http://symfony.com/doc/current/cookbook/deployment-tools.html
I get only a blank page when I enter my address online, and the log files in app/logs are not written to. I have run the commands sudo chgrp apache app/logs and chmod g+w /app/logs to make the folder writable, but with no success.
under config_prod.yml:
monolog:
handlers:
main:
type: fingers_crossed
action_level: warning
handler: nested
nested:
type: stream
path: "%kernel.logs_dir%/%kernel.environment%.log"
level: debug
A white page usually means that an error occurred before the Symfony environment (which includes the logger) is even loaded. That's probably why there's nothing written to the log. If Symfony were running, you would probably see the default 'error 500' message.
So what you should actually be looking for is a PHP error. Production environments often suppress error messages by disabling display_errors in their php.ini. You could either enable this temporarily in order to see the errors directly on the erroneous page or, even better, look in the error log of your web server. PHP error messages should appear there as well.
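A quick, hedged way to do both on a typical Apache + mod_php setup (log paths and ini locations vary by distribution):

# watch the web server / PHP error log while reloading the blank page
tail -f /var/log/apache2/error.log

# check which php.ini is loaded and how error reporting is currently configured
php -i | grep -E 'Loaded Configuration File|display_errors|error_log'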