How to view PHP log information from a PHP Docker container?

I have a php-fpm Docker container. php-fpm runs inside the container; can I get php-fpm's logs on the host machine? If so, how?

The common approach is that applications inside a container don't log to a file, but output logs on stdout / stderr. Anything that's printed to stdout / stderr by the container's main process is collected by the built-in logging facilities of docker, and can be viewed using docker logs <container-name>.
By default, the logs are stored per-container using the json-file logging driver, and will be deleted when the container itself is deleted, but there are other logging drivers available (see Configure logging drivers) that allow you to send those logs to (e.g.) syslog, journald, gelf.
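For example, assuming your container is named php-fpm (the name here is just a placeholder), you could follow its output like this:
# follow the container's stdout/stderr in real time
docker logs -f php-fpm
# show only the last 100 lines, with timestamps
docker logs --tail 100 -t php-fpm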
Also see
View a container's logs
docker logs

The standard for Docker containers is to log to stdout/stderr. However, this doesn't work well for some PHP runtimes, for example php-fpm, because of how logs get mangled in length and format.
Therefore, I switched my approach to writing logs to a volume and using a sidecar container to forward them to stderr, and hence into Docker's log collection and/or your orchestrator.
Sample docker-compose.yml section:
cli:
  build: .
  volumes:
    - logs:/srv/annotations/var/logs
logger:
  image: busybox:1.27.2
  volumes:
    - logs:/logs
  # be careful, this will only tail already existing files
  command: tail -f /logs/all.json
  depends_on:
    - cli
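With a setup like this you then read the application's log through the sidecar. A rough sketch, assuming the service names from the compose section above:
# start both services; cli writes its log file to the shared volume
docker-compose up -d
# the logger sidecar tails /logs/all.json, so its output is the application log
docker-compose logs -f logger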


Set AWS credentials folder for a PHP Apache container [duplicate]

I am running a Docker container on Amazon EC2. Currently I have added the AWS credentials to the Dockerfile. Could you please let me know the best way to do this?
A lot has changed in Docker since this question was asked, so here's an attempt at an updated answer.
First, specifically with AWS credentials on containers already running inside of the cloud, using IAM roles as Vor suggests is a really good option. If you can do that, then add one more plus one to his answer and skip the rest of this.
Once you start running things outside of the cloud, or have a different type of secret, there are two key places where I recommend against storing secrets:
Environment variables: when these are defined on a container, every process inside the container has access to them, they are visible via /proc, apps may dump their environment to stdout where it gets stored in the logs, and most importantly, they appear in clear text when you inspect the container.
In the image itself: images often get pushed to registries where many users have pull access, sometimes without any credentials required to pull the image. Even if you delete the secret from one layer, the image can be disassembled with common Linux utilities like tar and the secret can be found from the step where it was first added to the image.
So what other options are there for secrets in Docker containers?
Option A: If you need this secret only during the build of your image, cannot use the secret before the build starts, and do not have access to BuildKit yet, then a multi-stage build is the best of the bad options. You would add the secret to the initial stages of the build, use it there, and then copy the output of that stage without the secret to your release stage, and only push that release stage to the registry servers. This secret is still in the image cache on the build server, so I tend to use this only as a last resort.
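A minimal sketch of the build side of this approach, assuming your Dockerfile's final stage is the release stage described above (the image tag is a placeholder):
# the final (release) stage is what gets tagged and pushed; the earlier stage
# that saw the secret remains only in the local build cache
docker build -t your_image .
docker push your_image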
Option B: Also during build time, if you can use BuildKit which was released in 18.09, there are currently experimental features to allow the injection of secrets as a volume mount for a single RUN line. That mount does not get written to the image layers, so you can access the secret during build without worrying it will be pushed to a public registry server. The resulting Dockerfile looks like:
# syntax = docker/dockerfile:experimental
FROM python:3
RUN pip install awscli
RUN --mount=type=secret,id=aws,target=/root/.aws/credentials aws s3 cp s3://... ...
And you build it with a command in 18.09 or newer like:
DOCKER_BUILDKIT=1 docker build -t your_image --secret id=aws,src=$HOME/.aws/credentials .
Option C: At runtime on a single node, without Swarm Mode or other orchestration, you can mount the credentials as a read-only volume. Access to this credential requires the same access that you would have outside of docker to the same credentials file, so it's no better or worse than the scenario without docker. Most importantly, the contents of this file should not be visible when you inspect the container, view the logs, or push the image to a registry server, since the volume is outside of that in every scenario. This does require that you copy your credentials to the docker host, separate from the deploy of the container. (Note, anyone with the ability to run containers on that host can view your credential, since access to the docker API is root on the host and root can view the files of any user. If you don't trust users with root on the host, then don't give them docker API access.)
For a docker run, this looks like:
docker run -v $HOME/.aws/credentials:/home/app/.aws/credentials:ro your_image
Or for a compose file, you'd have:
version: '3'
services:
  app:
    image: your_image
    volumes:
      - $HOME/.aws/credentials:/home/app/.aws/credentials:ro
Option D: With orchestration tools like Swarm Mode and Kubernetes, we now have secrets support that's better than a volume. With Swarm Mode, the file is encrypted on the manager filesystem (though the decryption key is often there too, allowing the manager to be restarted without an admin entering a decrypt key). More importantly, the secret is only sent to the workers that need the secret (running a container with that secret), it is only stored in memory on the worker, never disk, and it is injected as a file into the container with a tmpfs mount. Users on the host outside of swarm cannot mount that secret directly into their own container, however, with open access to the docker API, they could extract the secret from a running container on the node, so again, limit who has this access to the API. From compose, this secret injection looks like:
version: '3.7'
secrets:
  aws_creds:
    external: true
services:
  app:
    image: your_image
    secrets:
      - source: aws_creds
        target: /home/user/.aws/credentials
        uid: '1000'
        gid: '1000'
        mode: 0700
You turn on swarm mode with docker swarm init for a single node, then follow the directions for adding additional nodes. You can create the secret externally with docker secret create aws_creds $HOME/.aws/credentials. And you deploy the compose file with docker stack deploy -c docker-compose.yml stack_name.
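Collected into commands, that workflow looks roughly like this (the secret and stack names match the compose example above):
# one-time: turn this node into a swarm manager
docker swarm init
# create the external secret from your local credentials file
docker secret create aws_creds $HOME/.aws/credentials
# deploy the stack defined in the compose file
docker stack deploy -c docker-compose.yml stack_name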
I often version my secrets using a script from: https://github.com/sudo-bmitch/docker-config-update
Option E: Other tools exist to manage secrets, and my favorite is Vault because it gives the ability to create time limited secrets that automatically expire. Every application then gets its own set of tokens to request secrets, and those tokens give them the ability to request those time limited secrets for as long as they can reach the vault server. That reduces the risk if a secret is ever taken out of your network since it will either not work or be quick to expire. The functionality specific to AWS for Vault is documented at https://www.vaultproject.io/docs/secrets/aws/index.html
The best way is to use an IAM role and not deal with credentials at all (see http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html).
Credentials can be retrieved from http://169.254.169.254..... Since this is a private IP address, it is accessible only from EC2 instances.
All modern AWS client libraries "know" how to fetch, refresh, and use credentials from there, so in most cases you don't even need to know about it. Just run the EC2 instance with the correct IAM role and you're good to go.
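If you are curious what the SDKs do under the hood, you can query the instance metadata service yourself from the instance; this is only an illustration of the documented metadata path, and the role name is whatever you attached to the instance:
# list the IAM role(s) attached to this instance
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
# fetch the temporary credentials for that role (JSON with key, secret and token)
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/your-role-name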
As an option, you can pass them at runtime as environment variables (e.g. docker run -e AWS_ACCESS_KEY_ID=xyz -e AWS_SECRET_ACCESS_KEY=aaa myimage).
You can access these environment variables by running printenv in the container's terminal.
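For example, a quick check that the variables actually reached the container (myimage and the values are placeholders, and this assumes the image lets you override its default command):
# print the container's environment and filter for the AWS variables
docker run --rm -e AWS_ACCESS_KEY_ID=xyz -e AWS_SECRET_ACCESS_KEY=aaa myimage printenv | grep AWS_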
Yet another approach is to create a temporary read-only volume in docker-compose.yaml. The AWS CLI and SDKs (like boto3 or the AWS SDK for Java) look for the default profile in the ~/.aws/credentials file.
If you want to use another profile, you just need to export the AWS_PROFILE variable before running the docker-compose command.
export AWS_PROFILE=some_other_profile_name
version: '3'
services:
  service-name:
    image: docker-image-name:latest
    environment:
      - AWS_PROFILE=${AWS_PROFILE}
    volumes:
      - ~/.aws/:/root/.aws:ro
In this example, I used the root user in the container. If you are using another user, just change /root/.aws to that user's home directory.
:ro stands for a read-only Docker volume.
This is very helpful when you have multiple profiles in the ~/.aws/credentials file and you are also using MFA. It is also helpful when you want to test a container locally before deploying it to ECS, where you have IAM roles, but locally you don't.
Another approach is to pass the keys from the host machine to the docker container. You may add the following lines to the docker-compose file.
services:
  web:
    build: .
    environment:
      - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
      - AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}
The following one-liner works for me even when my credentials are set up by aws-okta or saml2aws:
$ docker run -v$HOME/.aws:/root/.aws:ro \
-e AWS_ACCESS_KEY_ID \
-e AWS_CA_BUNDLE \
-e AWS_CLI_FILE_ENCODING \
-e AWS_CONFIG_FILE \
-e AWS_DEFAULT_OUTPUT \
-e AWS_DEFAULT_REGION \
-e AWS_PAGER \
-e AWS_PROFILE \
-e AWS_ROLE_SESSION_NAME \
-e AWS_SECRET_ACCESS_KEY \
-e AWS_SESSION_TOKEN \
-e AWS_SHARED_CREDENTIALS_FILE \
-e AWS_STS_REGIONAL_ENDPOINTS \
amazon/aws-cli s3 ls
Please note that for advanced use cases you might need read-write access, in which case omit the :ro (read-only) flag when mounting the .aws volume in -v$HOME/.aws:/root/.aws:ro.
Volume mounting is noted in this thread, but as of docker-compose v3.2+ you can also use a bind mount.
For example, if you have a file named .aws_creds in the root of your project:
In your service in the compose file, define the volumes like this:
volumes:
  # normal volume mount, already shown in thread
  - ./.aws_creds:/root/.aws/credentials
  # way 2, note this requires docker-compose v3.2+
  - type: bind
    source: .aws_creds # from local
    target: /root/.aws/credentials # to the container location
Using this approach, you can store your Docker images publicly on Docker Hub because your AWS credentials will not physically be in the image. To have them associated, you must have the correct directory structure locally wherever the container is started (e.g. when pulling from Git).
You could create ~/aws_env_creds containing:
touch ~/aws_env_creds
chmod 777 ~/aws_env_creds
vi ~/aws_env_creds
Add these values (replacing the keys with your own):
AWS_ACCESS_KEY_ID=AK_FAKE_KEY_88RD3PNY
AWS_SECRET_ACCESS_KEY=BividQsWW_FAKE_KEY_MuB5VAAsQNJtSxQQyDY2C
Press Esc, then type :wq and press Enter to save the file and exit vi.
Run and test the container:
my_service:
  build: .
  image: my_image
  env_file:
    - ~/aws_env_creds
If someone still faces the same issue after following the instructions in the accepted answer, make sure that you are not passing environment variables from two different sources. In my case I was passing environment variables to docker run both via a file and as parameters, which caused the variables passed as parameters to have no effect.
So the following command did not work for me:
docker run --env-file ./env.list -e AWS_ACCESS_KEY_ID=ABCD -e AWS_SECRET_ACCESS_KEY=PQRST IMAGE_NAME:v1.0.1
Moving the aws credentials into the mentioned env.list file helped.
For a PHP Apache Docker image, the following command works:
docker run --rm -d -p 80:80 --name my-apache-php-app -v "$PWD":/var/www/html -v ~/.aws:/.aws --env AWS_PROFILE=mfa php:7.2-apache
Based on some of the previous answers, I built my own setup as follows.
My project structure:
├── Dockerfile
├── code
│   └── main.py
├── credentials
├── docker-compose.yml
└── requirements.txt
My docker-compose.yml file:
version: "3"
services:
  app:
    build:
      context: .
    volumes:
      - ./credentials:/root/.aws/credentials
      - ./code:/home/app
My Dockerfile:
FROM python:3.8-alpine
RUN pip3 --no-cache-dir install --upgrade awscli
RUN mkdir /app
WORKDIR /home/app
CMD python main.py
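To run it, roughly (assuming the docker-compose.yml and directory layout above; aws sts get-caller-identity is just one way to verify that the mounted credentials work):
# build the image and start the service
docker-compose up --build -d
# optional: check that the credentials are picked up inside the container
docker-compose run --rm app aws sts get-caller-identity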

Why do we need to map the project files in both the PHP-FPM and the web server container?

I am pretty new with all of this docker stuff and I have this docker-compose.yml file:
fpm:
  build:
    context: "./php-fpm"
    dockerfile: "my-fpm-dockerfile"
  restart: "always"
  ports:
    - "9002:9000"
  volumes:
    - ./src:/var/www/src
  depends_on:
    - "db"
nginx:
  build:
    context: "./nginx"
    dockerfile: "my-nginx-dockerfile"
  restart: "always"
  ports:
    - "8084:80"
  volumes:
    - ./docker/logs/nginx/:/var/log/nginx:cached
    - ./src:/var/www/src
  depends_on:
    - "fpm"
I am curious why I need to add my project files to the fpm container as well as to the nginx one.
Why isn't it enough to add them only to my web server? A web server is the place that holds the files and handles requests...
I believe that this information would be useful to other docker newbies as well.
Thanks in advance.
In your Nginx container you only need the static files, and in your PHP-FPM container you only need the PHP files. If you are able to split the files, you don't need any file in both places.
Why isn't it enough to add them only to my web server? A web server is the place that holds the files and handles requests...
Nginx handles requests from users. If a request is for a static file (configured in the Nginx site), it sends the contents back to the user. If the request is for a PHP file (and Nginx is correctly configured to use FPM for that location), it forwards the request to the FPM server (via a socket or a TCP connection), which knows how to execute PHP files (Nginx doesn't). You can use PHP-FPM or any other interpreter you prefer, but this one works great when configured correctly.
If you just want an explanation of why both need access to the same files under /var/www/src, I cannot provide a reliable answer, since I'm familiar with neither nginx nor fpm.
But I can explain the purpose of doing it this way.
First, to learn about docker, I highly recommend the official documentation, since it provides a great explanation: docs.docker.com
For learning the syntax of a docker-compose file, see docs.docker.com: Compose file reference
Your specific question
Let me break down what you have here:
You got two different images, fpm and nginx.
fpm:
...
nginx:
...
In principle, these containers (or services, as they are called) run completely independently of each other. This basically means that they don't know the other one exists.
Note: depends_on just expresses a dependency between services.
Conclusion: Your web server knows nothing about your second container.
As said: while I don't know the purpose of fpm, I assume that a common folder is the connection between these two containers. By using a common folder ./src on your host, they both have access to this ./src folder and can thus write to and read from it.
The syntax ./src:/var/www/src means that this ./src folder on your host is mapped to /var/www/src inside the container.
If a container writes to /var/www/src, it actually writes to ./src on your host, and vice versa.
Conclusion: They share a common directory where both containers can access the very same files.
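You can see this sharing in action with a quick experiment (the service names are taken from your compose file; this is only an illustration):
# create a file through the fpm container...
docker-compose exec fpm touch /var/www/src/hello.txt
# ...and it is immediately visible in the nginx container and in ./src on the host
docker-compose exec nginx ls /var/www/src
ls ./src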
Hope my explanation helps you understand your docker-compose setup better.

Why does the PHP development server hang with Docker Compose?

I have the following docker-compose.yml configuration file:
silex-twig:
  image: php:7.1
  command: php -S localhost:4000 /app/index.php
  volumes:
    - .:/app
  ports:
    - 3000:4000
When I run docker-compose up it downloads the base image but then it hangs here:
Recreating silextwig_silex-twig_1
Attaching to silextwig_silex-twig_1
What am I doing wrong? There is nothing available on port 3000.
I know there are setups with php-fpm + nginx but that seemed complicated for only development.
That is normal. It is attaching to the stdout of the container (for which there is no stdout being logged). At this point, the container is running.
If you want to run in the background you would run with docker-compose up -d instead of just docker-compose up.
The actual HTTP request to port 3000 won't work because PHP is listening only on localhost. You need to modify your command to be php -S 0.0.0.0:4000 /app/index.php so that it is listening on all IP addresses and can accept connections through the Docker NAT.
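After changing the command and recreating the container, a quick check from the host (port 3000 is the published port from your compose file):
# recreate the container with the updated command, then hit it from the host
docker-compose up -d
curl -i http://localhost:3000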

Docker: communication between web container and php container

I'm trying to dockerize a project that runs with PHP + the Apache HTTP server. I learned that I need to have a container for the Apache HTTP server and another container for the PHP script. I searched a lot but still don't understand how that works. What I know now is that I should resort to Docker networking: as long as the containers are in the same network, they should be able to communicate with each other.
The closest info I got is this, but it uses nginx:
https://www.codementor.io/patrickfohjnr/developing-laravel-applications-with-docker-4pwiwqmh4
quote from original article:
vhost.conf
The vhost.conf file contains standard Nginx configuration that will handle http requests and proxy traffic to our app container on port 9000. Remember from earlier, we named our container app in the Docker Compose file and linked it to the web container; here, we can just reference that container by its name and Docker will route traffic to that app container.
My question is: what configuration do I need to make the communication between the PHP container and the web container happen using the Apache HTTP server, like above? What is the rationale behind this? I'm really confused; any information will be much appreciated.
The example that you linked to utilizes two containers:
a container that runs nginx
a container that runs php-fpm
The two containers are then able to connect to each other due to the links directive in the web service in the article's example docker-compose.yml. With this, the two containers can resolve the name web and app to the corresponding docker container. This means that the nginx service in the web container is able to forward any requests it receives to the php-fpm container by simply forwarding to app:9000 which is <hostname>:<port>.
If you are looking to stay with PHP + Apache there is a core container php:7-apache that will do what you're looking for within a single container. Assuming the following project structure
/ Project root
- /www/ Your PHP files
You can generate a docker-compose.yml as follows within your project root directory:
web:
  image: php:7-apache
  ports:
    - "8080:80"
  volumes:
    - ./www/:/var/www/html
Then from your project root run docker-compose up and you will be able to visit your app at localhost:8080.
The above docker-compose.yml will mount the www directory in your project as a volume at /var/www/html within the container which is where Apache will serve files from.
The configuration in this case is Docker Compose. They are using Docker Compose to facilitate the DNS changes in the containers that allow them to resolve names like app to IP addresses. In the example you linked, the web service links to the app service. The name app can now be resolved via DNS to one of the app service containers.
In the article, the web service nginx configuration they use has a host and port pair of app:9000. The app service is listening inside the container on port 9000 and nginx will resolve app to one of the IP addresses for the app service containers.
The equivalent of this in just Docker commands would be something like:
App container:
docker run --name app -v "$PWD":/var/www appimage
Web container:
docker run --name web --link app:app -v "$PWD":/var/www webimage

Logging PHP API Request Info from Docker container

I'm working on a project where I have an iOS app connecting to a PHP API. I want to log all incoming requests to the web service for development purposes (i.e. I need a solution that can be turned on and off based on an environment variable). The API is run in a Docker container, which is launched as a docker-compose service.
The PHP API is not using any sort of MVC framework.
My PHP experience is limited, so I know I've got some research ahead of me, but in the meantime, I'd appreciate any jump start to the following questions:
Is there a Composer library that I can plug into my PHP code that will write to a log I can tail?
Can I plug anything in at the nginx or php-fpm container level so that requests to those containers are logged before even reaching the PHP code?
Is there anything I need to configure in either the nginx or php-fpm containers to ensure that logs are tailed when I run docker-compose up?
Here are my logging needs:
request method
request URL
GET query parameters, PUT and POST parameters (these will be in JSON format)
response code
response body
The logs I'm interested in are all application/json. However, I don't mind the kitchen-sink option where everything gets logged.
request and response headers
I will not need these 99% of the time, so they aren't a hard requirement, but it would be nice to be able to toggle them on and off.
Below is the docker-compose.yml file.
version: '2'
services:
  gearman:
    image: gearmand
  redis:
    image: redis
  database:
    image: mariadb
    environment:
      MYSQL_ROOT_PASSWORD: ******
      MYSQL_DATABASE: database
    volumes:
      - dbdata:/var/lib/mysql
    ports:
      - 3306:3306
  sphinx:
    image: sphinx
    links:
      - database
  memcached:
    image: memcached
    ports:
      - "11211:11211"
  php_fpm:
    image: php-fpm
    links:
      - redis
      - memcached
      - database
    environment:
      REDIS_SERVER: redis
      DATABASE_HOST: database
      RW_DATABASE_HOST: database
      RO_DATABASE_HOST0: database
      DATABASE_USER: root
      DATABASE_PASS: ******
    volumes:
      - ./website:/var/www/website/
      - /var/run
  nginx:
    image: nginx
    links:
      - php_fpm
    ports:
      - "80:80"
    volumes_from:
      - php_fpm
volumes:
  dbdata:
    driver: local
It turned out the container was logging with all the default settings, but my client was pointing to another server. Still, to leave a trail:
Inside the php_fpm container (docker exec -it dev_php_fpm_1 /bin/bash), you can cat /etc/php5/fpm/php.ini, which shows the default error_log settings:
; Log errors to specified file. PHP's default behavior is to leave this value
; empty.
; http://php.net/error-log
; Example:
;error_log = php_errors.log
; Log errors to syslog (Event Log on Windows).
;error_log = syslog
Just use the error_log function to write to the default OS logger.
Then follow the php_fpm service's output with docker logs -f -t dev_php_fpm_1.
Update:
In case error_log is truncated for your purposes, you can also simply write the log entries yourself:
file_put_contents(realpath(dirname(__FILE__)) . '/requests.log', $msg, FILE_APPEND);
and then tail it: tail -f requests.log, either from inside the container or from the host if you are using a local volume.
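For example, from the host you could tail it through the container; the container name and file location below are assumptions based on the compose file above, so adjust them to wherever your PHP code writes the file:
# follow the custom request log from outside the container
docker exec -it dev_php_fpm_1 tail -f /var/www/website/requests.log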
