I have inherited some code that was written for PHP 5.2, and rather than installing that locally, I have it running in a Docker container.
This system also depends on MySQL, so I am using Docker Compose and extracting the database credentials to a more secure location:
version: "3"
services:
  mariadb:
    image: mariadb:10.5
    environment:
      MYSQL_ROOT_PASSWORD: ${DB_ROOT_PASSWORD}
      MYSQL_USER: ${DB_USER}
      MYSQL_PASSWORD: ${DB_PASSWORD}
      MYSQL_DATABASE: ${DB_DATABASE}
    volumes:
      - ./conf/mariadb/initdb.d:/docker-entrypoint-initdb.d/:ro
    ports:
      - "3306:3306"
  nginx:
    image: nginx:alpine
    depends_on:
      - php-fpm
    volumes:
      - ${LOCAL_WORKING_DIR}:${REMOTE_WORKING_DIR}
      - ./conf/nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./conf/nginx/conf.d/:/etc/nginx/conf.d/
      # - ./conf/nginx/ssl/:/etc/nginx/ssl/
    ports:
      - "8080:80"
      # - "8443:443"
  php-fpm:
    build:
      context: docker/app
      args:
        APP_ENV: ${APP_ENV}
        PHP_VERSION: ${PHP_VERSION}
        REMOTE_WORKING_DIR: ${REMOTE_WORKING_DIR}
    depends_on:
      - mariadb
    working_dir: ${REMOTE_WORKING_DIR}
    volumes:
      - ${LOCAL_WORKING_DIR}:${REMOTE_WORKING_DIR}
      - ./conf/php/www.conf:/usr/local/etc/php-fpm.d/www.conf:ro
      # - ./conf/php/xdebug.ini:/usr/local/etc/php/conf.d/xdebug.ini:ro
      - ./conf/php/php-ini-overrides.ini:/usr/local/etc/php/conf.d/99-overrides.ini:ro
    environment:
      DB_HOST: mariadb:3306
      DB_USER: ${DB_USER}
      DB_PASSWORD: ${DB_PASSWORD}
      DB_DATABASE: ${DB_DATABASE}
    ports:
      - "9000:9000"
Dockerfile
FROM devilbox/php-fpm:5.2-base
EXPOSE 9000
CMD ["php-fpm"]
Using phpinfo() shows none of those values in $_ENV or $_SERVER, and getenv() returns empty strings.
I have seen issues filed against recent php-fpm images saying this is solved with clear_env = no, but that option is only available in PHP-FPM 5.4+.
I tried using Composer to install dotenv, but that seemed to require PHP 7. The same happened when trying to install Vault to read database credentials remotely.
What else could I try to get this code to run as-is with minimal changes?
Options I have considered:
Start up a secondary REST server that exposes a preconfigured environment, then make requests to it from PHP. Seems hacky, but better than hard-coding database credentials in any code, and it would achieve a similar result to using Vault.
Mount my .env file and parse it, but that requires more code changes that would be removed later anyway.
I found an XML file at /usr/local/etc/php-fpm.conf that contained the environment variables, and filled it in using dockerize:
<value name="environment">
  <!-- Add docker environment variables -->
  {{ range $k,$v := .Env }}{{ $parts := split $k "APP_" }}{{ if eq (len $parts) 2 -}}
  <value name="{{ index $parts 1 }}">{{ $v }}</value>
  {{ end }}{{- end -}}
  <!-- End Docker Env -->
</value>
with docker-compose providing:
environment:
  APP_DB_HOST: mariadb:3306
  APP_DB_USER: ${DB_USER}
  APP_DB_PASSWORD: ${DB_PASSWORD}
  APP_DB_DATABASE: ${DB_DATABASE}
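A template like the one above is typically rendered at container start. A minimal sketch of what the end of the Dockerfile could look like, assuming dockerize is installed in the image and the template is shipped at /templates/php-fpm.conf.tmpl (both paths are assumptions, not from the original setup):

```dockerfile
# hypothetical Dockerfile tail: render the XML template against the
# container's environment with dockerize, then launch php-fpm
COPY conf/php-fpm.conf.tmpl /templates/php-fpm.conf.tmpl
CMD ["dockerize", "-template", "/templates/php-fpm.conf.tmpl:/usr/local/etc/php-fpm.conf", "php-fpm"]
```

Because dockerize renders the template every time the container starts, changes to the compose-level environment take effect on the next docker-compose up without rebuilding the image.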
Most probably those environment variables are overridden somewhere in the image you are using.
Docker Compose allows you to define a command to run on startup, so you can override env vars at startup to whatever you need:
command: bash -c "export DB_HOST='mariadb:3306' DB_USER='some_user' ... && ./start_something.sh"
EDIT:
As a comment mentioned, PHP-FPM requires all env vars to be listed in php-fpm.conf. Strange to me, but it is easy enough to work around by adding the env vars you need to that file in the same command: statement of docker-compose. A simple echo "ENV_NAME" >> ..../php-fpm.conf should help you.
Or you can modify the Dockerfile so your image has a simple sh script which dumps all env vars into that PHP config.
I am modifying the mongo config the same way so it works as a replica set - works like a charm.
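A minimal sketch of such a dump script, not taken from the thread: the APP_ prefix, the output path, and the demo value are all assumptions, and the entries are written in the XML <value> form that PHP-FPM 5.2's config expects. In a real image the fragment would be merged into /usr/local/etc/php-fpm.conf before exec'ing php-fpm.

```shell
#!/bin/sh
# Hypothetical entrypoint helper: copy APP_-prefixed environment variables
# into a config fragment as PHP-FPM 5.2-style XML <value> entries.
CONF="${CONF:-./php-fpm-env.conf}"
: > "$CONF"                          # start with an empty fragment
export APP_DB_USER="some_user"       # demo value; normally set by docker-compose
# for every APP_* variable, strip the prefix and emit one <value> line
env | grep '^APP_' | while IFS='=' read -r name value; do
  printf '<value name="%s">%s</value>\n' "${name#APP_}" "$value" >> "$CONF"
done
cat "$CONF"
```

Running this with APP_DB_USER set produces a fragment containing a line like <value name="DB_USER">some_user</value>, which matches the shape of the dockerize template output above.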
Related
I have a PHP app, dockerized.
My issue is how to capture errors from the PHP service into a dedicated file on the host.
My docker-compose file looks like this:
version: "3.9"
services:
  web:
    image: nginx:latest
    ports:
      - "3000:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - ./public:/public
  php:
    build:
      context: .
      dockerfile: PHP.Dockerfile
    environment:
      APP_MODE: 'development'
    env_file:
      - 'dev.env'
    volumes:
      - ./app:/app
      - ./public:/public
      - ./php.conf:/usr/local/etc/php-fpm.d/zz-log.conf
  mysql:
    image: mariadb:latest
    environment:
      MYSQL_ROOT_PASSWORD: <pass here>
    env_file:
      - 'dev.env'
    volumes:
      - mysqldata:/var/lib/mysql
      - ./app:/app
    ports:
      - '3306:3306'
volumes:
  mysqldata: {}
My php.conf, which is mapped to /usr/local/etc/php-fpm.d/zz-log.conf inside the php service, looks like below:
php_admin_value[error_log] = /app/php-error.log
php_admin_flag[log_errors] = on
catch_workers_output = yes
My intention is to use the PHP error_log() function and have all the logs recorded in php-error.log, which is a file inside the app volume.
Right now, all logs from the containers are shown on the terminal only.
I have been struggling with this for several hours and have no idea how to continue. Thank you.
I don't know what your source image is. I assume some official Docker image for PHP like https://hub.docker.com/_/php.
Containerized applications are usually configured to log to stdout, so you must override that behaviour. This is really PHP-specific and I'm no PHP expert, but from what you describe it looks like you know how to override it (by using the error_log() function and the php_admin_value[error_log] = /app/php-error.log property).
If the behaviour is overridden, you should ensure the file /app/php-error.log exists inside the PHP container (i.e. get inside the container with something like docker exec -it my-container-id /bin/bash, then do ls /app/php-error.log and cat /app/php-error.log to see if the file is created).
Because you're mounting the ./app directory from the host to /app in the container, they are already mirrored. Whatever is inside the container's /app you will also find in your /path/to/docker/compose/app directory. You can check whether the file exists and has content; if not, you failed to override the default behaviour of where PHP logs to.
I have a fully set up docker environment furnished with Xdebug, properly set up with PhpStorm. My environment has multiple containers running for different functions. All appears to work great. CLI/Web interaction both stop at breakpoints as they should, no problems. However ...
I have a code snippet as follows:
// test.php
$host = gethostbyname('db'); //'db' is the name of the other docker box, created with docker-compose
echo $host;
If I run this through bash in the 'web' docker instance:
php test.php
172.21.0.2
If I run it through the browser:
172.21.0.2
If I run it via the PhpStorm run/debug button (Shift+F9):
docker://docker_web:latest/php -dxdebug.remote_enable=1 -dxdebug.remote_mode=req -dxdebug.remote_port=9000 -dxdebug.remote_host=172.17.0.1 /opt/project/test.php
db
It doesn't resolve! Why would that be, and how can I fix it?
As it happens, my docker environment is built with docker-compose, and all the relevant containers are on the same network, and have a proper depends_on hierarchy.
However, PhpStorm was actually set up to use plain Docker rather than Docker Compose. It was connecting fine to the Docker daemon, but because the container wasn't being built Compose-aware, it wasn't leveraging the network layout defined in my docker-compose.yml. Once I told PhpStorm to use Docker Compose, it worked fine.
As an aside, I noticed that running an in-IDE debug session against an already-loaded container causes the container to exit when the script ends. To get around this, I created a mirror debug container for PhpStorm to use on demand. My config is as follows:
version: '3'
services:
  web: &web
    build: ./web
    container_name: dev_web
    ports:
      - "80:80"
    volumes:
      - ${PROJECTS_DIR}/project:/srv/project
      - ./web/logs/httpd:/var/log/httpd
    depends_on:
      - "db"
    networks:
      - backend
  web-debug:
    <<: *web
    container_name: dev_web_debug
    ports:
      - "8181:80"
    command: php -v
  db:
    image: mysql
    container_name: dev_db
    ports:
      - "3306:3306"
    volumes:
      - ./db/conf.d:/etc/mysql/conf.d
      - ./db/data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: root
    networks:
      - backend
networks:
  backend:
    driver: bridge
This allows me to do in-IDE spot debugging on the fly without killing my main web container.
I'm trying to set up a docker-compose system where I'd like to copy dev tools to /usr/local/bin/ on startup.
That's my docker-compose.yml:
version: '3'
services:
  web:
    build: docker/container/nginx
    ports:
      - 4000:80
    volumes: &m2volume
      - ./src:/var/www/html/
      - ./docker/data/bin/:/usr/local/bin/
      - ~/.composer:/var/www/.composer
    networks: &m2network
      - www
    links:
      - "php"
      - "mariadb:mysql"
  mariadb:
    image: mariadb
    ports:
      - 8001:3306
    networks: *m2network
    ports:
      - "3307:3306"
    environment:
      MYSQL_ROOT_PASSWORD: magento2
      MYSQL_DATABASE: db
      MYSQL_USER: magento2
      MYSQL_PASSWORD: magento2
    volumes:
      - ./docker/container/db/docker-entrypoint-initdb.d/:/docker-entrypoint-initdb.d/
      - ./docker/container/db/conf.d:/etc/mysql/conf.d
      - ./docker/data/mariadb:/var/lib/mysql
  php:
    build: docker/container/fpm
    volumes: *m2volume
    networks: *m2network
networks:
  www:
If I leave - ./docker/data/bin/:/usr/local/bin/ in it, I get an error:
ERROR: for m2_php_1 Cannot start service php: oci runtime error: container_linux.go:262: starting container process caused "exec: \"docker-php-entrypoint\": executable file not found in $PATH"
Starting m2_mariadb_1 ... done
ERROR: for php Cannot start service php: oci runtime error: container_linux.go:262: starting container process caused "exec: \"docker-php-entrypoint\": executable file not found in $PATH"
If I comment it out, all works fine.
What am I doing wrong here?
If I understand this correctly, and mapping the volume ./docker/data/bin/:/usr/local/bin/ is causing the error, then that's probably because of the entrypoint defined in the php image.
More to the point, you're overwriting the container's /usr/local/bin folder, which contains the docker-php-entrypoint executable used as the entrypoint. When that disappears, you get the error.
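One way around this, sketched below rather than taken from the thread, is to mount the dev tools at a path that does not shadow the image's own binaries and extend PATH instead (/opt/devtools is an arbitrary choice; the PATH value mirrors the Debian default):

```yaml
# sketch: avoid shadowing the image's /usr/local/bin
php:
  build: docker/container/fpm
  volumes:
    - ./docker/data/bin/:/opt/devtools/
  environment:
    PATH: /opt/devtools:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
```

The tools stay invocable by bare name, while docker-php-entrypoint and everything else the image installed into /usr/local/bin remain intact.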
In my app I have separate Docker containers for nginx, mysql, php and supervisor. Now I need to set up a supervisor program which runs a PHP script. Is it possible to call php from another container?
EDIT
Example:
When I run the supervisor program test, I see the error: INFO spawnerr: can't find command 'php'. I know that php is not in the supervisor container, but how do I call the php container from it? And I need the same php that the application uses.
./app/test.php
<?php
echo "hello world";
docker-compose.yml
version: "2"
services:
  nginx:
    build: ./docker/nginx
    ports:
      - 8080:80
    volumes:
      - ./app:/var/www/html
    links:
      - php
      - mysql
  php:
    build: ./docker/php
    volumes:
      - ./app:/var/www/html
    ports:
      - 9001:9001
  mysql:
    build: ./docker/mysql
    ports:
      - 3306:3306
    volumes:
      - ./data/mysql:/var/lib/mysql
  supervisor:
    build: ./docker/supervisor
    volumes:
      - ./app:/var/www/html
    ports:
      - 9000:9000
supervisor.conf
[program:test]
command = php /var/www/html/test.php
process_name = %(process_num)02d
numprocs = 1
autostart = false
autorestart = true
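One commonly used approach, offered here as a sketch rather than taken from the thread, is to build the supervisor image from the same PHP base image as the app, so the php binary that supervisord invokes actually exists in its own container (the base image tag and file paths are assumptions):

```dockerfile
# hypothetical ./docker/supervisor/Dockerfile: same PHP base as the app image
FROM php:7.1-cli
RUN apt-get update && apt-get install -y --no-install-recommends supervisor \
 && rm -rf /var/lib/apt/lists/*
COPY supervisor.conf /etc/supervisor/conf.d/test.conf
CMD ["supervisord", "-n"]
```

With the ./app volume mounted into this container as in the compose file above, command = php /var/www/html/test.php resolves, and the PHP version matches the application's as long as both Dockerfiles share the same base image.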
Please check this repo on GitHub.
I used Angular, Laravel and Mongo,
with 3 containers: one for mongo, one for php-fpm, and one for nginx acting as a proxy to the API and serving Angular.
Angular does not use a Node.js container because I build it with ng build, which outputs the build into the angular-dist folder.
The angular-src folder is the source code of the Angular app.
In the laravel folder, run composer install; if you use Linux, run sudo chmod 777 -R laravel.
Then you can see this,
and the route http://localhost:8000/api/
and the route http://localhost:8000/api/v1.0
I've got a database backup bundle (https://github.com/dizda/CloudBackupBundle) installed on a Symfony3 project using Docker, but I can't get it to work because it either can't find PHP or can't find MySQL.
When I run php app/console --env=prod dizda:backup:start via exec, run, or cron, I get a mysqldump command not found error from the PHP image, or a PHP not found error from the MySQL/db image.
How do I go about running a PHP command that then runs a mysqldump command?
My docker-compose file is as follows:
version: '2'
services:
  web:
    # image: nginx:latest
    build: .
    restart: always
    ports:
      - "80:80"
    volumes:
      - .:/usr/share/nginx/html
    links:
      - php
      - db
      - node
    volumes_from:
      - php
    volumes:
      - ./logs/nginx/:/var/log/nginx
  php:
    # image: php:fpm
    restart: always
    build: ./docker_setup/php
    links:
      - redis
    expose:
      - 9000
    volumes:
      - .:/usr/share/nginx/html
  db:
    image: mysql:5.7
    volumes:
      - "/var/lib/mysql"
    restart: always
    ports:
      - 8001:3306
    environment:
      MYSQL_ROOT_PASSWORD: gfxhae671
      MYSQL_DATABASE: boxstat_db_live
      MYSQL_USER: boxstat_live
      MYSQL_PASSWORD: GfXhAe^7!
  node:
    # image: //digitallyseamless/nodejs-bower-grunt:5
    build: ./docker_setup/node
    volumes_from:
      - php
  redis:
    image: redis:latest
I'm pretty new to Docker, so any easy improvements you can see, feel free to flag... I'm in the trial and error stage!
Your image that has your code should have all the dependencies needed for your code to run.
In this case, your code needs mysqldump installed locally for it to run. I would consider this to be a dependency of your code.
It might make sense to add a RUN line to your Dockerfile that will install the mysqldump command so that your code can use it.
Another approach altogether would be to externalize the database backup process instead of leaving that up to your application. You could have some container that runs on a cron and does the mysqldump process that way.
I would consider both approaches to be clean.
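For the first approach, a hedged sketch of what the Dockerfile addition could look like: the package name is an assumption that varies by base image (default-mysql-client on current Debian-based php images, mysql-client on older releases), so check what your docker_setup/php base provides.

```dockerfile
# hypothetical addition to docker_setup/php/Dockerfile: install the MySQL
# client tools so mysqldump is available to the backup bundle
RUN apt-get update \
 && apt-get install -y --no-install-recommends default-mysql-client \
 && rm -rf /var/lib/apt/lists/*
```

For the externalized approach, a one-shot backup service could be added to the same compose file; the service name and target path are illustrative, and the credentials mirror the db service above. It could then be triggered from host cron with docker-compose run backup.

```yaml
# sketch: run mysqldump from a container that already ships the client tools
backup:
  image: mysql:5.7
  depends_on:
    - db
  environment:
    MYSQL_PWD: GfXhAe^7!
  volumes:
    - ./backups:/backups
  entrypoint: sh -c 'mysqldump -h db -u boxstat_live boxstat_db_live > /backups/boxstat_db_live.sql'
```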