I have a docker environment furnished with Xdebug and fully wired up to PhpStorm. My environment has multiple containers running for different functions. Everything appears to work great: CLI and web interactions both stop at breakpoints as they should, no problems. However ...
I have a code snippet as follows:
// test.php
$host = gethostbyname('db'); //'db' is the name of the other docker box, created with docker-compose
echo $host;
If I run this through bash in the 'web' docker instance:
php test.php
172.21.0.2
If I run it through the browser:
172.21.0.2
If I run it via the PhpStorm run/debug button (Shift+F9):
docker://docker_web:latest/php -dxdebug.remote_enable=1 -dxdebug.remote_mode=req -dxdebug.remote_port=9000 -dxdebug.remote_host=172.17.0.1 /opt/project/test.php
db
It doesn't resolve! Why would that be, and how can I fix it?
As it happens, my docker environment is built with docker-compose, and all the relevant containers are on the same network with a proper depends_on hierarchy.
However, PhpStorm was actually set up to use plain Docker rather than docker-compose. It was connecting fine to the Docker daemon, but because the container wasn't being launched compose-aware, it wasn't leveraging the network layout defined in my docker-compose.yml. Once I told PhpStorm to use docker-compose, it worked fine.
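A quick way to verify the difference (a sketch, assuming the service is named web as in the compose file below) is to run the same lookup through docker-compose, which attaches the one-off container to the compose-defined network:
docker-compose run --rm web php -r "echo gethostbyname('db'), PHP_EOL;"
Run that same php -r line in a container started with plain docker run and 'db' comes back unresolved, because that container only joins the default bridge network.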
As an aside, I noticed that running an in-IDE debug session after already starting my container causes the container to exit when the script ends. To get around this, I had to create a mirror debug container for PhpStorm to use on demand. My config is as follows:
version: '3'
services:
  web: &web
    build: ./web
    container_name: dev_web
    ports:
      - "80:80"
    volumes:
      - ${PROJECTS_DIR}/project:/srv/project
      - ./web/logs/httpd:/var/log/httpd
    depends_on:
      - "db"
    networks:
      - backend
  web-debug:
    <<: *web
    container_name: dev_web_debug
    ports:
      - "8181:80"
    command: php -v
  db:
    image: mysql
    container_name: dev_db
    ports:
      - "3306:3306"
    volumes:
      - ./db/conf.d:/etc/mysql/conf.d
      - ./db/data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: root
    networks:
      - backend
networks:
  backend:
    driver: bridge
This lets me do in-IDE spot debugging on the fly without killing my main web container.
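A rough usage sketch with the services above (the test.php path is illustrative):
# the long-running stack stays up
docker-compose up -d web
# one-off debug runs go through the mirror service; --rm removes the container afterwards
docker-compose run --rm web-debug php /srv/project/test.php
Because web-debug is a separate service, its exit at end-of-script never touches dev_web.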
Related
I have an application developed with PHP, Nginx and DynamoDB. I have created a simple docker-compose file to work locally.
version: '3.7'
services:
  nginx_broadway_demo:
    container_name: nginx_broadway_demo
    image: nginx:latest
    ports:
      - 8080:80
    volumes:
      - ./www:/var/www
      - ./docker/nginx/vhost.conf:/etc/nginx/conf.d/default.conf
      - ./docker/nginx/nginx.conf:/etc/nginx/nginx.conf
    links:
      - php_fpm_broadway_demo
  php_fpm_broadway_demo:
    container_name: php_fpm_broadway_demo
    build:
      context: ./docker/php
    ports:
      - 9000:9000
    volumes:
      - .:/var/www/web
  dynamodb:
    image: amazon/dynamodb-local
    ports:
      - 8000:8000
    expose:
      - 8000
Now I need to add the DynamoDB endpoint URL so that PHP can make queries to the database.
If I ping the container from the PHP docker container like this, it works fine:
ping dynamodb
This doesn't work, since ping expects a hostname, not a URI:
ping http://dynamodb:8000
But I need to use http://dynamodb:8000, because the AWS SDK requires a full URI; if I pass just the bare hostname I get this error:
Endpoints must be full URIs and include a scheme and host
So: how can I address a docker container like a URL?
I have tried docker-compose parameters like depends_on, links and networks without success.
As discussed in the chat, the error comes from dependencies that were installed on the host and then used inside the container; Composer builds against the underlying platform.
So we determined that this was the cause of the issue: installing the dependencies inside the container fixes it:
docker exec -it php bash -c "cd web; composer install"
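With the dependencies installed in the container, the endpoint itself is just client configuration. A minimal sketch using the AWS SDK for PHP (region and credentials are placeholders; dynamodb-local accepts dummy credentials, and the hostname is the compose service name from above):
<?php
require 'vendor/autoload.php';

use Aws\DynamoDb\DynamoDbClient;

// Docker's internal DNS resolves the compose service name, so the full
// URI the SDK demands is just the scheme, service name and port.
$client = new DynamoDbClient([
    'region'      => 'us-east-1',          // placeholder region
    'version'     => 'latest',
    'endpoint'    => 'http://dynamodb:8000',
    'credentials' => [
        'key'    => 'local',               // dummy values for dynamodb-local
        'secret' => 'local',
    ],
]);

print_r($client->listTables()->toArray());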
Currently, I have two containers, php-fpm and NGINX, in which I run my PHP application.
Now my question: is there a way to "connect" two docker containers without using a volume?
Both containers need my application (NGINX to serve static files, e.g. css/js, and php-fpm to interpret the PHP files).
Currently, my application is cloned from git into my NGINX container, and I had a volume so that php-fpm also had the files to interpret.
I'm searching for a solution where my application doesn't have to live on the host system.
I'm not sure what you're trying to achieve, but my docker-compose.yml looks like this:
php:
  container_name: custom_php
  build:
    context: php-fpm
    args:
      TIMEZONE: 'UTC'
  volumes:
    - ./website:/var/www/symfony
  networks:
    - app_net
nginx:
  build: nginx
  container_name: custom_nginx
  ports:
    - 80:80
  volumes:
    - ./website:/var/www/symfony
  networks:
    - app_net
networks:
  app_net:
    driver: bridge
Just make sure they are on one network; then you can talk from container to container by service name and port, as in the sketch below. Hope that helps.
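For instance, with the services above, the nginx vhost reaches php-fpm by service name rather than by IP or socket. A minimal sketch (the document root assumes the shared /var/www/symfony mount):
server {
    listen 80;
    root /var/www/symfony/public;

    location ~ \.php$ {
        # "php" is the compose service name; 9000 is php-fpm's default port
        fastcgi_pass php:9000;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}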
On my local machine, WordPress page load time is very slow on Docker with nginx and php7-fpm: the network panel shows 2-4 seconds to load the first document, but when I measure PHP execution time it comes to 0.02-0.1 seconds. How can I optimize my Docker setup to speed up the local environment?
Below are some details of my local environment.
It is set up on macOS Sierra, and I run Docker with
docker-compose up -d
and here is my docker-compose.yml file
version: '2'
services:
  mysql:
    container_name: db
    image: mysql
    environment:
      - MYSQL_ROOT_PASSWORD=dummy
      - MYSQL_DATABASE=dummy
      - MYSQL_USER=dummy
      - MYSQL_PASSWORD=dummy
    volumes:
      - dummy_path/dump.sql.gz:/docker-entrypoint-initdb.d/sql1.sql.gz
  nginx:
    container_name: nginx
    image: nginx:latest
    ports:
      - "80:80"
      - "443:443"
    links:
      - mysql:db
      - php
    volumes:
      - dummy_path:/app/www
      - dummy_path/nginx/conf.d/:/etc/nginx/conf.d/
      - dummy_path/nginx/ssl:/etc/ssl/
      - dummy_path/nginx/nginx.conf/:/etc/nginx/nginx.conf
      - dummy_path/hosts:/etc/hosts
  php:
    container_name: php
    image: droidhive/php-memcached
    links:
      - mysql:db
      - memcached
    volumes:
      - dummy_path:/app/www
      - dummy_path/php/custom.ini:/usr/local/etc/php/conf.d/custom.ini
      - dummy_path/hosts:/etc/hosts
  memcached:
    container_name: memcached
    image: memcached
    volumes:
      - dummy_path:/app/www
The first thing I would try is to update your Dockerfile to ADD or COPY all your files into each image rather than mounting them as volumes. #fiber-optic mentioned this in the comments; the new Dockerfile for your PHP container would be something like this:
FROM droidhive/php-memcached
ADD dummy_path /app/www
ADD dummy_path/php/custom.ini /usr/local/etc/php/conf.d/custom.ini
ADD dummy_path/hosts /etc/hosts
Do this for at least the PHP container, but the MySQL container might also be an issue.
If that doesn't help or you can't get it to work, try adding :ro or :cached to each of your volumes.
:ro means "read-only", which allows your container to assume the volume won't change. Obviously this won't work if you need to do local dev with the code in a volume, but for some of your configuration files this will probably be fine.
:cached means that the host's files are authoritative, and the container won't constantly be checking for updates internally. This is usually ideal for code that you're editing on your host.
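For example, the PHP service's mounts above might become (a sketch; :cached is a Docker-for-Mac hint and is ignored where it doesn't apply):
volumes:
  - dummy_path:/app/www:cached
  - dummy_path/php/custom.ini:/usr/local/etc/php/conf.d/custom.ini:ro
  - dummy_path/hosts:/etc/hosts:ro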
In my app I have separate docker containers for nginx, mysql, php and supervisor. But now I need to set up a supervisor program which runs a PHP script. Is it possible to call php from another container?
EDIT
Example:
When I run the supervisor program test, I see the error INFO spawnerr: can't find command 'php'. I know that php is not in the supervisor container, but how do I call php from that container? And I need the same php as the application uses.
./app/test.php
<?php
echo "hello world";
docker-compose.yml
version: "2"
services:
  nginx:
    build: ./docker/nginx
    ports:
      - 8080:80
    volumes:
      - ./app:/var/www/html
    links:
      - php
      - mysql
  php:
    build: ./docker/php
    volumes:
      - ./app:/var/www/html
    ports:
      - 9001:9001
  mysql:
    build: ./docker/mysql
    ports:
      - 3306:3306
    volumes:
      - ./data/mysql:/var/lib/mysql
  supervisor:
    build: ./docker/supervisor
    volumes:
      - ./app:/var/www/html
    ports:
      - 9000:9000
supervisor.conf
[program:test]
command = php /var/www/html/test.php
process_name = %(process_num)02d
numprocs = 1
autostart = false
autorestart = true
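One way to clear that spawnerr is to give the supervisor image the same PHP runtime as the app, for example by basing ./docker/supervisor on a PHP image. A sketch (the base image tag and config path are assumptions):
FROM php:7.2-cli
RUN apt-get update && apt-get install -y supervisor && rm -rf /var/lib/apt/lists/*
COPY supervisor.conf /etc/supervisor/conf.d/supervisor.conf
CMD ["supervisord", "-n"]
Since ./app is already mounted at /var/www/html in the supervisor service, command = php /var/www/html/test.php then finds both the binary and the script.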
Please check this repo on GitHub.
I used Angular, Laravel and Mongo,
with 3 containers: mongo, php-fpm, and nginx acting as a proxy to the API and to the Angular app.
Angular does not use a nodejs container, because I build it with ng build, which writes the build output to the angular-dist folder.
The angular-src folder is the Angular source code.
In the laravel folder, run composer install; if you use Linux, run sudo chmod 777 -R laravel.
You can then reach the app at the routes http://localhost:8000/api/ and http://localhost:8000/api/v1.0.
I've got a database backup bundle (https://github.com/dizda/CloudBackupBundle) installed on a Symfony3 project using Docker, but I can't get it to work: it either can't find PHP or can't find MySQL.
When I run php app/console --env=prod dizda:backup:start via exec, run, or cron, I get a "mysqldump: command not found" error from the PHP image, or a "php: command not found" error from the MySQL/db image.
How do I go about running a PHP command that then runs a mysqldump command?
My docker-compose file is as follows:
version: '2'
services:
  web:
    # image: nginx:latest
    build: .
    restart: always
    ports:
      - "80:80"
    volumes:
      - .:/usr/share/nginx/html
      - ./logs/nginx/:/var/log/nginx
    volumes_from:
      - php
    links:
      - php
      - db
      - node
  php:
    # image: php:fpm
    restart: always
    build: ./docker_setup/php
    links:
      - redis
    expose:
      - 9000
    volumes:
      - .:/usr/share/nginx/html
  db:
    image: mysql:5.7
    volumes:
      - "/var/lib/mysql"
    restart: always
    ports:
      - 8001:3306
    environment:
      MYSQL_ROOT_PASSWORD: gfxhae671
      MYSQL_DATABASE: boxstat_db_live
      MYSQL_USER: boxstat_live
      MYSQL_PASSWORD: GfXhAe^7!
  node:
    # image: //digitallyseamless/nodejs-bower-grunt:5
    build: ./docker_setup/node
    volumes_from:
      - php
  redis:
    image: redis:latest
I'm pretty new to Docker, so if there are any easy improvements you can see, feel free to flag them... I'm in the trial-and-error stage!
Your image that has your code should have all the dependencies needed for your code to run.
In this case, your code needs mysqldump installed locally for it to run. I would consider this to be a dependency of your code.
It might make sense to add a RUN line to your Dockerfile that will install the mysqldump command so that your code can use it.
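A minimal sketch, assuming ./docker_setup/php builds from a Debian-based PHP image (the package name is an assumption: it is mysql-client on older Debian/Ubuntu releases and default-mysql-client on newer ones):
FROM php:fpm
# install the client tools so the bundle can shell out to mysqldump
RUN apt-get update \
    && apt-get install -y default-mysql-client \
    && rm -rf /var/lib/apt/lists/*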
Another approach altogether would be to externalize the database backup process instead of leaving that up to your application. You could have some container that runs on a cron and does the mysqldump process that way.
I would consider both approaches to be clean.
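A hypothetical sketch of the externalized approach, added as a service to the compose file above (the MYSQL_PWD value mirrors the db service's root password; scheduling is left to the host's cron):
backup:
  image: mysql:5.7
  links:
    - db
  environment:
    MYSQL_PWD: gfxhae671
  volumes:
    - ./backups:/backups
  command: sh -c 'mysqldump -h db -u root boxstat_db_live > /backups/dump.sql'
Invoked on demand (or from cron) with docker-compose run --rm backup, it dumps the database without the PHP image ever needing mysqldump.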