Access a docker container like a URL - php

I have an application developed with PHP, Nginx and DynamoDB. I have created a simple docker-compose file to work locally.
version: '3.7'
services:
  nginx_broadway_demo:
    container_name: nginx_broadway_demo
    image: nginx:latest
    ports:
      - 8080:80
    volumes:
      - ./www:/var/www
      - ./docker/nginx/vhost.conf:/etc/nginx/conf.d/default.conf
      - ./docker/nginx/nginx.conf:/etc/nginx/nginx.conf
    links:
      - php_fpm_broadway_demo
  php_fpm_broadway_demo:
    container_name: php_fpm_broadway_demo
    build:
      context: ./docker/php
    ports:
      - 9000:9000
    volumes:
      - .:/var/www/web
  dynamodb:
    image: amazon/dynamodb-local
    ports:
      - 8000:8000
    expose:
      - 8000
Now I need to pass the dynamodb URL to PHP so it can query the database.
If I ping the container by name from inside the PHP docker container, it works fine:
ping dynamodb
This doesn't work:
ping http://dynamodb:8000
I need to use http://dynamodb:8000 because the AWS SDK requires a full URI; if I pass only the container name I get this error:
Endpoints must be full URIs and include a scheme and host
So: how can I reach a docker container through a URL like this?
I have tried docker-compose parameters like depends_on, links, and networks, without success.
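For reference, the endpoint is normally passed straight to the SDK client; a minimal sketch with the AWS SDK for PHP, where the region and the dummy credentials are assumptions (dynamodb-local accepts any values):
<?php
require 'vendor/autoload.php';

use Aws\DynamoDb\DynamoDbClient;

$client = new DynamoDbClient([
    'version'     => 'latest',
    'region'      => 'us-east-1',             // arbitrary for dynamodb-local
    'endpoint'    => 'http://dynamodb:8000',  // compose service name + port
    'credentials' => ['key' => 'dummy', 'secret' => 'dummy'],
]);

var_dump($client->listTables());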

As discussed in the chat, the error comes up when the dependencies are installed on the host but used inside the container, because Composer resolves packages based on the underlying platform.
So that is the cause of the issue; installing the dependencies inside the container fixes it:
docker exec -it php bash -c "cd web; composer install"
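Alternatively, if dependencies must be installed on the host, Composer can be pinned to the container's platform in composer.json; a sketch, assuming the container runs PHP 7.2 (the exact version is an assumption):
{
    "config": {
        "platform": {
            "php": "7.2.0"
        }
    }
}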

Related

PhpStorm multi-docker hostname resolution

I have a fully configured docker environment furnished with Xdebug and properly set up with PhpStorm. My environment has multiple containers running for different functions. All appears to work great: CLI/web interaction both stop at breakpoints as they should, no problems. However ...
I have a code snippet as follows:
// test.php
$host = gethostbyname('db'); //'db' is the name of the other docker box, created with docker-compose
echo $host;
If I run this through bash in the 'web' docker instance:
php test.php
172.21.0.2
If I run it through the browser:
172.21.0.2
If I run it via the PhpStorm run/debug button (Shift+F9):
docker://docker_web:latest/php -dxdebug.remote_enable=1 -dxdebug.remote_mode=req -dxdebug.remote_port=9000 -dxdebug.remote_host=172.17.0.1 /opt/project/test.php
db
It doesn't resolve! Why would that be, and how can I fix it?
As it happens, my docker environment is built with docker-compose; all the relevant containers are on the same network and have a proper depends_on hierarchy.
However, PhpStorm was actually set up to use plain docker rather than docker-compose. It was connecting fine to the docker daemon, but because the container wasn't being built compose-aware, it wasn't leveraging the network layout defined in my docker-compose.yml. Once I told PhpStorm to use docker-compose, it worked fine.
As an aside, I noticed that running an in-IDE debug session against an already-loaded container causes the container to exit when the script ends. To get around this, I had to create a mirror debug container for PhpStorm to use on demand. My config is as follows:
version: '3'
services:
  web: &web
    build: ./web
    container_name: dev_web
    ports:
      - "80:80"
    volumes:
      - ${PROJECTS_DIR}/project:/srv/project
      - ./web/logs/httpd:/var/log/httpd
    depends_on:
      - "db"
    networks:
      - backend
  web-debug:
    <<: *web
    container_name: dev_web_debug
    ports:
      - "8181:80"
    command: php -v
  db:
    image: mysql
    container_name: dev_db
    ports:
      - "3306:3306"
    volumes:
      - ./db/conf.d:/etc/mysql/conf.d
      - ./db/data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: root
    networks:
      - backend
networks:
  backend:
    driver: bridge
This allows me to do in-IDE spot debugging on the fly without killing my main web container.
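The mirror container can then be invoked on demand, for instance (the script path inside the container follows the volume mapping above; the test script name is an assumption):
docker-compose run --rm web-debug php /srv/project/test.php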

Best way to "connect" two Docker containers without a volume

Currently, I have two containers, php-fpm and NGINX, in which I run the PHP application.
Now my question: is there a way to "connect" two docker containers without using a volume?
Both containers need my application (NGINX to send static files e.g. css/js and php-fpm to interpret the PHP files).
Currently, my application is cloned from git into my NGINX container, and I use a volume so that php-fpm also has the files to interpret the PHP.
I'm looking for a solution that doesn't require my application to live on the host system.
I'm not sure what you're trying to achieve, but my docker-compose.yml looks like this:
php:
  container_name: custom_php
  build:
    context: php-fpm
    args:
      TIMEZONE: 'UTC'
  volumes:
    - ./website:/var/www/symfony
  networks:
    - app_net
nginx:
  build: nginx
  container_name: custom_nginx
  ports:
    - 80:80
  volumes:
    - ./website:/var/www/symfony
  networks:
    - app_net
networks:
  app_net:
    driver: bridge
Just make sure they are in the same network; then you can talk from container to container by container name and port. Hope that helps.
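For example, the vhost inside the nginx container can point at the php-fpm service by name; a minimal sketch (the document root matches the compose file above, everything else is assumed):
server {
    listen 80;
    root /var/www/symfony/public;

    location ~ \.php$ {
        # "php" is the compose service name; Docker's DNS resolves it on app_net
        fastcgi_pass php:9000;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}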

How to reduce load time on docker with nginx and php7-fpm on a local machine

On my local machine, the WordPress page load time is very slow on docker with nginx and php7-fpm: the network panel shows 2 - 4 sec to load the first document, yet when I measure the PHP execution time it is only 0.02 - 0.1 sec. How can I optimize the docker setup to speed up the local environment?
Below are some details of my local environment. It is set up on Mac Sierra, and I run docker with
docker-compose up -d
and here is my docker-compose.yml file
version: '2'
services:
  mysql:
    container_name: db
    image: mysql
    environment:
      - MYSQL_ROOT_PASSWORD=dummy
      - MYSQL_DATABASE=dummy
      - MYSQL_USER=dummy
      - MYSQL_PASSWORD=dummy
    volumes:
      - dummy_path/dump.sql.gz:/docker-entrypoint-initdb.d/sql1.sql.gz
  nginx:
    container_name: nginx
    image: nginx:latest
    ports:
      - "80:80"
      - "443:443"
    links:
      - mysql:db
      - php
    volumes:
      - dummy_path:/app/www
      - dummy_path/nginx/conf.d/:/etc/nginx/conf.d/
      - dummy_path/nginx/ssl:/etc/ssl/
      - dummy_path/nginx/nginx.conf/:/etc/nginx/nginx.conf
      - dummy_path/hosts:/etc/hosts
  php:
    container_name: php
    image: droidhive/php-memcached
    links:
      - mysql:db
      - memcached
    volumes:
      - dummy_path:/app/www
      - dummy_path/php/custom.ini:/usr/local/etc/php/conf.d/custom.ini
      - dummy_path/hosts:/etc/hosts
  memcached:
    container_name: memcached
    image: memcached
    volumes:
      - dummy_path:/app/www
The first thing I would try is updating your Dockerfile to ADD or COPY all your files into each image rather than mounting them as volumes. #fiber-optic mentioned this in the comments; the new Dockerfile for your PHP container would be something like this:
FROM droidhive/php-memcached
ADD dummy_path /app/www
ADD dummy_path/php/custom.ini /usr/local/etc/php/conf.d/custom.ini
ADD dummy_path/hosts /etc/hosts
Do this for at least the PHP container, but the MySQL container might also be an issue.
If that doesn't help or you can't get it to work, try adding :ro or :cached to each of your volumes.
:ro means "read-only", which allows your container to assume the volume won't change. Obviously this won't work if you need to do local dev with the code in a volume, but for some of your configuration files this will probably be fine.
:cached means that the host's files are authoritative, and the container won't constantly be checking for updates internally. This is usually ideal for code that you're editing on your host.
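The flags go at the end of each volume entry in docker-compose.yml, for example (paths taken from the file above):
volumes:
  - dummy_path:/app/www:cached
  - dummy_path/php/custom.ini:/usr/local/etc/php/conf.d/custom.ini:ro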

How to run linux daemon in container linked to another container with docker-compose?

I have the following docker-compose.yml file which runs nginx with PHP support:
version: '3'
services:
  nginx:
    container_name: my-app-nginx
    image: nginx:1.13.6
    ports:
      - 8080:80
    volumes:
      - ./nginx-default.conf:/etc/nginx/conf.d/default.conf
      - ./my-app:/var/www/my-app
    restart: always
    depends_on:
      - php
  php:
    container_name: my-app-php
    image: php:7.1-fpm
    volumes:
      - ./my-app:/var/www/my-app
    restart: always
The PHP application inside /var/www/my-app needs to communicate with a linux daemon (let's call it myappd).
The way I see it, I need to either:
Copy the myappd into the nginx container to /usr/local/bin, make it executable with chmod +x and run it in the background.
Create a different container, copy myappd to /usr/local/bin, make it executable with chmod +x and run it in the foreground.
Now, I'm new to Docker and still researching and learning about it, but my best guess, given that I'm using Docker Compose, is that option 2 is probably the recommended one. Given my limited knowledge, I'd guess that such a container would require some Linux-based image (like Ubuntu) to run the binary. So maybe option 1 is preferred? Or maybe option 2 is possible with a minimal Ubuntu image, or even without such an image?
Either way, I have no idea how I would implement that in the compose file. Especially for option 2: how would the PHP application communicate with the daemon in a different container? Would "sharing" a volume (where the binary is located) like I did for the nginx/php services suffice? Or is something else required?
The simple answer is adding a command entry to the php service in docker-compose.yml.
Given that myappd is at ./my-app/ on the host machine and at /var/www/my-app/ inside the container, the updated docker-compose.yml is something like the following.
version: '3'
services:
  nginx:
    container_name: my-app-nginx
    image: nginx:1.13.6
    ports:
      - 8080:80
    volumes:
      - ./nginx-default.conf:/etc/nginx/conf.d/default.conf
      - ./my-app:/var/www/my-app
    restart: always
    depends_on:
      - php
  php:
    container_name: my-app-php
    image: php:7.1-fpm
    volumes:
      - ./my-app:/var/www/my-app
    restart: always
    # start the daemon in the background, then hand the foreground over to php-fpm
    command: ["/bin/sh", "-c", "/var/www/my-app/myappd & exec php-fpm"]
A better answer is to create a third container which runs the linux daemon.
The new Dockerfile is something like the following.
FROM debian:jessie
COPY ./myappd /usr/src/app/
EXPOSE 44444
ENTRYPOINT ["/bin/sh"]
CMD ["/usr/src/app/myappd"]
Build the image and name it myapp/myappd.
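For example, from the directory containing the Dockerfile and the myappd binary:
docker build -t myapp/myappd .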
Updated docker-compose.yml is something like following.
version: '3'
services:
nginx:
container_name: my-app-nginx
image: nginx:1.13.6
ports:
- 8080:80
volumes:
- ./nginx-default.conf:/etc/nginx/conf.d/default.conf
- ./my-app:/var/www/my-app
restart: always
depends_on:
- php
php:
container_name: my-app-php
image: php:7.1-fpm
volumes:
- ./my-app:/var/www/my-app
restart: always
networks:
- network1
depends_on:
- daemon
daemon:
container_name: my-app-daemon
image: myapp/myappd
restart: always
networks:
- network1
networks:
network1:
You can send requests to the hostname daemon from inside php: a Docker container can resolve the hostname of another container on the same network.
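A hedged sketch of what that could look like from the php container; the port comes from the Dockerfile's EXPOSE 44444, while the protocol (a line-based exchange here) is purely an assumption:
<?php
// Connect to the daemon container by its compose service name.
// Docker's embedded DNS resolves "daemon" on network1.
$fp = fsockopen('daemon', 44444, $errno, $errstr, 5);
if ($fp === false) {
    die("Could not connect to myappd: $errstr ($errno)\n");
}
fwrite($fp, "PING\n");   // hypothetical command; depends on myappd's protocol
echo fgets($fp);         // read one line of the reply
fclose($fp);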

How to use PHP from another docker container

In my app I have separate docker containers for nginx, mysql, php and supervisor. But now I need to set up a supervisor program which runs a PHP script. Is it possible to call php from another container?
EDIT
Example:
When I run the supervisor program test, I see the error: INFO spawnerr: can't find command 'php'. I know that php is not in the supervisor container, but how do I call php from that container? And I need the same php as the application uses.
./app/test.php
<?php
echo "hello world";
docker-compose.yml
version: "2"
services:
nginx:
build: ./docker/nginx
ports:
- 8080:80
volumes:
- ./app:/var/www/html
links:
- php
- mysql
php:
build: ./docker/php
volumes:
- ./app:/var/www/html
ports:
- 9001:9001
mysql:
build: ./docker/mysql
ports:
- 3306:3306
volumes:
- ./data/mysql:/var/lib/mysql
supervisor:
build: ./docker/supervisor
volumes:
- ./app:/var/www/html
ports:
- 9000:9000
supervisor.conf
[program:test]
command = php /var/www/html/test.php
process_name = %(process_num)02d
numprocs = 1
autostart = false
autorestart = true
Please check this repo on GitHub.
I used Angular, Laravel and MongoDB,
with 3 containers: one for mongo, one for php-fpm, and one for nginx that proxies the API and serves Angular.
Angular does not use a Node.js container because I build it with ng build, which outputs the build into the angular-dist folder.
The angular-src folder contains the Angular source code.
Inside the laravel folder, run composer install; if you use Linux, run sudo chmod 777 -R laravel.
Then you can browse
the route http://localhost:8000/api/
and the route http://localhost:8000/api/v1.0
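The linked repo aside, the spawnerr above simply means the supervisor image contains no php binary. One way out is to base the supervisor image on the same PHP version as the app; a sketch, assuming PHP 7.1 and a Debian-based image (both assumptions):
FROM php:7.1-cli
RUN apt-get update \
 && apt-get install -y supervisor \
 && rm -rf /var/lib/apt/lists/*
COPY supervisor.conf /etc/supervisor/conf.d/app.conf
CMD ["supervisord", "-n", "-c", "/etc/supervisor/supervisord.conf"]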
