AWS Elastic Beanstalk with docker-compose and php-fpm: ignore the database service for a proper deploy?

I've been looking into this for at least a few hours and was unable to find a solution, so I would like to ask for your advice.
I have a docker-compose.yaml file:
version: "3.5"
services:
php:
build:
context: dev/php
- ./source:/application
nginx:
build:
context: dev/nginx
depends_on:
- php
volumes:
- ./source:/application
- ./dev/nginx/nginx.conf:/etc/nginx/conf.d/default.conf
ports:
- "80:80"
mysql:
image: mysql:8.0.22
environment:
- MYSQL_ROOT_PASSWORD=${RDS_PASSWORD}
- MYSQL_DATABASE=${RDS_DB_NAME}
- MYSQL_USER=npuser
- MYSQL_PASSWORD=${RDS_PASSWORD}
ports:
- "3306:3306"
and now I would like to deploy it the proper way, with CLI commands, so it can be done by CI/CD.
To do so, right now I'm doing the following:
1. cut the mysql (and other database) parts out of docker-compose.yaml, as I would like to use an RDS database
2. run eb init
3. run eb deploy --staged
What I don't like about this approach is point 1, where I need to modify the original docker-compose.yaml file, and point 3, where I need to add --staged because docker-compose.yaml changed in point 1.
Of course, I don't want to remove mysql from the docker-compose.yaml file entirely, as I would like it to remain easy to run in the local dev environment, but I see no option in eb to deploy only selected containers.
Also, I was wondering whether I should use Dockerrun.aws.json instead of docker-compose.yaml for eb deploy? I hope you can point me in the right direction, as I have no idea what a proper deploy should look like in this scenario.
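One pattern worth considering (a sketch, not Beanstalk-specific advice): keep the deployable services in docker-compose.yml and move local-only services such as mysql into docker-compose.override.yml, which docker-compose up merges automatically; if the override file is excluded from the deployment bundle (for example via .ebignore), eb deploy never sees it:

# docker-compose.override.yml -- local development only
version: "3.5"
services:
  mysql:
    image: mysql:8.0.22
    environment:
      - MYSQL_ROOT_PASSWORD=${RDS_PASSWORD}
      - MYSQL_DATABASE=${RDS_DB_NAME}
      - MYSQL_USER=npuser
      - MYSQL_PASSWORD=${RDS_PASSWORD}
    ports:
      - "3306:3306"

Locally, docker-compose up then starts php, nginx and mysql together, while the bundle that eb deploy ships contains only the RDS-backed services, so nothing has to be cut by hand and --staged is no longer needed.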

Related

Access a docker container like a URL

I have an application developed with PHP, Nginx and DynamoDB. I have created a simple docker-compose file to work locally.
version: '3.7'
services:
  nginx_broadway_demo:
    container_name: nginx_broadway_demo
    image: nginx:latest
    ports:
      - 8080:80
    volumes:
      - ./www:/var/www
      - ./docker/nginx/vhost.conf:/etc/nginx/conf.d/default.conf
      - ./docker/nginx/nginx.conf:/etc/nginx/nginx.conf
    links:
      - php_fpm_broadway_demo
  php_fpm_broadway_demo:
    container_name: php_fpm_broadway_demo
    build:
      context: ./docker/php
    ports:
      - 9000:9000
    volumes:
      - .:/var/www/web
  dynamodb:
    image: amazon/dynamodb-local
    ports:
      - 8000:8000
    expose:
      - 8000
Now I need to add the DynamoDB URL params to allow PHP to make queries to the database.
So, pinging the container by service name from inside the PHP container works fine:
ping dynamodb
This doesn't work (ping expects a hostname, not a URI):
ping http://dynamodb:8000
Yet I need to use http://dynamodb:8000, because the AWS SDK requires a full URI; I get this error if I pass only dynamodb:8000:
Endpoints must be full URIs and include a scheme and host
So: how can I reach a docker container like a URL?
I have tried with docker-compose parameters like depends_on, links and networks, without success.
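As an aside before the fix below: the endpoint is normally handed to the AWS SDK client rather than resolved with ping. A minimal sketch with the AWS SDK for PHP (region and credentials are placeholder assumptions for the local emulator):

<?php
require 'vendor/autoload.php';

use Aws\DynamoDb\DynamoDbClient;

// 'dynamodb' is the compose service name; Docker's embedded DNS resolves it.
// Region and credentials are placeholders -- DynamoDB Local accepts anything.
$client = new DynamoDbClient([
    'version'     => 'latest',
    'region'      => 'eu-west-1',
    'endpoint'    => 'http://dynamodb:8000',
    'credentials' => ['key' => 'local', 'secret' => 'local'],
]);

print_r($client->listTables()['TableNames']);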
As discussed in the chat, the actual error occurred because the dependencies were installed on the host but used inside the container; Composer resolves packages based on the underlying platform. Installing the dependencies inside the container fixed the issue:
docker exec -it php bash -c "cd web; composer install"

How to use Docker-compose and Connecting a PHP:Apache Container and MSSQL Container

I have just started to learn PHP and MSSQL and set up both environments in Docker using docker-compose. For the most part it's going great: I have pages created in PHP and displayed on the web, and I'm able to create a database in Azure Data Studio.
The issue is that I'm trying to link my php:apache container with my MSSQL container so I can display or manipulate the database on the web with PHP code. I have done lots of searching and can't figure out a way to do this. As a last-ditch effort I even tried to set up PHP, the PHP driver and Apache inside the MSSQL server's Ubuntu container, but I could only get them connected on the command line, not the web.
So what code do I need to write in PHP to connect the two, and what do I need to install to make the php:apache and MSSQL containers work together? Below is my docker-compose.yaml for reference. My file setup has docker-compose.yaml at the root, next to the build: directories where all the files go. Hopefully that's some helpful info, and sorry for being a beginner and not knowing exactly what info to provide. Thank you in advance to everyone who tries to help.
version: "3.7"
services:
homepage:
build: ./homepage
volumes:
- ./homepage/public-html:/usr/local/apache2/htdocs/
ports:
- 5001:80
php:
image: php:apache
volumes:
- ./php:/var/www/html
ports:
- 5000:80
db:
image: "mcr.microsoft.com/mssql/server:latest"
volumes:
- ./db:/Documents
environment:
SA_PASSWORD: "hidden password so my password is not leaked"
ACCEPT_EULA: "Y"
ports:
- 1433:1433
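For what it's worth, the stock php:apache image does not ship Microsoft's SQL Server drivers, so you would typically build a custom image that installs the sqlsrv/pdo_sqlsrv extensions (via PECL, on top of the msodbcsql driver) and then connect using the compose service name as the host. A minimal sketch; the database name, user and password are placeholders:

<?php
// Requires the pdo_sqlsrv extension (Microsoft Drivers for PHP for SQL Server).
// 'db' is the compose service name; TestDB and the password are placeholders.
try {
    $pdo = new PDO('sqlsrv:Server=db,1433;Database=TestDB', 'sa', 'your_SA_password');
    $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

    foreach ($pdo->query('SELECT name FROM sys.tables') as $row) {
        echo $row['name'], PHP_EOL;
    }
} catch (PDOException $e) {
    echo 'Connection failed: ', $e->getMessage();
}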

PhpStorm multi-docker hostname resolution

I have a fully set-up Docker environment furnished with Xdebug and properly configured in PhpStorm. My environment has multiple containers running for different functions. All appears to work great: CLI and web interaction both stop at breakpoints as they should, no problems. However ...
I have a code snippet as follows:
// test.php
$host = gethostbyname('db'); //'db' is the name of the other docker box, created with docker-compose
echo $host;
If I run this through bash in the 'web' docker instance:
php test.php
172.21.0.2
If I run it through the browser:
172.21.0.2
If I run it via the PhpStorm run/debug button (Shift+F9):
docker://docker_web:latest/php -dxdebug.remote_enable=1 -dxdebug.remote_mode=req -dxdebug.remote_port=9000 -dxdebug.remote_host=172.17.0.1 /opt/project/test.php
db
It doesn't resolve! Why would that be, and how can I fix it?
As it happens, my Docker environment is built with docker-compose; all the relevant containers are on the same network and have a proper depends_on hierarchy.
However, PhpStorm was actually set up to use plain Docker rather than docker-compose. It was connecting fine to the Docker daemon, but because the container wasn't being run compose-aware, it wasn't leveraging the network layout defined in my docker-compose.yml. Once I told PhpStorm to use docker-compose, it worked fine.
As an aside, I noticed that running an in-IDE debug session against the already-loaded container causes the container to exit when the script ends. To get around this, I had to create a mirror debug container for PhpStorm to use on demand. My config is as follows:
version: '3'
services:
  web: &web
    build: ./web
    container_name: dev_web
    ports:
      - "80:80"
    volumes:
      - ${PROJECTS_DIR}/project:/srv/project
      - ./web/logs/httpd:/var/log/httpd
    depends_on:
      - "db"
    networks:
      - backend
  web-debug:
    <<: *web
    container_name: dev_web_debug
    ports:
      - "8181:80"
    command: php -v
  db:
    image: mysql
    container_name: dev_db
    ports:
      - "3306:3306"
    volumes:
      - ./db/conf.d:/etc/mysql/conf.d
      - ./db/data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: root
    networks:
      - backend
networks:
  backend:
    driver: bridge
This allows me to do in-IDE spot debugging on the fly without killing my main web container.
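As a usage note (a sketch assuming the config above, with test.php living in the mounted project directory), the same mirror container can also be exercised from the command line without touching the main web container:

# one-off run in the debug twin; --rm removes the container afterwards
docker-compose run --rm web-debug php /srv/project/test.php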

How to improve load time on Docker with nginx and php7-fpm on a local machine

On my local machine, WordPress page load time is very slow on Docker with nginx and php7-fpm: the network panel shows 2-4 seconds to load the first document, but when I measure PHP execution time it comes out at 0.02-0.1 seconds. How can I optimize my Docker setup to speed up the local environment?
Below are some details. My local environment is set up on macOS Sierra, and I run Docker with
docker-compose up -d
and here is my docker-compose.yml file
version: '2'
services:
  mysql:
    container_name: db
    image: mysql
    environment:
      - MYSQL_ROOT_PASSWORD=dummy
      - MYSQL_DATABASE=dummy
      - MYSQL_USER=dummy
      - MYSQL_PASSWORD=dummy
    volumes:
      - dummy_path/dump.sql.gz:/docker-entrypoint-initdb.d/sql1.sql.gz
  nginx:
    container_name: nginx
    image: nginx:latest
    ports:
      - "80:80"
      - "443:443"
    links:
      - mysql:db
      - php
    volumes:
      - dummy_path:/app/www
      - dummy_path/nginx/conf.d/:/etc/nginx/conf.d/
      - dummy_path/nginx/ssl:/etc/ssl/
      - dummy_path/nginx/nginx.conf/:/etc/nginx/nginx.conf
      - dummy_path/hosts:/etc/hosts
  php:
    container_name: php
    image: droidhive/php-memcached
    links:
      - mysql:db
      - memcached
    volumes:
      - dummy_path:/app/www
      - dummy_path/php/custom.ini:/usr/local/etc/php/conf.d/custom.ini
      - dummy_path/hosts:/etc/hosts
  memcached:
    container_name: memcached
    image: memcached
    volumes:
      - dummy_path:/app/www
The first thing I would try is updating your Dockerfile to ADD or COPY all your files into each image rather than mounting them as volumes. #fiber-optic mentioned this in the comments; the new Dockerfile for your PHP container would be something like this (note that COPY takes a space-separated source and destination, not the colon syntax used for volumes):
FROM droidhive/php-memcached
COPY dummy_path /app/www
COPY dummy_path/php/custom.ini /usr/local/etc/php/conf.d/custom.ini
COPY dummy_path/hosts /etc/hosts
Do this for at least the PHP container, but the MySQL container might also be an issue.
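If you go this route, the compose service would switch from mounting volumes to building the image; a sketch, assuming the Dockerfile above sits in ./php:

php:
  container_name: php
  build: ./php   # directory containing the Dockerfile above
  links:
    - mysql:db
    - memcached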
If that doesn't help or you can't get it to work, try adding :ro or :cached to each of your volumes.
:ro means "read-only", which allows your container to assume the volume won't change. Obviously this won't work if you need to do local dev with the code in a volume, but for some of your configuration files this will probably be fine.
:cached means that the host's files are authoritative, and the container won't constantly be checking for updates internally. This is usually ideal for code that you're editing on your host.
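For illustration, the flags go at the end of each volume mapping; a sketch against the compose file above:

volumes:
  # code you edit on the host: let the host stay authoritative
  - dummy_path:/app/www:cached
  # static config the container only reads
  - dummy_path/php/custom.ini:/usr/local/etc/php/conf.d/custom.ini:ro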

Docker - deliver the code to nginx and php-fpm

How do I deliver the code of a containerized PHP application, whose image is based on busybox and contains only the code, between separate NGINX and PHP-FPM containers? I use version 3 of the Compose file format.
The Dockerfile of the image containing the code would be:
FROM busybox
#the app's code
RUN mkdir /app
VOLUME /app
#copy the app's code from the context into the image
COPY code /app
The docker-compose.yml file would be:
version: "3"
services:
#the application's code
#the volume is currently mounted from the host machine, but the code will be copied over into the image statically for production
app:
image: app
volumes:
- ../../code/cms/storage:/storage
networks:
- backend
#webserver
web:
image: web
depends_on:
- app
- php
networks:
- frontend
- backend
ports:
- '8080:80'
- '8081:443'
#php
php:
image: php:7-fpm
depends_on:
- app
networks:
- backend
networks:
cms-frontend:
driver: "bridge"
cms-backend:
driver: "bridge"
The solutions I thought of, neither of which seems appropriate:
1) Use the volume from the app's container in the PHP and NGINX containers. Compose v3 doesn't allow this (the volumes_from directive is gone), so I can't use it.
2) Place the code in a named volume and connect it to the containers. Going this way I can't containerize the code, so I can't use it either. (I'd also have to manually create this volume on every node in a swarm?)
3) Copy the code twice, directly into images based on NGINX and PHP-FPM. Bad idea: I'd have to keep them in sync.
Got stuck with this. Any other options? I might have misunderstood something; I'm only beginning with Docker.
I too have been looking around to solve a similar issue, and it seems Nginx + PHP-FPM is one of those exceptions where it is better to have both services running in one container for production. In development you can bind-mount the project folder into both the nginx and php containers. As per Bret Fisher's guide to good defaults for PHP (php-docker-good-defaults):
So far, the Nginx + PHP-FPM combo is the only scenario that I recommend using multi-service containers for. It's a rather unique problem that doesn't always fit well in the model of "one container, one service". You could use two separate containers, one with nginx and one with php:fpm but I've tried that in production, and there are lots of downsides. A copy of the PHP code has to be in each container, they have to communicate over TCP which is much slower than Linux sockets used in a single container, and since you usually have a 1-to-1 relationship between them, the argument of individual service control is rather moot.
You can read more about setting up multiple service containers on the docker page here (it's also listed in the link above): Docker Running Multiple Services in a Container
The way I see it, you have two options:
(1) Using docker-compose (this is for a very simplistic development env):
You will have to build two separate containers from the nginx and php-fpm images, and then simply serve the app folder from php-fpm into a web folder on nginx.
# The Application
app:
  build:
    context: ./
    dockerfile: app.dev.dockerfile
  working_dir: /var/www
  volumes:
    - ./:/var/www
  expose:
    - 9000

# The Web Server
web:
  build:
    context: ./
    dockerfile: web.dev.dockerfile
  working_dir: /var/www
  volumes_from:
    - app
  links:
    - app:app
  ports:
    - 80:80
    - 443:443
(2) Use a single Dockerfile to build everything in it:
1. start with some flavor of Linux or a PHP image
2. install nginx
3. build your custom image
4. serve the multi-service container using supervisord (a sketch follows below)
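As a sketch of that last step (paths and program names are illustrative, not taken from the question), a minimal supervisord.conf keeps both daemons in the foreground so the container stays alive:

; illustrative /etc/supervisor/conf.d/supervisord.conf
[supervisord]
nodaemon=true

[program:php-fpm]
command=php-fpm -F
autorestart=true

[program:nginx]
command=nginx -g 'daemon off;'
autorestart=true

The Dockerfile would then end with something like CMD ["supervisord", "-n"] so supervisord becomes the container's main process.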
