How to link 2 containers properly? - php

This is kind of a newbie question, since I'm still trying to understand how containers "communicate" with each other.
This is roughly what my docker-compose.yml looks like:
...
api:
  build: ./api
  container_name: api
  volumes:
    - $HOME/devs/apps/api:/var/www/api
laravel:
  build: ./laravel
  container_name: laravel
  volumes:
    - $HOME/devs/apps/laravel:/var/www/laravel
  depends_on:
    - api
  links:
    - api
...
nginx-proxy:
  build: ./nginx-proxy
  container_name: nginx-proxy
  ports:
    - "80:80"
  links:
    - api
    - laravel
    - mysql-api
The nginx configs have blocks referring to the upstreams exposed by those 2 php-fpm containers, like this:
location ~* \.php$ {
    fastcgi_pass laravel:9000;
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_index index.php;
    include fastcgi_params;
}
There is a similar block for the api upstream.
I can hit each container individually from the web browser/Postman (from the host).
Inside the laravel app there is some php_curl code that calls a REST service exposed by the api service. I get a 500, with this error (from the nginx container):
PHP message: PHP Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 32768 bytes) in /var/www/laravel/vendor/symfony/debug/Exception/FatalErrorException.php on line 1" while reading response header from upstream, client: 172.22.0.1, server: laravel.lo, request: "POST {route_name} HTTP/1.1", upstream: "fastcgi://172.22.0.5:9000", host: "laravel.lo"
I tried hitting the api from the laravel container using wget:
root@a34903360679:/app# wget api.lo
--2018-08-01 09:57:51-- http://api.lo/
Resolving api.lo (api.lo)... 127.0.0.1
Connecting to api.lo (api.lo)|127.0.0.1|:80... failed: Connection refused.
It resolves to localhost, but 127.0.0.1 in this context is the laravel container itself, not the host or the nginx service. I used to have all the services in a single CentOS VM for development, which didn't have this problem.
Can anyone give some advice on how I could achieve this environment?
EDIT: I found the answer not long after posting this question; see my self-answer below.

Newer versions of Docker Compose do all of the networking setup for you: they create a Docker-internal network and register each container under its block name as a DNS name. You don't need (and shouldn't use) links:. You only need depends_on: if you want to be able to bring up just part of your stack from the command line.
When setting up inter-container connections, always use the other container's name from the Compose YAML file as the DNS name (without Compose, that container's --name or an alias you explicitly declared at docker run time). Configuring these as environment variables is better still, particularly if you'll run the same code outside of Docker with different settings. Never look up a container's IP address directly, and never use localhost or 127.0.0.1 in this context: it won't work.
I'd write your docker-compose.yml file something like:
version: '3'
services:
  api:
    build: ./api
  laravel:
    build: ./laravel
    environment:
      API_BASEURL: 'http://api/rest_endpoint'
  nginx-proxy:
    build: ./nginx-proxy
    environment:
      LARAVEL_FCGI: 'laravel:9000'
    ports:
      - "80:80"
You will probably need to write a custom entrypoint script for your nginx-proxy image that fills in the config file from environment variables. If you're using a container based on a full Linux distribution, envsubst is an easy tool for this.
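As an illustration only, here is a minimal sketch of such an entrypoint, assuming the nginx config ships as a template at /etc/nginx/conf.d/default.conf.template and that envsubst is installed (the file name and the variable whitelist are assumptions based on the LARAVEL_FCGI example above):
#!/bin/sh
# Hypothetical docker-entrypoint.sh for the nginx-proxy image.
# Render the real config from a template, substituting only the variables
# we expect so nginx's own $variables are left untouched, then run nginx.
set -e

envsubst '$LARAVEL_FCGI' \
  < /etc/nginx/conf.d/default.conf.template \
  > /etc/nginx/conf.d/default.conf

exec nginx -g 'daemon off;'
The template would then contain fastcgi_pass ${LARAVEL_FCGI}; where the static config had fastcgi_pass laravel:9000;.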

I found the answer (not long after posting this question). Refer to here: https://medium.com/@yani/two-way-link-with-docker-compose-8e774887be41
To let the laravel container reach back through nginx (so nginx can route the api request to the api container), use an internal network. So, something like:
networks:
  internal-api:
With the networks config in place, all of the links config can be taken out. Then give the laravel and nginx-proxy containers aliases, like so:
laravel:
  ...
  networks:
    internal-api:
      aliases:
        - laravel
...
nginx-proxy:
  ...
  networks:
    internal-api:
      aliases:
        - api
networks:
  internal-api:
Then laravel can hit the api URL like this:
.env:
API_BASEURL=http://api/{rest_endpoint}
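For illustration, the call from the Laravel side could then look roughly like this plain php_curl sketch; reading the value via getenv() is an assumption (inside Laravel you would more likely go through env()/config()), and {rest_endpoint} stays a placeholder:
<?php
// Base URL injected through .env / the container environment.
$base = getenv('API_BASEURL') ?: 'http://api';

// $base already includes the (placeholder) REST endpoint path.
$ch = curl_init($base);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);
curl_close($ch);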

Related

Using Laravel Websockets with Docker / docker-compose.yml is not working

I am developing a Laravel application. I am trying to use Laravel Websockets in my application, https://docs.beyondco.de/laravel-websockets. I am using Docker / docker-compose.yml. Since Laravel Websockets runs locally on port 6001, I am having problems integrating it with docker-compose. Searching for a solution I found this link, https://github.com/laradock/laradock/issues/2002. I tried it, but it is not working. Here is what I did.
I created a folder called workspace under the project root directory. Inside that folder, I created a file called Dockerfile.
This is the content of the Dockerfile:
EXPOSE 6001
In the docker-compose.yml file, I added this content:
workspace:
  port:
    - 6001:6001
My docker-compose.yml file looks something like this:
version: "3"
services:
workspace:
port:
- 6001:6001
apache:
container_name: web_one_apache
image: webdevops/apache:ubuntu-16.04
environment:
WEB_DOCUMENT_ROOT: /var/www/public
WEB_ALIAS_DOMAIN: web-one.localhost
WEB_PHP_SOCKET: php-fpm:9000
volumes: # Only shared dirs to apache (to be served)
- ./public:/var/www/public:cached
- ./storage:/var/www/storage:cached
networks:
- web-one-network
ports:
- "80:80"
- "443:443"
php-fpm:
container_name: web-one-php
image: php-fpm-laravel:7.2-minimal
volumes:
- ./:/var/www/
- ./ci:/var/www/ci:cached
- ./vendor:/var/www/vendor:delegated
- ./storage:/var/www/storage:delegated
- ./node_modules:/var/www/node_modules:cached
- ~/.ssh:/root/.ssh:cached
- ~/.composer/cache:/root/.composer/cache:delegated
networks:
- web-one-network
When I run "docker-compose up --build -d", it gives me the following error:
ERROR: The Compose file './docker-compose.yml' is invalid because:
Unsupported config option for services.workspace: 'port' (did you mean 'ports'?)
What is wrong and how can I fix it? How can I use Laravel Websockets with docker-compose?
I tried changing from 'port' to 'ports', then I got the following error message instead.
ERROR: The Compose file is invalid because:
Service workspace has neither an image nor a build context specified. At least one must be provided.
Your Dockerfile is wrong. A Dockerfile must start with a FROM <image> directive as explained in the documentation. In your case it might be sufficient to run an existing php:<version>-cli image though, avoiding the whole Dockerfile stuff:
workspace:
  image: php:7.3-cli
  command: ["php", "artisan", "websockets:serve"]
Of course you will also need to add a volume with the application code and a suitable network. If you add a reverse proxy like Nginx in front of your services, you don't need to publish the ports on your workspace either. Services can reach other services as long as they are on the same network.
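A slightly fuller sketch of that workspace service with the volume and network filled in; the /var/www paths and the web-one-network name are assumptions borrowed from the php-fpm service in the question:
workspace:
  image: php:7.3-cli
  command: ["php", "artisan", "websockets:serve"]
  working_dir: /var/www
  volumes:
    # mount the application code so artisan is available in the container
    - ./:/var/www
  networks:
    - web-one-network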

Docker-Symfony-Mysql: SQLSTATE[HY000] [2002] Connection refused

So I am new to Docker and this is the first project I have tried to run with it. I am running into an issue where my php-apache container is not able to connect to my mysql container, but only in the browser. I can do:
docker exec -it <php container id> bash
and then run
composer install
and all of the
php bin/console doctrine:<>
commands just fine, with no database errors or connection problems. When I try to actually load up the site in my browser I am met with the "SQLSTATE[HY000] [2002] Connection refused" error. I am not really sure why this is the case, and none of the results I have found on the web seem to help. Here are my files:
docker-compose.yml
version: '3'
services:
  db:
    restart: always
    container_name: hcp-db
    build:
      context: ../
      dockerfile: docker/db/Dockerfile
    environment:
      - MYSQL_ALLOW_EMPTY_PASSWORD=yes
      - MYSQL_DATABASE=symfony
      - MYSQL_USER=root
      - MYSQL_PASSWORD=password
    volumes:
      - db:/var/lib/mysql
    expose:
      - 3306
    ports:
      - "4306:3306"
  application:
    container_name: hcp-symfony
    build:
      context: ../
      dockerfile: docker/Dockerfile
    working_dir: /app
    expose:
      - 80
    ports:
      - 8000:80
    depends_on:
      - db
    volumes:
      - ../:/app
    environment:
      - SYMFONY_ENV=dev
      - SYMFONY_DEBUG=1
volumes:
  db:
parameters.yml
parameters:
  database_driver: pdo_mysql
  database_host: db
  database_port: 3306
  database_name: symfony
  database_user: root
  database_password: null
I am also able to connect to the database just fine through Sequel Pro while my container is running using the same credentials I provide in the parameters, but with the host as "127.0.0.1", which I find interesting.
Any help with what could be wrong would be much appreciated.
Sup, I can't comment yet to ask for a bit more detail since I'm kind of new, so I will take a wild guess and assume there are no issues in your app.
This kind of issue is usually related to firewall stuff. Docker does a lot of firewall configuration for us, so we don't have to deal with ufw or iptables ourselves, but that also means we need to specify ourselves, within our docker-compose file, how each container will interact with the others.
You have to specify that your app container links to the database container in order to allow access by hostname, since Docker modifies a container's /etc/hosts file every time a container starts so that it maps each container's current IP, like this:
services:
  ...
  application:
    ...
    depends_on:
      - db
    links:
      - db
With that done, you just need to make sure your parameters file is configured correctly: as @dbrumann pointed out, database_password should contain your password, not a null value.
It is also good practice to create a custom network for your Docker project, so you have a private LAN between your containers without being open to any other container using Docker's default network. A custom network with the defaults does nicely, like this:
services:
  db:
    ...
    networks:
      - my_project_network
  application:
    ...
    depends_on:
      - db
    links:
      - db
    networks:
      - my_project_network
networks:
  my_project_network:
You don't really need to expose your database port to the host for this to work, but having it exposed makes development much easier when checking the database. Just remember to remove the database container's ports statement when going to a production environment, to avoid security issues.
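One way to get that split, sketched here only as a suggestion (the file layout is an assumption, not something from the question), is to keep the ports mapping out of the base docker-compose.yml and put it in a docker-compose.override.yml, which docker-compose up picks up automatically in development but which you simply don't deploy:
# docker-compose.override.yml (development only, kept out of production)
version: '3'
services:
  db:
    ports:
      # host port 4306 -> container port 3306, as in the question
      - "4306:3306"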
Hope this helps. Here is a link to the Docker network documentation in case you want to learn more: https://docs.docker.com/compose/compose-file/#network-configuration-reference

Docker - deliver the code to nginx and php-fpm

How do I deliver the code of a containerized PHP application, whose image is based on busybox and contains only the code, between separate NGINX and PHP-FPM containers? I am using version 3 of Docker Compose.
The Dockerfile of the image containing the code would be:
FROM busybox
#the app's code
RUN mkdir /app
VOLUME /app
#copy the app's code from the context into the image
COPY code /app
The docker-compose.yml file would be:
version: "3"
services:
#the application's code
#the volume is currently mounted from the host machine, but the code will be copied over into the image statically for production
app:
image: app
volumes:
- ../../code/cms/storage:/storage
networks:
- backend
#webserver
web:
image: web
depends_on:
- app
- php
networks:
- frontend
- backend
ports:
- '8080:80'
- '8081:443'
#php
php:
image: php:7-fpm
depends_on:
- app
networks:
- backend
networks:
cms-frontend:
driver: "bridge"
cms-backend:
driver: "bridge"
The solutions I have thought of, none of which seems appropriate:
1) Use the volume from the app's container in the PHP and NGINX containers, but Compose v3 doesn't allow it (no volumes_from directive). Can't use it.
2) Place the code in a named volume and connect it to the containers. Going this way I can't containerize the code. Can't use it. (I'd also have to manually create this volume on every node in a swarm?)
3) Copy the code twice, directly into images based on NGINX and PHP-FPM. Bad idea, I'd have to maintain them to stay in sync.
I'm stuck on this. Are there any other options? I might have misunderstood something, as I'm only beginning with Docker.
I too have been looking around to solve a similar issue, and it seems Nginx + PHP-FPM is one of those exceptions where it is better to have both services running in one container for production. In development you can bind-mount the project folder into both the nginx and php containers. As per Bret Fisher's guide of good defaults for PHP (php-docker-good-defaults):
So far, the Nginx + PHP-FPM combo is the only scenario that I recommend using multi-service containers for. It's a rather unique problem that doesn't always fit well in the model of "one container, one service". You could use two separate containers, one with nginx and one with php:fpm but I've tried that in production, and there are lots of downsides. A copy of the PHP code has to be in each container, they have to communicate over TCP which is much slower than Linux sockets used in a single container, and since you usually have a 1-to-1 relationship between them, the argument of individual service control is rather moot.
You can read more about setting up multiple service containers on the docker page here (it's also listed in the link above): Docker Running Multiple Services in a Container
The way I see it, you have two options:
(1) Using Docker Compose (this is for a very simplistic development environment):
You will have to build two separate containers from the nginx and php-fpm images, and then simply serve the app folder from php-fpm into a web folder on nginx; an nginx sketch follows the compose snippet below.
# The Application
app:
  build:
    context: ./
    dockerfile: app.dev.dockerfile
  working_dir: /var/www
  volumes:
    - ./:/var/www
  expose:
    - 9000

# The Web Server
web:
  build:
    context: ./
    dockerfile: web.dev.dockerfile
  working_dir: /var/www
  volumes_from:
    - app
  links:
    - app:app
  ports:
    - 80:80
    - 443:443
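For completeness, the nginx side of option (1) would then point its PHP handler at the app service by name. A rough sketch only, where the document root and index file are assumptions based on the working_dir above:
server {
    listen 80;
    root /var/www;
    index index.php;

    location ~ \.php$ {
        # "app" resolves to the php-fpm container via the Compose link/network
        fastcgi_pass app:9000;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}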
(2) Use a single Dockerfile to build everything in one image (a rough sketch follows):
- Start with some flavor of Linux or a PHP image
- Install nginx
- Build your custom image
- Serve the multi-service container using supervisord
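A rough sketch of that single-image approach, assuming a Debian-based php-fpm image; the file names (code/, nginx.conf, supervisord.conf) are placeholders, not part of the original question:
# Dockerfile: php-fpm + nginx in one image, managed by supervisord
FROM php:7.2-fpm
RUN apt-get update \
 && apt-get install -y nginx supervisor \
 && rm -rf /var/lib/apt/lists/*
COPY code/ /var/www/html/
COPY nginx.conf /etc/nginx/nginx.conf
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
EXPOSE 80
CMD ["/usr/bin/supervisord", "-n"]
The supervisord.conf would then keep both processes in the foreground:
[program:php-fpm]
command=php-fpm

[program:nginx]
command=nginx -g "daemon off;"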

Is there any way to access a docker (nginx) container via a local url like http://mydomain.dev?

I'm basically using the setup from: https://github.com/wouterds/docker (but not using mysql)
The following is my docker-compose.yml file:
nginx:
  image: nginx:1.10.2
  ports:
    - 80:80
  restart: always
  volumes:
    - ./nginx/conf:/etc/nginx/conf.d
    - ~/server/firebase-test/code:/code
  links:
    - php
  depends_on:
    - php
php:
  build: php
  expose:
    - 9000
  restart: always
  volumes:
    - ./php/conf/php.ini:/usr/local/etc/php/conf.d/custom.ini
    - ~/server/firebase-test/code:/code
It works correctly if I go to http://localhost
But I'm using this container setup to develop a bunch of different sites. Sometimes I need them running at the same time, so they both try to use the localhost URL, which isn't possible. Is there a way for me to manually have them referenced by something like http://websitename.dev or http://[container-name].dev locally? Does docker-compose auto-generate some kind of network mapping that I can use to access the container instead of http://localhost?
I'm pretty new to Docker, so I'm a little lost, and googling for the last hour didn't turn up much other than a tool called "docker-hostmanager", which doesn't work with the v2 syntax.
I believe you should be able to configure nginx to do this using the server_name directive:
server {
    listen 80;
    server_name example.org www.example.org;
    ...
}
http://nginx.org/en/docs/http/server_names.html
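As a sketch of how that could look with two sites sharing the one nginx container (the site names, document roots, and the php:9000 upstream are assumptions based on the compose file above), each site gets its own server block, and the host machine's /etc/hosts points the names at 127.0.0.1:
# /etc/hosts on the host machine
127.0.0.1 siteone.dev sitetwo.dev

# nginx conf.d: one server block per site, both listening on port 80
server {
    listen 80;
    server_name siteone.dev;
    root /code/siteone/public;
    location ~ \.php$ {
        fastcgi_pass php:9000;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}

server {
    listen 80;
    server_name sitetwo.dev;
    root /code/sitetwo/public;
    # ... same PHP handling as above
}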

Localhost connection refused in Docker development environment

I use Docker for my PHP development environment, and I set up my images with Docker Compose this way:
myapp:
  build: myapp/
  volumes:
    - ./myapp:/var/www/myapp
php:
  build: php-fpm/
  expose:
    - 9000:9000
  links:
    - elasticsearch
  volumes_from:
    - myapp
  extra_hosts:
    # Maybe the problem is related to this line
    - "myapp.localhost.com:127.0.0.1"
nginx:
  build: nginx/
  ports:
    - 80:80
  links:
    - php
  volumes_from:
    - myapp
elasticsearch:
  image: elasticsearch:1.7
  ports:
    - 9200:9200
Nginx is configured (in its Dockerfile) with a virtual host named myapp.localhost.com (the server_name parameter) that points to the /var/www/myapp folder.
All this works fine.
But here is my problem: my web app is calling itself via the myapp.localhost.com URL with cURL (in the PHP code), which can be more easily reproduced by running this command:
docker-compose run php curl http://myapp.localhost.com
The cURL response is the following:
cURL error 7: Failed to connect to myapp.localhost.com port 80: Connection refused
Do you have any idea on how I can call the app URL? Is there something I missed in my docker-compose.yml file?
Months later, I come back to post the (quite straightforward) answer to my question:
- Remove the server_name entry in the Nginx host configuration
- Remove the extra_hosts entry in the docker-compose.yml file (not strictly necessary, but it's useless anyway)
- Simply call the server with the Nginx container name as the host (nginx here):
docker-compose exec php curl http://nginx
