Getting Docker linked containers localhost pointing to host localhost - php

On my host environment I have a Tomcat service running on port 10000, through which I can access internal office services.
I have a Windows hosts entry:
127.0.0.1 developer.mycompany.com
So I can hit the endpoint developer.mycompany.com:10000/some/url, which returns a JSON response on success.
I also have a Docker Compose file that spins up nginx and php-fpm containers and links them to provide a Linux-based PHP development environment.
What I am trying to achieve is to make the Docker container(s) aware of the developer.mycompany.com host entry, so that when the PHP code in my linked containers sends a POST request to http://developer.mycompany.com:10000/some/url it knows about the host entry and can reach that endpoint.
I have tried the config net=host but that doesn't work with linked containers.
PHP app error message:
{"result":{"success":false,"message":"Error creating resource: [message] fopen(): php_network_getaddresses: getaddrinfo failed: Name or service not known\n[file] /srv/http/vendor/guzzlehttp/guzzle/src/Handler/StreamHandler.php\n[line] 282\n[message] fopen(http://developer.mycompany.com:10000/app/register): failed to open stream: php_network_getaddresses: getaddrinfo failed: Name or service not known\n[file] /srv/http/vendor/guzzlehttp/guzzle/src/Handler/StreamHandler.php\n[line] 282"}}
How do I enable my PHP app on my linked containers to talk to the host developer.mycompany.com (localhost) entry?
Here is my docker-compose:
app:
  image: linxlad/docker-php-fpm
  tty: true
  ports:
    - "9000:9000"
  volumes:
    - ./logs/php-fpm:/var/log/php-fpm
    - ~/Development/Apps/php-hello-world:/srv/http
web:
  image: linxlad/docker-nginx
  ports:
    - "8080:80"
  volumes:
    - ./conf/nginx:/etc/nginx/conf.d
    - ./logs/nginx:/var/log/nginx
    - ~/Development/Apps/php-hello-world:/srv/http
  links:
    - app
Edit:
Here is my ifconfig output from the docker machine.
Thanks

I found the IP of my docker machine and used it in my host's nginx proxy config, as shown below, which then forwards the request to the nginx location config in my Docker container.
Host nginx:
location /apps/myapp/appversion {
    proxy_pass http://192.168.99.100:8080;
}
Docker Nginx container config:
location /apps/myapp/appversion {
    try_files $uri $uri/ /index.php?args;
    if (!-e $request_filename) {
        rewrite ^/(.*)$ /index.php last;
    }
}
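An alternative sketch (not part of the answer above, and hedged): the hosts entry can also be pushed into the container itself with extra_hosts in docker-compose. The IP below is an assumption, not given in the question; it must be an address on which the Tomcat service is reachable from inside the docker machine (often the host side of the docker-machine/VirtualBox host-only network).

app:
  image: linxlad/docker-php-fpm
  # Sketch only: 192.168.99.1 is an assumed host IP, not taken from the question.
  extra_hosts:
    - "developer.mycompany.com:192.168.99.1"

With such an entry, getaddrinfo inside the container resolves developer.mycompany.com without relying on the Windows hosts file.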

Related

How to connect php and nginx containers together

I'm trying to create a simple Docker project and connect the PHP and Nginx containers for a test, but I get this error:
Building php
Sending build context to Docker daemon 2.048kB
Step 1/1 : FROM php:latest
---> 52cdb5f30a05
Successfully built 52cdb5f30a05
Successfully tagged test_php:latest
WARNING: Image for service php was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
Building nginx
Sending build context to Docker daemon 3.072kB
Step 1/2 : FROM nginx:latest
---> 55f4b40fe486
Step 2/2 : ADD default.conf /etc/nginx/conf.d/default.conf
---> 20190910ffec
Successfully built 20190910ffec
Successfully tagged test_nginx:latest
WARNING: Image for service nginx was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
Creating php ... done
Creating nginx ... done
Attaching to php, nginx
php | Interactive shell
php |
nginx | /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
nginx | /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
php | php > nginx | /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
nginx | 10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
nginx | 10-listen-on-ipv6-by-default.sh: info: /etc/nginx/conf.d/default.conf differs from the packaged version
nginx | /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
php exited with code 0
nginx | /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
nginx | /docker-entrypoint.sh: Configuration complete; ready for start up
nginx | 2022/07/10 05:34:07 [emerg] 1#1: host not found in upstream "php" in /etc/nginx/conf.d/default.conf:14
nginx | nginx: [emerg] host not found in upstream "php" in /etc/nginx/conf.d/default.conf:14
nginx exited with code 1
Here is the full directory structure of the project:
docker/
    nginx/
        default.conf
        Dockerfile
    php/
        Dockerfile
src/
    index.php
docker-compose.yml
And these are all the files and their contents that I use:
# docker/nginx/default.conf
server {
    listen 80;
    index index.php index.htm index.html;
    root /var/www/html;
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass php:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
}
# docker/nginx/Dockerfile
FROM nginx:latest
ADD default.conf /etc/nginx/conf.d/default.conf
# docker/php/Dockerfile
FROM php:latest
# src/index.php
<?php
echo phpinfo();
# docker-compose.yml
version: "3.8"
services:
nginx:
container_name: nginx
build: ./docker/nginx
command: nginx -g "daemon off;"
links:
- php
ports:
- "80:80"
volumes:
- ./src:/var/www/html
php:
container_name: php
build: ./docker/php
ports:
- "9000:9000"
volumes:
- ./src:/var/www/html
working_dir: /var/www/html
The problem only occurs when I connect the PHP container to the project; without PHP, Nginx works correctly.
You can try adding depends_on: php to your nginx service to make sure the nginx service doesn't start until the php service is running. Probably the dependency is starting after the main service that requires it; this is a race condition problem, I think. See the sketch below.
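A minimal sketch of that suggestion, applied to the compose file above (only the changed part of the nginx service is shown):

services:
  nginx:
    build: ./docker/nginx
    # Sketch only: depends_on makes Compose start php before nginx;
    # it orders startup but does not wait for php-fpm to be ready.
    depends_on:
      - php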
I had 3 nodes, where nginx and php containers lived on different nodes.
After trying various methods, such as:
defining a dedicated network for the services inside docker-compose
using an upstream definition in the nginx config instead of the service name directly
explicitly adding Docker's 127.0.0.11 resolver to nginx
none of them worked...
The actual reason was closed ports: https://docs.docker.com/engine/swarm/networking/#firewall-considerations
Docker daemons participating in a swarm need the ability to communicate with each other over the following ports:
Port 7946 TCP/UDP for container network discovery.
Port 4789 UDP for the container overlay network.
After I reverted all the changes I had made (network, resolver, upstream definition) back to the original simple setup and opened the ports for inter-node communication, service discovery began to work as expected.
Docker 20.10
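For illustration only, and assuming firewalld (adapt to whatever firewall your nodes use), opening those ports on every swarm node looks roughly like this:

# Sketch only, assuming firewalld on each swarm node.
# 7946 tcp/udp: container network discovery; 4789 udp: overlay (VXLAN) traffic.
firewall-cmd --permanent --add-port=7946/tcp --add-port=7946/udp
firewall-cmd --permanent --add-port=4789/udp
firewall-cmd --reload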
Several issues:
It seems you have containers that can't see each other.
It seems a container exits/fails, and that is certainly not because of the first issue; nginx would still work if the php-fpm socket is unavailable. It might complain, but it should handle such unavailability well.
Make sure PHP-FPM is really configured (in its pool config) to listen on port 9000.
Your index.php script is not closed with "?>" [but that does not matter here].
To summarize, you were advised:
to consider Docker swarm networking configuration [but it seems you are not using Docker swarm]
to use depends_on, which helps Docker decide what to start first; but that should not be an issue in your case, since nginx only uses the socket when a web request comes in.
So it seems internal Docker name resolution is your issue, and defining the network manually appears to be best practice. In my case I wandered too long before simply giving the docker-compose file a specific network name and attaching the containers to that network; a sketch follows below.
If containers are in the same docker-compose file they should be in the same yourserver_default network that is autogenerated for your composed services.
Have a look at https://blog.devsense.com/2019/php-nginx-docker, they actually define that network manually.
And if you haven't solved this yet, perhaps redo everything from scratch. Otherwise, all the best to you!
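A hedged sketch of that manually defined network, applied to the compose file from the question (the network name backend is illustrative, not from the question):

version: "3.8"
services:
  nginx:
    build: ./docker/nginx
    ports:
      - "80:80"
    # Sketch only: attach both services to the same named network
    # so nginx can resolve the hostname "php".
    networks:
      - backend
  php:
    build: ./docker/php
    networks:
      - backend
networks:
  backend: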

Database connection from php container with host name "localhost" instead of container name

Our website frameworks are designed to work in both XAMPP and Docker environments. We distinguish our database hosts by host name/IP address (dev, test, staging, live environments). People working with XAMPP use https://localhost, so they get the environment variable called Development. People working with Docker use https://docker as their host and get the env variable called Development/Docker. We need this differentiation because inside the PHP applications our XAMPP users connect to their MySQL service with the host localhost, while Docker users have to connect via the host mysql (the container name of the mysql service).
Because of some recent problems (not relevant here), we want a single solution for both user groups for the database connection: Docker users should also be able to connect to their MySQL service with the host localhost.
docker-compose.yaml (shortened for better overview):
version: '2'
services:
  #######################################
  # PHP application Docker container
  #######################################
  app:
    build:
      context: .
      dockerfile: Dockerfile.development
    links:
      - mail
      - mysql
      - redis
    ports:
      - "80:80"
      - "443:443"
      - "10022:22"
      - "3307:3306"
    volumes:
      - ./app/:/app/
      - ./:/docker/
    cap_add:
      - SYS_PTRACE
    env_file:
      - etc/environment.yml
      - etc/environment.development.yml
    environment:
      - POSTFIX_RELAYHOST=[mail]:1025
  #######################################
  # MySQL server
  #######################################
  mysql:
    build:
      context: docker/mysql/
      dockerfile: MariaDB-10.Dockerfile
    ports:
      - "3306"
    volumes:
      - mysql:/var/lib/mysql
    env_file:
      - etc/environment.yml
      - etc/environment.development.yml
  #######################################
  # phpMyAdmin
  #######################################
  # /// #
  #######################################
  # Mail
  #######################################
  # /// #
  #######################################
  # Redis
  #######################################
  # /// #

# Volumes
volumes:
  mysql:
  phpmyadmin:
  redis:
I tried a lot and played with docker-compose for weeks but didn't find a solution; I tried links, networks and so on. I think my Docker skills are exhausted by now...
I also added to the mysql.conf:
bind-address = 0.0.0.0
Any ideas?
It is because of docker's networking structure.
Docker creates three default networks: bridge (the docker0 interface), host and none.
Each container is attached to the bridge (docker0) network by default and gets a virtual network interface for communicating with the other containers; that is why you can reach your database at the mysql address.
If you want to connect to the database with the address localhost, you should configure Docker to use host network mode (you can do it by adding one line to the definition of your app service in the docker-compose file; see the sketch below). The container will then be able to reach every service running on your host. Note, however, that you lose name-based communication with the other containers, so you may for example lose your connection to redis (which is reached via the redis address).
In this network mode, every dependency has to be deployed on, or have its port published to, your localhost.
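A minimal sketch of that one-line change (hedged; only the relevant part of the app service is shown, everything else stays as in the compose file above):

app:
  build:
    context: .
    dockerfile: Dockerfile.development
  # Sketch only: run this container on the host's network stack, so "localhost"
  # inside the app is the host itself. links: and published ports: are ignored
  # in this mode and would have to be removed.
  network_mode: "host"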

How to link 2 containers properly?

This is kind of a newbie question, since I'm still trying to understand how containers "communicate" with each other.
This is roughly what my docker-compose.yml looks like
...
api:
  build: ./api
  container_name: api
  volumes:
    - $HOME/devs/apps/api:/var/www/api
laravel:
  build: ./laravel
  container_name: laravel
  volumes:
    - $HOME/devs/apps/laravel:/var/www/laravel
  depends_on:
    - api
  links:
    - api
...
nginx-proxy:
  build: ./nginx-proxy
  container_name: nginx-proxy
  ports:
    - "80:80"
  links:
    - api
    - laravel
    - mysql-api
The nginx config has blocks referring to the upstreams exposed by those two php-fpm containers, like this:
location ~* \.php$ {
    fastcgi_pass laravel:9000;
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_index index.php;
    include fastcgi_params;
}
similar for the api block.
I can hit each container individually from the web browser/postman (from the host).
Inside the laravel app, there is some PHP cURL code that calls a REST service exposed by the api service. I get a 500 with this error (from the nginx container):
PHP message: PHP Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 32768 bytes) in /var/www/laravel/vendor/symfony/debug/Exception/FatalErrorException.php on line 1" while reading response header from upstream, client: 172.22.0.1, server: laravel.lo, request: "POST {route_name} HTTP/1.1", upstream: "fastcgi://172.22.0.5:9000", host: "laravel.lo"
I tried hitting the api from the laravel container using wget
root@a34903360679:/app# wget api.lo
--2018-08-01 09:57:51-- http://api.lo/
Resolving api.lo (api.lo)... 127.0.0.1
Connecting to api.lo (api.lo)|127.0.0.1|:80... failed: Connection refused.
It resolves to localhost, but I believe 127.0.0.1 in this context is the laravel container itself, not the host/nginx services. I used to have all the services in a single CentOS VM for development, which didn't have this problem.
Can anyone give some advice on how I could achieve this environment?
EDIT: I found the answer (not long after posting this question).
Refer to here: https://medium.com/#yani/two-way-link-with-docker-compose-8e774887be41
To let the laravel container reach back to the nginx service (so nginx can route the api requests to the api container), use an internal network. So something like:
networks:
  internal-api:
Then alias the laravel and nginx containers, like so:
laravel:
  ...
  networks:
    internal-api:
      aliases:
        - laravel
...
nginx-proxy:
  ...
  networks:
    internal-api:
      aliases:
        - api
networks:
  internal-api:
Newer versions of Docker Compose will do all of the networking setup for you. It will create a Docker-internal network and register an alias for each container under its block name. You don't need (and shouldn't use) links:. You only need depends_on: if you want to bring up only parts of your stack from the command line.
When setting up inter-container connections, always use the other container's name from the Compose YAML file as a DNS name (without Compose, that container's --name or an alias you explicitly declared at docker run time). Configuring these as environment variables is better, particularly if you'll run the same code outside of Docker with different settings. Never directly look up a container's IP address or use localhost or 127.0.0.1 in this context: it won't work.
I'd write your docker-compose.yml file something like:
version: '3'
services:
  api:
    build: ./api
  laravel:
    build: ./laravel
    environment:
      API_BASEURL: 'http://api/rest_endpoint'
  nginx-proxy:
    build: ./nginx-proxy
    environment:
      LARAVEL_FCGI: 'laravel:9000'
    ports:
      - "80:80"
You will probably need to write a custom entrypoint script for your nginx proxy that fills in the config file from environment variables. If you're using a container based on a full Linux distribution then envsubst is an easy tool for this.
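A hedged sketch of such an entrypoint (the LARAVEL_FCGI variable comes from the compose example above; the file names are illustrative assumptions):

#!/bin/sh
# Sketch only: substitute $LARAVEL_FCGI into an nginx config template,
# then start nginx in the foreground.
set -e
envsubst '$LARAVEL_FCGI' < /etc/nginx/conf.d/default.conf.template \
  > /etc/nginx/conf.d/default.conf
exec nginx -g 'daemon off;'

The corresponding template would then contain fastcgi_pass $LARAVEL_FCGI; instead of a hard-coded laravel:9000.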
I found the answer (not long after posting this question). Refer to here: https://medium.com/#yani/two-way-link-with-docker-compose-8e774887be41
To let the laravel container reach back to the nginx service (so nginx can route the api requests to the api container), use an internal network. So something like:
networks:
  internal-api:
With networks config, all the links config can be taken out. Then alias the laravel and nginx containers, like so:
laravel:
  ...
  networks:
    internal-api:
      aliases:
        - laravel
...
nginx-proxy:
  ...
  networks:
    internal-api:
      aliases:
        - api
networks:
  internal-api:
Then laravel can hit the api URL like this:
.env:
API_BASEURL=http://api/{rest_endpoint}

Docker: can not communicate between containers

I have a Docker setup with a php-fpm container, a node container, and an nginx container which serves as a proxy. In the browser (http://project.dev), the php container responds with JSON as I expect. All good. However, when I make a request from the node container to this php container (view code), the request fails with ECONNRESET. So apparently the node container cannot communicate with the php container. Nothing seems to be added to the nginx error log.
Error: read ECONNRESET at _errnoException(util.js: 1031: 13) at TCP.onread(net.js: 619: 25)
errno: 'ECONNRESET',
code: 'ECONNRESET',
syscall: 'read'
Any ideas?
I've made a github repo: https://github.com/thomastilkema/docker-nginx-php-fpm-node
Trimmed version of docker-compose.yml (view file)
nginx:
  depends_on:
    - php-fpm
    - node
  networks:
    - app
  ports:
    - 80:80
php-fpm:
  networks:
    - app
node:
  networks:
    - app
networks:
  app:
    driver: overlay
Trimmed version of nginx.conf (view file)
http {
    upstream php-fpm {
        server php-fpm:9000;
    }
    upstream node {
        server node:4000;
    }
    server {
        listen 80 reuseport;
        server_name api.project.dev;
        location ~ \.php$ {
            fastcgi_pass php-fpm;
            ...
        }
    }
    server {
        listen 80;
        server_name project.dev;
        location / {
            proxy_pass http://node;
        }
    }
}
php-fpm/Dockerfile (view file)
FROM php:7.1-fpm-alpine
WORKDIR /var/www
EXPOSE 9000
CMD ["php-fpm"]
Request which gives an error
const response = await axios.get('http://php-fpm:9000');
How to reproduce
Create a swarm manager (and a worker) node
Find out the ip address of your swarm manager node (usually 192.168.99.100): docker-machine ip manager or docker-machine ls. Edit your hosts file (on a Mac, sudo vi /private/etc/hosts) by adding 192.168.99.100 project.dev and 192.168.99.100 api.project.dev
git clone https://github.com/thomastilkema/docker-nginx-php-fpm-node project
cd project && ./scripts/up.sh
Have a look at the logs of the container: docker logs <container-id> -f
ECONNRESET means the other end closed the connection, which can usually be attributed to a protocol error.
The FastCGI Process Manager (FPM) speaks the FastCGI protocol, not HTTP, so an HTTP client cannot talk to it directly on port 9000.
Go via the nginx container instead, which translates the HTTP request to FastCGI:
axios.get('http://nginx/whatever.php')

Localhost connection refused in Docker development environment

I use Docker for my PHP development environment, and I set up my images with Docker Compose this way:
myapp:
  build: myapp/
  volumes:
    - ./myapp:/var/www/myapp
php:
  build: php-fpm/
  expose:
    - 9000:9000
  links:
    - elasticsearch
  volumes_from:
    - myapp
  extra_hosts:
    # Maybe the problem is related to this line
    - "myapp.localhost.com:127.0.0.1"
nginx:
  build: nginx/
  ports:
    - 80:80
  links:
    - php
  volumes_from:
    - myapp
elasticsearch:
  image: elasticsearch:1.7
  ports:
    - 9200:9200
Nginx is configured (in its Dockerfile) with a virtual host named myapp.localhost.com (server_name directive) that points to the /var/www/myapp folder.
All this works fine.
But here is my problem: my web app is calling itself via the myapp.localhost.com URL with cURL (in the PHP code), which can be more easily reproduced by running this command:
docker-compose run php curl http://myapp.localhost.com
The cURL response is the following:
cURL error 7: Failed to connect to myapp.localhost.com port 80: Connection refused
Do you have any idea on how I can call the app URL? Is there something I missed in my docker-compose.yml file?
Months later, I come back to post the (quite straightforward) answer to my question:
Remove the server_name entry in the Nginx host configuration
Remove the extra_hosts entry in the docker-compose.yml file (not required for the fix, but the entry is useless anyway)
Simply call the server using the Nginx container name as the host (nginx here):
docker-compose exec php curl http://nginx
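For illustration (this vhost is an assumption, not quoted from the question), the Nginx config then acts as the default server for any Host header, which is why http://nginx works from the php container:

# Sketch only: no server_name, so this block answers any Host header,
# including requests addressed to the container name "nginx".
server {
    listen 80 default_server;
    root /var/www/myapp;
    index index.php index.html;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass php:9000;
    }
}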
