I have a Docker setup with a php-fpm container, a node container, and an nginx container that serves as a proxy. In the browser (http://project.dev), the php container responds with JSON as I expect. All good. However, when I make a request from the node container to this php container, the request fails with ECONNRESET. So apparently the node container cannot communicate with the php container. The nginx error log does not seem to get a new entry.
Error: read ECONNRESET at _errnoException (util.js:1031:13) at TCP.onread (net.js:619:25)
errno: 'ECONNRESET',
code: 'ECONNRESET',
syscall: 'read'
Any ideas?
I've made a github repo: https://github.com/thomastilkema/docker-nginx-php-fpm-node
Trimmed version of docker-compose.yml
nginx:
  depends_on:
    - php-fpm
    - node
  networks:
    - app
  ports:
    - 80:80
php-fpm:
  networks:
    - app
node:
  networks:
    - app
networks:
  app:
    driver: overlay
Trimmed version of nginx.conf
http {
  upstream php-fpm {
    server php-fpm:9000;
  }
  upstream node {
    server node:4000;
  }
  server {
    listen 80 reuseport;
    server_name api.project.dev;
    location ~ \.php$ {
      fastcgi_pass php-fpm;
      ...
    }
  }
  server {
    listen 80;
    server_name project.dev;
    location / {
      proxy_pass http://node;
    }
  }
}
php-fpm/Dockerfile
FROM php:7.1-fpm-alpine
WORKDIR /var/www
EXPOSE 9000
CMD ["php-fpm"]
Request which gives an error
const response = await axios.get('http://php-fpm:9000');
How to reproduce
Create a swarm manager (and a worker) node
Find out the ip address of your swarm manager node (usually 192.168.99.100): docker-machine ip manager or docker-machine ls. Edit your hosts file (on a Mac, sudo vi /private/etc/hosts) by adding 192.168.99.100 project.dev and 192.168.99.100 api.project.dev
git clone https://github.com/thomastilkema/docker-nginx-php-fpm-node project
cd project
./scripts/up.sh
Have a look at the logs of the container: docker logs <container-id> -f
ECONNRESET means the other end closed the connection, which can usually be attributed to a protocol error.
The FastCGI Process Manager (FPM) uses the FastCGI protocol, not HTTP, to transport data.
Go via the nginx container instead, which translates the HTTP request to FastCGI:
axios.get('http://nginx/whatever.php')
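For example, a minimal sketch of the request from the node container (the service name nginx comes from the docker-compose.yml above; the Host header is an assumption so that the api.project.dev server block handles the request, adjust to your setup):
const axios = require('axios');
// Hedged sketch: let nginx translate the HTTP request to FastCGI for php-fpm.
// The Host header targets the api.project.dev server block from nginx.conf.
const response = await axios.get('http://nginx/whatever.php', {
  headers: { Host: 'api.project.dev' },
});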
Related
I'm trying to create a simple Docker project and connect PHP and Nginx containers for a test, but I get this error:
Building php
Sending build context to Docker daemon 2.048kB
Step 1/1 : FROM php:latest
---> 52cdb5f30a05
Successfully built 52cdb5f30a05
Successfully tagged test_php:latest
WARNING: Image for service php was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
Building nginx
Sending build context to Docker daemon 3.072kB
Step 1/2 : FROM nginx:latest
---> 55f4b40fe486
Step 2/2 : ADD default.conf /etc/nginx/conf.d/default.conf
---> 20190910ffec
Successfully built 20190910ffec
Successfully tagged test_nginx:latest
WARNING: Image for service nginx was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
Creating php ... done
Creating nginx ... done
Attaching to php, nginx
php | Interactive shell
php |
nginx | /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
nginx | /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
php | php > nginx | /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
nginx | 10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
nginx | 10-listen-on-ipv6-by-default.sh: info: /etc/nginx/conf.d/default.conf differs from the packaged version
nginx | /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
php exited with code 0
nginx | /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
nginx | /docker-entrypoint.sh: Configuration complete; ready for start up
nginx | 2022/07/10 05:34:07 [emerg] 1#1: host not found in upstream "php" in /etc/nginx/conf.d/default.conf:14
nginx | nginx: [emerg] host not found in upstream "php" in /etc/nginx/conf.d/default.conf:14
nginx exited with code 1
Here is the full directory structure of the project:
- docker
-- nginx
-- default.conf
-- Dockerfile
-- php
-- Dockerfile
- src
-- index.php
- docker-compose.yml
and these are all the files and their contents that I use:
# docker/nginx/default.conf
server {
  listen 80;
  index index.php index.htm index.html;
  root /var/www/html;
  error_log /var/log/nginx/error.log;
  access_log /var/log/nginx/access.log;
  location ~ \.php$ {
    try_files $uri =404;
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    fastcgi_pass php:9000;
    fastcgi_index index.php;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_param PATH_INFO $fastcgi_path_info;
  }
}
# docker/nginx/Dockerfile
FROM nginx:latest
ADD default.conf /etc/nginx/conf.d/default.conf
# docker/php/Dockerfile
FROM php:latest
# src/index.php
<?php
echo phpinfo();
# docker-compose.yml
version: "3.8"
services:
nginx:
container_name: nginx
build: ./docker/nginx
command: nginx -g "daemon off;"
links:
- php
ports:
- "80:80"
volumes:
- ./src:/var/www/html
php:
container_name: php
build: ./docker/php
ports:
- "9000:9000"
volumes:
- ./src:/var/www/html
working_dir: /var/www/html
The main problem occurs when I want to connect the PHP container to the project; without PHP, Nginx works correctly.
You can try adding depends_on: php in your nginx service to at least try to make sure the nginx service doesn't start until the php service is running. Probably the dependency is starting after the main service that requires it. This is a race condition problem, I think.
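For example, a minimal sketch of that change (service names taken from your docker-compose.yml):
services:
  nginx:
    build: ./docker/nginx
    depends_on:
      - php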
I had 3 nodes, where nginx and php containers lived on different nodes.
After trying various methods, such as:
defining a dedicated network for the services inside docker-compose,
using an upstream definition in the nginx config instead of the service name directly,
explicitly adding Docker's 127.0.0.11 resolver to nginx,
none of them worked...
The actual reason turned out to be closed ports: https://docs.docker.com/engine/swarm/networking/#firewall-considerations
Docker daemons participating in a swarm need the ability to communicate with each other over the following ports:
Port 7946 TCP/UDP for container network discovery.
Port 4789 UDP for the container overlay network.
After I reverted all the changes I had made (network, resolver, upstream definition) back to the original simple setup and opened the ports for inter-node communication, service discovery began to work as expected.
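For reference, a hedged sketch of opening those ports on every node (this assumes ufw; use whatever firewall tooling your hosts actually run):
# container network discovery (gossip)
sudo ufw allow 7946/tcp
sudo ufw allow 7946/udp
# overlay network (VXLAN) traffic
sudo ufw allow 4789/udp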
Docker 20.10
Several issues:
It seems you have containers that can't see each other.
It seems containers exit/fail, and that is certainly not because of the first issue; nginx would still work if the php-fpm socket is unavailable. It might complain, but it should manage such unavailability very well.
Make sure the php-fpm configuration is really opening an FPM socket on port 9000 (see the sketch after this list).
Your index.php script is not closed with "?>" [but that does not matter here].
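A hedged sketch of the pool setting that check refers to (the php:*-fpm images ship an equivalent default in zz-docker.conf; a plain php:latest CLI image runs no FPM at all, so nothing would be listening on 9000):
[www]
listen = 9000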
To summarize, you were advised:
to consider Docker Swarm networking configuration [but it seems you are not using Docker Swarm]
to use depends_on, which helps Docker decide what to start first; but that should not be an issue in your case, since nginx can wait. It will use the socket only upon web user requests.
So it seems internal Docker name resolution is your issue, and defining the network manually seems to be best practice. In my case I wandered too long before simply giving the docker-compose file a specific network name and attaching the containers to that network.
If the containers are in the same docker-compose file, they should be in the same yourserver_default network that is auto-generated for your composed services.
Have a look at https://blog.devsense.com/2019/php-nginx-docker, they actually define that network manually.
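For illustration, a minimal sketch of such an explicitly named network (the network name here is just a placeholder; service names come from the docker-compose.yml above):
services:
  nginx:
    networks:
      - app-network
  php:
    networks:
      - app-network
networks:
  app-network:
    driver: bridge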
And if you haven't solved this yet, perhaps redo everything from scratch. Otherwise, all the best to you!
I can't get the container to listen on port 8000, even after adding the port mapping in my docker-compose.yml file.
All relevant files can be found here: https://github.com/salvatore-esposito/laravel-dockerized
I ran the following command: docker-compose exec app php artisan serve and it ran successfully.
Anyway, if I go inside the container, curl works as expected, but it doesn't work from the outside. The connection gets refused.
I fetched the IP using docker-machine ip.
Please note that I mapped the outside-to-inside port for my container via docker-compose.yml, even though in the repository there is no such mapping.
I tried to copy all files to a built image and launch:
docker run --rm -p 8000:8000 --name laravel salvio/php-laravel php artisan serve
and
docker exec -it laravel bash
One more time: if I run "curl localhost:80" and "curl localhost:8000", the former doesn't work and the latter does, whereas if I take the container's IP via docker inspect name_container and type curl ip_of_container:8000, I get nothing.
When using docker-compose exec, a command keeps running until its interactive session is stopped (by using Ctrl-C or closing the terminal), because it isn't running as a service. To be able to keep the following command running
docker-compose exec app php artisan serve
you would have to open two terminals: one with the command and one to connect to the container and check port 8000.
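For example (the service name app is taken from your docker-compose.yml; the curl check assumes curl is available in the image, as in your own tests):
docker-compose exec app php artisan serve             # terminal 1: keeps the dev server in the foreground
docker-compose exec app curl http://localhost:8000    # terminal 2: check port 8000 from inside the container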
If you want to access your container's port 8000, you would have to expose port 8000 in the Dockerfile:
# rest of docker file
# Copy existing application directory permissions
#COPY --chown=www-data:www-data ./code /var/www/html
# Change current user to www-data
#USER www-data
# Expose port 9000 and start php-fpm server
EXPOSE 80
EXPOSE 8000
and map it to your host in the docker-compose file:
app:
  build:
    context: .
    dockerfile: .config/php/Dockerfile
  image: salvio/php-composer-dev
  container_name: app
  restart: unless-stopped
  tty: true
  environment:
    SERVICE_NAME: app
    SERVICE_TAGS: dev
  working_dir: /var/www/html
  ports:
    - "80:80"
    - "8000:8000"
  volumes:
    - ./code/:/var/www/html
    - .config/php/php.ini:/usr/local/etc/php/conf.d/local.ini
  networks:
    - myproject-network
Please keep in mind that php artisan serve binds to localhost:8000. This means it is only reachable from within the container. Use
php artisan serve --host 0.0.0.0
to bind to the shared network interface (a combined sketch follows the links below). Check out the following resources:
https://stackoverflow.com/a/54022753/6310593
How do you dockerize a WebSocket Server?
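Putting it together, a hedged one-liner based on the run command from the question (the image name salvio/php-laravel comes from the question; --host and --port are standard artisan serve options):
docker run --rm -p 8000:8000 --name laravel salvio/php-laravel \
  php artisan serve --host 0.0.0.0 --port 8000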
I'm trying to set up Apache2 and PHP-FPM via a Unix socket, but the result is
(111)Connection refused: AH02454: FCGI: attempt to connect to Unix domain socket /run/php/php7.2-fpm.sock (*) failed
docker-compose.yml
version: "2"
services:
php:
build: "php:7.2-rc-alpine"
container_name: "php"
volumes:
- "./code:/usr/local/apache2/htdocs"
- "./php7.2-fpm.sock:/run/php/php7.2-fpm.sock"
apache2:
build: "httpd:2.4-alpine"
container_name: "apache2"
volumes:
- "./code:/usr/local/apache2/htdocs"
- "./php7.2-fpm.sock:/run/php/php7.2-fpm.sock"
ports:
- 80:80
links:
- php
www.conf
listen = /run/php/php7.2-fpm.sock
httpd-vhosts.conf
<FilesMatch \.php$>
SetHandler "proxy:unix:/run/php/php7.2-fpm.sock|fcgi://localhost/"
</FilesMatch>
But it works when connecting via TCP.
www.conf
listen = 127.0.0.1:9000
httpd-vhosts.conf
<FilesMatch \.php$>
SetHandler "proxy:fcgi://php:9000"
</FilesMatch>
Okay, so having the repo helped to fix the issue.
Issue #1 - www.conf being copied into the apache container
You had the below statement in your apache container's Dockerfile:
COPY ./www.conf /usr/local/etc/php-fpm.d/www.conf
This is actually intended for the php container, which will be running php-fpm, and not for the apache container.
Issue #2 - Socket was never being created
Your volume bind - "./php7.2-fpm.sock:/run/php/php7.2-fpm.sock" - was creating the socket file itself; it was not being created by php-fpm as such. So you had created a blank file, and trying to connect to it won't do anything.
Issue #3 - No config in php to create socket
The Docker image by default listens on 0.0.0.0:9000 inside the fpm container. You needed to override the zz-docker.conf file inside the container to fix the issue.
zz-docker.conf
[global]
daemonize = no
[www]
listen = /run/php/php7.2-fpm.sock
listen.mode = 0666
Updated Dockerfile
FROM php:7.2-rc-fpm-alpine
LABEL maintainer="Eakkapat Pattarathamrong (overbid#gmail.com)"
RUN docker-php-ext-install \
sockets
RUN set -x \
&& deluser www-data \
&& addgroup -g 500 -S www-data \
&& adduser -u 500 -D -S -G www-data www-data
COPY php-fpm.d /usr/local/etc/php-fpm.d/
Issue #4 - Sockets being shared as volumes to host
You should be sharing sockets using a named volume, so the socket is not on the host at all.
Updated docker-compose.yml
version: "2"
services:
php:
build: "./php"
container_name: "php"
volumes:
- "./code:/usr/local/apache2/htdocs"
- "phpsocket:/run/php"
apache2:
build: "./apache2"
container_name: "apache2"
volumes:
- "./code:/usr/local/apache2/htdocs"
- "phpsocket:/run/php"
ports:
- 7080:80
links:
- php
volumes:
phpsocket:
After fixing all the issues I was able to get the PHP page working.
I start the jwilder nginx proxy on port 8080:
docker run -d -p 8080:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
I'm running a standard php:apache container without mapping the port (so 80 is exposed on the container network).
I use an env var to connect with the proxy:
docker run -d -e VIRTUAL_HOST=test.example.com php:apache
on my localhost I've added this in /etc/hosts:
IP test.example.com
Now I visit test.example.com:8080 on my computer, so I connect to the reverse proxy (8080), which should route me to php:apache on container port 80.
But I get this error:
Error:
Forbidden
You don't have permission to access / on this server.
Apache/2.4.10 (Debian) Server at test.example.com Port 8080
What am I missing? Do I have to change the Apache configuration somewhere? (It's all default now.) It seems to get through nginx, because I'm seeing an Apache error, so I think I need to tell the Apache inside (php:apache) to allow this 'route'?
Your title appears to be misleading. From your description, you've set up a properly functioning reverse proxy, and the target you are connecting to with your reverse proxy is broken. If you review the Docker Hub page for the php:apache image, you'll find multiple examples of how to load your PHP code into the image and get it working. E.g.:
$ docker run -d -e VIRTUAL_HOST=test.example.com \
-v "$PWD/your/php/code/dir":/var/www/html php:7.0-apache
So in my host environment I have this Tomcat service running on port 10000, so I can access the office's internal services.
I have a windows hosts entry:
localhost developer.mycompany.com
So I can access the endpoint developer.mycompany.com:10000/some/url, and if successful it returns a JSON response.
I also have a Docker compose file that has spun up nginx and php-fpm containers and linked them to run a linux based PHP development environment.
What I am trying to achieve is to make the Docker container(s) aware of the developer.mycompany.com host entry, so that when my PHP code on my linked containers sends a POST request to http://developer.mycompany.com:10000/some/url it knows about the host entry and is able to hit that endpoint.
I have tried the config net=host but that doesn't work with linked containers.
PHP app error message:
{"result":{"success":false,"message":"Error creating resource: [message] fopen(): php_network_getaddresses: getaddrinfo failed: Name or service not known\n[file] /srv/http/vendor/guzzlehttp/guzzle/src/Handler/StreamHandler.php\n[line] 282\n[message] fopen(http://developer.mycompany.com:10000/app/register): failed to open stream: php_network_getaddresses: getaddrinfo failed: Name or service not known\n[file] /srv/http/vendor/guzzlehttp/guzzle/src/Handler/StreamHandler.php\n[line] 282"}}
How do I enable my PHP app on my linked containers to talk to the host developer.mycompany.com (localhost) entry?
Here is my docker-compose:
app:
  image: linxlad/docker-php-fpm
  tty: true
  ports:
    - "9000:9000"
  volumes:
    - ./logs/php-fpm:/var/log/php-fpm
    - ~/Development/Apps/php-hello-world:/srv/http
web:
  image: linxlad/docker-nginx
  ports:
    - "8080:80"
  volumes:
    - ./conf/nginx:/etc/nginx/conf.d
    - ./logs/nginx:/var/log/nginx
    - ~/Development/Apps/php-hello-world:/srv/http
  links:
    - app
Edits:
Here is my ifconfig output from docker-machine.
Thanks
I found the IP of my docker machine and used that in my host's nginx proxy config, as below, which then redirected the request to the nginx location config in my Docker container.
Host nginx:
location /apps/myapp/appversion {
  proxy_pass http://192.168.99.100:8080;
}
Docker Nginx container config:
location /apps/myapp/appversion {
  try_files $uri $uri/ /index.php?args;
  if (!-e $request_filename) {
    rewrite ^/(.*)$ /index.php last;
  }
}