How to connect PHP and Nginx containers together

I'm trying to create a simple Docker project and connect PHP and Nginx containers for a test, but I got this error:
Building php
Sending build context to Docker daemon 2.048kB
Step 1/1 : FROM php:latest
---> 52cdb5f30a05
Successfully built 52cdb5f30a05
Successfully tagged test_php:latest
WARNING: Image for service php was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
Building nginx
Sending build context to Docker daemon 3.072kB
Step 1/2 : FROM nginx:latest
---> 55f4b40fe486
Step 2/2 : ADD default.conf /etc/nginx/conf.d/default.conf
---> 20190910ffec
Successfully built 20190910ffec
Successfully tagged test_nginx:latest
WARNING: Image for service nginx was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
Creating php ... done
Creating nginx ... done
Attaching to php, nginx
php | Interactive shell
php |
nginx | /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
nginx | /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
php | php > nginx | /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
nginx | 10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
nginx | 10-listen-on-ipv6-by-default.sh: info: /etc/nginx/conf.d/default.conf differs from the packaged version
nginx | /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
php exited with code 0
nginx | /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
nginx | /docker-entrypoint.sh: Configuration complete; ready for start up
nginx | 2022/07/10 05:34:07 [emerg] 1#1: host not found in upstream "php" in /etc/nginx/conf.d/default.conf:14
nginx | nginx: [emerg] host not found in upstream "php" in /etc/nginx/conf.d/default.conf:14
nginx exited with code 1
Here is the full directory structure of the project:
- docker
  - nginx
    - default.conf
    - Dockerfile
  - php
    - Dockerfile
- src
  - index.php
- docker-compose.yml
and these are all the files and their contents:
# docker/nginx/default.conf
server {
    listen 80;
    index index.php index.htm index.html;
    root /var/www/html;
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass php:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
}
# docker/nginx/Dockerfile
FROM nginx:latest
ADD default.conf /etc/nginx/conf.d/default.conf
# docker/php/Dockerfile
FROM php:latest
# src/index.php
<?php
echo phpinfo();
# docker-compose.yml
version: "3.8"
services:
nginx:
container_name: nginx
build: ./docker/nginx
command: nginx -g "daemon off;"
links:
- php
ports:
- "80:80"
volumes:
- ./src:/var/www/html
php:
container_name: php
build: ./docker/php
ports:
- "9000:9000"
volumes:
- ./src:/var/www/html
working_dir: /var/www/html
The main problem occurs when I connect the PHP container to the project; without PHP, Nginx works correctly.

You can try adding depends_on: php to your nginx service to at least make sure the nginx service doesn't start until the php service is running. Probably the dependency is starting after the main service that requires it. This is a race condition problem, I think.
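For example, a minimal sketch against the compose file from the question (note that depends_on only orders container startup; it does not wait until PHP-FPM is actually ready to accept connections):

services:
  nginx:
    build: ./docker/nginx
    depends_on:
      - php   # create and start php before nginx
    ports:
      - "80:80"
  php:
    build: ./docker/php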

I had 3 nodes, where nginx and php containers lived on different nodes.
After trying various methods, such as:
- defining a dedicated network for the services inside docker-compose
- using an upstream definition in the nginx config instead of the service name directly
- explicitly adding Docker's 127.0.0.11 resolver to nginx
none of them worked.
The actual reason turned out to be closed ports: https://docs.docker.com/engine/swarm/networking/#firewall-considerations
Docker daemons participating in a swarm need the ability to communicate with each other over the following ports:
- Port 7946 TCP/UDP for container network discovery.
- Port 4789 UDP for the container overlay network.
After I reverted all the changes I had made (network, resolver, upstream definition) back to the original simple setup and opened those ports for inter-node communication, service discovery began to work as expected.
Docker 20.10

Several issues:
- It seems you have containers that can't see each other.
- It seems a container exits/fails, and that is certainly not because of the first issue; nginx would still work if the php-fpm socket were unavailable. It might complain, but it should handle such unavailability very well.
- Make sure your PHP-FPM configuration is really opening an FPM socket on port 9000.
- Your index.php script is not closed with "?>" [but that does not matter here].
To summarize, you were suggested:
- to consider Docker Swarm networking configuration [but it seems you are not using Docker Swarm]
- to use depends_on, which helps Docker decide what to start first; but that should not be an issue in your case, since nginx can wait and will use the socket only upon web user requests.
So it seems internal Docker name resolution is your issue, and defining the network manually seems to be best practice. In my case, I wandered too long before simply giving the docker-compose file a specific network name and attaching the containers to that network.
If the containers are in the same docker-compose file, they should be in the same yourserver_default network that is auto-generated for your composed services.
Have a look at https://blog.devsense.com/2019/php-nginx-docker, they actually define that network manually.
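As a minimal sketch of that idea, adapted to the compose file from the question (the php:fpm base image is my assumption: something must actually keep listening on port 9000, and the plain php:latest CLI image just drops into an interactive shell and exits, which matches the "php exited with code 0" line in the logs above):

version: "3.8"
services:
  nginx:
    build: ./docker/nginx
    ports:
      - "80:80"
    networks:
      - app
  php:
    build: ./docker/php   # assumption: its Dockerfile uses FROM php:fpm
    networks:
      - app
networks:
  app:
    driver: bridge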
And if you haven't solved this yet, perhaps redo everything from scratch. Otherwise, all the best to you!

Related

When running Nginx + PHP-FPM in two different containers, can that configuration ever work without sharing a code volume?

I have the following docker-compose.yaml file for local development that works without issue:
- The Nginx container just runs the webserver, with an upstream pointing to php
- The PHP container runs just php-fpm + my extensions
- I have an external docker-sync volume which contains my code base and is shared with both nginx + php.
The entire contents of my application is purely PHP returning a bunch of json api data. No static assets get served up.
version: '3.9'
networks:
  backend:
    driver: bridge
services:
  site:
    container_name: nginx
    depends_on: [php]
    image: my-nginx:latest
    networks: [backend]
    ports: ['8080:80', '8081:443']
    restart: always
    volumes: [code:/var/www/html:nocopy]
    working_dir: /var/www/html
  php:
    container_name: php
    image: my-php-fpm:latest
    networks: [backend]
    ports: ['9000:9000']
    volumes: [code:/var/www/html:nocopy]
    working_dir: /var/www/html
volumes:
  code:
    external: true
I'm playing around with ways to deploy this in my production infrastructure and am liking AWS ECS for it. I can create a single task definition, that launches a single service with both containers defined (and both sharing a code volume that I add in during my build process) and the application works.
This solution seems odd to me because now the only way my application can scale out is by giving me a {php + nginx} container set each time. My PHP needs are going to scale faster than my nginx ones, so this strikes me as a bit wasteful.
I've tried experimenting with the following setup:
- 1 ECS service for just nginx
- 1 different ECS service for just php
Both are load balanced, but by virtue of using Fargate and them being on different services, I don't have a way to add a volumesFrom block on the nginx container that would give it access to my code (which I package on the PHP container during my build process). There is no reference to the PHP docker container that I can make that allows this to happen.
My configuration "works" in that the load balanced Nginx service can now scale independent of the load balanced PHP service. They're able to both talk to each other. But Nginx not having my code means it can't help but return a 404 on anything that I want my php upstream to handle.
server {
    listen 80;
    server_name localhost;
    root /var/www/html/public;
    index index.php index.html;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location /health {
        access_log off;
        return 200 'PASS';
        add_header Content-Type text/plain;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass app-upstream;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        proxy_http_version 1.1;
        proxy_set_header "Connection" "";
    }
}
Is there any nginx configuration I can write that would make this setup (without nginx having access to my code) work?
It feels like my only options are either copying the same code onto both containers (which feels weird), combining them both into the same container (which violates the 1 service/1 container rule), or accepting that I can't scale them as independently as I would like (which is not the end of the world).
I've never seen a setup where Nginx and PHP were running in separate ECS tasks that scaled independently. I've always seen it where they are both running in the same ECS task, with a shared folder.
I wouldn't worry too much about this being "wasteful". You're adding a tiny amount of CPU usage to each Fargate ECS task by adding Nginx to each task. I would focus more on the fact that you are keeping latency as low as possible by running them both in the same task, so Nginx can pass requests to PHP over localhost.
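A docker-compose-shaped sketch of that co-located pattern (my illustration, not from the post: network_mode: "service:php" makes nginx share the php container's network namespace, roughly like two containers in one ECS task, so nginx can reach FPM at 127.0.0.1:9000; published ports then live on the php service):

services:
  php:
    image: my-php-fpm:latest
    ports: ['8080:80']          # nginx's port 80 lives in this shared namespace
  site:
    image: my-nginx:latest
    network_mode: "service:php" # share php's network namespace
    depends_on: [php]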
It's not required to share the volume between those two containers: the PHP scripts are needed only by the PHP container; Nginx only needs network access to the PHP container so it can proxy the requests.
To run your application on AWS ECS, you need to pack Nginx + PHP into the same task, so the load balancer proxies the request to the container, Nginx accepts the connection and proxies it to PHP, and then returns the response.
Using one Nginx container as a proxy to multiple PHP containers is not possible using Fargate; it would require running the containers on the same network and somehow making the Nginx container proxy and balance the incoming connections. Besides that, whenever a new PHP container was deployed, it would have to be registered with Nginx to start receiving connections.
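Locally, the first point (nginx without any code volume) could be sketched in compose form like this, reusing the image names from the question; the matching nginx change (dropping the try_files $uri =404 check and pointing SCRIPT_FILENAME at the code path inside the PHP container) is assumed:

services:
  site:
    image: my-nginx:latest
    networks: [backend]
    ports: ['8080:80']
    # no code volume here: nginx only proxies FastCGI to php
  php:
    image: my-php-fpm:latest    # code baked into this image at build time
    networks: [backend]
networks:
  backend:
    driver: bridge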
I had the same struggle for a long time until I moved all my PHP apps to NGINX Unit.
https://unit.nginx.org/howto/cakephp/
This is an example of how easy it is to have a single-container setup handle static files (html, css, js) as well as all the PHP code. To learn more about Unit in Docker, check this out: https://unit.nginx.org/howto/docker/
Let me know if you have any issues with the tutorials.

Nginx, PHP-FPM, Docker - 113: Host is unreachable

I'm struggling to understand where my error is. I've looked at various answers and tried the remedies, only to find that their solutions did not rectify my problem. I've stripped everything down to the VERY basics to see if I can just get a basic PHP index.php to present itself.
Here is what I'm trying to accomplish at the core:
I have docker-compose standing up 1 network, and 2 services connected to the network. One service is PHP-FPM, and the other is nginx to serve the PHP-FPM. Every time I stand this up, no matter how I seem to configure it, I just get a 502 Bad Gateway, and when I inspect the nginx container logs, I get [error] 29#29: *1 connect() failed (113: Host is unreachable) while connecting to upstream.
./docker-compose.yml
version: "3.7"
networks:
app:
driver: bridge
services:
php:
image: php:7.4-fpm
container_name: php
volumes:
- /home/admin/dev/test/php/www.conf:/usr/local/etc/php-fpm.d/www.conf
- /home/admin/dev/test/src/:/var/www/html
networks:
- app
nginx:
image: nginx:alpine
container_name: nginx
depends_on:
- php
ports:
- "80:80"
- "443:443"
volumes:
- /home/admin/dev/test/src/:/usr/share/nginx/html
- /home/admin/dev/test/nginx/conf.d/app.conf:/etc/nginx/conf.d/app.conf
networks:
- app
./php/www.conf -> /usr/local/etc/php-fpm.d/www.conf
[www]
user = www-data
group = www-data
listen = 0.0.0.0:9000
pm = dynamic
pm.max_children = 5
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3
./nginx/conf.d/app.conf -> /etc/nginx/conf.d/app.conf
server {
    listen 80;
    server_name localhost;
    root /usr/share/nginx/html;
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass php:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
./src/index.php -> (php)/var/www/html && (nginx)/usr/share/nginx/html (just for reference)
<?php
phpinfo();
Docker: Docker version 19.03.12, build 48a66213fe
Docker-compose: docker-compose version 1.25.4, build unknown
Environment: Linux localhost.localdomain 5.7.14-200.fc32.x86_64 #1 SMP Fri Aug 7 23:16:37 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux (Fedora 32 Workstation)
I believe I just have a major misunderstanding of PHP-FPM but maybe there is something else.
Update During Troubleshooting
The thought occurred to me that my overall environment (i.e. Fedora 32) was messing it up. Fedora 32 is not supported out of the box by Docker (I had to change the repo settings in /etc/yum.repos.d to get it to work - had to use Fedora 31's repo). So I decided to spin up an Ubuntu 20.04 VM and test it there. Now PHP-FPM and Nginx are talking; I get responses from the PHP-FPM container! However, now even with just the basic script, I'm getting 404 errors, but that is MUCH closer to where I need to be... now to fix the 404.
The exact error is: [error] 30#30: *1 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream
Final Update (Answer)
For anyone coming across this, as of today's date, Docker didn't work with Fedora 32 (some parts did). At least not with the time I had available to troubleshoot/patch. It was a fresh Fedora 32 with no previous docker/docker-compose or anything installed.
I stood up a fresh Fedora 31 and Ubuntu 20.04 just to verify my "conclusion". Both worked right out of the box with no extra tweaks.
Can you check if your php-fpm service is running? The issue could be that the php-fpm service is not running, hence nginx could not connect to it.
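One way to make that visible is to add a healthcheck to the php service, so docker-compose ps reports its state. A sketch (my addition, assuming FPM listens on TCP port 9000 as in the www.conf above):

php:
  image: php:7.4-fpm
  healthcheck:
    # exits 0 only if something accepts a TCP connection on the FPM port
    test: ["CMD", "php", "-r", "exit((int)!@fsockopen('127.0.0.1', 9000));"]
    interval: 10s
    timeout: 3s
    retries: 3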

Docker, Symfony nginx/php-fpm initialized very slow

Using this project/Docker setup:
https://gitlab.com/martinpham/symfony-5-docker
When I do docker-compose up -d I have to wait about 2-3 minutes to actually get it working.
Before it loads, it gives me "502 Bad Gateway" and logs error:
2020/05/10 09:22:23 [error] 8#8: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.28.0.1, server: localhost, request: "GET / HTTP/1.1", upstream: "fastcgi://172.28.0.3:9000", host: "localhost"
Why is nginx or php-fpm or something else loading so slowly?
It's my first time using nginx and Symfony. Is this normal? I expect it to load in 1-2 seconds at most, not 2-3 minutes.
Yes, I have seen similar issues, but no solutions appropriate for me.
Some nginx/php-fpm/docker-compose configuration should probably be changed - I tried, but no luck.
I modified nginx/sites/default.conf a little bit (just added xdebug stuff):
server {
    listen 80 default_server;
    #listen [::]:80 default_server ipv6only=on;
    server_name localhost;
    root /var/www/public;
    index index.php index.html index.htm;

    location / {
        try_files $uri $uri/ /index.php$is_args$args;
    }

    location ~ \.php$ {
        try_files $uri /index.php =404;
        fastcgi_pass php-upstream;
        fastcgi_index index.php;
        fastcgi_buffers 4 256k;
        fastcgi_buffer_size 128k;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        #!!!!fixes timeouts
        fastcgi_read_timeout 600;
        include fastcgi_params;
        fastcgi_param PHP_VALUE "xdebug.remote_autostart=1
            xdebug.idekey=PHPSTORM
            xdebug.remote_enable=1
            xdebug.remote_port=9001
            xdebug.remote_host=192.168.0.12";
    }

    location ~ /\.ht {
        deny all;
    }

    location /.well-known/acme-challenge/ {
        root /var/www/letsencrypt/;
        log_not_found off;
    }
}
nginx/conf.d/default.conf:
upstream php-upstream {
    server php-fpm:9000;
}
docker-compose.yml:
version: '3'
services:
  database:
    build:
      context: ./database
    environment:
      - MYSQL_DATABASE=${DATABASE_NAME}
      - MYSQL_USER=${DATABASE_USER}
      - MYSQL_PASSWORD=${DATABASE_PASSWORD}
      - MYSQL_ROOT_PASSWORD=${DATABASE_ROOT_PASSWORD}
    ports:
      - "3306:3306"
    volumes:
      - ./database/init.sql:/docker-entrypoint-initdb.d/init.sql
      - ./database/data:/var/lib/mysql
  php-fpm:
    build:
      context: ./php-fpm
    depends_on:
      - database
    environment:
      - TIMEZONE=Europe/Tallinn
      - APP_ENV=${APP_ENV}
      - APP_SECRET=${APP_SECRET}
      - DATABASE_URL=mysql://${DATABASE_USER}:${DATABASE_PASSWORD}@database:3306/${DATABASE_NAME}?serverVersion=5.7
    volumes:
      - ../src:/var/www
  nginx:
    build:
      context: ./nginx
    volumes:
      - ../src:/var/www
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./nginx/sites/:/etc/nginx/sites-available
      - ./nginx/conf.d/:/etc/nginx/conf.d
      - ./logs:/var/log
    depends_on:
      - php-fpm
    ports:
      - "80:80"
      - "443:443"
EDIT:
I think I now know why your project is taking ages to start. I had a closer look at the Dockerfile in the php-fpm folder, and you have this command:
CMD composer install ; wait-for-it database:3306 -- bin/console doctrine:migrations:migrate ; php-fpm
As you can see, that command will install all composer dependencies and then wait until it can connect to the database container defined in the docker-compose.yml configuration:
services:
  database:
    build:
      context: ./database
    environment:
      - MYSQL_DATABASE=${DATABASE_NAME}
      - MYSQL_USER=${DATABASE_USER}
      - MYSQL_PASSWORD=${DATABASE_PASSWORD}
      - MYSQL_ROOT_PASSWORD=${DATABASE_ROOT_PASSWORD}
    ports:
      - "3306:3306"
    volumes:
      - ./database/init.sql:/docker-entrypoint-initdb.d/init.sql
      - ./database/data:/var/lib/mysql
Once the database is up and running, it will run the migration files in src/src/Migrations to update the database and then start php-fpm.
Until all of this is done, your project won't be ready and you will get the nice '502 Bad Gateway' error.
You can verify what's happening by running docker-compose up, omitting the -d argument this time so that you don't run in detached mode; this will display all your container logs in real time.
You will see a bunch of logs, including the ones related to what composer is doing in the background, ex:
api-app | - Installing ocramius/package-versions (1.8.0): Downloading (100%)
api-app | - Installing symfony/flex (v1.6.3): Downloading (100%)
api-app |
api-app | Prefetching 141 packages
api-app | - Downloading (100%)
api-app |
api-app | - Installing symfony/polyfill-php73 (v1.16.0): Loading from cache
api-app | - Installing symfony/polyfill-mbstring (v1.16.0): Loading from cache
Composer install might take more or less time depending on whether you have all the repositories cached or not.
The solution here, if you want to speed things up during development, would be to remove the composer install command from the Dockerfile and run it manually only when you want to update or install new dependencies. This way, you avoid composer install being run every time you run docker-compose up -d.
To do it manually, you would connect to your container and run composer install there, or, if you have Composer installed directly on your OS, simply navigate to the src folder and run that same command.
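If you would rather keep the Dockerfile untouched, the same idea can be sketched at the compose level by overriding the command so the composer install step is skipped (wait-for-it and the migration call are copied from the Dockerfile's CMD above):

php-fpm:
  build:
    context: ./php-fpm
  # skip composer install; run it manually when dependencies change
  command: sh -c "wait-for-it database:3306 -- bin/console doctrine:migrations:migrate; php-fpm"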
This, plus the trick below, should help you get a nice and fast enough project locally.
I have a similar configuration and everything works well. The docker-compose command should take some time the first time you run it, as your images need to be built, but after that it shouldn't even take a second to run.
From what I see, however, you have a lot of mounted volumes that could affect your performance. When I ran tests with nginx and Symfony on a Mac, I had really bad performance at the beginning, with pages taking at least 30 seconds to load.
One solution to speed this up in my case was to use the :delegated option on some of my volumes to speed up their access.
Try adding that option to your volumes and see if it changes anything for you:
[...]
volumes:
  - ../src:/var/www:delegated
[...]
If delegated is not a good option for you, read more about the other options, consistent and cached, to see what best fits your needs.
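For reference, the other mount consistency options mentioned there look the same in the compose file (macOS-specific behaviour; the annotations are mine):

volumes:
  # pick one flag per volume:
  - ../src:/var/www:cached       # host's view is authoritative; container reads may lag
  - ../src:/var/www:consistent   # full consistency (the default, and the slowest)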

Docker: can not communicate between containers

I have a Docker setup with a php-fpm container, a node container and an nginx container which serves as a proxy. In the browser (http://project.dev), the php container responds with json like I expect. All good. However, when I make a request from the node container to this php container (view code), I get an ECONNRESET error on the request. So apparently, the node container cannot communicate with the php container. The nginx error log does not seem to get a new entry.
Error: read ECONNRESET at _errnoException(util.js: 1031: 13) at TCP.onread(net.js: 619: 25)
errno: 'ECONNRESET',
code: 'ECONNRESET',
syscall: 'read'
Any ideas?
I've made a github repo: https://github.com/thomastilkema/docker-nginx-php-fpm-node
Trimmed version of docker-compose.yml (view file)
nginx:
  depends_on:
    - php-fpm
    - node
  networks:
    - app
  ports:
    - 80:80
php-fpm:
  networks:
    - app
node:
  networks:
    - app
networks:
  app:
    driver: overlay
Trimmed version of nginx.conf (view file)
http {
    upstream php-fpm {
        server php-fpm:9000;
    }
    upstream node {
        server node:4000;
    }
    server {
        listen 80 reuseport;
        server_name api.project.dev;
        location ~ \.php$ {
            fastcgi_pass php-fpm;
            ...
        }
    }
    server {
        listen 80;
        server_name project.dev;
        location / {
            proxy_pass http://node;
        }
    }
}
php-fpm/Dockerfile (view file)
FROM php:7.1-fpm-alpine
WORKDIR /var/www
EXPOSE 9000
CMD ["php-fpm"]
Request which gives an error
const response = await axios.get('http://php-fpm:9000');
How to reproduce
- Create a swarm manager (and a worker) node
- Find out the IP address of your swarm manager node (usually 192.168.99.100): docker-machine ip manager or docker-machine ls. Edit your hosts file (on a Mac, sudo vi /private/etc/hosts) by adding 192.168.99.100 project.dev and 192.168.99.100 api.project.dev
- git clone https://github.com/thomastilkema/docker-nginx-php-fpm-node project
- cd project && ./scripts/up.sh
- Have a look at the logs of the container: docker logs <container-id> -f
ECONNRESET is the other end closing the connection which can usually be attributed to a protocol error.
The FastCGI Process Manager (FPM) uses the FastCGI protocol to transport data.
Go via the nginx container instead, which translates the HTTP request to FastCGI:
axios.get('http://nginx/whatever.php')

Getting Docker linked containers localhost pointing to host localhost

So on my host environment I have this tomcat service running on port 10000 so I can access the office internal services.
I have a windows hosts entry:
localhost developer.mycompany.com
So I can access the endpoint developer.mycompany.com:10000/some/url which, if successful, returns a json response.
I also have a Docker compose file that has spun up nginx and php-fpm containers and linked them to run a linux based PHP development environment.
What I am trying to achieve is to make the Docker container(s) aware of the developer.mycompany.com host entry, so when my PHP code on my linked containers sends a POST request to http://developer.mycompany.com:10000/some/url it knows about the host entry and is able to hit that endpoint.
I have tried the config net=host but that doesn't work with linked containers.
PHP app error message:
{"result":{"success":false,"message":"Error creating resource: [message] fopen(): php_network_getaddresses: getaddrinfo failed: Name or service not known\n[file] /srv/http/vendor/guzzlehttp/guzzle/src/Handler/StreamHandler.php\n[line] 282\n[message] fopen(http://developer.mycompany.com:10000/app/register): failed to open stream: php_network_getaddresses: getaddrinfo failed: Name or service not known\n[file] /srv/http/vendor/guzzlehttp/guzzle/src/Handler/StreamHandler.php\n[line] 282"}}
How do I enable my PHP app on my linked containers to talk to the host developer.mycompany.com (localhost) entry?
Here is my docker-compose:
app:
  image: linxlad/docker-php-fpm
  tty: true
  ports:
    - "9000:9000"
  volumes:
    - ./logs/php-fpm:/var/log/php-fpm
    - ~/Development/Apps/php-hello-world:/srv/http
web:
  image: linxlad/docker-nginx
  ports:
    - "8080:80"
  volumes:
    - ./conf/nginx:/etc/nginx/conf.d
    - ./logs/nginx:/var/log/nginx
    - ~/Development/Apps/php-hello-world:/srv/http
  links:
    - app
Edit:
Here is my ifconfig output from docker-machine.
Thanks
I found the IP of my docker machine and used it in my host's nginx proxy config like so, which then redirects to the nginx location config in my Docker container.
Host nginx:
location /apps/myapp/appversion {
    proxy_pass http://192.168.99.100:8080;
}
Docker Nginx container config:
location /apps/myapp/appversion {
    try_files $uri $uri/ /index.php?args;
    if (!-e $request_filename) {
        rewrite ^/(.*)$ /index.php last;
    }
}
