Docker, Symfony: nginx/php-fpm initializes very slowly

Using this project/Docker setup:
https://gitlab.com/martinpham/symfony-5-docker
When I run docker-compose up -d I have to wait about 2-3 minutes before it actually starts working.
Before it loads, it gives me "502 Bad Gateway" and logs error:
2020/05/10 09:22:23 [error] 8#8: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.28.0.1, server: localhost, request: "GET / HTTP/1.1", upstream: "fastcgi://172.28.0.3:9000", host: "localhost"
Why is nginx, php-fpm, or something else loading so slowly?
It's my first time using nginx and Symfony. Is this normal? I expected it to load in 1-2 seconds at most, not 2-3 minutes.
Yes, I have seen similar issues, but no solutions that worked for me.
Some nginx/php-fpm/docker-compose configuration probably needs to be changed - I tried, but no luck.
I modified nginx/sites/default.conf slightly (just added Xdebug settings):
server {
    listen 80 default_server;
    #listen [::]:80 default_server ipv6only=on;
    server_name localhost;
    root /var/www/public;
    index index.php index.html index.htm;

    location / {
        try_files $uri $uri/ /index.php$is_args$args;
    }

    location ~ \.php$ {
        try_files $uri /index.php =404;
        fastcgi_pass php-upstream;
        fastcgi_index index.php;
        fastcgi_buffers 4 256k;
        fastcgi_buffer_size 128k;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        #!!!!fixes timeouts
        fastcgi_read_timeout 600;
        include fastcgi_params;
        fastcgi_param PHP_VALUE "xdebug.remote_autostart=1
            xdebug.idekey=PHPSTORM
            xdebug.remote_enable=1
            xdebug.remote_port=9001
            xdebug.remote_host=192.168.0.12";
    }

    location ~ /\.ht {
        deny all;
    }

    location /.well-known/acme-challenge/ {
        root /var/www/letsencrypt/;
        log_not_found off;
    }
}
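(A side note, in case this config gets reused: the PHP_VALUE block above uses Xdebug 2 setting names. Xdebug 3 renamed them, so on a newer image the equivalent block would look roughly like this - a sketch, keeping the same IDE key, port, and host as above:)

```nginx
fastcgi_param PHP_VALUE "xdebug.mode=debug
    xdebug.start_with_request=yes
    xdebug.idekey=PHPSTORM
    xdebug.client_port=9001
    xdebug.client_host=192.168.0.12";
```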
nginx/conf.d/default.conf:
upstream php-upstream {
    server php-fpm:9000;
}
docker-compose.yml:
version: '3'

services:
  database:
    build:
      context: ./database
    environment:
      - MYSQL_DATABASE=${DATABASE_NAME}
      - MYSQL_USER=${DATABASE_USER}
      - MYSQL_PASSWORD=${DATABASE_PASSWORD}
      - MYSQL_ROOT_PASSWORD=${DATABASE_ROOT_PASSWORD}
    ports:
      - "3306:3306"
    volumes:
      - ./database/init.sql:/docker-entrypoint-initdb.d/init.sql
      - ./database/data:/var/lib/mysql

  php-fpm:
    build:
      context: ./php-fpm
    depends_on:
      - database
    environment:
      - TIMEZONE=Europe/Tallinn
      - APP_ENV=${APP_ENV}
      - APP_SECRET=${APP_SECRET}
      - DATABASE_URL=mysql://${DATABASE_USER}:${DATABASE_PASSWORD}@database:3306/${DATABASE_NAME}?serverVersion=5.7
    volumes:
      - ../src:/var/www

  nginx:
    build:
      context: ./nginx
    volumes:
      - ../src:/var/www
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./nginx/sites/:/etc/nginx/sites-available
      - ./nginx/conf.d/:/etc/nginx/conf.d
      - ./logs:/var/log
    depends_on:
      - php-fpm
    ports:
      - "80:80"
      - "443:443"

EDIT:
I think I now know why your project is taking ages to start. I had a closer look at the Dockerfile in the php-fpm folder, and you have this command:
CMD composer install ; wait-for-it database:3306 -- bin/console doctrine:migrations:migrate ; php-fpm
As you can see, that command will install all Composer dependencies and then wait until it can connect to the database container defined in the docker-compose.yml configuration:
services:
  database:
    build:
      context: ./database
    environment:
      - MYSQL_DATABASE=${DATABASE_NAME}
      - MYSQL_USER=${DATABASE_USER}
      - MYSQL_PASSWORD=${DATABASE_PASSWORD}
      - MYSQL_ROOT_PASSWORD=${DATABASE_ROOT_PASSWORD}
    ports:
      - "3306:3306"
    volumes:
      - ./database/init.sql:/docker-entrypoint-initdb.d/init.sql
      - ./database/data:/var/lib/mysql
Once the database is up and running, it will run the migration files in src/src/Migrations to update the database and then start php-fpm.
Until all of this is done, your project won't be ready and you will get the nice '502 Bad Gateway' error.
You can verify what's happening by running docker-compose up without the -d argument this time, so that you don't run in detached mode; this will display all your container logs in real time.
You will see a bunch of logs, including those related to what Composer is doing in the background, e.g.:
api-app | - Installing ocramius/package-versions (1.8.0): Downloading (100%)
api-app | - Installing symfony/flex (v1.6.3): Downloading (100%)
api-app |
api-app | Prefetching 141 packages
api-app | - Downloading (100%)
api-app |
api-app | - Installing symfony/polyfill-php73 (v1.16.0): Loading from cache
api-app | - Installing symfony/polyfill-mbstring (v1.16.0): Loading from cache
Composer install may take more or less time depending on whether you already have all the packages cached.
The solution, if you want to speed things up during development, is to remove the composer install command from the Dockerfile and run it manually only when you want to update or install new dependencies. This way you avoid composer install being run every time you run docker-compose up -d.
To do it manually, connect to your container and run composer install there, or, if you have Composer installed directly on your OS, simply navigate to the src folder and run that same command.
This, plus the trick below, should give you a fast enough project locally.
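For reference, running it manually against the already-running container could look like this (a sketch; the service is named php-fpm in the compose file above):

```shell
# run Composer inside the running php-fpm service container
docker-compose exec php-fpm composer install
```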
I have a similar configuration and everything works well. The docker-compose command should take some time the first time you run it, as your images need to be built, but after that it shouldn't even take a second to run.
From what I see, however, you have a lot of mounted volumes that could affect your performance. When I ran tests with nginx and Symfony on a Mac, I had really bad performance at the beginning, with pages taking at least 30 seconds to load.
One solution to speed this up in my case was to use the :delegated option on some of my volumes to speed up their access.
Try adding that option to your volumes and see if it changes anything for you:
[...]
volumes:
  - ../src:/var/www:delegated
[...]
If delegated is not a good option for you, read about the other options, consistent and cached, here to see what best fits your needs.

Related

How to connect php and nginx containers together

I'm trying to create a simple Docker project and connect PHP and Nginx containers for a test, but I got this error:
Building php
Sending build context to Docker daemon 2.048kB
Step 1/1 : FROM php:latest
---> 52cdb5f30a05
Successfully built 52cdb5f30a05
Successfully tagged test_php:latest
WARNING: Image for service php was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
Building nginx
Sending build context to Docker daemon 3.072kB
Step 1/2 : FROM nginx:latest
---> 55f4b40fe486
Step 2/2 : ADD default.conf /etc/nginx/conf.d/default.conf
---> 20190910ffec
Successfully built 20190910ffec
Successfully tagged test_nginx:latest
WARNING: Image for service nginx was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
Creating php ... done
Creating nginx ... done
Attaching to php, nginx
php | Interactive shell
php |
nginx | /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
nginx | /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
php | php > nginx | /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
nginx | 10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
nginx | 10-listen-on-ipv6-by-default.sh: info: /etc/nginx/conf.d/default.conf differs from the packaged version
nginx | /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
php exited with code 0
nginx | /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
nginx | /docker-entrypoint.sh: Configuration complete; ready for start up
nginx | 2022/07/10 05:34:07 [emerg] 1#1: host not found in upstream "php" in /etc/nginx/conf.d/default.conf:14
nginx | nginx: [emerg] host not found in upstream "php" in /etc/nginx/conf.d/default.conf:14
nginx exited with code 1
Here is the full directory structure of the project:
- docker
  - nginx
    - default.conf
    - Dockerfile
  - php
    - Dockerfile
- src
  - index.php
- docker-compose.yml
and these are all the files I use, with their contents:
# docker/nginx/default.conf
server {
    listen 80;
    index index.php index.htm index.html;
    root /var/www/html;
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass php:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
}
# docker/nginx/Dockerfile
FROM nginx:latest
ADD default.conf /etc/nginx/conf.d/default.conf
# docker/php/Dockerfile
FROM php:latest
# src/index.php
<?php
echo phpinfo();
# docker-compose.yml
version: "3.8"

services:
  nginx:
    container_name: nginx
    build: ./docker/nginx
    command: nginx -g "daemon off;"
    links:
      - php
    ports:
      - "80:80"
    volumes:
      - ./src:/var/www/html

  php:
    container_name: php
    build: ./docker/php
    ports:
      - "9000:9000"
    volumes:
      - ./src:/var/www/html
    working_dir: /var/www/html
The main problem occurs when I try to connect the PHP container to the project; without PHP, Nginx works correctly.
You can try adding depends_on: php to your nginx service to at least make sure the nginx service doesn't start until the php service is running. The dependency is probably starting after the main service that requires it. I think this is a race condition problem.
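As a sketch, that would be just a few lines in the docker-compose.yml above (only the relevant part shown):

```yaml
services:
  nginx:
    # ...existing nginx config...
    depends_on:
      - php
```

Note that plain depends_on only controls start order; it does not wait until php-fpm is actually ready to accept connections.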
I had 3 nodes, where the nginx and php containers lived on different nodes.
After trying various methods, such as:
- defining a dedicated network for the services inside docker-compose
- using an upstream definition in the nginx config instead of the service name directly
- explicitly adding Docker's 127.0.0.11 resolver to nginx
none of them worked...
And the reason actually was in a closed ports: https://docs.docker.com/engine/swarm/networking/#firewall-considerations
Docker daemons participating in a swarm need the ability to communicate with each other over the following ports:
Port 7946 TCP/UDP for container network discovery.
Port 4789 UDP for the container overlay network.
After I reverted all the changes I had made (network, resolver, upstream definition) back to the original simple setup and opened the ports for inter-node communication, service discovery began to work as expected.
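For example, on nodes running firewalld, opening those ports might look like this (a sketch; adjust to whatever firewall your nodes actually use, and run it on every swarm node):

```shell
firewall-cmd --permanent --add-port=7946/tcp
firewall-cmd --permanent --add-port=7946/udp
firewall-cmd --permanent --add-port=4789/udp
firewall-cmd --reload
```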
Docker 20.10
Several issues:
It seems you have containers that can't see each other.
It seems a container exits/fails, and that is certainly not because of the first issue; nginx would still start even if the php-fpm socket is unavailable. It might complain, but it should handle such unavailability very well.
Make sure the php-fpm pool config is really opening an FPM socket on port 9000.
Your index.php script is not closed with "?>" (but that does not matter here).
To summarize, you were suggested:
to consider Docker Swarm networking configuration (but it seems you are not using Docker Swarm);
to use depends_on, which helps Docker decide what to start first; but that should not be an issue in your case, as nginx uses the socket only upon web user requests and can wait.
So it seems the internal Docker name resolution is your issue, and defining the network manually seems to be best practice. In my case I wandered too long before just giving the docker-compose file a specific network name and attaching the containers to that network.
If containers are in the same docker-compose file they should be in the same yourserver_default network that is autogenerated for your composed services.
Have a look at https://blog.devsense.com/2019/php-nginx-docker, they actually define that network manually.
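A minimal sketch of such a manually defined network (the service names php and nginx here are placeholders, following the question's compose file):

```yaml
services:
  php:
    image: php:7.4-fpm
    networks:
      - backend
  nginx:
    image: nginx:alpine
    networks:
      - backend

networks:
  backend:
    driver: bridge
```

With this, nginx can reach the PHP container as php:9000 over the backend network.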
And eventually redo everything from scratch, if you haven't solved this yet. Else all the best to you!

Laravel is suspiciously slow (fresh app, everything is by default)

I work with Windows 10 (WSL 2).
My hardware is:
Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz
RAM 8.00GB
SSD
Actually, this is a gaming laptop (MSI GL 65 95CK), if you are interested.
I decided to install Laravel, went to the documentation and followed the described steps:
In the WSL terminal (I use Ubuntu): curl -s "https://laravel.build/example-app?with=mysql,redis" | bash
cd example-app && ./vendor/bin/sail up
I went to the browser and realized that the main page took almost 1 second to render! Sometimes even two seconds!
I thought, "OK, maybe the framework is in a non-optimized mode, debug and so on", and decided to set APP_DEBUG to false in .env. I also removed all routes and put this instead:
Route::get('/', [\App\Http\Controllers\TestController::class, 'test']);
Before this, I created the TestController:
class TestController extends Controller
{
    public function test() {
        return response()->json([
            'name' => 'Abigail',
            'state' => 'CA',
        ]);
    }
}
Then I ran php artisan optimize and opened http://localhost/api in the browser,
and the result was a big disappointment:
Why 800ms? I did not do anything.
OK, I decided to rename the index.php file in the public folder to index2.php, for example, and put in a new index.php that just prints an array, to test whether this is a Laravel problem or just an infrastructure issue.
New index.php:
Much better!
Then I thought, "let's compare with another framework, for example with .NET Core". And I made a very simple Web Api project.
Controller:
namespace MockWebApi.Controllers
{
    [ApiController]
    [Route("")]
    public class MainController : ControllerBase
    {
        [Route("test")]
        public IActionResult Test()
        {
            return Ok(new
            {
                Test = "hello world!!!"
            });
        }
    }
}
The result is:
Ok, you can argue that this is a compiled language. I decided to check with Node.js and Express:
Code:
router.get('/', function(req, res, next) {
    res.json({"test": "123"})
});
Result:
As you can see, Node is as fast as C# in this case.
So, what is wrong with Laravel? Did I miss something in installation?
UPDATE
I set up Laravel without Sail. My docker-compose file:
version: '3'

services:
  php-fpm:
    build:
      context: docker/php-fpm
    volumes:
      - ./:/var/www
    networks:
      - internal

  nginx:
    build:
      context: docker/nginx
    volumes:
      - ./:/var/www
    ports:
      - "80:80"
    depends_on:
      - php-fpm
    networks:
      - internal

networks:
  internal:
    driver: bridge
Nginx Dockerfile:
FROM nginx
ADD ./default.conf /etc/nginx/conf.d/default.conf
WORKDIR /var/www
Nginx config:
server {
    listen 80;
    index index.php;
    server_name 127.0.0.1 localhost;
    root /var/www/public;

    location / {
        try_files $uri /index.php?$args;
    }

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass php-fpm:9000;
        fastcgi_index index.php;
        fastcgi_read_timeout 1000;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
}
php-fpm Dockerfile:
FROM php:7.4-fpm
RUN apt-get update && apt-get install -y wget git unzip \
&& apt-get install libpq-dev -y
RUN wget https://getcomposer.org/installer -O - -q \
| php -- --install-dir=/bin --filename=composer --quiet
RUN groupadd -r -g 1000 developer && useradd -r -u 1000 -g developer developer
USER developer
WORKDIR /var/www
I did not get any performance improvement :(
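One thing worth checking, since the php-fpm Dockerfile above doesn't enable it: the official php images ship without the opcache extension installed, and Laravel's per-request bootstrap benefits greatly from it. A hedged sketch of the addition to that Dockerfile:

```dockerfile
FROM php:7.4-fpm
# compile and enable the opcache extension (docker-php-ext-install also enables it)
RUN docker-php-ext-install opcache
```

Also, under WSL 2, keeping the project inside the Linux filesystem (e.g. under your home directory) rather than under /mnt/c makes a large difference for bind-mounted volumes.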

Nginx, PHP-FPM, Docker - 113: Host is unreachable

I'm struggling to understand where my error is. I've looked at various answers and tried the remedies, only to find that their solutions did not rectify my problem. I've stripped everything down to the VERY basics to see if I can just get a basic PHP index.php to present itself.
Here is what I'm trying to accomplish at the core:
I have docker-compose standing up 1 network, and 2 services connected to the network. One service is PHP-FPM, and the other is nginx to serve the PHP-FPM. Every time I stand this up, no matter how I seem to configure it, I just get a 502 Bad Gateway, and when I inspect the nginx container logs, I get [error] 29#29: *1 connect() failed (113: Host is unreachable) while connecting to upstream.
./docker-compose.yml
version: "3.7"

networks:
  app:
    driver: bridge

services:
  php:
    image: php:7.4-fpm
    container_name: php
    volumes:
      - /home/admin/dev/test/php/www.conf:/usr/local/etc/php-fpm.d/www.conf
      - /home/admin/dev/test/src/:/var/www/html
    networks:
      - app

  nginx:
    image: nginx:alpine
    container_name: nginx
    depends_on:
      - php
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /home/admin/dev/test/src/:/usr/share/nginx/html
      - /home/admin/dev/test/nginx/conf.d/app.conf:/etc/nginx/conf.d/app.conf
    networks:
      - app
./php/www.conf -> /usr/local/etc/php-fpm.d/www.conf
[www]
user = www-data
group = www-data
listen = 0.0.0.0:9000
pm = dynamic
pm.max_children = 5
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3
./nginx/conf.d/app.conf -> /etc/nginx/conf.d/app.conf
server {
    listen 80;
    server_name localhost;
    root /usr/share/nginx/html;
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass php:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
./src/index.php -> (php)/var/www/html && (nginx)/usr/share/nginx/html (just for reference)
<?php
phpinfo();
Docker: Docker version 19.03.12, build 48a66213fe
Docker-compose: docker-compose version 1.25.4, build unknown
Environment: Linux localhost.localdomain 5.7.14-200.fc32.x86_64 #1 SMP Fri Aug 7 23:16:37 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux (Fedora 32 Workstation)
I believe I just have a major misunderstanding of PHP-FPM but maybe there is something else.
Update During Troubleshooting
The thought occurred to me that my overall environment (i.e. Fedora 32) was messing it up. Fedora 32 is not supported out of the box by Docker (I had to change the repo settings in /etc/yum.repos.d to get it to work - had to use Fedora 31's repo). So I decided to spin up an Ubuntu 20.04 VM and test it there. Now PHP-FPM and Nginx are talking; I get responses from the PHP-FPM container! However, now even with just the basic script, I'm getting 404 errors, but that is MUCH closer to where I need to be... now to fix the 404.
The exact error is: [error] 30#30: *1 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream
Final Update (Answer)
For anyone coming across this: as of today's date, Docker didn't fully work with Fedora 32 (some parts did), at least not within the time I had available to troubleshoot/patch. It was a fresh Fedora 32 with no previous docker/docker-compose or anything else installed.
I stood up a fresh Fedora 31 and Ubuntu 20.04 just to verify my conclusion. Both worked right out of the box with no extra tweaks.
Can you check whether your php-fpm service is running?
The issue could be that the php-fpm service is not running, hence nginx cannot connect to it.
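For example (a sketch, using the service name php from the compose file above):

```shell
docker-compose ps php      # is the container up?
docker-compose logs php    # any crash or startup errors?
```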

Connect to MariaDB with localhost from Docker container

First of I did read thoses links
Connect to Docker MySQL container from localhost?
Connect to Mysql on localhost from docker container
From inside of a Docker container, how do I connect to the localhost of the machine?
But as a beginner with docker. It did not help me.
What you need to know:
- Yes, I need localhost. I'm working on an app that interacts directly with the database. It creates/removes user privileges and allows some users limited-privilege access from remote hosts. When initialized, the app will drop the default remote access for the root and forge users and grant them full privileges on localhost.
- I'm using a docker-compose.yml generated by https://phpdocker.io
- Ubuntu 18.10
- Docker version 18.09.3, build 774a1f4
- docker-compose version 1.21.0, build unknown
- I'm using Docker only for development purposes. In production I use Forge.
./docker-compose.yml
###############################################################################
# Generated on phpdocker.io #
###############################################################################
version: "3.1"

services:
  mailhog:
    image: mailhog/mailhog:latest
    container_name: myapp-mailhog
    ports:
      - "8081:8025"

  redis:
    image: redis:alpine
    container_name: myapp-redis

  mariadb:
    image: mariadb:10.4
    container_name: myapp-mariadb
    working_dir: /application
    volumes:
      - .:/application
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=myapp
      - MYSQL_USER=forge
      - MYSQL_PASSWORD=forge
    ports:
      - "8083:3306"

  webserver:
    image: nginx:alpine
    container_name: myapp-webserver
    working_dir: /application
    volumes:
      - .:/application
      - ./phpdocker/nginx/nginx.conf:/etc/nginx/conf.d/default.conf
    ports:
      - "8080:80"

  php-fpm:
    build: phpdocker/php-fpm
    container_name: myapp-php-fpm
    working_dir: /application
    volumes:
      - .:/application
      - ./phpdocker/php-fpm/php-ini-overrides.ini:/etc/php/7.3/fpm/conf.d/99-overrides.ini
./phpdocker/nginx/nginx.conf
server {
    listen 80 default;
    client_max_body_size 108M;
    access_log /var/log/nginx/application.access.log;
    root /application/public;
    index index.php;

    if (!-e $request_filename) {
        rewrite ^.*$ /index.php last;
    }

    location ~ \.php$ {
        fastcgi_pass php-fpm:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PHP_VALUE "error_log=/var/log/nginx/application_php_errors.log";
        fastcgi_buffers 16 16k;
        fastcgi_buffer_size 32k;
        include fastcgi_params;
    }
}
./phpdocker/php-fpm/Dockerfile (slightly modified to add mysql-client and to install git in the same RUN command instead of a second one)
FROM phpdockerio/php73-fpm:latest
WORKDIR "/application"
# Fix debconf warnings upon build
ARG DEBIAN_FRONTEND=noninteractive
# Install selected extensions and other stuff
RUN apt-get update \
&& apt-get -y --no-install-recommends install \
php7.3-mysql php-redis php7.3-sqlite3 php-xdebug php7.3-bcmath php7.3-bz2 php7.3-gd \
git \
mysql-client \
&& apt-get clean; rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* /usr/share/doc/*
./php-ini-overrides.ini
upload_max_filesize = 100M
post_max_size = 108M
I tried to use network_mode: host but it makes the webserver stop with Exit 1.
OK, but as I remember it, localhost in mysql/mariadb means access through the local Unix socket. There are ways of sharing these between containers.
Have a look here Connection between docker containers via UNIX sockets
@F.Maden gave me the right direction. I accepted his answer, but here's how I made it work in detail.
Basically, as he said, I need to share mysqld.sock between my mariadb and php-fpm services.
The first step is to share a folder between both services. Since I already have /application, which contains the docker config /application/phpdocker, I will reuse this one.
I had to create a custom my.cnf file to edit the default mariadb configuration and add a custom socket path:
./phpdocker/mariadb/my.cnf
[mysql]
socket = /application/phpdocker/shared/mysqld.sock
[mysqld]
socket = /application/phpdocker/shared/mysqld.sock
Then I had to share the config file with my mariadb container
./docker-compose.yml
mariadb:
  image: mariadb:10.4
  container_name: myapp-mariadb
  working_dir: /application
  volumes:
    - .:/application
    - ./phpdocker/mariadb/my.cnf:/etc/mysql/my.cnf # notice this line
  environment:
    - MYSQL_ROOT_PASSWORD=root
    - MYSQL_DATABASE=myapp
    - MYSQL_USER=forge
    - MYSQL_PASSWORD=forge
  ports:
    - "8083:3306"
I created a folder ./phpdocker/shared with permissions 777 where mariadb will be able to create mysqld.sock. (I couldn't start mariadb with 755. In my case this is only used locally, not in production, so it's fine.)
From the terminal
$ mkdir ./phpdocker/shared && chmod 777 ./phpdocker/shared
And now test it!
From the terminal
$ docker-compose up -d --force-recreate --build
$ docker exec -it -u $(id -u):$(id -g) myapp-php-fpm /bin/bash
Once in the container
$ mysql -u root -p -h localhost --socket=/application/phpdocker/shared/mysqld.sock
$ mysql > select user();
+----------------+
| user()         |
+----------------+
| root@localhost |
+----------------+
If the problem with the connection to the DB persists:
We can interrogate which IP the DB container has:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container_name>
We'll receive the container IP (e.g. 172.21.0.3).
After this, we can place this IP into the connection "host" section.
Enjoy!
Ref How can I access my docker maria db?

How to link 2 containers properly?

This is kind of a newbie question, since I'm still trying to understand how containers "communicate" with each other.
This is roughly what my docker-compose.yml looks like
...
api:
  build: ./api
  container_name: api
  volumes:
    - $HOME/devs/apps/api:/var/www/api

laravel:
  build: ./laravel
  container_name: laravel
  volumes:
    - $HOME/devs/apps/laravel:/var/www/laravel
  depends_on:
    - api
  links:
    - api
...
nginx-proxy:
  build: ./nginx-proxy
  container_name: nginx-proxy
  ports:
    - "80:80"
  links:
    - api
    - laravel
    - mysql-api
The nginx configs have blocks referring to the upstreams exposed by those 2 php-fpm containers, like this:
location ~* \.php$ {
    fastcgi_pass laravel:9000;
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_index index.php;
    include fastcgi_params;
}
similar for the api block.
I can hit each container individually from the web browser/postman (from the host).
Inside the laravel app, there is a php_curl call to a REST service exposed by the api service. I got a 500, with this error (from the nginx container):
PHP message: PHP Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 32768 bytes) in /var/www/laravel/vendor/symfony/debug/Exception/FatalErrorException.php on line 1" while reading response header from upstream, client: 172.22.0.1, server: laravel.lo, request: "POST {route_name} HTTP/1.1", upstream: "fastcgi://172.22.0.5:9000", host: "laravel.lo"
I tried hitting the api from the laravel container using wget
root@a34903360679:/app# wget api.lo
--2018-08-01 09:57:51-- http://api.lo/
Resolving api.lo (api.lo)... 127.0.0.1
Connecting to api.lo (api.lo)|127.0.0.1|:80... failed: Connection refused.
It resolves to localhost, but I believe 127.0.0.1 in this context is the laravel container itself, not the host or the nginx service. I used to have all the services in a single CentOS VM for development, which didn't have this problem.
Can anyone give some advice on how I could achieve this environment?
EDIT: I found the answer (not long after posting this question).
Refer here: https://medium.com/@yani/two-way-link-with-docker-compose-8e774887be41
To let the laravel container reach back to the nginx service (so nginx can resolve api requests to the api container), use an internal network. Something like:
networks:
  internal-api:
Then alias the laravel and nginx containers, like so:
laravel:
  ...
  networks:
    internal-api:
      aliases:
        - laravel
...
nginx-proxy:
  ...
  networks:
    internal-api:
      aliases:
        - api
networks:
  internal-api:
Newer versions of Docker Compose will do all of the networking setup for you. It will create a Docker-internal network and register an alias for each container under its block name. You don't need (and shouldn't use) links:. You only need depends_on: if you want to bring up only parts of your stack from the command line.
When setting up inter-container connections, always use the other container's name from the Compose YAML file as a DNS name (without Compose, that container's --name or an alias you explicitly declared at docker run time). Configuring these as environment variables is better, particularly if you'll run the same code outside of Docker with different settings. Never directly look up a container's IP address or use localhost or 127.0.0.1 in this context: it won't work.
I'd write your docker-compose.yml file something like:
version: '3'
services:
  api:
    build: ./api
  laravel:
    build: ./laravel
    environment:
      API_BASEURL: 'http://api/rest_endpoint'
  nginx-proxy:
    build: ./nginx-proxy
    environment:
      LARAVEL_FCGI: 'laravel:9000'
    ports:
      - "80:80"
You will probably need to write a custom entrypoint script for your nginx proxy that fills in the config file from environment variables. If you're using a container based on a full Linux distribution then envsubst is an easy tool for this.
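A hypothetical sketch of such an entrypoint (the template path and the LARAVEL_FCGI variable name here are assumptions, matching the environment variable above):

```shell
#!/bin/sh
# render the nginx config from a template, substituting only $LARAVEL_FCGI,
# then hand off to nginx in the foreground
envsubst '$LARAVEL_FCGI' \
    < /etc/nginx/templates/default.conf.template \
    > /etc/nginx/conf.d/default.conf
exec nginx -g 'daemon off;'
```

Restricting envsubst to the one variable matters: an nginx config is full of `$uri`-style variables that an unrestricted substitution would blank out.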
I found the answer (not long after posting this question). Refer here: https://medium.com/@yani/two-way-link-with-docker-compose-8e774887be41
To let the laravel container reach back to the nginx service (so nginx can resolve api requests to the api container), use an internal network. Something like:
networks:
  internal-api:
With the networks config, all the links config can be taken out. Then alias the laravel and nginx containers, like so:
laravel:
  ...
  networks:
    internal-api:
      aliases:
        - laravel
...
nginx-proxy:
  ...
  networks:
    internal-api:
      aliases:
        - api
networks:
  internal-api:
Then laravel can hit the api URL like this, in .env:
API_BASEURL=http://api/{rest_endpoint}
