Painfully slow Docker PHP setup on Windows - php

I run Docker for Windows with Hyper-V (4 cores, 8 GB RAM), but page loads of my PHP project are on the order of 40 seconds per page.
My setup uses self-signed certificates, but I think the problem is related to something else.
During my Docker build I get the following warning:
---> Running in 46329f96a79f
Restarting Apache httpd web server: apache2[Mon Jun 11 09:17:26.151516 2018] [ssl:warn] [pid 23] AH01906: localhost:443:0 server certificate is a CA certificate (BasicConstraints: CA == TRUE !?)
[Mon Jun 11 09:17:26.151605 2018] [ssl:warn] [pid 23] AH01909: localhost:443:0 server certificate does NOT include an ID which matches the server name
Since non-HTTPS pages also load very slowly, I think it is something else.
My Dockerfile is as follows:
FROM php:5.6-apache
COPY server.crt /etc/apache2/ssl/server.crt
COPY server.key /etc/apache2/ssl/server.key
RUN docker-php-ext-install pdo pdo_mysql mysqli
RUN apt-get update &&\
    apt-get install --no-install-recommends --assume-yes --quiet ca-certificates curl git &&\
    rm -rf /var/lib/apt/lists/*
RUN curl -Lsf 'https://storage.googleapis.com/golang/go1.8.3.linux-amd64.tar.gz' | tar -C '/usr/local' -xvzf -
ENV PATH /usr/local/go/bin:$PATH
RUN go get github.com/mailhog/mhsendmail
RUN cp /root/go/bin/mhsendmail /usr/bin/mhsendmail
RUN echo 'sendmail_path = /usr/bin/mhsendmail --smtp-addr mailhog:1025' > /usr/local/etc/php/php.ini
COPY ./ /var/www/html/
RUN echo "ServerName localhost" >> /etc/apache2/apache2.conf
RUN a2enmod rewrite
RUN a2enmod ssl
COPY dev.conf /etc/apache2/sites-enabled/dev.conf
RUN service apache2 restart
EXPOSE 80
EXPOSE 443
When I click a link, the browser shows Waiting... in the status bar for ~40 seconds, but rendering the page content itself is pretty fast.
Could it be a DNS issue?
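One way to narrow it down (a rough sketch; my-php-app is a placeholder for your container name) is to time the same request from inside the container and from the Windows host. If the in-container request is fast, the delay is more likely on the host side (for example DNS resolution or proxy settings used by the browser) than in Apache/PHP itself:
# my-php-app is a placeholder; use the name or ID shown by `docker ps`.
docker exec -it my-php-app bash -c 'time curl -s -o /dev/null http://localhost/'
# Then compare with the same request issued from the Windows host:
curl -s -o /dev/null -w 'total: %{time_total}s\n' http://localhost/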

I will share my Docker settings for PHP + Redis + MySQL + Nginx; see if they are useful for you!
My Dockerfile
FROM php:7.1-fpm
RUN apt-get update
RUN apt-get install -y zlib1g-dev \
    libjpeg-dev \
    libpng-dev \
    libfreetype6-dev
# Add Microsoft repo for Microsoft ODBC Driver 13 for Linux
RUN apt-get update && apt-get install -y \
    apt-transport-https \
    && curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add - \
    && curl https://packages.microsoft.com/config/debian/8/prod.list > /etc/apt/sources.list.d/mssql-release.list \
    && apt-get update
# Install Dependencies
RUN ACCEPT_EULA=Y apt-get install -y \
    unixodbc \
    unixodbc-dev \
    libgss3 \
    odbcinst \
    msodbcsql \
    locales \
    && echo "en_US.UTF-8 UTF-8" > /etc/locale.gen && locale-gen
RUN pecl install pdo_sqlsrv-4.1.8preview sqlsrv-4.1.8preview \
    && docker-php-ext-enable pdo_sqlsrv sqlsrv
RUN ln -s /usr/lib/x86_64-linux-gnu/libsybdb.a /usr/lib/
RUN docker-php-ext-install pdo_mysql
RUN docker-php-ext-install zip
RUN mkdir -p /code
ENV HOME=/code
WORKDIR $HOME
USER root
COPY ./ $HOME
This Dockerfile also includes the SQL Server connection extensions (I have many projects that are integrated with it).
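If it helps, you can verify that the SQL Server extensions were actually built into the image (a quick sketch; my-php-fpm is just a placeholder tag for the build):
docker build -t my-php-fpm .
docker run --rm my-php-fpm php -m | grep -Ei 'sqlsrv'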
Now my docker-compose.yml
web:
    container_name: your_web_container_name
    image: nginx
    ports:
        - "80:80"
    volumes:
        - ./:/code
        - ./host.conf:/etc/nginx/conf.d/default.conf
    links:
        - php:php
redis:
    container_name: your_redis_container_name
    image: redis
php:
    container_name: your_php_container_name
    build: ./
    dockerfile: ./Dockerfile
    volumes:
        - ./:/code
    links:
        - db
        - redis
db:
    container_name: your_database_container_name
    image: mysql:5.6
    volumes:
        - /var/lib/mysql
    ports:
        - "3306:3306"
    environment:
        - MYSQL_USER=docker
        - MYSQL_DATABASE=docker
        - MYSQL_ROOT_PASSWORD=docker
        - MYSQL_PASSWORD=docker
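With the links above, the PHP container reaches MySQL by the hostname db and Redis by the hostname redis. A quick sanity check might look like this (a sketch, using the credentials from the environment section; pdo_mysql is the only MySQL driver installed in this image):
docker-compose exec php php -r "new PDO('mysql:host=db;dbname=docker', 'docker', 'docker'); echo 'db reachable', PHP_EOL;"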
The default.conf for nginx:
server {
    listen 80 default_server;
    root /var/www/public;
    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Content-Type-Options "nosniff";
    index index.html index.htm index.php;
    charset utf-8;
    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }
    location = /favicon.ico { access_log off; log_not_found off; }
    location = /robots.txt { access_log off; log_not_found off; }
    location ~ \.php$ {
        fastcgi_pass php:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        include fastcgi_params;
    }
    location ~ /\.(?!well-known).* {
        deny all;
    }
}
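For what it's worth, with these files in the project root the stack is typically brought up and inspected like this:
docker-compose up -d --build
docker-compose ps
docker-compose logs -f php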
I hope it could be helpful to you.

Related

502 Bad Gateway - Google Cloud Run with php-fpm (socket not found) on cold starts

I am dealing with an issue where the first request to my Google Cloud Run container results in a 502 Bad Gateway error. The following can be seen in the logs:
[crit] 23#23: *3 connect() to unix:/run/php/php7.4-fpm.sock failed (2: No such file or directory) while connecting to upstream, client: 169.254.8.129, server: domain.com, request: "GET / HTTP/1.1", upstream: "fastcgi://unix:/run/php/php7.4-fpm.sock:", host: "my.domain.com"
However, on the second request it works fine.
I am using supervisord together with php-fpm and nginx.
Dockerfile
FROM php:7.4-fpm
ENV COMPOSER_MEMORY_LIMIT='-1'
RUN apt-get update && \
    apt-get install -y --force-yes --no-install-recommends \
    nginx \
    supervisor \
    libmemcached-dev \
    libzip-dev \
    libz-dev \
    libzip-dev \
    libpq-dev \
    libjpeg-dev \
    libpng-dev \
    libfreetype6-dev \
    libssl-dev \
    openssh-server \
    libmagickwand-dev \
    git \
    cron \
    nano \
    libxml2-dev \
    libreadline-dev \
    libgmp-dev \
    mariadb-client \
    unzip
RUN docker-php-ext-install exif zip pdo_mysql intl
#####################################
# PHPRedis:
#####################################
RUN pecl install redis && docker-php-ext-enable redis
#####################################
# Imagick:
#####################################
RUN pecl install imagick && \
docker-php-ext-enable imagick
#####################################
# PHP Memcached:
#####################################
# Install the php memcached extension
RUN pecl install memcached && docker-php-ext-enable memcached
#####################################
# Composer:
#####################################
# Install composer and add its bin to the PATH.
RUN curl -s http://getcomposer.org/installer | php && \
echo "export PATH=${PATH}:/var/www/vendor/bin" >> ~/.bashrc && \
mv composer.phar /usr/local/bin/composer
# Source the bash
RUN . ~/.bashrc
#####################################
# Laravel Schedule Cron Job:
#####################################
# RUN echo "* * * * * www-data /usr/local/bin/php /var/www/artisan schedule:run >> /dev/null 2>&1" >> /etc/cron.d/laravel-scheduler
# RUN chmod 0644 /etc/cron.d/laravel-scheduler
#
#--------------------------------------------------------------------------
# NGINX
#--------------------------------------------------------------------------
#
RUN ln -sf /dev/stdout /var/log/nginx/access.log \
&& ln -sf /dev/stderr /var/log/nginx/error.log
ADD infrastructure/nginx/nginx.conf /etc/nginx/sites-available/default
ADD infrastructure/supervisor/supervisord.conf /etc/supervisor/conf.d/supervisord.conf
#
#--------------------------------------------------------------------------
# Move config files
#--------------------------------------------------------------------------
#
ADD ./infrastructure/php/php.ini /usr/local/etc/php/conf.d/php.ini
ADD ./infrastructure/php/php-fpm.conf /usr/local/etc/php-fpm.d/zz-docker.conf
#
#--------------------------------------------------------------------------
# Ensure fpm socket folder is present
#--------------------------------------------------------------------------
#
RUN mkdir -p /run/php/
RUN rm -r /var/lib/apt/lists/*
RUN usermod -u 1000 www-data
COPY ./infrastructure/entrypoint.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/entrypoint.sh
RUN ln -s /usr/local/bin/entrypoint.sh /
ENTRYPOINT ["entrypoint.sh"]
WORKDIR /var/www
COPY . .
RUN chown -R www-data:www-data /var/www
RUN composer install
EXPOSE 8000
entrypoint.sh
#!/bin/bash
# Start the cron service.
service cron start
##
# Run artisan migrate
##
php artisan migrate
##
# Run a command or start supervisord
##
if [ $# -gt 0 ]; then
    # If we passed a command, run it
    exec "$@"
else
    # Otherwise start supervisord
    /usr/bin/supervisord
fi
Nginx.conf
server {
    listen 8000 default_server;
    root /var/www/public;
    index index.html index.htm index.php;
    server_name my.domain.com;
    charset utf-8;
    location = /favicon.ico { log_not_found off; access_log off; }
    location = /robots.txt { log_not_found off; access_log off; }
    # Remove index.php$
    if ($request_uri ~* "^(.*/)index\.php/*(.*)") {
        return 301 $1$2;
    }
    location / {
        try_files $uri $uri/ /index.php$is_args$args;
    }
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_buffers 16 16k;
        fastcgi_buffer_size 32k;
        fastcgi_pass unix:/run/php/php7.4-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param SCRIPT_NAME $fastcgi_script_name;
        fastcgi_buffering off;
        fastcgi_index index.php;
        include fastcgi_params;
    }
    error_page 404 /index.php;
}
php-fpm custom config
[global]
daemonize = no
[www]
listen = /run/php/php7.4-fpm.sock
listen.owner = www-data
listen.group = www-data
listen.mode = 0660
supervisord
[supervisord]
nodaemon=true
user=root
[program:nginx]
command=nginx -g "daemon off;"
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
[program:php-fpm]
command=php-fpm
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
According to the documentation, Cloud Run containers that are cold started need to perform several tasks before they are able to serve requests. This could be the cause of the first request failing while subsequent requests are served correctly.
The startup routine consists of:
starting the service,
starting the container,
running the entrypoint command to start your server,
checking for the open service port
The same documentation page offers several tips to avoid cold starts and improve container startup performance, especially setting a minimum number of instances that are always kept idle. Since Cloud Run can scale down to 0 instances, cold starts will occur whenever no instance is active.
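For example, keeping one warm instance can be configured on an existing service roughly like this (a sketch; SERVICE_NAME and REGION are placeholders):
# Keep at least one instance warm to avoid cold starts (placeholders: SERVICE_NAME, REGION)
gcloud run services update SERVICE_NAME --region=REGION --min-instances=1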
As for the specific error message in the logs, a path in your nginx.conf file might be wrong, according to this related thread. According to the thread, the path to the fpm socket should be:
fastcgi_pass unix:/var/run/php/php7.0-fpm.sock; # with your specific fpm version
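Since supervisord starts nginx and php-fpm in parallel, another hedged option is to make nginx wait until the fpm socket actually exists before it starts. A minimal sketch of a wrapper script (say /usr/local/bin/start-nginx.sh, a hypothetical path you would reference from the [program:nginx] command), assuming the socket path from the config above:
#!/bin/bash
# Hypothetical wrapper: wait up to ~10s for php-fpm to create its socket, then start nginx.
for i in $(seq 1 50); do
    [ -S /run/php/php7.4-fpm.sock ] && break
    sleep 0.2
done
exec nginx -g "daemon off;"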

Problem with Laravel, once launched with PHP-FPM and NGINX

Problem with Laravel: once launched with PHP-FPM and NGINX, no file is accessible except index.php.
Once my application is launched, everything runs successfully except that the CSS and JS files in Laravel's public folder are not accessible, so they are not applied to the application.
My application is deployed with Docker and works like this:
A container for the Laravel application launched with PHP-FPM
An NGINX container as the web server
A MySQL container
File docker-compose.yml:
version: '3'
services:
    #PHP Service
    app-dockerTag:
        image: appName
        build:
            context: .
            dockerfile: Dockerfile
        container_name: app-dockerTag
        restart: unless-stopped
        tty: true
        environment:
            SERVICE_NAME: app
            SERVICE_TAGS: dev
        working_dir: /var/www/html/
    #Nginx Service
    webserver-dockerTag:
        image: nginx:alpine
        container_name: webserver-dockerTag
        restart: unless-stopped
        tty: true
        ports:
            - "portApp:80"
        volumes:
            - ./:/var/www
            - ./build/nginx/:/etc/nginx/conf.d/
    #MySQL Service
    db:
        image: mysql:5.7.22
        container_name: db
        restart: unless-stopped
        tty: true
        ports:
            - dbPort:dbPort
        volumes:
            - ./mysql/:/docker-entrypoint-initdb.d
            - ./api:/var/lib/mysql
        environment:
            - MYSQL_ROOT_PASSWORD=dbRootPassword
            - MYSQL_DATABASE=dbDatabase
            - MYSQL_USER=dbUsername
            - MYSQL_PASSWORD=dbPassword
            - MYSQL_HOST_ROOT=%
            - MYSQL_TCP_PORT=dbPort
Nginx conf file:
server {
    listen 80;
    root /var/www/html/public;
    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-Content-Type-Options "nosniff";
    index index.php;
    charset utf-8;
    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }
    error_page 404 /index.php;
    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass app-dockerTag:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
    location ~ /\.(?!well-known).* {
        deny all;
    }
}
Dockerfile for Laravel:
FROM php:7.3-fpm
# Set working directory
WORKDIR /var/www/html
# Install dependencies
RUN apt-get update && apt-get install -y \
    build-essential \
    mariadb-client \
    libpng-dev \
    libjpeg62-turbo-dev \
    libfreetype6-dev \
    locales \
    zip \
    jpegoptim optipng pngquant gifsicle \
    vim \
    unzip \
    git \
    curl \
    libzip-dev
# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
# Install extensions
RUN docker-php-ext-install pdo_mysql mbstring zip exif pcntl
RUN docker-php-ext-configure gd --with-gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/ --with-png-dir=/usr/include/
RUN docker-php-ext-install gd
# Install composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
COPY composer.json composer.json
#COPY composer.lock composer.lock /var/www/
COPY . .
RUN composer install
RUN composer dump-autoload
# Add user for laravel application
RUN groupadd -g 1000 www
RUN useradd -u 1000 -ms /bin/bash -g www www
# Copy existing application directory contents
COPY . /var/www/html
# Copy existing application directory permissions
COPY --chown=www:www . /var/www/html
# Change current user to www
USER www
# Expose port 9000 and start php-fpm server
EXPOSE 9000
CMD ["php-fpm", "-F"]
My problem does not appear when I work locally with php artisan serve; once dockerized, all files normally accessible in public generate 404 errors.
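Since nginx serves static assets directly from its own filesystem (only *.php requests are forwarded to the app container), a quick hedged check is whether the public assets are actually visible inside the webserver container at the path that root points to:
# Paths taken from the nginx config above; adjust if your project layout differs.
docker exec webserver-dockerTag ls /var/www/html/public
docker exec webserver-dockerTag ls /var/www/html/public/css
If those paths are empty or missing, the ./:/var/www volume and the /var/www/html/public root do not line up, which would explain the 404s for the static files.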

111: Connection refused php-fpm container docker with reverse proxy nginx error 502 bad gateway

I think I have a linking problem. My application returns a 502 Bad Gateway error, and in the app log I see:
[error] 6#6: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.18.0.5, server: laravel-api.art, request: "GET / HTTP/1.1", upstream: "fastcgi://172.18.0.19:9000", host: "laravel-api.art"
Here is my nginx config file:
server {
    listen 80;
    server_name laravel-api.art;
    index index.php index.html;
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    root /var/www/public;
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass laravel-api:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
    location / {
        try_files $uri $uri/ /index.php?$query_string;
        gzip_static on;
    }
}
My Dockerfile:
```
FROM php:7.4-fpm
COPY composer.lock composer.json /var/www/
WORKDIR /var/www
# Install dependencies
RUN apt-get update && apt-get install -y \
    build-essential \
    libpng-dev \
    libzip-dev \
    libjpeg62-turbo-dev \
    libfreetype6-dev \
    locales \
    zip \
    jpegoptim optipng pngquant gifsicle \
    vim \
    unzip \
    git \
    curl
# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
# Install extensions
RUN docker-php-ext-install pdo_mysql zip exif pcntl
RUN docker-php-ext-configure gd --with-freetype --with-jpeg
RUN docker-php-ext-install gd
# Install composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
# Add user for laravel
RUN groupadd -g 1000 www
RUN useradd -u 1000 -ms /bin/bash -g www www
# Copy application folder
COPY . /var/www
# Copy existing permissions from folder to docker
COPY --chown=www:www . /var/www
RUN chown -R www-data:www-data /var/www
# change current user to www
USER www
EXPOSE 9000
CMD ["php-fpm"]
# production environment
FROM nginx:stable-alpine
RUN rm /etc/nginx/conf.d/default.conf
#COPY nginx/nginx.conf /etc/nginx/conf.d
COPY nginx/nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]```
And my docker-compose (I use an external reverse proxy):
```
version: "3.7"
services:
#PHP Service
laravel-api:
build:
context: .
dockerfile: Dockerfile
container_name: ${containerNameServer}
restart: unless-stopped
tty: true
working_dir: /var/www
volumes:
- ./:/var/www
- ./docker-files/php/local.ini:/usr/local/etc/php/conf.d/local.ini
environment:
VIRTUAL_HOST: ${VIRTUAL_HOST_SERVER},${VIRTUAL_HOST_WWW_SERVER}
LETSENCRYPT_HOST: ${LETSENCRYPT_HOST_SERVER}
LETSENCRYPT_EMAIL: ${LETSENCRYPT_EMAIL}
volumes:
laravel-api:
networks:
default:
external:
name: webproxy
```
For information, this works with my other Node.js / Vue.js applications, but this is my first Laravel php-fpm application.
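A hedged way to check whether anything is actually listening on port 9000 for the laravel-api service is to look at which image and ports each container is running, and whether the service name resolves on the shared webproxy network (the container name below is a placeholder for your ${containerNameServer} value):
docker ps --format 'table {{.Names}}\t{{.Image}}\t{{.Ports}}'
docker run --rm --network webproxy busybox ping -c1 laravel-api
# <laravel-api-container> is a placeholder for the container_name from the compose file
docker top <laravel-api-container> | grep php-fpm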

PHP/Laravel SQLSTATE[HY000] [2002] No such file or directory when using docker-compose and a non-standard MySQL port

I have the following docker-compose file to build a LAMP stack:
version: "3.1"
services:
mariadb:
image: mariadb:latest
container_name: mariadb
working_dir: /application
volumes:
- .:/application
environment:
- MYSQL_ROOT_PASSWORD=test
- MYSQL_DATABASE=test
- MYSQL_USER=test
- MYSQL_PASSWORD=test
ports:
- "8052:3306"
webserver:
image: nginx:latest
container_name: nginx
working_dir: /application
volumes:
- .:/application
- ./dev/nginx/nginx.conf:/etc/nginx/conf.d/default.conf
ports:
- "8050:80"
php-fpm:
build: dev/php-fpm
container_name: php-fpm
working_dir: /application
volumes:
- .:/application
- ./dev/php-fpm/php-ini-overrides.ini:/etc/php/7.3/fpm/conf.d/99-overrides.ini
This gets my stack up and running. Then in my .env file I have:
DB_CONNECTION=mysql
DB_HOST=localhost
DB_PORT=8052
DB_DATABASE=test
DB_USERNAME=test
DB_PASSWORD=test
When Laravel tries to connect I get the following error message: SQLSTATE[HY000] [2002] No such file or directory
After some research I found that changing DB_HOST to 127.0.0.1 changes the way PHP tries to connect (TCP instead of a socket). After trying that I get a different error: SQLSTATE[HY000] [2002] Connection refused
The thing is php artisan migrate works and connects fine. I can also connect to the DB from the CLI and a GUI tool.
Is there something wrong in my docker-compose yaml causing this?
Dockerfile:
FROM php:fpm
WORKDIR "/application"
ARG DEBIAN_FRONTEND=noninteractive
# Install selected extensions and other stuff
RUN apt-get update \
    && apt install ca-certificates apt-transport-https \
    && apt install wget \
    && docker-php-ext-install pdo_mysql && docker-php-ext-enable pdo_mysql \
    && wget -q https://packages.sury.org/php/apt.gpg -O- | apt-key add - \
    && echo "deb https://packages.sury.org/php/ stretch main" | tee /etc/apt/sources.list.d/php.list \
    && apt-get -y --no-install-recommends install php7.3-memcached php7.3-mysql php7.3-redis php7.3-xdebug \
    && apt-get clean; rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* /usr/share/doc/*
Nginx conf:
server {
    listen 80 default;
    client_max_body_size 108M;
    access_log /var/log/nginx/application.access.log;
    root /application/public;
    index index.php;
    if (!-e $request_filename) {
        rewrite ^.*$ /index.php last;
    }
    location ~ \.php$ {
        fastcgi_pass php-fpm:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PHP_VALUE "error_log=/var/log/nginx/application_php_errors.log";
        fastcgi_param APP_ENV "dev";
        fastcgi_buffers 16 16k;
        fastcgi_buffer_size 32k;
        include fastcgi_params;
    }
}
If you find yourself unable to run php artisan commands which require a database connection after adding "DB_HOST=mariadb", such as:
php artisan migrate
php artisan db:seed
Solution:
Add the following to your hosts file: 127.0.0.1 [name_of_container]
Name of container in your case is: mariadb
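Relatedly, if the Laravel code itself runs inside one of the containers (for example served through the php-fpm service) rather than on the host, a hedged alternative is to point the .env at the service name and the internal MySQL port instead of the published one:
# Container-to-container access on the compose network (not the host-published 8052)
DB_HOST=mariadb
DB_PORT=3306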

Docker impossible to link nginx container with php-fpm container

Hello, for my work I am building an nginx server and a php-fpm server with Docker, but I do not know how to link nginx and PHP with FastCGI.
Nginx - Docker file
FROM debian:jessie
MAINTAINER Thomas Vidal <thomas-vidal@hotmail.com>
RUN apt-get update && apt-get upgrade
RUN apt-get install -y wget
RUN wget http://nginx.org/keys/nginx_signing.key && apt-key add nginx_signing.key
RUN apt-get update && apt-get install -y nginx
RUN echo "\ndaemon off;" >> /etc/nginx/nginx.conf
RUN ln -sf /etc/nginx/conf.d /site-conf
RUN ln -sf /var/www/html /www
VOLUME ["/site-conf", "/www"]
EXPOSE 80 443
CMD nginx
Nginx - default.conf
server {
    listen 80;
    index index.php index.html;
    server_name 192.168.99.100;
    root /www;
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_pass 192.168.99.100:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
Nginx - index.php
<?php phpinfo(); ?>
Php-fpm - Dockerfile
FROM debian:jessie
MAINTAINER Thomas Vidal <thomas-vidal@hotmail.com>
RUN apt-get update && apt-get upgrade
RUN apt-get install -y php5-fpm php5-cli php5-mysql php5-curl php5-mcrypt php5-gd php5-redis
RUN sed -e 's#;daemonize = yes#daemonize = no#' -i /etc/php5/fpm/php-fpm.conf
RUN sed -e 's#listen = /var/run/php5-fpm.sock#listen = [::]:9000#g' -i /etc/php5/fpm/pool.d/www.conf
EXPOSE 9000
CMD php5-fpm
What is being returned:
File not found.
Thanks for your help!
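"File not found." from php-fpm usually means the SCRIPT_FILENAME path that nginx passes does not exist inside the php-fpm container. A minimal sketch of wiring the two images together (my-nginx, my-phpfpm, ./www and ./site-conf are placeholders) is to put both containers on one network, mount the same code into both, and point fastcgi_pass at the php container by name:
docker network create web
docker run -d --name php --network web -v "$PWD/www:/www" my-phpfpm
docker run -d --name webserver --network web -p 80:80 \
    -v "$PWD/www:/var/www/html" -v "$PWD/site-conf:/etc/nginx/conf.d" my-nginx
# In default.conf, replace the hard-coded IP with the container name:
#     fastcgi_pass php:9000;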
