How can I change the Nginx default log file location in a Docker setup - php

I'm very new to Docker and am trying to build a docker-compose setup with multiple services/apps, with each service's log files in a separate location.
Running docker compose up fails with an open() error like:
FPM-nginx | 2022/10/06 01:40:54 [emerg] 1#1: open() "/var/www/FPM/log/nginx/error.log" failed (2: No such file or directory)
FPM-nginx | nginx: [emerg] open() "/var/www/FPM/log/nginx/error.log" failed (2: No such file or directory)
Following related answers, I've tried adding new commands to the Dockerfile, but the error persists. The answers I tried:
Nginx access log file path
Nginx log location
My current Dockerfile, docker-compose.yml, and nginx.conf are below.
Dockerfile (the three log-related lines are the new additions):
FROM php:8.0.2-fpm
RUN mkdir -p /var/www/FPM/log/nginx/ \
 && touch /var/www/FPM/log/nginx/error.log \
 && touch /var/www/FPM/log/nginx/access.log \
 && apt-get update && apt-get install -y \
git \
curl \
zip \
unzip
WORKDIR /var/www/FPM
nginx.conf
server {
    listen 80;
    index index.php;
    root /var/www/FPM/public;
    error_log /var/www/FPM/log/nginx/error.log;
    access_log /var/www/FPM/log/nginx/access.log;
    error_page 404 /index.php;
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_pass app:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
    location / {
        try_files $uri $uri/ /index.php?$query_string;
        gzip_static on;
    }
}
docker-compose.yml
version: "3.8"
services:
  app:
    build:
      context: ./
      dockerfile: Dockerfile
    container_name: FPM-app
    restart: always
    working_dir: /var/www/FPM
    volumes:
      - ../src:/var/www/FPM
  nginx:
    image: nginx:1.23.1-alpine
    container_name: FPM-nginx
    restart: always
    ports:
      - 8000:80
    volumes:
      - ../src:/var/www/FPM
      - ./nginx:/etc/nginx/conf.d
I've also tried placing
error_log /var/www/FPM/log/nginx/error.log;
access_log /var/www/FPM/log/nginx/access.log;
under location /, but it still causes the error:
location / {
    error_log /var/www/FPM/log/nginx/error.log;
    access_log /var/www/FPM/log/nginx/access.log;
}

You have multiple issues with your Docker and docker-compose files.
According to your docker-compose.yml, docker-compose up starts two containers: one from the image built at that time, and the other from the public nginx:1.23.1-alpine image. The image you build does contain the /var/www/FPM/log/nginx/ folder along with the error.log and access.log files, but this bind mount hides that content:
volumes:
- ../src:/var/www/FPM
That said, you don't need that folder and those files in the first (app) container at all; you need them in the nginx container. So you can remove these lines from the Dockerfile:
mkdir -p /var/www/FPM/log/nginx/ \
touch /var/www/FPM/log/nginx/error.log \
touch /var/www/FPM/log/nginx/access.log \
Instead, create error.log and access.log inside your src directory, say at src/log/nginx/, and mount those files in your docker-compose file:
version: "3.8"
services:
  app:
    build:
      context: ./
      dockerfile: Dockerfile
    container_name: FPM-app
    restart: always
    working_dir: /var/www/FPM
    volumes:
      - ../src:/var/www/FPM
  nginx:
    image: nginx:1.23.1-alpine
    container_name: FPM-nginx
    restart: always
    ports:
      - 8000:80
    volumes:
      - ../src:/var/www/FPM
      - ../src/log/nginx/error.log:/var/www/FPM/log/nginx/error.log
      - ../src/log/nginx/access.log:/var/www/FPM/log/nginx/access.log
      - ./nginx:/etc/nginx/conf.d
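Concretely, the host-side files can be created once before the first docker compose up; a minimal sketch, assuming the src/log/nginx layout suggested above and run from the directory that contains src:

```shell
# Create the log directory and empty log files inside src,
# matching the src/log/nginx layout used by the bind mounts.
mkdir -p src/log/nginx
touch src/log/nginx/error.log src/log/nginx/access.log
ls src/log/nginx
```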

Related

Getting RuntimeException: A facade root has not been set. in /var/www/html/vendor/.../Illuminate/Support/Facades/Facade.php

I am trying to set up PHP, MySQL, and NGINX using Docker. I followed the Udemy tutorial by Maximilian and kept all my Docker files in a dockerfiles folder.
The php.dockerfile is as follows:
FROM php:8.0-fpm-alpine
WORKDIR /var/www/html
COPY src .
RUN docker-php-ext-install pdo pdo_mysql
My nginx.dockerfile is :
FROM nginx:stable-alpine
WORKDIR /etc/nginx/conf.d
COPY nginx/nginx.conf .
RUN mv nginx.conf default.conf
WORKDIR /var/www/html
COPY src .
And the composer.dockerfile is:
FROM composer:latest
WORKDIR /var/www/html
ENTRYPOINT [ "composer" ]
The docker-compose.yaml file is :
version: '2.2'
services:
  server:
    # image: 'nginx:stable-alpine'
    build:
      context: .
      dockerfile: dockerfiles/nginx.dockerfile
    ports:
      - '8000:80'
    volumes:
      - ./src:/var/www/html
      - ./nginx/nginx.conf:/etc/nginx/conf.d/default.conf:ro
    # depends_on:
    #   - php
    #   - mysql
  php:
    build:
      context: .
      dockerfile: dockerfiles/php.dockerfile
    volumes:
      - ./src:/var/www/html:delegated
  mysql:
    image: mysql:5.7
    env_file:
      - ./env/mysql.env
  composer:
    build:
      context: ./dockerfiles
      dockerfile: composer.dockerfile
    volumes:
      - ./src:/var/www/html
  artisan:
    build:
      context: .
      dockerfile: dockerfiles/php.dockerfile
    volumes:
      - ./src:/var/www/html
    entrypoint: ['php', '/var/www/html/artisan']
  npm:
    image: node:14
    working_dir: /var/www/html
    entrypoint: ['npm']
    volumes:
      - ./src:/var/www/html
The nginx.conf file is:
server {
    listen 80;
    index index.php index.html;
    server_name localhost;
    root /var/www/html/public;
    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass php:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
}
But when I run the following command:
docker-compose up --build server php mysql
followed by:
docker-compose run --rm composer create-project --prefer-dist laravel/laravel .
I get this Facade error. I went through all the questions on Stack Overflow with this error but couldn't find a resolution even after hours of searching, as I am new to both Docker and PHP. Please help me set this application up. The error screenshot is as follows:
In the end I found that we need to run Laravel at version 8; for me the command is:
docker-compose run --rm composer create-project --prefer-dist laravel/laravel=8 .
and finally docker-compose up server.
You also need write permission on the entire working folder to make the modifications.
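The permission note is terse; one hedged way to ensure it on a Linux host, where src is this project's bind-mounted folder (use sudo chown -R if the files ended up owned by root):

```shell
# Make sure your user can read, write, and traverse everything under src.
# u+rwX adds write on files and execute (traverse) only on directories.
mkdir -p src   # no-op if the project folder already exists
chmod -R u+rwX src
```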

Docker compose nginx get site running from another container

I have two containers running: one runs PHP and a Laravel site, which all works fine. The second is an nginx container, which currently returns a 404 error; I would like it to serve the site via the PHP container.
- app
- bootstrap
- config
- database
- nginx
  - default.conf
  - DockerFile
- php
- public
- resources
- routes
- storage
- vendor
- docker-compose.yml
- docker-production.yml
- DockerFile
DockerFile
FROM php:7.4
RUN apt-get update -y && apt-get install -y openssl zip unzip git cron
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
RUN docker-php-ext-install pdo pdo_mysql
WORKDIR /app
COPY . .
RUN composer install
ADD config/laravel_cron /etc/cron.d/cron
RUN chmod 0644 /etc/cron.d/cron
RUN touch /var/log/cron.log
RUN chmod 0777 /var/log/cron.log
RUN crontab /etc/cron.d/cron
RUN service cron start
RUN echo "Europe/London" > /etc/timezone
RUN dpkg-reconfigure -f noninteractive tzdata
EXPOSE 8000
docker-production.yml
version: '3.7'
services:
  horse-racing-api:
    container_name: horse_racing_api
    restart: unless-stopped
    build:
      context: .
      dockerfile: DockerFile
    stdin_open: true
    tty: true
    working_dir: /app
    volumes:
      - ./:/app
  web-server:
    container_name: web_server
    ports:
      - 80:80
    build:
      context: nginx
      dockerfile: DockerFile
    depends_on:
      - horse-racing-api
    links:
      - horse-racing-api
    volumes:
      - ./:/app
volumes:
  app:
nginx/DockerFile
FROM nginx:latest
COPY ./default.conf /etc/nginx/conf.d/default.conf
nginx/default.conf
server {
    listen 80;
    index index.php index.html;
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    root /var/www/public;
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass horse-racing-api:8000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
    location / {
        try_files $uri $uri/ /index.php?$query_string;
        gzip_static on;
    }
}
Honestly, I've been piecing this together from resources around the internet :/
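The thread ends there without an accepted answer. For context only, a conventional nginx-to-FPM wiring (a sketch, not from the thread) would run the PHP container from an FPM image such as php:7.4-fpm, keep the document root identical in both containers, and point fastcgi_pass at FPM's default FastCGI port 9000 rather than at 8000:

```nginx
server {
    listen 80;
    index index.php;
    # root must match the path where the code is mounted in the PHP container
    root /app/public;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        try_files $uri =404;
        # PHP-FPM listens on 9000 by default; 8000 here is an HTTP port, not FastCGI
        fastcgi_pass horse-racing-api:9000;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
```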

After docker compose localhost:8000 not open page in browser

Hello, I'm a newbie with Docker. I have a project on Laravel 9 with Node 12.14.0, PostgreSQL 10, and PHP 8.1.2.
This is my git repository: https://github.com/Daniil1996-vrn/DEVJuniorPHP/tree/main/DEVJuniorPHP
I created the Dockerfile and the nginx webserver conf file (though when I created the project I used the artisan server), based on this repository: https://github.com/othyn/docker-compose-laravel#running-attached
This is my docker-compose.yml:
version: "3.7"
networks:
  laravel:
volumes:
  database:
services:
  database:
    image: postgres:10
    container_name: postgres
    restart: "no"
    volumes:
      - .:/var/lib/postgresql/data
    networks:
      - laravel
    ports:
      - 5432:5432
    environment:
      POSTGRES_USER: "postgres"
      POSTGRES_PASSWORD: "admin1234"
      POSTGRES_DB: "DEVJuniorPHP"
  composer:
    image: composer:latest
    container_name: composer
    volumes:
      - ./:/app
    working_dir: /app
    command: composer install
  node:
    image: node:12
    container_name: node
    volumes:
      - ./:/app
    working_dir: /app
    command: npm install
  app:
    container_name: app
    restart: "no"
    volumes:
      - ./:/var/www
    networks:
      - laravel
    depends_on:
      - composer
      - node
    build:
      context: .
      dockerfile: ./docker/app/dockerfile
    command: php-fpm
  webserver:
    image: nginx:stable
    container_name: webserver
    restart: "no"
    volumes:
      - ./:/var/www
      - ./docker/webserver/nginx.conf/
    networks:
      - laravel
    ports:
      - 8000:8000
    depends_on:
      - database
      - app
Dockerfile:
FROM php:8.1.4-fpm-alpine3.14
# Update package manager ready for deps to be installed
RUN apk update
# Set the working directory of the container to the hosted directory
WORKDIR /var/www
nginx.conf:
server {
    listen 8000;
    index index.php index.html;
    server_name localhost;
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    root /var/www/public;
    location / {
        try_files $uri $uri/ /index.php?$query_string;
        gzip_static on;
    }
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass app:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        # Uncomment to extend nginx's max timeout to 1 hour
        # fastcgi_read_timeout 3600;
    }
}
When I run docker-compose up -d in the Visual Studio Code terminal, I get the following messages:
Starting node ... done
Starting composer ... done
Starting postgres ...
Starting postgres ... done
Recreating webserver ... done
PS D:\DEVJuniorPHP\DEVJuniorPHP> docker-compose up -d
Starting node ... done
Starting postgres ...
Starting postgres ... done
app is up-to-date
webserver is up-to-date
But when I go to localhost:8000 in the browser I see the message "Can't access site".
Please help me resolve this problem.
The error is that the nginx vhost is pointing to the wrong folder.
You didn't map the nginx.conf volume into the container, so nginx never sees your vhost and doesn't find the path to the application.
In the volumes section of the webserver service, you must give the file's path inside the container; since your file contains only a server block, it belongs under conf.d:
- ./docker/webserver/nginx.conf:/etc/nginx/conf.d/default.conf
Get back to me if it works!
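In context, the webserver service would then read as follows (a sketch based on the compose file above; mounting under conf.d is used because the supplied file holds only a server block, which is not valid as a complete main nginx.conf):

```yaml
webserver:
  image: nginx:stable
  container_name: webserver
  restart: "no"
  volumes:
    - ./:/var/www
    # map the vhost file to a path nginx actually includes
    - ./docker/webserver/nginx.conf:/etc/nginx/conf.d/default.conf
  networks:
    - laravel
  ports:
    - 8000:8000
  depends_on:
    - database
    - app
```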

Correct way to run artisan commands in docker-compose automatically

I'm trying to make a Laravel project image (for local use at first) with docker-compose. So I made the following files:
docker-compose.yml:
version: '3.9'
networks:
  laravel:
services:
  nginx:
    build:
      context: .
      dockerfile: docker/nginx/Dockerfile
    container_name: nginx
    ports:
      - 8020:80
    volumes:
      - ./:/var/www/swdocker
    depends_on:
      - php
      - mysql
    networks:
      - laravel
  mysql:
    image: mysql
    container_name: mysql
    restart: always
    volumes:
      - ./docker/mysql:/var/lib/mysql
    environment:
      MYSQL_DATABASE: ${DB_DATABASE}
      MYSQL_ROOT_PASSWORD: ${DB_PASSWORD}
      MYSQL_PASSWORD: ${DB_PASSWORD}
      MYSQL_USER: ${DB_USERNAME}
    networks:
      - laravel
  php:
    build:
      context: .
      dockerfile: docker/php/Dockerfile
    container_name: php
    volumes:
      - ./:/var/www/swdocker
      - ./storage/app/public:/var/www/public/storage
    #entrypoint: sh -c 'sleep 30 && php artisan migrate'
    depends_on:
      - mysql
    networks:
      - laravel
Dockerfile for nginx:
FROM nginx
ADD docker/nginx/conf.d /etc/nginx/conf.d
WORKDIR /var/www/swdocker
Dockerfile for php:
FROM php:8-fpm
RUN apt-get update && apt-get install -y \
&& docker-php-ext-install pdo_mysql
COPY --from=composer:latest /usr/bin/composer /usr/local/bin/composer
USER 1000
WORKDIR /var/www/swdocker
And the tuned default.conf file:
server {
    listen 80;
    server_name localhost;
    root /var/www/swdocker/public;
    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-Content-Type-Options "nosniff";
    index index.php;
    charset utf-8;
    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }
    location = /favicon.ico { access_log off; log_not_found off; }
    location = /robots.txt { access_log off; log_not_found off; }
    error_page 404 /index.php;
    location ~ \.php$ {
        fastcgi_pass php:9000;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        include fastcgi_params;
    }
    location ~ /\.(?!well-known).* {
        deny all;
    }
}
If I bring the containers up with the migrations commented out, I can run the migrations manually afterwards (with docker-compose exec from the terminal), and that works. But is it best practice? I would like to bring the whole project up with a single docker command, and I also need to run the artisan queue and scheduler.
I tried running the migrations as an entrypoint, but without success: in that case my php container exits after the migrations finish. I cannot understand how to solve this problem. Could anyone help me?
What I normally do is put all the commands I want to run in a bash script file and execute that file. This also helped me when I wanted to create a basic CI/CD pipeline:
docker-compose down --remove-orphans
docker-compose build   # in case I changed the docker-compose file
docker-compose up -d
docker exec {container-name} bash -c "composer update"
docker exec {container-name} bash -c "php artisan migrate"
As for the scheduler (php artisan schedule:run), the most straightforward way I found was to add:
docker exec {container-name} bash -c "php artisan schedule:run" >> /home/{user}/output.txt
where output.txt is just a file that will show you the output of the command.
Hope this is of some help to you.
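The commands above can be collected into the kind of script this answer describes; a minimal sketch, where app stands in for your container name and deploy.sh is just an example file name:

```shell
# Write the wrapper script described above, then sanity-check its syntax.
cat > deploy.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail

docker-compose down --remove-orphans
docker-compose build                       # in case the compose file changed
docker-compose up -d
docker exec app bash -c "composer update"
docker exec app bash -c "php artisan migrate"
EOF
chmod +x deploy.sh
bash -n deploy.sh && echo "deploy.sh syntax OK"
```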

How to setup dynamic subdomains in docker with NGINX

Basically, I need dynamic subdomains, so the site should be available at any subdomain in Docker, like:
admin.example.com
adrian.example.com
files.example.com
I don't have a fixed number of subdomains, so I can't just put them all in the hosts file.
server_name also didn't help: server_name www.$hostname;
They should all point to the same website.
I've tried jwilder's reverse proxy but wasn't able to set it up correctly.
I have a docker-compose.yml and a Dockerfile.
Could someone give me working code that I could use and then adapt to my needs, and tell me whether I also need to change my hosts file?
I did some research, but my nginx and Docker knowledge is not enough.
nginx.conf
server {
    server_name .example.local;
    listen 80 default;
    client_max_body_size 1008M;
    access_log /var/log/nginx/application.access.log;
    error_log /var/log/nginx/error.log;
    root /application/web;
    index index.php;
    if (!-e $request_filename) {
        rewrite ^.*$ /index.php last;
    }
    location ~ \.php$ {
        fastcgi_pass php-fpm:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PHP_VALUE "error_log=/var/log/nginx/application_php_errors.log";
        include fastcgi_params;
        fastcgi_buffers 16 16k;
        fastcgi_buffer_size 32k;
        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;
    }
}
Dockerfile
FROM phpdockerio/php73-fpm:latest
RUN mkdir /application
WORKDIR "/application"
COPY . /application
# Fix debconf warnings upon build
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update && \
# Installing packages
apt-get -y --no-install-recommends --assume-yes --quiet install \
nano curl git ca-certificates ruby-dev gcc automake libtool rubygems build-essential make php-pear \
php7.3-mysql php7.3-bcmath php-imagick php7.3-intl php7.3-gd php-yaml php7.3-soap php7.3-dev mysql-client && \
# Xdebug
pecl install xdebug && \
# Cleaning up after installation
apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
docker-compose.yml
version: "3.1"
services:
  db:
    image: mysql:5.6
    container_name: ls-db
    working_dir: /application
    volumes:
      - .:/application:cached # User-guided caching
      - ./phpdocker/sql:/docker-entrypoint-initdb.d
    environment:
      MYSQL_DATABASE: ls
      MYSQL_USER: drupal
      MYSQL_PASSWORD: drupal
      MYSQL_ROOT_PASSWORD: root
    ports:
      - "6006:3306"
    networks:
      - ls
  web:
    image: nginx:alpine
    container_name: ls-webserver
    working_dir: /application
    volumes:
      - .:/application:cached # User-guided caching
      - ./phpdocker/nginx/nginx.conf:/etc/nginx/conf.d/default.conf
    ports:
      - "6060:80"
    networks:
      - ls
  php-fpm:
    build: phpdocker/php-fpm
    container_name: ls-php-fpm
    working_dir: /application
    volumes:
      - .:/application:cached # User-guided caching
      - ./phpdocker/php-fpm/php-ini-overrides.ini:/etc/php/7.3/fpm/conf.d/99-overrides.ini
    networks:
      - ls
networks:
  ls: # this network (app1)
    driver: bridge
volumes:
  db:
I'm not sure what you tried and where it failed with jwilder's reverse proxy, but it is an excellent way to address the exact issue at hand without dealing with nginx configuration and complex compose configuration.
Here is working code, and you don't even have to change your hosts file:
version: '3.7'
services:
  nginx:
    image: jwilder/nginx-proxy
    ports: ["80:80"]
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
    environment:
      DEFAULT_HOST: fallback.lvh.me
  api:
    image: dannyben/whoami
    environment:
      MESSAGE: I am the API
      VIRTUAL_HOST: "*.lvh.me"
  web:
    image: dannyben/whoami
    environment:
      MESSAGE: I am the WEB
      VIRTUAL_HOST: "www.lvh.me"
In order to make it work, you must first launch the nginx proxy:
$ docker-compose up -d nginx
and only then the backend services:
$ docker-compose up -d api web
Then you can access www.lvh.me to see the web backend, and anything-else.lvh.me to see the API backend.
In addition, you can provide multiple wildcard hosts to the VIRTUAL_HOST environment variable, so that it supports both your local development environment and your production environment, like so:
VIRTUAL_HOST: "*.lvh.me,*.your-real-domain.com"
It is important to note that in order for this to work in a production environment, your DNS should also be set to use a wildcard subdomain.
In this demo, lvh.me is just forwarding all traffic to 127.0.0.1, which in turn gets to your nginx, which then forwards traffic inwards to your actual application.