I have a working Laravel environment using Docker. My project has multiple services in different containers, such as Redis, MongoDB, MySQL and Node.js. I want to use Supervisor in my project to talk to Redis for the queues and PHP to run the jobs. I have done some testing and research, but I really can't make it work.
So here is my Dockerfile:
FROM php:7.3-fpm
# Copy composer.lock and composer.json
COPY composer.lock composer.json /var/www/
# Set working directory
WORKDIR /var/www
# Install dependencies
RUN apt-get update && apt-get install -y \
build-essential \
mariadb-client \
libpng-dev \
libzip-dev \
libjpeg62-turbo-dev \
libfreetype6-dev \
locales \
zip \
jpegoptim optipng pngquant gifsicle \
vim \
unzip \
git \
curl \
cron \
supervisor
# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
# Install extensions
RUN docker-php-ext-install pdo_mysql mbstring zip exif pcntl
RUN docker-php-ext-configure gd --with-gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/ --with-png-dir=/usr/include/
RUN docker-php-ext-install gd
RUN docker-php-ext-configure bcmath --enable-bcmath
RUN docker-php-ext-install bcmath
# install mongodb ext
RUN pecl install mongodb \
&& docker-php-ext-enable mongodb
# Install composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
# Add user for laravel application
RUN groupadd -g 1000 www
RUN useradd -u 1000 -ms /bin/bash -g www www
# Copy supervisor configs
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
# Copy existing application directory contents
COPY . /var/www
# Copy existing application directory permissions
COPY --chown=www:www . /var/www
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
CMD ["/usr/bin/supervisord"]
# Change current user to www
USER www
# Expose port 9000 and start php-fpm server
EXPOSE 9000
CMD ["php-fpm"]
And my docker-compose.yml file:
version: '3'
services:
#PHP Service
php:
build:
context: .
dockerfile: Dockerfile
image: digitalocean.com/php
container_name: php
restart: unless-stopped
tty: true
environment:
SERVICE_NAME: php
SERVICE_TAGS: dev
working_dir: /var/www
volumes:
- ./:/var/www
- ./php/local.ini:/usr/local/etc/php/conf.d/local.ini
- ./supervisord.conf:/etc/supervisor/conf.d/supervisord.conf
networks:
- app-network
#NODEJS Service
nodejs:
image: node:10
container_name: nodejs
restart: unless-stopped
working_dir: /var/www
volumes:
- ./:/var/www
tty: true
networks:
- app-network
#Nginx Service
nginx:
image: nginx:alpine
container_name: nginx
restart: unless-stopped
tty: true
ports:
- "80:80"
- "443:443"
volumes:
- ./:/var/www
- ./nginx/conf.d/:/etc/nginx/conf.d/
networks:
- app-network
#MySQL Service
mysqldb:
image: mysql:5.7.22
container_name: mysqldb
restart: unless-stopped
tty: true
ports:
- "3306:3306"
environment:
MYSQL_DATABASE: ${DB_DATABASE}
MYSQL_USER: ${DB_USERNAME}
MYSQL_PASSWORD: ${DB_PASSWORD}
MYSQL_ROOT_PASSWORD: ${DB_ROOT_PASSWORD}
SERVICE_TAGS: dev
SERVICE_NAME: mysql
volumes:
- dbdata:/var/lib/mysql
- ./mysql/my.cnf:/etc/mysql/my.cnf
networks:
- app-network
#MongoDB Service
mongodb:
image: mongo:3
container_name: mongodb
restart: unless-stopped
tty: true
ports:
- "27017:27017"
networks:
- app-network
#Redis Service
redis:
image: redis
container_name: redis
restart: unless-stopped
tty: true
ports:
- "${REDIS_PORT}:6379"
networks:
- app-network
#Docker Networks
networks:
app-network:
driver: bridge
#Volumes
volumes:
dbdata:
driver: local
You might also want to see my supervisord.conf:
[supervisord]
user=www
nodaemon=true
logfile=/dev/null
logfile_maxbytes=0
pidfile=/var/run/supervisord.pid
loglevel = INFO
[unix_http_server]
file=/var/run/supervisor.sock
chmod=0700
username=www
password=www
[supervisorctl]
serverurl=unix:///var/run/supervisord.sock
username=www
password=www
[rpcinterface:supervisor]
supervisor.rpcinterface_factory=supervisor.rpcinterface:make_main_rpcinterface
[program:php-fpm]
command = /usr/local/sbin/php-fpm
autostart=true
autorestart=true
priority=5
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
[program:ohwo-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/artisan horizon
autostart=false
autorestart=true
user=www
numprocs=1
redirect_stderr=true
stdout_logfile=/var/www/laravel-worker.log
With that setup, when the containers are up it seems that supervisord is not working, because if I run php artisan horizon manually in my php container the queuing works perfectly. By the way, Horizon is the tool I use for queuing.
I also tried running supervisorctl in my php container and got this error: unix:///var/run/supervisord.sock no such file.
I'm pretty new to Docker, having only started a few months ago. I know how to configure supervisord on Linux, but I can't make it work in Docker.
So please pardon my stupidity :)
The idea here is to eliminate the supervisor and instead run whatever it used to manage as several different containers. You can orchestrate this easily with docker-compose, for example by running the same image with different CMD overrides, or the same image with a different CMD layer at the end to split it out. The trouble with a supervisor is that it can't communicate the status of the processes it manages back to Docker: the container always looks "alive" even if every process inside it is completely trashed. Exposing those processes directly means you can actually see when they crash.
What's best is to break each of these services out into separate containers. Since there are official pre-built images for MySQL and so on, there's really no reason to build one yourself. What you want to do is translate that supervisord config into docker-compose format.
With separate containers you can do things like docker ps to see whether your services are running correctly; they'll all be listed individually. If you need to upgrade one, you can do that easily by working with just that one container instead of having to pull down the whole thing.
The way you're attacking it here treats Docker like a fancy VM, which it really isn't. What it is instead is more of a process manager, where the processes just happen to come with pre-built disk images and a security layer around them.
Compose your environment out of single-process containers and your life will be way easier, both from a maintenance perspective and a monitoring one.
If you can express this configuration as something docker-compose can deal with, you're one step closer to moving to a more sophisticated management layer like Kubernetes, which might be the logical conclusion of this particular migration.
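As a rough sketch of that translation (the horizon service below is my addition, not part of the original setup; it reuses the image, volume and network names from the question's compose file), the queue worker becomes its own service with a command override:
  #Horizon worker Service (sketch)
  horizon:
    image: digitalocean.com/php        # same image the php service builds
    container_name: horizon
    restart: unless-stopped
    working_dir: /var/www
    command: php artisan horizon       # overrides the image's CMD ["php-fpm"]
    volumes:
      - ./:/var/www
    depends_on:
      - redis
    networks:
      - app-network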
According to the official documentation:
There can only be one CMD instruction in a Dockerfile. If you list more than one CMD then only the last CMD will take effect.
Your Dockerfile has two CMD instructions, so the last command, php-fpm, will override
/usr/bin/supervisord
That is why you can execute PHP commands but can't find the supervisor socket inside the container: supervisord never started, so the socket was never created.
You can fix the issue by deleting the last CMD related to php-fpm, since you already configured Supervisor to start it. Your Dockerfile should then have a single CMD:
CMD ["/usr/bin/supervisord"]
I recommend taking a look at this simple project on GitHub; it contains a Docker configuration with a working Supervisor web interface.
https://github.com/Staniczek/supervisord-docker
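For context, Supervisor's web interface is enabled with an [inet_http_server] section; a minimal sketch (the port and credentials are placeholders, not taken from that project):
[inet_http_server]
port=0.0.0.0:9001        ; listen inside the container; publish this port in docker-compose to reach it
username=admin           ; placeholder credentials
password=changeme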
Related
I followed this article step by step ( https://www.digitalocean.com/community/tutorials/how-to-set-up-laravel-nginx-and-mysql-with-docker-compose-on-ubuntu-20-04 ).
I just changed the PHP version to 8.1 (instead of 7.4) and the MySQL version to 8.0 (instead of 5.7.22).
When I run the (docker ps) command, digitalocean.com/php and nginx:alpine are fine, but mysql:8.0 is always in restarting status.
I first tested it in local development, where everything is okay.
But when I deployed it to the droplet, I hit the error that MySQL is always in restarting status.
When I run the (docker logs -f ) command, I see the following error.
2022-08-13 06:35:20+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.30-1.el8 started.
2022-08-13 06:35:20+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
2022-08-13 06:35:20+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.30-1.el8 started.
'/var/lib/mysql/mysql.sock' -> '/var/run/mysqld/mysqld.sock'
When I run the (docker run --restart=always mysql:8.0) command, I see the following error.
2022-08-13 06:57:37+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.30-1.el8 started.
2022-08-13 06:57:38+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
2022-08-13 06:57:38+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.30-1.el8 started.
2022-08-13 06:57:38+00:00 [ERROR] [Entrypoint]: Database is uninitialized and password option is not specified
You need to specify one of the following:
MYSQL_ROOT_PASSWORD
MYSQL_ALLOW_EMPTY_PASSWORD
MYSQL_RANDOM_ROOT_PASSWORD
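For reference, when the image is started directly with docker run rather than through docker-compose, one of those variables has to be passed explicitly; a minimal sketch with a placeholder password:
# changeme is a placeholder root password
docker run -d --restart=always \
  -e MYSQL_ROOT_PASSWORD=changeme \
  mysql:8.0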
Dockerfile
FROM php:8.1-fpm
# Copy composer.lock and composer.json
COPY composer.lock composer.json /var/www/
# Set working directory
WORKDIR /var/www
# Install dependencies
RUN apt-get update && apt-get install -y \
build-essential \
libpng-dev \
libjpeg62-turbo-dev \
libfreetype6-dev \
locales \
zip \
jpegoptim optipng pngquant gifsicle \
vim \
unzip \
git \
curl \
libzip-dev
# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
# Install composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
# Add user for laravel application
RUN groupadd -g 1000 www
RUN useradd -u 1000 -ms /bin/bash -g www www
# Copy existing application directory contents
COPY . /var/www
# Copy existing application directory permissions
COPY --chown=www:www . /var/www
# Change current user to www
USER www
# Expose port 9000 and start php-fpm server
EXPOSE 9000
CMD ["php-fpm"]
docker-compose.yml
version: '3'
services:
#PHP Service
app:
build:
context: .
dockerfile: Dockerfile
image: digitalocean.com/php
container_name: app
restart: unless-stopped
tty: true
environment:
SERVICE_NAME: app
SERVICE_TAGS: dev
working_dir: /var/www
volumes:
- ./:/var/www
- ./php/local.ini:/usr/local/etc/php/conf.d/local.ini
networks:
- app-network
#Nginx Service
webserver:
image: nginx:alpine
container_name: webserver
restart: unless-stopped
tty: true
ports:
- "80:80"
- "443:443"
volumes:
- ./:/var/www
- ./nginx/conf.d/:/etc/nginx/conf.d/
networks:
- app-network
#MySQL Service
db:
image: mysql:8.0
container_name: db
restart: unless-stopped
tty: true
ports:
- "3306:3306"
environment:
MYSQL_DATABASE: laravel
MYSQL_ROOT_PASSWORD: 123123
SERVICE_TAGS: dev
SERVICE_NAME: mysql
volumes:
- dbdata:/var/lib/mysql/
- ./mysql/my.cnf:/etc/mysql/my.cnf
networks:
- app-network
#Docker Networks
networks:
app-network:
driver: bridge
#Volumes
volumes:
dbdata:
driver: local
This message is to clear up a problem I am having using Docker with my PHP application.
Locally I run my Docker images (nginx, phpMyAdmin, and PHP with my application) and everything works fine.
However, I use a volume mounted into my app container, which lets me modify files on the fly (without needing to rebuild on every edit).
The problem is that when I push this image to a repository and pull it on another desktop, the volume containing my application is not there.
Have you ever faced this issue?
Please find my docker-compose.yml and Dockerfile :
docker-compose.yml
version: "3.7"
services:
app:
build:
args:
user: web
uid: 1000
context: ./
dockerfile: Dockerfile
image: myblog
container_name: myblog-app
restart: unless-stopped
working_dir: /var/www/
volumes:
- ./:/var/www
networks:
- myblog
db:
image: mysql:5.7
container_name: myblog-db
restart: unless-stopped
environment:
MYSQL_DATABASE: ${DB_DATABASE}
MYSQL_ROOT_PASSWORD: ${DB_PASSWORD}
MYSQL_PASSWORD: ${DB_PASSWORD}
MYSQL_USER: ${DB_USERNAME}
SERVICE_TAGS: dev
SERVICE_NAME: mysql
volumes:
- ./.docker/mysql/database.sql:/docker-entrypoint-initdb.d/init.sql
- ./.docker/mysql/data:/var/lib/mysql
networks:
- myblog
phpmyadmin:
depends_on:
- db
image: phpmyadmin/phpmyadmin
restart: always
ports:
- 8002:80
environment:
PMA_HOST: db
MYSQL_ROOT_PASSWORD: ${DB_PASSWORD}
networks:
- myblog
nginx:
image: nginx:alpine
container_name: myblog-nginx
restart: unless-stopped
ports:
- 8000:80
volumes:
- ./:/var/www
- ./.docker/nginx/conf.d:/etc/nginx/conf.d
networks:
- myblog
networks:
myblog:
driver: bridge
Dockerfile
FROM php:7.3-fpm
# Arguments defined in docker-compose.yml
ARG user
ARG uid
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
curl \
libpng-dev \
libonig-dev \
libxml2-dev \
libzip-dev \
zip \
unzip
# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
# Install PHP extensions
RUN docker-php-ext-install pdo_mysql mbstring exif pcntl bcmath gd
RUN docker-php-ext-install zip
RUN docker-php-ext-enable zip
# Get latest Composer
COPY --from=composer:latest /usr/bin/composer /usr/bin/composer
# Create system user to run Composer and Artisan Commands
RUN useradd -G www-data,root -u $uid -d /home/$user $user
RUN mkdir -p /home/$user/.composer && \
chown -R $user:$user /home/$user
# Set working directory
WORKDIR /var/www
USER $user
Thanks in advance
This is expected behavior, because data inside volumes is not part of an image. Volumes are used to persist data generated in containers, or to pass dynamic data into containers via bind mounts, e.g. configs, credentials or certificates.
https://docs.docker.com/storage/volumes/
Your docker-compose.yml and its services mounting your local directory via - .:/path/to/dir are only good for local development, because you see changes to your application instantly and without having to rebuild images.
If you want your code inside the image on another machine, you need to COPY it in your Dockerfile, rebuild the image, and push it every time you change your code!
You will also need to adjust the volumes in your docker-compose.yml accordingly.
https://docs.docker.com/compose/compose-file/#volumes
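A minimal sketch of that change against the question's own Dockerfile (placement and ownership are my assumptions; web is the user passed in from docker-compose.yml):
# Bake the application code into the image so it travels with it wherever it is pulled
# (placed before the USER line, so ownership can be set at copy time)
COPY --chown=web:web . /var/www
The ./:/var/www bind mount would then be removed from the compose file used on the other machine, otherwise the host directory would still shadow the code baked into the image.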
Many thanks for your clear reply.
Now I understand what is wrong with my configuration and what I need to do.
For people looking for a way to manage both the development and production environments in the same Dockerfile, you can use an argument :)
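A minimal sketch of that idea (the APP_ENV argument name and what it gates are my assumptions, not the poster's actual setup):
# Build-time argument selecting the environment; defaults to production
ARG APP_ENV=production
# Install extra tooling only for development builds
RUN if [ "$APP_ENV" = "development" ]; then apt-get update && apt-get install -y vim && rm -rf /var/lib/apt/lists/*; fi
# Always bake the code in; in development the ./:/var/www bind mount shadows it anyway
COPY --chown=web:web . /var/www
The development image would then be built with docker build --build-arg APP_ENV=development . (or via an args: entry in docker-compose.yml), and the production image with no extra argument.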
Have you tried using a named volume instead of a path-based volume?
This lets Docker manage the volume for you and may give you the behavior you want.
https://nickjanetakis.com/blog/docker-tip-28-named-volumes-vs-path-based-volumes
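For reference, a named volume in that docker-compose.yml would look roughly like this (appdata is an illustrative name):
services:
  app:
    volumes:
      - appdata:/var/www    # named volume managed by Docker instead of a host path
volumes:
  appdata:
    driver: local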
I have the following Dockerfile and docker-compose.yml for a Laravel application. I just added Redis today, and this appeared in my terminal at the end of docker-compose up when I tried to connect to and use Redis from PHP.
zinc_app | [23-Dec-2019 11:31:27] ALERT: oops, unknown child (32) exited on signal 2 (SIGINT). Please open a bug report (https://bugs.php.net).
zinc_app | [23-Dec-2019 11:31:27] ALERT: oops, unknown child (31) exited on signal 2 (SIGINT). Please open a bug report (https://bugs.php.net).
zinc_app | [23-Dec-2019 11:32:09] ALERT: oops, unknown child (20) exited on signal 9 (SIGKILL). Please open a bug report (https://bugs.php.net).
I am pretty sure the error comes from the PHP Redis extension, because the problem didn't exist before. Has anyone encountered this problem before? Have I configured anything wrong?
I have tested this on another computer, and it didn't exhibit the same behavior. I have tried docker system prune -a and then building and bringing everything up again, but my computer still has the same problem.
As for the code I have run: I ran this in artisan tinker.
$redis = new Redis();
$redis->connect('zinc_redis', 6379);
$redis->get('1'); // The Oops message originated from this line.
Docker Compose:
version: '3.6'
services:
#PHP Service
zinc_app:
build:
context: .
dockerfile: Dockerfile
image: zinc/php
container_name: zinc_app
restart: unless-stopped
tty: true
environment:
SERVICE_NAME: zinc_app
SERVICE_TAGS: dev
working_dir: /var/www
volumes:
- ./:/var/www
- /tmp:/tmp #For CS Fixer
- ./php/local.ini:/usr/local/etc/php/conf.d/local.ini
networks:
- app-network
#Nginx Service
zinc_webserver:
image: nginx:alpine
container_name: zinc_webserver
restart: unless-stopped
tty: true
ports:
- "8080:80"
volumes:
- ./:/var/www
- ./nginx/conf.d/:/etc/nginx/conf.d/
networks:
- app-network
#Postgres Service
zinc_db:
image: postgres:12-alpine
container_name: zinc_db
restart: unless-stopped
tty: true
ports:
- "5432:5432"
environment:
POSTGRES_DB: zinc
POSTGRES_PASSWORD: admin
SERVICE_TAGS: dev
SERVICE_NAME: postgres
volumes:
- dbdata:/var/lib/postgresql
- ./postgres/init:/docker-entrypoint-initdb.d
networks:
- app-network
zinc_redis:
image: redis:5.0-alpine
container_name: zinc_redis
restart: unless-stopped
tty: true
ports:
- "6379:6379"
volumes:
- ./redis/redis.conf:/usr/local/etc/redis/redis.conf
- redisdata:/data
networks:
- app-network
#Docker Networks
networks:
app-network:
driver: bridge
#Volumes
volumes:
dbdata:
driver: local
redisdata:
driver: local
FROM php:7.4-fpm
# Switch to root user
USER root
# Set working directory
WORKDIR /var/www
# Add user for laravel application
RUN groupadd -g 1000 www
RUN useradd -u 1000 -ms /bin/bash -g www www
RUN chown www:www /var/www
# Copy composer
COPY --chown=www:www composer.json composer.lock /var/www/
# Install dependencies
RUN apt-get update && apt-get install --no-install-recommends -y \
build-essential \
libpq-dev \
libpng-dev \
libjpeg62-turbo-dev \
libfreetype6-dev \
locales \
libzip-dev \
jpegoptim optipng pngquant gifsicle \
vim \
unzip \
git \
curl
# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
# Install extensions
RUN docker-php-ext-install pgsql pdo_pgsql zip exif pcntl
# Enable igbinary & PHP Redis ext using igbinary
RUN pecl install -o -f igbinary && y | pecl install -o -f redis && docker-php-ext-enable redis igbinary
RUN rm -rf /tmp/pear
# Install composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
COPY --chown=www:www . /var/www/
# Change current user to www
USER www
# Install packages
RUN composer global require hirak/prestissimo && composer install
# Expose port 9000 and start php-fpm server
EXPOSE 9000
CMD ["php-fpm"]
I recently spun up a Laravel / Docker environment by following this DigitalOcean tutorial, and was wondering whether anybody sees any concerns with using this in a production environment. If there are concerns, could you perhaps explain why they are a concern and what I can do to circumvent them?
I would have asked the question in the tutorial comments instead, but that never gets enough visibility.
EDIT: Here are the docker-compose.yml, Dockerfile and .env files, just so you have a little more context without having to visit the tutorial. Let me know if you need anything else.
docker-compose.yml:
version: '3'
services:
#PHP Service
app:
build:
context: .
dockerfile: .docker/Dockerfile
image: digitalocean.com/php
container_name: app
restart: unless-stopped
tty: true
environment:
SERVICE_NAME: app
SERVICE_TAGS: dev
working_dir: /var/www
volumes:
- ./:/var/www
- ./.docker/php/local.ini:/usr/local/etc/php/conf.d/local.ini
networks:
- app-network
#Nginx Service
webserver:
image: nginx:alpine
container_name: webserver
restart: unless-stopped
tty: true
ports:
- "80:80"
- "443:443"
volumes:
- ./:/var/www
- ./.docker/nginx/conf.d/:/etc/nginx/conf.d/
networks:
- app-network
#MySQL Service
db:
image: mysql:5.7.22
container_name: db
restart: unless-stopped
tty: true
ports:
- "3306:3306"
environment:
MYSQL_DATABASE: laravel
MYSQL_ROOT_PASSWORD: laravel_root_password
SERVICE_TAGS: dev
SERVICE_NAME: mysql
volumes:
- dbdata:/var/lib/mysql/
- ./.docker/mysql/my.cnf:/etc/mysql/my.cnf
networks:
- app-network
#Docker Networks
networks:
app-network:
driver: bridge
#Volumes
volumes:
dbdata:
driver: local
Dockerfile:
FROM php:7.2-fpm
# Copy composer.lock and composer.json
COPY composer.lock composer.json /var/www/
# Set working directory
WORKDIR /var/www
# Install dependencies
RUN apt-get update && apt-get install -y \
build-essential \
default-mysql-client \
libpng-dev \
libjpeg62-turbo-dev \
libfreetype6-dev \
locales \
zip \
jpegoptim optipng pngquant gifsicle \
nano \
unzip \
git \
curl
# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
# */
# Install extensions
RUN docker-php-ext-install pdo_mysql mbstring zip exif pcntl
RUN docker-php-ext-configure gd --with-gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/ --with-png-dir=/usr/include/
RUN docker-php-ext-install gd
# Install composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
# Add user for laravel application
RUN groupadd -g 1000 www
RUN useradd -u 1000 -ms /bin/bash -g www www
# Copy existing application directory contents
COPY . /var/www
# Copy existing application directory permissions
COPY --chown=www:www . /var/www
# Change current user to www
USER www
# Expose port 9000 and start php-fpm server
EXPOSE 9000
CMD ["php-fpm"]
.env:
APP_NAME=Laravel
APP_ENV=production
APP_KEY=base64:000/000000000000000000000000000000000000000=
APP_DEBUG=true
APP_URL=http://example.com
LOG_CHANNEL=stack
DB_CONNECTION=mysql
DB_HOST=db
DB_PORT=3306
DB_DATABASE=laravel
DB_USERNAME=laraveluser
DB_PASSWORD=your_laravel_db_password
BROADCAST_DRIVER=log
CACHE_DRIVER=file
QUEUE_CONNECTION=sync
SESSION_DRIVER=file
SESSION_LIFETIME=120
REDIS_HOST=127.0.0.1
REDIS_PASSWORD=null
REDIS_PORT=6379
About a year ago, I deployed a Laravel app using Docker on an Ubuntu host. We've had no issues. Having a homogeneous environment eases team development and has helped streamline continuous deployment.
The production docker environment includes:
PHP
NGINX
MySQL
Redis
Additionally, the local docker environment includes:
Mailhog
A 'toolbox' that includes npm, mysql-client and other tools that would make things more convenient for the team.
There's a chance you'll want different docker-compose configurations for development vs. production. You can manage this by creating a docker-compose.prod.yml file and have your CD pipeline overwrite docker-compose.yml with the prod version upon deployment.
Or, if no CD is in place, you could use a docker-compose.dev.yml file to overwrite production values and add new configurations.
Then run
docker-compose -f docker-compose.yml -f docker-compose.dev.yml up -d
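As a minimal sketch of such an override file (the service names match the compose file shown above; the specific values are illustrative assumptions):
# docker-compose.dev.yml: development-only overrides merged on top of docker-compose.yml
version: '3'
services:
  app:
    volumes:
      - ./:/var/www            # bind mount for live code editing
  db:
    ports:
      - "3306:3306"            # expose MySQL on the host only in development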
Good luck!
I am setting up a development environment with Docker. In it I need:
NGINX (web server)
PHP 7.2 (app)
MARIADB 10.3 (database)
In the app container I need to install Composer; I will be working with Laravel.
Using RUN composer install and COPY .env.example .env in the Dockerfile to install the vendor directory and configure .env, I did not get any error messages, but no files were created either.
docker-compose.yml
version: '3'
services:
app:
volumes:
- ./:/var/www
- ./php/local.ini:/usr/local/etc/php/conf.d/local.ini
build:
context: .
dockerfile: Dockerfile
image: php:7.2
container_name: app
restart: unless-stopped
tty: true
working_dir: /var/www
networks:
- app-network
webserver:
image: nginx:alpine
container_name: webserver
restart: unless-stopped
tty: true
ports:
- "80:80"
- "443:443"
networks:
- app-network
volumes:
- ./:/var/www
- ./nginx/conf.d/:/etc/nginx/conf.d/
db:
image: mariadb:10.3
container_name: db
restart: unless-stopped
tty: true
ports:
- "3306:3306"
environment:
MYSQL_DATABASE: db
MYSQL_ROOT_PASSWORD: pass
SERVICE_TAGS: dev
SERVICE_NAME: mysql
networks:
- app-network
volumes:
- dbdata:/var/lib/mysql
networks:
app-network:
driver: bridge
volumes:
dbdata:
driver: local
dockerfile
FROM php:7.2-fpm
# Copy composer.lock and composer.json
COPY composer.lock composer.json /var/www/
# Set working directory
WORKDIR /var/www
# Install dependencies
RUN apt-get update && apt-get install -y \
build-essential \
mariadb-client-10.3 \
libpng-dev \
libjpeg62-turbo-dev \
libfreetype6-dev \
locales \
zip \
jpegoptim optipng pngquant gifsicle \
vim \
unzip \
git \
curl
# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
# Install extensions
RUN docker-php-ext-install pdo_mysql mbstring zip exif pcntl
RUN docker-php-ext-configure gd --with-gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/ --with-png-dir=/usr/include/
RUN docker-php-ext-install gd
# Install composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
# Add user for laravel application
RUN groupadd -g 1000 www
RUN useradd -u 1000 -ms /bin/bash -g www www
# Copy existing application directory contents
COPY . /var/www
# Copy existing application directory permissions
COPY --chown=www:www . /var/www
# Run composer install
RUN composer install
# Create the .env from the .env.example
COPY .env.example .env
# Change current user to www
USER www
# Expose port 9000 and start php-fpm server
EXPOSE 9000
CMD ["php-fpm"]
I have created a setup to deploy a Laravel application with the NGINX web server and a MySQL database, available on GitHub.
It should be pretty easy to set up. If you encounter any issues, please feel free to raise an issue on GitHub, or ask me here.