I am writing a small pet project using Docker.
I am using PHP-CLI, PHP-FPM and PostgreSQL, all Alpine-based.
Below is the configuration.
php-cli Dockerfile
FROM php:8.0-cli-alpine
RUN apk add --no-cache autoconf g++ make
#install PG
RUN apk add --no-cache postgresql-dev bash coreutils \
&& docker-php-ext-configure pgsql -with-pgsql=/usr/local/pgsql
RUN apk add --no-cache unzip
RUN docker-php-ext-install pdo_pgsql pcntl opcache
RUN docker-php-ext-install bcmath fileinfo
ENV COMPOSER_ALLOW_SUPERUSER 1
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/bin --filename=composer --quiet
WORKDIR /app
php-fpm Dockerfile
FROM php:8.0-fpm-alpine
#install PG
RUN apk add --no-cache postgresql-dev bash coreutils \
&& docker-php-ext-configure pgsql -with-pgsql=/usr/local/pgsql
#enable extensions
RUN docker-php-ext-install pdo_pgsql opcache bcmath fileinfo
WORKDIR /app
docker-compose.yaml file
version: '3.8'
services:
nginx:
container_name: seln_nginx_dev-container
image: nginx
volumes:
- ./docker/nginx/:/etc/nginx/conf.d
- ./:/app
links:
- php-fpm
ports:
- '8000:80'
php-fpm:
container_name: seln_php-fpm-container
build:
context: docker
dockerfile: php-fpm/Dockerfile
environment:
DB_HOST: api-postgres
DB_USER: app
DB_PASSWORD: secret
DB_NAME: app
ports:
- 8080:8080
volumes:
- ./:/app
working_dir: /app
php-cli:
container_name: seln_php-cli-container
build:
context: docker
dockerfile: php-cli/Dockerfile
environment:
DB_HOST: api-postgres
DB_USER: app
DB_PASSWORD: secret
DB_NAME: app
volumes:
- ./:/app
api-postgres:
image: postgres:13.1-alpine
environment:
POSTGRES_USER: app
POSTGRES_PASSWORD: secret
POSTGRES_DB: app
ports:
- "127.0.0.1:5432:5432"
And the Makefile
init: down build up composer-install init-db
up:
docker-compose -f docker-compose.yaml up -d
down:
docker-compose -f docker-compose.yaml down -v --remove-orphans
composer-install:
docker-compose -f docker-compose.yaml run --rm php-cli composer install
composer-update:
docker-compose -f docker-compose.yaml run --rm php-cli composer update
build:
docker-compose -f docker-compose.yaml build
init-db:
docker-compose -f docker-compose.yaml run --rm php-cli vendor/bin/doctrine orm:schema-tool:drop --force && \
docker-compose -f docker-compose.yaml run --rm php-cli vendor/bin/doctrine orm:schema-tool:create
clear-cache:
docker-compose -f docker-compose.yaml run --rm php-cli vendor/bin/doctrine orm:clear-cache:metadata && \
docker-compose -f docker-compose.yaml run --rm php-cli vendor/bin/doctrine orm:clear-cache:query
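For reference, make init just chains the targets above, so it is roughly equivalent to running the following commands by hand (same compose file, services and Doctrine commands as defined above):
docker-compose -f docker-compose.yaml down -v --remove-orphans
docker-compose -f docker-compose.yaml build
docker-compose -f docker-compose.yaml up -d
docker-compose -f docker-compose.yaml run --rm php-cli composer install
docker-compose -f docker-compose.yaml run --rm php-cli vendor/bin/doctrine orm:schema-tool:drop --force
docker-compose -f docker-compose.yaml run --rm php-cli vendor/bin/doctrine orm:schema-tool:create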
What I am running into: when I try to build the images using the make init command, the build fails with this error:
make: /bin/sh: Operation not permitted
make: *** [Makefile:209: pdo_pgsql.lo] Error 127
The command '/bin/sh -c docker-php-ext-install pdo_pgsql opcache bcmath fileinfo' returned a non-zero code: 2
ERROR: Service 'php-fpm' failed to build : Build failed
What is the reason for this error?
P.S. I managed to work around it (by using FROM php:8.0-cli-alpine3.13), but that solution does not satisfy me; I want to understand the cause. Before the Ubuntu update I could build the images without workarounds, so what could have changed in two weeks?
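For completeness, the workaround mentioned in the P.S. is just pinning the base images to the older Alpine release instead of the floating alpine tag; assuming the matching tag also exists for the FPM image, it would look like this in the two Dockerfiles:
# php-cli Dockerfile, pinned to Alpine 3.13
FROM php:8.0-cli-alpine3.13
# php-fpm Dockerfile (assumed matching tag)
FROM php:8.0-fpm-alpine3.13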
Related
So, I want to configure a Swoole Laravel project. I build the Dockerfile and it runs successfully. But when I run the compose file, it gives me this error:
There are no commands defined in the "swoole" namespace.
This is my first experience with Swoole, and I don't understand what the problem is.
How can I solve it?
This is a Dockerfile
FROM php:8.1-fpm-alpine
# Install laravel requirement PHP package
RUN apk add --no-cache --virtual .build-deps $PHPIZE_DEPS libzip-dev sqlite-dev \
libpng-dev libxml2-dev oniguruma-dev libmcrypt-dev curl curl-dev libcurl postgresql-dev
RUN docker-php-ext-install -j$(nproc) gd bcmath zip pdo_mysql pdo_pgsql
RUN pecl install xdebug swoole && docker-php-ext-enable swoole
# Install composer
ENV COMPOSER_HOME /composer
ENV PATH ./vendor/bin:/composer/vendor/bin:$PATH
ENV COMPOSER_ALLOW_SUPERUSER 1
RUN curl -s https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin/ --filename=composer
# Install PHP_CodeSniffer
WORKDIR /app
COPY ./ ./
USER root
RUN chown -R www-data /app/storage
RUN chmod -R ug+w /app/storage
RUN chmod 777 -R /app/storage
RUN chmod 777 -R /app/public
RUN composer install
RUN php artisan optimize
CMD php artisan swoole:http start
EXPOSE 1215
And this is a docker-compose.yaml file
version: "3.7"
services:
app:
build:
args:
user: www-data
uid: 1000
context: ./
dockerfile: Dockerfile
image: topspot-swoole-image
container_name: topspot-swoole-container
restart: unless-stopped
working_dir: /var/www/
volumes:
- ./:/var/www
networks:
- topspot-network
nginx:
image: nginx:alpine
container_name: topspot-nginx
restart: unless-stopped
ports:
- 80:80
volumes:
- ./:/var/www
- ./docker-compose/nginx:/etc/nginx/conf.d/
networks:
- topspot-network
networks:
topspot-network:
driver: bridge
Solved
I solved it. First, install the Swoole package and publish it into the local project. Then run the container, and Composer sees the Swoole packages.
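In other words, a sketch of the fix, assuming the swoole:http command comes from the swooletw/laravel-swoole package (which is what normally provides it; the vendor:publish tag is taken from that package's documentation and may differ):
# on the host, inside the Laravel project
composer require swooletw/laravel-swoole
php artisan vendor:publish --tag=laravel-swoole
# rebuild so the composer install inside the image picks the package up
docker-compose up -d --build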
I am trying to write CI/CD for Symfony, but I have a problem with the vendor directory. The app is copied correctly, but I don't have vendor. I tried various approaches, but nothing works. Locally, if I put the literal image names in place of the variables, it all works fine.
My .gitlab-ci.yml
image: tiangolo/docker-with-compose
services:
- docker:dind
variables:
DOCKER_DRIVER: overlay2
IMAGE_TAG: $CI_REGISTRY_IMAGE/demo:$CI_COMMIT_REF_NAME
IMAGE_PHP_PROD: $CI_REGISTRY_IMAGE/demo:$CI_COMMIT_SHA
IMAGE_NGINX_PROD: $CI_REGISTRY_IMAGE/demo-nginx:latest
VERSION: $CI_COMMIT_SHA
stages:
- build
- deploy
before_script:
- apk add --no-cache python3 python3-dev py3-pip libffi-dev openssl-dev gcc libc-dev make
- docker login -u $CI_USER -p $CI_PASSWORD registry.gitlab.com
# build:
# stage: build
# script:
# - docker build --pull -t $IMAGE_TAG deploy
# - docker push $IMAGE_TAG
production images:
stage: build
script:
- docker build . -f deploy/php/Dockerfile -t $IMAGE_PHP_PROD
- docker push $IMAGE_PHP_PROD
- docker build . -f deploy/Dockerfile-nginx-prod -t $IMAGE_NGINX_PROD
- docker push $IMAGE_NGINX_PROD
pull:
stage: deploy
tags:
- prod
script:
- docker pull $IMAGE_PHP_PROD
- docker pull $IMAGE_NGINX_PROD
production:
stage: deploy
tags:
- deploy
before_script:
script:
- docker-compose -f deploy/docker-compose.yml up -d
docker-compose.yml
version: '3'
services:
nginx:
container_name: sf_nginx
image: $IMAGE_NGINX_PROD
restart: on-failure
ports:
- '81:80'
depends_on:
- php
php:
container_name: sf_php
image: $IMAGE_PHP_PROD
restart: on-failure
user: 1000:1000
Dockerfile for php
FROM php:7.4-fpm
RUN docker-php-ext-install pdo_mysql
RUN pecl install apcu
RUN apt-get update && \
apt-get install -y \
libzip-dev
RUN docker-php-ext-install zip
RUN docker-php-ext-enable apcu
RUN php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');" \
&& php composer-setup.php --filename=composer \
&& php -r "unlink('composer-setup.php');" \
&& mv composer /usr/local/bin/composer
WORKDIR /usr/src/app
COPY --chown=1000:1000 ./ /usr/src/app
ENV PATH="${PATH}:/usr/src/app/vendor/bin:bin"
Dockerfile for nginx
FROM nginx
ADD ./ /usr/src/app
ADD ./deploy/nginx/default.conf /etc/nginx/conf.d/default.conf
The vendor directory is missing in the php container. docker-compose runs, but my app does not work.
Am I wrong, or are you just installing Composer without ever executing it?
After && mv composer /usr/local/bin/composer you should add && composer install --no-interaction!
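Concretely, a minimal sketch of that change in the php Dockerfile. Note that the install has to run after the COPY step, otherwise composer.json and composer.lock are not yet inside the image; anything beyond --no-interaction is optional:
RUN php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');" \
    && php composer-setup.php --filename=composer \
    && php -r "unlink('composer-setup.php');" \
    && mv composer /usr/local/bin/composer
WORKDIR /usr/src/app
COPY --chown=1000:1000 ./ /usr/src/app
# install dependencies now that composer.json/composer.lock are in the image
RUN composer install --no-interaction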
I am trying to run my Laravel application inside Docker containers on my laptop (during development) and I am finding that the application is drastically slower than when running it under XAMPP, for example.
My laptop is running Windows 10 Pro (64-Bit) with i7-6700HQ CPU, 16 GB RAM and SSD.
When I run my app in docker for windows, average page load time is approx 3.5 Seconds.
Running it on local XAMPP, average page load time is approx 350 Milliseconds (0.35 Second).
For my docker setup, I use the following image/Dockerfile:
FROM alpine:3.8
MAINTAINER Latheesan Kanesamoorthy
RUN apk add \
--no-cache \
--update \
apache2 \
composer \
curl \
php7 \
php7-apache2 \
php7-curl \
php7-bcmath \
php7-dom \
php7-mbstring \
php7-pdo_mysql \
php7-session \
php7-sockets \
php7-tokenizer \
php7-xml \
php7-xmlwriter \
php7-fileinfo \
&& mkdir -p /run/apache2 \
&& ln -sf /dev/stdout /var/log/apache2/access.log \
&& ln -sf /dev/stderr /var/log/apache2/error.log
COPY ./image/*.conf /etc/apache2/conf.d/
COPY ./image/php.ini /etc/php7/conf.d/99_custom.ini
RUN mkdir -p /storage/framework/testing
RUN mkdir -p /storage/framework/views
RUN mkdir -p /storage/framework/sessions
RUN mkdir -p /storage/framework/cache/data
RUN chown -R apache:apache /storage
WORKDIR /app
COPY ./src/composer.* ./
RUN composer install -n --no-autoloader --no-scripts --no-progress --no-suggest
COPY src .
RUN composer dump-autoload -o -n
EXPOSE 80
and docker-compose.yml:
version: '2.1'
services:
mysql:
container_name: myapp-mysql
mem_limit: 512M
image: mysql:5.7
restart: always
environment:
MYSQL_ROOT_PASSWORD: myapp
MYSQL_DATABASE: myapp
MYSQL_USER: myapp
MYSQL_PASSWORD: myapp
ports:
- "35000:3306"
redis:
container_name: myapp-redis
image: redis:latest
redis-commander:
container_name: myapp-redis-commander
image: rediscommander/redis-commander:latest
hostname: redis-commander
restart: always
environment:
- REDIS_HOSTS=local:redis:6379
ports:
- "7050:8081"
links:
- redis
app:
container_name: myapp-app
mem_limit: 512M
build:
context: ""
dockerfile: image/Dockerfile
env_file:
- image/env/development
volumes:
- ./src:/app:cached
ports:
- "25000:80"
entrypoint: httpd -DFOREGROUND
links:
- mysql
- redis
and I use the following commands to boot it up:
docker-compose down --remove-orphans
docker-compose up -d --build
docker exec myapp-app composer install --prefer-dist --no-suggest
docker exec myapp-app php artisan cache:clear
docker exec myapp-app php artisan migrate:fresh --seed
As you can see, the docker version uses redis as the driver for: cache, eloquent model cache, queue and session.
Locally with XAMPP, I am simply using the file driver for all of them.
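For context, the difference between the two setups comes down to the driver settings in each environment's env file. A rough sketch of what that looks like (key names assumed for a typical Laravel .env; older releases use QUEUE_DRIVER instead of QUEUE_CONNECTION, and since there is no file queue driver, sync stands in for it locally):
# docker (e.g. image/env/development)
CACHE_DRIVER=redis
SESSION_DRIVER=redis
QUEUE_CONNECTION=redis
REDIS_HOST=redis
# local XAMPP (.env)
CACHE_DRIVER=file
SESSION_DRIVER=file
QUEUE_CONNECTION=sync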
Any idea why the performance is so slow on docker?
P.S. The reason why I want to try developing using the docker environment is so that I can keep my development and production environment identical.
I am trying to dockerise my Laravel 5.5 application using docker-compose.
Here's my docker-compose.yml file definition:
version: '2.1'
services:
# The Database
database:
image: mysql:5.7
restart: always
healthcheck:
test: ["CMD", "mysqladmin" ,"ping", "-h", "localhost"]
timeout: 20s
retries: 10
environment:
- "MYSQL_DATABASE=myapp"
- "MYSQL_USER=myapp"
- "MYSQL_PASSWORD=123456"
- "MYSQL_ROOT_PASSWORD=secret"
ports:
- "33061:3306"
# The Application
app:
depends_on:
database:
condition: service_healthy
build:
context: ./
dockerfile: ./docker-compose/app.dockerfile
volumes:
- ./:/var/www/html
environment:
- "DB_CONNECTION=mysql"
- "DB_HOST=database"
- "DB_PORT=3306"
- "DB_DATABASE=myapp"
- "DB_USERNAME=myapp"
- "DB_PASSWORD=123456"
ports:
- "8080:80"
and this is my ./docker-compose/app.dockerfile:
# Base image
FROM php:7.1-apache
# Configure system
RUN apt-get update && apt-get install -y \
libmcrypt-dev \
mysql-client \
zlib1g-dev \
--no-install-recommends \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* \
&& docker-php-ext-install mcrypt pdo_mysql
# Add php.ini and apache2.conf
COPY docker-compose/php.ini $PHP_INI_DIR/php.ini
COPY docker-compose/apache2.conf /etc/apache2/apache2.conf
# Configuring Apache
RUN rm -rf /etc/apache2/sites-available/* \
&& rm -rf /etc/apache2/sites-enabled/*
# Enable rewrite module
RUN a2enmod rewrite
# Download and install composer globally
RUN curl -s https://getcomposer.org/installer | php \
&& mv composer.phar /usr/local/bin/composer
# Add vendor binaries to PATH
ENV PATH=/var/www/html/vendor/bin:$PATH
I use the following command to start up my stack:
docker-compose up -d --build, via the Docker Quickstart Terminal on my Windows 10 machine.
Everything builds and runs fine (I checked via docker-compose ps). When I visit the app URL, I get a Forbidden error from Apache, so I logged into the container using the docker exec -it my_app_1 /bin/bash command, went into the /var/www/html directory and noticed that it is empty.
Doesn't volume mounting work on Windows?
My goal is to run a PHP script with Docker Compose.
I somehow figured out how to execute an example PHP script using a Dockerfile, like the following.
$ ls
Dockerfile test.php
$ cat Dockerfile
FROM php:7.0-cli
COPY ./test.php /tmp
WORKDIR /tmp
CMD [ "php", "./test.php" ]
$ cat test.php
<?php
phpinfo();
$ docker build -t my-php-app .
$ docker run -it --rm --name my-running-app my-php-app
https://docs.docker.com/samples/library/php/
I'm looking into how to connect to MySQL from the PHP script.
Any help would be appreciated.
Update 1
I made a little progress: I was able to connect to the MySQL container from the PHP container.
$ cat index.php
<?php
$mysqli = new mysqli("database", "admin", "12dlql*41");
echo $mysqli->server_info;
$ docker run -d --name database -e MYSQL_USER=admin -e MYSQL_PASSWORD=12dlql*41 -e MYSQL_RANDOM_ROOT_PASSWORD=true mysql:latest
$ docker run --rm -v $(pwd):/app -w /app --link database tommylau/php php index.php
5.7.21
https://www.shiphp.com/blog/2017/php-mysql-docker
Update 2
Thanks to everyone, I was able to find a way.
Dockerfile
FROM php:7.1.9-fpm
RUN apt-get update \
&& docker-php-ext-install pdo_mysql mysqli
RUN apt-get update \
&& apt-get install -y libmemcached-dev zlib1g-dev \
&& pecl install memcached-3.0.3 \
&& docker-php-ext-enable memcached opcache
docker-compose.yml
version: '3.4'
services:
myapp_memcached:
image: memcached:latest
container_name: memcached
myapp_mysql:
image: mysql:latest
container_name: database
volumes:
- ./docker/:/etc/mysql/conf.d
- ./docker/:/docker-entrypoint-initdb.d
environment:
- MYSQL_RANDOM_ROOT_PASSWORD=true
- MYSQL_DATABASE=counterparty
- MYSQL_USER=admin
- MYSQL_PASSWORD=12dlql*41
myapp_php:
build: .
container_name: myapp
working_dir: /app
volumes:
- ./:/app
external_links:
- database
- memcached
You should set up a docker-compose.yml file, which would interconnect your containers into one network.
For example:
version: '3.4'
services:
  myapp_php:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: myapp_php
  myapp_mysql:
    image: mysql:latest
    container_name: myapp_mysql
    environment:
      - MYSQL_ROOT_PASSWORD=somerandompassword
      - MYSQL_DATABASE=database
      - MYSQL_USER=admin
      - MYSQL_PASSWORD=12dlql*41
Now, when you run docker-compose up -d, the stack should come up with MySQL pre-configured with a database and your credentials.
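With that compose file, the PHP container can reach MySQL by its service/container name on the Compose network, exactly as in Update 1. A minimal sketch, reusing the credentials from the environment block above:
<?php
// "myapp_mysql" resolves via Compose's network DNS; the credentials match the
// MYSQL_* variables declared for the myapp_mysql service above.
$mysqli = new mysqli("myapp_mysql", "admin", "12dlql*41", "database");
echo $mysqli->server_info;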