laravel docker php:7.4-fpm-alpine3.12 500 error - php

I have a problem with docker-compose up after I git clone the code repository on AWS: the files become owned by root instead of www. I think it is because of the volumes, but I don't know how to fix it. Any help?
The start_script.sh part is not working either:
php artisan config:clear
php artisan cache:clear
php artisan key:generate
php artisan migrate
php artisan db:seed
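For reference, a quick way to check whether the bind mount is what flips the ownership (a hedged sketch; the app container name comes from the compose file below, and 1000:1000 matches the www user created in the Dockerfile):
# compare ownership of the host clone with what the container sees
ls -aln                               # on the host: files owned by whoever ran git clone
docker exec app ls -aln /var/www      # in the container: same owners, because ./:/var/www is a bind mount
# one possible fix: give the host clone to uid/gid 1000 (the www user in the image)
sudo chown -R 1000:1000 .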
php-fpm dockerfile
FROM php:7.4-fpm-alpine3.12
USER root
WORKDIR /var/www
RUN apk update && apk add --no-cache $PHPIZE_DEPS \
build-base shadow nano curl gcc git bash vim \
php7 \
php7-fpm \
php7-common \
php7-pdo \
php7-pdo_mysql \
php7-mysqli \
php7-mcrypt \
php7-mbstring \
php7-xml \
php7-openssl \
php7-json \
php7-phar \
php7-zip \
php7-gd \
php7-dom \
php7-session \
php7-zlib \
haveged
# Install extensions
RUN docker-php-ext-install pdo pdo_mysql
RUN docker-php-ext-enable pdo_mysql
# Remove Cache
RUN rm -rf /var/cache/apk/*
RUN mkdir -p /usr/src/php/ext/redis \
&& curl -L https://github.com/phpredis/phpredis/archive/5.3.4.tar.gz | tar xvz -C /usr/src/php/ext/redis --strip 1 \
&& echo 'redis' >> /usr/src/php-available-exts \
&& docker-php-ext-install redis
# Copy config
COPY ./config/php/local.ini /usr/local/etc/php/conf.d/local.ini
# verify that the binary w
RUN addgroup -g 1000 -S www && \
adduser -u 1000 -S www -G www -s /bin/sh -D www
COPY --chown=www:www . /var/www/
RUN ["chmod", "+x", "./start_script.sh"]
EXPOSE 9000
RUN chmod -R 775 /var/www/storage
RUN ls -al
USER www
# Run php-fpm
CMD ["./start_script.sh"]
start_script.sh
#!/bin/bash
set -m
#turn on bash's job control
#Start the primary process and put it in the background
php-fpm -y /usr/local/etc/php-fpm.conf -R &
#Start the helper process
php artisan config:clear
php artisan cache:clear
php artisan key:generate
php artisan migrate
php artisan db:seed
#the my_helper_process might need to know how to wait on the
#primary process to start before it does its work and returns
ls -al
#now we bring the primary process back into the foreground
# and leave it there
fg %1
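As an aside, a simpler pattern than backgrounding php-fpm and pulling it back with fg is to run the one-off artisan steps first and then exec php-fpm as the main process. A hedged sketch (not the script from the question; some artisan steps are omitted for brevity):
#!/bin/sh
set -e
# one-off setup steps; with set -e the container stops on the first failure
php artisan config:clear
php artisan migrate --force
# hand PID 1 over to php-fpm so it runs in the foreground and receives signals
exec php-fpm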
docker-compose.yml
version: "3"
services:
app:
build:
context: .
dockerfile: ./docker/php-fpm/Dockerfile
image: docker/laravel
container_name: app
tty: true
restart: unless-stopped
environment:
DB_HOST: db
DB_PASSWORD: password
SESSION_DRIVER: redis
REDIS_HOST: redis
volumes:
- ./:/var/www
- ./config/php/local.ini:/usr/local/etc/php/conf.d/local.ini
depends_on:
- db
webserver:
build:
context: .
dockerfile: ./docker/nginx/Dockerfile
image: docker/nginx
container_name: webserver
restart: unless-stopped
ports:
- "8080:80"
volumes:
- ./:/var/www
- ./config/nginx/conf.d/:/etc/nginx/conf.d/
depends_on:
- app
db:
image: mysql:5.7
container_name: db
environment:
MYSQL_DATABASE: laravel
MYSQL_ROOT_PASSWORD: password
tty: true
ports:
- "3306:3306"
volumes:
- dbdata:/var/lib/mysql
redis:
image: redis:latest
container_name: redis
volumes:
dbdata:
driver: local
When I build php:7.4-fpm-alpine3.12 and run ls -al, the files are owned by www. After docker-compose up, ls -al shows them owned by root instead.
When I add dd() in app\Exceptions\Handler.php:
public function register()
{
    $this->reportable(function (Throwable $e) {
        dd($e);
    });
}
I also added this line to the config object of the composer.json file:
"config": {
    "platform-check": false
},
and ran these commands:
docker exec app php artisan config:cache
docker run --rm -v ${PWD}:/app composer dump-autoload
How can I fix this?

Related

Laravel Swoole Docker "There are no commands defined in the "swoole" namespace"

I want to configure a Swoole Laravel project. I build the Dockerfile and it builds successfully, but when I bring the stack up with the compose file I get this error:
There are no commands defined in the "swoole" namespace.
This is my first experience with Swoole, and I don't understand what the problem is. How can I solve it?
This is the Dockerfile:
FROM php:8.1-fpm-alpine
# Install laravel requirement PHP package
RUN apk add --no-cache --virtual .build-deps $PHPIZE_DEPS libzip-dev sqlite-dev \
libpng-dev libxml2-dev oniguruma-dev libmcrypt-dev curl curl-dev libcurl postgresql-dev
RUN docker-php-ext-install -j$(nproc) gd bcmath zip pdo_mysql pdo_pgsql
RUN pecl install xdebug swoole && docker-php-ext-enable swoole
# Install composer
ENV COMPOSER_HOME /composer
ENV PATH ./vendor/bin:/composer/vendor/bin:$PATH
ENV COMPOSER_ALLOW_SUPERUSER 1
RUN curl -s https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin/ --filename=composer
# Install PHP_CodeSniffer
WORKDIR /app
COPY ./ ./
USER root
RUN chown -R www-data /app/storage
RUN chmod -R ug+w /app/storage
RUN chmod 777 -R /app/storage
RUN chmod 777 -R /app/public
RUN composer install
RUN php artisan optimize
CMD php artisan swoole:http start
EXPOSE 1215
And this is a docker-compose.yaml file
version: "3.7"
services:
app:
build:
args:
user: www-data
uid: 1000
context: ./
dockerfile: Dockerfile
image: topspot-swoole-image
container_name: topspot-swoole-container
restart: unless-stopped
working_dir: /var/www/
volumes:
- ./:/var/www
networks:
- topspot-network
nginx:
image: nginx:alpine
container_name: topspot-nginx
restart: unless-stopped
ports:
- 80:80
volumes:
- ./:/var/www
- ./docker-compose/nginx:/etc/nginx/conf.d/
networks:
- topspot-network
networks:
topspot-network:
driver: bridge
Solved
I solved it. First install Swoole and publish the package into the local project; then run the container, and Composer sees the Swoole packages.
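For readers hitting the same error: the swoole:http command comes from a package such as swooletw/laravel-swoole, so "install and publish" typically looks like the sketch below. The package name and provider class are assumptions here, not something stated in the question:
# require the package so artisan gains the swoole:http namespace
composer require swooletw/laravel-swoole
# publish its config into the local project (provider class assumed)
php artisan vendor:publish --provider="SwooleTW\Http\LaravelServiceProvider"
# rebuild and restart so the code mounted into the container includes the package
docker-compose build app && docker-compose up -d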

Setup docker to use php-fpm + Laravel + supervisor in one container

I need Supervisor for my Laravel queues, but Supervisor starts as the root user and I want to run PHP as another user for safety. I could not find a way to start PHP as a different user, such as the standard user from the image, www-data.
I also have the application files owned by a backend user; I read that it is safer to have the PHP files owned by one user and to run php-fpm as another.
Question: is it normal in production to run php-fpm as the www-data user, or should I use another user for it? Or can I combine php-fpm with Supervisor (and cron in the future) in some other way? And if I do have to change the user, how do I start PHP as that other user?
Dockerfile
FROM php:8.1-fpm
# Copy composer.lock and composer.json
COPY composer.lock composer.json /var/www/
# Set working directory
WORKDIR /var/www
# Install system dependencies
RUN apt-get update && apt-get install -y \
build-essential \
git \
curl \
libpng-dev \
libjpeg62-turbo-dev \
libfreetype6-dev \
locales \
libonig-dev \
libxml2-dev \
zip \
jpegoptim optipng pngquant gifsicle \
vim \
unzip \
libpq-dev \
zlib1g-dev \
libzip-dev \
supervisor \
sudo
# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
RUN pecl install -o -f redis \
&& rm -rf /tmp/pear \
&& docker-php-ext-enable redis
# Install PHP extensions
RUN docker-php-ext-install intl pdo pdo_pgsql pgsql mbstring exif pcntl bcmath gd zip
# Get latest Composer
COPY --from=composer:latest /usr/bin/composer /usr/bin/composer
# Add user for laravel application
RUN groupadd -g 1000 backend
RUN useradd -u 1000 -ms /bin/bash -g backend backend
# Copy existing application directory contents
COPY . /var/www
# Copy existing application directory permissions
COPY --chown=backend:backend . /var/www
RUN ["chmod", "+x", "./my_wrapper_script.sh"]
RUN ["chown", "-R", "www-data:www-data", "./storage/framework"]
RUN ["chown", "-R", "www-data:www-data", "./storage/logs"]
COPY --chown=root:root docker-compose/app/supervisor.conf /etc/supervisor/conf.d/supervisord.conf
# Expose port 9000 and start php-fpm server
EXPOSE 9000
CMD ./my_wrapper_script.sh
Docker-compose
version: "3.7"
services:
  app:
    build:
      context: ./
      dockerfile: Dockerfile
    image: didido
    container_name: didido-app
    restart: unless-stopped
    working_dir: /var/www/
    tty: true
    environment:
      SERVICE_NAME: app
      SERVICE_TAGS: dev
    volumes:
      - ./:/var/www
    networks:
      - didido
  db:
    image: postgis/postgis:14-3.1
    restart: always
    container_name: didido-db
    networks:
      - didido
    environment:
      - POSTGRES_DB=${DB_DATABASE}
      - POSTGRES_USER=${DB_USERNAME}
      - POSTGRES_PASSWORD=${DB_PASSWORD}
    volumes:
      - ../2. Init Database:/docker-entrypoint-initdb.d
      - ./data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres -d didido"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 10s
  nginx:
    image: nginx:1.17-alpine
    container_name: didido-nginx
    restart: unless-stopped
    tty: true
    ports:
      - 80:80
    depends_on:
      - nodejs
    volumes:
      - ./:/var/www
      - ./docker-compose/nginx:/etc/nginx/conf.d
    networks:
      - didido
networks:
  didido:
    driver: bridge
my_wrapper_script.sh
#!/bin/bash
# Start the first process
php-fpm &
# Start the second process
supervisord &
# Wait for any process to exit
wait -n
# Exit with status of process that exited first
exit $?
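One common alternative is to make supervisord itself the foreground process and let it manage php-fpm and the queue workers, dropping privileges per program with a user=www-data line in supervisord.conf. A hedged sketch of the wrapper in that setup (the config path matches the COPY in the Dockerfile above; the program sections it would contain are assumptions):
#!/bin/bash
# supervisord stays in the foreground (-n) as PID 1 and supervises php-fpm,
# queue:work, etc.; each [program:x] section can set user=www-data
exec /usr/bin/supervisord -n -c /etc/supervisor/conf.d/supervisord.conf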

docker entrypoint sh file restarting

I am testing Docker with my PHP project. Everything is OK in testing, but when I add an ENTRYPOINT the container keeps restarting.
Here is my docker-compose file:
version: "3.7"
services:
#Laravel App
app:
build:
args:
user: maruan
uid: 1000
context: ./docker/7.4
dockerfile: Dockerfile
# command: sh -c "start-container.sh"
image: laravel-app
restart: unless-stopped
working_dir: /var/www/
volumes:
- ./:/var/www
networks:
- app-network
#Nginx Service
nginx:
image: nginx:alpine
restart: unless-stopped
ports:
- 8000:80
volumes:
- ./:/var/www
- ./docker/7.4/nginx/conf.d:/etc/nginx/conf.d/default.conf
networks:
- app-network
#Mysl Service
db:
image: mysql:8
restart: unless-stopped
environment:
MYSQL_DATABASE: ${DB_DATABASE}
MYSQL_ROOT_PASSWORD: ${DB_PASSWORD}
MYSQL_PASSWORD: ${DB_PASSWORD}
MYSQL_USER: ${DB_USERNAME}
networks:
- app-network
networks:
app-network:
driver: bridge
Dockerfile
FROM php:7.4-fpm
# Arguments defined in docker-compose.yml
ARG user
ARG uid
WORKDIR /var/www
ENV TZ=UTC
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
# Install system dependencies
RUN apt-get update \
&& apt-get install -y --no-install-recommends build-essential mariadb-client libfreetype6-dev libjpeg-dev libpng-dev libwebp-dev zlib1g-dev libzip-dev gcc g++ make vim unzip git jpegoptim optipng pngquant gifsicle locales libonig-dev \
&& docker-php-ext-configure gd \
&& docker-php-ext-install gd \
&& apt-get install -y --no-install-recommends libgmp-dev \
&& docker-php-ext-install gmp \
&& docker-php-ext-install mysqli pdo_mysql zip \
&& docker-php-ext-enable opcache \
&& apt-get autoclean -y \
&& rm -rf /var/lib/apt/lists/* \
&& rm -rf /tmp/pear/
COPY . /var/www
# Get latest Composer
COPY --from=composer:latest /usr/bin/composer /usr/bin/composer
# Create system user to run Composer and Artisan Commands
RUN useradd -G www-data,root -u $uid -d /home/$user $user
RUN mkdir -p /home/$user/.composer && \
chown -R $user:$user /home/$user
COPY start-container.sh /usr/local/bin/start-container.sh
RUN chmod +x /usr/local/bin/start-container.sh
ENTRYPOINT ["start-container.sh"]
start-container.sh file
#!/usr/bin/env bash
set -e
cd /var/www
php artisan optimize
php artisan view:cache
#composer install && composer dump-autoload
exec "$#"
I also checked the logs for that container:
Configuration cached successfully!
Route cache cleared!
Routes cached successfully!
Files cached successfully!
Compiled views cleared!
Blade templates cached successfully!
I think the problem is that the container restarts after running the start-container.sh file. When I googled, I saw that some people run php artisan commands from an ENTRYPOINT shell script.
What should I do so that the container does not restart again and again when using an ENTRYPOINT shell script?
Your entrypoint script ends with the line exec "$@". This runs the image's CMD, and is generally a best practice. However, your image doesn't have a CMD, so that command just expands to a bare exec, which causes the main container process to exit.
An image built FROM php:fpm often won't have a CMD line since the base image's Dockerfile specifies CMD ["php-fpm"]; it is enough to COPY your application code into a derived image, and the base image's CMD knows how to run it. However, setting ENTRYPOINT in a derived image resets the CMD from the base image (see the note in the Dockerfile documentation discussing CMD and ENTRYPOINT together). This means you need to repeat the base image's CMD:
ENTRYPOINT ["start-container.sh"]
CMD ["php-fpm"] # duplicated from base image, because you reset ENTRYPOINT

Need help in executing powershell commands in dockerized laravel

I want to access vSphere config info from a PowerCLI script in Laravel, but I do not know how to make them work together in Docker. Whatever I do, the error is similar to this:
The command "'pwsh' '-v'" failed. Exit Code: 127 (Command not found). Working directory: /var/www/public. Output: ================ Error Output: ================ sh: 1: exec: pwsh: not found
As a last resort, I am asking here.
docker-compose.yml:
version: '3'
services:
  #PHP Service
  app:
    build:
      context: .
      dockerfile: Dockerfile
    image: vapp
    container_name: app
    restart: unless-stopped
    tty: true
    environment:
      SERVICE_NAME: app
      SERVICE_TAGS: dev
    working_dir: /var/www
    volumes:
      - ./:/var/www
      - ./php/local.ini:/usr/local/etc/php/conf.d/local.ini
    networks:
      - app-network
  #Nginx Service
  webserver:
    image: nginx:alpine
    container_name: webserver
    restart: unless-stopped
    tty: true
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./:/var/www
      - ./nginx/conf.d/:/etc/nginx/conf.d/
    networks:
      - app-network
  #MySQL Service
  db:
    image: mysql:5.7.22
    container_name: db
    restart: unless-stopped
    tty: true
    ports:
      - "3306:3306"
    environment:
      MYSQL_DATABASE: vapp
      MYSQL_ROOT_PASSWORD: vapp
      SERVICE_TAGS: dev
      SERVICE_NAME: mysql
      TZ: Asia/Kolkata
    volumes:
      - dbdata:/var/lib/mysql/
      - ./mysql/my.cnf:/etc/mysql/my.cnf
      - ./mysql:/var/lib/mysql-files/
    networks:
      - app-network
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    container_name: phpMyAdmin
    restart: always
    ports:
      - "8080:80"
    environment:
      MYSQL_ROOT_PASSWORD: vapp
      PMA_HOST: db
    external_links:
      - mariadb:mariadb
    volumes:
      - "./phpmyadmin/sessions:/sessions"
    networks:
      - app-network
#Docker Networks
networks:
  app-network:
    driver: bridge
#Volumes
volumes:
  dbdata:
    driver: local
Dockerfile
FROM mcr.microsoft.com/powershell:latest
WORKDIR ./
FROM php:7.4-fpm
# Copy composer.lock and composer.json
COPY composer.lock composer.json /var/www/
# Set working directory
WORKDIR /var/www
# Install dependencies
RUN apt-get update && apt-get install -y \
build-essential \
libpng-dev \
libjpeg62-turbo-dev \
libfreetype6-dev \
libonig-dev \
locales \
libzip-dev \
zip \
jpegoptim optipng pngquant gifsicle \
vim \
unzip \
git \
curl
RUN snap install powershell --classic
# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
# Install extensions
RUN docker-php-ext-install pdo_mysql mbstring zip exif pcntl mysqli
RUN docker-php-ext-configure gd --enable-gd --with-freetype=/usr/include/ --with-jpeg=/usr/include/
RUN docker-php-ext-install gd
RUN docker-php-ext-enable mysqli
# Install composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
# Add user for laravel application
RUN groupadd -g 1000 www
RUN useradd -u 1000 -ms /bin/bash -g www www
# Copy existing application directory contents
COPY . /var/www
# Copy existing application directory permissions
COPY --chown=www:www . /var/www
# Change current user to www
USER www
# Expose port 9000 and start php-fpm server
EXPOSE 9000
CMD ["php-fpm"]
Controller:
use Symfony\Component\Process\Process;
use Symfony\Component\Process\Exception\ProcessFailedException;

// $process = new Process(['ls', '-lsa']); // this works, but the next one does not
$process = new Process(['pwsh', '-v']);
$process->run();

// executes after the command finishes
if (!$process->isSuccessful()) {
    throw new ProcessFailedException($process);
}

echo $process->getOutput();
I know how the Process class works; the code above fails anyway. I need help making PowerShell and Laravel work together in Docker.
Is there anything wrong with the Docker configuration, or with the controller code that accesses PowerShell?
You are using a multi-stage build in the Dockerfile. A multi-stage build can copy artifacts between stages, but you don't copy anything, so pwsh is never copied into the PHP image (the second stage).
You could remove the first stage (FROM mcr.microsoft.com/powershell:latest) and install PowerShell properly inside the PHP image. For example:
RUN apt-get update && apt-get install -y \
wget
RUN wget https://packages.microsoft.com/config/debian/10/packages-microsoft-prod.deb && \
dpkg -i packages-microsoft-prod.deb
RUN apt-get update && apt-get install -y \
powershell
The PHP image uses Debian 10, so here are the instructions: https://learn.microsoft.com/en-us/powershell/scripting/install/installing-powershell-core-on-linux?view=powershell-7.1#debian-10
Check pwsh inside the container first:
docker exec -it app bash
pwsh
Thanks to @konstantin-bogomolov.
The cause was an incorrect Dockerfile: PowerShell was not actually installed in the container.
For those who stop by, the working Dockerfile is below.
FROM php:7.4-fpm
# Copy composer.lock and composer.json
COPY composer.lock composer.json /var/www/
# Set working directory
WORKDIR /var/www
# Install dependencies
RUN apt-get update && apt-get install -y \
build-essential \
libpng-dev \
libjpeg62-turbo-dev \
libfreetype6-dev \
libonig-dev \
locales \
libzip-dev \
zip \
jpegoptim optipng pngquant gifsicle \
vim \
unzip \
git \
curl \
wget \
apt-utils
# Download the Microsoft repository GPG keys
RUN wget https://packages.microsoft.com/config/debian/10/packages-microsoft-prod.deb
# Register the Microsoft repository GPG keys
RUN dpkg -i packages-microsoft-prod.deb
# Update the list of products
RUN apt-get update
# Install PowerShell
RUN apt-get install -y powershell
# Verify that PowerShell is installed
RUN pwsh
# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
# Install extensions
RUN docker-php-ext-install pdo_mysql mbstring zip exif pcntl mysqli
RUN docker-php-ext-configure gd --enable-gd --with-freetype=/usr/include/ --with-jpeg=/usr/include/
RUN docker-php-ext-install gd
RUN docker-php-ext-enable mysqli
# Install composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
# Add user for laravel application
RUN groupadd -g 1000 www
RUN useradd -u 1000 -ms /bin/bash -g www www
# Copy existing application directory contents
COPY . /var/www
# Copy existing application directory permissions
COPY --chown=www:www . /var/www
# Change current user to www
USER www
# Expose port 9000 and start php-fpm server
EXPOSE 9000
CMD ["php-fpm"]

Laravel application performance issue on Docker for Windows

I am trying to run my Laravel application inside Docker containers on my laptop (during development), and I find that the application is drastically slower than when running it with XAMPP, for example.
My laptop is running Windows 10 Pro (64-Bit) with i7-6700HQ CPU, 16 GB RAM and SSD.
When I run my app in Docker for Windows, the average page load time is approximately 3.5 seconds.
Running it on local XAMPP, the average page load time is approximately 350 milliseconds (0.35 seconds).
For my docker setup, I use the following image/Dockerfile:
FROM alpine:3.8
MAINTAINER Latheesan Kanesamoorthy
RUN apk add \
--no-cache \
--update \
apache2 \
composer \
curl \
php7 \
php7-apache2 \
php7-curl \
php7-bcmath \
php7-dom \
php7-mbstring \
php7-pdo_mysql \
php7-session \
php7-sockets \
php7-tokenizer \
php7-xml \
php7-xmlwriter \
php7-fileinfo \
&& mkdir -p /run/apache2 \
&& ln -sf /dev/stdout /var/log/apache2/access.log \
&& ln -sf /dev/stderr /var/log/apache2/error.log
COPY ./image/*.conf /etc/apache2/conf.d/
COPY ./image/php.ini /etc/php7/conf.d/99_custom.ini
RUN mkdir -p /storage/framework/testing
RUN mkdir -p /storage/framework/views
RUN mkdir -p /storage/framework/sessions
RUN mkdir -p /storage/framework/cache/data
RUN chown -R apache:apache /storage
WORKDIR /app
COPY ./src/composer.* ./
RUN composer install -n --no-autoloader --no-scripts --no-progress --no-suggest
COPY src .
RUN composer dump-autoload -o -n
EXPOSE 80
and docker-compose.yml:
version: '2.1'
services:
  mysql:
    container_name: myapp-mysql
    mem_limit: 512M
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: myapp
      MYSQL_DATABASE: myapp
      MYSQL_USER: myapp
      MYSQL_PASSWORD: myapp
    ports:
      - "35000:3306"
  redis:
    container_name: myapp-redis
    image: redis:latest
  redis-commander:
    container_name: myapp-redis-commander
    image: rediscommander/redis-commander:latest
    hostname: redis-commander
    restart: always
    environment:
      - REDIS_HOSTS=local:redis:6379
    ports:
      - "7050:8081"
    links:
      - redis
  app:
    container_name: myapp-app
    mem_limit: 512M
    build:
      context: ""
      dockerfile: image/Dockerfile
    env_file:
      - image/env/development
    volumes:
      - ./src:/app:cached
    ports:
      - "25000:80"
    entrypoint: httpd -DFOREGROUND
    links:
      - mysql
      - redis
and I use the following commands to boot it up:
docker-compose down --remove-orphans
docker-compose up -d --build
docker exec myapp-app composer install --prefer-dist --no-suggest
docker exec myapp-app php artisan cache:clear
docker exec myapp-app php artisan migrate:fresh --seed
As you can see, the Docker version uses Redis as the driver for the cache, Eloquent model cache, queue, and session.
Locally with XAMPP, I simply use the file driver for all of them.
Any idea why the performance is so slow on Docker?
P.S. The reason I want to try developing in the Docker environment is so that I can keep my development and production environments identical.
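For what it's worth, the usual culprit on Docker for Windows is the ./src:/app bind mount crossing the Windows/Linux filesystem boundary, which makes Laravel's many small file reads expensive. A hedged sketch of the common mitigation, keeping the project inside the WSL 2 filesystem (the repository URL and path below are placeholders):
# from a WSL 2 shell, keep the code on the Linux filesystem instead of the Windows drive
git clone https://example.com/myapp.git ~/myapp    # placeholder URL
cd ~/myapp
docker-compose up -d --build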
