Docker Laravel Nginx, storage goes to 404 page - php

I'm facing an issue when opening files uploaded from my web app built with Laravel 7.
The files are stored correctly in my app.
I've also tried running the project locally without Docker, and everything works fine; I can open the files without any issue.
In Docker, I have three containers (app, web, db).
I've created the symlink using php artisan storage:link.
My static assets are accessible.
Here is my docker-compose.yml:
version: '2'
services:
  # The Application
  app:
    container_name: app
    build:
      context: ./
      dockerfile: development/app.dockerfile
    volumes:
      - ./:/var/www
    env_file: '.env.dev'
    environment:
      - "DB_HOST=database"

  # The Web Server
  web:
    container_name: web
    build:
      context: ./
      dockerfile: development/web.dockerfile
    volumes:
      - ./storage/logs/:/var/log/nginx
    ports:
      - 8990:80

  # The Database
  database:
    container_name: db
    image: mariadb:10.4
    volumes:
      - dbdata:/var/lib/mysql
    environment:
      - "MYSQL_DATABASE=database"
      - "MYSQL_USER=..."
      - "MYSQL_PASSWORD=..."
    ports:
      - 8991:3306

volumes:
  dbdata:
I'd be grateful if someone could help me with this issue.

Try adding this to the scripts section of your composer.json:
"scripts": {
    "post-install-cmd": [
        "ln -sr storage/app/public public/storage"
    ]
}
NB: for the full discussion, see this link:
https://github.com/laravel/ideas/issues/34#issuecomment-208895323
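A related thing worth checking (my reading of the compose file above, not something this answer covers): the web container never mounts the project, so even a correct symlink has nothing to resolve to on the nginx side. A minimal sketch of the web service with the code mounted, assuming the nginx config's root points into /var/www (e.g. /var/www/public), mirroring the app container:
web:
  container_name: web
  build:
    context: ./
    dockerfile: development/web.dockerfile
  volumes:
    # assumed: same bind mount as the app container, so nginx can serve /storage
    - ./:/var/www
    - ./storage/logs/:/var/log/nginx
  ports:
    - 8990:80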

Related

macOS - localhost file not found 404

Can someone please help?
I was successfully running my Symfony project via Docker containers. Suddenly, when I access http://localhost/ I get the "File not found." error.
I know that it means the system cannot locate my files, but I am not sure what happened.
I see that my containers are built and running okay.
I also get the same message when I try to test the app endpoints through Postman.
I am on Mac Monterey 12.4.
Everything was working fine a couple of hours ago. I just switched branches to change something, then switched back. The problem occurs on both branches.
Can someone help? I do not know what to do.
Docker config:
services:
  db:
    image: postgres:${POSTGRES_VERSION:-12}-alpine
    environment:
      POSTGRES_DB: ${POSTGRES_DB:-name}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-pass}
      POSTGRES_USER: ${POSTGRES_USER:-postgres}
    volumes:
      - $PWD/postgres-data:/var/lib/postgresql/data:rw
    profiles:
      - db-in-docker
    ports:
      - "5432:5432"
    networks:
      - symfony

  redis:
    image: "redis:alpine"
    command: redis-server /usr/local/etc/redis/redis.conf
    ports:
      - "6379:6379"
    volumes:
      - $PWD/redis-data:/var/lib/redis
      - $PWD/redis/redis.conf:/usr/local/etc/redis/redis.conf
    environment:
      - REDIS_REPLICATION_MODE=master
    networks:
      - symfony

  php:
    container_name: "backend_php"
    build:
      context: ..
      dockerfile: docker/php/Dockerfile
      target: dev
      args:
        TIMEZONE: ${TIMEZONE}
    volumes:
      - symfony_docker_app_sync:/var/www/symfony/
    depends_on:
      - redis
    networks:
      - symfony

  nginx:
    build:
      context: ./nginx
    volumes:
      - ../:/var/www/symfony/
    ports:
      - 80:80
    depends_on:
      - php
    networks:
      - symfony
    env_file:
      - .env.nginx.local
First of all: why don't you use the built-in Symfony server for local development? In any case, what does the Docker configuration for your web server look like?
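One debugging step worth trying (my suggestion based on the compose file, not part of the answer above): the php service mounts a named sync volume, symfony_docker_app_sync, while nginx bind-mounts the source tree directly, so a stale sync volume after switching branches would produce exactly this kind of "File not found." mismatch. A sketch that temporarily swaps the sync volume for a plain bind mount to rule that out:
php:
  container_name: "backend_php"
  build:
    context: ..
    dockerfile: docker/php/Dockerfile
    target: dev
    args:
      TIMEZONE: ${TIMEZONE}
  volumes:
    # assumed: plain bind mount matching the path nginx already uses
    - ../:/var/www/symfony/
  depends_on:
    - redis
  networks:
    - symfony
If the site comes back with the bind mount, recreating the sync volume (for example with docker volume rm symfony_docker_app_sync and restarting the sync tool) is the likely fix.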

Why docker not syncing files inside container on Windows 10?

I have an issue after the last Docker update (it seems so) on Windows 10 (local development). When I change files in PhpStorm (and in other editors: Sublime, Notepad++), after a while the files inside the container stop receiving the changes.
Steps that help for a while:
If I completely shut down all the containers and then bring them up again: docker-compose down && docker-compose up
If I get into the php-fpm container and run touch file.php on a file that didn't sync (that file is immediately updated).
What I tried that didn't help:
I restarted the php-fpm and nginx containers: docker-compose restart php-fpm nginx (yes, it's strange, because down|up for all containers helps)
I changed the PhpStorm setting Use safe write (save changes to a temporary file first)
I also checked the inode of the file inside the container with ls -lai file.php. Before the changes worked and after they broke, I had the same inode number. There is no fixed number of changes I must make to break syncing; it's random, sometimes 2 changes are enough.
I have:
Docker version 19.03.5, build 633a0ea
docker-compose version 1.25.2, build 698e2846
docker-compose.yml
version: '3'
services:
  nginx:
    container_name: pr_kpi-nginx
    build:
      context: ./
      dockerfile: docker/nginx.docker
    volumes:
      - ./:/var/www/kpi
      - ./docker/nginx/default.conf:/etc/nginx/conf.d/default.conf
      - ./docker/nginx/fastcgi.conf:/etc/nginx/fastcgi.conf
    ports:
      - "8081:80"
    links:
      - php-fpm
    networks:
      - internal

  php-fpm:
    container_name: pr_kpi-php-fpm
    build:
      context: ./
      dockerfile: docker/php-fpm.docker
    volumes:
      - ./:/var/www/kpi
    links:
      - kpi-mysql
    environment:
      # 192.168.221.1 -> host.docker.internal for Mac and Windows
      XDEBUG_CONFIG: "remote_host=host.docker.internal remote_enable=1"
      PHP_IDE_CONFIG: "serverName=Docker"
    networks:
      - internal

  mailhog:
    container_name: pr_kpi-mailhog
    image: mailhog/mailhog
    restart: always
    ports:
      # smtp
      - "1025:1025"
      # http
      - "8025:8025"
    networks:
      - internal

  kpi-mysql:
    container_name: pr_kpi-kpi-mysql
    image: mysql:5.7
    command: mysqld --sql_mode="STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
    volumes:
      - ./docker/storage/kpi-mysql:/var/lib/mysql
    environment:
      # We must change prod secrets, this is not a good approach
      - "MYSQL_ROOT_PASSWORD=pass"
      - "MYSQL_USER=user"
      - "MYSQL_PASSWORD=user_pass"
      - "MYSQL_DATABASE=kpi_db"
    ports:
      - "33061:3306"
    networks:
      - internal

  kpi-npm:
    container_name: pr_kpi-npm
    build:
      context: ./
      dockerfile: docker/npm.docker
    volumes:
      - ./:/var/www/kpi
      - /var/www/kpi/admin/node_modules
    ports:
      - "4200:4200"
    networks:
      - internal
    tty: true

# For xdebug
networks:
  internal:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 192.168.221.0/28
P.S. There is an open issue:
https://github.com/docker/for-win/issues/5530
P.P.S. We needed to update Docker Desktop from 2.2.0.0 to 2.2.0.3; it seems that fixed it.
I have a separate container for syncing my folder:
app:
  image: httpd:2.4.38
  volumes:
    - ./:/var/www/html
  command: "echo true"
I just use the basic Apache image, though you could use anything really. Then, in my actual containers, I use the following volumes_from key:
awesome.scot:
  build: ./build/httpd
  links:
    - php
  ports:
    - 80:80
    - 443:443
  volumes_from:
    - app

php:
  build: ./build/php
  ports:
    - 9000
    - 9001
  volumes_from:
    - app
  links:
    - mariadb
    - mail
  environment:
    APPLICATION_ENV: 'development'
I've never had an issue with this setup; files always sync fast, and I have tested it on both Mac OS X and MS Windows.
If you're interested, here is my full LAMP stack on Github https://github.com/delboy1978uk/lamp
I have had the same issue on Windows 10 since 31st January.
I commented out a line in PhpStorm and checked it in the container using vim.
The changes were not there.
If I run docker-compose down and up, the changes appear in the container.
Docker version 19.03.5, build 633a0ea
docker-compose version 1.25.4, build 8d51620a
Nothing has changed in my docker-compose.yml since 2018.

How to use composer with docker-compose?

I am configuring the docker-compose.yml file and I want to run a PHP stack that contains Elasticsearch, Redis, Symfony, and Composer.
The problem is that I don't know how I can use Composer with Docker, because some Composer features need PHP and certain PHP extensions. I don't want to build a single new image with Nginx, PHP, Composer, and the PHP extensions all installed on it; I want to have each of them in a separate image.
What I have tried so far is this:
version: '2'
services:
  nginx:
    image: tutum/nginx
    ports:
      - "80:80"
    volumes:
      - ./nginx/default:/etc/nginx/sites-available/default
      - ./nginx/default:/etc/nginx/sites-enabled/default
      - ./logs/nginx-error.log:/var/log/nginx/error.log
      - ./logs/nginx-access.log:/var/log/nginx/access.log
      - ./app:/usr/share/nginx/html

  phpfpm:
    image: php:fpm
    ports:
      - 9000:9000
    volumes:
      - ./app:/usr/share/nginx/html

  composer:
    image: composer/composer:php7
    command: install
    volumes:
      - ./app:/app

  elastic2.4.4:
    image: elasticsearch:2.4.4
    ports:
      - 9200:9200
    volumes:
      - ./esdata1:/usr/share/elasticsearch/data

  redis:
    image: redis:3.2
    ports:
      - 6379:6379
but this won't install dependencies.
I set up my docker-compose.yml file so that one container would use the composer/composer image and execute composer install in a shared directory. All of the other images would then be able to access the vendor directory that Composer created. The tricky part was realizing that the composer/composer image assumes the composer.json file will be in an /app directory. I had to override this behavior by specifying my shared directory as the working_dir instead:
version: '3'
services:
  #=====================#
  # nginx proxy service #
  #=====================#
  nginx_proxy:
    image: nginx:alpine
    networks:
      - test_network
    ports:
      - "80:80"
      - "443:443"
    volumes:
      # self-signed testing wildcard ssl certificate
      - "./certs:/certs"
      # proxy needs access to static files
      - "./site1/public:/site1/public"
      - "./site2/public:/site2/public"
      # proxy needs nginx configuration files
      - "./site1/site1.test.conf:/etc/nginx/conf.d/site1.test.conf"
      - "./site2/site2.test.conf:/etc/nginx/conf.d/site2.test.conf"
    container_name: nginx_proxy

  #===============#
  # composer.test #
  #===============#
  composer.test:
    image: composer/composer
    networks:
      - test_network
    ports:
      - "9001:9000"
    volumes:
      - "./composer:/composer"
    container_name: composer.test
    working_dir: /composer
    command: install

  #============#
  # site1.test #
  #============#
  site1.test:
    build: ./site1
    networks:
      - test_network
    ports:
      - "9002:9000"
    environment:
      - "VIRTUAL_HOST=site1.test"
    volumes:
      - "./composer:/composer"
      - "./site1:/site1"
    container_name: site1.test

  #============#
  # site2.test #
  #============#
  site2.test:
    build: ./site2
    networks:
      - test_network
    ports:
      - "9003:9000"
    environment:
      - "VIRTUAL_HOST=site2.test"
    volumes:
      - "./composer:/composer"
      - "./site2:/site2"
    container_name: site2.test

# networks
networks:
  test_network:
Here is how the directory structure looks:
certs
  test.crt
  test.key
composer
  composer.json
site1
  app
  public
  Dockerfile
  site1.test.conf
site2
  app
  public
  Dockerfile
  site2.test.conf
docker-compose.yml
If you look at the composer/composer:php7 Dockerfile, you will see that it is based on php:7.0-alpine, and it doesn't seem like fpm is included. So you could use composer/composer:php7 as a base image and install php-fpm on top of it.
So, since you map your project into all three containers, running composer install in one container should make the changes visible in all three.
Personally, I do not see the point in segregating PHP and Nginx into two different containers, because one depends heavily on the other, and having to map your app into both containers is a perfect example of that nonsense. That's why I maintain my own public build of an nginx+php Docker image. You can check it out here. There are more builds with more flavors, and they all come with Composer inside.
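If you only need dependencies installed when the stack starts, a minimal sketch of the same idea (assuming, as in the question, that composer.json lives in ./app) is a short-lived service that shares the code mount with php-fpm and overrides the image's working directory:
composer:
  image: composer/composer:php7
  working_dir: /app     # override in case the image expects composer.json elsewhere
  command: install
  volumes:
    - ./app:/app        # same code mount as the phpfpm service
You can also re-run it on demand with docker-compose run --rm composer install.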

How do I properly share the same host volume between the two containers?

Since volumes_from disappeared when Docker Compose changed its compose file version, I am a bit lost about how to share a volume between different containers.
See the example below, where a PHP application lives in a PHP-FPM container and Nginx lives in a second one.
version: '3.3'
services:
  php:
    build:
      context: ./docker/php7-fpm
      args:
        TIMEZONE: ${TIMEZONE}
    env_file: .env
    volumes:
      - shared-volume:/var/www

  nginx:
    build: ./docker/nginx
    ports:
      - 81:80
    depends_on:
      - php
    volumes:
      - shared-volume:/var/www

volumes:
  shared-volume:
    driver_opts:
      type: none
      device: ~/sources/websocket
      o: bind
In order to make the application work, Nginx of course has to access the PHP files somehow, and that is where volumes_from used to help us a lot. Now that option is gone.
When I try the command docker-compose up, it ends with the following message:
ERROR: for websocket_php_1 Cannot create container for service php:
error while mounting volume with options: type='none'
device='~/sources/websocket' o='bind': no such file or directory
How do I properly share the same host volume between the two containers?
Why would you not use a bind mount? This is just source code that each container needs to see, correct? I added the :ro (read-only) option, which assumes no code generation is happening.
services:
  php:
    build:
      context: ./docker/php7-fpm
      args:
        TIMEZONE: ${TIMEZONE}
    env_file: .env
    volumes:
      # User-relative path
      - ~/sources/websocket:/var/www:ro

  nginx:
    build: ./docker/nginx
    ports:
      - 81:80
    depends_on:
      - php
    volumes:
      # User-relative path
      - ~/sources/websocket:/var/www:ro
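For completeness (my reading of the error in the question, not something this answer states): the named-volume attempt failed because ~ is not expanded inside driver_opts, so the local driver looked for a literal ~/sources/websocket directory. If you prefer to keep the named volume, an absolute path should work; a sketch, assuming the code lives at /home/user/sources/websocket:
volumes:
  shared-volume:
    driver: local
    driver_opts:
      type: none
      # assumed absolute path; ~ is not expanded in driver_opts
      device: /home/user/sources/websocket
      o: bind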

run batch in wordpress docker container

I am trying to run a batch script in the official WordPress container.
In the container I want to run a very simple batch script like the one below:
./wordpress_batch.php:
<?php
define('BASEPATH', '/path/to/wordpress');
require_once(BASEPATH . '/wp-load.php');

# batch program start
echo "batch test";
var_dump($wpdb);
But no output is shown.
This code works with WordPress on the host machine.
What is wrong in the Docker container?
Any ideas on how to run this code?
Thanks.
./docker-compose.yml:
version: "2"
services:
wordpress:
build: containers/wordpress
ports:
- "9000:80"
depends_on:
- db
environment:
WORDPRESS_DB_HOST: "db:3306"
env_file: .env
volumes:
- ./wordpress_batch:/var/batch/
db:
build: containers/db
env_file: .env
volumes:
- db-data:/var/lib/mysql
volumes:
db-data:
driver: local
./containers/db/Dockerfile:
FROM mysql:latest
./containers/wordpress/Dockerfile:
FROM wordpress:latest
The way I ran the code:
$ docker-compose run wordpress bash
root@97658bd14387:/var/batch# php wordpress_batch.php
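A note on the likely cause (my guess; this thread has no answer attached): in the official wordpress image the application lives in /var/www/html, so BASEPATH has to point there rather than at a host path, and the script must be able to reach wp-load.php. One way to avoid guessing paths is to mount the batch directory inside the WordPress root; a sketch, assuming the same compose file as above:
wordpress:
  build: containers/wordpress
  ports:
    - "9000:80"
  depends_on:
    - db
  environment:
    WORDPRESS_DB_HOST: "db:3306"
  env_file: .env
  volumes:
    # assumed: mount the batch scripts under the WordPress root
    - ./wordpress_batch:/var/www/html/batch/
Then, inside the container, define('BASEPATH', '/var/www/html') and run php /var/www/html/batch/wordpress_batch.php.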
