We are using Docker for our deployments and for local development. We are running the same Laravel application for many customers, each with its own .env file. In order to get our local environment to work properly with multiple sites, we needed to figure out a way for each site to have its own environment file. We created a folder structure like the following.
docker/
- src/      <-- Laravel App Here
- local/
  - env/
    - site1.env
    - site2.env
The directory we use for our environment files currently appears in our docker-compose.yml file as a volume:
- ./local/env/site1.env:/var/www/html/.env:delegated
This copies the proper env file to our container's /var/www/html directory, and this works great. However, where we are experiencing an issue is that when you do docker-compose up -d, it syncs the .env file back to our src/ directory, since we have a mount for that as well.
- ./src:/var/www/html/:delegated
Where the issues start to arise is when you open the .env file that is synced back to src/: not only is the file empty, but if you add anything to it and save, it syncs back to the container, killing the site because it no longer has all of the env entries.
I would like to figure out a way to ignore the .env file so that it does not sync back to the src/ directory. Laravel requires it to exist in the /var/www/html directory in the container, and I understand why it is syncing back down to the src/ directory, but I am unsure how to stop it, or if that is even possible.
Am I doing this wrong? I thought I would give configs a shot, but I found that I cannot use them because we are not running in swarm mode.
An example docker-compose entry is below
site1:
  build:
    context: .
    dockerfile: DevDockerfile
  working_dir: /var/www/html
  environment:
    DB_HOST: mysql
    DB_PORT: 3306
    DB_DATABASE: dbname
    DB_USERNAME: dbusername
    DB_PASSWORD: password-here
  volumes:
    - ./src:/var/www/html/:delegated
    - ./local/client/site1/:/var/opt/client/:delegated
    - ./local/env/site1.env:/var/www/html/.env:delegated
  networks:
    - ournetwork
  depends_on:
    - mysql
    - smtp
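For what it's worth, one possible direction (a sketch, not a tested fix): skip mounting the file over .env entirely and inject the variables with env_file instead, assuming your Laravel version tolerates a missing .env file once the variables are set in the real environment (recent 5.x releases catch the missing-file exception and fall back to the process environment). With nothing mounted at /var/www/html/.env, there is no mount point for Docker to materialize inside src/:

site1:
  build:
    context: .
    dockerfile: DevDockerfile
  working_dir: /var/www/html
  # inject the site's variables as real environment variables;
  # no .env file is created under the src/ bind mount
  env_file:
    - ./local/env/site1.env
  volumes:
    - ./src:/var/www/html/:delegated
    - ./local/client/site1/:/var/opt/client/:delegated
  networks:
    - ournetwork
  depends_on:
    - mysql
    - smtp

The trade-off is that anything insisting on reading a physical .env file will not find one; code that reads the process environment keeps working.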
Looking at a common docker-compose setup for an Nginx / PHP-FPM combo like:
version: '3'
services:
  nginx-example:
    image: nginx:1.13-alpine
    ports:
      - "80:80"
    volumes:
      - ./www:/www
      - ./config/site.conf:/etc/nginx/conf.d/default.conf
  php-example:
    image: php-fpm
    volumes:
      - ./www:/www
You find many examples like that; they make sure that if you change something in your local www folder, it is immediately picked up by the running containers.
But what if I do not want that, and instead want to copy some php files/content etc. into the container:
Is it enough to create a volume of the same name for both containers and copy my files into that folder, e.g. in a Dockerfile?
Or is it even possible to not have a volume at all, but create a directory in the container and copy the files there... and in that case: do I have to do that for both nginx and php-fpm, with the same files?
Perhaps my misunderstanding is around how the php-fpm container works in that combination (of course the fastcgi_pass in the conf points to php-example:9000, as standard).
My ideal solution would be to copy the files once, while making sure that file permissions are handled.
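For what it's worth, one possible direction here (a sketch under assumptions, not a verified setup): build the code into a custom php image and share it with nginx through a named volume. Docker pre-populates an empty named volume with whatever the image already has at the mount path, the first time a container uses it. The php-fpm-example image below is hypothetical (its Dockerfile would COPY the code to /www), and the pre-population only happens while the volume is empty, so updating the code means rebuilding the image and recreating the volume:

version: '3'
services:
  nginx-example:
    image: nginx:1.13-alpine
    ports:
      - "80:80"
    volumes:
      - code:/www                # shared named volume, read by nginx
      - ./config/site.conf:/etc/nginx/conf.d/default.conf
  php-example:
    image: php-fpm-example       # hypothetical image that COPYs the code to /www
    volumes:
      - code:/www                # the image's /www seeds the empty volume on first start
volumes:
  code: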
I am developing a Laravel application and am trying to use Laravel WebSockets, https://docs.beyondco.de/laravel-websockets, in it. I am using Docker with a docker-compose.yml. Since Laravel WebSockets runs locally on port 6001, I am having problems integrating it with docker-compose. Searching for a solution, I found this link, https://github.com/laradock/laradock/issues/2002. I tried it, but it is not working. Here is what I did.
I created a folder called workspace under the project root directory. Inside that folder, I created a file called Dockerfile.
This is the content of the Dockerfile:
EXPOSE 6001
In the docker-compose.yml file, I added this content.
workspace:
  port:
    - 6001:6001
My docker-compose.yml file looks something like this
version: "3"
services:
  workspace:
    port:
      - 6001:6001
  apache:
    container_name: web_one_apache
    image: webdevops/apache:ubuntu-16.04
    environment:
      WEB_DOCUMENT_ROOT: /var/www/public
      WEB_ALIAS_DOMAIN: web-one.localhost
      WEB_PHP_SOCKET: php-fpm:9000
    volumes: # Only shared dirs to apache (to be served)
      - ./public:/var/www/public:cached
      - ./storage:/var/www/storage:cached
    networks:
      - web-one-network
    ports:
      - "80:80"
      - "443:443"
  php-fpm:
    container_name: web-one-php
    image: php-fpm-laravel:7.2-minimal
    volumes:
      - ./:/var/www/
      - ./ci:/var/www/ci:cached
      - ./vendor:/var/www/vendor:delegated
      - ./storage:/var/www/storage:delegated
      - ./node_modules:/var/www/node_modules:cached
      - ~/.ssh:/root/.ssh:cached
      - ~/.composer/cache:/root/.composer/cache:delegated
    networks:
      - web-one-network
When I run "docker-compose up --build -d", it is giving me the following error.
ERROR: The Compose file './docker-compose.yml' is invalid because:
Unsupported config option for services.workspace: 'port' (did you mean 'ports'?)
What is wrong and how can I fix it? How can I use Laravel WebSockets with docker-compose?
I tried changing 'port' to 'ports', and then I got the following error message instead.
ERROR: The Compose file is invalid because:
Service workspace has neither an image nor a build context specified. At least one must be provided.
Your Dockerfile is wrong. A Dockerfile must start with a FROM <image> instruction, as explained in the documentation. In your case it might be sufficient to run an existing php:<version>-cli image though, avoiding the whole Dockerfile step:
workspace:
  image: php:7.3-cli
  command: ["php", "artisan", "websockets:serve"]
Of course you will also need to add a volume with the application code and a suitable network. If you add a reverse proxy like Nginx in front of your services, you don't need to publish the ports on your workspace either: services may access other services as long as they are on the same network.
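Put together, the workspace service might look something like the sketch below; the ./:/var/www mount and the web-one-network name are lifted from the question's compose file and are assumptions about the actual layout:

workspace:
  image: php:7.3-cli
  working_dir: /var/www
  volumes:
    - ./:/var/www                 # application code, same mount as php-fpm
  command: ["php", "artisan", "websockets:serve"]
  ports:
    - "6001:6001"                 # drop this if a reverse proxy handles it
  networks:
    - web-one-network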
How do I deliver the code of a containerized PHP application, whose image is based on busybox and contains only the code, between separate NGINX and PHP-FPM containers? I am using version 3 of the docker-compose file format.
The Dockerfile of the image containing the code would be:
FROM busybox
#the app's code
RUN mkdir /app
VOLUME /app
#copy the app's code from the context into the image
COPY code /app
The docker-compose.yml file would be:
version: "3"
services:
  # the application's code
  # the volume is currently mounted from the host machine, but the code will be copied over into the image statically for production
  app:
    image: app
    volumes:
      - ../../code/cms/storage:/storage
    networks:
      - backend
  # webserver
  web:
    image: web
    depends_on:
      - app
      - php
    networks:
      - frontend
      - backend
    ports:
      - '8080:80'
      - '8081:443'
  # php
  php:
    image: php:7-fpm
    depends_on:
      - app
    networks:
      - backend
networks:
  frontend:
    driver: "bridge"
  backend:
    driver: "bridge"
The solutions I thought of, none of them appropriate:
1) Use the volume from the app's container in the PHP and NGINX containers, but compose v3 doesn't allow it (the volumes_from directive). Can't use it.
2) Place the code in a named volume and connect it to the containers. Going this way, I can't containerize the code. Can't use it. (I'd also have to manually create this volume on every node in a swarm?)
3) Copy the code twice, directly into the images based on NGINX and PHP-FPM. Bad idea: I'd have to maintain both to keep them in concert.
Got stuck with this. Any other options? I might have misunderstood something; I'm only beginning with Docker.
I too have been looking around to solve a similar issue, and it seems Nginx + PHP-FPM is one of those exceptions where it is better to have both services running in one container for production. In development, you can bind-mount the project folder into both the nginx and php containers. See Bret Fisher's guide for good PHP defaults: php-docker-good-defaults
So far, the Nginx + PHP-FPM combo is the only scenario that I recommend using multi-service containers for. It's a rather unique problem that doesn't always fit well in the model of "one container, one service". You could use two separate containers, one with nginx and one with php:fpm but I've tried that in production, and there are lots of downsides. A copy of the PHP code has to be in each container, they have to communicate over TCP which is much slower than Linux sockets used in a single container, and since you usually have a 1-to-1 relationship between them, the argument of individual service control is rather moot.
You can read more about setting up multiple service containers on the docker page here (it's also listed in the link above): Docker Running Multiple Services in a Container
The way I see it, you have two options:
(1) Using docker-compose: (this is for a very simplistic development env)
You will have to build two separate containers from the nginx and php-fpm images, and then simply serve the app folder from php-fpm onto a web folder on nginx.
# The Application
app:
  build:
    context: ./
    dockerfile: app.dev.dockerfile
  working_dir: /var/www
  volumes:
    - ./:/var/www
  expose:
    - 9000

# The Web Server
web:
  build:
    context: ./
    dockerfile: web.dev.dockerfile
  working_dir: /var/www
  volumes_from:
    - app
  links:
    - app:app
  ports:
    - 80:80
    - 443:443
(2) Use a single Dockerfile to build everything in it:
Start with some flavor of Linux or a PHP image,
install nginx,
build your custom image,
and serve the multi-service container using supervisord, as sketched below.
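A rough sketch of option (2), assuming a Debian-based PHP image; the nginx.conf and supervisord.conf files (the latter declaring one program each for nginx and php-fpm) are hypothetical and not shown:

FROM php:7.2-fpm
# install nginx and supervisor next to php-fpm
RUN apt-get update && apt-get install -y nginx supervisor && rm -rf /var/lib/apt/lists/*
# hypothetical config files for the web server and the process manager
COPY nginx.conf /etc/nginx/nginx.conf
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
# the application code
COPY . /var/www
# supervisord runs in the foreground and manages both processes
CMD ["/usr/bin/supervisord", "-n"]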
I'm trying to deploy a very simple Symfony application using nginx & php-fpm via Docker.
Two docker services :
1. web: running nginx
2. php: running php-fpm and containing the application source.
I want to build images that can be deployed without any external dependency.
That's why I'm copying source code within the php container.
During development, I'm overriding the /var/www/html volume with a local path.
# file: php-fpm/Dockerfile
FROM php:7.1-fpm-alpine
COPY ./vendor /var/www/html/vendor
COPY . /var/www/html
VOLUME /var/www/html
Now the docker-compose configuration file.
# file : docker-compose-prod.yml
version: '2'
services:
  web:
    image: "private/web"
    ports:
      - 80:80
    volumes_from:
      - php
  php:
    image: "private/php"
    ports:
      - 9000:9000
The problem is about permissions.
When accessing localhost, Symfony is booting up, but the cache / logs / sessions folders are not writable.
nginx uses /var/www/html to serve the static files.
php-fpm uses /var/www/html to execute the php files.
I'm not sure what the problem is.
But how can I make sure of the following:
/var/www/html has to be readable by nginx?
/var/www/html has to be writable by php-fpm?
Note: I'm building the images from a MacBook Pro; cache / logs / sessions are 777.
docker-compose.yml supports a user directive under services. The docs only mention it for the run command, but it works the same here.
I have a similar setup and this is how I do it:
# file : docker-compose-prod.yml
version: '2'
services:
  web:
    image: "private/web"
    ports:
      - 80:80
    volumes_from:
      - php
  php:
    image: "private/php"
    ports:
      - 9000:9000
    user: "$UID"
I have to run export UID before running docker-compose, and that sets the default user to my current user. This allows logging / caching etc. to work as expected.
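For example, assuming a bash shell (where UID is predefined but not exported by default):

export UID
docker-compose -f docker-compose-prod.yml up -d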
I am using this solution, "Docker for Symfony": https://github.com/StaffNowa/docker-symfony
New features:
./d4d start
./d4d stop
./d4d help
I've found a solution, but if someone can explain best practices, it would be appreciated!
The cache / logs / sessions folders from the docker context were not empty (on the host).
Now that those folders have been flushed, Symfony creates them with good permissions.
I've found people using usermod to change the UID, i.e. to 1000, for www-data / nginx...
But it seems to be an ugly hack. What do you think about it?
I have a docker-compose.yml file that runs the following (using an image called mmm/nginx):
web:
  image: mmm/nginx
  ports:
    - "80:80"
  volumes:
    - ./var:/var/www
    - ./etc/nginx/sites-enabled:/etc/nginx/sites-enabled/
  links:
    - php
    - db
php:
  image: rossriley/php56-fpm
  volumes:
    - ./var:/var/www
    - ./etc/php5/php-fpm.conf:/etc/php5/fpm/php-fpm.conf
  links:
    - db
db:
  image: sameersbn/mysql
  ports:
    - "3306:3306"
  volumes:
    - /var/lib/mysql
  environment:
    - DB_NAME=tables
    - DB_USER=table
    - DB_PASS=pass
It serves up the websites stored in /var/www nicely.
The issue happens when Laravel tries to write the logs and the session files: while it does create the files, it can't write to them.
The storage folder and its nested directories have their permissions set to 777.
In order for Laravel to write to them, I have to $ chmod 777 <.log|sessionfile> and then it works nicely. Clearly, this is not the way to develop, as I need to start new sessions regularly and create new logs daily.
How can I give Laravel and the docker containers permission to write the files they create?
Update:
This is what Laravel's log says:
local.ERROR: exception 'ErrorException' with message 'file_put_contents(/var/www/com.mtrinteractive.sandbox.form/storage/framework/sessions/e0117b8ca17af9c19572ddb305a272b4c22bd18d): failed to open stream: Permission denied' in /var/www/com.mtrinteractive.sandbox.form/vendor/laravel/framework/src/Illuminate/Filesystem/Filesystem.php:81
Update #2
Here's the project directory:
Update #3
Here are the project's permissions and owners:
I don't know if this helps, but if you're using a Dockerfile you can add
RUN usermod -u 1000 nginx
or, if you're using Apache, you can substitute apache for nginx.
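In context that might look like the sketch below; the base image and the UID of 1000 are assumptions (1000 is simply a common first-user UID on Linux hosts):

FROM nginx:1.13
# align the container's nginx user with the host user's UID so that
# files written to bind mounts stay writable from the host
RUN usermod -u 1000 nginx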
This seems to be an issue only on OS X, and the issue is actually something to do with VirtualBox, not directly related to Docker. I had this issue with Docker v1.9.x and now again with v1.10.3. This time I was not able to solve it with the above solution, but was able to solve it by writing my cache to a database. In this case it was MySQL/MariaDB, but it could just as easily have been memcache or redis.
Oddly, creating log files and writing to them wasn't an issue, even though that volume is mounted separately; it originated in the same /Users folder of my Mac.
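For the Laravel case, that switch is only a driver change; a minimal sketch of the relevant .env entries, assuming the sessions and cache tables have already been created (e.g. via php artisan session:table / cache:table and a migration):

CACHE_DRIVER=database
SESSION_DRIVER=database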