How to safely get rw rights to docker.sock from a container? - php

I'm using docker-php with nginx + php-fpm (a docker-compose project). When I try to run an example from the documentation:
<?php
use Docker\API\Model\ContainersCreatePostBody;
use Docker\Docker;
$docker = Docker::create();
$containerConfig = new ContainersCreatePostBody();
$containerConfig->setImage('nginx:latest');
$containerConfig->setCmd(['echo', 'I am running a command']);
$containerCreateResult = $docker->containerCreate($containerConfig);
var_dump($containerCreateResult);
exit;
I get the error:
Http \ Client \ Socket \ Exception \ ConnectionException - Permission denied
As far as I understand, the problem is that the group php-fpm runs under does not have rw rights to docker.sock (which I mount from the host the Docker daemon runs on).
Configuration:
docker-compose:
The shell directory contains a yii2 application that uses docker-php.
version: '2'
services:
  web:
    image: 'nginx:latest'
    container_name: web
    ports:
      - '80:80'
      - '443:443'
    volumes:
      - './:/shell'
    networks:
      - backend
      - frontend
    restart: always
  php:
    build: ./docker/php/
    container_name: php
    volumes:
      - './:/shell'
      - '/var/run/docker.sock:/var/run/docker.sock'
    environment: []
    networks:
      - backend
    restart: always
networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge
Dockerfile for php-fpm: github gist (the file is too large for the post, ~100 lines)
Docker is installed in the php-fpm container for the experiment, but without access to the socket it is useless there.
Software versions:
Docker version 1.13.1
docker-compose version 1.8.0
Kubuntu 17.10 x64
I found similar questions on the internet (one, two, three ...); the suggested solution is to add the user the application runs as inside the container to the www-data group.
If I give docker.sock 777 permissions, everything works, but that is a bad solution =)
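A less blunt alternative than 777 is a group-based fix: create a group inside the container whose GID matches the socket's owning group on the host, and add the php-fpm user to it. This is only a sketch; the group name docksock and the entrypoint placement are my assumptions, not from the question.

```shell
#!/bin/sh
# Entrypoint sketch: run as root before php-fpm drops privileges.
# "docksock" is a hypothetical group name; adjust the user if php-fpm
# runs as something other than www-data.
SOCK=/var/run/docker.sock
if [ -S "$SOCK" ]; then
  SOCK_GID=$(stat -c %g "$SOCK")                        # GID that owns the mounted socket
  groupadd -g "$SOCK_GID" docksock 2>/dev/null || true  # a group with this GID may already exist
  usermod -aG "$SOCK_GID" www-data 2>/dev/null || true  # give www-data that group's rw on the socket
fi
```

This keeps the socket at its normal 660 permissions while letting the php-fpm workers read and write it.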

Related

Why is Docker not syncing files inside the container on Windows 10?

I have an issue after the last Docker update (it seems so) on Windows 10 (local development). When I change files in PhpStorm (and in other editors - Sublime, Notepad++), after a while the files inside the container stop receiving the changes.
Steps that help for a while:
If I completely shut down all containers and then bring them up again: docker-compose down && docker-compose up
If I get into the php-fpm container and run touch file.php on a file that didn't change (the file is updated immediately).
What I tried that didn't help:
I restarted the php-fpm and nginx containers: docker-compose restart php-fpm nginx (yes, it's strange, because down|up for all containers helped)
I changed the PhpStorm setting Use Safe write (save changes to a temporary file first)
I also checked the inode of the file inside the container with ls -lai file.php. Before the changes worked and after they broke, the inode number was the same. There is no fixed number of changes needed to break syncing; it's random, sometimes 2 changes are enough.
I have:
Docker version 19.03.5, build 633a0ea
docker-compose version 1.25.2, build 698e2846
docker-compose.yml
version: '3'
services:
  nginx:
    container_name: pr_kpi-nginx
    build:
      context: ./
      dockerfile: docker/nginx.docker
    volumes:
      - ./:/var/www/kpi
      - ./docker/nginx/default.conf:/etc/nginx/conf.d/default.conf
      - ./docker/nginx/fastcgi.conf:/etc/nginx/fastcgi.conf
    ports:
      - "8081:80"
    links:
      - php-fpm
    networks:
      - internal
  php-fpm:
    container_name: pr_kpi-php-fpm
    build:
      context: ./
      dockerfile: docker/php-fpm.docker
    volumes:
      - ./:/var/www/kpi
    links:
      - kpi-mysql
    environment:
      # 192.168.221.1 -> host.docker.internal for Mac and Windows
      XDEBUG_CONFIG: "remote_host=host.docker.internal remote_enable=1"
      PHP_IDE_CONFIG: "serverName=Docker"
    networks:
      - internal
  mailhog:
    container_name: pr_kpi-mailhog
    image: mailhog/mailhog
    restart: always
    ports:
      # smtp
      - "1025:1025"
      # http
      - "8025:8025"
    networks:
      - internal
  kpi-mysql:
    container_name: pr_kpi-kpi-mysql
    image: mysql:5.7
    command: mysqld --sql_mode="STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
    volumes:
      - ./docker/storage/kpi-mysql:/var/lib/mysql
    environment:
      # We must change prod secrets, this is not a good approach
      - "MYSQL_ROOT_PASSWORD=pass"
      - "MYSQL_USER=user"
      - "MYSQL_PASSWORD=user_pass"
      - "MYSQL_DATABASE=kpi_db"
    ports:
      - "33061:3306"
    networks:
      - internal
  kpi-npm:
    container_name: pr_kpi-npm
    build:
      context: ./
      dockerfile: docker/npm.docker
    volumes:
      - ./:/var/www/kpi
      - /var/www/kpi/admin/node_modules
    ports:
      - "4200:4200"
    networks:
      - internal
    tty: true
# For xdebug
networks:
  internal:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 192.168.221.0/28
P.S. There is an open issue:
https://github.com/docker/for-win/issues/5530
P.P.S. Updating Docker Desktop from 2.2.0.0 to 2.2.0.3 seems to fix it.
I have a separate container for syncing my folder:
app:
  image: httpd:2.4.38
  volumes:
    - ./:/var/www/html
  command: "echo true"
I just use the basic Apache image here, but you could use anything really. Then in my actual containers, I use the following volumes_from key:
awesome.scot:
  build: ./build/httpd
  links:
    - php
  ports:
    - 80:80
    - 443:443
  volumes_from:
    - app
php:
  build: ./build/php
  ports:
    - 9000
    - 9001
  volumes_from:
    - app
  links:
    - mariadb
    - mail
  environment:
    APPLICATION_ENV: 'development'
I've never had an issue with this setup; files always sync fast, and I have tested it on both macOS and Windows.
If you're interested, here is my full LAMP stack on GitHub: https://github.com/delboy1978uk/lamp
I have had the same issue on Windows 10 since 31st Jan.
I commented out a line in PhpStorm and checked it in the container using vim.
The changes were not there.
If I run docker-compose down and up, the changes appear in the container.
Docker version 19.03.5, build 633a0ea
docker-compose version 1.25.4, build 8d51620a
Nothing has changed in my docker-compose.yml since 2018.

Dockerfile build: Laravel Telescope tries to connect to a Redis service that is not yet up

My project is defined in a docker-compose file, but I'm not too familiar with docker-compose definitions.
When I run docker-compose up -d on a fresh setup, the following error occurs during the build of a Docker image.
This happens after composer install, under post-autoload-dump, when Laravel tries to auto-discover packages (php artisan package:discover).
Generating optimized autoload files
> Illuminate\Foundation\ComposerScripts::postAutoloadDump
> #php artisan package:discover --ansi
RedisException : php_network_getaddresses: getaddrinfo failed: Name or service not known
at [internal]:0
1|
Exception trace:
1 ErrorException::("Redis::connect(): php_network_getaddresses: getaddrinfo failed: Name or service not known")
/var/www/vendor/laravel/framework/src/Illuminate/Redis/Connectors/PhpRedisConnector.php:126
2 Redis::connect("my_redis", "6379")
/var/www/vendor/laravel/framework/src/Illuminate/Redis/Connectors/PhpRedisConnector.php:126
Please use the argument -v to see more details.
Script #php artisan package:discover --ansi handling the post-autoload-dump event returned with error code 1
ERROR: Service 'my_app' failed to build: The command '/bin/sh -c composer global require hirak/prestissimo && composer install' returned a non-zero code: 1
The reason it cannot connect to my_redis:6379 is that my_redis is another service in the same docker-compose.yml file. I assume the hostname is not resolvable yet, since docker-compose first builds the images before starting any containers.
EDIT I just found this GitHub issue linking to my problem: https://github.com/laravel/telescope/issues/620. It seems that the problem is related to Telescope trying to use the Cache driver. The difference is I'm not using Docker just for CI/CD, but for my local development.
How can I resolve this problem? Is there a way to force the Redis container to start before my_app is built? Or is there a Laravel way to prevent any service discovery? Or is there a way to specify that building an image depends on another service being available?
If you want to see my docker-compose.yml:
version: '3.6'
services:
  # Redis Service
  my_redis:
    image: redis:5.0-alpine
    container_name: my_redis
    restart: unless-stopped
    tty: true
    ports:
      - "6379:6379"
    volumes:
      - ./redis/redis.conf:/usr/local/etc/redis/redis.conf
      - redisdata:/data
    networks:
      - app-network
  # Postgres Service
  my_db:
    image: postgres:12-alpine
    container_name: my_db
    restart: unless-stopped
    tty: true
    ports:
      - "5432:5432"
    environment:
      POSTGRES_DB: my
      POSTGRES_PASSWORD: admin
      SERVICE_TAGS: dev
      SERVICE_NAME: postgres
    volumes:
      - dbdata:/var/lib/postgresql
      - ./postgres/init:/docker-entrypoint-initdb.d
    networks:
      - app-network
  # PHP Service
  my_app:
    build:
      context: .
      dockerfile: Dockerfile
    image: my/php
    container_name: my_app
    restart: unless-stopped
    tty: true
    environment:
      SERVICE_NAME: my_app
      SERVICE_TAGS: dev
    working_dir: /var/www
    volumes:
      - ./:/var/www
      - /tmp:/tmp # For CS Fixer
      - ./php/local.ini:/usr/local/etc/php/conf.d/local.ini
      - fsdata:/my
    networks:
      - app-network
  # Nginx Service
  my_webserver:
    image: nginx:alpine
    container_name: my_webserver
    restart: unless-stopped
    tty: true
    ports:
      - "8080:80"
    volumes:
      - ./:/var/www
      - ./nginx/conf.d/:/etc/nginx/conf.d/
    networks:
      - app-network
# Docker Networks
networks:
  app-network:
    driver: bridge
# Volumes
volumes:
  dbdata:
    driver: local
  redisdata:
    driver: local
  fsdata:
    driver: local
There is a way to make one service wait for another in Docker Compose, depends_on, but it only waits until the container is up, not until the service inside it is ready. To fix that you have to customize the redis image, using command to run a script that checks that the Redis container and the Redis daemon are actually available; see the Compose startup-order docs for how to set it up.
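That container-up vs. service-ready distinction is why a retry loop is needed. A minimal sketch (the helper name wait_for and the redis-cli usage line are hypothetical, not from the linked docs):

```shell
#!/bin/sh
# wait_for: run a probe command until it succeeds, up to N attempts, 1s apart.
# Returns non-zero if the probe never succeeds.
wait_for() {
  attempts=$1; shift
  n=0
  until "$@"; do
    n=$((n + 1))
    if [ "$n" -ge "$attempts" ]; then
      return 1   # gave up: the service never became ready
    fi
    sleep 1
  done
}

# Assumed usage in the app container's entrypoint:
#   wait_for 30 redis-cli -h my_redis ping && exec php-fpm
```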
I mitigated this by adding --no-scripts to the Dockerfile and adding a start.sh, since it is Laravel's package discovery script, bound to post-autoload-dump, that wants to access Redis.
Dockerfile excerpt
#...
# Change current user to www
USER www
# Install packages
RUN composer global require hirak/prestissimo && composer install --no-scripts
RUN chmod +x /var/www/scripts/start.sh
# Expose port 9000 and start php-fpm server
EXPOSE 9000
CMD ["/var/www/scripts/start.sh"]
start.sh
#!/usr/bin/env sh
composer dumpautoload
php-fpm
I'm sure you've resolved this yourself by now, but for anyone else coming across this question later, there are two solutions I have found:
1. Ensure Redis is up and running before your App
In your my_redis service in docker-compose.yml, add this...
healthcheck:
  test: ["CMD", "redis-cli", "ping"]
...then in your my_app service in docker-compose.yml, add...
depends_on:
  my_redis:
    condition: service_healthy
2. Use separate docker compose setups for local development and CI/CD pipelines
Even better, in my opinion, is to create a new docker-compose.test.yml. In it you can omit the redis service entirely and just use CACHE_DRIVER=array. You could set this either directly in the environment property of your my_app service or create a .env.testing (make sure to set APP_ENV=testing too).
I like this approach because as your application grows there may be more and more packages which you want to enable/disable or configure differently in your testing environment, and using .env.testing in conjunction with a docker-compose.test.yml is a great way to manage that.

Best way to "connect" two Docker containers without a volume

Currently, I have two containers, php-fpm and NGINX, in which I run a PHP application.
Now my question: is there a way to "connect" two Docker containers without using a volume?
Both containers need my application (NGINX to serve static files, e.g. css/js, and php-fpm to interpret the PHP files).
Currently, my application is cloned from git into my NGINX container, and I had a volume so that php-fpm also had the files to interpret.
I am looking for a solution that doesn't keep my application on the host system.
I'm not sure what you're trying to achieve, but my docker-compose.yml looks like this:
php:
  container_name: custom_php
  build:
    context: php-fpm
    args:
      TIMEZONE: 'UTC'
  volumes:
    - ./website:/var/www/symfony
  networks:
    - app_net
nginx:
  build: nginx
  container_name: custom_nginx
  ports:
    - 80:80
  volumes:
    - ./website:/var/www/symfony
  networks:
    - app_net
networks:
  app_net:
    driver: bridge
Just make sure they are in one network; then you can talk from container to container by container name and port. Hope that helps.
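As an illustration of that name-based wiring (a sketch: the location block and the SCRIPT_FILENAME root are my assumptions, based on the /var/www/symfony mount above), the nginx config reaches php-fpm by the service name php:

```nginx
# Inside the server block of the nginx vhost: hand .php requests to the
# php container over the shared app_net network, by service name and port.
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_pass php:9000;  # container name : php-fpm port
    fastcgi_param SCRIPT_FILENAME /var/www/symfony/public$fastcgi_script_name;
}
```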

LAMP-Stack with Docker: How to modify httpd.conf to get access to PHP-FPM?

I have some trouble getting access from my Apache container to PHP-FPM. My docker-compose file is ready and works fine, but I don't know how to modify httpd.conf in order to establish communication between the two containers (Apache and PHP-FPM). I have looked for useful tutorials on the internet, but everyone uses Nginx instead of Apache2. There is also a preconfigured Docker image consisting of an Apache webserver and PHP-FPM on Docker Hub, but I prefer two separate images for replaceability.
Here is my docker-compose file:
version: "3.5"
services:
  webserver:
    build: apache/
    ports:
      - "8080:80"
      - "443:443"
    volumes:
      - ~/Docker-Images/example/apache/html:/usr/local/apache2/htdocs
    links:
      - php-fpm
  php-fpm:
    build: php-fpm/
    ports:
      - "9000:9000"
    links:
      - database
  database:
    build: mysql/
    ports:
      - "3306:3306"
    volumes:
      - ~/Docker-Images/example/mysql/init-scripts:/init-scripts
volumes:
  webserver:
  database:
If you need my httpd.conf, let me know! I haven't added it because it is a very long file with only default values.
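A minimal sketch of the relevant httpd.conf part (my assumptions, not from the question: mod_proxy and mod_proxy_fcgi are available in the image, and the php-fpm service name from the compose file resolves on the shared network):

```apache
# Enable the FastCGI reverse-proxy modules (module paths as in the official httpd image)
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_fcgi_module modules/mod_proxy_fcgi.so

# Hand every .php request to the php-fpm container by its compose service name.
# Caveat: php-fpm must see the script at the same path, so the code should be
# mounted (or copied) into both containers at matching locations.
<FilesMatch "\.php$">
    SetHandler "proxy:fcgi://php-fpm:9000"
</FilesMatch>

DirectoryIndex index.php index.html
```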

How to use PHP from another Docker container

In my app I have separate Docker containers for nginx, mysql, php and supervisor. But now I need to set up a supervisor program which runs a PHP script. Is it possible to call php from another container?
EDIT
Example:
When I run the supervisor program test, I see the error: INFO spawnerr: can't find command 'php'. I know that php is not in the supervisor container, but how do I call php from its container? And I need the same php the application uses.
./app/test.php
<?php
echo "hello world";
docker-compose.yml
version: "2"
services:
  nginx:
    build: ./docker/nginx
    ports:
      - 8080:80
    volumes:
      - ./app:/var/www/html
    links:
      - php
      - mysql
  php:
    build: ./docker/php
    volumes:
      - ./app:/var/www/html
    ports:
      - 9001:9001
  mysql:
    build: ./docker/mysql
    ports:
      - 3306:3306
    volumes:
      - ./data/mysql:/var/lib/mysql
  supervisor:
    build: ./docker/supervisor
    volumes:
      - ./app:/var/www/html
    ports:
      - 9000:9000
supervisor.conf
[program:test]
command = php /var/www/html/test.php
process_name = %(process_num)02d
numprocs = 1
autostart = false
autorestart = true
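One possible workaround, purely a sketch on my part (it assumes /var/run/docker.sock is mounted into the supervisor container and the docker CLI is installed there, neither of which the compose file above does), is to let supervisor delegate the command to the php container:

```ini
[program:test]
; run test.php with the application's own PHP by exec-ing into the php container
command = docker exec php php /var/www/html/test.php
process_name = %(process_num)02d
numprocs = 1
autostart = false
autorestart = true
```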
Please check this repo on GitHub.
I used Angular, Laravel and Mongo, with 3 containers: one for mongo, one for php-fpm, and one for nginx acting as a proxy to the API and to Angular.
Angular does not use a nodejs container, because I build it with ng build, which puts the output in the folder angular-dist.
The folder angular-src is the Angular source code.
In the laravel folder, run the command composer install; if you use Linux, run sudo chmod 777 -R laravel.
Then you can see the result at the routes http://localhost:8000/api/ and http://localhost:8000/api/v1.0.