In my app I have separate Docker containers for nginx, MySQL, PHP, and Supervisor. But now I need to define a Supervisor program that runs a PHP script. Is it possible to call PHP from another container?
EDIT
Example:
When I run the Supervisor program test, I see the error: INFO spawnerr: can't find command 'php'. I know that PHP is not in the supervisor container, but how can I call PHP from that container? I also need the same PHP that the application uses.
./app/test.php
<?php
echo "hello world";
docker-compose.yml
version: "2"
services:
nginx:
build: ./docker/nginx
ports:
- 8080:80
volumes:
- ./app:/var/www/html
links:
- php
- mysql
php:
build: ./docker/php
volumes:
- ./app:/var/www/html
ports:
- 9001:9001
mysql:
build: ./docker/mysql
ports:
- 3306:3306
volumes:
- ./data/mysql:/var/lib/mysql
supervisor:
build: ./docker/supervisor
volumes:
- ./app:/var/www/html
ports:
- 9000:9000
supervisor.conf
[program:test]
command = php /var/www/html/test.php
process_name = %(process_num)02d
numprocs = 1
autostart = false
autorestart = true
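A container cannot spawn a binary that only exists in another container, so the usual fix is to give the supervisor image its own PHP and keep sharing the code through the existing ./app volume. A minimal sketch of ./docker/supervisor/Dockerfile, assuming the app runs on the official php image (the 7.1-cli tag is an assumption; match whatever ./docker/php uses):
FROM php:7.1-cli
# Install supervisor next to the PHP CLI so 'php' is on the PATH
RUN apt-get update && apt-get install -y supervisor
COPY supervisor.conf /etc/supervisor/conf.d/supervisor.conf
# Run supervisord in the foreground so the container stays alive
CMD ["supervisord", "-n"]
Since ./app is already mounted at /var/www/html in the supervisor service, command = php /var/www/html/test.php then finds both the php binary and the script.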
Please check this repo on GitHub.
I used Angular, Laravel and Mongo,
with 3 containers: one for Mongo, one for php-fpm, and one for nginx acting as a proxy to the API and to Angular.
Angular does not use a Node.js container, because I build Angular with ng build, which outputs the build into the angular-dist folder.
The angular-src folder is the Angular source code.
Inside the laravel folder, run the command composer install; if you use Linux, run sudo chmod 777 -R laravel.
You can see it at the routes http://localhost:8000/api/ and http://localhost:8000/api/v1.0.
I have an application developed with PHP, Nginx, and DynamoDB. I have created a simple docker-compose setup to work locally.
version: '3.7'
services:
  nginx_broadway_demo:
    container_name: nginx_broadway_demo
    image: nginx:latest
    ports:
      - 8080:80
    volumes:
      - ./www:/var/www
      - ./docker/nginx/vhost.conf:/etc/nginx/conf.d/default.conf
      - ./docker/nginx/nginx.conf:/etc/nginx/nginx.conf
    links:
      - php_fpm_broadway_demo
  php_fpm_broadway_demo:
    container_name: php_fpm_broadway_demo
    build:
      context: ./docker/php
    ports:
      - 9000:9000
    volumes:
      - .:/var/www/web
  dynamodb:
    image: amazon/dynamodb-local
    ports:
      - 8000:8000
    expose:
      - 8000
Now I need to pass the DynamoDB URL to PHP so it can run queries against the database.
If I ping the service from the PHP container like this, it works fine:
ping dynamodb
This doesn't work:
ping http://dynamodb:8000
I need to use http://dynamodb:8000 because the AWS SDK requires a full URI; without the scheme I get this error:
Endpoints must be full URIs and include a scheme and host
So: how can I reach a Docker container through a URL?
I have tried docker-compose parameters like depends_on, links, and networks without success.
As discussed in the chat, the error comes from dependencies being installed on the host and used inside the container, since Composer resolves packages based on the underlying platform.
So we found the issue was due to the above reason; installing the dependencies inside the container fixes it:
docker exec -it php bash -c "cd web; composer install"
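As a side note, ping only takes a hostname, never a URL, so ping http://dynamodb:8000 can never succeed; the compose service name alone is the host. A sketch of how the full URI is typically handed to the PHP SDK, assuming the aws/aws-sdk-php package (region and credential values are placeholders, since dynamodb-local ignores them):
<?php
// Sketch: point the SDK at the dynamodb-local service via its compose hostname.
$client = new Aws\DynamoDb\DynamoDbClient([
    'region'      => 'us-east-1',            // placeholder; dynamodb-local ignores it
    'version'     => 'latest',
    'endpoint'    => 'http://dynamodb:8000', // scheme + compose service name + port
    'credentials' => ['key' => 'local', 'secret' => 'local'], // dummy values
]);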
My project is defined in a docker-compose file, but I'm not too familiar with docker-compose definitions.
When I try docker-compose up -d in a fresh setup, the following error occurs during the build of a Docker image.
It happens after composer install, under post-autoload-dump, when Laravel tries to auto-discover packages (php artisan package:discover).
Generating optimized autoload files
> Illuminate\Foundation\ComposerScripts::postAutoloadDump
> @php artisan package:discover --ansi
RedisException : php_network_getaddresses: getaddrinfo failed: Name or service not known
at [internal]:0
1|
Exception trace:
1 ErrorException::("Redis::connect(): php_network_getaddresses: getaddrinfo failed: Name or service not known")
/var/www/vendor/laravel/framework/src/Illuminate/Redis/Connectors/PhpRedisConnector.php:126
2 Redis::connect("my_redis", "6379")
/var/www/vendor/laravel/framework/src/Illuminate/Redis/Connectors/PhpRedisConnector.php:126
Please use the argument -v to see more details.
Script @php artisan package:discover --ansi handling the post-autoload-dump event returned with error code 1
ERROR: Service 'my_app' failed to build: The command '/bin/sh -c composer global require hirak/prestissimo && composer install' returned a non-zero code: 1
The reason it cannot connect to my_redis:6379 is that my_redis is another service in the same docker-compose.yml file. So I assume the hostname is not resolvable yet, since docker-compose first builds my images before starting containers.
EDIT: I just found this GitHub issue describing my problem: https://github.com/laravel/telescope/issues/620. It seems the problem is related to Telescope trying to use the cache driver. The difference is that I'm using Docker not just for CI/CD, but for my local development.
How can I resolve this problem? Is there a way to force the Redis container to come up before building my_app? Or is there a Laravel way to prevent the hostname lookup? Or is there a way to specify that building an image depends on another service being available?
If you want to see my docker-compose.yml:
version: '3.6'
services:
  # Redis Service
  my_redis:
    image: redis:5.0-alpine
    container_name: my_redis
    restart: unless-stopped
    tty: true
    ports:
      - "6379:6379"
    volumes:
      - ./redis/redis.conf:/usr/local/etc/redis/redis.conf
      - redisdata:/data
    networks:
      - app-network
  # Postgres Service
  my_db:
    image: postgres:12-alpine
    container_name: my_db
    restart: unless-stopped
    tty: true
    ports:
      - "5432:5432"
    environment:
      POSTGRES_DB: my
      POSTGRES_PASSWORD: admin
      SERVICE_TAGS: dev
      SERVICE_NAME: postgres
    volumes:
      - dbdata:/var/lib/postgresql
      - ./postgres/init:/docker-entrypoint-initdb.d
    networks:
      - app-network
  # PHP Service
  my_app:
    build:
      context: .
      dockerfile: Dockerfile
    image: my/php
    container_name: my_app
    restart: unless-stopped
    tty: true
    environment:
      SERVICE_NAME: my_app
      SERVICE_TAGS: dev
    working_dir: /var/www
    volumes:
      - ./:/var/www
      - /tmp:/tmp # For CS Fixer
      - ./php/local.ini:/usr/local/etc/php/conf.d/local.ini
      - fsdata:/my
    networks:
      - app-network
  # Nginx Service
  my_webserver:
    image: nginx:alpine
    container_name: my_webserver
    restart: unless-stopped
    tty: true
    ports:
      - "8080:80"
    volumes:
      - ./:/var/www
      - ./nginx/conf.d/:/etc/nginx/conf.d/
    networks:
      - app-network
# Docker Networks
networks:
  app-network:
    driver: bridge
# Volumes
volumes:
  dbdata:
    driver: local
  redisdata:
    driver: local
  fsdata:
    driver: local
There is a way to force a service to wait for another service in Docker Compose: depends_on. But it only waits until the container is up, not until the service inside it is ready. To fix that, you have to use command to execute a script that checks both that the Redis container is reachable and that the Redis daemon is accepting connections; check startup-order for how to set it up.
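A minimal sketch of such a check, assuming redis-cli is available in the image that runs it (host and port taken from the compose file above):
#!/bin/sh
# Block until the Redis daemon itself (not just the container) answers PING.
until redis-cli -h my_redis -p 6379 ping | grep -q PONG; do
  echo "waiting for redis..."
  sleep 1
done
exec "$@"  # hand off to the real command once Redis is ready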
I currently mitigated this by adding --no-scripts to composer install in the Dockerfile and adding a start.sh, since it is Laravel's package discovery script, bound to post-autoload-dump, that wants to access Redis.
Dockerfile excerpt
#...
# Change current user to www
USER www
# Install packages
RUN composer global require hirak/prestissimo && composer install --no-scripts
RUN chmod +x /var/www/scripts/start.sh
# Expose port 9000 and start php-fpm server
EXPOSE 9000
CMD ["/var/www/scripts/start.sh"]
start.sh
#!/usr/bin/env sh
composer dumpautoload
php-fpm
I'm sure you've resolved this yourself by now, but for anyone else coming across this question later, there are two solutions I have found:
1. Ensure Redis is up and running before your App
In your my_redis service in docker-compose.yml add this...
healthcheck:
  test: ["CMD", "redis-cli", "ping"]
...then in your my_app service in docker-compose.yml add...
depends_on:
  my_redis:
    condition: service_healthy
2. Use separate docker compose setups for local development and CI/CD pipelines
Even better, in my opinion, is to create a new docker-compose.testing.yml. In it you can omit the redis service entirely and just use CACHE_DRIVER=array. You could set this either directly in the environment property of your my_app service or create a .env.testing (make sure to set APP_ENV=testing too).
I like this approach because, as your application grows, there may be more and more packages you want to enable/disable or configure differently in your testing environment, and using .env.testing in conjunction with a docker-compose.testing.yml is a great way to manage that.
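A sketch of what that override might look like, assuming a docker-compose.testing.yml alongside the main file (service and variable names as described above):
version: '3.6'
services:
  my_app:
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      APP_ENV: testing
      CACHE_DRIVER: array  # in-memory cache; no redis service needed at all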
I'm using docker-php with nginx + php-fpm (a docker-compose project). When I try to run an example from the documentation:
<?php
use Docker\API\Model\ContainersCreatePostBody;
use Docker\Docker;
$docker = Docker::create();
$containerConfig = new ContainersCreatePostBody();
$containerConfig->setImage('nginx:latest');
$containerConfig->setCmd(['echo', 'I am running a command']);
$containerCreateResult = $docker->containerCreate($containerConfig);
var_dump($containerCreateResult);
exit;
and I'm getting the error:
Http\Client\Socket\Exception\ConnectionException - Permission denied
As far as I understand, the problem is that the group the php-fpm user belongs to does not have rw rights on docker.sock (I'm mounting it from the host on which Docker is running).
Configuration:
docker-compose:
The shell directory contains a Yii2 application that uses docker-php.
version: '2'
services:
  web:
    image: 'nginx:latest'
    container_name: web
    ports:
      - '80:80'
      - '443:443'
    volumes:
      - './:/shell'
    networks:
      - backend
      - frontend
    restart: always
  php:
    build: ./docker/php/
    container_name: php
    volumes:
      - './:/shell'
      - '/var/run/docker.sock:/var/run/docker.sock'
    environment: []
    networks:
      - backend
    restart: always
networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge
Dockerfile for php-fpm: github gist (the file is too large to post here, ~100 lines)
Docker itself is installed in the php-fpm container only for this experiment; it is not otherwise needed there.
Software versions:
Docker version 1.13.1
docker-compose version 1.8.0
Kubuntu 17.10 x64
I found something similar on the Internet (one, two, three ...); the solution is to add the user that the application runs as inside the container to the www-data group.
If I give docker.sock 777 permissions, everything works, but that is a bad solution =)
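For later readers, one commonly suggested middle ground (an assumption, not from this thread) is to grant the container's process the host's docker group instead of opening the socket to everyone. A sketch using compose's group_add, where 999 stands in for the host's docker GID (check it with: stat -c '%g' /var/run/docker.sock):
php:
  build: ./docker/php/
  group_add:
    - "999"  # GID of the docker group on the host (assumption; verify with stat)
  volumes:
    - './:/shell'
    - '/var/run/docker.sock:/var/run/docker.sock'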
I have several websites using PHP, each in a different Docker container.
Normally, outside of Docker, all the websites would use the same PHP installation.
How can that be achieved with Docker?
This is my current setup; my single nginx container uses this PHP instance (php-fpm.dockerfile only installs php-mysql):
php:
  env_file: ".env"
  build:
    context: .
    dockerfile: php-fpm.dockerfile
  volumes:
    - "./server/www/:/var/www/:cached"
I want to add another nginx server using this PHP instance; is it possible?
This is my entire relevant yml file. The nginx_server instance is using php; how can I tell the php instance to serve the nginx_site instance as well?
version: "3.5"
nginx_server:
image: nginx
env_file: ".env"
links:
- php
- mysql
ports:
- "8081:80"
restart:
always
volumes :
- "./server/nginx/default.conf.php.template:/etc/nginx/conf.d/default.conf.template"
- "./server/logs/:/var/log/nginx/:cached"
- "./server/www/:/var/www/:cached"
command : /bin/bash -c "envsubst '$$NGINX_HOST,$$NGINX_ROOT,$$NGINX_INDEX_FILE,$$NGINX_PORT' < /etc/nginx/conf.d/default.conf.template > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"
environment:
- NGINX_HOST=$NGINX_SERVER_HOST
- NGINX_ROOT=$NGINX_SERVER_ROOT/public
- NGINX_INDEX_FILE=index.php
- NGINX_PORT=8081
nginx_site:
image: nginx
env_file: ".env"
links:
- php
- mysql
ports:
- "8082:80"
restart:
always
volumes :
- "./site/nginx/default.conf.php.template:/etc/nginx/conf.d/default.conf.template"
- "./site/logs/:/var/log/nginx/:cached"
- "./site/www/:/var/www/:cached"
command : /bin/bash -c "envsubst '$$NGINX_HOST,$$NGINX_ROOT,$$NGINX_INDEX_FILE,$$NGINX_PORT' < /etc/nginx/conf.d/default.conf.template > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"
environment:
- NGINX_HOST=$NGINX_SITE_HOST
- NGINX_ROOT=/var/www/
- NGINX_INDEX_FILE=index.php
- NGINX_PORT=8082
php:
env_file: ".env"
build :
context : .
dockerfile : php-fpm.dockerfile
volumes:
- "./server/www/:/var/www/:cached"
thanks
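A sketch of one common approach (an assumption, not a confirmed answer): a single php-fpm container can serve any number of nginx frontends, as long as every script path that nginx sends in SCRIPT_FILENAME also exists inside the php container. So mount the second site into the same php service and point each NGINX_ROOT at the matching path:
php:
  env_file: ".env"
  build:
    context: .
    dockerfile: php-fpm.dockerfile
  volumes:
    - "./server/www/:/var/www/server/:cached"  # paths are assumptions; each nginx's
    - "./site/www/:/var/www/site/:cached"      # NGINX_ROOT must point at the same path
Both nginx_server and nginx_site then use fastcgi_pass php:9000 with their respective roots.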
I have the following docker-compose.yml file which runs nginx with PHP support:
version: '3'
services:
  nginx:
    container_name: my-app-nginx
    image: nginx:1.13.6
    ports:
      - 8080:80
    volumes:
      - ./nginx-default.conf:/etc/nginx/conf.d/default.conf
      - ./my-app:/var/www/my-app
    restart: always
    depends_on:
      - php
  php:
    container_name: my-app-php
    image: php:7.1-fpm
    volumes:
      - ./my-app:/var/www/my-app
    restart: always
The PHP application inside /var/www/my-app needs to communicate with a Linux daemon (let's call it myappd).
The way I see it, I need to either:
Copy myappd into the nginx container at /usr/local/bin, make it executable with chmod +x, and run it in the background.
Create a different container, copy myappd to /usr/local/bin, make it executable with chmod +x, and run it in the foreground.
Now, I'm new to Docker and I'm researching and learning about it, but my best guess, given that I'm using Docker Compose, is that option 2 is probably the recommended one? Given my limited knowledge about Docker, I'd have to guess that this container would require some sort of Linux-based image (like Ubuntu or something) to run this binary. So maybe option 1 is preferred? Or maybe option 2 is possible with a minimal Ubuntu image, or maybe it's possible without such an image?
Either way, I have no idea how I would implement that in the compose file. Especially option 2: how would the PHP application communicate with the daemon in a different container? Would just "sharing" a volume (where the binary is located) like I did for the nginx/php services suffice, or is something else required?
The simple answer is adding a command entry to the php service in docker-compose.yml.
Given that myappd is at ./my-app/ on the host machine and at /var/www/my-app/ in the container, the updated docker-compose.yml is something like the following.
version: '3'
services:
  nginx:
    container_name: my-app-nginx
    image: nginx:1.13.6
    ports:
      - 8080:80
    volumes:
      - ./nginx-default.conf:/etc/nginx/conf.d/default.conf
      - ./my-app:/var/www/my-app
    restart: always
    depends_on:
      - php
  php:
    container_name: my-app-php
    image: php:7.1-fpm
    volumes:
      - ./my-app:/var/www/my-app
    restart: always
    command: /bin/sh -c "/var/www/my-app/myappd && php-fpm"
The better answer is to create a third container which runs the Linux daemon.
The new Dockerfile is something like the following.
FROM debian:jessie
COPY ./myappd /usr/src/app/
EXPOSE 44444
ENTRYPOINT ["/bin/sh"]
CMD ["/usr/src/app/myappd"]
Build the image and name it myapp/myappd.
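(For example, run docker build -t myapp/myappd . from the directory containing that Dockerfile.)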
The updated docker-compose.yml is something like the following.
version: '3'
services:
  nginx:
    container_name: my-app-nginx
    image: nginx:1.13.6
    ports:
      - 8080:80
    volumes:
      - ./nginx-default.conf:/etc/nginx/conf.d/default.conf
      - ./my-app:/var/www/my-app
    restart: always
    depends_on:
      - php
  php:
    container_name: my-app-php
    image: php:7.1-fpm
    volumes:
      - ./my-app:/var/www/my-app
    restart: always
    networks:
      - network1
    depends_on:
      - daemon
  daemon:
    container_name: my-app-daemon
    image: myapp/myappd
    restart: always
    networks:
      - network1
networks:
  network1:
You can send requests to the hostname daemon from inside php: a Docker container can resolve the hostname of another container on the same network.
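For example, a minimal sketch from the php container, assuming myappd listens on the exposed TCP port 44444 and speaks a line-based protocol (the protocol and the "ping" command are assumptions):
<?php
// "daemon" resolves through Docker's embedded DNS on network1.
$sock = fsockopen('daemon', 44444, $errno, $errstr, 5);
if ($sock === false) {
    die("connect failed: $errstr ($errno)");
}
fwrite($sock, "ping\n");  // hypothetical command understood by myappd
echo fgets($sock);        // read one line of response
fclose($sock);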