Why does the PHP development server hang with Docker Compose?

I have the following docker-compose.yml configuration file:
silex-twig:
  image: php:7.1
  command: php -S localhost:4000 /app/index.php
  volumes:
    - .:/app
  ports:
    - 3000:4000
When I run docker-compose up it downloads the base image but then it hangs here:
Recreating silextwig_silex-twig_1
Attaching to silextwig_silex-twig_1
What am I doing wrong? There is nothing available on port 3000.
I know there are setups with php-fpm + nginx, but that seemed overly complicated just for development.

That is normal. It is attaching to the stdout of the container (for which there is no stdout being logged). At this point, the container is running.
If you want to run in the background you would run with docker-compose up -d instead of just docker-compose up.
The actual HTTP request to port 3000 won't work because PHP is listening only on localhost. You need to modify your command to be php -S 0.0.0.0:4000 /app/index.php so that it is listening on all IP addresses and can accept connections through the Docker NAT.
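For reference, the corrected compose file differs only in the bind address; a minimal sketch based on the configuration above:
silex-twig:
  image: php:7.1
  command: php -S 0.0.0.0:4000 /app/index.php
  volumes:
    - .:/app
  ports:
    - 3000:4000
With that change, docker-compose up attaches as before and curl http://localhost:3000 from the host should reach the built-in server.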

Related

Get PHP container on port 8000 after php artisan serve

I can't get the container to listen on port 8000, even after adding the port mapping to my docker-compose.yml file.
All relevant files can be found here: https://github.com/salvatore-esposito/laravel-dockerized
I ran docker-compose exec app php artisan serve and it ran successfully.
If I go inside the container, curl works as expected, but it doesn't work from the outside; the connection gets refused.
I fetched the IP using docker-machine ip.
Please note that I mapped the host port to the container port via docker-compose.yml, even though that mapping is not in the repository.
I tried to copy all files to a built image and launch:
docker run --rm -p 8000:8000 --name laravel salvio/php-laravel php artisan serve
and
docker exec -it laravel bash
Once again, if I run curl localhost:80 and curl localhost:8000 inside the container, the former doesn't work and the latter does, whereas if I take the container's IP via docker inspect name_container and run curl ip_of_container:8000, I get nothing.
When using docker-compose exec, the command only keeps running until its interactive session is stopped (by pressing Ctrl-C or closing the terminal), because it isn't running as a service. To keep the following command running
docker-compose exec app php artisan serve
you would have to open two terminals: one for the command and one to connect to the container and check port 8000.
If you want to access port 8000 of your container, you have to expose it in the Dockerfile:
# rest of docker file
# Copy existing application directory permissions
#COPY --chown=www-data:www-data ./code /var/www/html
# Change current user to www-data
#USER www-data
# Expose ports 80 and 8000
EXPOSE 80
EXPOSE 8000
and map it to your host in the docker-compose file:
app:
  build:
    context: .
    dockerfile: .config/php/Dockerfile
  image: salvio/php-composer-dev
  container_name: app
  restart: unless-stopped
  tty: true
  environment:
    SERVICE_NAME: app
    SERVICE_TAGS: dev
  working_dir: /var/www/html
  ports:
    - "80:80"
    - "8000:8000"
  volumes:
    - ./code/:/var/www/html
    - .config/php/php.ini:/usr/local/etc/php/conf.d/local.ini
  networks:
    - myproject-network
Please keep in mind php artisan serve binds to localhost:8000. This means this is only reachable within the container. Use
php artisan serve --host 0.0.0.0
to bind to the shared network interface (see the compose sketch after the resource links below). Check out the following resources:
https://stackoverflow.com/a/54022753/6310593
How do you dockerize a WebSocket Server?
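As a sketch of how this could look in the compose file from above (the command line is an assumption, not part of the original project):
app:
  build:
    context: .
    dockerfile: .config/php/Dockerfile
  command: php artisan serve --host 0.0.0.0 --port 8000
  ports:
    - "8000:8000"
With the server bound to 0.0.0.0 inside the container, the 8000:8000 mapping makes it reachable from the host.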

How to connect multiple docker container using link? [duplicate]

I want to run rabbitmq-server in one docker container and connect to it from another container using celery (http://celeryproject.org/)
I have rabbitmq running using the below command...
sudo docker run -d -p :5672 markellul/rabbitmq /usr/sbin/rabbitmq-server
and running the celery via
sudo docker run -i -t markellul/celery /bin/bash
When I am trying to do the very basic tutorial to validate the connection on http://docs.celeryproject.org/en/latest/getting-started/first-steps-with-celery.html
I am getting a connection refused error:
consumer: Cannot connect to amqp://guest@127.0.0.1:5672//: [Errno 111]
Connection refused.
When I install rabbitmq on the same container as celery it works fine.
What do I need to do to have the containers interact with each other?
[edit 2016]
Direct links are deprecated now. The new way to link containers is docker network connect. It works much like virtual networks and has a wider feature set than the old way of linking.
First you create your named containers:
docker run --name rabbitmq -d -p :5672 markellul/rabbitmq /usr/sbin/rabbitmq-server
docker run --name celery -it markellul/celery /bin/bash
Then you create a network (last parameter is your network name):
docker network create -d bridge --subnet 172.25.0.0/16 mynetwork
Connect the containers to your newly created network:
docker network connect mynetwork rabbitmq
docker network connect mynetwork celery
Now, both containers are in the same network and can communicate with each other.
A very detailed user guide can be found at Work with networks: Connect containers.
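With both containers on mynetwork, the celery side can address RabbitMQ by container name. As a hedged illustration (the tasks module is assumed, and the broker URL uses RabbitMQ's default guest account):
# inside the celery container: connect to the rabbitmq container by name
celery -A tasks worker --broker=amqp://guest@rabbitmq:5672//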
[old answer]
There is a new feature in Docker 0.6.5 called linking, which is meant to help the communication between docker containers.
First, create your rabbitmq container as usual. Note that I also used the new "name" feature, which makes life a little bit easier:
docker run --name rabbitmq -d -p :5672 markellul/rabbitmq /usr/sbin/rabbitmq-server
You can use the link parameter to map a container (we use the name here, the id would be ok too):
docker run --link rabbitmq:amq -i -t markellul/celery /bin/bash
Now you have access to the IP and port of the rabbitmq container because Docker automatically added some environment variables:
$AMQ_PORT_5672_TCP_ADDR
$AMQ_PORT_5672_TCP_PORT
In addition Docker adds a host entry for the source container to the /etc/hosts file. In this example amq will be a defined host in the container.
From Docker documentation:
Unlike host entries in the /etc/hosts file, IP addresses stored in the environment variables are not automatically updated if the source container is restarted. We recommend using the host entries in /etc/hosts to resolve the IP address of linked containers.
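For example, inside the linked celery container you can see both mechanisms at work (a small sketch):
# inside the linked celery container
cat /etc/hosts                                        # contains a host entry for "amq"
echo $AMQ_PORT_5672_TCP_ADDR $AMQ_PORT_5672_TCP_PORT  # injected by the link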
Just get your container ip, and connect to it from another container:
CONTAINER_IP=$(sudo docker inspect --format '{{ .NetworkSettings.IPAddress }}' $CONTAINER_ID)
echo $CONTAINER_IP
When you specify -p 5672, Docker opens up a new port on the host, such as 49xxx, and forwards it to port 5672 of the container.
You should be able to see which host port is forwarded to the container by running:
sudo docker ps -a
From there, you can connect directly to the host IP address like so:
amqp://guest@HOST_IP:49xxx
You can't use localhost, because each container is basically its own localhost.
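Instead of scanning the docker ps output, docker port prints the mapping directly; a sketch assuming the container was started with --name rabbitmq:
sudo docker port rabbitmq 5672
# prints something like 0.0.0.0:49154 - use that port in the amqp:// URL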
Create Image:
docker build -t "imagename1" .
docker build -t "imagename2" .
Run Docker image:
docker run -it -p 8000:8000 --name=imagename1 imagename1
docker run -it -p 8080:8080 --name=imagename2 imagename2
Create Network:
docker network create -d bridge "networkname"
Connect the network to the containers created when running the images:
docker network connect "networkname" "imagename1"
docker network connect "networkname" "imagename2"
We can add any number of containers to the network.
docker network inspect "networkname"
I think you can't connect to another container directly by design - that would be the responsibility of the host. An example of sharing data between containers using volumes is given here http://docs.docker.io/en/latest/examples/couchdb_data_volumes/, but I don't think that is what you're looking for.
I recently found out about https://github.com/toscanini/maestro - that might suit your needs. Let us know if it does :), I haven't tried it myself yet.
Edit: Note that you can read here that native "Container wiring and service discovery" is on the roadmap. I guess 0.7 or 0.8 at the latest.
You can get the docker instance IP with...
CID=$(sudo docker run -d -p :5672 markellul/rabbitmq /usr/sbin/rabbitmq-server); sudo docker inspect $CID | grep IPAddress
But that's not very useful.
You can use pipework to create a private network between docker containers.
This is currently on the 0.8 roadmap:
https://github.com/dotcloud/docker/issues/1143

Nginx reverse proxy docker behind apache not working

On my single VPS host I already have Apache serving websites. I want to keep it and run a Docker container with this nginx-proxy serving a subdomain. Later I will migrate everything to Docker.
Apache runs on ports 80 and 443.
nginx-proxy runs on port 81 using the jwilder/nginx-proxy Docker image.
subdomain.domain.com maps to the server IP: 109.xxx.xx.xx
Apache is serving content correctly.
nginx-proxy runs smoothly on port 81.
docker run -d --name nginx-proxy -p 81:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
Then I run my subdomain container as suggested here:
docker run -e VIRTUAL_HOST=crawling.domain.com -e VIRTUAL_PORT=8181 --volume /home/vps/crawling/crawling/:/var/www/html --detach --publish 8181:80 crawling
The problem now is that when I use http://subdomain.domain.com it goes to the server IP (109.xxx.xx.xx), where some dummy index page is served.
Currently I can't change the Apache port as it would affect the content already being served.
EDIT:
As @Tarun suggests, I need to proxy requests from Apache to the nginx Docker container. Any suggestion would be great.
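One common approach, sketched here under the assumption that mod_proxy and mod_proxy_http are enabled and that nginx-proxy listens on 127.0.0.1:81, is an Apache virtual host that forwards the subdomain to the proxy port:
<VirtualHost *:80>
    ServerName crawling.domain.com

    # hand every request for this subdomain to the nginx-proxy container
    ProxyPreserveHost On
    ProxyPass        / http://127.0.0.1:81/
    ProxyPassReverse / http://127.0.0.1:81/
</VirtualHost>
ProxyPreserveHost On matters here because jwilder/nginx-proxy routes requests by the Host header (the VIRTUAL_HOST value).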

Basic reverse-proxy in Docker: Forbidden (no permission)

I start the jwilder nginx proxy on port 8080:
docker run -d -p 8080:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
I'm running a standard php:apache container without mapping the port (so 80 is exposed on the container network).
I use an env var to connect with the proxy:
docker run -d -e VIRTUAL_HOST=test.example.com php:apache
On my localhost I've added this to /etc/hosts:
IP test.example.com
Now I visit test.example.com:8080 on my computer, so I connect to the reverse proxy (8080), which routes me to php:apache on container port 80.
But I get this error:
Error:
Forbidden
You don't have permission to access / on this server.
Apache/2.4.10 (Debian) Server at test.example.com Port 8080
What am I missing? Do I have to change the Apache configuration somewhere? (It's all defaults now.) The request seems to go through nginx, because I'm seeing an Apache error, so I think I need to tell the Apache inside php:apache to allow this route?
Your title appears to be misleading. From your description, you've set up a properly functioning reverse proxy, and the target you are connecting to with your reverse proxy is broken. If you review the Docker Hub page for the php:apache image you'll find multiple examples of how to load your PHP code into the image and get it working. E.g.:
$ docker run -d -e VIRTUAL_HOST=test.example.com \
-v "$PWD/your/php/code/dir":/var/www/html php:7.0-apache

Logging PHP API Request Info from Docker container

I'm working on a project where I have an iOS app connecting to a PHP API. I want to log all incoming requests to the web service for development purposes (i.e. I need a solution that can be turned on and off based on an environment variable). The API runs in a Docker container, which is launched as a docker-compose service.
The PHP API is not using any sort of MVC framework.
My PHP experience is limited, so I know I've got some research ahead of me, but in the meantime, I'd appreciate any jump start to the following questions:
Is there a composer library that I can plug into my PHP code that will write to a tailed log?
Can I plug anything at the nginx or php-fpm container level so that requests to those containers are logged before even getting to the PHP code?
Is there anything I need to configure in either the nginx or php-fpm containers to ensure that logs are tailed when I run docker-compose up?
Here are my logging needs:
request method
request URL
GET query parameters, PUT and POST parameters (these will be in JSON format)
response code
response body
The logs I'm interested in are all application/json. However, I don't mind the kitchen-sink option where everything gets logged.
request and response headers
I will not need these 99% of the time, so they aren't a requirement, but it'd be nice to be able to toggle them on and off.
Below is the docker-compose.yml file.
version: '2'
services:
  gearman:
    image: gearmand
  redis:
    image: redis
  database:
    image: mariadb
    environment:
      MYSQL_ROOT_PASSWORD: ******
      MYSQL_DATABASE: database
    volumes:
      - dbdata:/var/lib/mysql
    ports:
      - 3306:3306
  sphinx:
    image: sphinx
    links:
      - database
  memcached:
    image: memcached
    ports:
      - "11211:11211"
  php_fpm:
    image: php-fpm
    links:
      - redis
      - memcached
      - database
    environment:
      REDIS_SERVER: redis
      DATABASE_HOST: database
      RW_DATABASE_HOST: database
      RO_DATABASE_HOST0: database
      DATABASE_USER: root
      DATABASE_PASS: ******
    volumes:
      - ./website:/var/www/website/
      - /var/run
  nginx:
    image: nginx
    links:
      - php_fpm
    ports:
      - "80:80"
    volumes_from:
      - php_fpm
volumes:
  dbdata:
    driver: local
It turned out the container was logging fine with all the default settings; my client was simply pointing to another server. But just to leave a trail:
Inside the php_fpm container (docker exec -it dev_php_fpm_1 /bin/bash), you can cat /etc/php5/fpm/php.ini, which shows the default error_log settings:
; Log errors to specified file. PHP's default behavior is to leave this value
; empty.
; http://php.net/error-log
; Example:
;error_log = php_errors.log
; Log errors to syslog (Event Log on Windows).
;error_log = syslog
Just use the error_log function to write to the default OS logger.
Then follow the logs of the php_fpm service with docker logs -f -t dev_php_fpm_1.
Update:
In case error_log is too limited for your purposes, you can also simply write to a file yourself:
file_put_contents(realpath(dirname(__FILE__)) . '/requests.log', $msg, FILE_APPEND);
and then tail it with tail -f requests.log, either from inside the container or from outside it if you are using a local volume.
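To get closer to the original wish list (method, URL, parameters, toggled by an environment variable), a minimal sketch of a logger you could include from the API's front controller might look like this; the REQUEST_LOGGING variable and the log path are illustrative assumptions:
<?php
// request_log.php - include this at the top of the API entry point.
// Logging is only active when the (hypothetical) REQUEST_LOGGING env var is set.
if (getenv('REQUEST_LOGGING')) {
    $entry = array(
        'time'   => date('c'),
        'method' => $_SERVER['REQUEST_METHOD'],
        'url'    => $_SERVER['REQUEST_URI'],
        'query'  => $_GET,
        'body'   => file_get_contents('php://input'), // raw JSON of PUT/POST requests
    );
    file_put_contents(
        __DIR__ . '/requests.log',
        json_encode($entry) . PHP_EOL,
        FILE_APPEND
    );
}
Response code and body would need extra plumbing (an output buffer or a small middleware layer), so they are left out of this sketch.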
