Docker php url access - php

I've built my PHP image with Docker and I'm running it with:
docker run -d -p 8080:8080 --name r-lafermeduweb lafermeduweb
But I can't access my app.
I've tried 192.168.10.xx:8080, but it isn't found.
I've also tried:
docker inspect --format '{{ .NetworkSettings.IPAddress }}' 0005086cf69c // returns 172.17.0.2
But 172.17.0.2:8080 doesn't respond either.
Do you have any idea why I can't access my PHP app?
My Dockerfile:
FROM php:7.0-apache
COPY . /var/www/html/
Thank you!

You have to bind your desired host port (e.g. 8080) to port 80 of the container, since Apache inside the container listens on 80:
docker run -d -p 8080:80 --name r-lafermeduweb lafermeduweb

If you look at the base image's Dockerfile referenced here, port 80 is exposed by default, so technically this should work when you hit localhost or your host IP address.
Maybe you forgot to rebuild the image before running; every time you change the Dockerfile you need to build a fresh image and run that.
Refer here for more information.
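For completeness, the rebuild-and-run cycle would look roughly like this (a sketch; the curl check is my addition, the image and container names come from the question):
docker build -t lafermeduweb .
docker rm -f r-lafermeduweb
docker run -d -p 8080:80 --name r-lafermeduweb lafermeduweb
curl http://localhost:8080/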

Related

How to bind secured PHP source code running inside Docker to a specific IP or MAC address? [duplicate]

I have two network interfaces, eth0 and eth1.
How can I bind all Docker containers to eth1, so that all network traffic goes out and comes in via eth1?
Thanks~
Update
I tried to bind to eth1's IP, 133.130.60.36.
But I still have no luck: I still get the eth0 IP as the public IP inside the container, so the network flow is not going out via eth1.
➜ ~ docker run -d --name Peach_1 -p 133.130.60.36::9998 -ti sample/ubuntu-vnc-selenium-firefox
➜ ~ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
eb28f0d1c337 sample/ubuntu-vnc-selenium-firefox "/opt/bin/run_sele_s 4 minutes ago Up 4 minutes 5901/tcp, 133.130.60.36:32768->9998/tcp Peach_1
➜ ~ docker exec -ti Peach_1 zsh
➜ / curl ipecho.net/plain ; echo
133.130.101.114
Here's something from the Docker docs:
https://docs.docker.com/v17.09/engine/userguide/networking/default_network/binding/
If you want to be more restrictive and only allow container services
to be contacted through a specific external interface on the host
machine, you have two choices. When you invoke docker run you can use
either -p IP:host_port:container_port or -p IP::port to specify the
external interface for one particular binding.
Or if you always want Docker port forwards to bind to one specific IP address, you can edit your system-wide Docker server
settings and add the option --ip=IP_ADDRESS. Remember to restart your
Docker server after editing this setting.
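On current Docker versions, the same daemon-wide setting can also live in /etc/docker/daemon.json via its ip key (a sketch; the address is eth1's IP from the question):
{
  "ip": "133.130.60.36"
}
Then restart the daemon, e.g. sudo systemctl restart docker.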
Putting an IP in -p only works for traffic that comes in to the server. For traffic leaving the server, you can assign a static local IP to each container and then change the source IP with an iptables SNAT rule. Here is a sample iptables rule:
iptables -t nat -I POSTROUTING -p all -s 172.20.128.2 ! -d 172.20.128.2 -j SNAT --to-source YourInterfaceIP
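Put together, a rough sketch of the whole approach (the network name outnet and its subnet are my assumptions; 133.130.60.36 is eth1's IP from the question):
docker network create --subnet 172.20.128.0/24 outnet
docker run -d --net outnet --ip 172.20.128.2 --name Peach_1 sample/ubuntu-vnc-selenium-firefox
iptables -t nat -I POSTROUTING -s 172.20.128.2 ! -d 172.20.128.0/24 -j SNAT --to-source 133.130.60.36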

How to connect multiple docker container using link? [duplicate]

I want to run rabbitmq-server in one docker container and connect to it from another container using celery (http://celeryproject.org/)
I have rabbitmq running using the below command...
sudo docker run -d -p :5672 markellul/rabbitmq /usr/sbin/rabbitmq-server
and run celery via
sudo docker run -i -t markellul/celery /bin/bash
When I try the very basic tutorial to validate the connection on http://docs.celeryproject.org/en/latest/getting-started/first-steps-with-celery.html
I get a connection refused error:
consumer: Cannot connect to amqp://guest@127.0.0.1:5672//: [Errno 111]
Connection refused.
When I install rabbitmq on the same container as celery it works fine.
What do I need to do to have the containers interact with each other?
[edit 2016]
Direct links are deprecated now. The new way to link containers is docker network connect. It works quite similarly to virtual networks and has a wider feature set than the old way of linking.
First you create your named containers:
docker run --name rabbitmq -d -p :5672 markellul/rabbitmq /usr/sbin/rabbitmq-server
docker run --name celery -it markellul/celery /bin/bash
Then you create a network (last parameter is your network name):
docker network create -d bridge --subnet 172.25.0.0/16 mynetwork
Connect the containers to your newly created network:
docker network connect mynetwork rabbitmq
docker network connect mynetwork celery
Now, both containers are in the same network and can communicate with each other.
A very detailed user guide can be found at Work with networks: Connect containers.
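To sanity-check the setup: containers on a user-defined network can resolve each other by name, so you can verify with something like this (assuming a glibc-based image that ships getent):
docker exec celery getent hosts rabbitmq
Celery can then use amqp://guest@rabbitmq:5672// as its broker URL instead of a hard-coded IP.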
[old answer]
There is a new feature in Docker 0.6.5 called linking, which is meant to help with communication between Docker containers.
First, create your rabbitmq container as usual. Note that I also used the new "name" feature, which makes life a little bit easier:
docker run --name rabbitmq -d -p :5672 markellul/rabbitmq /usr/sbin/rabbitmq-server
You can use the link parameter to map a container (we use the name here, the id would be ok too):
docker run --link rabbitmq:amq -i -t markellul/celery /bin/bash
Now you have access to the IP and port of the rabbitmq container, because Docker automatically added some environment variables:
$AMQ_PORT_5672_TCP_ADDR
$AMQ_PORT_5672_TCP_PORT
In addition, Docker adds a host entry for the source container to the /etc/hosts file. In this example, amq will be a defined host in the container.
From Docker documentation:
Unlike host entries in the /etc/hosts file, IP addresses stored in the environment variables are not automatically updated if the source container is restarted. We recommend using the host entries in /etc/hosts to resolve the IP address of linked containers.
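So the stable way to wire up celery here is through the amq host entry rather than the environment variables; a minimal sketch, assuming a Celery version that reads the classic BROKER_URL setting:
# celeryconfig.py
BROKER_URL = 'amqp://guest@amq:5672//'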
Just get your container's IP and connect to it from another container:
CONTAINER_IP=$(sudo docker inspect --format '{{ .NetworkSettings.IPAddress }}' $CONTAINER_ID)
echo $CONTAINER_IP
When you specify -p 5672, what Docker does is open up a new port, such as 49xxx, on the host and forward it to port 5672 of the container.
You should be able to see which port is forwarded to the container by running:
sudo docker ps -a
From there, you can connect directly to the host IP address like so:
amqp://guest@HOST_IP:49xxx
You can't use localhost, because each container is basically its own localhost.
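docker port shows the mapping directly, without scanning the docker ps output (the container name rabbitmq is an assumption here):
sudo docker port rabbitmq 5672
# e.g. 0.0.0.0:49154 -> connect with amqp://guest@HOST_IP:49154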
Create Image:
docker build -t "imagename1" .
docker build -t "imagename2" .
Run Docker image:
docker run -it -p 8000:8000 --name=imagename1 imagename1
docker run -it -p 8080:8080 --name=imagename2 imagename2
Create Network:
docker network create -d bridge "networkname"
Connect the network to the containers you created when running the images:
docker network connect "networkname" "imagename1"
docker network connect "networkname" "imagename2"
We can add any number of containers to the network. To verify, run:
docker network inspect "networkname"
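Once both containers are attached, each can reach the other by container name; a quick check (assuming curl is installed inside the first container):
docker exec imagename1 curl http://imagename2:8080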
I think you can't connect to another container directly by design - that would be the responsibility of the host. An example of sharing data between containers using volumes is given here: http://docs.docker.io/en/latest/examples/couchdb_data_volumes/, but I don't think that is what you're looking for.
I recently found out about https://github.com/toscanini/maestro - that might suit your needs. Let us know if it does :), I haven't tried it myself yet.
Edit: note that you can read here that native "Container wiring and service discovery" is on the roadmap. I guess 0.7 or 0.8 at the latest.
You can get the docker instance IP with...
CID=$(sudo docker run -d -p :5672 markellul/rabbitmq /usr/sbin/rabbitmq-server); sudo docker inspect $CID | grep IPAddress
But that's not very useful.
You can use pipework to create a private network between docker containers.
This is currently on the 0.8 roadmap:
https://github.com/dotcloud/docker/issues/1143

Nginx reverse proxy docker behind apache not working

On my single VPS host I already have Apache with websites. I want to keep it and have a Docker container running this nginx-proxy with a subdomain. Later I will migrate everything into Docker.
Apache runs on ports 80 and 443.
nginx-proxy runs on port 81, using the jwilder/nginx-proxy Docker image.
subdomain.domain.com maps to the server IP: 109.xxx.xx.xx
Apache is serving content correctly.
nginx-proxy runs smoothly on port 81.
docker run -d --name nginx-proxy -p 81:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
Then I run my subdomain container as suggested here:
docker run -e VIRTUAL_HOST=crawling.domain.com -e VIRTUAL_PORT=8181 --volume /home/vps/crawling/crawling/:/var/www/html --detach --publish 8181:80 crawling
Now the problem is that when I use http://subdomain.domain.com, it redirects to the server IP's (109.xxx.xx.xx) home, where a dummy index page is served.
Currently I can't change the Apache port, as it would affect a lot of served content.
EDIT:
As @Tarun suggested, I need to proxy from Apache to the nginx Docker container. Any suggestion would be great.
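A minimal Apache vhost sketch for that (my suggestion, not from the thread; it assumes mod_proxy and mod_proxy_http are enabled, and ProxyPreserveHost is needed so nginx-proxy can match VIRTUAL_HOST):
<VirtualHost *:80>
    ServerName crawling.domain.com
    ProxyPreserveHost On
    ProxyPass / http://127.0.0.1:81/
    ProxyPassReverse / http://127.0.0.1:81/
</VirtualHost>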

Basic reverse-proxy in Docker: Forbidden (no permission)

I start the jwilder nginx proxy on port 8080:
docker run -d -p 8080:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
I'm running a standard php:apache container without mapping the port (so 80 is exposed on the container network).
I use an env var to connect with the proxy:
docker run -d -e VIRTUAL_HOST=test.example.com php:apache
on my localhost I've added this in /etc/hosts:
IP test.example.com
Now I visit test.example.com:8080 on my computer, so I connect to the reverse proxy (8080), which should route me to php-apache on container port 80.
But I get this error:
Error:
Forbidden
You don't have permission to access / on this server.
Apache/2.4.10 (Debian) Server at test.example.com Port 8080
What am I missing? Do I have to change the Apache configuration somewhere? (It's now all default.) It seems to go through nginx, because I'm seeing an Apache error, so I think I need to tell the Apache inside (php-apache) to allow this route?
Your title appears to be misleading. From your description, you've set up a properly functioning reverse proxy, and the target you are connecting to with your reverse proxy is broken. If you review the Docker Hub page for the php:apache image, you'll find multiple examples of how to load your PHP code into the image and get it working. E.g.:
$ docker run -d -e VIRTUAL_HOST=test.example.com \
-v "$PWD/your/php/code/dir":/var/www/html php:7.0-apache

Can not access wordpress configured in Docker even though its status is running

I'm following the instructions here to make a WordPress site in Docker:
http://www.sitepoint.com/how-to-use-the-official-docker-wordpress-image/
1. Pull & run the mysql image:
docker run --name wordpressdb -e MYSQL_ROOT_PASSWORD=password -e MYSQL_DATABASE=wordpress -d mysql:5.7
2. Pull & run the wordpress image and link the mysql container to it:
docker run -e WORDPRESS_DB_PASSWORD=password -d --name wordpress --link wordpressdb:mysql wordpress
And I can see those two containers running.
I can inspect the wordpress container and try to get the IP and port.
Also, when I inspect the mysql container, I cannot use the host/IP to log in from a MySQL browser.
Edit: I ran the wordpress container with -p.
Update: finally, I made it work. If I run the container with a specific IP, for example 127.0.0.1, in the -p command:
-p 127.0.0.1:8080:80
it won't work.
If I don't specify the IP, or use 0.0.0.0 as the IP, it will work:
-p 0.0.0.0:8080:80
I see "PortMapping:null" in your docker inspect.
If you don't map a container port to one on the host, you won't be able to access it.
See as an example "Viewing our web application container".
The documentation does include:
docker run -e WORDPRESS_DB_PASSWORD=password -d --name wordpress --link wordpressdb:mysql -p 127.0.0.2:8080:80 -v "$PWD/":/var/www/html wordpress
Note the -p 127.0.0.2:8080:80 part.
As GHETTO.CHiLD mentions in the comments, the URL to access the service would use the IP of the docker machine: $(docker-machine ip):8080.
As the OP mentions, binding to all interfaces (0.0.0.0) is easier:
-p 8080:80 => $(docker-machine ip):8080 works
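Put together, the working run command would look like this (a sketch; everything except the explicit 0.0.0.0 binding comes from the documentation command quoted above):
docker run -e WORDPRESS_DB_PASSWORD=password -d --name wordpress --link wordpressdb:mysql -p 0.0.0.0:8080:80 wordpress
Then browse to $(docker-machine ip):8080.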
