Cannot use NGINX subdomain inside the Docker container - php

This question may sound silly; it's about using Docker's service names as hostnames, so here's the context:
I am running the following Docker containers: base, php-fpm and nginx. I also have a Laravel project, located in the /api folder of the project root. I also run HAProxy on port 5000 to load-balance requests over the php-fpm containers.
The base container provides a Linux environment from which I can run commands like phpunit and npm, and it has access to the other containers' files through the volumes defined in docker-compose.
The php-fpm container provides the environment for PHP to run.
The nginx container runs the NGINX server, which is configured to serve two websites: the root website (localhost) and the api subdomain (api.localhost). The api. subdomain points to the /api folder within the project root, and the root website (localhost) points to the /frontend folder.
The problem is that, from within the base container, I cannot use curl to access the api.localhost website. I tried to curl nginx using its service name from the docker-compose file (which is nginx):
$ curl http://nginx
and it works, but it answers with code from the /frontend folder. I have no idea how to use the service name to reach api.localhost from within the container.
I have also tried:
$ curl http://api.nginx
$ curl http://api.localhost
Not even localhost answers the curl command:
$ curl http://localhost
Is there any way I can access the subdomain of an NGINX container using the service name as the hostname?

I found out that subdomains do not work well when combining NGINX with a Docker service name as the hostname.
Instead, I changed the structure of my project so that I don't use subdomains when accessing URLs with service names as hostnames.
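For reference, one possible workaround (not the route taken above) relies on the fact that nginx selects the server block by the Host header, so you can connect to the service name while presenting the subdomain as the Host; a sketch, assuming the api.localhost server block exists as described:

```shell
# Connect to the nginx container by its Docker service name, but send
# the Host header of the subdomain so nginx picks the api server block:
curl -H "Host: api.localhost" http://nginx
```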

Related

Docker: communication between web container and php container

I'm trying to dockerize a project that runs with PHP + the Apache HTTP server. I learned that I need one container for the Apache HTTP server and another container for the PHP scripts. I searched a lot but still don't understand how that works. What I know now is that I should resort to Docker networking: as long as the containers are on the same network, they should be able to communicate with each other.
The closest info I found is this, but it uses nginx:
https://www.codementor.io/patrickfohjnr/developing-laravel-applications-with-docker-4pwiwqmh4
quote from original article:
vhost.conf
The vhost.conf file contains standard Nginx configuration that will handle http
requests and proxy traffic to our app container on port 9000. Remember from
earlier, we named our container app in the Docker Compose file and linked it to the web container; here, we can just reference that container by its name and Docker will route traffic to that app container.
My question is: what configuration should I do to make the communication between the PHP container and the web container happen using the Apache HTTP server, like above? What is the rationale behind this? I'm really confused; any information will be much appreciated.
The example that you linked to utilizes two containers:
a container that runs nginx
a container that runs php-fpm
The two containers are then able to connect to each other due to the links directive in the web service in the article's example docker-compose.yml. With this, the two containers can resolve the name web and app to the corresponding docker container. This means that the nginx service in the web container is able to forward any requests it receives to the php-fpm container by simply forwarding to app:9000 which is <hostname>:<port>.
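As a sketch of the layout the article describes (the service names web and app come from the article; the image names and paths here are assumptions, not a tested setup):

```yaml
web:
  image: nginx
  ports:
    - "8080:80"
  links:
    - app
  volumes:
    - ./www/:/var/www/html
app:
  image: php:7-fpm
  volumes:
    - ./www/:/var/www/html
```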
If you are looking to stay with PHP + Apache, there is an official image, php:7-apache, that will do what you're looking for within a single container. Assuming the following project structure:
/ Project root
- /www/ Your PHP files
You can generate a docker-compose.yml as follows within your project root directory:
web:
  image: php:7-apache
  ports:
    - "8080:80"
  volumes:
    - ./www/:/var/www/html
Then, from your project root, run docker-compose up and you will be able to visit your app at localhost:8080.
The above docker-compose.yml will mount the www directory in your project as a volume at /var/www/html within the container which is where Apache will serve files from.
The configuration in this case is Docker Compose. They are using Docker Compose to facilitate the DNS changes in the containers that allow them to resolve names like app to IP addresses. In the example you linked, the web service links to the app service. The name app can now be resolved via DNS to one of the app service containers.
In the article, the web service nginx configuration they use has a host and port pair of app:9000. The app service is listening inside the container on port 9000 and nginx will resolve app to one of the IP addresses for the app service containers.
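The nginx side of that pairing boils down to a fastcgi_pass pointing at the service name; roughly like this (a sketch — the listen port and paths are assumptions):

```nginx
server {
    listen 80;
    root /var/www/html/public;
    index index.php;

    location ~ \.php$ {
        include fastcgi_params;
        # "app" resolves via Docker's DNS/links to the php-fpm container
        fastcgi_pass app:9000;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
```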
The equivalent of this in plain Docker commands would be something like:
App container:
docker run --name app -v "$PWD":/var/www appimage
Web container:
docker run --name web --link app:app -v "$PWD":/var/www webimage
(Note that docker run -v requires an absolute host path, hence "$PWD" rather than ./.)

How to link N PHP containers with 1 nginx container

I'm moving my WordPress farm (10 installs) to a Docker architecture.
I want to have one nginx container and run 10 php-fpm containers (MySQL is on an external server).
The PHP containers are named php_domainname and also have persistent storage.
I want to know how to do this:
a) How do I pass the domain name and container name to the vhost conf file?
b) When I start a php-fpm container:
1) add a vhost.conf file into nginx's conf folder
2) add a volume (persistent storage) to the nginx instance
3) restart the nginx instance
All the nginx+php Docker images I found run both processes in one instance, but I think that having 10+1 nginx instances would overload the machine and break the advantages of Docker.
Thanks
No need to reinvent the wheel; this one has already been solved by docker-proxy, which is also available on Docker Hub.
You can also use Consul or the like with service auto-discovery. This means:
you add a Consul server to your stack
you register all FPM servers as nodes
you register every FPM daemon as a service "fpm" in Consul
For your nginx vhost conf, let's say located at /etc/nginx/conf.d/mywpfarm.conf, you use consul-template (https://github.com/hashicorp/consul-template) to generate the config from a Go template in which you use:
upstream fpm {
{{range service "fpm"}}
    server {{.Address}}:{{.Port}};
{{end}}
}
In the location where you forward .php-based requests to FPM, you now use the upstream above. This way nginx will load-balance across all available servers. If you shut down one FPM host, the config changes automatically and the FPM upstream gets adjusted (that's what consul-template is for: it watches for changes), so you can add new FPM services at any time and scale horizontally very easily.
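The location block forwarding PHP requests would then reference that upstream, roughly like this (a sketch; the fastcgi parameters are typical defaults, not from the original setup):

```nginx
location ~ \.php$ {
    include fastcgi_params;
    # "fpm" is the upstream generated by consul-template
    fastcgi_pass fpm;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
```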

How can I access a second Laravel app from another PC

On this link:
how can i access my laravel app from another pc?
It describes perfectly how to access a Laravel app from another PC on the same network.
Now, my question is:
How do I access another app served on the same PC?
I have a virtual machine serving two apps, app.dev and demo.dev.
Both are accessible inside the VM through a web browser:
app.dev is accessible at http://localhost and http://app.dev
demo.dev is accessible only at http://demo.dev
Outside the VM, only app.dev is accessible, at the IP address 192.168.0.60.
I used this command inside the VM:
sudo php artisan serve --host 192.168.0.60 --port 80
Should I run sudo php artisan serve again? But how? Can anybody help?
Laravel's artisan serve command uses the PHP Built-in web server. Because that is not a full featured web server, it has no concept of virtual hosts, so it can only run one instance of the server mapped to an IP and port pair.
Normally to serve two hosts from the same IP address you'd add in your VM's /etc/hosts file the following mappings:
192.168.0.60 app.dev
192.168.0.60 demo.dev
Now you can run app.dev by running:
php artisan serve --host app.dev --port 80
And it will be available on your host machine at http://app.dev. However, if you try to spin up a second server instance for demo.dev using this:
php artisan serve --host demo.dev --port 80
It won't work and will complain that:
Address already in use
You could get around that by using a different port for your demo.dev app, for example:
php artisan serve --host demo.dev --port 8080
And now you'd be able to access http://demo.dev:8080 for your second app on your host machine.
That being said, I suggest you install a full-featured web server such as Apache or nginx and then set up a virtual host for each application (just make sure to keep the mappings from the /etc/hosts file shown above).
Setting up virtual hosts can be really easy for both server solutions. Below are links to two articles from the Laravel Recipes website that showcase how to do that specifically for Laravel:
Creating an Apache VirtualHost
Creating a Nginx VirtualHost
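As an illustration of how small such a vhost can be, a minimal nginx server block for one of the apps might look like this (a sketch; the root path and the PHP-FPM socket location are assumptions for a typical Laravel install, so adjust them to your setup):

```nginx
server {
    listen 80;
    server_name app.dev;
    root /var/www/app/public;
    index index.php;

    location / {
        # Fall back to Laravel's front controller for pretty URLs
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
```

A second server block with server_name demo.dev (and its own root) then lets both apps share port 80 on the same IP.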

How to set up access to Symfony2 application via web browser?

I have a Linux virtual box, built using Vagrant. I am working on an application built using Symfony2 and wish to use PHP's built in server to host the application. I have got the PHP server running successfully using the command: php bin/console server:start. This tells me:
[OK] Web server listening on http://127.0.0.1:8000
I've specified the following in the Vagrantfile:
config.vm.network "private_network", ip: "192.168.56.109"
I want to access the application via the browser on my host machine which is running on Windows 7.
How can I achieve this?
The default IP for the built-in web server is 127.0.0.1. In order for it to be visible outside your vagrant machine, you need to bind it to 0.0.0.0:
php bin/console server:start 0.0.0.0
Then you can access http://192.168.56.109:8000 and it should work correctly.

Access Laravel Project on Network Machine

I am new to Laravel; I'm basically a Node.js developer. I develop applications in Sails.js and access my Sails.js application from another machine on the same network by using the IP of the machine the project is running on.
For example, I lift the main Sails.js application on machine A using sails lift --port 1334, then go to another machine B on the same network and access it with machine A's IP, like 192.168.10.2:1334.
Now I want to do the same thing with Laravel. I started my server using the Laravel command:
ahsan@ahsan-Inspiron-N5110:~/Desktop/Development/laravell/laravel$ sudo php artisan serve --port 1334
Laravel development server started on http://localhost:1334
The application is running on port 1334, but when I try to access it from machine B on the same network, it just says "unable to connect".
Please let me know what I need to do to access it over the network with machine A's IP address.
Thanks.
Note: I am using Ubuntu on machine A, and I can access the Sails.js application from machine B (which is Windows-based), but not the Laravel application.
php artisan serve --host 0.0.0.0 --port 1334
or
php artisan serve --host 192.168.10.2 --port 1334
In most cases you will need permissions (sudo) to start this. If it's not working, check your firewall (iptables).
