I have a docker-compose environment set up.
Inside the "lumen" service I'm trying to make a curl request to the service itself.
However, the container can't reach itself via localhost:8000 or lumen:8000.
When I call lumen:8000 from inside the service, it never returns a response and just keeps loading (the curl request is to a different URL, so there is no infinite loop).
In my Laravel controller I found the protocol, host and port to be: http://lumen:8000
It seems like Laravel can't connect to itself, which I really need for my project.
I can connect to the Laravel app from my own computer through localhost, but I need Laravel to call itself.
Error message from the Laravel controller after making a curl request:
Failed to connect to localhost port 8000 after 0 ms: Connection refused
Changing the host to "lumen" just makes the request load indefinitely, no matter which page I try to connect to.
Docker-compose file:
version: "3.5"
services:
lumen:
expose:
- "8000"
ports:
- "8000:8000"
volumes:
- ./server:/var/www/html
- ./server/vendor:/var/www/html/vendor/
build:
context: server
dockerfile: Dockerfile
command: php -S lumen:8000 -t public
restart: always
privileged: true
depends_on:
- database
networks:
- database
frontend:
build:
context: client
dockerfile: Dockerfile
volumes:
- ./client/src:/app/src
ports:
- 3000:3000
stdin_open: true
#restart: always
networks:
- database
# Database Service (Mysql)
database:
image: mysql:latest
container_name: blogmoda_mysql
environment:
MYSQL_DATABASE: blogmoda-app
MYSQL_USER: root
MYSQL_PASSWORD: root
MYSQL_ROOT_PASSWORD: root
command: ['--character-set-server=utf8mb4', '--collation-server=utf8mb4_unicode_ci','--default-authentication-plugin=mysql_native_password']
ports:
- "127.0.0.1:3306:3306"
volumes:
- db-data:/var/lib/mysql
networks:
- database
phpmyadmin:
image: phpmyadmin/phpmyadmin
container_name: dev_phpmyadmin
links:
- database
environment:
PMA_HOST: database
PMA_PORT: 3306
PMA_ARBITRARY: 1
restart: always
depends_on:
- database
ports:
- 9001:80
networks:
- database
volumes:
db-data:
# Networks to be created to facilitate communication between containers
networks:
database:
Server Dockerfile:
FROM php:8.1-fpm-alpine
RUN apk update && apk add bash
RUN apk add chromium
RUN apk add --no-cache zip libzip-dev
RUN docker-php-ext-configure zip
RUN docker-php-ext-install zip
RUN docker-php-ext-install pdo pdo_mysql
RUN docker-php-ext-install pdo_mysql
RUN docker-php-ext-install opcache
WORKDIR /var/www/html/
RUN php -r "readfile('http://getcomposer.org/installer');" | php -- --install-dir=/usr/bin/ --filename=composer
COPY . .
RUN composer install --ignore-platform-req=ext-zip --ignore-platform-reqs
I believe localhost should work. Assuming curl is installed in the lumen container, can you try changing your command in the compose file under the lumen service from
command: php -S lumen:8000 -t public
to a direct curl via bash as
command: sh -c "curl -s localhost:8000"
Then check the logs of the lumen container to see whether or not the curl ran successfully.
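For reference, one way to follow those logs from the host (assuming the compose service is called lumen, as in the file above):
# Tail the lumen service logs to see the curl output
docker-compose logs -f lumen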
Try 0.0.0.0:8000 instead of localhost:8000. It works for localhost too.
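To make that concrete, here is a minimal sketch of the lumen service with the built-in server bound to all interfaces; php -S only listens on the single address you give it, so binding to 0.0.0.0 makes the app answer on localhost:8000 from inside the container as well as on lumen:8000 (an illustrative snippet, not your full service definition):
lumen:
  build:
    context: server
    dockerfile: Dockerfile
  ports:
    - "8000:8000"
  # Bind to all interfaces so the container can curl itself on localhost:8000
  command: php -S 0.0.0.0:8000 -t public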
I'm trying to deploy my Laravel app using Docker. I created the docker-compose.yml file and Dockerfile below.
docker-compose.yml
version: "3.7"
services:
app:
build:
args:
user: sammy
uid: 1000
context: ./
dockerfile: Dockerfile
image: travellist
container_name: travellist-app
restart: unless-stopped
working_dir: /var/www/
volumes:
- ./:/var/www
networks:
- travellist
db:
image: mysql:8.0
container_name: travellist-db
restart: unless-stopped
environment:
MYSQL_DATABASE: '${DB_DATABASE}'
MYSQL_ROOT_PASSWORD: '${DB_PASSWORD}'
MYSQL_PASSWORD: '${DB_PASSWORD}'
MYSQL_ALLOW_EMPTY_PASSWORD: 1
SERVICE_TAGS: dev
SERVICE_NAME: mysql
volumes:
- ./docker-compose/mysql:/docker-entrypoint-initdb.d
networks:
- travellist
nginx:
image: nginx:alpine
container_name: travellist-nginx
restart: unless-stopped
ports:
- 8000:80
volumes:
- ./:/var/www
- ./docker-compose/nginx:/etc/nginx/conf.d/
networks:
- travellist
networks:
travellist:
driver: bridge
Dockerfile
FROM php:7.4-fpm
# Arguments defined in docker-compose.yml
ARG user
ARG uid
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
curl \
libpng-dev \
libonig-dev \
libxml2-dev \
zip \
unzip
# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
# Install PHP extensions
RUN docker-php-ext-install pdo_mysql mbstring exif pcntl bcmath gd
# Get latest Composer
COPY --from=composer:latest /usr/bin/composer /usr/bin/composer
# Create system user to run Composer and Artisan Commands
RUN useradd -G www-data,root -u $uid -d /home/$user $user
RUN mkdir -p /home/$user/.composer && \
chown -R $user:$user /home/$user
# Set working directory
WORKDIR /var/www
USER $user
When I try docker-compose up, all the containers start without any error. And when I run docker-compose ps, it shows this:
Name Command State Ports
------------------------------------------------------------------------------------------------
travellist-app docker-php-entrypoint php-fpm Up 9000/tcp
travellist-db docker-entrypoint.sh mysqld Up 3306/tcp, 33060/tcp
travellist-nginx /docker-entrypoint.sh ngin ... Up 0.0.0.0:8000->80/tcp,:::8000->80/tcp
But when the Laravel application tries to connect to MySQL, it doesn't work. I can't even connect using TablePlus; it shows this:
Lost connection to MySQL server at 'waiting for initial communication packet', system error: 10060
And sometimes I get the error below:
SQLSTATE[HY000] [1045] Access denied for user 'root'@'192.168.48.3' (using password: YES)
How can I fix this?
Since you're networking the services, you only need to expose port 3306/tcp (the default port MySQL serves on). This keeps it unreachable from outside the host but accessible from your nginx and app containers.
I have also added the correct volume to persist data (which I am assuming is what you're trying to do). I moved mysql to latest; feel free to roll back or stick to 8 if you really need to.
db:
image: 'mysql:latest'
restart: 'unless-stopped'
expose:
- '3306'
environment:
- 'MYSQL_RANDOM_ROOT_PASSWORD=true'
- 'MYSQL_DATABASE=${DB_DATABASE}'
- 'MYSQL_USER=${DB_USER}'
- 'MYSQL_PASSWORD=${DB_PASSWORD}'
- 'MYSQL_ALLOW_EMPTY_PASSWORD=true'
volumes:
- 'laravel-database:/var/lib/mysql/'
networks:
- 'travellist'
volumes:
laravel-database:
Further reading can be found on Docker Hub under mysql.
Inside your Laravel .env, you should connect to the container via its DNS name, i.e. the compose service name:
DB_HOST=db
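For reference, a minimal sketch of the matching Laravel .env entries; the database name, user and password below are placeholders and should mirror the MYSQL_* values you pass to the db service:
DB_CONNECTION=mysql
DB_HOST=db
DB_PORT=3306
# Placeholders - use the real values from your compose environment
DB_DATABASE=travellist
DB_USERNAME=travellist_user
DB_PASSWORD=secret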
If you want to connect to this container externally, you'll need to bind the port to the server. Do so by replacing expose with the following:
ports:
- '3306:3306'
I am currently trying to run my website in Docker containers using MySQL and PHP with Apache.
Docker-Compose:
version: '3.7'
services:
mysql:
image: mysql:latest
container_name: mysql
restart: always
environment:
# Database configuration variables
volumes:
- ./data/mysql/database:/var/lib/mysql
webserver:
image: php:7.4.12-apache
depends_on:
- mysql
restart: always
volumes:
- ./data/webserver:/var/www/html/
ports:
- 8888:80
command: bash -c "docker-php-ext-install mysqli && kill -HUP 1"
phpmyadmin:
depends_on:
- mysql
image: phpmyadmin:latest
container_name: phpmyadmin
links:
- mysql:db
restart: always
ports:
- 8889:80
volumes:
- /sessions
The problem began after I added the command block to the webserver container. Without it, the container runs perfectly and I can access the website. But with the command, the container gets stuck in a boot loop and it seems to run the command over and over. At least that's my guess after looking at the log of the webserver container.
However, when I use docker exec -it *webserver* bash and run the installation command directly in the container, it works perfectly. I then restart Apache with kill -HUP 1 and the website works as intended. Does anyone know what the problem is here?
The command: in your compose file replaces the image's default apache2-foreground process, so the container just runs the install command, exits, and restart: always relaunches it over and over. Have you tried doing the install inside a Dockerfile instead?
Something like:
FROM php:7.4.12-apache
# Build the mysqli extension into the image at build time (no runtime install or Apache restart needed)
RUN docker-php-ext-install mysqli
Then your docker-compose could be:
[...]
webserver:
build:
context: .
dockerfile: docker/webserver/Dockerfile
depends_on:
- mysql
restart: always
volumes:
- ./data/webserver:/var/www/html/
ports:
- 8888:80
[...]
I'm writing to clear up a problem I'm having using Docker with my PHP application.
Locally, I run my Docker images (nginx, phpMyAdmin and PHP with my application) and everything works fine.
I use a volume mounted into my PHP app container, which lets me edit files on the fly (without needing to rebuild on every edit).
However, when I push this image to a repository and pull it on another machine, the volume containing my application is not there.
Have you ever faced this issue?
Please find my docker-compose.yml and Dockerfile below:
docker-compose.yml
version: "3.7"
services:
app:
build:
args:
user: web
uid: 1000
context: ./
dockerfile: Dockerfile
image: myblog
container_name: myblog-app
restart: unless-stopped
working_dir: /var/www/
volumes:
- ./:/var/www
networks:
- myblog
db:
image: mysql:5.7
container_name: myblog-db
restart: unless-stopped
environment:
MYSQL_DATABASE: ${DB_DATABASE}
MYSQL_ROOT_PASSWORD: ${DB_PASSWORD}
MYSQL_PASSWORD: ${DB_PASSWORD}
MYSQL_USER: ${DB_USERNAME}
SERVICE_TAGS: dev
SERVICE_NAME: mysql
volumes:
- ./.docker/mysql/database.sql:/docker-entrypoint-initdb.d/init.sql
- ./.docker/mysql/data:/var/lib/mysql
networks:
- myblog
phpmyadmin:
depends_on:
- db
image: phpmyadmin/phpmyadmin
restart: always
ports:
- 8002:80
environment:
PMA_HOST: db
MYSQL_ROOT_PASSWORD: ${DB_PASSWORD}
networks:
- myblog
nginx:
image: nginx:alpine
container_name: myblog-nginx
restart: unless-stopped
ports:
- 8000:80
volumes:
- ./:/var/www
- ./.docker/nginx/conf.d:/etc/nginx/conf.d
networks:
- myblog
networks:
myblog:
driver: bridge
Dockerfile
FROM php:7.3-fpm
# Arguments defined in docker-compose.yml
ARG user
ARG uid
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
curl \
libpng-dev \
libonig-dev \
libxml2-dev \
libzip-dev \
zip \
unzip
# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
# Install PHP extensions
RUN docker-php-ext-install pdo_mysql mbstring exif pcntl bcmath gd
RUN docker-php-ext-install zip
RUN docker-php-ext-enable zip
# Get latest Composer
COPY --from=composer:latest /usr/bin/composer /usr/bin/composer
# Create system user to run Composer and Artisan Commands
RUN useradd -G www-data,root -u $uid -d /home/$user $user
RUN mkdir -p /home/$user/.composer && \
chown -R $user:$user /home/$user
# Set working directory
WORKDIR /var/www
USER $user
Thanks in advance
This is expected behavior: the data inside volumes is not part of the image. Volumes are used to persist data generated by containers, or to pass dynamic data into containers via bind mounts, e.g. configs, credentials or certificates.
https://docs.docker.com/storage/volumes/
Your docker-compose.yml services that mount your local directory via - .:/path/to/dir are only suitable for local development, because you see changes to your application instantly and without having to rebuild the image.
If you want to see your code inside the image on another machine, you need to use COPY in your Dockerfile, rebuild the image and push it every time you change your code!
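As a rough sketch (assuming the application code sits next to the Dockerfile and the same user/uid build args as above), the end of the Dockerfile could bake the code into the image like this:
# ...same base image, dependencies, Composer and user setup as in the Dockerfile above...
WORKDIR /var/www

# Bake the application code into the image so it travels with docker push/pull
COPY . /var/www
RUN chown -R $user:$user /var/www

USER $user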
You will also need to change your docker-compose.yml by adding volumes.
https://docs.docker.com/compose/compose-file/#volumes
Many thanks for your clear reply.
Now I understand what is wrong with my configuration and what I need to do.
For people looking for a way to manage both the development and production environments in the same Dockerfile, you can use a build argument :)
Have you tried using a named volume instead of a path-based volume?
This will let Docker manage the volume more for you and may give you the behavior you want.
https://nickjanetakis.com/blog/docker-tip-28-named-volumes-vs-path-based-volumes
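For illustration, a named volume is declared at the top level of docker-compose.yml and referenced by name instead of a host path (service and volume names here are placeholders):
services:
  app:
    volumes:
      - app-data:/var/www   # Docker-managed named volume instead of a bind mount
volumes:
  app-data: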
I have this LAMP Docker setup (I'm a bit of a Docker newbie):
docker-compose.yml
version: '2'
services:
webserver:
build: .
ports:
- "8080:80"
- "443:443"
volumes:
- ./:/var/www/html
links:
- db
db:
image: mysql:5.6
ports:
- "3306:3306"
volumes:
- /var/lib/mysql
environment:
- MYSQL_ROOT_PASSWORD=adminpasswd
- MYSQL_DATABASE=se_racken_dev
phpmyadmin:
image: phpmyadmin/phpmyadmin:latest
ports:
- "88:80"
links:
- db:db
Dockerfile
FROM php:5.6-apache
RUN apt-get update -y && apt-get install -y libpng-dev curl libcurl4-openssl-dev
RUN docker-php-ext-install pdo pdo_mysql gd curl
RUN a2enmod rewrite
RUN service apache2 restart
I just can't get my local environment to work.
I get this error message at localhost:8088:
SQLSTATE[HY000] [2002] No such file or directory
How can I configure my docker setup to get past this connection problem?
Got some hints here:
Starting with Zend Tutorial - Zend_DB_Adapter throws Exception: "SQLSTATE[HY000] [2002] No such file or directory"
Do I need to install vim and do what they suggest above, or can I solve it in my Docker files?
In the webserver image you should define the connection values such as the database hostname, database name, username and password.
What you can do is specify a .env file, as seen in https://docs.docker.com/compose/env-file/
In your case that should be:
DB_HOSTNAME=db
DB_USER=root
DB_PASSWORD=somepassword
DB_NAME=se_racken_dev
Then in your Dockerfile specify:
FROM php:5.6-apache
ENV MYSQL_HOSTNAME="localhost"
ENV MYSQL_PASSWORD=""
ENV MYSQL_USERNAME="root"
ENV MYSQL_DATABASE_NAME=""
RUN apt-get update -y && apt-get install -y libpng-dev curl libcurl4-openssl-dev
RUN docker-php-ext-install pdo pdo_mysql gd curl
RUN a2enmod rewrite
RUN service apache2 restart
Then modify your docker-compose.yml like that:
version: '2'
services:
webserver:
build: .
ports:
- "8080:80"
- "443:443"
volumes:
- ./:/var/www/html
links:
- db
environment:
- MYSQL_HOSTNAME=$DB_HOSTNAME
- MYSQL_PASSWORD=$DB_PASSWORD
- MYSQL_USERNAME=$DB_USER
- MYSQL_DATABASE_NAME=$DB_NAME
db:
image: mysql:5.6
ports:
- "3306:3306"
volumes:
- /var/lib/mysql
environment:
- MYSQL_ROOT_PASSWORD=$DB_PASSWORD
- MYSQL_DATABASE=$DB_NAME
phpmyadmin:
image: phpmyadmin/phpmyadmin:latest
ports:
- "88:80"
links:
- db:db
Then you can use PHP's getenv to retrieve the values from the environment variables you specified. E.g., in your case getenv('MYSQL_DATABASE_NAME') will return the value "se_racken_dev" as a string. So you need to modify the database configuration file so that it retrieves the database connection credentials using the function mentioned above.
A simple way to remember it: whatever you specify via ENV in your Dockerfile you can retrieve via getenv.
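As a sketch of what that looks like in a PHP config file (the variable names match the ENV declarations above; the PDO DSN is just one common way to use them):
<?php
// Read the connection settings injected via docker-compose / the Dockerfile ENVs
$host     = getenv('MYSQL_HOSTNAME');       // e.g. "db"
$dbname   = getenv('MYSQL_DATABASE_NAME');  // e.g. "se_racken_dev"
$user     = getenv('MYSQL_USERNAME');
$password = getenv('MYSQL_PASSWORD');

// Connect over TCP to the linked "db" container
$pdo = new PDO("mysql:host=$host;dbname=$dbname;charset=utf8", $user, $password);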
I'm trying to set up a very simple MySQL database that uses a data container as its storage, using Fig.sh and Docker.
The code below is self-explanatory:
web:
build: .
command: php -S 0.0.0.0:8000 -t /code
ports:
- "8000:8000"
links:
- db
volumes:
- .:/code
dbdata:
image: busybox
command: /bin/sh
volumes:
- /var/lib/mysql
db:
image: mysql
volumes_from:
- dbdata
environment:
MYSQL_DATABASE: database
MYSQL_ROOT_PASSWORD: rootpasswd
For some reason, if I run fig run --rm dbdata /bin/sh and cd into /var/lib/mysql, the folder is empty. If I run fig run --rm db /bin/sh and cd into /var/lib/mysql, the database files are there.
What am I doing wrong here? And while I'm at it, is this the correct setup, or should I keep the data inside the mysql container?
Thanks.