Hello, I have a Docker container running my Laravel app, and I'm using the database on my local machine. I don't have a separate container for my DB. Unfortunately, I cannot connect my Docker container to my MySQL DB.
Here is the Dockerfile:
FROM ubuntu:bionic
RUN apt-get update && apt-get -y upgrade && apt-get install -y software-properties-common tzdata
RUN echo "Asia/Dhaka" > /etc/timezone && rm -f /etc/localtime && dpkg-reconfigure -f noninteractive tzdata
RUN add-apt-repository -y ppa:ondrej/php
RUN apt-get update
RUN apt install -y wget php8.0 php8.0-fpm php8.0-common php8.0-mysql php8.0-xml php8.0-dev php8.0-xmlrpc php8.0-curl php8.0-mbstring php8.0-zip
RUN php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"
RUN php composer-setup.php
RUN mv composer.phar /usr/local/bin/composer
RUN apt-get install -y php8.0-mongodb unzip
RUN apt-get install -y mysql-client
WORKDIR /app
COPY . .
EXPOSE 3306
And here is my docker-compose.yml file:
version: "3"
services:
client:
build: ./
command: tail -f /dev/null
ports:
- 5500:8000
volumes:
- ./:/app
entrypoint: ./init.sh
When I try to access my DB from inside my container, it says:
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
When I try to call my API through Postman, it gives me this:
{
    "status": "failure",
    "data": [],
    "error_code": 400,
    "error_message": "Something went Wrong",
    "errors": "SQLSTATE[HY000] [2002] No such file or directory (SQL: select * from `users` where `email` = example#gmail.com limit 1)"
}
What might be the issue? My Laravel app should be able to reach the database from the container because I have exposed port 3306 on my container. I have configured the .env file, but I still cannot access the database from my application. I enter the container with docker exec and try to connect to the DB, but it cannot connect. Can anyone tell me what I'm missing?
Thanks in advance.
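For reference, the usual way to reach a database running on the Docker host is to point the container at the host's address rather than at localhost: inside the container, localhost is the container itself, and EXPOSE 3306 only documents a port the container itself would serve; it does not forward anything to the host. A minimal sketch, assuming Docker 20.10 or newer for the host-gateway mapping and the service layout from the question:

version: "3"
services:
  client:
    build: ./
    command: tail -f /dev/null
    ports:
      - 5500:8000
    volumes:
      - ./:/app
    entrypoint: ./init.sh
    extra_hosts:
      # On Linux this maps host.docker.internal to the host's gateway IP;
      # Docker Desktop on Mac/Windows provides the name automatically.
      - "host.docker.internal:host-gateway"

Laravel's .env would then use DB_HOST=host.docker.internal instead of 127.0.0.1, and MySQL on the host has to accept TCP connections from the Docker bridge network (bind-address and user grants), which nothing in the Dockerfile can influence.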
Related
I am working on a quiz app. I have installed the Laravel app on Docker, and when I started working on the app I noticed that it takes very long to load: ~7-8 s for a page with a form that has 5 inputs.
I know this waiting time is far too long because a few weeks ago I installed a Laravel application on Docker that ran very quickly, but I can no longer use that one. I suspect the cause is the .yml file and the Dockerfile, but I don't know what the installation problems would be.
I cannot disable the "Use the WSL 2 based engine" option in the Docker UI (Windows Home can only run the WSL 2 backend) because I run Windows 10 Home.
docker-compose.yml
version: '3'
services:
  mariadb:
    image: mariadb
    restart: always
    ports:
      - 3375:3306
    environment:
      TZ: "Europe/Bucharest"
      MARIADB_ALLOW_EMPTY_PASSWORD: "no"
      MARIADB_ROOT_PASSWORD: "user#pass"
      MARIADB_ROOT_HOST: "%"
      MARIADB_USER: 'user'
      MARIADB_PASSWORD: 'pass'
      MARIADB_DATABASE: 'db'
    networks:
      - sail
    volumes:
      - ./docker-config/server_bd:/var/lib/mysql
  app:
    build: ./docker-config
    container_name: app
    ports:
      - 8000:80
      - 4430:443
    volumes:
      - "./:/var/www/html"
    tty: true
    networks:
      - sail
    links:
      - mariadb
    depends_on:
      - mariadb
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    restart: always
    networks:
      - sail
    links:
      - "mariadb:db"
    depends_on:
      - mariadb
    environment:
      UPLOAD_LIMIT: 1024M
    ports:
      - 3971:80
networks:
  sail:
    driver: bridge
volumes:
  data:
    driver: local
Dockerfile
FROM ubuntu:20.04
EXPOSE 80
WORKDIR /var/www/html
ENV DEBIAN_FRONTEND noninteractive
ENV TZ=Europe/Bucharest
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN apt-get update -y
RUN apt-get upgrade -y
RUN apt install -y lsb-release
RUN apt-get clean all
RUN apt-get install ca-certificates apt-transport-https -y
RUN apt-get install software-properties-common -y
# Apache Server
RUN apt-get -y install apache2
RUN a2enmod rewrite
RUN service apache2 restart
# SETUP SSL
RUN echo "ServerName localhost" >> /etc/apache2/apache2.conf &&\
a2enmod rewrite &&\
a2enmod ssl
COPY cert/certificate.crt /etc/ssl/certificate.crt
COPY cert/private.key /etc/ssl/private.key
COPY cert/ca_bundle.crt /etc/ssl/ca_bundle.crt
COPY 000-default.conf /etc/apache2/sites-available/000-default.conf
RUN service apache2 restart
RUN apt-get install -y wget
RUN apt-get install -y nano
RUN apt-get update -y
RUN apt-get install -y apt-transport-https
# PHP8
RUN add-apt-repository ppa:ondrej/php
RUN apt-get install -y php8.1
RUN apt-get install -y php8.1-fpm
RUN apt-get install -y libapache2-mod-php
RUN apt-get install -y libapache2-mod-fcgid
RUN apt-get install -y curl
RUN apt-get install -y php-curl
RUN apt-get install -y php-dev
RUN apt-get install -y php-gd
RUN apt-get install -y php-mbstring
RUN apt-get install -y php-zip
RUN apt-get install -y php-xml
RUN apt-get install -y php-soap
RUN apt-get install -y php-common
RUN apt-get install -y php-tokenizer
RUN apt-get install -y unzip
RUN apt-get install -y php-bcmath
RUN apt-get install -y php-mysql
# Install npm
RUN apt-get install -y npm
RUN npm cache clean -f
RUN npm install -g n
RUN n stable
RUN service apache2 restart
# COMPOSER
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN echo " extension = php_mysqli.so" >> /etc/php/8.1/apache2/php.ini
RUN service apache2 restart
CMD ( cron -f -l 8 & ) && apachectl -D FOREGROUND
The DB is working fine, at a good speed.
The laptop's power is not the issue, because the previous app ran smoothly on the same machine with no problems.
What I noticed is that with WSL 2 the filesystem is the issue. If your project is in a folder on the Windows file system, it will run much more slowly than if you put the entire project on the Linux file system. Try moving your project into the Linux filesystem and see if that speeds it up.
For example, if your project is located in a Windows folder like
C:\Projects\FooBar
move it to the Linux filesystem under WSL 2, to something like:
/home/YourName/projects/FooBar
After you have done this, rebuild the container from the new location.
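A minimal sketch of the move, assuming the default /mnt/c mount for the Windows drive and the example paths above (adjust them to your setup):

# Run inside the WSL 2 distribution (e.g. Ubuntu).
# /mnt/c is where WSL 2 mounts the Windows C: drive by default.
mkdir -p ~/projects
cp -r /mnt/c/Projects/FooBar ~/projects/FooBar
cd ~/projects/FooBar
# Rebuild and start the containers from the new location.
docker-compose build
docker-compose up -d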
I want to download a public GitHub project and run a project file through PHP's built-in web server, inside Docker.
This is my file so far:
FROM php:7.2-cli
RUN apt-get update && \
apt-get upgrade -y && \
apt-get install -y git
RUN git clone https://github.com/mygit src
WORKDIR src
CMD ["php", "-S", "0.0.0.0:80", "-t", "/src/src/examples/image.php"]
The process does not show up in docker ps when I run:
docker build -t myimage .
docker run -d -p 8080:8080 myimage
Try with this Dockerfile:
FROM php:7.2-cli
RUN apt-get update && \
apt-get upgrade -y && \
apt-get install -y git
RUN git clone https://github.com/douma/langtons-ant src
WORKDIR src
ENTRYPOINT ["php"]
CMD ["-S", "0.0.0.0:8080","/src/src/examples/image.php"]
Note: I don't know PHP, but from your GitHub project's README I think you do not need the -t argument (it sets the document root, not the script to run).
I also added an ENTRYPOINT instruction to make the Dockerfile clearer; see the Docker documentation for the differences between ENTRYPOINT and CMD.
The build and run commands are the same as the ones you posted:
docker build -t myimage .
docker run -d -p 8080:8080 myimage
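If the container still does not show up in docker ps, it usually exited right away and the PHP server's output says why; a quick way to check (the container name here is just an example):

# Give the container a name so the logs are easy to find
docker run -d --name langtons -p 8080:8080 myimage
# Is it still running?
docker ps --filter name=langtons
# If it exited, the reason is in the logs
docker logs langtons
# If it is running, the page should answer on the mapped port
curl -I http://localhost:8080/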
I am not able to run make install from the Dockerfile and am not sure where I am going wrong.
I get the error message below when docker-compose build is run on Ubuntu:
Step 6/10 : RUN make install
---> Running in c8f67e8de3b5
make: *** No rule to make target 'install'. Stop.
ERROR: Service 'web' failed to build:
The command '/bin/sh -c make install' returned a non-zero code: 2
I have a structure where server is the Laravel project folder and there is a docker-compose.yml file in the root.
docker-compose.yml file
version: '3'
services:
  web:
    image: api_server
    container_name: api_server
    build:
      context: ./server/release
    ports:
      - 9000:80
    volumes:
      - ./server:/var/www/app
Dockerfile
FROM php:7.3.1-apache-stretch
RUN apt-get update -yqq && \
apt-get install -y apt-utils zip unzip && \
apt-get install -y nano && \
apt-get install -y libzip-dev libpq-dev libpng-dev&& \
a2enmod rewrite && \
docker-php-ext-install gd mbstring && \
docker-php-ext-install pdo_pgsql && \
docker-php-ext-install pgsql && \
docker-php-ext-configure zip --with-libzip && \
docker-php-ext-install zip && \
rm -rf /var/lib/apt/lists/*
RUN php -r "readfile('http://getcomposer.org/installer');" | php -- --install-dir=/usr/bin/ --filename=composer
COPY default.conf /etc/apache2/sites-enabled/000-default.conf
WORKDIR /var/www/app
RUN make install
RUN chown -R www-data:www-data /var/www/app/
RUN chmod -R 777 /var/www/app/storage
CMD ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]
EXPOSE 80
server/Makefile
install:
	# some other commands
	composer install

refresh:
	# For testing
	php artisan migrate:refresh
	php artisan db:seed

run:
	php artisan migrate:status
	php artisan serve
Basically, if I comment out the RUN make install and RUN chmod -R 777 /var/www/app/storage commands in the Dockerfile, it works. To test, I get into the container and run those two commands manually, and then it works and shows the default Laravel page.
So I suspect that when those two commands run from the Dockerfile, the source code isn't available yet at that point.
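That is indeed the usual cause: the ./server bind mount only exists when the container runs, so /var/www/app is empty while the image is being built. One possible fix, sketched below, is to copy the source into the image before RUN make install; it assumes the compose build context is widened so the Laravel code and Makefile are inside it (for example context: ./server with dockerfile: release/Dockerfile):

FROM php:7.3.1-apache-stretch
# ... same apt / docker-php-ext-install / composer / default.conf steps as above ...
WORKDIR /var/www/app
# Copy the source into the image so it exists at build time,
# instead of relying only on the run-time bind mount.
COPY . /var/www/app
RUN make install
RUN chown -R www-data:www-data /var/www/app/ && \
    chmod -R 777 /var/www/app/storage

Note that the bind mount will still shadow the image's copy at run time, so anything composer install produced during the build (e.g. vendor/) will not be visible unless it is also present on the host or composer install is rerun from an entrypoint script.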
I'm changing infrastructure on AWS and I want to use Docker (ECS) with Fargate. My Docker image is based on Ubuntu and I install everything I need in it. I'm running Laravel 5.6 on NGINX with PHP 7.2. My Docker container works on my local machine and when I run ECS on EC2, but when I switch to Fargate it returns an NGINX 500 error. I did some tests and I know PHP is running; the error only happens once my Laravel app is installed.
Since I cannot access the Fargate machine, I don't know how to debug. I tried to connect NGINX to Loggly, but that requires rsyslog, and since I'm inside Docker it cannot access the host's kernel log. When I install it and try to run it, it returns:
rsyslogd: imklog: cannot open kernel log (/proc/kmsg): Operation not permitted
Here is my Dockerfile:
FROM ubuntu:latest
ENV BACKEND_PATH=/code/Backend
ENV FRONTEND_PATH=/code/Frontend
## Update
RUN apt-get update -y
## Upgrade
RUN apt-get install -y software-properties-common
RUN add-apt-repository -y ppa:certbot/certbot
RUN apt-get update -y
RUN apt-get upgrade -y
RUN apt-get dist-upgrade -y
RUN apt-get autoremove -y
RUN apt-get update -y
## Nano
RUN apt-get install -y nano
## Timezone
RUN echo "America/Sao_Paulo" > /etc/timezone && \
apt-get install -y tzdata && \
rm /etc/localtime && \
ln -snf /usr/share/zoneinfo/America/Sao_Paulo /etc/localtime && \
dpkg-reconfigure -f noninteractive tzdata && \
apt-get clean
## Git
RUN apt-get install -y git
## NGINX
RUN apt-get install -y nginx
COPY ./nginx/app/sites-available /etc/nginx/sites-available
COPY ./nginx/app/sites-available /etc/nginx/sites-enabled
COPY ./nginx/sites /etc/nginx/sites
COPY ./nginx/ssl /ssl
## PHP
RUN apt-get install -y php-cli php-fpm php-curl php-mbstring
COPY ./php/php.ini /usr/local/etc/php
# Install Composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
# Install libs
RUN apt-get install -y php-zip php-mysql php-gd pngquant gifsicle jpegoptim libicu-dev g++ php-intl php-xml
## Crontab
RUN apt-get install -y cron
COPY crontab newcrontab
RUN crontab newcrontab
RUN rm newcrontab
## Supervisor
RUN apt-get install -y supervisor
COPY ./supervisord /etc/supervisor/conf.d
## Certbot
RUN apt-get install -y python-certbot-nginx
## Install apps
COPY ./code/Backend /code/Backend
COPY ./code/Frontend/dist /code/Frontend/dist
RUN cd ${BACKEND_PATH} && chmod +x composer.phar && ./composer.phar self-update && php composer.phar install
RUN chmod -Rf 777 ${BACKEND_PATH}/storage
RUN chmod -Rf 777 ${BACKEND_PATH}/resources
RUN php ${BACKEND_PATH}/artisan config:clear
RUN php ${BACKEND_PATH}/artisan passport:keys
## Run!
EXPOSE 80 443
RUN service php7.2-fpm start
CMD ["/usr/bin/supervisord"]
I think this error has something to do with permissions, but without an error message it's almost impossible to know what's going on... Does anyone have any idea how I can find this out?
I figured it out; it was a really silly mistake, actually. When I created the Fargate configuration, I used a Security Group without permission to access some AWS components, so the application was unable to boot.
Check out this answer on ServerFault: https://serverfault.com/questions/691048/kernel-log-stays-empty-rsyslogd-imklog-cannot-open-kernel-log-proc-kmsg
Try running the Fargate task with the --privileged flag. You can set this flag per container in the task definition in the AWS console; it's in the SECURITY section near the end of the container definition. The full reference for container definitions is in the AWS documentation.
I have been experimenting with Docker for a few days now and have grown to like it. However, there are a few things that still elude me. Here is what I have thus far
Create a low-footprint Ubuntu 14.04 image
# I got this from a post on this forum
#!/bin/bash
docker rm ubuntu-essential-multilayer 2>/dev/null
set -ve
docker build -t textlab/ubuntu-essential-multilayer - <<'EOF'
FROM ubuntu:14.04
# Make an exception for apt: it gets deselected, even though it probably shouldn't.
RUN dpkg --clear-selections && echo apt install |dpkg --set-selections && \
SUDO_FORCE_REMOVE=yes DEBIAN_FRONTEND=noninteractive apt-get --purge -y dselect-upgrade && \
dpkg-query -Wf '${db:Status-Abbrev}\t${binary:Package}\n' |grep '^.i' |awk -F'\t' '{print $2 " install"}' |dpkg --set-selections && \
rm -r /var/cache/apt /var/lib/apt/lists
EOF
TMP_FILE="`mktemp -t ubuntu-essential-XXXXXXX.tar.gz`"
docker run --rm -i textlab/ubuntu-essential-multilayer tar zpc --exclude=/etc/hostname \
--exclude=/etc/resolv.conf --exclude=/etc/hosts --one-file-system / >"$TMP_FILE"
docker rmi textlab/ubuntu-essential-multilayer
docker import - textlab/ubuntu-essential-nocmd <"$TMP_FILE"
docker build -t textlab/ubuntu-essential - <<'EOF'
FROM textlab/ubuntu-essential-nocmd
CMD ["/bin/bash"]
EOF
docker rmi textlab/ubuntu-essential-nocmd
rm -f "$TMP_FILE"
Create a Dockerfile for an Apache image
FROM textlab/ubuntu-essential
RUN apt-get update && apt-get -y install apache2 && apt-get clean
RUN a2enmod ssl
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2
EXPOSE 80
EXPOSE 443
CMD ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]
docker build -t droidos/apache .
Create a Dockerfile for PHP5
FROM droidos/apache
RUN apt-get update && apt-get -y --reinstall install php5 php5-redis php5-memcached php5-curl libssh2-php php5-mysqlnd php5-mcrypt && apt-get clean
RUN php5enmod mcrypt
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2
EXPOSE 80
EXPOSE 443
CMD ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]
docker build -t droidos/php5 .
Create a Dockerfile for memcached and build the image
FROM textlab/ubuntu-essential
# Install packages
RUN DEBIAN_FRONTEND=noninteractive apt-get update
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install memcached
# memcached public variable
EXPOSE 11211
CMD ["/usr/bin/memcached", "-u", "memcache", "-v"]
docker build -t droidos/memcached .
Fire up a Docker container with memcached
docker run -d -P --name memcached droidos/memcached
Fire up a Docker container with Apache and link it to the memcached container created earlier
docker run -d --name apache --link memcached:memcached -v /var/droidos/site:/var/www/html -v /var/droidos/logs:/var/log/apache2 -p 8080:80 droidos/php5
Browse to example.com:8080
Everything seems ok
Create a memcached test script in /var/droidos/site
<?php
error_reporting(E_ALL);
header('Content-type:text/plain');
$mc = new Memcached();
$mc->addServer("localhost", 11211);
$flag = $mc->add('name','droidos');
echo ($flag)?'y':'n';
echo $mc->getResultCode();
?>
This script returns n47, implying that the memcached server is disabled.
Either my linking is incorrect, or memcached has not been started, or the memcached container's port is not visible in the Apache container. Getting a shell in the memcached container with
docker exec -it <container-id> /bin/bash
and running
service memcached status
indicates that the service is not in fact running. So I start it
service memcached start
verify it has started and run the script above again. No joy - I still get an n47 reply rather than the y0 I would like to see. Clearly, I am missing a step somewhere here. I'd be most obliged to anyone who might be able to tell me what that might be.
I think it fails because you're trying to access memcached from the apache container by connecting to the apache container's own localhost, while the memcached container is made accessible to the apache one at a different IP address.
This is the line I think is wrong:
$mc->addServer("localhost", 11211);
When you link containers, Docker adds a host entry for the source container to the /etc/hosts file (see the docs about linking).
Therefore you should be able to connect from the apache container to the memcached one using this PHP command:
$mc->addServer("memcached", 11211);
If it doesn't work, check that you can connect to the memcached service from the memcached container itself.
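A quick way to check both ends, assuming the container names from the question (bash's /dev/tcp is used so nothing extra needs to be installed):

# Does the apache container have the link's hosts entry?
docker exec -it apache getent hosts memcached
# Can the apache container open a TCP connection to memcached:11211?
docker exec -it apache bash -c 'echo > /dev/tcp/memcached/11211 && echo "port 11211 reachable"'
# Is memcached actually listening inside its own container?
docker exec -it memcached bash -c 'echo > /dev/tcp/localhost/11211 && echo "memcached is listening"'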