Docker mysql environment - php

I have a Dockerfile built FROM php:alpine, and I'm trying to add MySQL to the build.
FROM php:alpine
COPY test-data/ /var/www/
RUN apk add --update --no-cache \
mysql
# Composer
RUN curl -s https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin/ --filename=composer
ENV COMPOSER_ALLOW_SUPERUSER=1
WORKDIR /var/www
My problem: after the image builds successfully, I run the container with the MySQL environment overrides, but I can't log in to MySQL inside the container.
$ docker run -e MYSQL_DATABASE=homestead -e MYSQL_USER=homestead -e MYSQL_PASSWORD=secret -e MYSQL_ROOT_PASSWORD=secret -ti --rm idecardo /bin/sh
Testing mysql login fails
$ mysql -uroot -p # with password "secret"

Since you are a newbie: always try to learn by copying ready, working code and breaking down what is being done in that code.
For Docker:
You can see the docker repository for mysql - https://hub.docker.com/_/mysql/
Under the description section you will see links to the Dockerfiles for the different MySQL versions.
Take one of them as a source of inspiration, for example the 8.0/Dockerfile: https://github.com/docker-library/mysql/blob/fc3e856313423dc2d6a8d74cfd6b678582090fc7/8.0/Dockerfile
Notice that after the mysql installation instructions in that Dockerfile there are ENTRYPOINT and CMD instructions.
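They look roughly like this; docker-entrypoint.sh is the script that consumes the MYSQL_* environment variables, initializes the data directory and users, and then starts the server:
ENTRYPOINT ["docker-entrypoint.sh"]
EXPOSE 3306
CMD ["mysqld"]
Your image only installs the mysql package and never starts mysqld, which is why there is no server to log in to.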
In general:
Since you want php and mysql to work from docker, my advice is to look into docker-compose. Docker containers can be run in a variety of ways, and docker-compose allows you to launch several docker containers at once and share folders between them. In that scenario you would launch a separate mysql container and a separate php container, share host data folders between them, and run your code; see the sketch below.
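A minimal docker-compose.yml for that scenario could look like this (a sketch; the service names and the mysql image tag are assumptions, the environment values are the ones from your docker run command):
version: "3"
services:
  db:
    image: mysql:8.0
    environment:
      MYSQL_DATABASE: homestead
      MYSQL_USER: homestead
      MYSQL_PASSWORD: secret
      MYSQL_ROOT_PASSWORD: secret
  app:
    build: .                  # your php:alpine based Dockerfile
    volumes:
      - ./test-data:/var/www
    depends_on:
      - db                    # reachable from the app container as host "db"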
Also, watch some video tutorials online - they explain in detail the basics of what docker is all about and how it works.

Related

Running PHP docker container with MYSQL container

I need to make my PHP docker container connect to my MySQL container.
The steps I'm trying to execute are:
1: Create a docker network:
docker network create network-pfa
2: Run the mysql container:
docker run --name mysql --network=network-pfa -d -e MYSQL_ALLOW_EMPTY_PASSWORD=YES mysql:5.7
3: Run the php container:
docker run --name php --network=network-pfa -it php bash
After these steps, inside the php container's bash, I try to connect to mysql using the command:
mysql -u root
Bash error: bash: mysql: command not found
I have been studying docker for just a week and am trying to make a simple system using php, mysql and nginx.
My goal is to connect the three containers together and create a php file that reads some data in mysql and then using nginx to view my php result in my computer's browser (outside the containers).
Note: I am not using docker-compose yet.
I think the mysql client is missing from your image; the official php image does not ship it, so you need to install it in your Dockerfile.
For Alpine:
RUN apk add --no-cache mysql-client
For Debian-based:
RUN apt-get update && apt-get install -y mysql-client
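Also note that, with the client installed, the server is not on localhost inside the php container. On a user-defined network such as network-pfa, Docker's embedded DNS resolves the other container's name, so the login command from the php container becomes:
mysql -h mysql -u root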

ORA-28547 with php-fpm in docker container

I have a working php-fpm docker container acting as the PHP backend to an nginx frontend. By working, I mean that it renders phpinfo output in the browser as expected.
My php-fpm container was produced by php-fpm-7.4 prod of the devilbox docker repo. It has OCI8 enabled.
The issue: I keep getting ORA-28547 when trying oci_connect
What I have done:
1: Added /usr/lib/oracle/client64/lib to a file inside ld.so.conf.d and ran ldconfig -v.
2: Restarted the docker container.
3: Now phpinfo shows ORACLE_HOME=/usr/lib/oracle/client64/lib.
4: Added tnsnames.ora to /usr/lib/oracle/client64/lib/network/admin (there is a README.md file inside that folder that even tells you to do that).
5: Restarted the docker container again.
6: oci_connect still fails with the same error.
What am I missing?
Thank you very much for any pointers, I think I have browsed to the end of the internet and back without finding a solution yet.
----SOLUTION: reinstall the Instant Client and relink the libraries (ldconfig) so the new Instant Client libraries are used, via a modified Dockerfile that does this when the image is built.
I modified the php-fpm Dockerfile to add new Instant Client files instead of the ones provided by the original file, since I was not able to make it work with those. I rebuilt the image a few times (docker-compose up --build), and this is the file that does the trick:
FROM devilbox/php-fpm:7.4-work
#instantclient.conf content: /opt/instantclient
RUN echo "/opt/instantclient" >/etc/ld.so.conf.d/instantclient.conf
WORKDIR /opt
RUN wget https://download.oracle.com/otn_software/linux/instantclient/19800/instantclient-sdk-linux.x64-19.8.0.0.0dbru.zip
RUN wget https://download.oracle.com/otn_software/linux/instantclient/19800/instantclient-sqlplus-linux.x64-19.8.0.0.0dbru.zip
RUN wget https://download.oracle.com/otn_software/linux/instantclient/19800/instantclient-basic-linux.x64-19.8.0.0.0dbru.zip
RUN unzip instantclient-sdk-linux.x64-19.8.0.0.0dbru.zip
RUN unzip instantclient-sqlplus-linux.x64-19.8.0.0.0dbru.zip
RUN unzip instantclient-basic-linux.x64-19.8.0.0.0dbru.zip
RUN mv instantclient_19_8 instantclient
ADD tnsnames.ora /opt/instantclient/network/admin
RUN ldconfig -v
CMD ["php-fpm"]
expose 9000
# Insert the following into the .bash_profile or .profile of the user starting php-fpm
export ORACLE_HOME=/usr/lib/oracle/client64
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ORACLE_HOME/lib
export PATH=$PATH:$ORACLE_HOME/bin
export TNS_ADMIN=$ORACLE_HOME/network/admin
# Test pinging the remote DB that PHP will connect to
tnsping <tns-name of remote DB - i.e. db12c.world>
# then restart the PHP engine
Can you please check
https://github.com/caffeinalab/php-fpm-oci8/blob/master/Dockerfile
which seems to create a php-fpm-oci8 docker image.
the "wget" for
wget -qO- https://raw.githubusercontent.com/caffeinalab/php-fpm-oci8/master/oracle/instantclient-basic-linux.x64-12.2.0.1.0.zip | bsdtar -xvf- -C /usr/local &&
wget -qO- https://raw.githubusercontent.com/caffeinalab/php-fpm-oci8/master/oracle/instantclient-sdk-linux.x64-12.2.0.1.0.zip | bsdtar -xvf- -C /usr/local &&
wget -qO- https://raw.githubusercontent.com/caffeinalab/php-fpm-oci8/master/oracle/instantclient-sqlplus-linux.x64-12.2.0.1.0.zip | bsdtar -xvf- -C /usr/local && \
can be dropped when you place downloaded instant client files into local host dir
/usr/local
and extract them - resulting in
/usr/local/instantclient_12_2
or 18, 19c equivalents
the 4 "ln" commands have to be adjusted to reflect the local host instantclient dir
the tnsnames.ora for the Instant Client can be made available from the host via a volume (bind mount).
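For example, with the Instant Client extracted to /usr/local/instantclient_12_2 as above, a bind mount along these lines (the host path is a placeholder):
docker run -v /host/path/tnsnames.ora:/usr/local/instantclient_12_2/network/admin/tnsnames.ora ...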
-------------FINAL SOLUTION------------ (it was not network related: I had made a couple of changes to the files and also tried a different database, all at the same time, which made me think the different database was what fixed the issue).
After much trial and error, I came up with a Dockerfile that creates the correct configuration of files and connects to the database without any issues: it is the one shown in the SOLUTION above, building php-fpm 7.4 from the devilbox image.
That's why I suggested using tnsping. Unfortunately it is not included in any of the Instant Client packages, which is a pity, so you have to pick it up from a regular client with matching OS, bitness and Oracle release. As a workaround you could place the SQL*Plus package files into the container and try to connect with a dummy user like
sqlplus foo/foo@<ip>:<port>/<dbname>
which should generate an error that tells you what is reachable:
user/password not matching - ORA-01017, i.e. DB & listener running
DB down - ORA-01034, i.e. listener running but database down
listener down - no response, or TNS errors
I got it! It was a firewall issue. I launched a tcpdump capture session and there was nothing wrong with php-fpm, oci8 or the Instant Client libraries: the traffic was initiated, but there was no response from the database. I made it work against a different database where this box has no firewall issues.
I will now try rebuilding the docker image so I can see what, if anything, I have to add manually.
That was incorrect (the firewall was not the origin of the problem). Rebuilding the Dockerfile showed me where I had it wrong. See the original question for the solution.

Docker - Installing Composer /bin/sh: 1: php: not found curl: (23) Failed writing body (0 != 16133)

Hello, I am creating a Dockerfile for my Laravel project. This is it so far:
FROM php:7.2-cli
FROM nginx
FROM node:8
MAINTAINER zachary tyhacz
# does not install mysql
# mysql is outside container
RUN apt-get update -y && apt-get install -y openssl zip unzip git
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
WORKDIR /var/www/public
COPY . /var/www/public
COPY nginx.conf /etc/nginx/sites-available/domain
RUN ln -s /etc/nginx/sites-available /etc/nginx/sites-enabled
RUN npm install
RUN composer install
# sets up the database
CMD php artisan migrate:fresh --seed
# resets configuration files
CMD php artisan config:cache
# refreshes routes
CMD php artisan route:cache
# enables serve
CMD php artisan serve --host=0.0.0.0 --port=436
EXPOSE 8080/udp
EXPOSE 8080/tcp
EXPOSE 80/udp
EXPOSE 80/tcp
EXPOSE 436/tcp
EXPOSE 436/udp
Upon running docker build to tag and create an image, it gets to this instruction:
Step 6/22 : RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
and it throws this error and stops.
/bin/sh: 1: php: not found
curl: (23) Failed writing body (0 != 16133)
I am not sure what is going wrong. I think it could be a permissions issue or a directory issue.
Thanks for any suggestions helping me out.
Also, the reference I used to create this Dockerfile is:
https://buddy.works/guides/laravel-in-docker
You can only have one effective base image in a Dockerfile. FROM tells Docker what to start with; when you give it several FROMs, each starts a new build stage, and only the last one (here node:8) produces the final image. So PHP is never installed.
To fix this issue, pick a single base image (for example php) and install your other dependencies on top of it, e.g. manually install nginx and node on top of the php image using RUN; a sketch follows below. You may also want to consider building a separate nginx image: it is considered good practice to separate your services into different images when possible.
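A minimal sketch of the single-base-image approach (the package names are assumptions and vary by distro release; on the Debian base of php:7.2-cli you may need the NodeSource setup script to get a current Node):
FROM php:7.2-cli
# system packages plus node, installed on top of the php base image
RUN apt-get update && apt-get install -y openssl zip unzip git nodejs npm
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
WORKDIR /var/www/public
COPY . /var/www/public
RUN npm install && composer install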
Also, multiple CMD entries do not stack; only the last one takes effect. Instead, use a small startup shell script. For example:
#!/usr/bin/env bash
set -e
php artisan migrate:fresh --seed
php artisan config:cache
php artisan route:cache
exec php artisan serve --host=0.0.0.0 --port=436
Put that in a script called start.sh or something like that, make it executable, then in your Dockerfile use
CMD ["./start.sh"]
Then, you'll probably also want to start a second container for your nginx service. You could do this manually using docker run, but I suggest checking out docker-compose. It helps you build and run multiple containers at once.

Running a package manager inside Docker

I've built an image for PHP development, and it became clear to me that I hadn't really thought about how to access the tools I need for everyday development. For example Composer, the package manager for PHP: I need to run it whenever composer.json is updated. It seemed worth installing those tools inside the same image, but then I don't have a way to access them. So, I can:
Create a separate image for Composer and run it in a different container.
Install Composer on my host machine.
I'd like to avoid option 2), but then, does it make sense to have a setup like 1)? How did you solve this issue?
Unless you have some quite specific requirements, there is a third option:
Connect to the container using docker exec command:
docker exec -it CONTAINER-NAME/ID COMMAND [ARG...]
Here is the example:
1: Create your application:
echo "<?php phpinfo();" > index.php
2: Start container:
docker run -it --rm --name my-apache-php-app -p 80:80 -v "$PWD":/var/www/html php:5.6-apache
3: Open another terminal window and exec the required commands inside the running container (note that a pipeline has to be wrapped in a shell, otherwise the part after the pipe would run on the host):
docker exec -it my-apache-php-app bash -c "curl -sS https://getcomposer.org/installer | php"
docker exec -it my-apache-php-app ls
If you need shell inside running container - run:
docker exec -it my-apache-php-app bash
That's it!
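As an aside, for option 1) from the question there is also an official composer image on Docker Hub, which works well as a throwaway container that mounts your project (a sketch):
docker run --rm -it -v "$PWD":/app composer install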

Project layout with vagrant, docker and git

So I recently discovered Docker and Vagrant, and I'm starting a new PHP project in which I want to use both:
Vagrant in order to have an interchangeable environment that all the developers can use.
Docker for production, but also inside the vagrant machine so the development environment resembles the production one as closely as possible.
The first approach is to have all the definition files together with the source code in the same repository with this layout:
/docker
/machine1-web_server
/Dockerfile
/machine2-db_server
/Dockerfile
/machineX
/Dockerfile
/src
/app
/public
/vendors
/vagrant
/Vagrantfile
So the vagrant machine, on provision, runs all the docker "machines" and sets up the databases and source code properly.
Is this a good approach? I'm still trying to figure out how this will work in terms of deployment to production.
Is this a good approach?
Yes; at least it has been working for me for a few months now.
The difference is that I also have a docker-compose.yml file.
In my Vagrantfile there is a first provisioning section that installs docker, pip and docker-compose:
config.vm.provision "shell", inline: <<-SCRIPT
  if ! type docker >/dev/null; then
    echo -e "\n\n========= installing docker..."
    curl -sL https://get.docker.io/ | sh
    echo -e "\n\n========= installing docker bash completion..."
    curl -sL https://raw.githubusercontent.com/dotcloud/docker/master/contrib/completion/bash/docker > /etc/bash_completion.d/docker
    adduser vagrant docker
  fi
  if ! type pip >/dev/null; then
    echo -e "\n\n========= installing pip..."
    curl -sk https://bootstrap.pypa.io/get-pip.py | python
  fi
  if ! type docker-compose >/dev/null; then
    echo -e "\n\n========= installing docker-compose..."
    pip install -U docker-compose
    echo -e "\n\n========= installing docker-compose command completion..."
    curl -sL https://raw.githubusercontent.com/docker/compose/$(docker-compose --version | awk 'NR==1{print $NF}')/contrib/completion/bash/docker-compose > /etc/bash_completion.d/docker-compose
  fi
SCRIPT
and finally a provisioning section that fires docker-compose:
config.vm.provision "shell", inline: <<-SCRIPT
  cd /vagrant
  docker-compose up -d
SCRIPT
There are other ways to build and start docker containers from vagrant, but using docker-compose allows me to externalize any docker specificities out of my Vagrantfile. As a result this Vagrantfile can be reused for other projects without changes ; you would just have to provide a different docker-compose.yml file.
Another thing I do differently is to put the Vagrantfile at the root of the project (and not in a vagrant directory), as that is the place where humans and tools (some IDEs) expect to find it. PyCharm does; PhpStorm probably does too.
I also put my docker-compose.yml file at the root of my projects.
In the end, for development I just go to my project directory and fire up Vagrant, which tells docker-compose to (build if necessary, then) run the docker containers.
I'm still trying to figure out how this will work in terms of deployment to production.
For deploying to production, a common practice is to provide your docker images to the ops team by publishing them on a private docker registry. You can either host such a registry on your own infrastructure or use online services that provide one, such as Docker Hub.
Also provide the ops team a docker-compose.yml file that will define how to run the containers and link them. Note that this file should not make use of the build: instruction but rely instead on the image: instruction. Who wants to build/compile stuff while deploying to production?
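Such a production docker-compose.yml could look along these lines (a sketch in the same compose v1 syntax as the example below; the registry host and tags are placeholders):
db:
  image: registry.example.com/myproject/db:1.0
web:
  image: registry.example.com/myproject/web:1.0
  ports:
    - "80:80"
  links:
    - db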
This Docker blog article can help figuring out how to use docker-compose and docker-swarm to deploy on a cluster.
I recommend using docker for development too, in order to get full replication of the dependencies. Docker Compose is the key tool.
You can use a strategy like this:
docker-compose.yml
db:
image: my_database_image
ports: ...
machinex:
image: my_machine_x_image
web:
build: .
volumes:
- '/path/to/my/php/code:/var/www'
In your Dockerfile you can specify the dependencies to run your PHP code.
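For instance, the web service's Dockerfile might be as small as this (the base image and extension are assumptions):
FROM php:5.6-apache
# install the PHP extensions your code depends on
RUN docker-php-ext-install pdo_mysql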
Also, I recommend keeping my_database_image and my_machine_x_image as separate projects with their own Dockerfiles, because they can perfectly well be used with other projects.
If you are using a Mac, you are already using a VM called boot2docker.
I hope this helps.
