I need to install cURL compiled with OpenSSL and zlib via a Dockerfile for a Debian image with Apache and PHP 5.6. I tried many approaches, but since I don't have a strong understanding of Linux, I failed. I use docker-compose to bring up my container. My docker-compose.yaml looks like:
version: '2'
services:
  web:
    build: .
    command: php -S 0.0.0.0:80 -t /var/www/html/
    ports:
      - "80:80"
    depends_on:
      - db
    volumes:
      - $PWD/www/project:/var/www/html
    container_name: "project-web-server"
  db:
    image: mysql:latest
    ports:
      - "192.168.99.100:3306:3306"
    container_name: "project-db"
    environment:
      MYSQL_DATABASE: dbname
      MYSQL_USER: dbuser
      MYSQL_PASSWORD: dbpass
      MYSQL_ROOT_PASSWORD: dbpass
As the build script I use this Dockerfile:
FROM php:5-fpm
RUN apt-get update && apt-get install -y \
    apt-utils \
    curl libcurl3 libcurl3-dev php5-curl php5-mcrypt
RUN docker-php-ext-install -j$(nproc) curl
'docker-php-ext-install' is a helper script from the base image https://hub.docker.com/_/php/
The problem is that after $ docker build --rm . (which is successful) I don't get an image with cURL+SSL+zlib. After $ docker-compose up I have a working container with Apache+MySQL and can run my project, but the libraries I need are not there.
Could you explain how to properly add these extensions to the Apache in my container? I even tried to create my own Dockerfile and build Apache+PHP+the needed libs there, but had no result.
Your Dockerfile is not complete. You have not done a COPY (or similar) to transfer the source code you want to execute from the host into the container. The point of a Dockerfile is to set up an environment together with your source code, which finishes by launching a process (typically a server).
COPY code-from-some-location into-location-in-container
CMD path-to-your-server
... as per the URL you reference, a more complete Dockerfile would look like this:
FROM php:5.6-cli
COPY . /usr/src/myapp
WORKDIR /usr/src/myapp
CMD [ "php", "./your-script.php" ]
Notice the COPY, which recursively copies all files/dirs (typically the location of your source code and related files like data and/or config files) from the $PWD where you execute the command into the specified location inside the container. In Unix a period (.) indicates the current directory, so the above command
COPY . /usr/src/myapp
will copy all files and directories in the current directory of the host computer (the one you are using when typing the docker build command) into the container directory /usr/src/myapp.
The WORKDIR changes the working directory inside the container to the supplied path.
Finally, the CMD launches the server, which hums along once you launch the container.
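Tying this back to the original question (cURL built against OpenSSL and zlib), a fuller sketch could combine this COPY/WORKDIR/CMD pattern with the needed dev packages. This is only a sketch: the docker-php-ext-install curl call comes from the question itself, and the Debian dev-package names are assumptions not verified against the old php:5.6 image:

```dockerfile
FROM php:5.6-cli
# dev headers so the curl extension can build against OpenSSL and zlib
# (package names are assumed; adjust for your Debian release)
RUN apt-get update && apt-get install -y \
        libcurl4-openssl-dev \
        zlib1g-dev \
    && docker-php-ext-install -j$(nproc) curl
# copy the source code into the image and run it
COPY . /usr/src/myapp
WORKDIR /usr/src/myapp
CMD [ "php", "./your-script.php" ]
```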
I have created a simple Dockerfile to install Apache with PHP and then install packages from composer.json.
FROM php:7-apache
WORKDIR /var/www/html
COPY ./src/html/ .
COPY composer.json .
RUN apt-get update
RUN apt-get install -y unzip
# Install Composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN composer update
When I run docker build -t my-web-server . followed by docker run -p 8080:80 my-web-server, everything works fine and the packages install.
But when I use a docker-compose file:
version: "3.9"
services:
  ecp:
    build: .
    ports:
      - "8080:80"
    volumes:
      - ./src:/var/www
and perform docker-compose build followed by docker-compose up, the packages do not install and just index.php is taken across to the container.
My current file structure:
src
|-- html
|   |-- index.php
composer.json
docker-compose.yaml
Dockerfile
When docker-compose builds the image, all the console output is identical to that of docker build.
Your two approaches are not identical. You are using volumes in your docker-compose setup, but not in your docker run call. Your problem lies there.
More specifically, notice that in your docker-compose file you are mounting your host's ./src onto your container's /var/www – which is not giving you the correct structure, since you "shadow" the container's folder that contains your composer.json (which was copied into the container at build time).
To avoid such confusion, I suggest that if you want to mount a volume with your compose file (which is a good idea for development), your docker-compose.yml should mount the exact same paths as the COPY commands in your Dockerfile. For example:
volumes:
  - ./src/html:/var/www/html
  - ./composer.json:/var/www/html/composer.json
Alternatively, remove the volumes directive from your docker-compose.yml.
Note that having a file (in your case composer.json) copied into a folder in the container, while the same folder is also copied into the container as-is, can be a cause of additional problems and confusion. It is best to have the structure in the container mimic the one on the host as closely as possible.
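Putting the pieces together, a development docker-compose.yml that mirrors the Dockerfile's COPY targets could look like this (service name, port, and paths all taken from the question; nothing new assumed):

```yaml
version: "3.9"
services:
  ecp:
    build: .
    ports:
      - "8080:80"
    volumes:
      # mirror the Dockerfile's COPY destinations exactly
      - ./src/html:/var/www/html
      - ./composer.json:/var/www/html/composer.json
```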
I'm trying to create a Docker container (using docker-compose) for an application with Doctrine. The problem is: if I just run the application it works, but if I use the application before running ./vendor/bin/doctrine orm:generate-proxies, I get the error:
PHP Warning: require(/tmp/__CG__DomainEntitiesAnyEntity.php): failed to open stream: No such file or directory in /var/www/html/vendor/doctrine/common/lib/Doctrine/Common/Proxy/AbstractProxyFactory.php on line 204
PHP Fatal error: require(): Failed opening required '/tmp/__CG__DomainEntitiesAnyEntity.php' (include_path='.:/usr/local/lib/php') in /var/www/html/vendor/doctrine/common/lib/Doctrine/Common/Proxy/AbstractProxyFactory.php on line 204
OK, so just run the command in docker-compose.yml:
version: '3'
services:
  apache_server:
    build: .
    working_dir: /var/www/html
    ports:
      - "80:80"
    volumes:
      - ./:/var/www/html
      - ../uploads:/var/www/uploads
      - ./.docker/apache2.conf:/etc/apache2/apache2.conf
      - ./.docker/000-default.conf:/etc/apache2/sites-avaliable/000-default.conf
      - ./.docker/php.ini:/etc/php/7.4/apache2/php.ini
    depends_on:
      - postgres_database
    command: sh -c "./vendor/bin/doctrine orm:generate-proxies"
    networks:
      - some-network
Yes, it works as expected and generates the proxies into the /tmp folder, but after the command runs – right after the output showing the proxies generated – I get the message exited with code 0. This happens because Docker finishes the container execution after getting status code 0. So I tried two more things:
Add tail to something:
command: sh -c "./vendor/bin/doctrine orm:generate-proxies && tail -f /var/www/html/log.txt"
but when I do this, the server no longer responds to requests (http://localhost/).
Add tty before running the command:
tty: true
# restart: unless-stopped <--- also tried this
and that doesn't work either. Is there a way to solve this without having to run the command manually inside the container every time?
PS: my dockerfile is this one:
FROM php:7.4-apache
WORKDIR /var/www/html
RUN a2enmod rewrite
RUN a2enmod headers
RUN mkdir /var/www/uploads
RUN mkdir /var/www/uploads/foo-upload-folder
RUN mkdir /var/www/uploads/bar-upload-folder
RUN chmod 777 -R /var/www/uploads
RUN apt-get update \
    && apt-get install -y \
        libpq-dev \
        zlib1g-dev \
        libzip-dev \
        unzip \
    && docker-php-ext-install \
        pgsql \
        pdo \
        pdo_pgsql \
        zip
RUN service apache2 restart
Cause of your issue
Your Docker Compose configuration of command
command: sh -c "./vendor/bin/doctrine orm:generate-proxies"
in docker-compose.yml overrides the Cmd of the Docker image php:7.4-apache that would normally start the Apache server, see
docker inspect php:7.4-apache
or more specific
docker inspect --format="{{ .Config.Cmd }}" php:7.4-apache
which gives you
[apache2-foreground]
Solution in general
If you would like to run a command before the original command of a Docker image, use entrypoint and make sure you call the original entrypoint as well, see
$ docker inspect --format="{{ .Config.Entrypoint }}" php:7.4-apache
[docker-php-entrypoint]
For example, instead of command define
entrypoint: sh -c "./vendor/bin/doctrine orm:generate-proxies && docker-php-entrypoint"
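Spelled out in docker-compose.yml, that could look like the sketch below. Note that overriding entrypoint with sh -c in this form also bypasses the image's CMD, so apache2-foreground (the original Cmd shown above) is chained explicitly here – treat this as a sketch rather than a verified configuration:

```yaml
services:
  apache_server:
    build: .
    # generate the proxies first, then hand off to the image's original startup
    entrypoint: sh -c "./vendor/bin/doctrine orm:generate-proxies && docker-php-entrypoint apache2-foreground"
```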
Solution in your case
However, in your case, I would configure Doctrine like this (see Advanced Doctrine Configuration)
$config = new Doctrine\ORM\Configuration;
// ...
if ($applicationMode == "development") {
    $config->setAutoGenerateProxyClasses(true);
} else {
    $config->setAutoGenerateProxyClasses(false);
}
In development your code changes (it is mounted as a volume), so proxies may have to be updated/generated. In production your code does not change anymore (it is copied into the Docker image). Hence you should generate the proxies in your Dockerfile (after you have copied the source code), e.g.
FROM php:7.4-apache
WORKDIR /var/www/html
# ...
COPY . /var/www/html
RUN ./vendor/bin/doctrine orm:generate-proxies
My goal is to fetch PHP dependencies in one stage of a Dockerfile and then copy those dependencies (the vendor/ dir) to the next stage. However, once a volume is specified in docker-compose.yml, it overrides that COPY statement as if it never happened.
docker-compose.yml
version: "3"
services:
  app:
    build:
      context: .
      dockerfile: docker/app/Dockerfile
    volumes:
      - .:/var/www/html
docker/app/Dockerfile
FROM composer AS php_builder
COPY . /app
RUN composer install --no-dev
FROM php:7.1-fpm
COPY --from=php_builder /app/vendor /var/www/html/vendor
The result of building and running this is a /var/www/html directory that doesn't have the vendor/ directory as I'd expect.
My guess is that the volume specified in the docker-compose.yml service definition is actually mounted after the COPY --from statement runs, which seems logical. But how do I get around this? I'd still like to use a volume here instead of an ADD or COPY command.
You can combine a bind mount with a named volume to achieve your aim; a minimal example for your reference:
docker-compose.yaml:
version: "3"
services:
  app:
    build:
      context: .
      dockerfile: docker/app/Dockerfile
    volumes:
      - my_folder:/var/www/html/vendor
      - .:/var/www/html
volumes:
  my_folder:
docker/app/Dockerfile:
FROM composer AS php_builder
COPY . /app
#RUN composer install --no-dev
RUN mkdir -p vendor && echo "I'm dependency!" > vendor/dependencies.txt
FROM php:7.1-fpm
COPY --from=php_builder /app/vendor /var/www/html/vendor
Results:
shubuntu1@shubuntu1:~/test$ ls
docker  docker-compose.yaml  index.php
shubuntu1@shubuntu1:~/test$ docker-compose up -d
Creating network "test_default" with the default driver
Creating test_app_1 ... done
shubuntu1@shubuntu1:~/test$ docker exec -it test_app_1 /bin/bash
root@bf59d8684581:/var/www/html# ls
docker  docker-compose.yaml  index.php  vendor
root@bf59d8684581:/var/www/html# cat vendor/dependencies.txt
I'm dependency!
From the above execution, you can see that the dependencies.txt generated in the first stage of the Dockerfile is still visible in the container. This works because a named volume, on first use, is populated with the content already present at its mount point in the image. Docker manages a named volume's data itself, while a bind mount leaves data management to you.
Context
I set up a PHP application recently to work in a docker container connected to a database in a different container.
In production we're using a single-container environment, since the app just connects to the database, which is hosted somewhere else. Nonetheless, we decided to use two containers and docker-compose locally for the sake of easing the development workflow.
Problem
The issue we've encountered is that the first time we build and run the application via docker-compose up --build, Composer's vendor directory isn't available in the container, even though we had a specific RUN composer install line in the Dockerfile. We had to execute composer install from within the container once it was running.
Solution found
After a lot of googling around, we figured that we had two possible solutions:
change the default command of our Docker image to the following:
bash -c "composer install && /usr/sbin/apache2ctl -D FOREGROUND"
Or simply override the container's default command to the above via docker-compose's command.
The difference is that when we overrode the command via docker-compose, deploying the application to our server would run seamlessly, as it should, but when changing the default command in the Dockerfile it would suffer roughly a minute of downtime every time we deployed.
This helped during this process:
Running composer install within a Dockerfile
Some (maybe wrong) conclusions
My conclusion was that that minute of downtime was due to the container having to install all the dependencies via Composer before running the Apache server, versus simply running the server.
Furthermore, another conclusion I drew from all the poking around was that the reason docker-compose up --build wouldn't install the Composer dependencies was that we had a volume specified in docker-compose.yml which overrode the directories in the container.
These helped:
https://stackoverflow.com/a/38817651/4700998
https://stackoverflow.com/a/48589910/4700998
Actual question
I was hoping somebody could shed some light on all this, since I don't fully understand what's going on: why running docker-compose would not install the PHP dependencies, but including composer install in the default command would, and why adding composer install to the docker-compose.yml is better. Furthermore, how do volumes come into all this, and are they the real reason for all the hassle?
Our current docker file looks like this:
FROM php:7.1.27-apache-stretch
ENV DEBIAN_FRONTEND=noninteractive
# install some stuff, PHP, Apache, etc.
WORKDIR /srv/app
COPY . .
RUN composer install
RUN service apache2 restart
EXPOSE 80
CMD ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]
And our current docker-compose.yml like this:
version: '3'
services:
  database:
    image: mysql:5.7
    container_name: container-cool-name
    command: mysqld --user=root --sql_mode=""
    ports:
      - "3306:3306"
    volumes:
      - ./db_backup.sql:/tmp/db_backup.sql
      - ./.docker/import.sh:/tmp/import.sh
    environment:
      MYSQL_DATABASE: my_db
      MYSQL_USER: my_user
      MYSQL_PASSWORD: password
      MYSQL_ROOT_PASSWORD: test
  app:
    build:
      context: .
      dockerfile: Dockerfile
    image: image-name
    command: bash -c "composer install && /usr/sbin/apache2ctl -D FOREGROUND"
    ports:
      - 8080:80
    volumes:
      - .:/srv/app
    links:
      - database:db
    depends_on:
      - database
    environment:
      DB_HOST: db
      DB_PORT: 3306
      DB_DATABASE: my_db
      DB_USER: my_user
      DB_PASSWORD: password
Your first composer install within the Dockerfile works fine, and the resulting image has vendor/ etc.
But later you create a container from that image, and that container is executed with the whole directory being replaced by a host-directory mount:
volumes:
  - .:/srv/app
So your Docker image has both your files and the installed vendor files, but then you replace the project directory with one from your host which does not have the vendor files, and the final result looks as if the build was never done.
My advice would be:
don't run the build a second time via the command override in docker-compose – the composer install in your Dockerfile has already produced the result
mount individual folders in your container, i.e. not .:/srv/app but ./src:/srv/app/src, etc.
or map the whole folder, but copy the vendor files from the image/container back to your host
or use a third-party utility built to solve exactly this problem, e.g. http://docker-sync.io or many others
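As a sketch of the second option, mount only the folders that actually change during development so the image's vendor/ stays visible. The folder names src and public here are assumptions based on a typical layout; substitute your own:

```yaml
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      # no .:/srv/app here, so the image's /srv/app/vendor is left untouched
      - ./src:/srv/app/src
      - ./public:/srv/app/public
```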
I have three Docker containers running on macOS Sierra, namely web, mysql and mongo, and have linked both mongo and mysql into web, which is essentially an Ubuntu Xenial base with Apache and PHP added.
I am currently mounting my local Symfony project into the web container, and that seems to be working fine, but when I try to interact with the DB in any way, I get:
An exception occured in driver: SQLSTATE[HY000] [2002] Connection refused
I've tried almost every combination of parameter values, but keep getting the same result.
I suspect it might have something to do with the way that I am linking the containers?
I'm in the process of learning Docker, so please excuse my limited knowledge.
Thanks!
Web dockerfile:
FROM ubuntu:xenial
MAINTAINER Some Guy <someguy@domain.com>
RUN apt-get update && apt-get install -y \
    apache2 \
    vim \
    php \
    php-common \
    php-cli \
    php-curl \
    php-mysql \
    php-mongodb \
    libapache2-mod-php \
    php-gd
RUN mkdir -p /var/www/symfony.local/public_html
RUN chown -R $USER:$USER /var/www/symfony.local/public_html
RUN chmod -R 755 /var/www
COPY config/php/php.ini /usr/local/etc/php/
COPY config/apache/sites-available/*.conf /etc/apache2/sites-available/
RUN a2enmod rewrite
RUN a2dissite 000-default.conf
RUN a2ensite symfony.local.conf
EXPOSE 80
CMD ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]
Mysql dockerfile:
FROM mysql:5.7
MAINTAINER Some Guy <someguy@domain.com>
# Set the root users password
ENV MYSQL_ROOT_PASSWORD password
# Copy over the DB dump to be run upon creation
COPY sql/ /docker-entrypoint-initdb.d
# Copy over the custom mysql config file
COPY config/ /etc/mysql/conf.d
EXPOSE 3306
Run commands:
docker run --name mongo -d mongo # I'm making use of the official Mongo image
docker run --name mysql -v /usr/local/var/mysql:/var/lib/mysql -d someguy/local:mysql
docker run --name web -d -p 80:80 --link mysql:mysql --link mongo:mongo -v ~/Sites/symfony.local/:/var/www/symfony.local/public_html/ someguy/local:web
Symfony parameters.yml file:
parameters:
    database_host: mysql
    database_port: 3306
    database_name: gorilla
    database_user: root
    database_password: password
UPDATE:
So I've moved over to using docker-compose, but am still receiving the same error.
docker-compose.yml file
version: "2"
services:
  web:
    build: ./web
    ports:
      - "80:80"
    volumes:
      - ~/Sites/symfony.local/:/var/www/symfony.local/public_html/
    depends_on:
      - db
      - mongo
  mongo:
    image: mongo:latest
  mysql:
    image: mysql:latest
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: password
An exception occured in driver: SQLSTATE[HY000] [2002] Connection refused
This means it has nothing to do with your network per se – the links are just fine.
What you are lacking is how – and whether – the user has been created, see https://github.com/docker-library/mysql/blob/c207cc19a272a6bfe1916c964ed8df47f18479e7/5.7/docker-entrypoint.sh#L122 (note the user is created without any host limitation).
The question in your case is what is inside your "sql/" folder – those scripts are executed by the entrypoint.
Be sure to never call exit in those scripts; it will interrupt the main script, see https://github.com/docker-library/mysql/blob/c207cc19a272a6bfe1916c964ed8df47f18479e7/5.7/docker-entrypoint.sh#L151
Check your docker logs for mysql to ensure the script did not print any warnings, using https://github.com/docker-library/mysql/blob/c207cc19a272a6bfe1916c964ed8df47f18479e7/5.7/docker-entrypoint.sh as a reference.
And last but not least, please use docker-compose. If you have issues with timing (mysql starting too slowly, so your web container freaks out), use a "wait for mysql" entrypoint in web:
#!/bin/bash
# this script only exists to wait for the database before we fire up tomcat / standalone
RET=1
echo "Waiting for database"
while [[ RET -ne 0 ]]; do
    sleep 1
    if [ -z "${db_password}" ]; then
        mysql -h "$db_host" -u "$db_user" -e "select 1" > /dev/null 2>&1; RET=$?
    else
        mysql -h "$db_host" -u "$db_user" -p"$db_password" -e "select 1" > /dev/null 2>&1; RET=$?
    fi
done
Set $db_host, $db_user and $db_password accordingly, using ENV or whatever suits you.
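For instance, with the docker-compose.yml above, those variables could be supplied to the web service like this (values mirror the mysql service's configuration; db_host is the service name on the compose network):

```yaml
services:
  web:
    build: ./web
    environment:
      db_host: mysql
      db_user: root
      db_password: password
```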