Building a Docker container with CentOS 7, Apache and PHP - php

I'll preface this by saying I am very new to the docker world and despite reading documentation I am still a little confused about a few things.
I want to build a container with CentOS 7, Apache and PHP. I don't want to use an already existing image; I want to build a custom container. I have the following folder structure.
My rw/docker/webserver/Dockerfile:
FROM centos:7
ENV container docker
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == \
systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
VOLUME [ "/sys/fs/cgroup" ]
CMD ["/usr/sbin/init"]
RUN yum -y install httpd
RUN systemctl start httpd
RUN systemctl enable httpd
RUN yum update -y && yum install -y libpng-dev curl libcurl4-openssl-dev
RUN docker-php-ext-install pdo pdo_mysql gd curl
RUN a2enmod rewrite
My docker-compose.yml:
version: '2'
services:
  webserver:
    build: ./docker/webserver
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /**PATH**/rw/services:/var/www/html
    links:
      - db
  db:
    image: mysql:5.7
    ports:
      - "3306:3306"
    volumes:
      - ./db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=****
      - MYSQL_DATABASE=****
This fails when docker tries to start httpd with the error
ERROR: Service 'webserver' failed to build: The command '/bin/sh -c systemctl start httpd' returned a non-zero code: 1
Q1. Why is the install failing?
Q2. Is this the proper way to do this? Should my Dockerfiles for CentOS and Apache+PHP be separate? If yes, how does that work?

Q1. I think systemctl may not be provided with the CentOS docker image.
Indeed, docker services are not meant to be run as daemons, but in the foreground. Take a look at Apache's original httpd-foreground shell script for a better understanding of the concept.
Q2. No, that's not the right way IMHO.
Running Apache is the job of an entrypoint or command script.
So instead of RUN your-command-to-run-apache, it would rather be CMD your-command-to-run-apache.
Once again, the official Apache repository can give you some clues about this.
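For illustration, here is a minimal sketch of the CentOS image with Apache started in the foreground via CMD. This is a hedged example: it assumes the stock httpd and php packages from the CentOS 7 repositories are enough for your app, and it drops the systemd cleanup entirely since no init system is needed.
FROM centos:7
# install Apache and PHP from the base repositories
RUN yum -y install httpd php php-mysql && yum clean all
EXPOSE 80
# run Apache in the foreground; the container lives as long as this process does
CMD ["/usr/sbin/httpd", "-DFOREGROUND"]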

To my eyes these kinds of Dockerfiles look outdated, as they try to run a full systemd daemon inside the container (mapping host resources like /sys/fs/cgroup into it to make that possible). That is a workaround, because systemd cannot normally run on its own inside a container.
Instead I am using the docker-systemctl-replacement script. Its systemctl.py can parse the normal *.service files to know how to start and stop services. You can register it as the CMD of an image, in which case it will look for all the systemctl-enabled services; those will be started and stopped in the correct order.
There are even some test cases for the LAMP stack available, so it should work in your case quite smoothly. The systemctl.py script is compatible enough with systemd's systemctl that you can simply overwrite /usr/bin/systemctl inside the image, and the non-docker installation instructions will then work for docker builds as well.
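As a hedged sketch of that approach (it assumes you have downloaded systemctl.py from the docker-systemctl-replacement repository next to the Dockerfile; check the project's README for the exact file and the Python variant matching your base image):
FROM centos:7
RUN yum -y install httpd php php-mysql && yum clean all
# overwrite systemctl with the replacement script so the usual service commands work
COPY systemctl.py /usr/bin/systemctl
RUN systemctl enable httpd
EXPOSE 80
# acts as a minimal init: starts the systemctl-enabled services and keeps running
CMD ["/usr/bin/systemctl"]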

Related

How to create a Symfony project (only) in Docker container

I wish to install Symfony and use it on a project, but without installing it on my system. So I figured it could be done using Docker, yet my efforts to make it work haven't paid off so far.
I created a Dockerfile where I tried installing everything I could possibly need and then running Symfony, while setting up a simple docker-compose.yml. When I try to up it, the container just exits, and from its log it seems that Symfony could not be found, even though the image seems to build OK.
So what would be the correct way to accomplish this?
Dockerfile:
FROM php:8.1-apache
RUN a2enmod rewrite
RUN apt-get update \
&& apt-get install -y libzip-dev git wget --no-install-recommends \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
RUN docker-php-ext-install pdo mysqli pdo_mysql zip;
RUN wget https://getcomposer.org/download/2.2.0/composer.phar \
&& mv composer.phar /usr/bin/composer && chmod +x /usr/bin/composer
RUN composer create-project symfony/skeleton:"6.1.*" app
RUN cd /app
CMD ["symfony", "server:start"]
docker-compose.yml:
version: '3.7'
services:
  db:
    image: mysql:5.7
    environment:
      - MYSQL_ROOT_PASSWORD=somerootpass
      - MYSQL_PASSWORD=somepass
      - MYSQL_DATABASE=dockerizeme_db
      - MYSQL_USER=someuser
  web:
    build: .
    ports:
      - 8080:8000
    volumes:
      - './app:/app'
    depends_on:
      - db
You have a couple of problems here.
First, you did not install the Symfony CLI in your container, see https://symfony.com/download.
Without it, you cannot use the symfony command. The composer create-project command does not install the CLI; it only creates the framework skeleton.
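If you do want the symfony binary inside the image, a hedged sketch of the installer-based approach looks like this (the installer URL and the install path under /root are assumptions based on Symfony's download page; verify them there before relying on this):
RUN wget -qO- https://get.symfony.com/cli/installer | bash \
    && mv /root/.symfony*/bin/symfony /usr/local/bin/symfony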
Next, you are mounting a local folder ./app onto your container's /app; thus, the result of create-project in your Dockerfile is overwritten at run time.
If you want to create the project in your local folder mounted inside the container, you would have to do it in the ENTRYPOINT.
But since it's something you will most likely want to do only once, and you really do not want anything on your local computer, you could take the following approach:
Temporarily change your command, maybe in your docker-compose.yaml file, to ["sleep", "infinity"] and re-up your containers.
Run docker compose exec web composer create-project symfony/skeleton:"6.1.*" app
Change back your command and re-up your containers one last time.
Bind mounts are mounted at run time so they are not yet mounted during your build.
Also, I see that you run Symfony's dev server but are using an Apache PHP image. I would normally do one or the other, but not both.
Plus, you do a RUN cd /app, but the correct way to do that in this context would be WORKDIR /app.
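Putting those points together, here is a hedged sketch of a Dockerfile that keeps the Apache image, drops the dev server, and leaves project creation to the one-off docker compose exec step described above. The document-root lines follow the pattern documented for the official php:*-apache images; /app and /app/public are assumptions matching the compose file's bind mount.
FROM php:8.1-apache
RUN a2enmod rewrite
RUN apt-get update \
    && apt-get install -y libzip-dev git wget --no-install-recommends \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
RUN docker-php-ext-install pdo mysqli pdo_mysql zip
COPY --from=composer:2.2 /usr/bin/composer /usr/bin/composer
# serve Symfony's public/ directory from the bind-mounted project
ENV APACHE_DOCUMENT_ROOT=/app/public
RUN sed -ri 's!/var/www/html!${APACHE_DOCUMENT_ROOT}!g' /etc/apache2/sites-available/*.conf \
    && sed -ri 's!/var/www/!${APACHE_DOCUMENT_ROOT}!g' /etc/apache2/apache2.conf /etc/apache2/conf-available/*.conf
WORKDIR /app
# no CMD needed: the base image already runs apache2-foreground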

Containerizing legacy PHP Laravel project

I have to containerize a legacy PHP Laravel application and deploy it in an EKS cluster, and as I am completely new to both PHP and Laravel, I am currently having some difficulties.
After googling some examples of a Laravel Dockerfile, there seem to be many different methods of doing this, and I had some trouble understanding and executing the process.
In one of the blogs I found, a Dockerfile and a docker-compose.yaml file are used to containerize the application, like the ones below.
FROM php:7.3-fpm
# step 2
WORKDIR /root
RUN apt-get update
RUN apt-get install -y curl
# step 3
RUN curl -sS https://getcomposer.org/installer | php
RUN mv composer.phar /usr/bin/composer
# step 4
RUN apt-get install -y zlib1g-dev && apt-get install -y libzip-dev
RUN docker-php-ext-install zip
# step 5
RUN composer global require laravel/installer
RUN ["/bin/bash", "-c", "echo PATH=$PATH:~/.composer/vendor/bin/ >> ~/.bashrc"]
RUN ["/bin/bash", "-c", "source ~/.bashrc"]
# step 6
EXPOSE 9000
CMD ["php-fpm"]
docker-compose.yaml:
version: '3'
services:
  proxy:
    image: nginx:latest
    ports:
      - "80:80"
    volumes:
      - ./proxy/nginx.conf:/etc/nginx/nginx.conf
  web:
    image: nginx:latest
    expose:
      - "8080"
    volumes:
      - ./source:/source
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
  php:
    build:
      context: .
      dockerfile: php/Dockerfile
    volumes:
      - ./source:/source
I am guessing that nginx is used as the web application server, kind of like how Apache Tomcat is used in Spring Boot, but other than that I am a little unclear on why there needs to be a yaml file for this.
In addition, I tried to build using the Dockerfile and the docker-compose.yaml file, with the following command.
docker build -t website -f Dockerfile .
I did succeed in building the image, but I seem to have trouble running a container using this image.
It would be sincerely appreciated if you could tell me what I am doing wrong.
Thank you in advance!
Building with the Dockerfile only builds the php-fpm image. To run the application you need at least one HTTP server (like nginx) that forwards requests to php-fpm; there is probably something about that in ./proxy/nginx.conf.
It is also possible to build everything into one image (nginx plus php-fpm); in that case you probably want to start with a base image different than php:7.3-fpm (I usually start with alpine or ubuntu). It is then possible to run that image as a single container that handles HTTP requests.
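For reference, a hedged sketch of what the mounted ./nginx/default.conf typically contains in this kind of setup; the php service name, port 9000, and the /source path are taken from the compose file above, and root /source/public assumes a standard Laravel layout:
server {
    listen 8080;
    root /source/public;
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        include fastcgi_params;
        # forward PHP requests to the php-fpm service defined in docker-compose.yaml
        fastcgi_pass php:9000;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}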

Using RUN in Dockerfile to install Laravel inside a php image is not working

I am trying to install Laravel inside a php container using this line in my Dockerfile:
RUN composer create-project laravel/laravel=8.* . --prefer-dist
My goal is to install it in my container directory /var/www/html. My first assumption was that by using . the command would run in the current working directory, which is /var/www/html itself. And it actually works when I go inside the container and run the command there. But when building with the Dockerfile, it seems that nothing gets installed anywhere, even though the message I got in the terminal looks like the command ran successfully:
=> [stage-0 6/6] RUN composer create-project laravel/laravel=8.* . --prefer-dist
Currently my docker-compose.yaml looks like this:
version: '3.8'
services:
  # PHP + Apache service
  php-apache:
    build:
      context: ./php-apache
    working_dir: /var/www/html
    ports:
      - 80:80
    volumes:
      - ./src/:/var/www/html
      - ./php-apache/apache2.conf:/etc/apache2/apache2.conf
      - ./php-apache/000-default.conf:/etc/apache2/sites-available/000-default.conf
    depends_on:
      mysql:
        condition: service_healthy
  # other services and volumes...
Also my Dockerfile:
FROM php:8.1-apache
# Mysql driver on php
RUN docker-php-ext-install pdo pdo_mysql mysqli
# Composer
RUN apt-get update && apt-get install -y git zip
COPY --from=composer:2.2.2 /usr/bin/composer /usr/bin/composer
# Node.js
RUN apt-get update && apt-get install -y nodejs npm
# Laravel
RUN composer create-project laravel/laravel=8.* . --prefer-dist
I already set the working_dir to /var/www/html. Why is that the case here?
You could use an entrypoint script to run commands after the container has started.
Create a bash script in the same location where you have Dockerfile.
start-container.sh
#!/usr/bin/env bash
# This will try to install Laravel in /var/www/html/new-laravel if the folder does not exist
composer create-project laravel/laravel=8.* /var/www/html/new-laravel --prefer-dist
# If your container exits right after this, consider starting the web server here as well, for example:
# service php7.4-fpm start && nginx -g "daemon off;"
Remember to make the bash script executable; run this command before you run docker-compose build:
chmod u+x start-container.sh
Then in your Dockerfile towards the end add:
ENTRYPOINT ["start-container"]
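For that ENTRYPOINT to resolve, the script also has to be copied into the image and made executable there. A hedged sketch of the full tail of the Dockerfile (ending the script itself with exec apache2-foreground, the base image's default command, keeps Apache running after the project is created):
COPY start-container.sh /usr/local/bin/start-container
RUN chmod +x /usr/local/bin/start-container
ENTRYPOINT ["start-container"]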
Then:
docker-compose stop && docker-compose build && docker-compose up -d

How to use php & nodejs from separate containers

Let's say I have the following situation:
docker-compose.yml
version: '3'
services:
  web:
    build: .
    links:
      - apache
      - php
      - node
    depends_on:
      - apache
      - php
      - node
    volumes:
      - .:/var/www/html
  apache:
    image: httpd
  php:
    image: php
  node:
    image: node
and I also have a Dockerfile
FROM phusion/baseimage
RUN apt-get update
RUN apt-get install -yq git curl zip wget curl
RUN curl -s https://getcomposer.org/installer | php # error, no PHP exec found
RUN mv composer.phar /usr/local/bin/composer
RUN npm install -g bower gulp
COPY ./ /var/www/html
WORKDIR /var/www/html
RUN apt-get clean
RUN rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
CMD ["/sbin/my_init"]
Now my question is: how could I use PHP & Node, which are installed in separate containers, in this main app Dockerfile? Is that even possible, or do I need to manually install PHP & Node inside my Dockerfile?
Docker doesn't work the way you are thinking. This isn't like code inheritance or configuring a single machine with multiple runtime languages. You are configuring multiple separate machines (containers), each with its own runtime environment.
If you want to run a PHP app, you put that app in the container that has PHP; same with Node.
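As a hedged illustration of that idea, a compose file along these lines lets each tool run in the container that already has its runtime; the image tags, the mount path, and the example commands are assumptions to adapt to your project:
version: '3'
services:
  php:
    image: php:7.4-fpm        # assumption: pick the PHP version your app needs
    working_dir: /var/www/html
    volumes:
      - .:/var/www/html
  node:
    image: node:16            # assumption: pick the Node version your tooling needs
    working_dir: /var/www/html
    volumes:
      - .:/var/www/html
# one-off commands then run against the mounted code, e.g.:
#   docker-compose run --rm node npm install
#   docker-compose run --rm php php -v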

Symfony environment in Docker container

I want to run my Symfony3 app out of a Docker container. As I understand it, I can install/create multiple images with Docker and spin them up into a container that will hold my app.
I have been going through some docs on Docker on how I might do this, and I have seen that I can have Docker images such as:
Ubuntu
PHP
Nginx -> As I understand it, it is like an Apache server, so in theory it will handle all the requests and responses.
But I still find it hard to understand the concept of spinning multiple images into one container.
Also I have seen something called a Dockerfile, which apparently can build my dev environment into one container that I can also work with.
Question 1:
Can someone please clarify the whole process I still find it hard to wrap my head around it.
Question 2:
How can I build a Dockerfile and what is it?
You don't want to spin multiple images into one container. It's possible that you don't even need a Dockerfile (but for PHP, you probably do).
The usual mantra concerning Docker is "one process per container", and after working with it for several months, I find this to be great advice, even if it's not always achievable. For a PHP app, whether it's Symfony, Cake, Laravel, WordPress, whatever, this is how I do it. I use Apache, and it sounds like you might be more familiar with Apache as well. You can easily substitute the official nginx container with minor changes to my example if desired.
One container runs PHP-FPM
One container runs Apache (httpd)
If you need a database, one container for mysql
Optionally, a container for composer.
docker-compose to orchestrate all of these containers
I typically use the official httpd container, the official mysql container, and I extend the official php-fpm container as shown below to include the modules that I need. Here is an example of a PHP-FPM Dockerfile that adds in some external libs that might be needed for your app:
FROM php:5.5-fpm
RUN apt-get update && apt-get install -y \
php5-mysql \
php5-curl \
php5-common \
php5-gd \
php5-imagick \
php5-intl \
php5-dev \
php5-sqlite \
php5-xdebug \
php5-memcached \
\
libmemcached-dev \
libmcrypt-dev \
libfreetype6-dev \
libxml2-dev \
libmagickwand-dev \
libjpeg62-turbo-dev \
libpng-dev && \
\
docker-php-ext-install pdo pdo_mysql && \
docker-php-ext-install soap && \
docker-php-ext-configure gd --with-jpeg-dir=/usr/include/ && \
docker-php-ext-install gd && \
docker-php-ext-install iconv mcrypt && \
\
pecl install imagick && \
docker-php-ext-enable imagick && \
pecl install memcached && \
docker-php-ext-enable memcached && \
\
pecl install xdebug && \
docker-php-ext-enable xdebug && \
\
mkdir -p /app/content && \
mkdir -p /app/usr/local/apache2 && \
cd /app/usr/local/apache2 && \
ln -s ../../../content htdocs
COPY copy/xdebug.ini /usr/local/etc/php/conf.d/xdebug.ini
This builds an image that I actually use for development. In addition to installing dependencies, it copies a config for xdebug, and sets up the folder structure to hold my app.
You would build this container like this:
docker build -f nameoffile.Dockerfile -t myhubaccount/myphpcontainer \
./path/to/folder/where/dockerfile/is
This builds an image on your machine tagged as myhubaccount/myphpcontainer and you can refer to it in your compose file.
A basic compose file that tells these containers how to talk to each other might look something like this:
docker-compose.yml
version: '2'
services:
  httpd:
    image: httpd:latest
    volumes:
      - ./docker_conf/httpd.conf:/usr/local/apache2/conf/httpd.conf
      - ./webroot:/usr/local/apache2/htdocs
    ports:
      - "80:80"
    links:
      - fpm
    logging:
      options:
        max-size: "0"
  database:
    image: mysql
    ports:
      - "3306:3306"
    volumes:
      - ./schema.sql:/docker-entrypoint-initdb.d/schema.sql
    environment:
      MYSQL_ALLOW_EMPTY_PASSWORD: "yes"
      MYSQL_DATABASE: development
    logging:
      options:
        max-size: "5k"
  fpm:
    image: myhubaccount/myphpcontainer
    volumes:
      - ./webroot:/app/content
    links:
      - database
    logging:
      options:
        max-size: "50k"
I think it's beneficial to highlight several parts of this file. First, for php-fpm you need to set up apache to talk to the fpm server. The links object under httpd tells the container that there is another container with a domain name of "fpm", and docker knows how to resolve that name, so any communication with the fpm server can use that name. We have to mount (under volumes) the apache config in the httpd container. It looks like the default config, but has this part added to accommodate php-fpm:
ProxyTimeout 30
<FilesMatch ".*\.php$">
    SetHandler "proxy:fcgi://fpm:9000"
</FilesMatch>
This tells apache to forward requests for php files to the fpm server and serve the result.
The ports entry causes port 80 of the container to be forwarded to port 80 of the docker machine. This is localhost on Linux, or the docker-machine IP on Mac and Windows. You can find this IP with the console command docker-machine ip.
We do the same thing on the mysql container so that we can access mysql directly with a tool like MySQL Workbench. You can read about the environment variables that the official mysql container allows and what they do.
We have links for fpm, if it needs to talk to the database. The hostname for your database in this case is just "database".
The logging items are not necessary, just personal preference to keep the log output from becoming excessive.
Once you have all this in place, you bring up the environment with docker-compose up. If you want to take a look at what a container looks like you can get a shell on a running container with docker-compose exec fpm bash, substituting "fpm" with the name of the container you want to look at. The caveat here is that the container must actually include the bash binary. All of these here do, but some containers do not.
I hope that this gives enough of a PHP-specific example to help you wrap your head around how Docker works. I would suggest re-reading the docs for both Docker and Docker Compose. And I would also suggest reading the Dockerfiles for the official images if you are interested in building your own containers. The docs have links to Dockerfiles that the Docker team considers to be exemplary.
