Containerizing legacy PHP Laravel project

I have to containerize a legacy PHP Laravel application and deploy it to an EKS cluster, and as I am completely new to both PHP and Laravel, I am having some difficulties.
After googling for examples of a Laravel Dockerfile, there seem to be many different ways of doing this, and I had trouble understanding and executing the process.
One of the blogs I found uses a Dockerfile together with a docker-compose.yaml file to containerize the application, like the ones below.
Dockerfile:
FROM php:7.3-fpm
# step 2: working directory and base tools
WORKDIR /root
RUN apt-get update
RUN apt-get install -y curl
# step 3: install Composer
RUN curl -sS https://getcomposer.org/installer | php
RUN mv composer.phar /usr/bin/composer
# step 4: install the zip PHP extension (used by Composer to unpack packages)
RUN apt-get install -y zlib1g-dev && apt-get install -y libzip-dev
RUN docker-php-ext-install zip
# step 5: install the Laravel installer globally and add it to PATH
RUN composer global require laravel/installer
RUN ["/bin/bash", "-c", "echo PATH=$PATH:~/.composer/vendor/bin/ >> ~/.bashrc"]
RUN ["/bin/bash", "-c", "source ~/.bashrc"]
# step 6: run php-fpm on port 9000
EXPOSE 9000
CMD ["php-fpm"]
docker-compose.yaml:
version: '3'
services:
  proxy:
    image: nginx:latest
    ports:
      - "80:80"
    volumes:
      - ./proxy/nginx.conf:/etc/nginx/nginx.conf
  web:
    image: nginx:latest
    expose:
      - "8080"
    volumes:
      - ./source:/source
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
  php:
    build:
      context: .
      dockerfile: php/Dockerfile
    volumes:
      - ./source:/source
I am guessing that nginx is used as the web server, kind of like how Apache Tomcat is used with Spring Boot, but other than that I am a little unclear on why there needs to be a YAML file at all.
In addition, I built the image from the Dockerfile with the following command.
docker build -t website -f Dockerfile .
I did succeed in building and exporting the image, but I am having trouble running a container from it.
It would be sincerely appreciated if you could tell me what I am doing wrong.
Thank you in advance!

Building with the Dockerfile only builds the php-fpm image. To run the application you need at least one HTTP server (like nginx) in front of it that forwards requests to php-fpm; there is probably something about that in ./proxy/nginx.conf.
It is also possible to build everything into one image (nginx + php-fpm); in that case you would probably want to start from something other than php:7.3-fpm (I usually start from alpine or ubuntu). Then it is possible to run that single image as a container and handle HTTP requests.
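For illustration, a minimal nginx server block handing PHP requests to the php-fpm container could look something like the sketch below (assumptions: the php host name and /source paths are taken from the compose file above, the listen port matches the web service's exposed 8080, and a Laravel app serves from its public directory):

server {
    listen 8080;
    root /source/public;
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass php:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}

With such a configuration mounted as ./nginx/default.conf, you would start everything with docker compose up rather than docker run on the single image built above.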

Related

How to create a Symfony project (only) in Docker container

I wish to install Symfony and use it for a project, but without installing it on my system. So I figured it could be done using Docker, yet my efforts to make it work haven't paid off so far.
I created a Dockerfile where I tried installing everything I could possibly need and then running Symfony, while setting up a simple docker-compose.yml. When I try to up it, the container just exits, and according to its log Symfony could not be found, even though the image seems to build OK.
So what would be the correct way to accomplish this?
Dockerfile:
FROM php:8.1-apache
RUN a2enmod rewrite
RUN apt-get update \
&& apt-get install -y libzip-dev git wget --no-install-recommends \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
RUN docker-php-ext-install pdo mysqli pdo_mysql zip;
RUN wget https://getcomposer.org/download/2.2.0/composer.phar \
&& mv composer.phar /usr/bin/composer && chmod +x /usr/bin/composer
RUN composer create-project symfony/skeleton:"6.1.*" app
RUN cd /app
CMD ["symfony", "server:start"]
docker-compose.yml:
version: '3.7'
services:
  db:
    image: mysql:5.7
    environment:
      - MYSQL_ROOT_PASSWORD=somerootpass
      - MYSQL_PASSWORD=somepass
      - MYSQL_DATABASE=dockerizeme_db
      - MYSQL_USER=someuser
  web:
    build: .
    ports:
      - 8080:8000
    volumes:
      - './app:/app'
    depends_on:
      - db
You have a couple of problems here.
First, you did not install the Symfony CLI in your container; see https://symfony.com/download.
Without it you cannot use the symfony command. The composer create-project command does not install the CLI; it only creates the framework skeleton.
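For reference, one way to get the CLI into the image is Symfony's official installer script (a sketch; the directory the installer writes to varies between CLI versions, hence the glob):

RUN curl -sS https://get.symfony.com/cli/installer | bash \
    && mv /root/.symfony*/bin/symfony /usr/local/bin/symfony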
Next, you are mounting the local folder ./app over /app in the container, so the result of create-project in your Dockerfile is hidden by the mount at run time.
If you want to create the project in the local folder that is mounted inside the container, you would have to do it in the ENTRYPOINT.
But since it is something you will most likely want to do only once, and if you really do not want anything installed on your local computer, you could take the following approach:
1. Temporarily change your command, for example in your docker-compose.yml, to ["sleep", "infinity"] and re-up your containers (see the sketch after this list)
2. Run docker compose exec web composer create-project symfony/skeleton:"6.1.*" app
3. Change back your command and re-up your containers one last time
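Step 1 is just a temporary override of the image's CMD; in the compose file above it would look something like this sketch (other keys of the web service unchanged):

  web:
    build: .
    command: ["sleep", "infinity"]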
Bind mounts are mounted at run time so they are not yet mounted during your build.
Also, I see that you run Symfony's dev server but are using an Apache PHP image. I would normally do one or the other, but not both.
Plus, you do a RUN cd /app, but the correct way to do that in this context is WORKDIR /app.
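In the Dockerfile that substitution is simply (a sketch):

# instead of: RUN cd /app
WORKDIR /app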

Docker: install and use Composer with Laravel and dependencies

I'm starting my journey with Docker.
I made a docker-compose.yml that starts the following services:
nginx
PHP 8.1
I set up the site to serve PHP files and everything is OK. But now I don't know what's next. I want to install Laravel with Composer and npm. How do I run it all together in such a way that I can use "composer install" and "composer update" in every project?
This is my docker-compose.yml:
services:
  nginx:
    container_name: nginx_tst
    image: nginx:latest
    networks:
      - php
    ports:
      - "8080:80"
    volumes:
      - "./nginx-conf:/etc/nginx/conf.d"
      - "./:/usr/share/nginx/html"
  php:
    container_name: php_tst
    image: php:8.1-fpm
    networks:
      - php
    volumes:
      - "./:/usr/share/nginx/html"
    working_dir: /
networks:
  php:
Edit:
I switched to Laravel Sail; it sets everything up by itself.
Add a command attribute to the php service. Using that you can execute composer install, composer update, and other commands.
Follow this link to learn how to execute multiple commands:
Docker Compose - How to execute multiple commands?
You can use something like this.
command: curl -s https://laravel.build/example-app | bash
You can go inside the container
docker exec -it php_tst bash
and then install & run composer
php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"
php composer-setup.php
php -r "unlink('composer-setup.php');"
php composer.phar install
install & run nvm
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash
nvm install node
node -v
npm i
npm run dev
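If you would rather not reinstall Composer by hand every time the container is recreated, another option is to bake it into a small custom image and point the php service at it with build: instead of image:. A sketch (the working directory matches the volume path from the compose file above):

FROM php:8.1-fpm
# take the Composer binary from the official composer image
COPY --from=composer:2 /usr/bin/composer /usr/bin/composer
WORKDIR /usr/share/nginx/html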

Composer while 'docker-compose' with bind mount

I am trying to build a docker-compose project for a PHP application alongside a database, which is automatically deployed on a developer machine. That part is not a problem; where it gets stuck is the composer install.
I want a directory from the host computer bind-mounted into the container, so developers can change code and see it immediately in the running container. That means:
Deploy a container with PHP and a webserver
Deploy the database
Bind the local source directory into the PHP container
Execute a composer-install in that directory
The directory tree is like
/
-php
-- src
-- Dockerfile
-postgres
-- Dockerfile
docker-compose.yml
I have attached snippets from my docker-compose.yml and the PHP Dockerfile. Does anyone have an idea why it fails, see a problem in the ordering or something else, or can explain what I need to watch out for? That would be great!
Dockerfile:
FROM xy
RUN curl -sS https://getcomposer.org/installer | php \
&& mv composer.phar /usr/local/bin/ \
&& ln -s /usr/local/bin/composer.phar /usr/local/bin/composer
COPY ./src /var/www/html
WORKDIR /var/www/html/
RUN composer install --prefer-source --no-interaction
Docker-compose file:
version: '3.4'
services:
  php:
    build: ./php
    container_name: "php"
    volumes:
      - ./php/src:/var/www/html
  postgres:
    image: postgres:10.4-alpine
    container_name: "postgres"

Building a Docker container with CentOS 7, Apache and PHP

I'll preface this by saying I am very new to the Docker world, and despite reading the documentation I am still a little confused about a few things.
I want to build a container with CentOS 7, Apache, and PHP. I don't want to use an already existing image; I want to build a custom container. I have the following folder structure.
My rw/docker/webserver/Dockerfile:
FROM centos:7
ENV container docker
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == \
systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
VOLUME [ "/sys/fs/cgroup" ]
CMD ["/usr/sbin/init"]
RUN yum -y install httpd
RUN systemctl start httpd
RUN systemctl enable httpd
RUN yum update -y && yum install -y libpng-dev curl libcurl4-openssl-dev
RUN docker-php-ext-install pdo pdo_mysql gd curl
RUN a2enmod rewrite
My docker-compose.yml:
version: '2'
services:
  webserver:
    build: ./docker/webserver
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /**PATH**/rw/services:/var/www/html
    links:
      - db
  db:
    image: mysql:5.7
    ports:
      - "3306:3306"
    volumes:
      - ./db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=****
      - MYSQL_DATABASE=****
This fails when docker tries to start httpd with the error
ERROR: Service 'webserver' failed to build: The command '/bin/sh -c systemctl start httpd' returned a non-zero code: 1
Q1. Why is the install failing?
Q2. Is this the proper way to do this? Should my Dockerfiles for CentOS and Apache+PHP be separate? If yes, how does that work?
Q1. I think systemctl may not be provided with the CentOS Docker image.
Indeed, Docker services are not meant to be run as daemons, but in the foreground. Take a look at Apache's original httpd-foreground shell script for a better understanding of the concept.
Q2. No, that's not the right way IMHO.
Running Apache is the job of an entrypoint or command script.
So instead of RUN your-command-to-run-apache, it would rather be CMD your-command-to-run-apache.
Once again, the official Apache repository can give you some clues about this.
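As an illustration, a systemd-free webserver image for this setup could look something like the following sketch. It assumes PHP from the CentOS yum repositories, since docker-php-ext-install only exists in the official php images and a2enmod is a Debian/Ubuntu tool, and it relies on the compose volume to provide the site content:

FROM centos:7
# Apache and PHP from the distribution repositories
RUN yum -y install httpd php php-mysqlnd php-gd && yum clean all
EXPOSE 80 443
# run Apache in the foreground instead of through systemd
CMD ["/usr/sbin/httpd", "-DFOREGROUND"]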
To my eyes these kinds of Dockerfiles look dated, as they try to map parts of the host's systemd setup into the container. That is a workaround, because a systemd daemon cannot easily be run on its own inside a container.
Instead I am using the docker-systemctl-replacement script. Its systemctl.py can parse the normal *.service files to know how to start and stop services. You can register it as the CMD of an image, in which case it will look for all the systemctl-enabled services; those will be started and stopped in the correct order.
There are even some test cases for the LAMP stack available, so it should work quite smoothly in your case. The systemctl.py script is compatible enough with systemd's systemctl that you can simply overwrite /usr/bin/systemctl inside the image, and the non-docker installation instructions will then work for docker builds.
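A rough sketch of that pattern in a Dockerfile (the location of systemctl.py in the build context is an assumption; check the project's README for the current path and usage):

# overwrite systemctl with the replacement script
COPY systemctl.py /usr/bin/systemctl
RUN chmod +x /usr/bin/systemctl \
 && yum -y install httpd \
 && systemctl enable httpd
# starts all enabled services in the foreground
CMD ["/usr/bin/systemctl"]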

How to use php & nodejs from separate containers

Let's say I have the following situation:
docker-compose.yml
version: '3'
services:
  web:
    build: .
    links:
      - apache
      - php
      - node
    depends_on:
      - apache
      - php
      - node
    volumes:
      - .:/var/www/html
  apache:
    image: httpd
  php:
    image: php
  node:
    image: node
and I also have a Dockerfile
FROM phusion/baseimage
RUN apt-get update
RUN apt-get install -yq git curl zip wget curl
RUN curl -s https://getcomposer.org/installer | php # error, no PHP exec found
RUN mv composer.phar /usr/local/bin/composer
RUN npm install -g bower gulp
COPY ./ /var/www/html
WORKDIR /var/www/html
RUN apt-get clean
RUN rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
CMD ["/sbin/my_init"]
Now my question is: how could I use PHP & Node, which are installed in separate containers, in this main app Dockerfile? Is that even possible, or do I need to manually install PHP & Node inside my Dockerfile?
Docker doesn't work the way you are thinking. This isn't like code inheritance or configuring a single machine with multiple runtime languages. You are configuring multiple virtual machines, each with its own runtime environment.
If you want to run a PHP app, you put that app in the container that has PHP. The same goes for Node.
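In practice that also means build tooling such as Composer and npm is usually run from the official images as throwaway containers against the mounted source, rather than being wired into the app image. A sketch (the image tags are arbitrary choices):

docker run --rm -v "$PWD":/app -w /app composer:2 install
docker run --rm -v "$PWD":/app -w /app node:18 npm install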
