After years of spaghetti code (I'm Italian, I really do know what spaghetti is), I'm trying to set up a decent PHP development environment.
This is my battle plan:
install git and docker on my laptop;
create a docker virtual environment as similar as possible to the production LAMP (shared) server;
use sshfs to mount the docker VE web server root directory on my laptop;
on the laptop, init a git repository inside the mount point;
use my favourite IDE (Aptana Studio) to create a PHP project in the mount point directory;
test the code by pointing a browser at the docker VE's IP;
set up a Bitbucket account to automatically deploy git commits to the production server.
What do you think about it? Any chance it could work?
Thanks!
I suggest using the official PHP Docker image:
https://registry.hub.docker.com/_/php/
This enables you to create an image that packages your PHP code, rather than having to map volumes at run time.
Example 1 (with Dockerfile)
├── build_and_run.sh
├── Dockerfile
└── src
└── index.php
Dockerfile
FROM php:5.6-apache
COPY src/ /var/www/html/
build_and_run.sh
Script that builds a new container image and launches it:
docker build -t my-php-app .
docker run -it --rm --name my-running-app -p 8080:80 my-php-app
Apache listens on port 80 inside the container; the -p 8080:80 flag publishes it as port 8080 on the host.
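Once the container is up, a quick smoke test from the host could look like this (assuming Docker runs on your local machine; with a remote Docker host, use its IP instead of localhost):
curl http://localhost:8080/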
Example 2 (without Dockerfile)
The PHP image can also be run without a Dockerfile. Just provide a volume mapping to the local source code:
docker run -it --rm --name my-apache-php-app -v "$(pwd)/src":/var/www/html -p 8080:80 php:5.6-apache
Related
I made the following docker-compose.yml, which tries to run an official PHP + Apache image from Docker Hub:
services:
apache:
image: 'php:5.6-apache'
container_name: apache
restart: always
ports:
- '80:80'
- '443:443'
volumes:
- /mnt/data/apps/html:/var/www/html
- /mnt/data/apps/ssl:/etc/ssl
- /mnt/data/apps/apache:/etc/apache2
But when I run it with docker compose up, the container does come up, but the files that were supposed to be created on container launch are not being created... (This also happens when using a docker run script.)
If I remove the volumes, run it again and access the container with "docker exec -it apache bash", I see that the files are generated accordingly... It only happens when binding volumes. Weren't the files supposed to be created automatically in the local volumes?
Please, what am I doing wrong? Is there something missing in the script? Am I being dumb?
Sorry if this is a really obvious question; I have nowhere else to go and am just starting out with Docker.
Thank you
The solution was to run the container without any volumes and use docker cp to copy the config files to my machine, then run it again with the volumes pointing to the copied config files...
Nginx Example
docker pull nginx:1.23.1-alpine
docker run --name tmp-nginx-container -d nginx:1.23.1-alpine
docker cp tmp-nginx-container:/etc/nginx/ D:\Docker\Config
docker rm -f tmp-nginx-container
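The same pattern applied to the php:5.6-apache image from the question above might look like this (a sketch; the host paths are taken from the compose file and may differ on your machine):
mkdir -p /mnt/data/apps/apache
docker run --name tmp-apache-container -d php:5.6-apache
docker cp tmp-apache-container:/etc/apache2/. /mnt/data/apps/apache
docker rm -f tmp-apache-container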
I have created a simple Dockerfile to install Apache with PHP and then install packages from composer.json.
FROM php:7-apache
WORKDIR /var/www/html
COPY ./src/html/ .
COPY composer.json .
RUN apt-get update
RUN apt-get install -y unzip
# Install Composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN composer update
When I run docker build -t my-web-server . followed by docker run -p 8080:80 my-web-server, everything works fine and the packages install.
But when I use a docker-compose file:
version: "3.9"
services:
ecp:
build: .
ports:
- "8080:80"
volumes:
- ./src:/var/www
and perform docker-compose build followed by docker-compose up, the packages do not install and only index.php is carried across to the container.
My current file structure:
├── src
│   └── html
│       └── index.php
├── composer.json
├── docker-compose.yaml
└── Dockerfile
When docker-compose builds the image, all the console output is identical to that of docker build.
Your two approaches are not identical. You are using volumes in your docker-compose file but not in your docker call; your problem lies there.
More specifically, notice that in your docker-compose file you are mounting your host's ./src onto your container's /var/www, which does not give you the correct structure, since you "shadow" the container folder that contains your composer.json (which was copied into the container at build time).
To avoid such confusion, I suggest that if you want to mount a volume with your compose file (which is a good idea for development), then your docker-compose.yml should mount to the exact same paths as the COPY commands in your Dockerfile. For example:
volumes:
- ./src/html:/var/www/html
- ./composer.json:/var/www/html/composer.json
Alternatively, remove the volumes directive from your docker-compose.yml.
Note that having a file (in your case composer.json) copied into a folder in the container, while the same folder is also mounted from the host, can be a cause of additional problems and confusion. It is best to have the structure in the container mimic the one on the host as closely as possible.
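Putting it together, a corrected docker-compose.yml could look like this (a sketch based on the file layout shown in the question):
version: "3.9"
services:
  ecp:
    build: .
    ports:
      - "8080:80"
    volumes:
      - ./src/html:/var/www/html
      - ./composer.json:/var/www/html/composer.json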
I have created a command line application using Symfony 3.4 which doesn't need to display any web page.
I generally run the commands like the following:
php bin/console MY_COMMAND_NAME
I want to dockerize the application and share it with others, so inside the root directory of my project I created a docker-compose.yml file, which looks like following:
version: "3.3"
services:
web:
image: php:7.3-cli
Then I ran docker-compose up; after that I checked the PHP version with the following command and it showed me the correct version:
docker run php:7.3-cli php -v
However, when I ran docker ps, it didn't show any container running.
My question is how to run the commands inside my project root directory. FYI, I am using Docker Toolbox on Windows 10 Home Edition, and my project location is:
C:\Users\{my_user_name}\Desktop\folder_1\folder_2
The Docker container needs to have a long-running process defined in CMD to stay running; php-cli is not that. If you run docker-compose up, you'll see something like this:
$ docker-compose up
Creating network "tempphpdocker_default" with the default driver
Pulling web (php:7.3-cli)...
7.3-cli: Pulling from library/php
b8f262c62ec6: Pull complete
a98660e7def6: Pull complete
4d75689ceb37: Pull complete
639eb0368afa: Pull complete
2cdbfdb779b1: Pull complete
e0b637fa9606: Pull complete
da7333b0ef25: Pull complete
01d65ff46009: Pull complete
673e50bed3b9: Pull complete
bf6c6e34305d: Pull complete
Digest: sha256:1453f5ef0d4d1d424ed8114dd90a775bdec06cc6fb3bbae9521dcb4ca0c8ca90
Status: Downloaded newer image for php:7.3-cli
Creating tempphpdocker_web_1 ...
Creating tempphpdocker_web_1 ... done
Attaching to tempphpdocker_web_1
web_1 | Interactive shell
web_1 |
tempphpdocker_web_1 exited with code 0
The exit code is 0. This means your command in the docker image php:7.3-cli has successfully run and finished.
To properly dockerize your application, you should override this by writing your own Dockerfile with proper COPY calls that bundle your CLI program into it. Your Dockerfile should probably look something like this:
FROM php:7.3-cli
RUN mkdir -p /opt/workdir/bin
RUN mkdir -p /opt/workdir/vendor
COPY bin/ /opt/workdir/bin
COPY vendor/ /opt/workdir/vendor
WORKDIR /opt/workdir
CMD php ./bin/console COMMAND
You can simply build and run this Dockerfile (see the commands after the compose file below), or, if you prefer docker-compose, you can define a docker-compose.yml in the same folder as the Dockerfile:
version: "3.3"
services:
web:
image: php-custom
build: ./
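For reference, building and running the image directly might look like this (a sketch; the php-custom tag matches the compose file above):
docker build -t php-custom .
docker run --rm php-custom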
Please note that a dockerized application can only access files and folders inside the Docker image. You have to bind volumes from your local file system into the container before it can actually work on your filesystem, for example:
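A minimal sketch (the ./data host folder is a hypothetical example):
version: "3.3"
services:
  web:
    image: php-custom
    build: ./
    volumes:
      # ./data on the host becomes /opt/workdir/data in the container
      - ./data:/opt/workdir/data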
A quick and dirty fix to keep your container running: just override the container command in docker-compose.
version: "3.3"
services:
web:
image: php:7.3-cli
command: tail -f /dev/null
When you run docker-compose up, it will keep the container running but doing nothing; this just gives you a way to run commands inside the container:
docker exec -it php-cli_web_1 bash
My question is how to run the commands inside my project root
directory.
As mentioned by @David, you need to mount your host project into the container in docker-compose.
For instance, if your project is placed on the host at /home/myproject, mount the project within docker-compose and it will be available inside the container. Then you can update the command of your docker-compose service to run the script.
Keep in mind:
The life of the container is the life of the docker-compose command.
When execution completes, your container dies; so your container will run until the script php /app/your_script.php has completed.
version: "3.3"
services:
web:
image: php:7.3-cli
command: php:7.3-cli /app/your_script.php
volumes:
- /home/myporject:/app
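Alternatively, for one-off executions you can use docker-compose run, which overrides the service command (a sketch):
docker-compose run --rm web php /app/your_script.php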
I have a project on GitLab, and I would like to use GitLab CI for unit testing.
I also have another repository named "docker" with a docker-compose.yml and Dockerfiles for the two projects (because I reproduce the production configuration, and the two projects are interdependent).
Currently, in my dev configuration, the Projects directory contains:
docker
project_1
project_2
In the docker directory:
docker-compose.yml
Dockerfile-project1
Dockerfile-project2
[some config files required by the Dockerfiles]
docker-compose.yml has relative paths such as ../project_1 and ../project_2.
To set up my configuration, I run:
cd docker
docker-compose up -d project1 (the service name in docker-compose.yml)
docker exec -ti project1 bash
Question:
I want to know how I can pull the "docker" git repository and launch docker-compose up for project1 when GitLab CI starts.
Thanks
We've built a GitLab runner with docker-compose support. See its README for setup and configuration.
Basically, you just use the same commands as in development; see here for an example with Makefiles, or this one with native docker-compose commands.
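For illustration, a minimal .gitlab-ci.yml for such a runner might look like this (a sketch: the clone URL and the test command are assumptions, and the compose commands mirror the dev workflow from the question):
test_project1:
  script:
    # clone the "docker" repo next to the checked-out project so the
    # compose file's relative paths (../project_1) still resolve;
    # the URL is a placeholder
    - git clone https://gitlab.example.com/mygroup/docker.git ../docker
    - cd ../docker
    - docker-compose up -d project1
    - docker exec project1 php bin/phpunit  # hypothetical test command
    - docker-compose down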
So I recently discovered Docker and Vagrant, and I'm starting a new PHP project in which I want to use both:
Vagrant in order to have an interchangeable environment that all the developers can use.
Docker for production, but also inside the Vagrant machine, so the development environment resembles the production one as closely as possible.
The first approach is to have all the definition files together with the source code in the same repository with this layout:
/docker
/machine1-web_server
/Dockerfile
/machine2-db_server
/Dockerfile
/machineX
/Dockerfile
/src
/app
/public
/vendors
/vagrant
/Vagrantfile
So the Vagrant machine, on provision, runs all the Docker "machines" and sets up the databases and source code properly.
Is this a good approach? I'm still trying to figure out how this will work in terms of deployment to production.
Is this a good approach?
Yes, at least it works for me since a few months now.
The difference is that I also have a docker-compose.yml file.
In my Vagrantfile there is a first provisioning section that installs Docker, pip and docker-compose:
config.vm.provision "shell", inline: <<-SCRIPT
if ! type docker >/dev/null; then
echo -e "\n\n========= installing docker..."
curl -sL https://get.docker.io/ | sh
echo -e "\n\n========= installing docker bash completion..."
curl -sL https://raw.githubusercontent.com/dotcloud/docker/master/contrib/completion/bash/docker > /etc/bash_completion.d/docker
adduser vagrant docker
fi
if ! type pip >/dev/null; then
echo -e "\n\n========= installing pip..."
curl -sk https://bootstrap.pypa.io/get-pip.py | python
fi
if ! type docker-compose >/dev/null; then
echo -e "\n\n========= installing docker-compose..."
pip install -U docker-compose
echo -e "\n\n========= installing docker-compose command completion..."
curl -sL https://raw.githubusercontent.com/docker/compose/$(docker-compose --version | awk 'NR==1{print $NF}')/contrib/completion/bash/docker-compose > /etc/bash_completion.d/docker-compose
fi
SCRIPT
and finally a provisioning section that fires docker-compose:
config.vm.provision "shell", inline: <<-SCRIPT
cd /vagrant
docker-compose up -d
SCRIPT
There are other ways to build and start Docker containers from Vagrant, but using docker-compose allows me to keep all the Docker specifics out of my Vagrantfile. As a result, this Vagrantfile can be reused for other projects without changes; you would just have to provide a different docker-compose.yml file.
Another thing I do differently is to put the Vagrantfile at the root of the project (and not in a vagrant directory), as that is the place humans and tools (some IDEs) expect to find it. PyCharm does, and PhpStorm probably does too.
I also put my docker-compose.yml file at the root of my projects.
In the end, for development I just go to my project directory and fire up Vagrant, which tells docker-compose to (build if necessary, and then) run the Docker containers.
I'm still trying to figure out how this will work in terms of deployment to production.
For deploying to production, a common practice is to provide your Docker images to the ops team by publishing them on a private Docker registry. You can either host such a registry on your own infrastructure or use an online service that provides one, such as Docker Hub.
Also provide the ops team with a docker-compose.yml file that defines how to run and link the containers. Note that this file should not make use of the build: instruction; it should rely on the image: instruction instead. Who wants to build/compile stuff while deploying to production? For example:
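A sketch of such a production-oriented compose file (the registry host, image names and tags are assumptions):
web:
  # pre-built image pulled from the private registry; nothing is built on the production host
  image: registry.example.com/myproject/web:1.0.0
  ports:
    - "80:80"
  links:
    - db
db:
  image: registry.example.com/myproject/db:1.0.0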
This Docker blog article can help you figure out how to use docker-compose and docker-swarm to deploy on a cluster.
I recommend using Docker for development too, in order to get full replication of dependencies. Docker Compose is the key tool.
You can use a strategy like this:
docker-compose.yml
db:
image: my_database_image
ports: ...
machinex:
image: my_machine_x_image
web:
build: .
volumes:
- '/path/to/my/php/code:/var/www'
In your Dockerfile you can specify the dependencies to run your PHP code.
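For example, a minimal sketch of such a Dockerfile (the mysqli extension is only an assumption about what your code might need):
FROM php:5.6-apache
# install the PHP extensions your application depends on
RUN docker-php-ext-install mysqli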
Also, I recommend keeping the my_database_image and my_machine_x_image projects separate, each with its own Dockerfile, because they can perfectly well be used with other projects.
If you are using a Mac, you are already using a VM called boot2docker.
I hope this helps.