I created a container image in Docker, and I reference it by its id/name in my wercker.yml:
box: ujwaldhakal/laravel
build:
  steps:
    - install-packages:
        packages: git
    - script:
        name: install phpunit
        code: |-
          curl -L https://phar.phpunit.de/phpunit.phar -o /usr/local/bin/phpunit
          chmod +x /usr/local/bin/phpunit
    - script:
        name: install composer
        code: curl -sS https://getcomposer.org/installer | sudo php -- --install-dir=/usr/local/bin --filename=composer
    - script:
        name: install dependencies
        code: composer install --no-interaction
    - script:
        name: PHPUnit integration tests
        code: phpunit --configuration phpunit.xml
The box ujwaldhakal/laravel won't work; if I use plain php it works. There are no good docs for linking a custom container on Wercker.
Short Version
Have you tried adding a tag after the box id? That solved the issue for me under similar circumstances. Otherwise the image has not (yet) been built and/or pushed to Docker Hub.
Long Version
I had a similar problem. I wanted to use the dealerdirect/ci-php box.
So I changed my wercker.yml to use it:
box:
  id: dealerdirect/ci-php
  # ...
But then the build failed: the "setup environment" step reported the error "no such image".
After some experimenting it turned out I needed to add a "tag":
box:
  id: dealerdirect/ci-php:5.6
  # ...
After that the Docker image was pulled fine and the build continued working again.
Of course, this only works if the image actually exists on Docker Hub. If it does not, you will have to push it manually or set up automated building.
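A manual push usually boils down to the following (a sketch; youruser/yourimage:yourtag is a placeholder for your own image name and tag):

docker build -t youruser/yourimage:yourtag .
docker login
docker push youruser/yourimage:yourtag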
Related
I have created a simple Dockerfile to install Apache with PHP, and then install packages from composer.json.
FROM php:7-apache
WORKDIR /var/www/html
COPY ./src/html/ .
COPY composer.json .
RUN apt-get update
RUN apt-get install -y unzip
# Install Composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN composer update
When I run docker build -t my-web-server . followed by docker run -p 8080:80 my-web-server, everything works fine and the packages install.
But when I use a docker-compose file:
version: "3.9"
services:
ecp:
build: .
ports:
- "8080:80"
volumes:
- ./src:/var/www
and run docker-compose build followed by docker-compose up, the packages do not install and only index.php is carried across to the container.
My current file structure:
src
|-- html
|   |-- index.php
composer.json
docker-compose.yaml
Dockerfile
When docker-compose is building the image, all the console output is identical to that of docker build.
Your two approaches are not identical. You are using volumes in your docker-compose file but not in your docker run command; your problem lies there.
More specifically, notice that in your docker-compose file you are mounting your host's ./src to your container's /var/www, which does not give you the correct structure, since you "shadow" the container's folder that contains your composer.json (which was copied into the container at build time).
To avoid such confusion, I suggest that if you want to mount a volume with your compose (which is a good idea for development), then your docker-compose.yml file should mount the exact same volumes as the COPY commands in your Dockerfile. For example:
volumes:
  - ./src/html:/var/www/html
  - ./composer.json:/var/www/html/composer.json
Alternatively, remove the volumes directive from your docker-compose.yml.
Note that it can cause additional problems and confusion to have a file (in your case composer.json) copied into a folder in the container while also mounting that same folder into the container as is. It is best to have the structure in the container mimic the one on the host as closely as possible.
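Putting it together, a compose file following the first suggestion might look like this (a sketch reusing the service name and paths from the question):

version: "3.9"
services:
  ecp:
    build: .
    ports:
      - "8080:80"
    volumes:
      - ./src/html:/var/www/html
      - ./composer.json:/var/www/html/composer.json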
I'm new to GitHub Actions and I'm trying to set up continuous integration with functional tests.
I use Codeception, and my workflow runs, but when some tests fail the step is still marked as successful. GitHub doesn't stop the action and continues to run the next steps.
Here is my workflow yml file:
name: Run codeception tests
on:
  push:
    branches: [ feature/functional-tests/codeception ]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # —— Setup Github Actions 🐙 —————————————————————————————————————————————
      - name: Checkout
        uses: actions/checkout@v2
      # —— Setup PHP Version 7.3 🐘 —————————————————————————————————————————————
      - name: Setup PHP environment
        uses: shivammathur/setup-php@master
        with:
          php-version: '7.3'
      # —— Setup Docker Environment 🐋 —————————————————————————————————————————————
      - name: Build containers
        run: docker-compose build
      - name: Start all containers
        run: docker-compose up -d
      - name: Execute www container
        run: docker exec -t my_container developer
      - name: Create parameter file
        run: cp app/config/parameters.yml.dist app/config/parameters.yml
      # —— Composer 🧙️ —————————————————————————————————————————————————————————
      - name: Install dependencies
        run: composer install
      # —— Check Requirements 👌 —————————————————————————————————————————————
      - name: Check PHP version
        run: php --version
      # —— Setup Database 💾 —————————————————————————————————————————————
      - name: Create database
        run: docker exec -t mysql_container mysql -P 3306 --protocol=tcp -u root --password=**** -e "CREATE DATABASE functional_tests"
      - name: Copy database
        run: cat tests/_data/test.sql | docker exec -i mysql_container mysql -u root --password=**** functional_tests
      - name: Switch database
        run: docker exec -t php /var/www/bin/console app:dev:marketplace:switch functional_tests
      - name: Execute migrations
        run: docker exec -t php /var/www/bin/console --no-interaction doctrine:migrations:migrate
      - name: Populate database
        run: docker exec -t my_container php /var/www/bin/console fos:elastica:populate
      # —— Generate Assets 🔥 ———————————————————————————————————————————————————————————
      - name: Install assets
        run: |
          docker exec -t my_container php /var/www/bin/console assets:install
          docker exec -t my_container php /var/www/bin/console assetic:dump
      # —— Tests ✅ ———————————————————————————————————————————————————————————
      - name: Run functional tests
        run: docker exec -t my_container php codeception:functional
      - name: Run Unit Tests
        run: php vendor/phpunit/phpunit/phpunit -c app/
And here are the logs of the action step:
Github Action log
Does anyone know why the step doesn't fail, and how to make it throw an error?
Probably codeception:functional sets an exit code of 0 even though an error occurred. docker exec passes the exit code of the process through, and GitHub Actions only fails a step if a command returns an exit code != 0.
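You can verify locally what exit code GitHub Actions actually sees (a quick sketch, reusing the container and command names from the question):

docker exec -t my_container php codeception:functional
echo $?   # 0 means GitHub Actions will mark the step as successful

If the command inside the container is a wrapper script, make sure it propagates the failure, for example by ending with exit $?.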
I am trying to deploy a PHP project using a Bitbucket pipeline, with this code:
init: # -- First time init
  - step:
      name: build
      image: php:7.1.1
      caches:
        - composer
      script:
        - apt-get update && apt-get install -y unzip
        - curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
        - composer install
        - vendor/bin/phpunit
      artifacts: # defining vendor/ as an artifact
        - vendor/**
  - step:
      image: samueldebruyn/debian-git
      name: deployment
      script:
        - apt-get update
        - apt-get -qq install git-ftp
        - git ftp init -u "$FTP_DEV_USERNAME" -p "$FTP_DEV_PASSWORD" ftp://$FTP_DEV_HOST/$FTP_DEV_FOLDER
But it ignores the vendor folder. I had assumed that artifacts would add this folder to the deployment too.
What is wrong or what can I do better?
This happens because you probably have a .gitignore which includes the vendor directory. The artifacts are in fact passed to the next step by Bitbucket, but they are ignored by git-ftp. In order to upload these files with git-ftp you need to create a file called .git-ftp-include, in which you add the following line: !vendor/. The ! is required, as stated in the docs:
The .git-ftp-include file specifies intentionally untracked files that Git-ftp should upload. If you have a file that should always be uploaded, add a line beginning with ! followed by the file's name.
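So in this case the whole .git-ftp-include file can consist of a single line (assuming vendor/ is the only ignored path you need uploaded):

!vendor/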
I don't know how to cache dependencies in gitlab-ci when building a Docker image.
My project has 82 dependencies, and installing them every time is very slow (vendor is in .gitignore).
Full process:
change local file -> commit and push to the remote repo -> run gitlab-ci -> build docker image -> push image to other server -> publish image
My example project:
app -> my files (html, img, php, css, anything)
gitlab-ci.yml
composer.json
composer.lock
Makefile
Dockerfile
Dockerfile:
FROM hub.myserver.test/image:latest
ADD . /var/www
CMD cd /var/www
RUN composer install --no-interaction
RUN echo "#done" >> /etc/sysctl.conf
gitlab-ci:
build:
  script:
    - make build
  only:
    - master
Makefile:
all: build

build:
	docker build -t hub.myserver.test/new_image .
How can I cache the dependencies (composer.json)? I do not want to download the libraries from scratch on every build.
Usually it's not a good idea to run composer install inside your image build. I assume that what you eventually need to run is your PHP app, not Composer itself, so you can avoid having it in production.
One possible solution is to split the app image creation into two steps:
Install everything outside the image
Copy the ready-made files into the image
.gitlab-ci.yml:
stages:
  - compose
  - build

compose:
  stage: compose
  image: composer # or you can use your hub.myserver.test/image:latest
  script:
    - composer install # install packages
  artifacts:
    paths:
      - vendor/ # save them for the next job

build:
  stage: build
  script:
    - docker build -t hub.myserver.test/new_image .
    - docker push hub.myserver.test/new_image
Then in the Dockerfile you just copy the files (including the vendor directory from the first stage's artifacts) into the image's workdir:
# you can build from your own image
FROM php
COPY . /var/www
WORKDIR /var/www
# optional, if you want to replace CMD of base image
CMD [ "php", "./index.php" ]
Another good consequence is that you can test your code before building an image with it. Just add a test job between compose and build, for example:
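A minimal sketch of such a job (assuming PHPUnit is among the project's dev dependencies, and that test is added to the stages list between compose and build):

test:
  stage: test
  image: composer # the vendor/ artifacts from the compose job are available here
  script:
    - vendor/bin/phpunit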
Live example @ gitlab.com
So I recently discovered Docker and Vagrant, and I'm starting a new PHP project in which I want to use both:
Vagrant in order to have an interchangeable environment that all the developers can use.
Docker for production, but also inside the Vagrant machine so the development environment resembles the production one as closely as possible.
The first approach is to have all the definition files together with the source code in the same repository with this layout:
/docker
    /machine1-web_server
        /Dockerfile
    /machine2-db_server
        /Dockerfile
    /machineX
        /Dockerfile
/src
    /app
    /public
    /vendors
/vagrant
    /Vagrantfile
So the Vagrant machine, on provisioning, runs all the Docker "machines" and sets up the databases and source code properly.
Is this a good approach? I'm still trying to figure out how this will work in terms of deployment to production.
Is this a good approach?
Yes; at least it has been working for me for a few months now.
The difference is that I also have a docker-compose.yml file.
In my Vagrantfile there is a first provisioning section that installs docker, pip and docker-compose:
config.vm.provision "shell", inline: <<-SCRIPT
  if ! type docker >/dev/null; then
    echo -e "\n\n========= installing docker..."
    curl -sL https://get.docker.io/ | sh
    echo -e "\n\n========= installing docker bash completion..."
    curl -sL https://raw.githubusercontent.com/dotcloud/docker/master/contrib/completion/bash/docker > /etc/bash_completion.d/docker
    adduser vagrant docker
  fi
  if ! type pip >/dev/null; then
    echo -e "\n\n========= installing pip..."
    curl -sk https://bootstrap.pypa.io/get-pip.py | python
  fi
  if ! type docker-compose >/dev/null; then
    echo -e "\n\n========= installing docker-compose..."
    pip install -U docker-compose
    echo -e "\n\n========= installing docker-compose command completion..."
    curl -sL https://raw.githubusercontent.com/docker/compose/$(docker-compose --version | awk 'NR==1{print $NF}')/contrib/completion/bash/docker-compose > /etc/bash_completion.d/docker-compose
  fi
SCRIPT
and finally a provisioning section that fires docker-compose:
config.vm.provision "shell", inline: <<-SCRIPT
  cd /vagrant
  docker-compose up -d
SCRIPT
There are other ways to build and start Docker containers from Vagrant, but using docker-compose allows me to keep any Docker specifics out of my Vagrantfile. As a result this Vagrantfile can be reused for other projects without changes; you would just have to provide a different docker-compose.yml file.
Another thing I do differently is to put the Vagrantfile at the root of the project (and not in a vagrant directory), as that is the place humans and tools (some IDEs) expect to find it. PyCharm does; PhpStorm probably does too.
I also put my docker-compose.yml file at the root of my projects.
In the end, for development I just go to my project directory and fire up Vagrant, which tells docker-compose to (build if necessary, then) run the Docker containers.
I'm still trying to figure out how this will work in terms of deployment to production.
For deploying to production, a common practice is to provide your Docker images to the ops team by publishing them on a private Docker registry. You can either host such a registry on your own infrastructure or use online services that provide them, such as Docker Hub.
Also provide the ops team with a docker-compose.yml file that will define how to run the containers and link them. Note that this file should not make use of the build: instruction but rely instead on the image: instruction. Who wants to build/compile stuff while deploying to production?
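For instance, a production-oriented compose file might look like this (a sketch; the registry and image names are hypothetical):

web:
  image: registry.example.com/myapp/web:1.0
  ports:
    - "80:80"
  links:
    - db
db:
  image: registry.example.com/myapp/db:1.0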
This Docker blog article can help you figure out how to use docker-compose and docker-swarm to deploy on a cluster.
I recommend using Docker for development too, in order to get full replication of dependencies. Docker Compose is the key tool.
You can use a strategy like this:
docker-compose.yml
db:
  image: my_database_image
  ports: ...
machinex:
  image: my_machine_x_image
web:
  build: .
  volumes:
    - '/path/to/my/php/code:/var/www'
In your Dockerfile you can specify the dependencies to run your PHP code.
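For example, a minimal Dockerfile for the web service might look like this (a sketch; the base image and the extension are assumptions about what the app needs):

FROM php:7-apache
# install whatever PHP extensions the app requires, e.g. MySQL support
RUN docker-php-ext-install pdo_mysql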
Also, I recommend keeping my_database_image and my_machine_x_image as separate projects with their own Dockerfiles, because they can perfectly well be reused with other projects.
If you are using a Mac, you are already using a VM, called boot2docker.
I hope this helps.