GitLab CI/CD with Selenium in a multi-container Docker environment - PHP

We are developing a web application that runs in three containers (MySQL database, Nginx web server, PHP-FPM).
Docker Compose is used to manage the containers.
The current goal is to bring automated tests that make use of Selenium into the GitLab pipeline.
Within the GitLab CI configuration I would launch the three containers using Docker Compose and run our tests.
This, however, feels a bit complicated.
Is there a better, cleaner way to set up testing?
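For reference, a minimal sketch of what that pipeline stage could look like, using Docker-in-Docker so that Docker Compose can start the containers inside the GitLab job; the php service name and the PHPUnit command are assumptions, not part of the original setup:

test:
  image:
    name: docker/compose:1.29.2      # provides the docker-compose binary
    entrypoint: [""]                 # clear the image's docker-compose entrypoint so the job shell runs
  services:
    - docker:dind                    # Docker-in-Docker so Compose can start containers
  variables:
    DOCKER_HOST: tcp://docker:2375
    DOCKER_TLS_CERTDIR: ""           # make the dind daemon listen without TLS on 2375
  script:
    # the compose file is assumed to define MySQL, Nginx, PHP-FPM and a Selenium service
    - docker-compose up -d
    - docker-compose run --rm php vendor/bin/phpunit   # hypothetical Selenium/PHPUnit test run
  after_script:
    - docker-compose down -v         # tear the stack down even if the tests fail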

Related

Why does Docker and Heroku push such a huge layer for my small test application?

I'm learning to use Docker and Heroku for my web applications. I've created a very small test PHP application that simply outputs "Hello world". I followed the steps given in the Heroku dev center for deploying Docker-based applications (first with the given example repo, which was fine, and now with my own test app from scratch): https://devcenter.heroku.com/articles/container-registry-and-runtime
However, when I run the command
heroku container:push web
it pushes a massive layer of more than 200 MB. I don't think this is necessary for such a small application, and I assume it is a waste of bandwidth. Am I supposed to somehow link my application to an existing image on Heroku? 200 MB seems very unnecessary. Everything works fine locally.
This is my Dockerfile:
FROM php:7.0-apache
COPY src/ /var/www/html
EXPOSE 80
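In case it helps, heroku container:push web tags the build locally as registry.heroku.com/<app>/web, so the size of each layer can be inspected before pushing; a quick sketch, with <app> standing in for the app name:

docker history registry.heroku.com/<app>/web    # lists every layer with its size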

Where does the data live in a Multi-Container Dockerized application?

I'm trying to build a couple of Docker images for different PHP-based web apps, and for now I've been deploying them to Elastic Beanstalk using the Sample Application provided by AWS as a template. This application contains two Docker container definitions: one for PHP itself, and the other for Nginx (to act as a reverse-proxy).
However, it seems a little odd to me that the source code for my PHP application is effectively deployed outside of the Docker image. As you can see from the GitHub sample project linked above, there's a folder called php-app which contains all the PHP source files, but these aren't part of the container definitions. The two containers are just the stock images from Docker Hub. So in order to deploy this, it's not sufficient to merely upload the Dockerrun.aws.json file by itself; you need to ZIP this file together with the PHP source files in order for things to run. As I see it, it can (roughly) be represented by this visual tree:
*
|
|\
| - PHP Docker Container
|\
| - Linked Nginx Container
\
- Volume that Beanstalk auto-magically creates alongside these containers
Since the model here involves using two Docker images, plus a volume/file system independent of those Docker images, I'm not sure how that works. In my head I keep thinking that it would be better/easier to roll my PHP source files and PHP into one common Docker container, instead of doing whatever magic that Beanstalk is doing to tie everything together.
And I know that Elastic Beanstalk is really only acting as a facade for ECS in this case, where Task definitions and the like are being created. I have very limited knowledge of ECS but I'd like to keep my options open, in case I wanted to manually create an ECS task (using Fargate, for instance), instead of relying on Beanstalk to do it for me. And I'm worried that Beanstalk is doing something magical with that volume that would make things difficult to manually write a Task definition, if I wanted to go down that route.
What is the best practice for packaging PHP applications in a Docker environment, when the reverse proxy (be it Nginx or Apache or whatever) is in a separate container? Can anyone provide a better explanation (or correct any misunderstandings) of how this works? And how would I do the equivalent of what Beanstalk is doing here, in ECS, for a PHP application?
You have several options for building it.
The easiest is to have one ECS service per web app, with two containers each (one container for the PHP app and one container for Nginx).
The only exposed port in each service is the Nginx port 80, using an ECS dynamic port mapping.
Each exposed service should sit behind an ALB that routes traffic to the Nginx port.
In this case Nginx is not used as a load balancer, only as the front web server.
Edit:
Your Dockerfile for the PHP app should look like this:
...
# Add Project files.
COPY . /home/usr/src
...
And for development mode, your docker-compose.yml:
version: '3.0'
services:
  php:
    build: .
    depends_on:
      ...
    environment:
      ...
    ports:
      ...
    tty: true
    working_dir: /home/usr/src
    volumes:
      - .:/home/usr/src
Then, locally, use docker-compose and live-edit your files in the container.
In production mode, the files are copied into the container at build time.
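For the production build itself, a rough sketch of how that image, with the sources baked in by the COPY above, would be built and published so that ECS or Beanstalk can pull it; the registry and image names are assumptions:

docker build -t myregistry/php-app:latest .    # the COPY in the Dockerfile bakes the sources into the image
docker push myregistry/php-app:latest          # push to a registry the ECS task definition can reference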
Is it clearer?

Run PHPUnit in a Docker container and start containers it depends on

Background
A PHP application is running in a Docker container. With docker-compose, this container is run together with a Postgres database container and a lot of other containers.
To run my PHPUnit tests in PhpStorm, I have created a Docker remote interpreter test configuration which runs the PHP application container.
Problem
The container complains that it cannot connect to the database, which of course is not started, because it is configured in the docker-compose.yml and is not started along with the single container used by PhpStorm.
Attempts to solve
A PHP Remote Debug configuration can use a deployment, so I tried to create a Docker Deployment configuration which uses the docker-compose.yml (therefore starting all containers) and is launched before the PHPUnit run, but I cannot select this deployment.
I also tried starting all Docker Compose containers except the PHP app's and having the PHP app container connect to them. This proves difficult because they are on different networks, so the PHP app container still complains about not finding the database, and I cannot configure which network the container uses in PhpStorm.
tl;dr
My PhpStorm project is a PHP application. This application can run in a Docker container which is served through Nginx. I can run my PHPUnit tests in the container with a run configuration, but it also needs other containers, which are not started automatically.
Question
How can I use PhpStorm to run PHPUnit tests in a PHP application container together with the containers it depends on (already described in a docker-compose.yml)?
The full answer is too long, so I hope this five-minute video by JetBrains TV helps you:
https://www.youtube.com/watch?v=I7aGWO6K3Ho
In short, you need to:
Configure a Docker instance in PhpStorm
Configure a new PHP interpreter from the Docker container
Configure PHPUnit to use your new interpreter
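If the IDE configuration alone does not bring up the dependent containers, one workaround that stays within the docker-compose.yml already described is to start them through Compose, since docker-compose run also starts the services a container depends on. A sketch, assuming the PHP service is named php and PHPUnit is installed via Composer:

docker-compose up -d                              # start the database and the other dependent containers
docker-compose run --rm php vendor/bin/phpunit    # run the tests inside the PHP application container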

Docker and microservices

I am developing a system using microservices in order to learn new technology. One service runs on PHP (Laravel) + Postgres, another on Node.js (Express) + Mongo, and another on PHP (Symfony) + a separate Postgres server. I want to wrap all of these services in Docker. I looked at the solution https://github.com/LaraDock/laradock, but there is only one workspace container and one Postgres container. How do I configure Docker correctly?
If you look at the docker-compose.yml in the link you provided, you can see that they have split everything up into separate Docker containers.
If you want more than one of any of the containers listed, you can use docker-compose scale to create duplicates.
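As an illustration, a minimal docker-compose.yml sketch for the three services described in the question; the service names, build paths and image tags are assumptions:

version: '2'
services:
  laravel-app:
    build: ./laravel-app        # PHP (Laravel)
    depends_on:
      - laravel-db
  laravel-db:
    image: postgres:9.6
  express-app:
    build: ./express-app        # Node.js (Express)
    depends_on:
      - mongo
  mongo:
    image: mongo:3.4
  symfony-app:
    build: ./symfony-app        # PHP (Symfony)
    depends_on:
      - symfony-db
  symfony-db:
    image: postgres:9.6         # separate Postgres server for the Symfony service

Any of the stateless app services could then be duplicated with, for example, docker-compose scale express-app=3.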

Jenkins + Phing: Who should deploy and who should clone?

I have Jenkins set up to build and deploy my project using Phing.
Inside the Phing build.xml file, I create a set of targets for doing useful stuff, such as running tests, the static code analyzer, and so on.
Phing is also responsible for deploying the application, creating a structure similar to this:
var/www
    current -> /var/www/develop/releases/20100512131539
    releases
        20100512131539
        20100509150741
        20100509145325
    features
    hotfixes
The thing is, Jenkins does a git clone of the project, and Phing does one as well, inside the releases directory. With this, I have two clones in the same build-and-deploy process.
My question is: should cloning the repository be the responsibility of Phing or of Jenkins?
I leave to Phing the same sort of tasks you mentioned: static code analysis, running tests, linters, etc.
The reason for this is that, as a dev, I might want to run all these tests (or a subset of them) regularly during development, and I am able to trigger them in my local environment without needing Jenkins.
Deployment I leave in Jenkins's charge, so I would let Jenkins continue doing it.
But if you want to have everything in Phing, I think that would be OK too. I would split the tasks into a group of dev tests to be run from the console, and a Jenkins task to be run on deploy.
Definitely, only one of them should be doing the cloning.
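Concretely, that split could look like a Jenkins "Execute shell" build step that calls Phing targets on the checkout Jenkins has already made, so Phing never needs its own git clone; the target names are assumptions:

phing lint test      # quality targets from build.xml, also runnable locally by developers
phing deploy         # deploy target operating on the workspace Jenkins cloned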
