I am trying to wrap my head around an optimal structure for Dockerizing a web app. One of the best-practice recommendations for Docker is to run one process per container. So where do I put my app's source code?
Assume I am making a simple nginx and PHP app. The one-process-per-container rule suggests having an nginx container that serves static assets and proxies PHP requests to a php-fpm container.
Now where do I put the source code? Do I keep it in a separate container and use volumes_from in Docker Compose to let the two containers access the code? Or do I build each container with the source code inside (I suppose that makes versioning easier)?
What are the best practices around this?
Do I keep it in a separate container and use volumes_from in Docker compose to let the two containers access the code?
That is the usual best practice, since it avoids duplicating or synchronizing code between components.
See "Creating and mounting a data volume container".
This is not just for pure data, but also for other shared resources like libraries, as shown in the article "How to Create a Persistent Ruby Gems Container with Docker".
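As an illustration only, here is a minimal docker-compose sketch of that approach (Compose v1 syntax; the image names, the ./src path and the port mapping are assumptions, not details from the question):

    # docker-compose.yml - sketch of a code container shared via volumes_from
    code:
        image: busybox
        volumes:
            - ./src:/var/www/html     # the application source lives here
        command: "true"               # data-only container, exits immediately

    php:
        image: php:7.0-fpm
        volumes_from:
            - code                    # php-fpm sees /var/www/html

    web:
        image: nginx
        volumes_from:
            - code                    # nginx serves the same /var/www/html
        links:
            - php                     # nginx vhost proxies *.php to php:9000
        ports:
            - "8080:80"

Alternatively, baking the code into each image (a COPY in both Dockerfiles) trades the shared volume for the easier versioning the question mentions.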
We have a scenario as follows:
Multiple items of work are ready to test. We would like to place each 'branch' that is ready for testing into its own container, and then allow each team responsible to test their work in a silo before we merge the work back in.
I think my first question is, is this possible? And if so, can anybody point me in the right direction?
The basic stack of the system is LAMP.
Any help would be much appreciated
It sure is possible.
I suppose that your different 'items' are features of your app, so you will not have to modify your Dockerfile, as all your features should run in the same environment.
Depending on your infrastructure, and whether you are familiar with or already have CI/CD tools, you could automate the deployment per branch and so have every feature version of the app running in its own container.
If you are not, you will have to run the applications locally.
So create a single Dockerfile that you put in all your branches, then ask the devs to build and run the image on their branch and verify that everything is fine (you could automate some tests to avoid doing this manually) before submitting a pull/merge request.
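As a rough sketch only (the base image, the extensions and the document root are assumptions for a typical LAMP app, not details from the question), that shared Dockerfile could look like this:

    # Dockerfile - minimal LAMP-style branch build (assumed layout)
    FROM php:7.0-apache
    # install the MySQL extensions a typical LAMP app needs
    RUN docker-php-ext-install mysqli pdo_mysql
    # copy this branch's source into Apache's document root
    COPY . /var/www/html/

    # Each dev would then build and run their own branch, for example:
    #   docker build -t myapp:feature-x .
    #   docker run -d -p 8080:80 myapp:feature-x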
I have a few dozen PHP apps that I want to dockerize, and I am wondering what the best design is, management- and performance-wise.
1. one big container with all services included (php-fpm, mysql, nginx, etc.)
2. separate containers for all services:
   - container-php-fpm-app1
   - container-nginx-app1
   - container-mysql-app1
   - container-php-fpm-app2
   - container-nginx-app2
   - container-mysql-app2
3. one container per service, where each service hosts all apps:
   - container-php-fpm - for all php-fpm pools
   - container-nginx - for all nginx virtual hosts
   - container-mysql - for all databases
I understand running separate containers lets you make changes to one service without impacting another. You can run different PHP configurations, extensions and versions without worrying about the other services being affected. All of my apps are WordPress-based, so configuration will (or should) be consistent across the board.
For now I am leaning toward separation, however I am not sure if this is the best approach.
What do you guys think?
You should run one service per container; that is how Docker is designed to be used. So option 1 is out the door.
If you look at option 3, you have tight coupling between your apps. If you want to migrate app1 to a new PHP version, or it has a different dependency, you're in trouble, so that's not a good one either.
The standard is to do option 2: a container per service.
Per the Docker documentation on running multiple services in a container:
It is generally recommended that you separate areas of concern by using one service per container. That service may fork into multiple processes (for example, Apache web server starts multiple worker processes). It’s ok to have multiple processes, but to get the most benefit out of Docker, avoid one container being responsible for multiple aspects of your overall application. You can connect multiple containers using user-defined networks and shared volumes.
Also based on their best practices:
Each container should have only one concern.
Decoupling applications into multiple containers makes it much easier to scale horizontally and reuse containers.
I would suggest using option 2 (separate containers for all services).
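To make option 2 concrete, here is a minimal sketch of what one app's stack could look like in Docker Compose (image tags, ports, paths and the root password are placeholders, and the nginx virtual-host config that proxies to php-fpm is omitted):

    # docker-compose.yml for app1 - illustrative sketch only
    version: "2"
    services:
        nginx-app1:
            image: nginx
            ports:
                - "8081:80"
            volumes:
                - ./app1:/var/www/html        # same source as php-fpm
        php-fpm-app1:
            image: php:7.0-fpm
            volumes:
                - ./app1:/var/www/html
        mysql-app1:
            image: mysql:5.7
            environment:
                MYSQL_ROOT_PASSWORD: example  # placeholder credential
            volumes:
                - app1-db:/var/lib/mysql      # persist the database
    volumes:
        app1-db:

app2 would get an equivalent file of its own, so each app's PHP version and configuration can change independently.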
The most common pattern that I have seen is a separate container per application. That being said, there is also value in having related containers near one another but still distinct, hence the concept of Pods used in Kubernetes.
I would recommend one container per application.
I am trying to set up a set of Docker containers to serve a couple of applications.
One of my goals is to isolate the PHP applications from each other.
I am new to Docker and do not fully understand its concepts.
So the only idea I came up with is to create a dedicated php-fpm container per application.
I started with the official image php:7.0-fpm, but now I think that I may need to create my own general-purpose php-fpm image (based on the one mentioned above), add some programs to it (such as ImageMagick) and instantiate such php-fpm+stuff containers, one per PHP application, each with a volume pointing strictly to that application's source code.
Am I thinking in the right direction?
now I think that I may need to create my own general-purpose php-fpm image (based on the one mentioned above), add some programs to it
That is the idea: you can make a Dockerfile starting with FROM php:7.0-fpm, with your common programs installed in it.
Then you can make multiple other Dockerfiles (each in its own folder), starting with FROM <yourFirstImage>, and declaring the specifics of each PHP application.
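A hedged sketch of that layout (the my-php-base tag, the ImageMagick packages and the copy path are assumptions):

    # base/Dockerfile - common image shared by all applications
    FROM php:7.0-fpm
    # example of a common program: ImageMagick plus the imagick extension
    RUN apt-get update && apt-get install -y imagemagick libmagickwand-dev \
        && pecl install imagick \
        && docker-php-ext-enable imagick
    # built once with:  docker build -t my-php-base ./base

    # app1/Dockerfile - one per application, isolated from the others
    FROM my-php-base
    COPY . /var/www/html/
    # or skip the COPY and mount that app's source as a volume at run time,
    # as the question suggests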
I have seen Jenkins being used as CI for Docker containers. Is Dokku also a CI platform like Jenkins?
If so, what is the difference when I want to do CI with Docker containers for a PHP application?
Are you maybe confusing Drone with Dokku? Dokku is a platform for running Heroku-style apps; Drone is a Docker-based CI. I don't know much about Drone, but since Docker can't be run inside a Docker container without some hacking, you are better off sticking with a traditional CI like Jenkins, Bamboo, TeamCity or such.
Continuing from Usman Ismail's answer...
If you look at dokku-alt, the distinction is less clear. In particular, dokku-alt allows you to use a Dockerfile for the build rather than buildstep, so it's not specific to Heroku-like apps.
Dokku-alt is not in itself a CI system, but out of the box it does verify that the build completes without error before it's deployed, and using git hooks you could connect in your test-suite to run on every git push and block deployment when it fails.
CI typically is a bit more than this. You'd normally have multiple deployments for test, staging and live, and to some extent it also encompasses a set of practices. Dokku-alt gives you some very useful parts of CI, and a fairly clear path to building more of it fairly easily, but it's not a complete CI system in itself.
You might well prefer to keep your main git repository elsewhere, and keep Jenkins in the picture for automating the transfer to dokku-alt.
I'm creating web services for my company using Symfony2. Our company uses a centralized configuration service (Zookeeper/etcd) to manage configuration information for our services, such as connection/host information for MySQL, Redis, Memcached, etc. The configuration is subject to change randomly throughout the day, for instance when MySQL servers are added to or removed from our database cluster. So hard-coding the configuration in yml/xml is not possible.
So I'm looking for a way to modify the config.yml values when the application boots. Some of the values in the config will be static, for instance the Twig and Swiftmailer configurations, but other values for Doctrine and Redis need to be set on the fly.
The configuration values cannot be determined until the Symfony application boots, and the values cannot be cached or compiled. I've tried a few things to hook into the boot process and modify the configuration, but nothing works.
Additional Information
An example of the architecture I'm dealing with is described here: http://aredko.blogspot.com/2013/10/coordination-and-service-discovery-with.html. Along with services like MySQL and Redis, we also need to discover our own RESTful services. Zookeeper is being used as the service discovery agent. The location (host name) and exact configuration of the services aren't known until runtime/boot.
I'd suggest you take a look at OpenSkyRuntimeConfigBundle.
The bundle allows you to replace traditional container parameters (the ones you usually write to parameters.yml) with some logic. This gives you a way to query Zookeeper for the latest configuration values and inject them into Symfony2 services without having to rebuild the container.
As you can write the logic any way you like, you can also implement a local cache for the parameters. ZK reads are fast but always require a network round-trip, so if performance is important to you, utilize a cache here too.
I wouldn't even consider running Symfony 2 without a cache if you care about performance.
It sounds like you've not quite identified the best way to compose your infrastructure whilst scaling things up/down. It's hard to be specific without knowing more about the bigger picture, but how about pointing the Symfony2 DB config at a proxy server and managing the routing at the network level? The app then stays blissfully ignorant of the churn of DB servers...
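For example, a sketch of what that could look like in app/config/config.yml (the db-proxy.internal host name is a hypothetical HAProxy/ProxySQL-style endpoint standing in front of the changing MySQL cluster):

    # app/config/config.yml - Doctrine points at a stable proxy, not a DB node
    doctrine:
        dbal:
            driver:   pdo_mysql
            host:     db-proxy.internal   # hypothetical proxy address
            port:     3306
            dbname:   "%database_name%"
            user:     "%database_user%"
            password: "%database_password%"

Adding or removing MySQL nodes then only changes the proxy's backend list, and Symfony's compiled configuration never needs to change.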