I have a few dozen PHP apps that I want to dockerize, and I am wondering what the best design would be, both for management and for performance:
1. one big container with all services included (php-fpm, MySQL, nginx, etc.)
2. separate containers for every service of every app:
container-php-fpm-app1
container-nginx-app1
container-mysql-app1
container-php-fpm-app2
container-nginx-app2
container-mysql-app2
3. one container per service, where each service hosts all apps:
container-php-fpm - for all php-fpm pools
container-nginx - for all nginx virtual hosts
container-mysql - for all databases
I understand that running separate containers lets you make changes to one service without impacting another: you can run different PHP configurations, extensions, and versions without worrying about the other services being affected. All of my apps are WordPress based, so configuration will (or should) be consistent across the board.
For now I am leaning toward separation, however I am not sure if this is the best approach.
What do you guys think?
You should run one service per container; that's how Docker is designed to be used. So option 1 is out the door.
If you look at option 3, you have tight coupling between your apps. If you want to migrate app1 to a new PHP version, or it has a different dependency, you're in trouble, so that's not a good option either.
The standard is option 2: a container per service.
Per the Docker documentation on multi-service containers:
It is generally recommended that you separate areas of concern by using one service per container. That service may fork into multiple processes (for example, Apache web server starts multiple worker processes). It’s ok to have multiple processes, but to get the most benefit out of Docker, avoid one container being responsible for multiple aspects of your overall application. You can connect multiple containers using user-defined networks and shared volumes.
Also, based on their best practices:
Each container should have only one concern.
Decoupling applications into multiple containers makes it much easier to scale horizontally and reuse containers.
I would suggest using option 2 (separate containers for all services).
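As a minimal sketch of what option 2 could look like for a single app (the service names, host port, image tags, and paths below are illustrative assumptions, not prescriptions):

```yaml
# docker-compose.yml -- one container per service for app1 (illustrative)
services:
  nginx-app1:
    image: nginx:1.25
    ports:
      - "8080:80"
    volumes:
      - ./app1:/var/www/html:ro
      - ./nginx/app1.conf:/etc/nginx/conf.d/default.conf:ro
    depends_on:
      - php-fpm-app1
  php-fpm-app1:
    image: php:8.2-fpm
    volumes:
      - ./app1:/var/www/html
  mysql-app1:
    image: mysql:8.0
    environment:
      MYSQL_DATABASE: app1
      MYSQL_ROOT_PASSWORD: changeme
    volumes:
      - db-app1:/var/lib/mysql
volumes:
  db-app1:
```

Each app gets its own copy of this file (or its own project directory), so upgrading PHP or MySQL for one app never touches the others.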
The most common pattern that I have seen is a separate container per application. That said, there is also value in having related containers near one another but still distinct, hence the concept of Pods used in Kubernetes.
I would recommend one container per application.
Related
I am trying to set up a set of Docker containers to serve a couple of applications.
One of my goals is to isolate the PHP applications from each other.
I am new to Docker and do not fully understand its concepts.
So the only idea I came up with is to create a dedicated php-fpm container per application.
I started with the official image php:7.0-fpm, but now I think that I may need to create my own general-purpose php-fpm container (based on the one mentioned above), add some programs to it (such as ImageMagick), and instantiate a couple of such php-fpm+stuff containers per PHP application, setting up a volume pointing strictly to that application's source code.
Am I thinking in right direction?
now I think that I may need to create my own general purpose php-fpm container (based on mentioned above), add some programs to it
That is the idea: you can make a Dockerfile starting with FROM php:7.0-fpm, with your common programs installed in it.
Then you can make multiple other Dockerfiles (each in its own folder), starting with FROM <yourFirstImage>, and declaring the specifics of each PHP application.
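For instance (the image tag and package choices here are illustrative assumptions), the shared base image could be:

```dockerfile
# base/Dockerfile -- common image with the programs all apps share
FROM php:7.0-fpm
RUN apt-get update \
    && apt-get install -y --no-install-recommends imagemagick \
    && rm -rf /var/lib/apt/lists/*
```

and each application then builds on it:

```dockerfile
# app1/Dockerfile -- app-specific image; "mybase/php-fpm" is whatever
# tag you gave the base image when you built it
FROM mybase/php-fpm
COPY pool.d/app1.conf /usr/local/etc/php-fpm.d/www.conf
```

Building the base once and deriving each app's image from it keeps the common layer shared on disk while still letting each app declare its own specifics.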
I am trying to wrap my head around an optimal structure for dockerizing a web app. One of the best-practice recommendations for Docker is to have one process per container. So where do I put the source code of my app?
Assume I am making a simple nginx and PHP app. The one-process-per-container rule suggests having an nginx container that serves static assets and proxies PHP requests to a php-fpm container.
Now where do I put the source code? Do I keep it in a separate container and use volumes_from in Docker Compose to let the two containers access the code? Or do I build each container with the source code inside (I suppose that makes versioning easier)?
What are the best practices around this?
Do I keep it in a separate container and use volumes_from in Docker compose to let the two containers access the code?
That is the usual best practice, which avoids duplicating/synchronizing code between components.
See "Creating and mounting a data volume container".
This is not just for pure data, but also for other shared resources like libraries, as shown in this article "How to Create a Persistent Ruby Gems Container with Docker":
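A sketch of that pattern in the Compose v1-era syntax the answer refers to (volumes_from was later superseded by named volumes; the service names here are illustrative):

```yaml
# docker-compose.yml -- a data-only "code" container shared by nginx and php-fpm
code:
  image: busybox
  volumes:
    - /var/www/html        # the app's source code lives in this volume
  command: "true"          # the container only needs to exist, not keep running

nginx:
  image: nginx
  ports:
    - "8080:80"
  volumes_from:
    - code

php:
  image: php:7.0-fpm
  volumes_from:
    - code
```

Both nginx and php-fpm then see the same /var/www/html, so there is a single copy of the code to update.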
I'm creating web services for my company using Symfony2. Our company uses a centralized configuration service (Zookeeper/etcd) to manage configuration information for our services, such as connection/host information for MySQL, Redis, Memcached, etc. The configuration is subject to change randomly throughout the day, for instance when MySQL servers are added to or removed from our database cluster. So hard-coding the configuration in yml/xml is not possible.
So I'm looking for a way to modify the config.yml values when the application boots. Some of the values in the config will be static, for instance the Twig and Swiftmailer configurations, but other values, for Doctrine and Redis, need to be set on the fly.
The configuration values cannot be determined until the Symfony application boots, and the values cannot be cached or compiled. I've tried a few things to hook into the boot process and modify the configuration, but nothing works.
Additional Information
An example of the architecture I'm dealing with is described here: http://aredko.blogspot.com/2013/10/coordination-and-service-discovery-with.html Along with services like MySQL and Redis, we also need to discover our own RESTful services. Zookeeper is being used as a service discovery agent. The location (host name) and exact configuration for the services aren't known until runtime/boot.
I'd suggest you take a look at OpenSkyRuntimeConfigBundle.
The bundle allows you to replace traditional container parameters (the ones you usually write to parameters.yml) with logic of your own. This gives you a way to query Zookeeper for the latest configuration values and inject them into Symfony2 services without rebuilding the container.
As you can write the logic in any way you like, you can also implement a local cache for the parameters. ZK reads are performant, but they always require a network round-trip; if performance is important to you, utilize a cache here too.
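A rough sketch of that caching idea in PHP, using the PECL zookeeper extension. The class name, znode path convention, and TTL are all illustrative assumptions, and this is not the bundle's actual API:

```php
<?php
// Hypothetical cached parameter lookup: reads a value from Zookeeper,
// keeping it in a short-lived local cache so that repeated parameter
// lookups do not each pay a network round-trip.
class CachedZookeeperParameters
{
    private $zk;          // \Zookeeper instance (PECL zookeeper extension)
    private $cache = [];  // name => [value, expiresAt]
    private $ttl;

    public function __construct(\Zookeeper $zk, $ttlSeconds = 30)
    {
        $this->zk = $zk;
        $this->ttl = $ttlSeconds;
    }

    public function get($name)
    {
        if (isset($this->cache[$name]) && $this->cache[$name][1] > time()) {
            return $this->cache[$name][0];      // fresh enough, no ZK read
        }
        $value = $this->zk->get('/config/' . $name); // e.g. /config/mysql_host
        $this->cache[$name] = [$value, time() + $this->ttl];
        return $value;
    }
}
```

The TTL is the trade-off knob: a longer TTL means fewer round-trips but a longer window during which a config change (say, a removed MySQL server) goes unnoticed.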
I wouldn't even consider running Symfony 2 without a cache if you care about performance.
It sounds like you've not quite identified the best way to compose your infrastructure while scaling things up/down. It's hard to be specific without knowing more about the bigger picture, but how about pointing the Symfony2 DB config at a proxy server and managing the routing at the network level? The app then stays blissfully ignorant of the churn of DB servers.
I am a PHP developer. I have read about Java EE technologies, and I want to implement such technologies (n-tier, EJB, JPA...) with PHP and everything that comes with it (MySQL, Apache...).
Don't.
PHP is not Java. Writing PHP code like you'd write Java code is silly and counterproductive. It's very likely to make future maintainers of the code want to hurt you.
Need to persist an object? Use an ORM.
Need a multi-tier architecture? If you design your code with proper separation of concerns, you've already gotten 9/10ths of the way there.
EJBs? Every time I read the Wikipedia article, they're described differently. Reusable components? With a standardized interface for what, distributed applications and data persistence? Useful, yes, but that's not PHP. ORMs and a good message/work queue will get the job done.
The bottom line: for the vast majority of PHP scripts, you will not need any "enterprise technologies." If you do, you're doing something wrong: either you've overarchitected the application, or you've chosen the wrong platform.
Start by picking a modern PHP framework, and build your application from there. If you're coming from Java, then Zend Framework will seem the least foreign. Kohana, Symfony and CodeIgniter are all worthwhile. Avoid Cake for now.
Keep it simple and you can't go wrong.
Your question is an insightful one: as your enterprise becomes more successful, it will have to scale up to support more traffic. So you will have to separate your PHP code into layers that run on separate tiers (either separate servers or separate virtual machines, e.g. with Xen).
For example, I designed a system last year implemented in PHP on 10 Linux openSUSE servers running about 25 Xen virtual machines (VMs). Some of the VMs were load balancers, some were front-end tiers, some were middle tiers, some were back-end tiers, some contained MySQL databases, and we had a couple of dedicated servers that were RAID arrays for user file storage. We created NFS mounts as necessary to save/read files to/from the RAID array.
We grouped the tiers into three related groups, so we could have independent test sites for QA, Staging (User Acceptance) and Production.
So our PHP software was separated into loosely-coupled layers as follows:
FRONT-END TIER (VMs)
Application Layer (port 80) -- including AJAX responses, validation code, navigation, etc.
Admin Layer (port 443) -- including an Admin Dashboard with access to system metrics and unit-test harnesses
Service Provider (port 443) -- a secure RESTful web services API (with token) to provide services to Partners and others who use the system as a "platform"
MIDDLE TIER (VMs)
Business Logic Layer -- calculations specific to the system or business, and the roles and permissions for various use cases
Interoperability Layer -- authorizations and posts to social networks or Partner applications, etc.
BACK-END TIER (VMs)
Data Access Layer -- handles SQL queries, inserts, updates, and deletes to the database (implemented as prepared statements) in a way that can be adapted when the database changes to a different kind, for example from PostgreSQL to MySQL or vice versa; includes PHP code for backing up and restoring databases
The idea another respondent brought up, of using a framework for enterprise software, seems pretty silly to me. If you are developing a student project or a "proof of concept" on a single server, and you are already familiar with a framework, it may have its use for rapid prototyping.
But as you see from the above, when you are writing Production-quality code, distributed across multiple tiers, you don't need the crutch of using a Framework.
Where would you put the framework so it links into all the places in your code? On every tier? Bad idea. Frameworks include many components that you may or may not need, so they slow down performance, especially when multiplied by every tier on which you must install them.
Equally inefficient would be to create a “layer” just to contain a framework that every other layer would have to call. The benefit of software layers is to be loosely-coupled and independent of other layers, so that when changes occur in one layer, they do not require changes in another layer.
Besides, developers who write production-quality code don't need to rely on the "swiss-army knife" that frameworks represent. Such developers are quite capable of writing targeted, efficient code and, if necessary, reusing classes from a library they developed for previous projects.
I have similar instances of the same web application running across many domains. I need an easy way to manage (edit, add, etc.) certain files (php, css, html) for each domain.
I was thinking of writing a control panel in PHP, but was wondering if there are existing scripts people have already written for accomplishing this?
Thanks!
There are several choices for managing your servers' infrastructure. Here are some, and you may also want to ask on the Server Fault website:
Puppet http://reductivelabs.com/products/puppet
Chef http://wiki.opscode.com/display/chef/Home
Wikipedia article http://en.wikipedia.org/wiki/Comparison_of_open_source_configuration_management_software
It depends on how customised each instance is. If there are code changes, then I'd recommend using an SCM such as git: you can hold each customised version in a separate branch and deploy them as separate instances. If you have a change that needs to be incorporated into many sites, you can cherry-pick that commit across the branches.
I've used this method to deploy a suite of 18 sites that used a common codebase, and it works.
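The branch-per-site workflow can be sketched with plain git commands. The repo layout, file names, and branch names below are made up for illustration:

```shell
# Hypothetical demo: a common codebase, one branch per customised site,
# and a shared fix cherry-picked from one site branch onto another.
set -e
workdir=$(mktemp -d)
cd "$workdir"
git init -q repo
cd repo
git config user.email "demo@example.com"
git config user.name "demo"

# Common codebase on the default branch
echo "core v1" > core.php
git add core.php
git commit -qm "common codebase"
base=$(git rev-parse HEAD)

# One branch per customised site
git checkout -qb site-a
echo "site A theme" > theme.php
git add theme.php
git commit -qm "site A customisations"

git checkout -qb site-b "$base"
echo "site B theme" > theme.php
git add theme.php
git commit -qm "site B customisations"

# A fix lands on site-a first...
git checkout -q site-a
echo "security fix" >> core.php
git commit -qam "fix core"
fix=$(git rev-parse HEAD)

# ...then is cherry-picked onto site-b without touching its customisations
git checkout -q site-b
git cherry-pick "$fix" >/dev/null
grep "security fix" core.php
```

Because the fix touches only the shared file, the cherry-pick applies cleanly to every site branch; conflicts only appear when a site has diverged in the same file.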