Dockerized PHP Application Architecture Best Practices

I'm pretty new to Docker. I've played a lot with Docker in my development environment, but I've only tried to deploy a real app once.
I've read tons of documentation and watched dozens of videos, but I still have a lot of questions.
I do understand that Docker is just a tool that can be used in many different ways, but now I'm trying to find the best way to develop and deploy web apps.
I'll use a real PHP app as a case to make my question more concrete and practical.
To keep it simple, let's assume I'm building a very simple PHP app, so I'll need:
Web Server (nginx)
PHP Interpreter (php-fpm or hhvm)
Persistent storage for SESSIONs
The best example/tutorial I could find was this one-year-old post. Dylan proposes this kind of structure:
He uses a data-only container for the whole PHP project's files and logs, and docker-compose to run all these images with the proper links. In the development environment I'd mount a host directory as a data volume, and for production I'd copy the files directly into the data-only image and deploy.
This is understandable. I do want to share data across nginx and php-fpm: nginx needs access to the static files (.img, .css, .js...) and php-fpm needs access to the PHP files. And both services are separated, so they can be updated/changed independently.
The data-only container shares a data volume that is linked into nginx and php-fpm via the --volumes-from option.
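For reference, that pattern looks roughly like this in a (v1-era) docker-compose.yml; the image names and paths here are illustrative rather than taken from the post:

    data:
      image: busybox
      volumes:
        - /var/www/html          # holds the project files and logs
    php:
      image: php:fpm
      volumes_from:
        - data                   # php-fpm reads the PHP files from the shared volume
    web:
      image: nginx
      volumes_from:
        - data                   # nginx serves the static files from the same volume
      links:
        - php
      ports:
        - "80:80"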
But as I understand it, there's a problem with data-only containers and the -v flag.
The official Docker documentation says that a data volume is a specially designated directory that persists data. It is said that
Data volumes persist even if the container itself is deleted.
So this solution is great for data I do not want to lose, like session files, DB storage, logs, etc. But not for my code files, right? I do want to change my code files. I want to deploy changes without rebuilding the nginx and php-fpm images.
Another problem is that when I tried this approach, I could not deploy code changes until I stopped all running containers, removed them and their images, and rebuilt them entirely. Just rebuilding and deploying the data-only images did nothing!
I've seen some other implementations where the code is stored directly in the interpreter container, but that's not an option because nginx needs access to these files too.
The question is: what are the best practices for where to put my project's code files, and how do I deploy changes for this kind of app?
Thanks.

Right, don't use a data volume for your code. docker-compose makes a point of re-using old volumes (so you don't lose data), so you'd always be stuck with old code.
Use a COPY directive to add the static resources in the nginx Dockerfile, and a COPY in the application (php-fpm) Dockerfile to add the code. In dev you can use a host volume so that you don't have to restart containers to see your code changes (assuming the web server supports picking up changes).
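A minimal sketch of that layout, assuming the static assets live in ./public and the PHP code in ./src (both paths are placeholders):

    # nginx/Dockerfile: bake the static assets into the web server image
    FROM nginx
    COPY public/ /var/www/html/

    # php/Dockerfile: bake the application code into the php-fpm image
    FROM php:fpm
    COPY src/ /var/www/html/

    # docker-compose.override.yml (dev only): mount the host directories
    # over the baked-in files so edits show up without a rebuild
    web:
      volumes:
        - ./public:/var/www/html
    php:
      volumes:
        - ./src:/var/www/html

In production you skip the override file and deploy freshly built images, so what ships is exactly what was baked in at build time.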

Related

Docker: share data between containers in non-persistent way to allow for code upgrade

In a php-fpm and nginx container setup, the source code usually needs to be available to both containers. In my case I'd like to dockerize Magento. In Magento, nginx is configured in such a way that it checks for the existence of files before it passes the request to the PHP engine.
Following Docker's best practices for a production environment, I copied the source code into the php container during the build process. My first idea for sharing the source code with the nginx container was to use a named volume mounted at the root of both containers. However, the data in named volumes persists even after editing the source code and rebuilding the php container. This comes in handy for dynamic content like file uploads etc., but how do I upgrade the source code? Should I delete the volume every time the source code changes? And how do I persist dynamic content in that case?
In a nutshell:
I'd like to have a non-persistent volume to share the source code between php and nginx
A persistent volume (but still shared with nginx) for folders with dynamic content (e.g. file uploads)
For Magento that would be:
Non-persistent volume for files and folders like ./index.php, ./vendor/, ./app/ (except ./app/etc/env.php and ./app/etc/config.php, since those are configuration files), etc.
Persistent volume for files and folders like ./pub/media/, ./app/etc/env.php, ./app/etc/config.php, etc.
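One way to sketch that split in docker-compose terms, assuming the source tree is baked into both images at build time and named volumes are mounted only over the persistent paths (treating ./app/etc as persistent as a whole is an approximation, since named volumes mount directories rather than individual files):

    version: "3"
    services:
      php:
        build: ./php               # image contains the full Magento source tree
        volumes:
          - media:/var/www/html/pub/media
          - etc:/var/www/html/app/etc
      nginx:
        build: ./nginx             # image contains the same source tree
        volumes:
          - media:/var/www/html/pub/media
        ports:
          - "80:80"
    volumes:
      media:                       # persistent: uploads survive rebuilds
      etc:                         # persistent: env.php / config.php survive rebuilds

Code upgrades then come from rebuilding and redeploying the images rather than from a shared code volume, so no stale code lingers.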

How to run PHP application with Nginx in Docker Swarm/Kubernetes

I need to somehow run my PHP application in Swarm (maybe we will consider Kubernetes if it turns out to be easier). We want to keep the nginx and php containers separate so we can scale them independently. But there is a problem: nginx must somehow have access to those static files.
How would you solve this situation?
Our first idea was that in CI, versioned compiled assets would be included in the Nginx image. But what do I do when I want to update my application containers? I would need both the old and the new assets. Or should I use some kind of persistent volume and update it with CI? But I'm not sure how I can do that...
A persistent volume is probably the best way to accomplish this. Docker containers can mount NFS volumes. Create a container to act as an NFS server for the shared files. Here is one of the many such images available on Docker Hub: https://hub.docker.com/r/itsthenetwork/nfs-server-alpine/
Here is an example of how to set up NFS volumes for use with containers. https://gist.github.com/ruanbekker/4a9c0d250bce9f84482f2a788ce92131
Keep in mind that the server address will need to be that of the NFS container.
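For example, a named volume backed by the NFS container could be created like this (the address and export path are placeholders for your setup):

    docker volume create --driver local \
      --opt type=nfs \
      --opt o=addr=10.0.0.5,rw,nolock,soft \
      --opt device=:/exports/webroot \
      webroot

    # or the equivalent in a v3 compose/stack file
    volumes:
      webroot:
        driver: local
        driver_opts:
          type: nfs
          o: "addr=10.0.0.5,rw,nolock,soft"
          device: ":/exports/webroot"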

Using a Network Attached Storage (NAS) for developing

I was enquiring about getting a Network Attached Storage (NAS) so that I can work on dev sites from both my desktop and my laptop without duplicating files, and always have the most current file (just in case I forget to save). My question is: if I put sites on there that use PHP, would I be able to run the sites off of the NAS as I would with MAMP/WAMP? Or would I still need something else to make that work?
The point of a NAS is to share files over a network. This is usually done via Windows File & Print Sharing (aka Samba aka SMB) which is supported on most platforms.
Some NAS devices might allow you to run a web server (particularly if you can install custom firmware), but it is a poor choice of platform to run anything remotely complex in terms of web server stacks.
You can certainly store your development files on a NAS, and then access them from webservers running in both your development environments.
… but that said, I'd look at using version control software (Git would be my preference), keeping your repository on the NAS and getting into the habit of saving, committing and pushing. It makes things more manageable in the long run. (You could also use a service like Bitbucket or Github and dispense with the local NAS entirely).
You could also go a step further and run a server with CI software on it that monitors your repository and automatically pulls updates from it, runs your automated tests, and then updates a local test server.
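As a rough sketch of what that automation could look like (the checkout location, test command, and test docroot are all hypothetical):

    #!/bin/sh
    # Run from cron or a post-receive hook: pull, test, and only then
    # update the local test server's docroot.
    set -e
    cd /srv/checkout
    git pull --ff-only
    ./run-tests.sh
    rsync -a --delete --exclude='.git' ./ /var/www/test/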
I am assuming you use Windows (it's easier to do on a Mac, I think). With WAMP, what you can do is mount a network drive to, let's say, W:\. Then create a virtual host that points to a folder on the W:\ drive.
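Hypothetically, that WAMP virtual host could look like this in httpd-vhosts.conf (the hostname and W:/dev/mysite are placeholders):

    <VirtualHost *:80>
        ServerName mysite.local
        DocumentRoot "W:/dev/mysite"
        <Directory "W:/dev/mysite">
            Require all granted
        </Directory>
    </VirtualHost>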
On a Mac, all you need to do is mount the remote shared folder into your MAMP directory and everything should work as you want.
Though personally I think this is a terrible idea, and I would much rather suggest you use a VCS (version control system) to share code between multiple places. Lots of them are designed with this problem in mind, and a VCS provides a nice history of your code at the same time. If you want to do some research, look at Git (currently the most popular); Bitbucket has free private repositories. You can look into what a VCS can do here: https://en.wikipedia.org/wiki/Version_control

Problems exporting the Solar PHP5 root directory from localhost to a live server or even another computer

Ok,
This seems like something that would be obvious, but I haven't been able to figure this out.
I just started using the Solar PHP5 framework, http://solarphp.com. It is a great PHP5 framework, but as with any new framework there is a learning curve.
Issue: Solar uses many pre-written scripts to make directories and files for you, making it easy to rapidly deploy a site. Because it uses these scripts, it makes symbolic links to files and directories (example: Chapter 1 in the manual). This is great until you need to export your entire root directory to upload to your server or to make another instance on another development computer. The problem for me is that when I do this, the files are editable but do not reflect any changes when I refresh a page. It's like it doesn't update any code. The only way I can get changes or updates through is to (essentially) run the site setup each time, which involves running all the setup scripts, setting up the DB connections, etc. This is a total pain.
Question: Is there any advice out there on doing this so that I can just export the working root directory to easily upload to a server or another dev machine, without having to run those scripts over and over again? I know it's something easy, but I do not know exactly what to search for.
Is there a certain method for exporting directories/files that use symbolic links?
You might try using rsync instead of FTP to deploy the site. rsync will respect symlinks. Of course, you will need to have SSH access, or mount the server over FTP/SFTP with FUSE. If you're using SVN, you could also SSH into the server and do an svn export or checkout.
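For instance (the host and paths are placeholders; rsync's -a implies -l, which copies symlinks as symlinks):

    # push the working tree, preserving symlinks, permissions and times
    rsync -avz --delete ./mysite/ user@example.com:/var/www/mysite/

    # or, with SSH access and SVN, export a clean copy on the server itself
    ssh user@example.com "svn export --force file:///path/to/repo/trunk /var/www/mysite"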

Is there a way to use SVN for web development in a Mac shop that uses Coda?

So we are pushing to create good processes in our office. I work in a web shop that has been doing web sites for over a decade. And we don't use version control. I know! It's bad, not my fault. I'm the guy with a SoftE background pushing for this at a minimum.
The tech lead has been looking into it. We all use Mac workstations and mostly use Coda for editing since it is a great IDE. It has SVN support built in but expects it to work on local files. We're trying to explore mounting the web directory as a local network drive with an SFTP tool.
We are a LAMP shop, BTW.
I am wondering what the model is here. I think we would typically check out the whole site to our local machines, where we have Apache running, and then test it there? This isn't how we work yet; we do everything on the server. We've looked at checking things in and out, but some files are owned by apache, and the ownership changes when I check them in, because I'm not apache.
I just want to know a way to do this that works given my circumstances. It would be nice not to have to run Apache locally.
You might want to check out the Coda mailing list and ask there. Lots of Coda enthusiasts there with specific experience.
If you don't want to run Apache locally, you could have Apache on your server run a copy of the site for every developer, on a different port per person, and then mount those web roots on the local Macs and make them the working directories. If you're a small shop, that's not hard to manage. I find it pretty easy to set up, and it saves a lot of resources on the local machines. The one-site-per-person approach helps avoid conflicts when multiple people work on the same files at the same time.
What I'd additionally recommend is having a script that gets the latest changes from SVN and deploys the entire site to the production server when you're ready. That script could change the permissions on the appropriate files/folders so they're owned by Apache where needed. The idea, once you're using source control, is to never manually edit the production files -- you should have something that deploys them from SVN for you.
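A hedged sketch of such a deploy script (the repository URL, docroot, and writable paths are hypothetical):

    #!/bin/sh
    # Export a clean copy from SVN (no .svn folders), sync it into the
    # docroot, then hand the writable directories back to Apache.
    set -e
    svn export --force http://svn.example.com/mysite/trunk /tmp/mysite-export
    rsync -a --delete /tmp/mysite-export/ /var/www/mysite/
    chown -R www-data:www-data /var/www/mysite/uploads /var/www/mysite/cache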
A few notes:
Take a look at MacFuse / MacFusion (the latter is the application, the former is the library behind it) to mount remote directories via SSH / FTP as local ones.
Allow your developers to check out into their local environment (with their own LAMP stack if they're savvy), or look into a shared dev environment with individual jails. This way your developers can run their own LAMP stack (which you could deploy for them on the machine) without interfering with others.
The idea being: let them use the workflow that works best for them, to minimize the pain of adapting to this change (in case change management is an issue!).
Just as an example, we have a shared dev server where jails are created with a single command for new developers. They have a full LAMP stack ready to go, and we can upgrade and re-deploy jails easily to keep software up to date. Developers have individual control to add custom settings / extensions if they need them for work, while the sysadmins have the ability to reset everything when someone accidentally breaks their environment :)
Those who prefer not to use jails, and are able to, manage their own local environments (typically through MacPorts or MAMP).
