I plan to deploy my Laravel application with docker containers.
I need the following components for my application:
MySQL server
nginx server
Certbot for SSL activation
Queue worker for Laravel
Since the application is still under development (and probably will always be), it should be very easy to update (I will automate this with GitLab CI/CD) and it should have as little downtime as possible during the update.
Also, I want to be able to host multiple instances of the application, whereby only the .env file for Laravel is different. In addition to the live application, I want to host a staging application.
My current approach is to create a container for the MySQL server, one for the nginx server, and one for the queue worker. The application code would be a layer in the nginx container and in the queue worker container. When updating the application, I would rebuild the nginx container and the queue worker container. Is this a good approach, or are there better ways to achieve it? And what would be a good approach for keeping my MySQL server, nginx server, PHP version, and so on up to date without downtime for the application?
The main idea of Docker is to split your app into containers, so yes, it is good to have one container per service. In your example, I suggest keeping MySQL in one container, the queue worker in another, and so on; as a result you will have a container for each service. Then create an internal Docker network and connect the containers to it. Also, I suggest using Docker volumes to store all your application data. To make configuration much easier, use Docker Compose.
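A minimal docker-compose.yml sketch of that layout could look roughly like the following; the service names, image tags, and file paths are illustrative assumptions, and the live and staging instances would differ only in the .env file passed via env_file:

```yaml
version: "3.8"

services:
  mysql:
    image: mysql:8.0
    volumes:
      - dbdata:/var/lib/mysql          # named volume keeps data across rebuilds
    env_file: .env                     # per-instance settings (live vs. staging)
    networks: [backend]

  app:                                 # PHP-FPM image containing the Laravel code
    build: .
    env_file: .env
    networks: [backend]

  queue:                               # same image, different command
    build: .
    command: php artisan queue:work --tries=3
    env_file: .env
    networks: [backend]

  nginx:
    image: nginx:1.25-alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./docker/nginx.conf:/etc/nginx/conf.d/default.conf:ro
      - certs:/etc/letsencrypt         # shared with certbot
    depends_on: [app]
    networks: [backend]

  certbot:
    image: certbot/certbot
    volumes:
      - certs:/etc/letsencrypt
    entrypoint: ["certbot"]            # run renewals via `docker compose run certbot renew`

volumes:
  dbdata:
  certs:

networks:
  backend:
```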
I want to deploy a Laravel app from Gitlab to a server with no downtime. On the server, I serve the app with php artisan serve. Currently, I'm thinking that I would first copy all the files, then stop the old php artisan serve process on the server and start a new one on the server in the directory with the new files. However, this introduces a small downtime. Is there a way to avoid this?
If you are serving from a single server, you cannot achieve zero downtime. If avoiding downtime is crucial for your system, use two servers and load balance between them intelligently. Remember, no hosting or VPS provider guarantees 100% availability, so even if the deployment process itself never causes downtime, your site may still go down at some other time. What I'm saying is: if the tiny moment of restarting php artisan serve matters, then scale up to more than one server.
A workaround would be to use a third-party service (like Cloudflare) that can detect when the server is down and notify users when it is back up; I personally use that.
If you really want full uptime, Docker with Kubernetes is the technology to look at.
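If you do go the Kubernetes route, the zero-downtime part comes from a Deployment with a rolling-update strategy plus a readiness probe. Here is a rough sketch; the image name, port, and health-check path are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: laravel-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: laravel-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0          # keep the old pod serving until a new one is ready
      maxSurge: 1
  template:
    metadata:
      labels:
        app: laravel-app
    spec:
      containers:
        - name: app
          image: registry.example.com/laravel-app:latest   # placeholder image
          ports:
            - containerPort: 8080
          readinessProbe:        # traffic is only routed once the app answers here
            httpGet:
              path: /healthz     # placeholder health-check endpoint
              port: 8080
```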
We're hosting a lot of different applications on our Kubernetes cluster already - mostly Java based.
For PHP-FPM + Nginx, our current approach is to build a container which includes PHP-FPM, Nginx, and the PHP application source code.
But this actually breaks the one-process-per-container Docker rule, so we were thinking about how to improve it.
We tried to replace it with a pod containing multiple containers: an nginx container and a PHP container.
The big question is now where to put the source code. My initial idea was to use a data-only container, which we mount into the nginx and PHP-FPM containers. The problem is that there seems to be no way to do this in Kubernetes yet.
The only approach that I see is creating a sidecar container, which contains the source code and copies it to an emptyDir volume which is shared between the containers in the pod.
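For reference, that emptyDir idea could be expressed roughly as the pod spec below. The image names and mount paths are placeholders, and an initContainer is used here instead of a long-running sidecar, since the copy only has to happen once at pod start; nginx would still need a config that passes PHP requests to 127.0.0.1:9000.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: php-app
spec:
  volumes:
    - name: code
      emptyDir: {}
  initContainers:
    - name: copy-code
      image: registry.example.com/php-app-src   # placeholder image that only contains the source
      command: ["sh", "-c", "cp -r /src/. /code/"]
      volumeMounts:
        - name: code
          mountPath: /code
  containers:
    - name: php-fpm
      image: php:8.2-fpm
      volumeMounts:
        - name: code
          mountPath: /var/www/html
    - name: nginx
      image: nginx:1.25-alpine
      ports:
        - containerPort: 80
      volumeMounts:
        - name: code
          mountPath: /var/www/html              # nginx serves static files from the same docroot
```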
My question: Is there a good approach for PHP-FPM + Nginx and a data container on Kubernetes, or what is best practice to host PHP on Kubernetes (maybe still using one container for everything)?
This is a good question because there is an important distinction that gets elided in most coverage of container architecture- that between multithreaded or event-driven service applications and multiprocess service applications.
Multithreaded and event-driven service applications are able with a single process to handle multiple service requests concurrently.
Multiprocess service applications are not.
Kubernetes workload management machinery is completely agnostic as to the real request concurrency level a given service is facing- agnostic in the sense that different concurrency rates by themselves do not have any impact on automated workload sizing or scaling.
The underlying assumption, however, is that a given unit of deployment- a pod- is able to handle multiple requests concurrently.
PHP in nearly all deployment models is multiprocess. It requires multiple processes to be able to handle concurrent requests in a single deployment unit. Whether those processes are coordinated by FPM or by some other machinery is an implementation detail.
So- it's fine to run nginx + FPM + PHP in a single container, even though it's not a single process. The number of processes itself doesn't matter- there is actually no rule in Docker about this. The ability to support concurrency does matter. One wants to deploy in a container/pod the minimal system to support concurrent requests, and in the case of PHP, usually putting it all in a single container is simplest.
The concept of a micro-service architecture is to run every service as its own workload, i.e. one set of pods for nginx and another for php-fpm (cluster > pod > containers).
These workloads then need to communicate with each other, typically through Kubernetes Services, so that nginx can pass requests to php-fpm properly (see the Service sketch below).
As for the main part, where to put your code:
For this you can use one of the many volume plugins that work against a storage API, e.g. DigitalOcean block storage, S3, etc.
If you want to mount the code into the containers instead, Kubernetes provides volumes and volumeMounts for that.
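To make the nginx ↔ php-fpm communication concrete, a Service in front of the php-fpm pods could look roughly like this (the names and labels are placeholders); nginx would then fastcgi_pass to php-fpm:9000:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: php-fpm
spec:
  selector:
    app: php-fpm        # must match the labels on the php-fpm pods
  ports:
    - port: 9000        # FastCGI port that nginx connects to
      targetPort: 9000
```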
Hope someone can point me in the right direction. Google doesn't yield much that's simple to understand (there's stuff like Pheanstalk, etc.), and Amazon's own Beanstalk documentation is, as always, woefully arcane and presumes that we use Laravel or Symfony2.
We have a simple set of 10 PHP scripts that constitute our entire "website", with fast functional programming. In our testing this has been much faster than doing the same things with needless OOP. Anyway, with PHP 7, we're very happy with the simple functional code we have.
We could go the EC2 route: two EC2 servers load balanced by ELB. Both EC2 servers just run Nginx with PHP-FPM and call RDS for data (with ElastiCache to speed up read-only queries).
However, the idea is to lower management costs for EC2 by relying on Beanstalk for the simple processing that's needed in these 10 PHP scripts.
Are we thinking the right way? Is it simple to "upload" scripts to Beanstalk the way we do on EC2 via SSH or SFTP? Or is that only available programmatically via git etc.?
You can easily replicate your EC2 environment to Elastic Beanstalk using Docker containers.
Create a Docker container that contains required packages (nginx etc), any configuration files, and your PHP scripts. Then you'd deploy the container to Beanstalk.
With Beanstalk, you can define environment variables that are passed to the underlying EC2 instances where your application is running. Typically, one would use environment variables to pass, for example, the RDS hostname, username, and password to the Beanstalk application.
Additionally, you can store the Dockerfile, configuration files, and scripts in your git repository for version control, and fetch them whenever you create the container.
See the AWS documentation about deploying Beanstalk applications from Docker containers.
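As a rough sketch, on the newer Amazon Linux 2 Docker platform Beanstalk can run a docker-compose.yml found at the root of the source bundle; something like the following would replicate the nginx + PHP-FPM setup, with the RDS credentials coming from the Beanstalk environment properties (the assumption here is that Beanstalk hands those properties to Compose through a generated .env file):

```yaml
# docker-compose.yml at the root of the Beanstalk source bundle
services:
  web:
    build: .        # the Dockerfile installs nginx + PHP-FPM and copies the PHP scripts
    ports:
      - "80:80"
    env_file:
      - .env        # RDS_HOSTNAME, RDS_USERNAME, RDS_PASSWORD, ... set in the Beanstalk console
```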
I have a noob question. If I'm using a Docker image that uses a folder located on the host to do something, where should that folder be located in the Kubernetes cluster? I'm fine doing this with Docker since I know where my host filesystem is, but I get lost when I'm on a Kubernetes cluster.
Actually, I don't know if this is the best approach, but what I'm trying to do is build a development environment for a PHP backend. Since I want every person to be able to run a container environment with their own files (which are on their computers), I'm trying to build a sidecar container so that when launching the container I can pass the files to the PHP container.
The problem is that I'm running Kubernetes to build development environments for my company on a Vagrant setup (CoreOS + Kubernetes), since we don't have a cloud service right now, so I can't use a persistent disk. I tried NFS, but it seems like too much for what I want (just passing some files to the pod regardless of which PC I am on). I also tried hostPath in Kubernetes, but the machines from which I want to connect to the containers are located outside the Kubernetes cluster, so I'm trying to expose some containers on public IPs, but I cannot figure out how to pass the files (located on the machines outside the cluster) to the containers.
Thanks for your help, I appreciate your comments.
Not so hard, actually. Checking my gist may give you some tips:
https://gist.github.com/resouer/378bcdaef1d9601ed6aa
Do not try to consume files from outside the cluster; just package them into a Docker image and consume them in sidecar mode.
I am currently working with a startup that is in a transitional phase.
We have a PHP web application and utilise continuous integration with the standard unit and regression tests (selenium) run over jenkins. We have a development server which hosts newly committed code and a staging server that holds the build ready for deployment to the production server. The way we deploy to the production server is through a rudimentary script that pulls the latest svn copy and overwrites the changes in the htdocs directory. Any SQL changes are applied via the sync feature from MySQL Workbench.
This setup works fine for a very basic environment but we are now in a transition from single server setups to clusters due to high traffic and I have come up against a conundrum.
My main concern is how exactly we switch deployment from a single server to a cluster of servers. Each server will have its own htdocs and SQL database, and under the current setup I would need to execute the script on every server, which sounds like an abhorrent thing to do. I was looking into Puppet, which can be used to automate sysadmin tasks, but I am not sure whether it is a suitable approach for deploying new builds to a cluster.
My second problem is to do with the database. My assumption is that the code changes will be applied immediately, but since we will have DB master/slave replication, my concern is that the database changes will take longer to propagate and thus introduce inconsistencies during deployment. How can the code AND database be synchronised at the same time?
My third problem is related to automation of database changes. Does anyone know of a way to automate the process of updating a DB schema without having to run the synchronisation manually? At the moment I have to run the Workbench sync tool by hand, whereas I am really looking for a commit-and-forget approach: I commit, and the DB changes are automatically synchronised across the dev and QA setups.
I am running a similar scenario, but I am using a cloud provider for my production environment so that I do not need to care about DB replication, multi-server instances, etc. (I am using Pagoda Box, but AWS would also work perfectly fine).
I would recommend creating real database migrations, so that you can track them via svn or something else. That way you can also provide information on how to roll back. I am using https://github.com/doctrine/migrations, but mainly because I use Doctrine as my ORM.
If you have a migration tool, you can easily add a command in your deployment script to run those migrations after deployment.
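As a sketch of that deployment step (shown here as a GitLab-CI-style YAML job purely for illustration; a Jenkins step would run the same command, and deploy.sh stands in for your existing deployment script):

```yaml
deploy:
  stage: deploy
  script:
    - ./deploy.sh                                  # your existing code deployment (svn export, rsync, ...)
    - ./vendor/bin/doctrine-migrations migrations:migrate --no-interaction   # apply pending schema changes
```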
I don't think that database synchronisation is a big issue during deployment. That might depend on the actual infrastructure you're using; cloud providers like Pagoda Box or AWS take care of it for you.