PHP-FPM + Nginx on Kubernetes

We're already hosting a lot of different applications on our Kubernetes cluster - mostly Java based.
For PHP-FPM + Nginx, our current approach is to build a container which includes PHP-FPM, Nginx, and the PHP application source code.
But this breaks the one-process-per-container Docker rule, so we were thinking about how to improve it.
We tried to replace it with a pod containing multiple containers - an nginx container and a PHP container.
The big question is now where to put the source code. My initial idea was to use a data-only container, mounted into both the nginx and PHP-FPM containers. The problem is that there seems to be no way to do this in Kubernetes yet.
The only approach I see is creating a sidecar container which contains the source code and copies it to an emptyDir volume shared between the containers in the pod.
My question: Is there a good approach for PHP-FPM + Nginx and a data container on Kubernetes, or what is best practice for hosting PHP on Kubernetes (maybe still using one container for everything)?
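To make the idea concrete, the sidecar/emptyDir variant I'm describing would look roughly like this (image names are placeholders; here the copy step is expressed as an init container that seeds the shared volume before the others start):

```yaml
# Sketch only - names and images are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: php-app
spec:
  volumes:
    - name: code
      emptyDir: {}
  initContainers:
    - name: source            # "data" container: only ships the code
      image: registry.example.com/my-php-app-source:latest
      command: ["cp", "-r", "/src/.", "/code/"]
      volumeMounts:
        - name: code
          mountPath: /code
  containers:
    - name: php-fpm
      image: php:fpm
      volumeMounts:
        - name: code
          mountPath: /var/www/html
    - name: nginx
      image: nginx
      volumeMounts:
        - name: code
          mountPath: /var/www/html
```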

This is a good question because there is an important distinction that gets elided in most coverage of container architecture - that between multithreaded or event-driven service applications and multiprocess service applications.
Multithreaded and event-driven service applications are able with a single process to handle multiple service requests concurrently.
Multiprocess service applications are not.
Kubernetes workload management machinery is completely agnostic as to the real request concurrency level a given service is facing - agnostic in the sense that different concurrency rates by themselves do not have any impact on automated workload sizing or scaling.
The underlying assumption, however, is that a given unit of deployment- a pod- is able to handle multiple requests concurrently.
PHP in nearly all deployment models is multiprocess. It requires multiple processes to be able to handle concurrent requests in a single deployment unit. Whether those processes are coordinated by FPM or by some other machinery is an implementation detail.
So - it's fine to run nginx + FPM + PHP in a single container, even though it's not a single process. The number of processes itself doesn't matter - there is actually no rule in Docker about this. The ability to support concurrency does matter. One wants to deploy in a container/pod the minimal system to support concurrent requests, and in the case of PHP, usually putting it all in a single container is simplest.
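A minimal sketch of that single-container setup, assuming a Debian base image and supervisord as the process coordinator (file names and paths are illustrative, not a definitive build):

```dockerfile
FROM debian:stable-slim
RUN apt-get update && apt-get install -y --no-install-recommends \
        nginx php-fpm supervisor \
    && rm -rf /var/lib/apt/lists/*
COPY ./src /var/www/html
COPY nginx.conf /etc/nginx/nginx.conf
# supervisord starts and supervises both nginx and php-fpm;
# with -n it stays in the foreground and serves as PID 1
COPY supervisord.conf /etc/supervisor/conf.d/app.conf
CMD ["/usr/bin/supervisord", "-n"]
```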

The concept of a microservice architecture is to run every service individually, i.e. a cluster of nginx and a cluster of php-fpm (Cluster > Pod > Containers).
These clusters then need to communicate with each other so that nginx and php-fpm can work together properly.
As for the main part - where to put the code:
For this you can use storage services accessed over an API, e.g. S3 or DigitalOcean's offering.
If you want to mount the code directly, Kubernetes also provides volume mount parameters for that.

Related

Deploying Laravel with Docker containers

I plan to deploy my Laravel application with docker containers.
I need the following components for my application:
MySQL server
nginx server
certbot for SSL activation
Queue worker for Laravel
Since the application is still under development (and probably will always be), it should be very easy to update (I will automate this with GitLab CI/CD) and it should have as little downtime as possible during the update.
Also, I want to be able to host multiple instances of the application, whereby only the .env file for Laravel is different. In addition to the live application, I want to host a staging application.
My current approach is to create a container for the MySQL server, one for the nginx server, and one for the queue worker. The application code would be a layer in the nginx server container and in the queue worker container. When updating the application, I would rebuild the nginx container and the queue worker container. Is this a good approach, or are there better ways to achieve this? And what would be a good approach for keeping my MySQL server, nginx server, PHP version, ... up to date without downtime for the application?
The main idea of Docker is to divide your app into containers. So yes, it is good to have one container per service. In your example, I suggest keeping MySQL in one container, the queue worker in another, and so on. As a result, you will have a container for each service. Then I suggest creating an internal Docker network and connecting the containers to it. Also, I suggest using Docker volumes to store all your application data. To make configuration much easier, I suggest using Docker Compose.
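A minimal docker-compose sketch along those lines (service names, image tags, and paths are placeholders, not a definitive setup - the app image is assumed to contain nginx plus the Laravel code layer):

```yaml
services:
  mysql:
    image: mysql:8
    env_file: .env            # per-instance config lives here
    volumes:
      - dbdata:/var/lib/mysql # data survives container rebuilds
  app:
    build: .
    ports:
      - "80:80"
    env_file: .env
    depends_on:
      - mysql
  queue:
    build: .                  # same image, different command
    command: php artisan queue:work
    env_file: .env
    depends_on:
      - mysql
volumes:
  dbdata:
```

Because the code is baked into the image, an update is a rebuild of `app` and `queue` only; the MySQL volume keeps the data, and a staging instance is the same compose file with a different `.env`.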

Apache equivalent to IIS application pool

I have an Apache server, and there are many sites on it. One or two of these sites consume the whole server's resources, taking almost all the MPM processes, which leads to the server failing and all the other sites becoming very slow.
Is it possible to implement something like an application pool in IIS in Apache server to avoid other sites becoming slow when one site is consuming all the server resources?
As far as I am aware there is no strict equivalent to application pools in Apache, however you can accomplish splitting by running different httpds as http://wiki.apache.org/httpd/DifferentUserIDsUsingReverseProxy describes:
"One frequently requested feature is to run different virtual hosts under different userids. Unfortunately, due to the basic nature of unix permission handling, this is impossible. (Although it is possible to run CGI scripts under different userids using suexec or cgiwrap.) You can, however, get the same effect by running multiple instances of Apache httpd and using a reverse proxy to bring them all into the same name space. "

How to host a PHP file with AWS?

After many hours of reading documentation and messing around with Amazon Web Services, I am unable to figure out how to host a PHP page.
Currently I am using the S3 service for a basic website, but I know that this service does not support dynamic pages. I was able to use Elastic Beanstalk to get the sample PHP application running, but I have really no idea how to use it. I read up on some other services, but they don't seem to do what I want, or they are just way too confusing.
So what I want is to be able to host a website with Amazon that has dynamic PHP pages. Is this possible, and what services do you use?
For a PHP app, you really have two choices in AWS.
Elastic Beanstalk is a service that takes your code, and manages the runtime environment for you - once you've set it up, it's very easy to deploy, and you don't have to worry about managing servers - AWS does pretty much everything for you. You have less control over the environment, but if your server will run in EB then this is a pretty easy path.
EC2 is closer to conventional hosting. You need to decide how your servers are configured & deployed (what packages get installed, what version of linux, instance size, etc), your system architecture (do you have separate instances for cache or database, whether or not you need a load balancer, etc) and how you manage availability and scalability (multiple zones, multiple data centers, auto scaling rules, etc).
Now, those are all things that you can use - you don't have to. If you're just trying to learn about PHP in AWS, you can start with a single EC2 instance, deploy your code, and get it running in a few minutes without worrying about any of the stuff in the previous paragraph. Just create an instance from the Amazon Linux AMI, install Apache & PHP, open the appropriate ports in the firewall (AKA the EC2 security group), deploy your code, and you should be up & running.
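Those steps might look roughly like this on an Amazon Linux instance (package names vary by AMI version, and the HTTP port is opened in the instance's security group, not on the box itself):

```shell
sudo yum update -y
sudo yum install -y httpd php        # Apache + PHP
sudo service httpd start
sudo chkconfig httpd on              # start on boot
# deploy your code to the default document root
sudo cp -r ~/my-app/* /var/www/html/
```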
Your PHP must run on EC2 machines.
Amazon provides great tools to make life easy (Beanstalk, ECS for Docker, ...) but in the end, you own EC2 machines.
There is no such thing as a place where you can put your PHP code without worrying about anything else ;-(
If you are having problems hosting PHP sites on AWS, then you can go with a service provider like Cloudways. They provide managed AWS servers with one-click installs of PHP frameworks and CMSes.

Managing Debian Services from PHP

I am making an application for school, but I need to be able to manage some services running on my Debian 7 machine. I'm running nginx and PHP5-FPM (so PHP 5.4). How can I restart or stop, for example, nginx from my PHP file? I tried
exec("/etc/init.d/nginx stop");
Also I tried
shell_exec("/etc/init.d/nginx stop");
but it doesn't work; PHP just returns:
Stopping nginx: nginx
Thanks in advance
You need to be root to restart these services, and unless your web server (like Apache) is running your site(s) as root, this isn't going to work. There are a couple of options at your disposal; the best one would likely depend on your situation.
You can create a two-layered approach where the front-end (run by your web server) issues the commands, and the back-end is a service running as root that executes them. This could also add a layer of security as the back-end could sanitize the commands before they're executed. The communication between the front-end and back-end could be any number of things, such as a file that the front-end writes to and the back-end reads from every few seconds, or you could go with WebSockets and make it real-time.
You could run an additional instance of your web server as root that handles just this task. You would definitely want to run this over SSL, and it's somewhat risky since if someone could inject code into one of your pages, their code would be running on your server as root.
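A sketch of the two-layered idea from the first option, in Python for illustration: a hypothetical back-end service, run as root, that reads command names from a spool file written by the PHP front-end and executes only whitelisted ones. The spool path and command names are assumptions, not a definitive implementation.

```python
import os
import subprocess

SPOOL = "/var/spool/webctl/commands"   # assumed path; front-end appends here
ALLOWED = {
    # command name -> argv actually executed; nothing else ever runs
    "nginx-stop":  ["/etc/init.d/nginx", "stop"],
    "nginx-start": ["/etc/init.d/nginx", "start"],
}

def read_commands(path):
    """Return queued command names and empty the spool file."""
    if not os.path.exists(path):
        return []
    with open(path, "r+") as f:
        names = [line.strip() for line in f if line.strip()]
        f.truncate(0)
    return names

def dispatch(name):
    """Run a command only if it is whitelisted; silently ignore the rest."""
    argv = ALLOWED.get(name)
    if argv is None:
        return False
    subprocess.run(argv, check=False)
    return True
```

The back-end would call `read_commands(SPOOL)` every few seconds and `dispatch()` each name; because only the whitelist is consulted, injected text in the spool file cannot become an arbitrary root command.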

Is redis on Heroku possible without an addon?

I'm looking into using Heroku for a PHP app that uses Redis. I've seen the various add-ons for Redis. With Redis To Go, for example, you can use an environment variable, $_ENV['REDISTOGO_URL'], in your PHP code as the URL of the Redis server.
Most of these add-ons have their own pricing schemes, which I'd like to avoid. I'm a little confused about how Heroku works. Is there a way that I can just install Redis on my own dynos without the add-ons?
Like, for example, have one worker dyno that acts as a server and another that acts as a client? If possible, how would I go about:
Installing and running the Redis server on a dyno? Is this just the same as installing on any other Unix box? Can I just SSH to it and install whatever I want?
Having one dyno connect to another with an IP/port via TCP? Do the worker dynos have their own referenceable IP addresses or named URLs that I can use? Can I get them dynamically from PHP somehow?
The PHP code for a Redis client assumes there is a host and port that you can connect to, but I have no idea what they would be:
$redis = new Predis\Client(array(
    "scheme" => "tcp",
    "host"   => $host, // how do I get the host/port of a dyno?
    "port"   => $port,
));
Running redis on a dyno is an interesting idea. You will probably need to create a redis buildpack so your dynos can download and run redis. As "redis has no dependencies other than a working GCC compiler and libc" this should be technically possible.
However, here are some problems you may run into:
Heroku dynos don't have a static IP address
"dynos don’t have static IP addresses .. you can never access a dyno directly by IP"
Even if you set up and run Redis on a dyno I am not aware of a way to locate that dyno instance and send it redis requests. This means your Redis server will probably have to run on the same dyno as your web server/main application.
This also means that if you attempt to scale your app by creating more web dynos you will also be creating more local redis instances. Data will not be shared between them. This does not strike me as a particularly scalable design, but if your app is small enough to only require one web dyno it may work.
Heroku dynos have an ephemeral filesystem
"no files that are written are visible to processes in any other dyno and any files written will be discarded the moment the dyno is stopped or restarted"
By default Redis writes its RDB file and AOF log to disk. You'll need to regularly back these up somewhere so you can fetch and restore after your dyno restarts. See the documentation on Redis persistence.
Heroku dynos are rebooted often
"Dynos are cycled at least once per day, or whenever the dyno manifold detects a fault in the underlying hardware"
You'll need to be able to start your redis server each time the dyno starts and restore the data.
Heroku dynos have 512MB of RAM
"Each dyno is allocated 512MB of memory to operate within"
If your Redis server is running on the same dyno as your web server, subtract the RAM needed for your main app. How much Redis memory do you need?
Here are some questions attempting to estimate and track Redis memory use:
Redis: Database Size to Memory Ratio?
Profiling Redis Memory Usage
--
Overall: I suggest reading up on 12 Factor Apps to understand a bit more about Heroku's intended application model.
The short version is that dynos are intended to be independent workers that can be easily created and discarded to meet demand, and that dynos access various resources to read or write data and serve your app. A Redis instance is an example of a resource. As you can see from the items above, by using a Redis add-on you're getting something that's guaranteed to be static, stable, and accessible.
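Whichever Redis resource you end up pointing at, the add-on pattern the question mentions is just a URL in the environment (like REDISTOGO_URL) that gets parsed into the host/port/password a client constructor needs. A sketch in Python for illustration; in PHP, parse_url() would yield the same pieces:

```python
# Parse a redis://user:password@host:port URL into client parameters.
from urllib.parse import urlparse

def redis_params(url):
    parts = urlparse(url)
    return {
        "scheme": "tcp",
        "host": parts.hostname,
        "port": parts.port or 6379,   # default Redis port if none given
        "password": parts.password,   # None when the URL has no auth part
    }
```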
Reading material:
http://www.12factor.net/ - specifically Processes and Services
The Heroku Process Model
Heroku Blog - The Process Model
Redis has a client-server architecture: you can install it on one machine (in your case, a dyno) and access it from any client.
For more help on client libraries you can refer this link,
or you can go through this Redis documentation, which is a simple case study of implementing a Twitter clone using Redis as the database and PHP.
