I have an Apache server hosting many sites. One or two of these sites consume almost all of the server's resources, occupying nearly all the MPM processes, which causes the server to fail and all the other sites to become very slow.
Is it possible to implement something like IIS application pools in Apache, to keep the other sites from becoming slow when one site is consuming all the server's resources?
As far as I am aware there is no strict equivalent to application pools in Apache. However, you can achieve a similar separation by running different httpd instances, as http://wiki.apache.org/httpd/DifferentUserIDsUsingReverseProxy describes:
"One frequently requested feature is to run different virtual hosts under different userids. Unfortunately, due to the basic nature of unix permission handling, this is impossible. (Although it is possible to run CGI scripts under different userids using suexec or cgiwrap.) You can, however, get the same effect by running multiple instances of Apache httpd and using a reverse proxy to bring them all into the same name space. "
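For concreteness, a minimal sketch of that setup might look like the following (the ports, username, and paths are illustrative assumptions, not values from the wiki page):

    # Front-end httpd: one VirtualHost per site, each proxying to a
    # separate backend httpd instance (requires mod_proxy/mod_proxy_http)
    <VirtualHost *:80>
        ServerName site1.example.com
        ProxyPass        / http://127.0.0.1:8081/
        ProxyPassReverse / http://127.0.0.1:8081/
    </VirtualHost>

    # Backend instance for site1, started separately, e.g.:
    #   httpd -f /etc/httpd/site1.conf
    # where site1.conf sets its own listener, user, and MPM limits:
    #   Listen 127.0.0.1:8081
    #   User  site1
    #   Group site1
    #   MaxClients 50

Because each backend instance has its own MPM pool, a runaway site can only exhaust its own workers, not the whole server's.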
We're already hosting a lot of different applications on our Kubernetes cluster, mostly Java-based.
For PHP-FPM + Nginx, our current approach is to build a single container that includes PHP-FPM, Nginx, and the PHP application source code.
But this breaks the one-process-per-container Docker rule, so we have been thinking about how to improve it.
We tried to replace it with a pod containing multiple containers: an Nginx container and a PHP container.
The big question now is where to put the source code. My initial idea was to use a data-only container that we mount into the Nginx and PHP-FPM containers. The problem is that there seems to be no way to do this in Kubernetes yet.
The only approach that I see is creating a sidecar container, which contains the source code and copies it to an emptyDir volume which is shared between the containers in the pod.
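A rough sketch of that idea, written with an init container rather than a long-running sidecar (the image names and paths here are placeholders I've made up):

    apiVersion: v1
    kind: Pod
    metadata:
      name: php-app
    spec:
      initContainers:
      - name: source
        image: registry.example.com/my-php-src:latest  # hypothetical image holding only the code
        command: ["sh", "-c", "cp -r /app/. /code/"]
        volumeMounts:
        - name: code
          mountPath: /code
      containers:
      - name: php-fpm
        image: php:fpm
        volumeMounts:
        - name: code
          mountPath: /var/www/html
      - name: nginx
        image: nginx
        volumeMounts:
        - name: code
          mountPath: /var/www/html
      volumes:
      - name: code
        emptyDir: {}

(Nginx would additionally need a config that fastcgi_passes PHP requests to 127.0.0.1:9000, where FPM listens inside the pod.)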
My question: Is there a good approach for PHP-FPM + Nginx and a data container on Kubernetes, or what is the best practice for hosting PHP on Kubernetes (maybe still using one container for everything)?
This is a good question, because there is an important distinction that gets elided in most coverage of container architecture: the distinction between multithreaded or event-driven service applications and multiprocess service applications.
Multithreaded and event-driven service applications can handle multiple service requests concurrently with a single process.
Multiprocess service applications cannot.
Kubernetes workload management machinery is completely agnostic as to the real request concurrency level a given service is facing, agnostic in the sense that different concurrency rates by themselves have no impact on automated workload sizing or scaling.
The underlying assumption, however, is that a given unit of deployment (a pod) is able to handle multiple requests concurrently.
PHP in nearly all deployment models is multiprocess. It requires multiple processes to be able to handle concurrent requests in a single deployment unit. Whether those processes are coordinated by FPM or by some other machinery is an implementation detail.
So: it's fine to run Nginx + FPM + PHP in a single container, even though it's not a single process. The number of processes itself doesn't matter; there is actually no such rule in Docker. What matters is the ability to support concurrency. You want to deploy in a container/pod the minimal system needed to support concurrent requests, and in the case of PHP, putting it all in a single container is usually simplest.
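As a minimal sketch of that single-container setup (the file names and the choice of supervisord are my assumptions, not the only option):

    FROM php:fpm
    RUN apt-get update && apt-get install -y nginx supervisor
    # nginx.conf should fastcgi_pass PHP requests to 127.0.0.1:9000
    COPY nginx.conf /etc/nginx/nginx.conf
    # supervisord.conf starts both php-fpm and nginx in the foreground
    COPY supervisord.conf /etc/supervisor/conf.d/app.conf
    COPY src/ /var/www/html/
    CMD ["supervisord", "-n"]

supervisord acts as the coordinating process; the container still presents one deployable unit that can serve concurrent requests.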
The concept of a microservice architecture is to run every service separately, i.e., one group of pods for Nginx and another for PHP-FPM (Cluster > Pod > Containers).
These services then need to communicate with each other so that Nginx and PHP-FPM can work together properly.
As for the main part, where to put the code:
For this you can use one of the many API-based volume plugins, e.g., DigitalOcean block storage, S3, etc.
If you want to mount the code from a drive, Kubernetes provides volume mount parameters for that.
I am writing a web server in Go that will replace an existing website, but I still need some old PHP scripts. Right now I have lighttpd + FastCGI, so I would like my web server to call PHP as a FastCGI client.
What is the best way to handle this?
I guess I need some Go FastCGI client API.
http://golang.org/pkg/net/http/fcgi/ - it seems to support only the server side, not the client side.
I think you'd have to make your own if you want to connect directly to a fastcgi process. Keep in mind though that you still have to run a process manager/spawner anyway, so it wouldn't be a huge leap to just run nginx, too, and have your Go process proxy there for the PHP scripts.
You could also reasonably turn it around: have end users connect to nginx on port 80 and have nginx proxy requests to your Go process or to FastCGI as appropriate. One advantage of that is that you can then easily run the Go process as a user other than root.
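A sketch of the first approach, proxying from Go to nginx (the backend address is an assumption; nginx with php-fpm behind it would be listening there):

    package main

    import (
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
        "strings"
    )

    func main() {
        // nginx (fronting PHP via FastCGI) is assumed to listen on 127.0.0.1:8080.
        phpBackend, err := url.Parse("http://127.0.0.1:8080")
        if err != nil {
            log.Fatal(err)
        }
        phpProxy := httputil.NewSingleHostReverseProxy(phpBackend)

        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            // Hand the legacy PHP scripts to nginx; serve everything else in Go.
            if strings.HasSuffix(r.URL.Path, ".php") {
                phpProxy.ServeHTTP(w, r)
                return
            }
            w.Write([]byte("handled by the Go server"))
        })
        log.Fatal(http.ListenAndServe(":8000", nil))
    }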
My website is shared-hosted at 1and1.com, and my objective is to call exec() in my PHP code to run ffmpeg, converting short videos (30 seconds) from format A to format B. I've read a bunch of related posts here on SO and have gotten the following ideas:
1) Because of my shared hosting situation (the 1and1 web server my website runs on is shared with other 1and1 customers), 1and1's administrators probably run the server in 'safe mode', so my attempt to exec() ffmpeg from PHP to convert a video will fail.
(Supposedly there is a 'safe mode' shared directory, but that's a hoop to jump through with 1and1.)
2) Even if 1and1 does not mind me calling exec('ffmpeg -i yada yada') from my PHP code to convert between video formats, I've heard it's frowned upon, and I'm led to believe I need a dedicated server rather than a shared one. That sounds odd because (a) the website will never scale to a massive number of users, and (b) when I run ffmpeg on my 2.5-year-old Windows Vista laptop, using XAMPP as my local development web server, the conversion is very fast.
Are the above restrictions correct ('safe mode' on shared web hosting, running ffmpeg is a no-no without a dedicated box)?
I'd like to think that 1and1 would not even notice if my web site did a few (say, under 20) short-video conversions per day.
Also if anyone can recommend the proper web hosting company for my situation I would be in your debt. Thanks.
I believe safe mode would still allow you to use exec(), because 1and1 would have you chrooted, so any program you run could not touch other users' files. But you might want to check the AUP to see whether they prohibit that kind of resource-heavy process.
For the best performance you should use a VPS, where you can control your PHP environment. ffmpeg can be very resource-intensive, so a VPS or dedicated server would be best; running ffmpeg on shared hosting would also hurt the performance of the other sites.
Any VPS would do, but because you asked for a recommendation, I would recommend BlueMileCloud.
PS: You should never run a process like this directly from exec(). Add it to a queue instead, so the script does not wait for the process to finish and cause the site to hang.
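A minimal sketch of that pattern (the jobs directory and the cron-driven worker are hypothetical; any queue mechanism would do):

    <?php
    // Web request: only enqueue the job, then return immediately.
    $job = json_encode(['src' => $srcPath, 'dst' => $dstPath]);
    file_put_contents('/path/to/jobs/' . uniqid('convert_', true) . '.json', $job);

    // Separate worker script, run from cron: process the queued jobs.
    foreach (glob('/path/to/jobs/*.json') as $file) {
        $job = json_decode(file_get_contents($file), true);
        exec('ffmpeg -i ' . escapeshellarg($job['src']) . ' '
             . escapeshellarg($job['dst']));
        unlink($file);
    }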
I'm running a Pressflow site with over 40,000 unique visitors a day and almost 80,000 records in node_revision, and my site randomly hangs, showing a 'site offline' message. I have moved my database to InnoDB and the problem continues. I'm using my-huge.cnf as my MySQL config. Please advise me on a better configuration and the likely reasons for this. I'm running on a dedicated server with more than 300 GB of disk and 4 GB of RAM.
The my-huge.cnf file was tuned for a "huge" server by the standards of a decade ago, but it barely qualifies as a reasonable production configuration now. I would check other topics related to MySQL tuning, and especially consider using a tool like Varnish (since you're already on Pressflow) to cache anonymous traffic.
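As a rough starting point for a 4 GB box (these numbers are guesses to tune from under your actual workload, not a drop-in config):

    [mysqld]
    # Give InnoDB roughly half the RAM on a combined DB/web box
    innodb_buffer_pool_size        = 2G
    innodb_log_file_size           = 256M
    innodb_flush_log_at_trx_commit = 2   # trades a little durability for speed
    max_connections                = 150 # keep Apache from piling up connections
    table_open_cache               = 1024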
I suspect that you have excessive connections to the database server, which can exhaust your server's RAM. This is very likely if you are running Apache in pre-fork mode with PHP as an Apache module using persistent connections, and using the same server to serve images, CSS, JavaScript, and other static content.
If that is the case, the way to go is to move the static content to a separate multi-threaded web server like lighttpd or nginx. That will stop Apache from forking too many processes, which end up making PHP establish too many persistent connections and exhaust your RAM.
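A sketch of that split, assuming Apache is moved to port 8080 (the port and paths are illustrative):

    server {
        listen 80;
        root /var/www/html;

        # Serve static assets directly from nginx
        location ~* \.(png|jpe?g|gif|ico|css|js)$ {
            expires 7d;
        }

        # Everything else goes to Apache/PHP
        location / {
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $remote_addr;
        }
    }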
I am experimenting with several languages (Python, Ruby...), and I would like to know if there is a way to configure my Apache server to load certain modules only in certain VirtualHosts, for instance:
http://myapp1 <- just with Ruby support
http://myapp2 <- just with Python support
http://myapp3 <- just with PHP support
...
Thanks.
Each Apache worker loads every module, so it's not possible to do this within Apache itself.
What you need to do is move your language modules to processes external to Apache workers.
This is done for your languages with the following modules:
PHP: mod_fastcgi. More info: Apache+Chroot+FastCGI.
Python: mod_wsgi in daemon mode (see the sketch after this list).
Ruby: passenger/mod_rack
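For example, with mod_wsgi in daemon mode each VirtualHost gets its own Python daemon processes running outside the Apache workers (the names and paths below are made up):

    <VirtualHost *:80>
        ServerName myapp2
        # Python runs in dedicated daemon processes, not in Apache workers
        WSGIDaemonProcess myapp2 processes=2 threads=15
        WSGIProcessGroup  myapp2
        WSGIScriptAlias / /srv/myapp2/app.wsgi
    </VirtualHost>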
I don't think that's possible, because:
1. The same thread/forked process might serve pages from different VirtualHosts, so if it has loaded only Python, what happens when it needs to serve Ruby?
2. Certain directives apply to the web server as a whole and are not VirtualHost-specific; MaxRequestsPerChild and LoadModule are examples.
I think the only way is to have a "proxy" web server that dispatches requests to the real servers.
The proxy server has a mapping of domain name -> server-side language, and does nothing but transparently forward each request to the correct real server.
There are N real servers, each with a specific configuration and a single language supported and loaded. Each server listens on a different port, of course, and ideally only on the loopback device.
Apache mod_proxy should do the job
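Something along these lines (the ports are arbitrary examples):

    # Front-end Apache: one VirtualHost per app, each forwarding to the
    # language-specific backend on the loopback device
    <VirtualHost *:80>
        ServerName myapp1        # Ruby-only backend
        ProxyPass        / http://127.0.0.1:8001/
        ProxyPassReverse / http://127.0.0.1:8001/
    </VirtualHost>

    <VirtualHost *:80>
        ServerName myapp2        # Python-only backend
        ProxyPass        / http://127.0.0.1:8002/
        ProxyPassReverse / http://127.0.0.1:8002/
    </VirtualHost>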
My 2 cents
My idea is several Apache processes (each with a different config) listening on different addresses and/or ports, with an HTTP proxy (Squid or Apache) in front directing requests to the respective server. This has the possible added advantage of caching.