I have a 32 GB Ubuntu server where my site is hosted. I have installed XAMPP and I am running my site on it. My question is: what is the maximum number of concurrent connections Apache will handle, and how can I check that? To what extent can I increase it, and how?
My server must handle 5000 concurrent users at a time, so I have to configure it for that.
Generally the formula is:
(Total available memory - memory needed by the operating system) / memory each PHP process needs.
Honestly it's a bit hard to predict sometimes, so it might be worth doing some experimentation. The goal is that you never use more memory than is available, so your operating system never swaps.
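As a rough illustration for the 32 GB server in the question, assuming the OS and other services need about 2 GB and each PHP process peaks at about 50 MB (both numbers are guesses you must replace with your own measurements):

    (32768 MB - 2048 MB) / 50 MB ≈ 614 concurrent PHP processes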
However, you can also turn it around. 5000 concurrent requests is frankly a lot, so I'm going by your 5000 concurrent users.
Say 5000 users are actively using your application at a given time, and on average each of them makes 1 request every 30 seconds or so. And say that the average PHP script takes 100 ms to execute.
That's about 166 requests per second made by your users. Given that it takes 100 ms to fulfill a request, you need about 17 connections to serve all that up. Which is easy for any old server.
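Spelled out:

    5000 users / 30 s between requests ≈ 166 requests per second
    166 requests/s × 0.1 s per request ≈ 17 requests in flight at any moment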
Anyway, the key to all these types of dilemmas is to:
Make an educated guess
Measure
Make a better guess
Repeat
My website is always receiving 522 Connection timeout errors. I upgraded my VPS to a dedicated server, but it is still the same.
So I found this solution online: PHP-FPM tuning. What will happen if I increase it to the very maximum?
This is my configuration:
PHP-FPM Pool Options:
    Max Requests: 1000000000000000
    Process Idle Timeout: 1000000000000000
    Max Children: 1000000000000000
Limitations
The maximum value for those fields is bounded by the integer type.
The number of processes is limited by the kernel to roughly 25,000 to 50,000.
What happens when you set ridiculously high values?
Depending on the amount of traffic, you might be happy with the server for hours, weeks, or months. After a while, the server will probably become unresponsive.
The exact behaviour highly depends on many factors and might be totally unpredictable.
What should you do?
There are basic guidelines for where the settings should go, like spawning around cores * 2 processes (think hyperthreading) and so on; a sketch follows below.
The suggested values are just an orientation, not advice that fits all needs.
The settings highly depend on your code: how much memory it uses, how much CPU time, how much memory it leaks, ...
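Purely as an orientation, and assuming an ondemand pool like the one in the question, a saner set of values might look like this; every number here is an assumption you must replace with your own measurements (free memory divided by memory per child, and so on):

    ; illustrative values only; measure your own memory use per child first
    pm = ondemand
    pm.max_children = 64
    pm.process_idle_timeout = 10s
    pm.max_requests = 500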
522 Connection timeout
Various issues can lead to a connection timeout. Your PHP application might
experience fatal errors (segfaults),
have run into infinite loops,
or itself be waiting on locks or responses;
or the network/firewall might be badly configured.
Try to use a debugger on your code and watch the error log closely.
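For example, watching the logs could look like this (the paths are placeholders; they differ per distribution and PHP version):

    # FPM master log: crashes and "seems busy" warnings show up here
    tail -f /var/log/php-fpm.log
    # web server / proxy error log: upstream timeouts that surface as 522s
    tail -f /var/log/nginx/error.log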
I am running an HTTP API which should be called more than 30,000 times per minute, simultaneously.
Currently I can call it 1,200 times per minute. At 1,200 calls per minute, all the requests are completed and get a response immediately.
But if I call it 12,000 times per minute simultaneously, it takes 10 minutes to complete all the requests. And during those 10 minutes, I cannot browse any webpage on the server. It is very slow.
I am running CentOS 7
Server Specification
CPU: Intel® Xeon® E5-1650 v3 Hexa-Core (Haswell)
RAM: 256 GB DDR4 ECC
Hard drive: 2 x 480 GB SSD (software RAID 1)
Connection: 1 Gbit/s
The API is a simple PHP script that echoes the timestamp:

    <?php
    echo time();
I checked with the top command; there is no load on the server.
Please help me with this.
Thanks.
Sounds like a congestion problem.
It doesn't matter how quick your script/page handling is; if the next request arrives within the execution time of the previous one:
It is going to use resources (CPU, RAM, disk, network traffic and connections).
And it makes everything running in parallel with it slower.
There are multiple things you could do, but you need to figure out what exactly the problem is for your setup and decide if the measure produces the desired result.
If the core problem is that resources get hogged by parallel processes, you could lower connection limits so more connections go into wait mode, which keeps more resources available for actually handing out a page instead of congesting everything even more (a sketch follows the link below).
Take a look at this:
http://oxpedia.org/wiki/index.php?title=Tune_apache2_for_more_concurrent_connections
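To make that first suggestion concrete, with Apache's prefork MPM the cap looks roughly like this; the numbers are placeholders, not recommendations:

    <IfModule mpm_prefork_module>
        # requests beyond MaxRequestWorkers wait in the listen backlog
        # instead of competing for RAM and CPU
        StartServers            10
        MinSpareServers         10
        MaxSpareServers         20
        MaxRequestWorkers      150
        MaxConnectionsPerChild 1000
    </IfModule>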
If the server accepts connections quicker than it can handle them, you are going to have a problem whichever setting you change. It should start dropping connections at some point. If you cram French baguettes down its throat quicker than it can open its mouth, it is going to suffocate either way.
If the system gets overwhelmed at the network side of things (transfer speed limit, maximum possible number of concurrent connections for the OS, etc.), then you should consider using a load balancer. Only after the load balancer confirms the server has the capacity to actually take care of the page request will it send the user on.
This usually works well when you do any kind of processing which slows down page loading (server side code execution, large volumes of data etc).
Optimise performance
There are many ways to execute PHP code on a web server, and I assume you use Apache. I am no expert, but there are modes like CGI and FastCGI, for example, which can greatly enhance execution speed, and tweaking the settings connected to these can also show you what is happening. It could, for example, be that you use too few PHP processes/threads to handle that number of concurrent connections.
Have a look at something like this for example
http://blog.layershift.com/which-php-mode-apache-vs-cgi-vs-fastcgi/
There is no 'best fit for all' solution here. To fix it, you need to figure out what the bottleneck for the server is, and act accordingly.
12,000 calls per minute == 200 calls a second.
You could limit your test case to a multiple of those 200 and increase/decrease it while changing settings. Your goal is to dish that number of requests out in as short an amount of time as possible, thus ensuring the congestion never occurs.
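One way to experiment is ApacheBench (ab), which ships with Apache; the URL below is just a placeholder for your API endpoint:

    # 12,000 requests total, 200 at a time, mirroring the 200 calls/second figure above
    ab -n 12000 -c 200 http://your-server/api.php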
That said: consequences.
When you implement changes to optimise for the maximum number of page loads you want to achieve, you are inadvertently going to introduce other conditions. For example, if maximum RAM usage by Apache were the problem, then upping that limit will give better performance, but it heightens the chance that the OS runs out of memory when other processes also want to claim more memory.
Adding a load balancer adds another possible layer of failure and possible slowdowns. Yes, you prevent congestion, but is it worth the slowdown caused by the rerouting?
Upping performance will increase the load on the system, making it possible to accept more concurrent connections. So somewhere along the line a different bottleneck will pop up. High traffic on different processes could always end in said process crashing. Apache is a very well built web server, so it should in theory protect you against said problem; however, tweaking settings wrongly could still cause crashes.
So experiment with care and test before you use it live.
I'm in the midst of transferring a website from a GoDaddy shared server to an EC2 instance. Handling the traffic, which during peak times on a typical day is around 300 active visitors, has been problematic to say the least. My CPU usage slowly rises and eventually hits 100%, leaving the website essentially unusable. I've been attempting to resolve the issues from my error logs and was wondering if there could be a more significant problem to address.
After looking at the Apache error log I increased MaxClients [prefork (256) / worker (300) / ServerLimit (256)] ==> (500 / 500 / 500).
After looking at the PHP error log I increased [pm.max_children (50) / pm.start_servers (5) / pm.min_spare_servers (5) / pm.max_spare_servers (35)] ==> (100 / 10 / 10 / 70).
Even with these numbers I continue to have warnings:
[23-Feb-2014 04:34:47] WARNING: [pool www] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 32 children, there are 7 idle, and 83 total children
Artificially increasing these numbers doesn't appear to be a long-term solution. Any ideas?
EC2 and RDS monitoring graphs: [screenshots]
If your site doesn't use any kind of cache, start using one. There are many alternatives.
Is the load coming from users? Reduce as much load from bots as you can. Establish rules to block IPs or agents.
Use gzip if you can. It will add some extra work for the CPU, but it reduces transfer sizes, so connections finish sooner and Apache is relieved.
Avoid/prevent hotlinking. And remember: don't compress images or multimedia files, only text.
Try setting KeepAlive Off. This will make users generate more connections and increases HTTP traffic and handshake overhead, but it stops connections from being held open. You are the only one who can find out whether you need it on or off.
Alternatively, reduce KeepAliveTimeout. You may prefer this instead of turning KeepAlive off; see the example after this list.
Try to disable as many modules as you can. This may not be possible depending on the control you have over the server.
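For reference, the two KeepAlive suggestions above map to these Apache directives (the 2-second timeout is only an illustration; pick a value based on your own traffic):

    # keep connections open, but only briefly, so slots are freed quickly
    KeepAlive On
    MaxKeepAliveRequests 100
    KeepAliveTimeout 2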
There is not much more that we can tell without more details, except generic things like: optimize your server-side language pages, optimize database queries, etc.
Update based on your graphs
Assuming all the graphs have the same scale.
When your server hits 100% CPU, you have 0 reads and 0 writes, but you have high network usage. Where is that traffic going to and coming from? You could think that it is being served from the cache, but I'd be amazed if the cache alone could absorb all the user traffic. I mean, they would have to be using exactly the same files/pages without a single change. The problem with that is not just how improbable it is to have all the users hitting the same resource; it's that you also have some database access.
If the database is on the same server, the reads/writes from the database are so low that they don't even raise the bar on the other graph. If the database is on another server, it is normal that one doesn't affect the other.
But the database is working: even though it has almost no reads/writes, it is working, and the load is increasing, which points to problems with the queries. Maybe complex views, maybe very inefficient queries, maybe too many calculations in some queries. The queue depth seems to indicate that you have a bottleneck there.
I'd say that you have something making the database work really hard, and that is what is affecting you the most, if the database is on the same server. But it's not the whole story. Check that first.
Case
Currently I am developing an application using Laravel 4. I installed a profiler to see the stats of my app. This is the screenshot:
Questions
You can see that it consumes 12.25 MB of memory for each request (a very simple page) in my Vagrant box (Ubuntu 64-bit + Nginx + PHP 5.3.10 + MySQL). Do you think this is too much? It means that if I have 100 concurrent connections, the memory consumption will be about 1.2 GB. I think this is too much, but what do you think?
It loads 237 files for each request. Do you think this is too much?
When I deploy this app to my server (CentOS 6.4 with Apache + PHP 5.5.3 with Zend OPcache + MySQL), the memory consumption decreases dramatically. This is the screenshot from the server:
What do you think about this difference between my Mac and the server?
No, you don't really need to worry about this.
12MB is not really a large amount for a PHP program. And 100 concurrent connections is a lot.
To put it into context: assume your PHP page takes half a second to run; that would mean you'd need 12,000 page loads per minute to achieve a consistent 100 concurrent connections. That's a lot more traffic than any of my sites get, I can tell you that.
Of course, if your page takes longer than half a second to load, this number comes down quickly, and your 100 concurrent connections become a possibility much more easily.
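Working those assumed numbers backwards:

    100 concurrent connections / 0.5 s per request = 200 requests per second
    200 requests/s × 60 s = 12,000 page loads per minute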
This is one reason why it's a really good idea to focus on performance‡ -- the quicker your program can finish running, the quicker it can free up its memory for the next visitor. In fact unless you have a really major memory usage problem (which you don't), performance is probably more important in this context than the amount of memory being used.
In any case, if you do have 100 concurrent connections, you're likely to run into limits in your server software before you have problems with PHP. Apache has a default limit on the maximum number of simultaneous connections (MaxClients / MaxRequestWorkers), and depending on your distribution's defaults it may not be much higher than 100. (You can raise it, of course, but if you really are getting that kind of traffic, you'll likely want more servers anyway.)
As for the 12M memory usage, you're not really ever likely to get much less than that for a PHP program. PHP needs a chunk of memory just in order to run in the first place, and the framework will need a chunk too, so most of your 12M will be due to that. This means that although your small program may be using 12M, it does not follow that a larger program would use twice as much. So you probably don't need to worry too much about it.
If you do have high traffic, and performance issues as a result, there are various ways you can mitigate the problem. The main one is caching. PHP 5.5 comes with an OpCache module built in, which caches the compiled version of your scripts so that PHP doesn't have to parse and compile all those files on every request. For some systems, this can have a dramatic impact on performance.
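If you want to verify or tune it, these are the usual php.ini directives (the values shown are common examples, not requirements):

    ; enable the built-in opcode cache (bundled since PHP 5.5)
    opcache.enable=1
    opcache.memory_consumption=128
    opcache.max_accelerated_files=10000
    ; keep timestamp validation on unless you restart PHP on every deploy
    opcache.validate_timestamps=1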
There are also other layers of caching you can use, such as a server-level page cache like Varnish, which will cache your static pages so that PHP doesn't even need to be called if the page content hasn't changed.
(‡ of course there are other reasons for focussing on performance too, like keeping your visitors happy)
On a Debian web server (VPS) with a good CPU, 6 GB RAM, and a fast backbone Internet connection, I run a PHP application. PHP runs in "prefork" mode (incl. APC opcache), because whenever you search for PHP and the worker MPM, there are abundant warnings regarding thread safety. The PHP application is quite large, so each server process requires about 20 to 30 MB RAM. Sensitive data is processed by the application; therefore, all connections to the Apache server are SSL encrypted.
Typically, the application shows no or few images (about 1-3 files incl. CSS and JS per request) and the users send a new request every minute (30 seconds to 4 minutes, depending on the user).
Recently, this application faced a big storm of user requests (it was planned, not a DoS; about 2,500 concurrent users). While the CPU did fine (<50% use), my server quickly ran out of slots. The point is that, in prefork mode, each slot requires memory, and the 6 GB is just enough for a MaxClients of about 200 slots.
Problem 1: According to Apache server-status, most slots were occupied "..reading..", sometimes for 10 seconds and more, while the PHP processing takes 0.1 to 2 seconds. Little data is sent by the users, so I guess that this is actually the SSL handshake. This, of course, occupies lots of slots. (I also enabled and configured mod_reqtimeout to drop very slow clients and, following http://unhandledexpression.com/2013/01/25/5-easy-tips-to-accelerate-ssl/, used SSLHonorCipherOrder to prefer faster ciphers; SSLCertificateChainFile is also transmitted.)
Problem 2: If I enable KeepAlive (only 1 or 2 seconds) to reduce the SSL overhead, slots are kept open and are therefore occupied twice as long as the PHP processing would require.
Problem 3: If I actually wanted to serve 2,500 users and wanted to use KeepAlive to speed up SSL, I would require 2,500 slots. However, I won't have a machine with 32 GB RAM.
With enough users on the server to test its limits, I was stuck at about 110 requests per second, at about 50% CPU load on a quad-core system (max. 400%). Fewer req/sec if I (re-)enabled KeepAlive. 110 req/sec on a modern web server - this seems ridiculous! I cannot believe that this is actually all that Apache, PHP and SSL can do.
Is there a major flaw in my thinking? Am I running into a basic limitation of the prefork mode? Did I ignore the obvious? Is SSL actually such a performance-eater? Thanks for any hints!
I'm the author of that article about SSL performance. I don't think the handshake is responsible for the 8+ seconds on reads. You can get useful information by using http://www.webpagetest.org/ . The handshake is done when a request is marked as "connected".
My guess would be that the slow processing of the PHP app with a lot of concurrent users makes some users wait a lot longer.
Here are some ideas to get better performance:
I don't think KeepAlive would be a good idea if each client only does a request every minute.
You could enable SSL session tickets to reduce the handshake overhead (see the configuration sketch after this list).
MPM-Worker works fine for a lot of different setups, so I encourage you to try it out.
caching will probably not help you if the clients receive a different response every time.
you should test PHP-FPM, that could speed up the PHP code.
also, test APC, to cache precompiled PHP code.
I don't know anything about the architecture of your app, but you could defer sending the results: get the data from the client, send an immediate answer ("processing data..." or something like that), process the data in a background process, then on the next request, send the calculated answer.
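For the session-ticket idea above, here is a sketch of the mod_ssl directives involved; the cache path and size are assumptions that vary per distribution, and SSLSessionTickets needs a reasonably recent Apache 2.4:

    # resume previous TLS sessions instead of doing a full handshake each time
    SSLSessionCache        shmcb:/var/run/apache2/ssl_scache(512000)
    SSLSessionCacheTimeout 300
    SSLSessionTickets      on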