Here are the website specs for the configuration below.
Social networking site, ~70% dynamic content
Linux (CentOS 6.6)
Apache web server
PHP
Server specs × 2 (main server and SQL server):
4 x Intel® Xeon® E5-4640 v2 2.20GHz, 20M Cache, 8.0GT/s QPI, 10 Core
48 x 16GB (768 GB) RDIMM, 1600MT/s, Low Volt RAM
4 x 300GB 15K RPM SAS 6Gbps
Other storage: Dell Direct-Attached Storage (DAS)
Network: 10 Gbit/s
Assume that memcached, load balancers, and other extra servers exist and are not included here.
(I just need a rough calculation.)
My questions are:
How many concurrent users (users clicking at the same time) can this platform handle, assuming the average user's connectivity is 512 kbit/s?
Which factor does the number of concurrent users depend on most? (Is RAM > CPU > HDD the right order?)
I am not an expert; this question is for educational purposes only.
This question is very vague. The load you can support will depend on the complexity of your PHP code and database design; planning for (and even testing) load is a complicated topic.
You could also configure your hardware in a variety of ways which will have an impact on performance. Which RAID system you use will depend on whether your application is read or write heavy, as will your database design.
You will also need to consider whether to use virtualisation for backup/redundancy, which adds a layer of performance overhead...
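Since only a rough calculation is being asked for, here is a back-of-envelope sketch in PHP. The per-worker memory, requests-per-core, and think-time figures are assumptions chosen for illustration, not measurements of this workload:

<?php
// Rough ceilings for the hardware described above.
// Every per-request figure here is an assumption, not a measurement.

$networkBps       = 10 * 1000 * 1000 * 1000; // 10 Gbit/s uplink
$userBps          = 512 * 1000;              // 512 kbit/s per user
$bandwidthBound   = $networkBps / $userBps;  // ~19,500 simultaneous transfers

$totalRamMb       = 768 * 1024;              // 768 GB RAM
$reservedRamMb    = 64 * 1024;               // assumed headroom for OS, caches, buffers
$ramPerWorkerMb   = 40;                      // assumed size of one busy PHP worker
$ramBound         = ($totalRamMb - $reservedRamMb) / $ramPerWorkerMb;

$cores            = 4 * 10;                  // 4 sockets x 10 cores
$reqPerCorePerSec = 50;                      // assumed for a moderately heavy PHP page
$thinkTimeSec     = 30;                      // assumed time between clicks per user
$cpuBoundUsers    = $cores * $reqPerCorePerSec * $thinkTimeSec;

printf("Bandwidth bound: ~%d concurrent 512 kbit/s transfers\n", $bandwidthBound);
printf("RAM bound:       ~%d concurrent PHP workers\n", $ramBound);
printf("CPU bound:       ~%d active users, given the think time\n", $cpuBoundUsers);
// The practical answer is the minimum of these, and only load testing the real
// PHP code and database will show which resource saturates first.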
Related
I'm looking to build my own server for the fun of it and to learn about the process.
I will be setting up a LAMP stack on the server and using it as a back end for my mobile application, which has some thousands of daily active users.
But I am completely clueless as to what determines the limits of my server. I see on many hosting providers' websites that they offer a fixed number of concurrent users, like 20 or 100.
What determines the maximum number of concurrent users for a server? Is it just dependent on the server's hardware, like available RAM? Does it have to do with code or software? What happens to users who try to access the server when the maximum limit has already been reached?
Your question is not really about any code issue, but since you asked...
I will be setting up a LAMP stack on the server and using it as a back end for my mobile application, which has some thousands of daily active users.
If you care about performance, you should probably use LEMP (with Nginx) instead of LAMP.
But I am completely clueless as to what determines the limits of my server. I see on many hosting providers' websites that they offer a fixed number of concurrent users, like 20 or 100.
All servers you can buy can be divided into 3 simple groups:
Shared hosting (you and 1000 other folks share the same server).
VPS/VDS (the server is divided into N isolated parts and your resources are guaranteed: M CPU cores, K RAM, T GB HDD/SSD, S GB in/out traffic).
Dedicated servers (you own everything).
If you see any limitation like "X max connections" or "Y SQL queries per second", that means you are talking about shared hosting. You are not allowed to consume more than your limit, because otherwise all clients of that server may suffer from your website's or service's "popularity". Stick to at least a VPS/VDS if you don't want to deal with such limitations; then your only limits will be cores, RAM, disk space, and traffic usage.
What happens to users who try to access the server when the maximum limit has already been reached?
That depends on the client's and the server's configuration. The default behavior for most clients (like browsers) is to wait until a specific timeout. The default behavior for most web servers (Apache/Nginx) is to keep connections in a queue until the interpreter (PHP-CGI/PHP-FPM) becomes available, or to drop them if a timeout is reached, whichever comes first. But each actor in that scheme is configurable, from increasing the timeout to dropping the extra load immediately.
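For illustration, these are the kinds of server-side knobs that control that queue/timeout behaviour; the values below are placeholders, not recommendations:

# Apache (httpd.conf): how long to wait on a connection and how many
# requests to serve at once (this is MaxClients on the 2.2 prefork MPM)
Timeout           60
MaxRequestWorkers 150

; PHP-FPM pool (www.conf): worker count, socket backlog, per-request time limit
pm.max_children           = 20
listen.backlog            = 511
request_terminate_timeout = 30s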
P.S. If you really want to test your server's performance, you can always use load-testing/benchmarking software or write your own (flood your own server with connections until it dies).
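A minimal "write your own" sketch along those lines, using PHP's curl_multi; the target URL and batch sizes are placeholders, and it should only be pointed at a server you own and are allowed to load test:

<?php
// Fire batches of concurrent requests and report how long each batch takes.
$url         = 'http://your-server.example/'; // placeholder
$concurrency = 50;                            // simultaneous requests per batch
$batches     = 20;

for ($b = 0; $b < $batches; $b++) {
    $mh = curl_multi_init();
    $handles = [];
    for ($i = 0; $i < $concurrency; $i++) {
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_TIMEOUT, 10);
        curl_multi_add_handle($mh, $ch);
        $handles[] = $ch;
    }
    $start = microtime(true);
    do {
        curl_multi_exec($mh, $running);
        curl_multi_select($mh);
    } while ($running > 0);
    printf("batch %d: %d requests in %.2fs\n", $b, $concurrency, microtime(true) - $start);
    foreach ($handles as $ch) {
        curl_multi_remove_handle($mh, $ch);
        curl_close($ch);
    }
    curl_multi_close($mh);
}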
A few months ago we moved our e-commerce website to a VPS after struggling with poor performance on shared hosting platforms. To handle an increase in traffic (avg. 300-500 daily visitors), we tweaked our PHP-FPM settings and increased pm.max_children from 5 (the default) to 50. Currently, the PHP-FPM "pool" processes are showing high CPU usage (30-40%). Any tips to make those "pool" processes use less CPU? Thanks!
VPS Specs:
2 CPUs
Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz
4GB RAM
WHM: CentOS 7.8, v86.0.18
Ecommerce platform: OpenCart 3.0.2.0
FPM has nothing to do with the CPU usage; it's your code.
That said, don't just arbitrarily change the number of worker processes without a sound basis to do so, e.g. actual resource statistics.
With 300-500 daily users you're really unlikely to have 50 concurrent requests unless you're doing something strange.
The place I'm currently working at peaks at about 600 concurrent users and a grand maximum of 15-20 connections actually simultaneously doing anything. [Note: Much larger/broader backing infrastructure]
Do you really expect each CPU core to handle 25 simultaneous requests?
Can you reasonably fit 50 requests' worth of RAM into that 4GB?
Are you fine with 50 idle PHP processes consuming 10-15MB of RAM apiece?
All that said, we can't tell you what in your code is using up resources, and it's not possible for you to post enough information for us to make more than a vague guess. You need to put things in place to measure where that resource usage is happening, profile your code to find out why, and tune your infrastructure configuration to accommodate your specific application requirements.
There's no one "magic" config that works for everyone.
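As an illustration of the RAM question above, a hedged sizing sketch; the reserved amount and per-worker size are assumptions, so measure your own workers (e.g. with ps) before trusting the result:

<?php
// Upper bound on pm.max_children imposed by RAM on the 4GB VPS above.
$totalRamMb  = 4096;  // the VPS
$reservedMb  = 1024;  // assumed for OS, MySQL, WHM/cPanel, caches
$workerMb    = 40;    // assumed resident size of one *busy* PHP-FPM worker

$maxChildren = (int) floor(($totalRamMb - $reservedMb) / $workerMb);
echo "RAM-imposed ceiling for pm.max_children: {$maxChildren}\n"; // ~76 here

// CPU is the other ceiling: with 2 cores, 50 simultaneously running PHP
// requests just queue up on the scheduler, so the number above is a cap,
// not a target.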
Current config:
16GB RAM, 4 CPU cores, Apache 2.2 using the prefork MPM (MaxClients set to 700, since the average process size is ~22MB), with the suexec and suphp modules enabled (PHP 5.5).
The site's back end uses CakePHP 2 and stores data on a MySQL server. The site consists of text and some compressed images on the front end, with data processing on the back end.
Current traffic:
~60,000 unique visitors daily; at peaks I currently reach 700+ simultaneous connections quite easily, which fills MaxClients. When I run apachectl status at those moments, I can see that all the processes are in use.
The CPU is fine, but the RAM is all used up.
Potential traffic:
The traffic might grow to ~200,000 unique visitors daily, or even more. It might also not. But if it does, I want to be prepared, since I've already reached the limits of the current server with this config.
So I'm thinking about getting a much bigger server, with 192GB RAM and 20 cores, for example.
I could keep exactly the same config (which means I would then be able to handle about 10× my current traffic).
But I wonder whether there is a better config for my case that uses fewer resources while being just as efficient (and which has been proven to be so).
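A quick sanity check on the memory arithmetic above, as a hedged sketch; the amount reserved for MySQL and the OS is an assumption:

<?php
// How many 22MB prefork children actually fit in 16GB alongside MySQL?
$totalRamMb = 16 * 1024;
$reservedMb = 4 * 1024;   // assumed for MySQL buffers, OS, cron jobs, etc.
$childMb    = 22;         // average prefork child size quoted above

echo (int) floor(($totalRamMb - $reservedMb) / $childMb), "\n"; // ~558

// 700 children x 22MB is ~15GB on its own, which matches the observation
// that the RAM fills up while the CPU stays fine.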
RPS = rate per second
Suggestions to consider for your my.cnf [mysqld] section,
thread_cache_size=100 # from 8 to reduce threads_created
innodb_io_capacity=500 # from 200 to allow higher IOPS to your HDD
read_rnd_buffer_size=128K # from 256K to reduce handler_read_rnd_next RPS of 129,942
thread_concurrency=6 # from 10 to expedite query completion with your 4 cores
slow_query_log=ON # from OFF to allow log research and proactive correction
These changes will help reduce CPU busy time.
Observations:
A) MySQL 5.5.54 is past end of life; newer versions perform better.
B) These suggestions are just the beginning of possible improvements, even on 5.5.54.
C) You should be able to gracefully migrate to innodb_file_per_table once you turn the option on; your tables are already managed by the InnoDB engine.
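As a hedged illustration of that migration step (the table name is a placeholder):

SET GLOBAL innodb_file_per_table = ON;   -- new tables get their own .ibd file
ALTER TABLE your_table ENGINE=InnoDB;    -- rebuilds an existing table into its own file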
There is a server with the following parameters:
Operating system:
Debian 7.11 x86
Memory RAM:
1024 MB
SSD storage:
30000 MB
CPU:
2 × 2.8 GHz
I want to develop a social network project: the ability to search by certain parameters, send messages, add photos, and publish entries to a wall. That is the maximum scope for now. I'm interested in how much load this server can handle, how to optimize the site for it, at what load I will have to distribute the site across several servers, and most importantly, how to do that. Does this have to be taken into account right at the start of development, or will it be easy to redesign the project later?
At the beginning of the project there will be about 1,000 visitors; in the future I plan on about 5,000. I wonder what kind of server (and how many servers) I would need, because I have no experience with large projects.
How many concurrent users can an Apache + PHP solution support? Please don't get bogged down by MySQL constraints; we are using LAP without the M, as we are storing around 2-8 PB at the back end.
Why not try it out:
ab - Apache HTTP server benchmarking tool
As an alternative, Siege comes to mind.
Also see the answers to How to test a site rigorously
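For example, a first run against your own site might look like this (the URL is a placeholder):

# 1000 requests, 100 at a time, with ab
ab -n 1000 -c 100 http://your-site.example/

# a roughly equivalent Siege run: 100 simulated users for one minute
siege -c 100 -t 1M http://your-site.example/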
How many concurrent users
OK, there's your first issue: HTTP is stateless, so your web server can support an infinite number of users, as long as they don't actually submit any requests to it. Really, the limiting factor is the number of concurrent connections to the web server. This is going to be determined by:
1) the frequency at which users make requests
2) the length of time it takes to service the request
3) the keepAlive duration
The first two will vary enormously from application to application, while the last is something you can control: using keepalives will improve performance at the browser at the expense of hogging memory (and therefore slowing things down) at the server. Using a keepalive of more than 2 seconds is probably a waste of time.
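Putting those three factors together, concurrent connections are roughly the request rate multiplied by how long each connection stays open (Little's law). A small sketch with assumed numbers:

<?php
// All figures are assumptions for illustration.
$requestsPerSec = 100;   // factor 1: how often users make requests
$serviceTimeSec = 0.2;   // factor 2: time to generate and send a response
$keepAliveSec   = 2.0;   // factor 3: how long the connection lingers afterwards

$concurrent = $requestsPerSec * ($serviceTimeSec + $keepAliveSec);
echo $concurrent, "\n";  // 220 connections open at any instant

// Note how the keepalive dwarfs the service time: trimming KeepAliveTimeout
// is often the cheapest way to reduce the number of open connections.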
There are good books available on Apache performance tuning which will allow you to optimize the webserver for your application.
Of course, if you have a common data substrate, then there's nothing to stop you adding more web servers on top of the storage (unlimited horizontal scalability), so it's the storage substrate which ultimately limits the capacity and performance of the system (until you look at tuning the code and storage). And you get the added benefit of improved resilience.
Certainly a fairly low-end PC (2GHz CPU, 2GB RAM) should comfortably handle upwards of 500 concurrent connections. If you're running a database-centric application, you'll also get more benefit out of adding servers than out of upgrading the CPU/RAM.