I have a question.
I own a 128MB VPS with a simple blog that gets just a hundred hits per day.
I have nginx + php5-fpm installed. Considering the low traffic and the RAM, I decided to set fpm to static with 1 server running. While I was doing some random tests, like running PHP scripts through HTTP that last over 30 minutes, I tried to open the blog from the same machine and noticed that the site was basically unreachable. So I went to the configuration and read this:
; The number of child processes to be created when pm is set to 'static' and the
; maximum number of child processes to be created when pm is set to 'dynamic'.
; **This value sets the limit on the number of simultaneous requests that will be
; served**
What shocked me the most was that I didn't know this, because I always assumed that a PHP child would handle hundreds of requests at the same time, like an HTTP server would!
Did I get it right?
If, for example, I launch 2 php-fpm children and run 2 "long scripts" at the same time, will all the sites using the same PHP backend be unreachable? How is this usable?
You may think: -duh! a PHP script (web page) is usually processed in 100ms- ... no doubt about that, but what happens if I have pages that take about 10 seconds each, 10 visitors, and php-fpm with 5 servers, so it only accepts 5 requests at a time? Will they all be queued, or will they experience timeouts?
I'm honestly used to running sites on Windows with Apache and mod_php, and I never experienced these issues, because apparently those limits don't apply there, it being a different way of using PHP.
This also raises another question. If I have file_1.php with sleep(20) and file_2.php with just an echo, and I run file_1 and then file_2, with the FastCGI setup the second file will require the creation of another server to handle the PHP request, using 4MB more RAM. If I do the same with apache/mod_php, the second file will only use 30KB more of RAM (in the Apache server). Considering this, why is mod_php considered the "bad guy" if the RAM used is actually less? ... I know I'm missing the big picture here.
You've basically got it right. You configured a static number of workers (and that number was "one") -- so that's exactly what you got.
But you don't understand quite how things typically work, since you say:
I always assumed that a PHP child would handle hundreds of requests
at the same time, like an HTTP server would!
I'm not really familiar with nginx, but consider the typical mod_php setup in apache. If you're using mod_php, then you're using the prefork mpm for apache. So every concurrent http request is handled by a distinct httpd process (no threads). If you're tuning your apache/mod_php server for low memory, you're going to have to tweak apache settings to limit the number of processes it will spawn (in particular, MaxClients).
Failing to tune this stuff means that when you get a large traffic spike, apache starts spawning a huge number of heavy processes (remember, it's mod_php, so you have the whole PHP interpreter embedded in each httpd process), and you run out of memory, and then everything starts swapping, and your server starts emitting smoke.
Tuned properly (meaning: tuned so that you ignore requests instead of allocating memory you don't have for more processes), clients will time out, but when traffic subsides, things go back to normal.
Compare that with fpm, and a smarter web server architecture like apache-worker, or nginx. Now you have some, much larger, pool of threads (still configurable!) to handle http requests, and a separate pool of php-fpm processes to handle just the requests that require PHP. It's basically the same thing: if you don't set limits on how many processes/threads can be created, you are asking for trouble. But if you do tune, you come out ahead, since only a fraction of your requests use PHP. So essentially, the average amount of memory needed per http request is lower -- thus you can handle more requests with the same amount of memory.
But setting the number to "1" is too extreme. At "1", it doesn't even matter if you choose static or dynamic, since either way you'll just have one php-fpm process.
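For concreteness, here is a minimal php-fpm pool sketch showing which knobs exist, assuming a Debian-style pool file such as /etc/php5/fpm/pool.d/www.conf (the path and the exact numbers are illustrative only; the explicit recommendation further down uses pm = static):

; keep a small but elastic pool of PHP workers
pm = dynamic
; hard cap on simultaneous PHP requests
pm.max_children = 4
; workers created at startup
pm.start_servers = 1
; keep at least one idle worker around
pm.min_spare_servers = 1
; reap idle workers beyond this
pm.max_spare_servers = 2

With this split, the web server keeps serving static files with its own worker pool even while every PHP child is busy; only the PHP-bound requests have to wait.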
So, to try to give explicit answers to particular questions:
You may think: -duh! a PHP script (web page) is usually processed in 100ms- ... no doubt about that, but what happens if I have pages that take about 10 seconds each, 10 visitors, and php-fpm with 5 servers, so it only accepts 5 requests at a time? Will they all be queued, or will they experience timeouts?
Yes, they'll all queue, and eventually timeout. The fact that you regularly have scripts that take 10 seconds to run is the real culprit here, though. There are lots of ways to architect around that (caching, work queues, etc), but the right solution depends entirely on what you're trying to do.
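For instance, one of the simplest architectures is to cache the expensive output so the 10-second work doesn't run on every hit. A rough sketch, where the cache path and build_expensive_page() are made-up placeholders:

<?php
// Serve a cached copy if it is fresh enough; otherwise rebuild it
// (the slow part) and cache it for the next visitor.
$cacheFile = '/tmp/slow-page.cache';   // placeholder path
$maxAge    = 300;                      // seconds the cache stays valid

if (is_file($cacheFile) && (time() - filemtime($cacheFile)) < $maxAge) {
    echo file_get_contents($cacheFile); // fast path: no heavy work
    exit;
}

$html = build_expensive_page();         // placeholder for the ~10s work
file_put_contents($cacheFile, $html);
echo $html;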
I'm honestly used to running sites on Windows with Apache and mod_php, and I never experienced these issues, because apparently those limits don't apply there, it being a different way of using PHP.
They do apply. You can set up an apache/mod_php server the same way as you have with nginx/php-fpm -- just set apache's MaxClients to 1!
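For illustration only, the equivalent Apache 2.2 prefork settings would be something like this (not something you would actually want in production):

<IfModule mpm_prefork_module>
    StartServers     1
    MinSpareServers  1
    MaxSpareServers  1
    MaxClients       1
</IfModule>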
This also raises another question. If I have file_1.php with sleep(20) and file_2.php with just an echo, and I run file_1 and then file_2, with the FastCGI setup the second file will require the creation of another server to handle the PHP request, using 4MB more RAM. If I do the same with apache/mod_php, the second file will only use 30KB more of RAM (in the Apache server). Considering this, why is mod_php considered the "bad guy" if the RAM used is actually less? ... I know I'm missing the big picture here.
Especially on Linux, lots of things that report memory usage can be very misleading. But think about it this way: that 30KB is negligible. That's because most of PHP's memory was already allocated when some httpd process got started.
A 128MB VPS is pretty tight, but it should be able to handle more than one PHP process.
If you want to optimize, do something like this:
For PHP:
pm = static
pm.max_children=4
For nginx, figure out how to control the process and thread count (whatever the equivalents of Apache's MaxClients, StartServers, MinSpareServers, and MaxSpareServers are).
Then figure out how to generate some realistic load (apachebench, siege, jmeter, etc.). Use vmstat, free, and top to watch your memory usage. Adjust pm.max_children and the nginx settings to be as high as possible without causing any significant swapping (according to vmstat).
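For example, a rough testing loop might look like this (assuming apachebench is installed and http://your-blog.example/ stands in for a representative page):

# fire 1000 requests, 10 at a time, at the page you care about
ab -n 1000 -c 10 http://your-blog.example/
# in a second terminal, watch the si/so (swap-in/swap-out) columns
vmstat 1
# and keep an eye on overall memory use
free -m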
Related
I am running an HTTP API which should be called more than 30,000 times per minute simultaneously.
Currently I can call it 1,200 times per minute. If I call it 1,200 times per minute, all the requests are completed and get a response immediately.
But if I call it 12,000 times per minute simultaneously, it takes 10 minutes to complete all the requests. And during those 10 minutes, I cannot browse any webpage on the server. It is very slow.
I am running CentOS 7
Server Specification
CPU: Intel® Xeon® E5-1650 v3 Hexa-Core (Haswell)
RAM: 256 GB DDR4 ECC
Hard drive: 2 x 480 GB SSD (Software-RAID 1)
Connection: 1 Gbit/s
API - a simple PHP script that echoes the timestamp:
echo time();
I checked with the top command; there is no load on the server.
please help me on it
Thanks
Sounds like a congestion problem.
It doesn't matter how quick your script/page handling is; if the next request arrives within the execution time of the previous one:
It is going to use resources (cpu, ram, disk, network traffic and connections).
And make everything parallel to it slower.
There are multiple things you could do, but you need to figure out what exactly the problem is for your setup and decide if the measure produces the desired result.
If the core problem is that resources get hogged by parallel processes, you could lower connection limits so more connections go into wait mode, which keeps more resources available for actually handing out a page instead of congesting everything even more.
Take a look at this:
http://oxpedia.org/wiki/index.php?title=Tune_apache2_for_more_concurrent_connections
If the server accepts connections quicker than it can handle them, you are going to have a problem whichever setting you change. It should start dropping connections at some point. If you cram French baguettes down its throat quicker than it can open its mouth, it is going to suffocate either way.
If the system gets overwhelmed on the network side of things (transfer speed limit, maximum number of concurrent connections for the OS, etc.), then you should consider using a load balancer. Only after the load balancer confirms the server has the capacity to actually take care of the page request will it send the user on.
This usually works well when you do any kind of processing which slows down page loading (server side code execution, large volumes of data etc).
Optimise performance
There are many ways to execute PHP code on a web server, and I assume you use Apache. I am no expert, but there are modes like CGI and FastCGI, for example, which can greatly enhance execution speed. Tweaking the settings connected to these can also show you what is happening. It could, for example, be that you use too small a number of PHP processes to handle that number of concurrent connections.
Have a look at something like this for example
http://blog.layershift.com/which-php-mode-apache-vs-cgi-vs-fastcgi/
There is no 'best fit for all' solution here. To fix it, you need to figure out what the bottleneck for the server is, and act accordingly.
12,000 calls per minute == 200 calls a second.
You could limit your test case to a multiple of those 200 and increase/decrease it while changing settings. Your goal is to dish that number of requests out in as short an amount of time as possible, thus ensuring the congestion never occurs.
That said: consequences.
When you implement changes to optimise for the maximum number of page loads you want to achieve, you are inadvertently going to introduce other conditions. For example, if maximum RAM usage by Apache is the problem, then upping that limit will give better performance, but it heightens the chance that the OS runs out of memory when other processes also want to claim more memory.
Adding a load balancer adds another possible layer of failure and possible slowdowns. Yes, you prevent congestion, but is it worth the slowdown caused by the rerouting?
Upping performance will increase the load on the system, making it possible to accept more concurrent connections. So somewhere along the line a different bottleneck will pop up. High traffic on different processes could always end in said process crashing. Apache is a very well built web server, so it should in theory protect you against that problem; however, tweaking settings wrongly could still cause crashes.
So experiment with care and test before you use it live.
I'm running a web application using server-sent events (eventsource). I've been working to properly set up the apache and PHP configuration files so that the program will accommodate all of my users and not timeout. I've already set the timeout to an appropriate amount of time in both PHP and apache, but I'm worried about the Server limit, Max Clients, and Max Requests Per Child. I need to connect around 500 users to the php file that runs the eventsource and run a PHP script every time a message is sent to the server. The eventsource file seems to take up about a 1/4 MB of ram and a negligible amount of processing power. Can someone explain what these limits do, and advise me on how best to set them?
Each SSE connection will use a dedicated PHP process, so counts as one of the Apache processes. (Each will also be using a socket and a local port.)
500 simultaneous clients is a lot, even more so if they all use PHP, and you are going to need a lot of memory on your server. But, if you have enough memory, set both MaxClients and ServerLimit to 500. (I'd suggest starting with 50 or 100, run some stress tests, and keep increasing those limits and repeating until you see your server starting to swap.)
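For the prefork MPM, that would look roughly like this (a sketch only; ServerLimit must be at least as large as MaxClients):

<IfModule mpm_prefork_module>
    ServerLimit  500
    MaxClients   500
</IfModule>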
For stress-testing SSE, I've found SlimerJS to be the best choice. (The WebKit in PhantomJS (as of 1.9.x) is too old to support SSE.) Selenium would do the job too. Make sure to keep clients and server on different machines, as 100+ clients will also use a lot of memory and load.
I've written some JS scripts on my school's VLE.
It uses the UWA Widget Format and to communicate with a locally-hosted PHP script, it uses a proxy and AJAX requests.
Recently we've moved the aforementioned locally-hosted server from a horrible XP-based WAMP server to a virtual Server 2008 distribution running IIS and FastCGI PHP.
Since then - or maybe it was before and I just didn't notice - my AJAX calls are starting to take in excess of 1 second to run.
I've run the associated PHP script's queries on PHPMyAdmin and, for example, the associated getCategories SQL takes 0.00023s to run so I don't think the problem lies there.
I've pinged the server and it consistently returns <1ms as it should for a local network server on a relatively small scale network. The VLE is on this same network.
My question is this: what steps can I take to determine where the "bottleneck" might be?
First of all, test how long your script is actually running:
Simplest way to profile a PHP script
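A crude version of that profiling, as a sketch you can wrap around the suspect section (run_suspect_code() is a placeholder for whatever the AJAX endpoint actually does):

<?php
// Time one section of the script; microtime(true) returns seconds as a float.
$start = microtime(true);

run_suspect_code();   // placeholder for the code being measured

$elapsed = microtime(true) - $start;
error_log(sprintf('suspect section took %.4f s', $elapsed));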
Secondly, you should check the disk activity on the server. If it is running too many FastCGI processes for the amount of available RAM, it will swap and it will be very slow. If the disk activity is very high, then you know you've found your culprit. Solve it by reducing the maximum number of fastcgi processes or by increasing the amount of server RAM.
I have a problem with many sleeping scripts on Apache. When I start Apache and clear the cached memory so that 13GB is free, free memory still keeps falling to approx. 100MB. Many PHP processes go to sleep and keep approx. 19MB of cached memory per script.
Sometimes a sleeping script holds on to as much as 700MB of memory. Is there some setting that tells Apache that when the script ends, the process should end too?
The resource and time limits of PHP running in Apache are controlled by php.ini.
These two settings are self-explanatory:
max_execution_time = 30
memory_limit = 128M
So I don't think the script will sleep forever and exhaust your memory.
Please check these settings first.
No, your problem isn't "with many sleeping scripts on Apache"; it's that you are using sleeping scripts on Apache at all. Sleeping is anathema to most web servers, which are designed with one purpose in mind: to receive and respond to stateless HTTP(S) web requests.
Sleeping, per se, isn't an absolute evil but if the total delay is more than a few seconds then you are doing something very wrong, and you need to explore alternative approaches.
For example, phpBB implements a pseudo-cron type function where web requests schedule future work activities (e.g. application-related table maintenance every hour, say) by using a queued-request table and a common check function. If a scheduled task is due, a one-pixel image request is used to generate an async call-back to action the activity. (By doing an image load, this request is decoupled from the URI that incidentally triggered it.) A rough sketch of this idea appears after the alternative approaches below.
Another approach is a variant which uses an independent daemon or cron job to service this queued-request table.
Another approach is simply to execute a child process which then forks to daemonise itself and detach from the Apache worker process.
And another approach ...
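As promised above, a very rough sketch of the phpBB-style pseudo-cron idea, using a file timestamp instead of a real queued-request table (last_run.txt and cron.php are made-up names for illustration):

<?php
// On an ordinary page view, check whether the hourly task is overdue.
// If it is, emit a one-pixel image tag; the browser then fetches cron.php
// asynchronously, decoupled from the page that triggered the check.
$interval = 3600; // run the task at most once per hour
$lastRun  = @filemtime('last_run.txt') ?: 0;

if (time() - $lastRun >= $interval) {
    echo '<img src="cron.php" width="1" height="1" alt="" />';
}

(cron.php would do the actual maintenance work and touch last_run.txt when it finishes.)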
We are using JMeter to test our PHP application running on the Apache 2 web server. I can load up JMeter to use 25 or 50 threads and the load on the server does not increase; however, the response time from the server does. The more threads, the slower the response time. It seems like JMeter or Apache is queuing the requests. I have changed the MaxClients value in the Apache web server configuration file, but this does not change the problem. While JMeter is running I can use the application and get respectable response times. What gives? I would expect to be able to tax my server down to 0% idle by increasing the number of threads. Can anyone help point me in the right direction?
Update: I found that if I remove sessions from my application I am able to simulate a full load on the server. I have tried to re-enable sessions and use an HTTP Cookie Manager for each thread, but it does not seem to make an impact.
You need to identify where the bottleneck is occurring, and then attempt to remediate the problem.
The JMeter client should be running on a well-equipped machine. I prefer a Solaris/Unix server running the JVM, but for <200 threads, a modern Windows machine will do just fine. JMeter can become a bottleneck, and you won't get any meaningful results once it does. Additionally, it should run on a separate machine from the one you're testing, and preferably on the same network. The WAN latency can become a problem if your test rig and server are far apart.
The second thing to check is your Apache workers. Apache has a module - mod_status - which will show you the state of every worker. It's possible to have your pool size set too low. From mod_status, you'll be able to see how many workers are in use. Too few, and Apache won't have any workers left to process requests, and the requests will queue up. Too many, and Apache may exhaust the memory on the box it's running on.
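If mod_status isn't already enabled, the configuration fragment typically looks something like this (Apache 2.2 syntax; restrict access to hosts you trust):

ExtendedStatus On
<Location /server-status>
    SetHandler server-status
    Order deny,allow
    Deny from all
    Allow from 127.0.0.1
</Location>

Browsing /server-status then shows each worker's state (waiting, reading, sending, and so on).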
Next, you should check your database. If it's on a separate machine, the database could have an IO or CPU shortage.
If you're hitting a bottleneck, and the server and db are on the same machine, you'll generally hit a CPU, RAM, or IO limit. I listed those in the order in which they are easiest to identify. If you have a CPU-bound app, you can easily see your CPU usage go to 100%. If you run out of RAM, your machine will start swapping. On both Windows and Unix it's fairly easy to see your available free RAM. Lastly, you may be IO bound. This too can be monitored using various tools or stats, but it's not as obvious as CPU.
Lastly, specifically to your question, the one thing that stands out is that it's possible to have a huge number of session files stored in a single directory. Often PHP stores session information in files. If this directory gets large, it will take an increasingly long amount of time for PHP to find the session. If you ran your test with cookies turned off, the PHP app may have created thousands of session files, one for each request. On a Windows server, it will slow down faster than on a Unix server, due to differences in the way directories are stored on the two operating systems.
Are you using a constant throughput timer? If Jmeter can't service the throughput with the threads allocated to it, you'll see this queueing and blowouts in the response time. To figure out if this is the problem, try adding more threads.
I also found a report of this happening when there are javascript calls inside the script. In this instance, try to move javascript calls to the test plan element at the top of the script, or look for ways to pre-calculate the value.
Try checking a static file served by apache and not by PHP to see if the problem is in the Apache config or the PHP config.
Also check your network connections and configuration. Our JMeter testing was progressing nicely until it hit a wall. Eventually we realized we only had a 100Mb connection and it was saturated; going to gigabit fixed it. Your network cards or switch may be running at a lower speed than you think, especially if their speed setting is "auto".