Heroku 1x vs 2x web dynos - php

I've been running some Facebook apps in a Heroku-hosted environment with a fair bit of traffic.
Each app consists of two PHP files (a one-page app plus an AJAX entry point for data). They place no real demands on server memory; most resources go to serving images, and the heaviest CPU work is a curl request to a web API or a database call to fetch some data.
Due to the traffic and the demands on server concurrency (10-20 dynos on average), I've been researching how to configure an app for maximum performance, and found that the biggest bottleneck comes from a limitation imposed by Heroku's default boot.sh Apache deploy script, which sets MaxClients=1 on my application by default.
This has the effect of limiting Apache to a single thread for handling HTTP requests.
In come the new and improved 2x dynos, which cost twice as much and promise twice the RAM and CPU performance.
Now I can understand that RAM won't make much of a difference when the main bottleneck is the handling of HTTP requests, but I would assume the new dynos set MaxClients=2 (I haven't been able to check yet), so I'm wondering whether I'd be better off running my app on half as many 2x dynos as the number of 1x dynos I normally use. Does anyone know the answer?

Upping to a 2x dyno won't change your MaxClients. You need to change your application configuration to support more than one concurrent request.
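For what it's worth, the knobs involved are Apache's prefork MPM directives. How you get Heroku's boot script to load a custom config depends on your buildpack (the include file below is hypothetical), but the directives themselves are standard Apache:

    # conf/httpd-concurrency.conf -- hypothetical include; how boot.sh
    # picks it up depends on your Heroku buildpack setup
    <IfModule prefork.c>
        StartServers          4    # worker processes launched at boot
        MinSpareServers       2    # idle workers kept in reserve
        MaxSpareServers       4
        MaxClients            8    # max simultaneous requests per dyno
        MaxRequestsPerChild   1000 # recycle workers to cap memory creep
    </IfModule>

Whatever values you pick, size MaxClients to the dyno's RAM: each PHP worker needs its own memory headroom.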

Related

Slow Performance on Scaled PHP/MySQL Application on OpenShift

I have two scaled PHP installations on OpenShift using Small Gears (one for PHP and one for MySQL per application, so four small gears altogether), and both installations are extremely slow, only occasionally bearable. I've seen a lot of questions about this that say to disable functionality (like WordPress plugins, for example), but my application ran SUPER fast on shared hosting with the current feature set and can't exist without the features currently in use.
My hope in using OpenShift was to avoid having to do server config and to have an instantly scalable platform. So my question is: what needs to be done to get this configuration up to speed so I can keep using OpenShift? Or is it not possible without the same amount of time and energy it would take me to just set up my own AWS resources?
Should I be using Medium or Large Gears for my application to have the speed necessary to run a real-world application (even though there's only one user)?
For additional information:
I am currently the only user (no one else is accessing the site).
I frequently get HTTP 503 errors (after refreshing a few times, these go away).
Using SSH to look at the gear's resources, I never come close to pegging the CPU; usage stays within the 20-50% range.
Memory usage reports 7416848k used of 7513696k available, with the httpd processes taking a total of 1% of memory.
I have done zero customization; all I did was create the app with PHP 5.4 and MySQL (scaled) and do a git push of my code.

Laravel 4 memory consumption concerns

Case
Currently I am developing an application using Laravel 4. I installed a profiler to see stats about my app. This is the screenshot:
Questions
You can see that it consumes 12.25 MB of memory for each request (a very simple page) in my Vagrant box (Ubuntu 64-bit + Nginx + PHP 5.3.10 + MySQL). Do you think this is too much? That means if I have 100 concurrent connections, memory consumption will be over 1 GB. I think this is too much, but what do you think?
It loads 237 files for each request. Do you think this is too much?
When I deploy this app to my server (CentOS 6.4 with Apache + PHP 5.5.3 with Zend OPcache + MySQL), the memory consumption decreases dramatically. This is the screenshot from the server:
What do you think about this difference between my Mac and the server?
No, you don't really need to worry about this.
12MB is not really a large amount for a PHP program. And 100 concurrent connections is a lot.
To put it into context: if your PHP page takes half a second to run, you'd need 12,000 page loads per minute to sustain a consistent 100 concurrent connections. That's a lot more traffic than any of my sites get, I can tell you that.
Of course, if your page takes longer than half a second to load, this number will come down quickly, and your 100 concurrent connections can become a possibility much more easily.
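To make that arithmetic explicit (it's just Little's law: concurrency = throughput × response time), a quick sketch:

    <?php
    // Little's law: concurrent connections = requests/sec * sec/request
    $responseTime = 0.5;                          // seconds per page load
    $concurrent   = 100;                          // target concurrency
    $throughput   = $concurrent / $responseTime;  // 200 requests/sec
    echo ($throughput * 60) . " page loads per minute\n"; // 12000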
This is one reason why it's a really good idea to focus on performance‡ -- the quicker your program can finish running, the quicker it can free up its memory for the next visitor. In fact unless you have a really major memory usage problem (which you don't), performance is probably more important in this context than the amount of memory being used.
In any case, if you do have 100 concurrent connections, you're likely to hit issues with your server software before you hit them with PHP. Apache has a default limit on the maximum number of concurrent connections (MaxClients), and depending on your configuration it may well be below 100. (You can raise it, of course, but if you really are getting that kind of traffic, you'll likely want more servers anyway.)
As for the 12M memory usage, you're not really ever likely to get much less than that for a PHP program. PHP needs a chunk of memory just in order to run in the first place, and the framework will need a chunk too, so most of your 12M will be due to that. This means that although your small program may be using 12M, it does not follow that a larger program would use twice as much. So you probably don't need to worry too much about it.
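If you want to see how much of that baseline is the interpreter and framework versus your own code, a minimal sketch using PHP's built-in memory functions:

    <?php
    // Compare the startup baseline against the cost of your own work.
    $baseline = memory_get_usage(true);              // bytes allocated so far
    $data = range(1, 10000);                         // stand-in for real work
    $afterWork = memory_get_usage(true);
    printf("baseline:  %.2f MB\n", $baseline / 1048576);
    printf("work cost: %.2f MB\n", ($afterWork - $baseline) / 1048576);
    printf("peak:      %.2f MB\n", memory_get_peak_usage(true) / 1048576);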
If you do have high traffic, and performance issues as a result, there are various ways you can mitigate the problem. The main one is caching. PHP 5.5 comes with an OPcache module built in, which caches the compiled bytecode of your scripts so PHP doesn't have to re-parse and re-compile every file on every request. For some systems, this can have a dramatic impact on performance.
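Enabling it is a php.ini change; the values below are common starting points, not tuned recommendations:

    ; php.ini -- OPcache ships with PHP 5.5+
    opcache.enable=1
    opcache.memory_consumption=128      ; MB of shared memory for cached bytecode
    opcache.max_accelerated_files=4000  ; raise if your app includes more files
    opcache.revalidate_freq=60          ; seconds between file-change checks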
There are also other layers of caching you can use, such as a server-level page cache like Varnish, which will cache your static pages so that PHP doesn't even need to be called if the page content hasn't changed.
(‡ of course there are other reasons for focussing on performance too, like keeping your visitors happy)

How to stabilise PHP response times when dealing with multiple simultaneous requests?

I'm building a PHP application with an API that has to be able to respond very rapidly (within 100 ms) to all requests, and must be able to handle up to 200 queries per second (requests are in JSON, and responses require a DB lookup + save every time). My code runs easily fast enough (very consistently around 30 ms) for single requests, but as soon as it has to respond to multiple requests per second, the response times start jumping all over the place.
I don't think it's a memory problem (PHP's memory limit is set to 128MB and the code's memory usage is only around 3.5MB) or a MySQL problem (the code before any DB request is as likely to bottleneck as the bit that interacts with the DB).
Because the timing is so important, I need to get the response times as consistent as possible. So my question is: are there any simple tweaks I can make (to php.ini or Apache) to stabilise PHP's response times when handling multiple simultaneous requests?
In my experience, one of the slowest parts of a server (and one of the easiest bottlenecks to fix) is the filesystem and hard drives. Speeding this up will help out in all other areas.
For example, you could upgrade the drive where your httpdocs and database reside, putting them on an SSD, or even create a RAM disk and place all the files on it.
Alternatively, you can set up your database to use the MEMORY storage engine.
(Related info here too)
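As a sketch (the table names are just examples), switching to the MEMORY engine looks like this; bear in mind MEMORY tables vanish on restart and don't support TEXT/BLOB columns:

    -- Convert an expendable table (e.g. a cache) in place:
    ALTER TABLE session_cache ENGINE = MEMORY;

    -- Or create one from scratch:
    CREATE TABLE hot_lookup (
        id  INT NOT NULL PRIMARY KEY,
        val VARCHAR(255)
    ) ENGINE = MEMORY;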
Of course, for all of that you'll need a lot of physical memory. It's also important to note that if your web/app hosting is shared, you're going to have problems with shared memory.
Tune MySQL (see the my.cnf sketch after this list)
Tune Apache
Performance tune PHP
Get Zend Optimizer enabled, or look at APC, or eAccelerator
Here's some basic LAMP tuning tips from IBM
Here's a slideshare with some good advice as well
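For the MySQL item above, a minimal my.cnf sketch; these values are illustrative guesses you'd size to your own RAM and workload:

    # /etc/my.cnf -- illustrative values only
    [mysqld]
    innodb_buffer_pool_size = 512M   # biggest single win for InnoDB workloads
    key_buffer_size         = 64M    # MyISAM index cache
    query_cache_type        = 1
    query_cache_size        = 32M    # helps read-heavy, low-write workloads
    tmp_table_size          = 64M
    max_heap_table_size     = 64M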

How many concurrent users can an Apache + PHP solution support?

How many concurrent users can an Apache + PHP solution support? Please don't be bogged down by MySQL constraints – we are using LAP without the M, as we are storing around 2-8 PB at the back end.
Why not try it out:
ab - Apache HTTP server benchmarking tool
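A minimal ab run looks like this (URL and numbers are placeholders; adjust to your setup):

    # 10,000 requests, 100 concurrent, with keep-alive enabled
    ab -k -n 10000 -c 100 http://www.example.com/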
As an alternative Siege comes to mind.
Also see the answers to How to test a site rigorously
How many concurrent users
Ok, there's your first issue - HTTP is stateless, so your webserver can support an infinite number of users - as long as they don't actually submit any requests to the webserver. Really the limiting factor is the number of concurrent connections to the webserver. This is going to be determined by:
1) the frequency at which users make requests
2) the length of time it takes to service the request
3) the keepAlive duration
The first two will vary enormously from application to application, while the latter is something you can control - using keepalives will improve performance at the browser at the expense of hogging memory (and therefore slowing things down) at the server. Using a keepalive of more than 2 seconds is probably a waste of time.
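In Apache config terms, that advice translates to something like the following (values illustrative):

    # httpd.conf
    KeepAlive On
    MaxKeepAliveRequests 100   # requests allowed per persistent connection
    KeepAliveTimeout 2         # seconds an idle connection is held open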
There are good books available on Apache performance tuning which will allow you to optimize the webserver for your application.
Of course, if you have a common data substrate, then there's nothing to stop you adding more webservers on top of the storage (unlimited horizontal scalability) - so it's the storage substrate which ultimately limits the capacity / performance of the system (until you look at tuning the code and storage). And you get the added benefit of improved resilience.
Certainly a fairly low-end PC (2 GHz CPU, 2 GB RAM) should comfortably handle upwards of 500 concurrent connections. Particularly if you're running a database-centric application, you'll get more benefit from adding servers than from upgrading the CPU/RAM.
HTH
C.

JMeter multiple users problem

We are using JMeter to test our PHP application running on the Apache 2 web server. I can load up JMeter to use 25 or 50 threads and the load on the server does not increase; however, the response time from the server does. The more threads, the slower the response time. It seems like JMeter or Apache is queuing the requests. I have changed the MaxClients value in the Apache configuration file, but this does not change the problem. While JMeter is running, I can use the application and get respectable response times. What gives? I would expect to be able to tax my server down to 0% idle by increasing the number of threads. Can anyone help point me in the right direction?
Update: I found that if I remove sessions from my application I am able to simulate a full load on the server. I have tried to re-enable sessions and use an HTTP Cookie Manager for each thread, but it does not seem to make an impact.
You need to identify where the bottleneck is occurring, and then attempt to remediate the problem.
The JMeter client should be running on a well-equipped machine. I prefer a Solaris/Unix server running the JVM, but for <200 threads, a modern Windows machine will do just fine. JMeter can become a bottleneck, and you won't get any meaningful results once it does. Additionally, it should run on a separate machine from what you're testing, and preferably on the same network. WAN latency can become a problem if your test rig and server are far apart.
The second thing to check is your Apache workers. Apache has a module - mod_status - which will show you the state of every worker. It's possible to have your pool size set too low. From mod_status you'll be able to see how many workers are in use. Too few, and Apache won't have any workers left to process requests, and the requests will queue up. Too many, and Apache may exhaust the memory on the box it's running on.
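Enabling mod_status is a small config change; lock it down so it isn't public (the allowed network below is an example):

    # httpd.conf -- requires mod_status to be loaded
    ExtendedStatus On
    <Location /server-status>
        SetHandler server-status
        Order deny,allow            # Apache 2.2 syntax; 2.4 uses Require
        Deny from all
        Allow from 192.168.1.0/24   # example: admin network only
    </Location>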
Next, you should check your database. If it's on a separate machine, the database could have an IO or CPU shortage.
If you're hitting a bottleneck, and the server and DB are on the same machine, you'll generally hit a CPU, RAM, or IO limit. I listed those in the order in which they are easiest to identify. If you have a CPU-bound app, you can easily see your CPU usage go to 100%. If you run out of RAM, your machine will start swapping. On both Windows and Unix it's fairly easy to see the available free RAM. Lastly, you may be IO-bound. This too can be monitored using various tools or stats, but it's not as obvious as CPU.
Lastly, and specifically to your question, one thing that stands out is that it's possible to have a huge number of session files stored in a single directory. PHP often stores session information in files. If this directory gets large, it will take an increasingly long time for PHP to find a session. If you ran your test with cookies turned off, the PHP app may have created thousands of session files for each user request. On a Windows server, this slows things down faster than on a Unix server, due to differences in the way directories are stored on the two operating systems.
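One mitigation is PHP's N-level session directory feature, which spreads session files across subdirectories so no single directory accumulates thousands of them. Note that PHP won't create the subdirectories for you (the PHP source ships a mod_files.sh script for that), so this is a sketch:

    ; php.ini -- hash session files across 2 levels of subdirectories
    session.save_path = "2;/var/lib/php/sessions"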
Are you using a Constant Throughput Timer? If JMeter can't service the throughput with the threads allocated to it, you'll see this queueing and blowouts in the response time. To figure out if this is the problem, try adding more threads.
I also found a report of this happening when there are JavaScript calls inside the script. In this instance, try to move the JavaScript calls to a test plan element at the top of the script, or look for ways to pre-calculate the value.
Try hitting a static file served by Apache rather than by PHP to see whether the problem is in the Apache config or the PHP config.
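For example, running the same load against both isolates the layer at fault (URLs are placeholders):

    # If the static file is fast and the PHP script is slow, the problem
    # is in PHP/sessions rather than Apache or the network.
    ab -n 1000 -c 25 http://yourserver/static.html
    ab -n 1000 -c 25 http://yourserver/index.php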
Also check your network connections and configuration. Our JMeter testing was progressing nicely until it hit a wall; eventually we realized we only had a 100 Mb connection and it was saturated. Going to gigabit fixed it. Your network cards or switch may be running at a lower speed than you think, especially if their speed setting is "auto".
