Server, web server, PHP, MySQL benchmarking

I've been working on an enterprise application with a user base of 1,000. There are about 20-30 users using the tool at any given time, and around 100 users at peak times.
Current configuration:
LAMP Stack
MySQL 5.5.29, PHP 5.3.20 - CodeIgniter framework; front end in ActionScript with the Flex framework
Intel Xeon, 32 cores @ 2.9 GHz
161 GB of RAM
2 TB Hard drive
Linux Fedora 16 OS
I have just received a requirement that we will be granting access to other departments, which will add about 5,000 more users.
I've been mostly a front-end developer, with the last two years doing back-end development as well. However, I do not have enough experience in load balancing, what hardware configuration can support what load, and so on.
Currently I am rebuilding the application from the ground up, optimizing the areas with bottlenecks and adding certain features so we don't hit issues with the new additions.
First, are there any phpMyAdmin-style dashboards that can be installed on Linux that will give me a nice GUI showing CPU load, memory usage, hard-drive reads/writes, etc.? (A minimal home-grown sketch follows below the questions.)
Second, are there any suggestions from people with experience in this area on what I can do to optimize the server?
Third, is my server configuration enough to support that many users? The application will be roughly 75% SELECT statements with joins (to display reports) and 25% writes/updates to the database.
Thanks in advance for your help.
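For the dashboard question: tools like Munin or Cacti will graph CPU, memory, and disk IO over time. Even before installing one, a small PHP status page can surface the same numbers; a minimal sketch (Linux only; stats.php is a hypothetical name, not part of any existing tool):

    <?php
    // Hypothetical stats.php: a bare-bones status page (Linux only) to watch
    // load, memory, and disk while evaluating a full monitoring dashboard.
    $load = sys_getloadavg();                      // 1/5/15-minute load averages
    $mem  = file_get_contents('/proc/meminfo');    // kernel memory counters (kB)
    preg_match('/MemTotal:\s+(\d+)/', $mem, $total);
    preg_match('/MemFree:\s+(\d+)/',  $mem, $free);

    header('Content-Type: text/plain');
    printf("Load (1m/5m/15m): %.2f %.2f %.2f\n", $load[0], $load[1], $load[2]);
    printf("Memory: %d MB free of %d MB\n", $free[1] / 1024, $total[1] / 1024);
    printf("Disk /: %.1f GB free of %.1f GB\n",
        disk_free_space('/') / 1e9, disk_total_space('/') / 1e9);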

Related

Best practices to reduce the RAM load on the server

I recently looked at the stats on my VPS dashboard, and I wonder why RAM usage is so high when visitor numbers are not booming.
Product details:
OS : CentOS 7 64bit
CPU Cores count : 3
Total CPU(s) speed : 4800 MHz
Memory : 2 GB
There are an average of 30 visitors online, and this takes up about 50% of the available memory. So I estimate that if the number of online visitors reaches 60, RAM will be overloaded.
Is this a reasonable level, or do I need to set up a strategy to prevent the site from going down?
Additional information: I built the site myself, NOT with WordPress or anything else. All suggestions and opinions are greatly appreciated, thank you.
The subject is too broad, but some general strategies:
Manage your cache (see the sketch after this list)
Make use of virtual memory (swap)
Use pointers and references when programming: even if you are not using C++, you can still pass large objects and files by reference.
Kill unused processes
Uninstall unnecessary applications, including startup applications if you know what you are doing.
Do not open many windows at once
Prefer text and command-line utilities over graphical interfaces.
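On the "manage your cache" point, in a PHP app this usually means not rebuilding expensive data on every request. A minimal sketch using APC (assuming the apc extension is loaded; buildReportData() is hypothetical):

    <?php
    // Sketch: keep an expensive result in APC so every request
    // doesn't rebuild it. buildReportData() is hypothetical.
    function getReportData() {
        $data = apc_fetch('report_data_v1', $hit);
        if (!$hit) {
            $data = buildReportData();               // expensive query/computation
            apc_store('report_data_v1', $data, 300); // cache for 5 minutes
        }
        return $data;
    }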

Apache/PHP only uses 50% of CPU in Windows on VMware

In Windows Server 2012 with 2 CPUs, I have Apache 2.4 with PHP 5.6, and when I generate a PDF document with DOMPDF, a cumulative 50% of total CPU power is used. No matter what I do, I cannot get the total over 50%. I tried opening a bunch of windows and generating multiple series of PDF docs at the same time.
Each individual CPU stays under 50%, and if one spikes up, the other spikes down at the same time. It seems like Windows is limiting the Apache service to 50% of the CPU. Is there somewhere to change this?
Edit: my application is already utilizing both CPUs, just not to their full capacity, and after 60 seconds of load the utilization moves to 100%. I think it is not anything to do with threading... maybe an environment setting?
It's not a Windows limitation but the program design itself. I think it is related to CPU cores (for example, the machine has 4 cores and the program uses just 2 of them, which is literally 50%).
As far as I know you cannot do much about this; the work cannot be spread across more cores without redesigning the program.
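If a redesign is an option, one common pattern is to fan the PDF jobs out to several single-threaded PHP worker processes so the OS can schedule them across CPUs. A rough sketch (worker.php is hypothetical, and php must be on the PATH):

    <?php
    // Sketch: run several single-threaded PHP workers in parallel so the
    // OS can schedule them on different CPUs. worker.php is hypothetical.
    $jobs  = array('doc1', 'doc2', 'doc3', 'doc4');
    $procs = array();
    foreach ($jobs as $job) {
        $pipes   = array();
        $procs[] = proc_open('php worker.php ' . escapeshellarg($job), array(), $pipes);
    }
    foreach ($procs as $p) {
        proc_close($p);  // proc_close() blocks until the worker exits
    }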

Performance issue when clicking the Test Execution link; the tree takes a long time to load

I have been using TestLink for test management and execution of our project for the last 5 years, currently version 1.9.3.
We started with TestLink 1.6.3, and I have since migrated to 1.9.3.
Now I am seeing a performance issue when clicking the Test Execution link: the tree takes a long time to load.
Detailed information:
Overall TestLink performance is quite slow.
Loading the test project takes a long time.
Loading the test plan takes a long time.
While executing test cases, performance is fair.
While creating a new plan/build, performance is fair.
Machine Details:
Database size: 193 MB
Database server details:
Processor: Intel Xeon CPU X3470 @ 2.93 GHz
RAM: 15 GB
Application server hardware details:
Processor: Intel Core i3 CPU @ 2.93 GHz
RAM: 4 GB
Configuration at the application level is set for best performance, e.g. making sure the following parameters are set:
$g_use_ext_js_library = ENABLED;
$tlCfg->treemenu_type = 'EXTJS';
I have also double-checked whether any other configuration improvements can be made.
Logs:
Use of undefined constant TL_LIGHTBOX_CSS - assumed 'TL_LIGHTBOX_CSS' - in /var/www/html/thunder/gui/templates_c/%%56^560^56038526%%inc_head.tpl.php - Line 40
[<<][512c073808f8a063878788][DEFAULT][/lib/execute/execNavigator.php][13/Feb/26 00:52:08][13/Feb/26 00:52:25][took 17.290396 secs]
I have also migrated from 1.9.3 to 1.9.5, but the performance issue is still there.
Any help on this would be highly appreciated. Still waiting for an expert answer.

Put Magento's var directory in RAM

I need to speed up my Magento installation, so I'm planning to put the contents of var/ (or only var/cache and var/sessions) on a tmpfs.
I'm also buying a reserved instance on Amazon, so I would like to keep a sufficient amount of RAM free. I want to enable memcached, PHP APC, MySQL caching, and HTTP caching.
I'm thinking of a Medium Reserved Instance with the following specs:
3.75 GB memory
2 EC2 Compute Units (1 virtual core with 2 EC2 Compute Units)
410 GB instance storage
32-bit or 64-bit platform
I/O Performance: Moderate
EBS-Optimized Available: No
API name: m1.medium
Will the RAM be enough to apply a good caching system?
Looking now (after 3 months), the var directory is 14 GB, but I think cleaning it up every 5-7 days would be good too.
Do you have any suggestions for me?
P.S. The store will contain an average of 100-150 products.
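On the cleanup idea: if you go the periodic-cleanup route rather than tmpfs, a small PHP script run from cron can prune old files under var/. A rough sketch; the paths and the 7-day retention are just examples:

    <?php
    // Cron sketch: delete Magento var/ files older than 7 days.
    // Run from the Magento root; adjust paths and age to taste.
    $dirs   = array('var/cache', 'var/session');
    $maxAge = 7 * 86400;
    foreach ($dirs as $dir) {
        $it = new RecursiveIteratorIterator(
            new RecursiveDirectoryIterator($dir, FilesystemIterator::SKIP_DOTS)
        );
        foreach ($it as $file) {
            if ($file->isFile() && time() - $file->getMTime() > $maxAge) {
                unlink($file->getPathname());
            }
        }
    }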
I think moving var/ to a tmpfs is probably not your biggest bottleneck and would likely be more trouble than it's worth. Make sure Magento caching is enabled and that you have APC enabled.
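On the APC point: it's worth confirming that APC is actually loaded for the web server's PHP (the CLI often reads a different php.ini). A minimal sanity-check sketch:

    <?php
    // Quick check that APC is loaded and sized sensibly; run it
    // through Apache, not the CLI, since the ini files may differ.
    if (!extension_loaded('apc')) {
        die('APC is not loaded for this SAPI');
    }
    echo 'apc.shm_size = ', ini_get('apc.shm_size'), "\n";
    echo 'apc.stat     = ', ini_get('apc.stat'), "\n";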
This post covers some general tips on increasing Magento performance:
Why is Magento so slow?
I would suggest looking into setting up a reverse proxy like Varnish.
Getting Varnish To Work on Magento
If you do plan on just using a tmpfs in memory, I would suggest looking into Colin's Cm_Cache_Backend_File, an improved version of Zend_Cache_Backend_File:
https://github.com/colinmollenhour/Cm_Cache_Backend_File
Also, I would suggest looking into mytop to keep tabs on whether there are queries you can optimize in the application itself, or settings in my.cnf you can tune, to help ease any DB bottlenecks.
http://jeremy.zawodny.com/mysql/mytop/
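To act on what mytop surfaces, run EXPLAIN on the slowest queries to verify they use indexes. A minimal mysqli sketch; the credentials and the query are placeholders, not from the original post:

    <?php
    // Sketch: EXPLAIN a suspect query to verify index usage.
    // Connection details and the query itself are placeholders.
    $db  = new mysqli('localhost', 'user', 'password', 'magento');
    $res = $db->query("EXPLAIN SELECT * FROM sales_flat_order WHERE customer_email = 'x@example.com'");
    while ($row = $res->fetch_assoc()) {
        print_r($row);  // check the 'key' and 'rows' columns for index use
    }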
Session Digital has a good white paper (although somewhat dated) on optimizing Magento Enterprise, and the same advice can be applied to Community. Out of everything I've tried, Varnish, as mentioned in the white paper, offered the most significant improvement in response time.
http://www.sessiondigital.com/resources/tech/060611/Mag-Perf-WP-final.pdf
Hope this helps!
Firstly, +1 to all of the answers here.
If you're thinking about running var/ out of tmpfs, it's probably because you've heard of the lousy file IO on AWS or have experienced issues with it yourself. However, the var/ directory is the least of your concerns - Zend/Magento's autoloaders are more taxing on IO. To mitigate that, you want to run APC and Magento's compilation (assuming you're not using the persistent shopping cart).
As echoed by other commenters, anything that runs from cache or memory will circumvent PHP, and thus the need to touch the disk and incur IO issues. Varnish is a bit of a brute-force approach and a wonderful tool on massive sites that scale to millions of views; but I believe Varnish's limitations with SSL, and the lack of real documentation and support from our Magento community, make it more of an intellectual choice than a practical alternative.
When running Magento Community I prefer to run Tinybrick's Lightspeed on AWS on a Medium instance - which gives me the most bang-for-buck and is itself a full-page-cache. I get 200+ concurrent pages/second in this setup and I'm not running memcached or using compilation.
http://www.tinybrick.com/improve-magentos-slow-performance.html/
Be careful with running memcached in your AWS instance as well - I find it can be starved by a power-hungry Apache gone wild in the rare instance you haven't got a primed cache, which causes Apache MaxClients issues while it waits for the cache to respond. If you can afford it, I would rather run two micro Apache instances with a shared memcached session store and a load balancer in front of them (a sketch of the session store follows below) - and give some horsepower to the DB on a separate box for them to share. But all setups are unique, and your traffic/usage will dictate what you need.
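For that shared memcached session store, pointing PHP sessions at a common memcached box takes only a couple of ini settings. A sketch assuming the memcache PECL extension; the host address is a placeholder:

    <?php
    // Sketch: shared session store so both load-balanced web nodes
    // see the same sessions. Assumes the 'memcache' PECL extension;
    // 10.0.0.5 is a placeholder host.
    ini_set('session.save_handler', 'memcache');
    ini_set('session.save_path', 'tcp://10.0.0.5:11211');
    session_start();
    $_SESSION['hits'] = isset($_SESSION['hits']) ? $_SESSION['hits'] + 1 : 1;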
I have run Magento in the AWS cloud for 3 years with great success - and I wish the same to you. Cheers.

Why is this Zend Framework app killing my CPU usage and loading pages so slowly

The platform I am using is SocialEngine v4 (socialengine.net), which is built entirely on Zend Framework, so it's insanely CPU intensive. SocialEngine is written in PHP and uses MySQL.
I need to know what OS and what hardware you suggest (dual Xeons, AMD, how much RAM, etc.) and how to optimize it properly to handle large amounts of traffic.
I only have 11k users right now, and it's running incredibly slowly - I'm talking 7-second page load times.
The platform does have memcached and APC options for caching, but even with APC or memcached on, it doesn't make a big enough difference...
I need to know the best way to attack this in terms of optimizing MySQL, InnoDB tweaks, Apache tweaks, other performance tweaks, what type of hardware, and how much RAM.
I have a very big marketing plan in place and will probably start increasing traffic by 1,000+ signups per day... so traffic will rise very steadily. When I initially marketed, I did 50k uniques in 6 hours, 20k signups, and 500k pageviews... (the server crashed, I lost half my users... and I haven't marketed since, because I've been trying to rebuild).
You could start with Xdebug to profile your application and find the bottleneck.
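Xdebug's profiler is switched on in php.ini (xdebug.profiler_enable and xdebug.profiler_output_dir in Xdebug 2), and the cachegrind output can be read with KCachegrind or Webgrind. Until that's set up, even a crude microtime() bracket around suspect code narrows things down; a sketch, where renderActivityFeed() is a hypothetical slow path:

    <?php
    // Crude fallback profiling: time a suspect code path and log the result.
    // renderActivityFeed() is a hypothetical slow candidate.
    $t0 = microtime(true);
    renderActivityFeed();
    error_log(sprintf('activity feed took %.1f ms', (microtime(true) - $t0) * 1000));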
Honestly? And this is just my opinion: instead of spending a small fortune on a single server, buy many small servers and load-balance them. Mac Minis are wonderful for this and are capable of running their standard OS X, or Linux if you choose. You will get way more performance out of ten small $500 machines than out of one $5,000 machine.
You don't provide us with any information about your setup.
How many servers do you have? What services are they running?
When you say APC and memcached are on, have you actually set them up so they work?
How many connections does your Apache allow for?
What is your MySQL configuration like? Are the memory settings optimised? Most importantly, are all your tables indexed properly? Have you checked your slow query log? Did you run EXPLAIN on your queries?
ZF-wise, are you caching your table metadata (a sketch follows below)? Are you caching tables that don't change, so that you save network traffic? Have you checked the official ZF optimisation guide?
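For the table metadata question: in ZF1, metadata caching is one call once a cache backend exists, and it stops Zend_Db_Table from re-running DESCRIBE queries on every request. A sketch assuming APC is available:

    <?php
    // ZF1 sketch: cache Zend_Db_Table metadata in APC so the DESCRIBE
    // queries are not repeated on every request.
    $cache = Zend_Cache::factory('Core', 'Apc', array(
        'lifetime'                => 86400,  // one day
        'automatic_serialization' => true,
    ));
    Zend_Db_Table_Abstract::setDefaultMetadataCache($cache);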
Also... Why did you assume that ZF is killing your CPU usage?
