I am running a single-core 512MB DO (DigitalOcean) droplet with CentOS 6, and I have configured PHP to use mod_suphp for security reasons. I will be running multiple sites off this box at some point and want to isolate them all from each other. The suPHP setup went perfectly; I was able to install WordPress and set up the databases, FTP, etc.

The issue I am having is that certain actions spike the php-cgi process to 100% CPU and eventually time out. The WordPress customizer hangs on save while accessing admin-ajax.php, and one of the themes I was using (the X theme) hung and timed out on line 30 of wp-includes/compat.php when trying to upload a JSON file.

On cPanel servers I used suPHP without any issue, and the same actions and themes work fine there. The only difference I notice is that the PHP process on cPanel machines is "php" whereas mine is "php-cgi". I have no idea if this is part of the issue, but any help identifying why and how only certain WordPress scripts are overloading the CPU would be appreciated.

An important note: the site is not under any traffic when this happens, as it is only in development. Also, just over 50% of the RAM is in use while the CPU is spiking, so I am not running out of memory.
suPHP re-processes the file every single time it is called, and because of this it causes a lot of CPU usage. suPHP in general uses a lot of CPU, and adding WordPress to the mix pushes the CPU usage even higher. I recommend using FastCGI as your PHP handler: it uses a low amount of CPU but a higher amount of memory. In addition, you will be able to use opcode caching such as APC (and object caching such as memcached), which makes WordPress significantly faster. In regards to your security concern, FastCGI offers the same security as suPHP and you can upload things without a problem. One small thing to note: you're going to need to tweak the settings quite a bit before you get it right, and there may be errors at first, the answers to all of which you can get courtesy of Google. Also, I am not sure how DO operates, but if you need to fix permissions and have cPanel, here is a nice article: http://boomshadow.net/tech/fixes/fixperms-script/
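If you do switch handlers, it is worth confirming afterwards which PHP binary is actually serving web requests and whether an opcode cache is loaded. A rough sanity check from the shell (the CLI module list usually mirrors the web configuration, but a phpinfo() page is the authoritative view):

# See which PHP binary Apache is spawning for web requests
ps aux | grep -E 'php-cgi|php' | grep -v grep
# Check whether an opcode/object cache extension is available to PHP
php -m | grep -Ei 'apc|opcache|memcache'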
I am on a VPS server (quad-core with 768MB of RAM; I am running Apache and using APC as my default caching engine). I get around 2,000 visitors per day with a maximum of 100 concurrent visitors.
According to CakePHP DebugKit, my most memory intensive pages use around 16MB and the other pages average at around 10MB.
Is this memory usage normal?
Is it normal for my memory to bottleneck at 2,000 visitors per day?
Should I consider upgrading my plan to 2GB RAM?
I also noticed that the View rendering is taking up most of the memory, around 70% on most pages.
I am monitoring my resource usage, and when I have around 50 or more concurrent users I am down to 0 MB free.
Thank you
You should also check your other processes. In my experience, MySQL takes up more memory than anything else on any stack that I run. You should also implement better page caching so that PHP doesn't need to be touched when it isn't absolutely necessary. Apache itself is a memory hog that needs to be fine-tuned; if you want to stick with Apache, run Varnish in front of it.
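Before tuning anything, it helps to see where the memory is actually going. Standard tools are enough for a first look:

# Top memory consumers by resident set size
ps aux --sort=-rss | head -n 15
# Overall picture, including buffers/cache the kernel will give back under pressure
free -m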
Keep APC, but also add Memcached for object caching; it's much faster.
If your site has a spike-load that brings it to zero memory, then, if you can, consider launching extra instances of the server and doing a sort of round-robin (if the VPS is cloud-hosted and this is possible). If the load is constant, then definitely upgrade.
@burzum is completely right, however, that you should just switch to nginx. It's far, far better than Apache at this point. But, just to put you on the right track: quite a few people run nginx as a reverse proxy in front of Apache, and while that does speed up the server, it's entirely unnecessary, because nginx can do pretty much anything you need it to do without Apache. Don't bother putting Varnish in front of nginx either, because nginx can handle the caching itself.
Your best bet is to set up nginx with APCu (upgrade PHP to 5.5 if possible for better performance), use memcached, and take advantage of nginx's native microcaching abilities. If your site is heavy on reads and light on writes, you may find that almost everything is taken care of by nginx just retrieving a cached copy from memcached. While this helps with memory, it also helps with processing: my servers' CPUs usually sit at 3-5% usage during peaks.
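If you go this route, confirm the relevant pieces are actually in place before chasing benchmark numbers (extension and package names vary by distro; this is only a quick sanity check):

# Are APCu and the memcached client extension loaded into PHP?
php -m | grep -Ei 'apcu|memcached'
# Is the memcached daemon itself up and answering on the default port?
echo stats | nc -w 1 127.0.0.1 11211 | head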
Is this memory usage normal?
Yes, it doesn't seem to be high for a normal CakePHP application.
Is it normal for my memory to bottleneck at 2,000 visitors per day?
I guess yes, I'm not a "server guy".
Should I consider upgrading my plan to 2GB RAM?
I would try switching to Nginx first. Apache is a memory-eating monster compared to Nginx; just search for a few benchmarks. A random one I picked up in a quick search is on this page.
Overall, Nginx should give you much better performance. At some stage I would consider upgrading the memory, but try Nginx first.
I have a problem with my VPS.
We are using the VPS to run our CMS and our websites. For now we have a 300MB memory limit, and we are getting close to reaching it.
I want to keep costs low (I know, increasing the memory is not too expensive), but if I can find a way to optimize what we already have, that would be better.
What can I do?
Thanks!
I would suggest increasing the memory: more memory, faster website :-)
But if speed is not important, reduce all your cache sizes, set the PHP memory_limit to 8M (a sketch of that change follows below), and disable opcode caching (APC, eAccelerator),
or try a Raspberry Pi as a server; it now comes with 512MB :-)
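For the memory_limit change specifically, a minimal sketch; the php.ini path below is an assumption (it differs between distros, so check php --ini first):

# Find out which php.ini is actually in use
php --ini
# Lower the per-request memory ceiling (path assumed; adjust to your system)
sed -i 's/^memory_limit = .*/memory_limit = 8M/' /etc/php5/apache2/php.ini
# Reload the web server so the change takes effect
service apache2 reload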
I have a small 256MB VPS which uses
Apache + PHP + MariaDB (mysqld)
I found that memory was being gobbled by Apache on every request, 20MB at a time for a simple WordPress page. It would settle, but over time it would eat into swap space and slow everything to a crawl. I suspect there are ways to fine-tune mpm_event and mpm_worker to stop them entering swap, but I didn't figure out how.
When working in such a tight environment it is important to know what is using which memory, and to reduce everything so that swapping is minimized.
A summary of what I did follows. I managed to get 100MB of physical headroom for tweaking; this is not a heavily loaded server, but it needs to be accessible (and cheap):
Use Slackware (the OS and default services seem to use less memory than Debian; perhaps there is simply less loaded by default in my provider's image)
Switch mysqld InnoDB tables off and use MyISAM
Configure mysqld to reduce cache sizes where relevant
Install the CMSes fresh so they are definitely using MyISAM, or ALTER all the tables
In Apache, choose mod_mpm_prefork rather than mpm_event or mpm_worker ("apachectl -V" will tell you which is being used)
Set sensible values for the prefork server limits (start from low values for the number of servers and requests and work up)
Test the server under load while running 'watch free' or 'top' over SSH
You should be able to see the memory usage and the processes being created and destroyed as the load increases
Adjust your httpd server settings and retest until happy
I like to see memory max out at no more than 90% and drop back down when the load is gone, before I am convinced the server will stay up without grinding to a halt (i.e. starting to use swap).
Check your php.ini memory settings, as stated in another answer.
I have also set up a cron job to email me when swap starts to be used significantly (a minimal version is sketched just below), so I can restart something, or even restart the whole server if I can't find what has eaten the memory. This should be needed less and less the more you fine-tune.
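That swap alert can be as simple as the following, assuming a working mail command on the box; the path, threshold and address are placeholders:

#!/bin/sh
# /etc/cron.hourly/swap-alert (assumed name/location): email if swap use passes ~50MB
USED_SWAP=$(free -m | awk '/Swap:/ {print $3}')
if [ "$USED_SWAP" -gt 50 ]; then
    echo "Swap usage is ${USED_SWAP}MB on $(hostname)" | mail -s "VPS swap alert" you@example.com
fi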
As stated, this is not an environment that will perform well under heavy load, but the cost may be more important to you.
Just my two pence worth...
I would look at Nginx, as concerto49 recommended. If you have just one website on the box, also consider LiteSpeed (www.litespeedtech.com); they have a free version which may be sufficient to power your site.
If it's PHP based, then strip out everything you're not using. Use APC/XCache so PHP isn't recompiling scripts on every request. Nginx also has a caching module that could help, so you can avoid hitting PHP at all for requests whose cached copy is still fresh.
What type of VPS is it? OpenVZ? Xen? KVM? If it is OpenVZ, does it have vSwap or burst memory?
What type of CMS/website are you running? Is it PHP based? Are you using Apache? If so, have you tried nginx? I would look at optimizing the web server component and removing unused processes/applications to reduce memory and increase performance.
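As a starting point for the "strip out everything you're not using" advice, these commands show what is currently loaded and running (standard tools; module and service names will differ per distro):

# PHP extensions currently loaded; comment out unused ones in php.ini / conf.d
php -m
# Apache modules currently loaded; disable the ones you don't need in the Apache config
apachectl -M
# Long-running processes that may not be needed on a small VPS
ps aux --sort=-rss | head -n 20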
Running WordPress on IIS 7 (Windows Server 2008) with WP-SuperCache as per IIS.net's guide.
It was running great, but recently we changed the permissions on some folders and the administrator password, and now we're getting huge spikes in our CPU usage from the php-cgi.exe processes.
This leads me to believe it's not caching; however, the pages themselves have the "Cached with WP-SuperCache" comments at the bottom, and the caching seems to be working correctly.
What else could be the issue here?
I think I may have found a solution, or at least a workaround, for this problem; at least it seems to be working reliably for me.
Try setting the Max Instances setting, under IIS Server --> FastCGI Settings, to 1.
It seemed to me that only certain requests were causing a php-cgi.exe process to go rogue and hog the CPU, usually when updating a post. Reading other posts on this issue, one of them mentioned the Max Instances setting and that it defaults to 0, i.e. automatic. I wondered whether that default behaves well when things aren't as they should be. I'm guessing (though this isn't quite my field of expertise) that if a certain request causes a process to lock up, FastCGI just creates another one while leaving the first in place. Somehow, having only a single instance seems to let PHP move on from the lock-up, and the CPU stays under control.
For servers with high levels of requests, setting FastCGI to only a single instance may not be ideal, but it certainly beats the delays I was getting before. Used in combination with WP-SuperCache and WinCache, things seem to be nipping along nicely now.
Looking at that Task Manager output, it looks like it's missing the cache on every request. Plus, that article dates to 2008, so it's difficult to say whether the directions as written would still work; something in WP-SuperCache could have changed.
I would recommend using W3 Total Cache. I've done extensive testing with it on Windows Server 2008 and IIS 7, and it works great. It is also compatible with, and leverages, the WinCache extension for PHP. It has some other great features too if you're interested: minification, CDN support, etc. It's a really great performance plugin for WordPress. You can get the plugin here: http://wordpress.org/extend/plugins/w3-total-cache/
Some other things to check...
What size is the app pool? (# of processes?)
Make sure you are using PHP 5.3.
Make sure you are using WinCache.
Make sure to set MaxInstanceRequests to something less than PHP_FCGI_MAX_REQUESTS. Definitely do not allow PHP to handle recycling the app pool; the default is 10K requests. If you are seeing these results during a load test, this might be the cause. Increase MaxInstanceRequests and keep it one less than PHP_FCGI_MAX_REQUESTS.
Hope that helps.
I have a Magento 1.3.2.4 installation running two store views and 2,734 products. The site sees around 15,000 visits a month.
Apache and MySQL (mostly Apache) hover at around 1.5 GB of RAM usage most of the time and peak at over 3 GB. My question is: considering those stats, is this normal? It seems like a lot.
If that memory usage is in fact abnormal, would an upgrade to 1.4.1.1 help?
Considering the size of your stores, you are doing just fine. But given the traffic you're getting, it seems you need to give Magento some extra help to make it really fly. For that, you can do some of the following:
Install APC (Alternative PHP Cache) or XCache (or any other alternative) and configure Magento to use it in the back-end. It dramatically increases the speed of Magento.
You can have Magento's cache stored in memory (tmpfs in Linux); see the sketch after this list.
You can also tell Magento to save sessions in memcached, so that your sessions are in memory and can be distributed.
Check Magento's Index Management section for any pending indexes every month or two. If you find any indexing required, run it immediately and then clear the cache from Cache Management.
Check your database every week or two for overhead in any of its tables. If you find overhead, optimize those tables immediately.
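A minimal sketch of the tmpfs idea above; the Magento path and size are assumptions, so adjust them to your install, and remember that tmpfs contents disappear on reboot:

# Mount Magento's cache directory in RAM (path and size are assumed)
mount -t tmpfs -o size=128m tmpfs /var/www/magento/var/cache
# To make it persistent across reboots, add a matching line to /etc/fstab:
# tmpfs /var/www/magento/var/cache tmpfs size=128m 0 0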
Try reading some articles on these topics to learn more about them.
Also, upgrading to 1.4.1.1 will help you in terms of the features Magento provides. But for performance, I think it is best to wait a while longer, until Magento releases version 2, in which some of the performance issues may be taken care of.
Hope it helps.
1.3.2.4 is a good, stable release; upgrading to 1.4.0.1 is very painless and will give you the added benefit of split index management and a much faster administration area (mass attribute update is fixed).
Don't be overly concerned about memory usage. Depending on the number of Apache modules you have loaded, you should expect to see about 30MB per child. As long as you're not swapping or encroaching on your limits, you shouldn't have any real concerns over how much is being consumed. Disabling unused modules will help cut down memory, but to be honest, not by any noticeable margin.
You could always put Nginx in front as a reverse proxy to serve static content requests and pipe PHP/dynamic requests back to Apache. That way you can keep the modular Apache build with .htaccess support and still cut down your memory overhead significantly.
However, it would help to have more information, such as the output from
free -m
to see how the memory is being allocated.
I'd also suggest downloading tuning-primer.sh and running it against your MySQL config. It will give a good (entry-level) indication of how efficient your memory allocation is.
Those stats look quite typical for Magento, considering a single hit/page load can use upwards of 64MB of RAM.
Your Apache settings can also drastically affect the amount of RAM your system uses. Upgrading your Magento installation may give a small performance boost, but don't expect it to do much for memory consumption.
If memory consumption is a real issue for you, then you have several possible routes to reduce resource usage, such as:
1. Install Nginx as a reverse caching proxy in front of Apache (Apache is a hog and is poor at serving static content).
2. Use Nginx + PHP FastCGI and remove Apache.
3. Try using the worker MPM for Apache, or FastCGI.
4. Install a caching proxy such as Varnish/Squid.
5. If you are stuck with Apache, you can tweak KeepAlive and other settings to reduce memory usage.
6. Tweak MySQL settings, such as query caching, to improve resource usage and performance (a quick check is sketched below).
I have found 1. to work very well in reducing CPU/memory usage, as it allows Nginx to serve static images etc. without requiring Apache to hog RAM trying to serve them.
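For the query-cache point (6.), a quick way to see whether it is enabled and actually being hit on the MySQL 5.x versions discussed here (the query cache no longer exists in MySQL 8):

# Is the query cache configured at all?
mysql -e "SHOW VARIABLES LIKE 'query_cache%';"
# Is it actually being used?
mysql -e "SHOW STATUS LIKE 'Qcache%';"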
We run a medium-size site that gets a few hundred thousand pageviews a day. Up until last weekend we ran with a load usually below 0.2 on a virtual machine. The OS is Ubuntu.
When deploying the latest version of our application, we also did an apt-get dist-upgrade before deploying. After we had deployed, we noticed that the CPU load had spiked dramatically (sometimes reaching 10, at which point the machine stopped responding to page requests).
We tried dumping a full minute of Xdebug profiling data from PHP, but looking through it revealed only a few somewhat slow parts, but nothing to explain the huge jump.
We are now pretty sure that nothing in the new version of our website is triggering the problem, but we have no way to be sure. We have rolled back a lot of the changes, but the problem still persists.
When looking at processes, we see that single Apache processes use quite a bit of CPU over a longer period of time than strictly necessary. However, when using strace on an affected process, we never see anything but
accept(3,
and it hangs for a while before receiving a new connection, so we can't actually see what is causing the problem.
The stack is PHP 5, Apache 2 (prefork), MySQL 5.1. Most things run through Memcached. We've tried APC and eAccelerator.
So, what should be our next step? Are there any profiling methods we overlooked/don't know about?
The answer ended up being not Apache-related. As mentioned, we were on a virtual machine. Our user sessions are pretty big (think 500kB per active user), so we had a lot of disk I/O. The disk was nearly full, meaning that Ubuntu spent a lot of time moving things around (or so we think). There was no easy way to extend the disk (because it was not set up properly for VMware). This completely killed performance; Apache and MySQL would occasionally use 100% CPU (for a very short time), and the system was so slow to update the CPU usage meters that it seemed to be stuck there.
We ended up setting up a new VM (which also gave us the opportunity to thoroughly document everything on the server). On the new VM we allocated plenty of disk space and moved the sessions into memory (using memcached). Our load dropped to 0.2 at off-peak times and around 1 near peak (on a 2-CPU VM). Moving the sessions into memcached took away a lot of disk I/O; we had constantly been doing about 2MB/s of disk I/O, which is very bad.
Conclusion: sometimes you just have to start over... :)
Seeing an accept() call from your Apache process isn't at all unusual - that's the webserver waiting for a new request.
First of all, you want to establish what the parameters of the load are. Something like
vmstat 1
will show you what your system is up to. Look in the 'swap' and 'io' columns. If you see anything other than '0' in the 'si' and 'so' columns, your system is swapping because of a low memory condition. Consider reducing the number of running Apache children, or throwing more RAM in your server.
If RAM isn't an issue, look at the 'cpu' columns. You're interested in the 'us' and 'sy' columns. These show you the percentage of CPU time spent in either user processes or system. A high 'us' number points the finger at Apache or your scripts - or potentially something else on the server.
Running
top
will show you which processes are the most active.
Have you ruled out your database? The most common cause of unexpectedly high load I've seen on production LAMP stacks comes down to database queries. You may have deployed new code with an expensive query in it, or got to the point where there are enough rows in your dataset to cause previously cheap queries to become expensive.
During periods of high load, do
echo "show full processlist" | mysql | grep -v Sleep
to see if there are either long-running queries, or huge numbers of the same query operating at once. Other mysql tools will help you optimise these.
You may find it useful to configure and use mod_status for Apache, which will allow you to see what request each Apache child is serving and for how long it has been doing so.
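If mod_status is available, you can poll it from the shell during a spike. This assumes the usual /server-status location and ExtendedStatus On; adjust to however your Apache is configured:

# Confirm the status module is loaded
apachectl -M | grep status
# Snapshot of what each child is doing right now (machine-readable form)
curl -s "http://localhost/server-status?auto"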
Finally, get some long-term statistical monitoring set up. Something like Zabbix is straightforward to configure and will let you monitor resource usage over time, so that if things get slow you have historical baselines to compare against and a better idea of when the problems started.
Perhaps you were using the worker MPM before and now you are not?
I know PHP5 does not work with the worker MPM. On my Ubuntu server, PHP5 can only be installed with the prefork MPM; it seems that the PHP5 module is not compatible with the multithreaded version of Apache.
I found a link here that will show you how to get better performance with mod_fcgid
To see what the worker MPM is, see here.
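To check which MPM your Apache build is actually using, and whether your PHP build is thread-safe (a prerequisite for running it under worker):

# Which MPM is this Apache built with / loading?
apachectl -V | grep -i mpm
# Is this PHP build thread-safe? (the worker MPM needs a thread-safe PHP, which is uncommon)
php -i | grep "Thread Safety"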
I'd use DTrace to solve this mystery... if it were running on Solaris or a Mac. Since Linux doesn't have it, you might want to try SystemTap instead, although I can't say anything about its usability since I haven't used it.
With DTrace you could easily sniff out the culprits within a day, and I would hope SystemTap is similar.
Another option, which I can't promise will do any good but which is more than worth the effort, is to read the detailed changelog for the new version and review what might have changed that could remotely affect you.
Going through the changelogs has saved me more than once, especially when config options have changed or something got deprecated. Worst case, it'll give you some clues as to where to look next.