I am in the middle of migrating 7 WordPress sites to one single server, which is running Ubuntu and has a 4-core CPU and 15GB of RAM.
So far I have moved one website and installed APC, but I am unsure if this method of caching supports multiple websites on one server.
Can anyone give me some advice on configuration? Can I just install it and use it 'out of the box', or do I need to make some tweaks that will increase performance for all 7 websites?
It looks like it currently only uses 30MB of RAM, which doesn't seem like a lot to me.
Cheers,
Rich
You will have to use a caching plugin that can make use of APC; the most common one in use is http://wordpress.org/extend/plugins/w3-total-cache/.
It's easy to set up and provides a lot more.
W3 Total Cache would be a good choice, and it would pose no problems for APC. For a basic install it is sufficient to just configure the main configuration page.
Typically you would set the options there as follows:
page cache: enable, disk enhanced
minify: disable (you could experiment with this, but it may cause errors on your site; if you try it, set it to disk)
database cache: enable, apc
object cache: enable, apc
browser cache: enable
CDN: disable
Varnish: disable
Of course there are a lot of other options, with which you can experiment a little to suit your needs. But these settings should give you a solid performance boost.
I would also take a look at the settings in your apc.ini. By default, the amount of memory allocated to APC is not very large; maybe increase that to 512MB or 1024MB.
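For example, the relevant apc.ini line would look something like this (the value is just a starting point, and the file location varies by distribution):

; total shared memory available to APC for the opcode and user caches
apc.shm_size=512M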
I have a problem with my VPS.
We are using a VPS to run our CMS and our websites. For now we have a 300MB memory limit, and we are close to reaching it.
I want to keep costs low (I know, adding memory is not too expensive), so if I can find a way to optimize what we already have, that would be better.
What can I do?
Thanks!
I would suggest increasing the memory - more memory, faster website :-)
But if speed is not important, reduce all cache sizes, set the PHP memory_limit to 8M, and disable opcode caching (APC, eAccelerator),
or try a Raspberry Pi as a server - it now comes with 512MB :-)
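In php.ini terms, that low-memory advice comes down to something like this (illustrative values, assuming APC is the opcode cache you have installed):

memory_limit = 8M   ; keep each PHP request small
apc.enabled = 0     ; switch off opcode caching to reclaim shared memory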
I have a small 256MB vps which uses
apache + php + mariadb (mysqld)
I found memory was being gobbled by Apache on every request, 20MB at a time for a simple WordPress page. It would settle, but over time it would eat into swap space and slow everything to a crawl. I suspect there are ways to fine-tune mpm_event and mpm_worker to stop them entering swap, but I didn't figure out how.
When working in such tight environments it is important to know what is using what memory and reduce everything, so that swapping is minimized.
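A quick way to see what is eating memory is something along these lines (standard procps tools, nothing exotic):

free -m                       # overall memory and swap usage
ps aux --sort=-rss | head -15 # top processes by resident memory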
A summary of what I did follows. I managed to get 100MB of physical headroom for tweaking; this is not a heavily loaded server, but it needs to be accessible (and cheap):
Using Slackware (the OS and default services seem to use less memory than Debian; perhaps there is less loaded by default on my provider's image)
Switch mysqld InnoDB tables off and use MyISAM
configure mysqld to reduce cache sizes if relevant
install the CMS fresh so it is definitely using MyISAM, or alter all the tables
in Apache choose to use mod_mpm_prefork rather than mpm_event or mpm_worker ("apachectl -V" will tell you which is being used)
set sensible values for the prefork server limits (StartServers, MaxClients, etc.); start from low values for the number of servers and requests and work up - see the sketch after this list
test the server under load whilst doing 'watch free' or 'top' through ssh
you should be able to see memory & the processes being created and destroyed as the load increases
adjust your httpd server settings and retest until happy
I like to see memory max out at no more than 90% and drop back down when the load is gone before I am convinced the server will stay up without grinding to a halt (i.e. starting to use swap).
check your php.ini memory settings as stated in another answer.
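For the prefork limits mentioned above, a starting point in the Apache config might look something like this (the numbers are only an example for a small VPS; measure your own per-process memory and adjust; MaxRequestWorkers is called MaxClients on Apache 2.2):

<IfModule mpm_prefork_module>
    StartServers            2
    MinSpareServers         2
    MaxSpareServers         4
    MaxRequestWorkers      10
    MaxConnectionsPerChild 500
</IfModule>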
I have also set up a cron job to send me an email when swap starts to be used significantly, so I can restart something, or even restart the whole server if I can't find what has eaten the memory. This should happen less and less the more you fine-tune.
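The check itself can be as simple as something along these lines (the threshold and the address are placeholders):

#!/bin/sh
# mail an alert when swap usage passes a threshold (the 100MB value is arbitrary)
USED=$(free -m | awk '/Swap:/ {print $3}')
if [ "$USED" -gt 100 ]; then
    echo "Swap usage is ${USED}MB" | mail -s "swap alert on $(hostname)" you@example.com
fi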
As stated this is not an environment that will perform well under heavy load but the cost may be more important to you.
Just my two pence worth...
I would look at Nginx as concerto49 recommended. If you have just one website on it, also consider LiteSpeed (www.litespeedtech.com) - they have a free version which may be sufficient to power your site.
If it's PHP based, strip out everything you're not using. Use APC/XCache so PHP doesn't have to recompile scripts on every request. Nginx also has a caching module that could help: you can avoid hitting PHP at all for a request whose cached copy is still fresh.
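As a rough sketch of that nginx caching idea (the zone name, socket path and timings are placeholders, not a drop-in config):

fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=SITECACHE:10m inactive=60m;
server {
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/var/run/php-fpm.sock;
        fastcgi_cache SITECACHE;
        fastcgi_cache_key "$scheme$request_method$host$request_uri";
        fastcgi_cache_valid 200 10m;
    }
}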
What type of VPS is it? OpenVZ? Xen? KVM? If it is OpenVZ does it have VSwap or Burst memory?
What type of CMS/website are you running? Is it PHP based? Are you using Apache? If so, have you tried nginx? I would look at optimizing the web server component and removing unused processes/applications to reduce memory use and increase performance.
So I've completely finished setting up APC and I'm liking it so far. The only thing is: memory is a bit of a constraint on my system. I know: buy more, right? Well, my host doesn't support it and I like their service, so...
I'm running APC using FCGID on a cPanel/WHM server. Cache is shared between visits on a per user basis because I use FcgidMaxProcessesPerClass 1 in php.conf.
By default, APC is initialized when the PHP (FCGI) process is started, so all user accounts have ONE (MaxProcesses) unique and personal share of SHM, but the SHM size is the same for all users. However, I have some sites that would really benefit from having, say, 128M of SHM, whereas others could easily get by with 16M or even 8M in some cases.
I've already been fiddling with custom fcgi loaders in /cgi-bin and the likes as described here:
http://www.brandonturner.net/blog/2009/07/fastcgi_with_php_opcode_cache/
and http://chrisgilligan.com/search/apc-shm_size-each-user/
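The wrappers in those posts boil down to a small per-account loader script, roughly like this (the php-cgi path and the 128M value are just placeholders for my setup):

#!/bin/sh
# per-account FCGI wrapper: give this account its own APC segment size
export PHP_FCGI_MAX_REQUESTS=1000
exec /usr/bin/php-cgi -d apc.shm_size=128M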
But it would seem to me that if APC is already loaded on a per-user basis, there should be a simpler option. I can't really get per-user FCGI to work and still persist the php process.
Any ideas?
This actually turned out to be impossible with mod_fcgid.
Since I can't get mod_fastcgi, which would make this possible, to run on my server, I've since switched to running PHP as a DSO (mod_php). Everything is shared now, so it's okay.
I am working on tuning my caches, and while benchmarking I found something that is confusing the hell out of me.
Pulling a key from my remote Memcached box (on the local network) takes 0.0008 seconds, whereas pulling a key from my local APC cache takes 0.0114 seconds. Yes, Memcached is a full 14x faster.
That seems awfully slow for a local cache... what settings should I be looking at tuning to make it more effective?
Edit: As requested, here is my APC config from php.ini
[APC]
;specifies the size for each shared memory segment will need adjustment for your environment.
apc.shm_size=8
;maximum size of a file that can be cached
apc.max_file_size=1M
apc.ttl=0
apc.gc_ttl=3600
; means we are always atomically editing the files
apc.file_update_protection=0
apc.enabled=1
apc.enable_cli=0
apc.cache_by_default=1
apc.include_once_override=0
apc.localcache=0
apc.localcache.size=512
apc.num_files_hint=1000
apc.report_autofilter=0
apc.rfc1867=0
apc.slam_defense=0
apc.stat=1
apc.stat_ctime=0
apc.ttl=7200
apc.user_entries_hint=4096
apc.user_ttl=7200
apc.write_lock=1
The fetch is being done by a simple apc_fetch('my_key');
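The timing is just microtime() around the call, roughly along these lines (the loop count and key name are only illustrative):

<?php
// crude micro-benchmark: average time per apc_fetch over many iterations
$start = microtime(true);
for ($i = 0; $i < 1000; $i++) {
    $value = apc_fetch('my_key');
}
printf("avg apc_fetch: %.6f sec\n", (microtime(true) - $start) / 1000);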
Take advantage of your memory! Try raising apc.shm_size to 128MB - it's an easy tweak that can improve performance substantially. Also, consider changing apc.user_entries_hint to fit your app's requirements - see APC vs custom mmap extension below.
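In apc.ini that would be something like this (the numbers are examples to adjust for your app):

apc.shm_size=128M
; roughly the number of user-cache entries your app stores
apc.user_entries_hint=8192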
Relevant links:
APC vs Custom Mmap extension
http://2bits.com/articles/importance-tuning-apc-sites-high-number-drupal-modules.html
I've occasionally run up against a server's memory allocation limit, particularly with a bloated application like WordPress, but I had never encountered "Unable to allocate memory for pool" before, and I'm having trouble tracking down any information on it.
Does anyone know what this means? I've tried increasing the memory_limit without success. I also haven't made any significant changes to the application. One day there was no problem, the next day I hit this error.
Using a TTL of 0 means that APC will flush the whole cache when it runs out of memory. The error doesn't appear anymore, but it makes APC far less efficient. It's a no-risk, no-trouble, "I don't want to do my job" decision. APC is not meant to be used that way. You should choose a TTL high enough that the most accessed pages won't expire. The best solution is to give APC enough memory so it doesn't need to flush the cache at all.
Just read the manual to understand how the TTL is used: http://www.php.net/manual/en/apc.configuration.php#ini.apc.ttl
The solution is to increase memory allocated to APC.
Do this by increasing apc.shm_size.
If APC is compiled to use shared segment memory, you will be limited by your operating system. Type this command to see your system's limit for each segment:
sysctl -a | grep -E "shmall|shmmax"
To allocate more memory you'll have to increase the number of segments with the apc.shm_segments parameter.
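With segmented shared memory, apc.shm_size applies per segment, so splitting the cache over several segments looks something like this (the sizes are only an example):

; four segments of 32M each, 128M in total
apc.shm_segments=4
apc.shm_size=32M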
If APC is using mmap memory then you have no limit. The amount of memory is still defined by the same option apc.shm_size.
If there's not enough memory on the server, then use the apc.filters option to prevent less frequently accessed PHP files from being cached.
But never use a TTL of 0.
As c33s said, use apc.php to check your config. Copy the file from the APC package to a web folder and point your browser at it. You'll see what is really allocated and how it is used. The graphs must remain stable after hours of running; if they change completely at each refresh, it means that your setup is wrong (APC is flushing everything). Allocate 20% more RAM than what APC really uses as a safety margin, and check it on a regular basis.
The default of allowing only 32MB is ridiculously low. PHP was designed when servers had 64MB and most scripts used one PHP file per page. Nowadays solutions like Magento require more than 10k files (~60MB in APC). You should allow enough memory so that most PHP files are always cached. It's not a waste; it's more efficient to keep opcode in RAM than to keep the corresponding raw PHP in the file cache.
Nowadays you can find dedicated servers with 24GB of memory for as low as $80/month, so don't hesitate to give several GB to APC. I put 2GB out of 24GB on a server hosting 5 Magento stores and ~40 WordPress websites; APC uses 1.2GB. Count 64MB per Magento installation and 40MB for a WordPress site with some plugins.
Also, if you have development websites on the same server, exclude them from the cache.
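That exclusion is done with the apc.filters directive; a regex starting with "-" excludes matching files, for example (the path is of course specific to your layout):

; don't cache anything under the dev vhosts
apc.filters = "-/var/www/dev/.*"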
This is probably APC related.
For the people having this problem, please specify your .ini settings - specifically, your apc.mmap_file_mask setting.
For file-backed mmap, it should be set to something like:
apc.mmap_file_mask=/tmp/apc.XXXXXX
To mmap directly from /dev/zero, use:
apc.mmap_file_mask=/dev/zero
For POSIX-compliant shared-memory-backed mmap, use:
apc.mmap_file_mask=/apc.shm.XXXXXX
solution for me:
apc.ttl=0
apc.shm_size=anything you want
edit start
warning!
#Bokan pointed out that I should add a warning here.
If you have a TTL of 0, this means that every cached item can be purged immediately. So if you have a small cache size like 2MB and a TTL of 0, this renders APC useless, because the data in the cache always gets overwritten.
Lowering the TTL only means that the cache cannot end up full of items that can't be replaced.
So you have to choose a good balance between TTL and cache size.
In my case I had a cache size of 1GB, so it was more than enough for me.
edit end
I had the same issue on CentOS 5 with PHP 5.2.17 and noticed that if the cache size is small and the TTL parameter is "high" (like 7200) while there are a lot of PHP files to cache, then the cache fills up quite fast and APC doesn't find anything it can remove, because all the files in the cache still fit within the TTL.
Increasing the memory size is only a partial solution; you still run into this error if your cache fills up and all files are within the TTL.
So my solution was to set the TTL to 0, so APC fills the cache and there is always the possibility for APC to clear some memory for new data.
Hope that helps.
edit:
see also: http://pecl.php.net/bugs/bug.php?id=16966
Download http://pecl.php.net/get/APC, extract it, and run the apc.php it contains; it gives you a nice diagram of what your cache usage looks like.
Running the apc.php script is key to understanding what your problem is, IMO. This helped us size our cache properly and for the moment, seems to have resolved the problem.
For newbies like myself, these resources helped:
Finding the apc.ini file to make the changes recommended by c33s above, and setting recommended amounts:
http://www.untwistedvortex.com/optimizing-tuning-apc-alternate-php-cache/
Understanding what apc.ttl is:
http://www.php.net/manual/en/apc.configuration.php#ini.apc.ttl
Understanding what apc.shm_size is:
http://www.php.net/manual/en/apc.configuration.php#ini.apc.shm-size
As Bokan has mentioned, you can up the memory if it's available, and he is right about how counterproductive setting the TTL to 0 is.
Note: This is how I fixed this error for my particular problem. It's a generic issue that can be caused by a lot of things, so only follow the steps below if you get the error and you think it's caused by duplicate PHP files being loaded into APC.
The issue I was having was when I released a new version of my PHP application, i.e. when I replaced all my .php files with new ones, APC would load both versions into the cache.
Because I didn't have enough memory for two versions of the PHP files, APC would run out of memory.
There is an option called apc.stat that tells APC to check whether a particular file has changed and, if so, replace it. This is typically fine for development because you are constantly making changes; however, on production it's usually turned off, as it was in my case - http://www.php.net/manual/en/apc.configuration.php#ini.apc.stat
Turning apc.stat on would fix this issue, if you are OK with the performance hit.
The solution I came up with for my problem is to check whether the project version has changed and, if so, empty the cache and reload the page.
define('PROJECT_VERSION', '0.28');

if (apc_exists('MY_APP_VERSION')) {
    if (apc_fetch('MY_APP_VERSION') != PROJECT_VERSION) {
        // version changed: drop the cache so the old .php files are evicted
        apc_clear_cache();
        apc_store('MY_APP_VERSION', PROJECT_VERSION);
        // reload the current page so it is served from the fresh cache
        header('Location: ' . 'http' . (empty($_SERVER['HTTPS']) ? '' : 's') . '://' . $_SERVER['SERVER_NAME'] . $_SERVER['REQUEST_URI']);
        exit;
    }
} else {
    // first run: remember the current version in the user cache
    apc_store('MY_APP_VERSION', PROJECT_VERSION);
}
This worked for our guys (running a slew of Wordpress sites on the same server).
Changed memory settings in the /etc/php.d/apc.ini file. It was set to 64M, so we doubled it to 128M.
apc.shm_size=128M
Looking around the internet, there can be various causes.
In my case leaving everything default except...
apc.shm_size = 64M
...cleared the countless warnings that I was getting earlier.
I received the error "Unable to allocate memory for pool" after moving an OpenCart installation to a different server. I also tried raising the memory_limit.
The error stopped after I changed the permissions of the file named in the error message so that it is writable by the user that Apache runs as (apache, www-data, etc.). Instead of modifying /etc/group directly (or chmod-ing the files to 0777), I used usermod:
usermod -a -G vhost-user-group apache-user
Then I had to restart apache for the change to take effect:
apachectl restart
Or
sudo /etc/init.d/httpd restart
Or whatever your system uses to restart apache.
If the site is on shared hosting, you may need to change the file permissions with an FTP program, or contact the hosting provider.
To resolve this problem, set the value of apc.shm_size as an integer.
Locate your apc.ini file (on my system it is at /etc/php5/conf.d/apc.ini) and set:
apc.shm_size = 1000
On my system I had to insert
apc.shm_size = 64M
into /usr/local/etc/php.ini
(FreeBSD 9.1)
Then, when I looked at apc.php (which I copied from /usr/local/share/doc/APC/apc.php to /usr/local/www/apache24/data),
I found that the cache size had increased from the default of 32M to 64M, and I was no longer getting a large "cache full" count.
references:
http://au1.php.net/manual/en/apc.configuration.php
also read Bokan's comments, they were very helpful
Monitor your cached files' size (you can use apc.php from the APC PECL package) and increase apc.shm_size according to your needs.
This solves the problem.