I have a potentially very large (several megabytes perhaps) PHP class, generated of course. Is there any setting or limitation that would cause opcache slowdown in this case?
You should check the opcache.max_file_size option. It sets the maximum size of a file that will be cached, so large files can be skipped by the opcode cache. However, it defaults to 0, meaning files of any size are cached.
The next option to check is opcache.max_accelerated_files. For big projects with Twig and annotations, the default value of 2000 is not enough. Consider increasing it.
And the last one is opcache.memory_consumption. I noticed that after this limit is reached, OPcache won't add new items to the cache. So increase it to 256M or 512M.
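For reference, a starting point in php.ini might look like this (the values are illustrative and should be tuned to your project, not recommendations):
opcache.max_file_size=0           ; 0 = cache files of any size
opcache.max_accelerated_files=16229
opcache.memory_consumption=256    ; in megabytes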
A few things can make OPcache slow. One is opcache.consistency_checks: if it is enabled, OPcache will verify the cache checksum every N requests, where N is the value of this configuration directive. With a file as large as yours, I am sure enabling this is not a good idea.
Also, if you suspect it is affecting OPcache, why not try a tool like OpCacheGUI to check?
What will happen if I set a higher value for a directive in php.ini? For example, setting opcache.memory_consumption to 2GB for a normal e-commerce web application:
opcache.memory_consumption = 2048
Below is my pod resource configuration:
resources:
  limits:
    cpu: "1"
    memory: "1Gi"
  requests:
    cpu: "450m"
    memory: "256Mi"
If I set 2GB for OPcache itself, what will happen if it exceeds the actual memory limit of the application?
From the PHP manual:
opcache.memory_consumption int
The size of the shared memory storage used by OPcache, in megabytes. The minimum permissible value is "8", which is enforced if a smaller value is set.
OK, so what is the shared memory storage used for, and why might we want more of it? As it happens, Nikita Popov, one of the most important developers of PHP's internals right now, has just written a blog post about how opcache works. Before going into the details, he summarises it thus:
The primary purpose of opcache is to cache compilation artifacts in shared memory, to avoid the need to recompile PHP scripts on every execution.
So, the memory is used to cache the result of compiling your PHP code to an intermediate representation used internally. The amount of space needed will depend on how much compiled code you're trying to cache at once.
How do we know if it's enough? By running the opcache_get_status() function. If the memory is full, or very nearly full, there's a chance that some of your scripts aren't getting cached because they don't fit in memory. In that case, increasing the configured memory size will improve performance by caching those files.
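A quick way to inspect this is a small script that prints the relevant fields from opcache_get_status() (a minimal sketch; the array keys below are the ones documented for that function):
<?php
// Pass false to skip the per-script list and keep the output small
$status = opcache_get_status(false);
$mem = $status['memory_usage'];
printf(
    "used: %.1f MB, free: %.1f MB, wasted: %.1f MB (%.1f%%)\n",
    $mem['used_memory'] / 1048576,
    $mem['free_memory'] / 1048576,
    $mem['wasted_memory'] / 1048576,
    $mem['current_wasted_percentage']
);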
If the memory already has plenty of space, then increasing the size further won't make any difference, other than preventing the server using that memory for other things.
Maybe this is a stupid question, but I'm trying to figure out how max_accelerated_files actually works...
I understand the description from the PHP manual:
opcache.max_accelerated_files integer
The maximum number of keys (and therefore scripts) in the OPcache hash table. The actual value used will be the first number in the set of prime numbers { 223, 463, 983, 1979, 3907, 7963, 16229, 32531, 65407, 130987 } that is greater than or equal to the configured value. The minimum value is 200. The maximum value is 100000 in PHP < 5.5.6, and 1000000 in later versions.
But my question is: what does it do with this number once it is configured? Does it allocate memory for this setting? Why don't we just set it to 1000000 and be done with it, if we have enough memory? What happens if we configure this number to 2000 and we have 2010 files? Do they queue up somehow and get cached when their turn comes? What happens to the uncached files?
Thank you for your answers
OPcache stores cached scripts in a HashTable, a data structure with very fast lookup time (on average), so cached scripts can be retrieved quickly. max_accelerated_files represents the maximum number of keys that can be stored in this HashTable; you can think of it as the maximum number of keys in an associative array. The memory allocated for this is part of the shared memory that OPcache can use, which you configure with the opcache.memory_consumption option. When OPcache tries to cache more scripts than the available number of keys, it triggers a restart and cleans the current cache.
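As an aside, you can work out which hash-table size a given setting actually maps to, since the manual quote above lists the prime bucket sizes. A small sketch (the prime list is copied from the quoted documentation; the function name is mine):
<?php
// Prime bucket sizes quoted in the opcache.max_accelerated_files docs
$primes = [223, 463, 983, 1979, 3907, 7963, 16229, 32531, 65407, 130987];

function effectiveMaxAcceleratedFiles(int $configured, array $primes): int {
    foreach ($primes as $p) {
        if ($p >= $configured) {
            return $p; // the first prime >= the configured value is used
        }
    }
    return end($primes);
}

echo effectiveMaxAcceleratedFiles(2000, $primes); // prints 3907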
So let's say you configured opcache.max_accelerated_files to 223, and a request to your /home route parses and caches 200 scripts into OPcache. As long as your next requests need only those 200 scripts, OPcache is fine. But if one of the following requests parses 24 new scripts, OPcache triggers a restart to make room for caching them. Since you don't want that to happen, you should monitor OPcache and choose an appropriate number.
Keep in mind that one file can be cached more than once, under different keys, if it is required with a relative path like require 'include.php' or require '../../include.php'. The cleanest solution to avoid this is to use a proper autoloader.
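To illustrate the duplication (a hedged sketch; the file names are made up), anchoring includes on __DIR__ resolves them to a single absolute path and therefore a single cache key:
<?php
// These two lines may end up as two separate keys in the OPcache hash table:
//   require 'include.php';
//   require '../../include.php';
// This resolves to one absolute path, hence one cache key:
require __DIR__ . '/include.php';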
I'm using APC to reduce my loading time for my PHP files. My files load very fast, except for one file where I define more than 100 arrays. This 270 kb file takes 200 ms to load. The rest of the files are full of objects, methods, and functions.
I'm wondering: does OP code caching not work as well for arrays?
My APC cache should be big enough to handle all of my classes. Currently 40% of my cache is free. My hit rate is 99%.
apc.shm_size = 32M
apc.max_file_size = 1M
apc.shm_segments = 1
APC 3.1.6
I'm using PHP 5.2, Apache 2, and Windows Vista.
All your arrays need to be serialized when stored in the cache and unserialized again when you load them from the cache. This costs time and might be the significant factor in the slowdown you're seeing.
One way to speed up serialization a bit is to use igbinary, which can be used seamlessly with APC by putting apc.serializer=igbinary in php.ini or in the ini file that configures APC. (Note: this requires APC >= 3.1.7.)
You could also set apc.stat to 0 (in the same ini file) so that APC only checks files for modifications once, as opposed to on every request.
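Put together, the relevant lines might look like this (an illustrative sketch; adjust to your APC version and setup):
apc.serializer=igbinary   ; requires APC >= 3.1.7 and the igbinary extension
apc.stat=0                ; skip per-request stat() calls; clear the cache on deploy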
One thing about opcode caching is that unless you have it configured correctly, it will continue to stat each file to look for changes. This can cause significant overhead if you need to parse and convert many files to opcode.
You typically get a huge boost in performance by setting apc.stat = 0. However, be aware that in order to make changes to your code, you'll need to call apc_clear_cache() or restart Apache.
http://www.php.net/manual/en/apc.configuration.php#ini.apc.stat
The problem was using the gettext library to translate everything. When I got rid of around 1000 function calls, the load time dropped from 200 ms to 6 ms.
My guess is that the serialization of the data is also a problem, though a secondary one.
I've occasionally run up against a server's memory allocation limit, particularly with a bloated application like WordPress, but I've never encountered "Unable to allocate memory for pool" and am having trouble tracking down any information about it.
Does anyone know what this means? I've tried increasing the memory_limit without success. I also haven't made any significant changes to the application. One day there was no problem, the next day I hit this error.
Using a TTL of 0 means that APC will flush the whole cache when it runs out of memory. The error doesn't appear anymore, but it makes APC far less efficient. It's a no-risk, no-trouble, "I don't want to do my job" decision. APC is not meant to be used that way. You should choose a TTL high enough that the most-accessed pages won't expire. Best of all is to give APC enough memory so it doesn't need to flush the cache at all.
Just read the manual to understand how ttl is used: http://www.php.net/manual/en/apc.configuration.php#ini.apc.ttl
The solution is to increase memory allocated to APC.
Do this by increasing apc.shm_size.
If APC is compiled to use shared segment memory, you will be limited by your operating system. Type this command to see your system's limit for each segment:
sysctl -a | grep -E "shmall|shmmax"
To allocate more memory, you'll have to increase the number of segments with the apc.shm_segments parameter.
If APC is using mmap memory, then you have no such limit. The amount of memory is still defined by the same apc.shm_size option.
If there's not enough memory on the server, use the apc.filters option to prevent less frequently accessed PHP files from being cached.
But never use a TTL of 0.
As c33s said, use apc.php to check your config. Copy the file from the APC package to a web folder and point your browser at it. You'll see what is really allocated and how it is used. The graphs must remain stable after hours; if they are completely changing at each refresh, it means that your setup is wrong (APC is flushing everything). Allocate 20% more RAM than what APC really uses as a safety margin, and check it on a regular basis.
The default of allowing only 32MB is ridiculously low. PHP was designed when servers had 64MB and most sites used one PHP file per page. Nowadays solutions like Magento require more than 10k files (~60MB in APC). You should allow enough memory so that most PHP files are always cached. It's not a waste; it's more efficient to keep opcode in RAM than to keep the corresponding raw PHP in the file cache.
Nowadays you can find dedicated servers with 24GB of memory for as low as $80/month, so don't hesitate to give several GB to APC. I put 2GB out of 24GB on a server hosting 5 Magento stores and ~40 WordPress websites; APC uses 1.2GB. Count 64MB per Magento installation and 40MB for a WordPress site with some plugins.
Also, if you have development websites on the same server, exclude them from the cache, as in the sketch below.
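A hedged example of the relevant directives (the path and regex are illustrative; apc.filters takes a comma-separated list of POSIX regexes, with a leading - meaning exclude):
apc.shm_size=2048M
apc.filters="-/var/www/dev/.*"   ; don't cache files under the dev sites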
It is probably APC related.
For the people having this problem, please specify your .ini settings, specifically your apc.mmap_file_mask setting.
For file-backed mmap, it should be set to something like:
apc.mmap_file_mask=/tmp/apc.XXXXXX
To mmap directly from /dev/zero, use:
apc.mmap_file_mask=/dev/zero
For POSIX-compliant shared-memory-backed mmap, use:
apc.mmap_file_mask=/apc.shm.XXXXXX
Solution for me:
apc.ttl=0
apc.shm_size=anything you want
Edit start
Warning!
Bokan pointed out that I should add a warning here.
If you have a TTL of 0, every cached item can be purged immediately. So if you have a small cache size, like 2MB, and a TTL of 0, the cache becomes useless, because the data in it is constantly overwritten.
A TTL of 0 only means the cache cannot fill up with items that can't be replaced.
So you have to choose a good balance between TTL and cache size.
In my case I had a cache size of 1GB, so it was more than enough for me.
Edit end
I had the same issue on CentOS 5 with PHP 5.2.17, and noticed that if the cache size is small and the ttl parameter is high (like 7200) while there are a lot of PHP files to cache, the cache fills up quite fast and APC doesn't find anything it can remove, because all the files in the cache still fit within the TTL.
Increasing the memory size is only a partial solution; you still run into this error if your cache fills up and all files are within the TTL. So my solution was to set the TTL to 0, so APC fills up the cache and there is always the possibility for APC to clear some memory for new data.
Hope that helps.
edit:
see also: http://pecl.php.net/bugs/bug.php?id=16966
Download http://pecl.php.net/get/APC, extract it, and run the included apc.php; it gives you a nice diagram of what your cache usage looks like.
Running the apc.php script is key to understanding what your problem is, IMO. This helped us size our cache properly and, for the moment, seems to have resolved the problem.
For newbies like myself, these resources helped:
Finding the apc.ini file to make the changes recommended by c33s above, and setting recommended amounts:
http://www.untwistedvortex.com/optimizing-tuning-apc-alternate-php-cache/
Understanding what apc.ttl is:
http://www.php.net/manual/en/apc.configuration.php#ini.apc.ttl
Understanding what apc.shm_size is:
http://www.php.net/manual/en/apc.configuration.php#ini.apc.shm-size
As Bokan has mentioned, you can increase the memory if it's available, and he is right about how counterproductive setting the TTL to 0 is.
Note: This is how I fixed this error for my particular problem. It's a generic issue that can be caused by a lot of things, so only follow the steps below if you get the error and you think it's caused by duplicate PHP files being loaded into APC.
The issue I was having occurred when I released a new version of my PHP application: when I replaced all my .php files with new ones, APC would load both versions into the cache.
Because I didn't have enough memory for two versions of the PHP files, APC would run out of memory.
There is an option called apc.stat that tells APC to check whether a particular file has changed and, if so, replace it. This is typically fine for development, because you are constantly making changes, but on production it's usually turned off, as it was in my case: http://www.php.net/manual/en/apc.configuration.php#ini.apc.stat
Turning apc.stat on would fix this issue if you are OK with the performance hit.
The solution I came up with for my problem is to check whether the project version has changed and, if so, empty the cache and reload the page.
define('PROJECT_VERSION', '0.28');

if (apc_exists('MY_APP_VERSION')) {
    if (apc_fetch('MY_APP_VERSION') != PROJECT_VERSION) {
        // New release detected: drop the stale cache entries
        apc_clear_cache();
        apc_store('MY_APP_VERSION', PROJECT_VERSION);
        // Reload the current URL so it is served from a fresh cache
        header('Location: ' . 'http' . (empty($_SERVER['HTTPS']) ? '' : 's') . '://' . $_SERVER['SERVER_NAME'] . $_SERVER['REQUEST_URI']);
        exit;
    }
} else {
    apc_store('MY_APP_VERSION', PROJECT_VERSION);
}
This worked for our guys (running a slew of WordPress sites on the same server).
Changed memory settings in the /etc/php.d/apc.ini file. It was set to 64M, so we doubled it to 128M.
apc.shm_size=128M
Looking around the internet, there can be various causes.
In my case, leaving everything at its default except...
apc.shm_size = 64M
...cleared the countless warnings that I was getting earlier.
I received the error "Unable to allocate memory for pool" after moving an OpenCart installation to a different server. I also tried raising the memory_limit.
The error stopped after I changed the permissions of the file in the error message to have write access by the user that apache runs as (apache, www-data, etc.). Instead of modifying /etc/group directly (or chmod-ing the files to 0777), I used usermod:
usermod -a -G vhost-user-group apache-user
Then I had to restart apache for the change to take effect:
apachectl restart
Or
sudo /etc/init.d/httpd restart
Or whatever your system uses to restart apache.
If the site is on shared hosting, you may need to change the file permissions with an FTP program, or contact your hosting provider.
To resolve this problem, set the value of apc.shm_size as an integer.
Locate your apc.ini file (on my system it is at /etc/php5/conf.d/apc.ini) and set:
apc.shm_size = 1000
On my system I had to insert
apc.shm_size = 64M
into /usr/local/etc/php.ini
(FreeBSD 9.1)
Then when I looked at apc.php (which I copied from /usr/local/share/doc/APC/apc.php to /usr/local/www/apache24/data),
I found that the cache size had increased from the default of 32M to 64M, and I was no longer getting a large cache-full count.
References:
http://au1.php.net/manual/en/apc.configuration.php
Also read Bokan's comments; they were very helpful.
Monitor the size of your cached files (you can use apc.php from the APC PECL package) and increase apc.shm_size according to your needs. This solves the problem.
Recent versions of PHP have a cache of filenames for knowing the real path of files, and require_once() and include_once() can take advantage of it.
There's a value you can set in your php.ini to control the size of the cache, but I have no idea how to tell what the size should be. The default value is 16k, but I see no way of telling how much of that cache we're using. The docs are vague:
Determines the size of the realpath cache to be used by PHP. This value should be increased on systems where PHP opens many files, to reflect the quantity of the file operations performed.
Yes, I can jack up the amount of cache allowed, and run tests with ab or some other testing, but I'd like something with a little more introspection than just timing from a distance.
You've probably already found this, but for those who come across this question, you can use realpath_cache_size() and realpath_cache_get() to figure out how much of the realpath cache is being used on your site and tune the settings accordingly.
Though I can't offer anything specific to your situation, my understanding is that 16k is pretty low for most larger PHP applications (particularly ones that use a framework like the Zend Framework). I'd say at least double the cache size if your application uses lots of includes and see where to go from there. You might also want to increase the TTL as long as your directory structure is pretty consistent.
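If you do decide to raise these, the two relevant php.ini directives are shown below (the values are illustrative, not recommendations):
realpath_cache_size=256K   ; default is 16K in older PHP versions
realpath_cache_ttl=300     ; seconds an entry stays valid; default is 120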
To expand on the answer provided by Noodles, you can create a little test.php with the following code:
<?php
// Bytes currently used by the realpath cache
echo "<br>cache size: " . realpath_cache_size();
echo "<br>";
// Dump the cached path entries
echo "<br>cache: ";
print_r(realpath_cache_get());
?>
Upload this to your site and navigate to it. It will show you the number of bytes currently being used by your cache, as well as what's actually in it. This value changes all the time, so keep hitting F5 to get a better sense of where you're at. It's also a good idea to do your testing during peak times.
If you see the value is frequently hitting your max cache size as defined in your php.ini then it's time to increase that value.
Keep in mind that the default PHP setting is 16K, which is 16384 bytes.
The 16K is the number of files, not activity.
Set it to 1K for most sites. Very similar to the settings in APC, XCache, eAccelerator, etc.