Recent versions of PHP maintain a cache that maps filenames to their resolved real paths, and require_once() and include_once() can take advantage of it.
There's a value you can set in your php.ini to control the size of this cache, but I have no idea how to tell what the size should be. The default value is 16k, but I see no way of telling how much of that cache we're actually using. The docs are vague:
Determines the size of the realpath cache to be used by PHP. This value should be increased on systems where PHP opens many files, to reflect the quantity of the file operations performed.
Yes, I can jack up the amount of cache allowed and run tests with ab or some other benchmarking tool, but I'd like something with a little more introspection than just timing from a distance.
You've probably already found this, but for those who come across this question, you can use realpath_cache_size() and realpath_cache_get() to figure out how much of the realpath cache is being used on your site and tune the settings accordingly.
Though I can't offer anything specific to your situation, my understanding is that 16k is pretty low for most larger PHP applications (particularly ones that use a framework like the Zend Framework). I'd say at least double the cache size if your application uses lots of includes and see where to go from there. You might also want to increase the TTL as long as your directory structure is pretty consistent.
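For reference, the two relevant php.ini directives look roughly like this; the values below are only illustrative starting points, not recommendations:

realpath_cache_size = 64k   ; bytes reserved for the realpath cache (16k is the old default)
realpath_cache_ttl = 360    ; seconds a resolved path stays cached (the default is 120)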
To expand on the answer provided by Noodles, you can create a little test.php with the following code:
<?php
echo "<br>cache size: ".realpath_cache_size();
echo "<br>";
echo "<br>cache: ".print_r(realpath_cache_get(););
?>
Upload this to your site and navigate to it. It will show you the number of bytes currently being used by your cache, as well as what's actually in the cache. This value changes all the time, so keep hitting F5 to get a better sense of where you're at. It's also a good idea to do your testing during peak times.
If you see the value is frequently hitting your max cache size as defined in your php.ini then it's time to increase that value.
Keep in mind that the default PHP setting is 16K, which is 16384 bytes.
The 16K is the number of files, not activity.
Set it to 1K for most sites. This is very similar to the settings in APC, XCache, eAccelerator, etc.
Related
I have a potentially very large (several megabytes perhaps) PHP class, generated of course. Is there any setting or limitation that would cause opcache slowdown in this case?
You should check the opcache.max_file_size option. It sets the maximum file size to cache, so bigger files can be skipped by the opcode cacher. However, it defaults to 0, meaning all files are cached.
The next option to check is opcache.max_accelerated_files. For big projects using Twig and annotations, the default value of 2000 is not enough. Consider increasing it.
The last one is opcache.memory_consumption. I noticed that after reaching this limit, OPcache won't add new items to the cache, so increase it to 256M or 512M.
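Putting those three directives together, a rough php.ini sketch might look like this (the numbers are just examples to tune for your project):

opcache.max_file_size = 0             ; 0 = no size limit, so even very large files are cached
opcache.max_accelerated_files = 10000 ; raise well above the 2000 default for big projects
opcache.memory_consumption = 256      ; MB of shared memory reserved for cached opcodes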
A few things can make OPcache slow. For example, if opcache.consistency_checks is enabled, OPcache will verify the cache checksum every N requests, where N is the value of this configuration directive. For a file as large as yours, I'm sure it is not a good idea to enable this.
Also, if you suspect something is affecting OPcache, why not try a tool like OpCacheGUI to check it?
I'm using APC to reduce my loading time for my PHP files. My files load very fast, except for one file where I define more than 100 arrays. This 270 kb file takes 200 ms to load. The rest of the files are full of objects, methods, and functions.
I'm wondering: does OP code caching not work as well for arrays?
My APC cache should be big enough to handle all of my classes. Currently 40% of my cache is free. My hit rate is 99%.
apc.shm_size = 32M
apc.max_file_size = 1M
apc.shm_segments = 1
APC 3.1.6
I'm using PHP 5.2, Apache 2, and Windows Vista.
All your arrays need to be serialized when stored in the cache and unserialized again when you load them from it. This costs time and may be the main source of the slowdown you're seeing. (For background, see: Serialization.)
One way to speed up serialization a bit is to use igbinary. It can be used seamlessly with APC by putting apc.serializer=igbinary in php.ini, or in the ini file that configures APC. (Note: this requires APC >= 3.1.7.)
You could also set apc.stat (in the same ini file) to 0 so that it only checks files for modifications once, as opposed to on every request.
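As a quick sketch, the two ini settings described above would look like this (assuming APC >= 3.1.7 and the igbinary extension installed):

apc.serializer = igbinary   ; use igbinary instead of PHP's default serializer
apc.stat = 0                ; only stat files once; clear the cache after each deploy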
One thing about opcode caching is that unless you have it configured correctly, it will continue to stat each file to look for changes. This can cause significant overhead if you need to parse and convert many files to opcode.
You typically get a huge boost in performance by setting apc.stat = 0. However, be aware that in order to make changes to your code, you'll need to call apc_clear_cache() or restart Apache.
http://www.php.net/manual/en/apc.configuration.php#ini.apc.stat
The problem was using the gettext library to translate everything. When I got rid of around 1000 function calls, the load time dropped from 200 ms to 6 ms.
My guess is that serialization of the data is also a problem, but a secondary one.
I've occasionally run up against a server's memory allocation limit, particularly with a bloated application like WordPress, but I've never encountered "Unable to allocate memory for pool" and I'm having trouble tracking down any information about it.
Does anyone know what this means? I've tried increasing the memory_limit without success. I also haven't made any significant changes to the application. One day there was no problem, the next day I hit this error.
Using a TTL of 0 means that APC will flush the entire cache when it runs out of memory. The error doesn't appear anymore, but it makes APC far less efficient. It's a no-risk, no-trouble, "I don't want to do my job" decision. APC is not meant to be used that way. You should choose a TTL high enough that the most-accessed pages won't expire. The best approach is to give APC enough memory so it doesn't need to flush the cache at all.
Just read the manual to understand how the TTL is used: http://www.php.net/manual/en/apc.configuration.php#ini.apc.ttl
The solution is to increase memory allocated to APC.
Do this by increasing apc.shm_size.
If APC is compiled to use Shared Segment Memory, you will be limited by your operating system. Type this command to see your system's limit for each segment:
sysctl -a | grep -E "shmall|shmmax"
To allocate more memory, you'll have to increase the number of segments with the apc.shm_segments parameter.
If APC is using mmap memory then you have no limit. The amount of memory is still defined by the same option apc.shm_size.
If there's not enough memory on the server, then use the apc.filters option to prevent less frequently accessed PHP files from being cached.
But never use a TTL of 0.
As c33s said, use apc.php to check your config. Copy the file from the APC package to a web folder and point your browser at it. You'll see what is really allocated and how it is used. The graphs should remain stable over hours; if they change completely at each refresh, your setup is wrong (APC is flushing everything). Allocate 20% more RAM than what APC really uses as a safety margin, and check it on a regular basis.
The default of only 32MB is ridiculously low. PHP was designed when servers had 64MB and most scripts used one PHP file per page. Nowadays solutions like Magento require more than 10k files (~60MB in APC). You should allow enough memory so that most PHP files are always cached. It's not a waste; it's more efficient to keep opcode in RAM than to have the corresponding raw PHP in the file cache.
Nowadays you can find dedicated servers with 24GB of memory for as low as $80/month, so don't hesitate to allocate several GB to APC. I gave 2GB out of 24GB to APC on a server hosting 5 Magento stores and ~40 WordPress websites; APC uses 1.2GB of it. Count on 64MB per Magento installation and 40MB for a WordPress site with some plugins.
Also, if you have development websites on the same server, exclude them from the cache.
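As a hedged apc.ini sketch along those lines (the numbers and the filter pattern are only examples, not recommendations for your setup):

apc.shm_size = 256M                 ; enough for all production opcode plus ~20% headroom
apc.ttl = 7200                      ; high enough that frequently hit files never expire
apc.filters = "-/var/www/dev/.*"    ; example pattern: exclude development vhosts from the cache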
It is probably APC related.
For the people having this problem, please specify your .ini settings, specifically your apc.mmap_file_mask setting.
For file-backed mmap, it should be set to something like:
apc.mmap_file_mask=/tmp/apc.XXXXXX
To mmap directly from /dev/zero, use:
apc.mmap_file_mask=/dev/zero
For POSIX-compliant shared-memory-backed mmap, use:
apc.mmap_file_mask=/apc.shm.XXXXXX
Solution for me:
apc.ttl = 0
apc.shm_size = anything you want
Edit start

Warning!

Bokan pointed out that I should add a warning here: if you have a TTL of 0, every cached item can be purged immediately. So if you have a small cache size like 2MB and a TTL of 0, the cache becomes useless, because the data in it gets constantly overwritten.
Lowering the TTL only means that the cache cannot become full of items that can't be replaced, so you have to choose a good balance between TTL and cache size.
In my case I had a cache size of 1GB, which was more than enough for me.

Edit end
I had the same issue on CentOS 5 with PHP 5.2.17, and noticed that if the cache size is small and the TTL parameter is "high" (like 7200) while there are a lot of PHP files to cache, the cache fills up quite fast and APC can't find anything it is allowed to remove, because all the files in the cache still fall within the TTL.
Increasing the memory size is only a partial solution; you still run into this error if your cache fills up and all files are within the TTL. So my solution was to set the TTL to 0, so APC fills up the cache and always has the option of clearing some memory for new data.
Hope that helps.
Edit:
See also: http://pecl.php.net/bugs/bug.php?id=16966
Download http://pecl.php.net/get/APC, extract it, and run the included apc.php; it gives you a nice diagram of what your cache usage looks like.
Running the apc.php script is key to understanding what your problem is, IMO. It helped us size our cache properly, and for the moment it seems to have resolved the problem.
For newbies like myself, these resources helped:
Finding the apc.ini file to make the changes recommended by c33s above, and setting recommended amounts:
http://www.untwistedvortex.com/optimizing-tuning-apc-alternate-php-cache/
Understanding what apc.ttl is:
http://www.php.net/manual/en/apc.configuration.php#ini.apc.ttl
Understanding what apc.shm_size is:
http://www.php.net/manual/en/apc.configuration.php#ini.apc.shm-size
As Bokan mentioned, you can increase the memory if it's available, and he is right about how counterproductive setting the TTL to 0 is.
Note: This is how I fixed this error for my particular problem. It's a generic issue that can be caused by a lot of things, so only follow the steps below if you get the error and think it's caused by duplicate PHP files being loaded into APC.
The issue I was having occurred when I released a new version of my PHP application, i.e. when I replaced all my .php files with new ones; APC would load both versions into the cache.
Because I didn't have enough memory for two versions of the PHP files, APC would run out of memory.
There is an option called apc.stat that tells APC to check whether a particular file has changed and, if so, replace it. This is typically fine for development because you are constantly making changes; on production, however, it's usually turned off, as it was in my case - http://www.php.net/manual/en/apc.configuration.php#ini.apc.stat
Turning apc.stat on would fix this issue if you are ok with the performance hit.
The solution I came up with for my problem is to check whether the project version has changed and, if so, empty the cache and reload the page.
define('PROJECT_VERSION', '0.28');

if (apc_exists('MY_APP_VERSION')) {
    if (apc_fetch('MY_APP_VERSION') != PROJECT_VERSION) {
        // New release detected: drop the stale cache and remember the new version
        apc_clear_cache();
        apc_store('MY_APP_VERSION', PROJECT_VERSION);
        // Reload the current page so it is served from a fresh cache
        header('Location: ' . 'http' . (empty($_SERVER['HTTPS']) ? '' : 's') . '://' . $_SERVER['SERVER_NAME'] . $_SERVER['REQUEST_URI']);
        exit;
    }
} else {
    apc_store('MY_APP_VERSION', PROJECT_VERSION);
}
This worked for our guys (running a slew of Wordpress sites on the same server).
Changed memory settings in the /etc/php.d/apc.ini file. It was set to 64M, so we doubled it to 128M.
apc.shm_size=128M
Looking around the internet, there can be various causes.
In my case leaving everything default except...
apc.shm_size = 64M
...cleared the countless warnings that I was getting earlier.
I received the error "Unable to allocate memory for pool" after moving an OpenCart installation to a different server. I also tried raising the memory_limit.
The error stopped after I changed the permissions of the file in the error message to have write access by the user that apache runs as (apache, www-data, etc.). Instead of modifying /etc/group directly (or chmod-ing the files to 0777), I used usermod:
usermod -a -G vhost-user-group apache-user
Then I had to restart apache for the change to take effect:
apachectl restart
Or
sudo /etc/init.d/httpd restart
Or whatever your system uses to restart apache.
If the site is on shared hosting, you may need to change the file permissions with an FTP program, or contact the hosting provider.
To resolve this problem, set the value of apc.shm_size as an integer.
Locate your apc.ini file (on my system it is at /etc/php5/conf.d/apc.ini) and set:
apc.shm_size = 1000
On my system I had to insert

apc.shm_size = 64M

into /usr/local/etc/php.ini (FreeBSD 9.1).

Then, when I looked at apc.php (which I copied from /usr/local/share/doc/APC/apc.php to /usr/local/www/apache24/data), I found that the cache size had increased from the default of 32M to 64M and I was no longer getting a large cache-full count.
References:
http://au1.php.net/manual/en/apc.configuration.php
Also read Bokan's comments; they were very helpful.
Monitor your cached files size (you can use apc.php from the APC PECL package) and increase apc.shm_size according to your needs.
This solves the problem.
I've just noticed that my app is including over 148 PHP files on one page. Bear in mind this is the back-end admin and not the main site, but is this too many? What impact does a large number of includes have on a server, both under average load and when stressed? Would disk I/O be a problem?
Included File Stats
File Type - Include Count - Combined File Size
Index - 1 - 0.00169 MB
Bootstrap - 1 - 0.01757 MB
Helper - 98 - 0.58557 MB - (11 are Profiler related classes)
Configuration - 8 - 0.00672 MB
Data Store - 23 - 0.10836 MB
Action - 8 - 0.02652 MB
Page - 1 - 0.00094 MB
I18n Resource - 7 - 0.00870 MB
Vendor Library - 1 - 0.02754 MB
Total Files - 148 - 0.78362 MB
Time ran 0.123920917511
Memory used 2.891 MB
Edit 1: It should be noted that this is a worst-case-scenario page. It has many different template models, controllers and associated views because it handles publishing with custom fields.
Edit 2: The frontend also has aggressive page caching, so the number of includes there is roughly 30-40 at the moment.
Edit 3: When the profiler is turned off, its files won't be included, so that will remove quite a few includes.
So, here's a breakdown of the potential problems.
The number of files itself is an issue. Unless you're using a bytecode cache (and you are), and that cache is configured to not stat the file prior to pulling in the compiled bytecode, PHP is going to stat every single one of those files on include, then read them in. In some cases, that can also mean path resolution and a naive autoloader that pokes and prods at numerous directories. This won't be "slow" because the OS will surely have things cached if the files are hit frequently, but it does add precious milliseconds to each request.
If every autoloader is designed properly and the codebase relies entirely on the autoloader to pull in the required classes (meaning nothing uses include/require/include_once/require_once on a class file), you can avoid having to open and read many of the files by gluing every single class together into a single large include. This is a bit on the impractical side of things, mainly because if there is no bytecode cache, PHP still has to parse, compile and interpret it all. Additionally, not every class is going to be used on every request, so it may be a bit wasteful.
The bottom line is that a well-configured bytecode cache will completely mitigate this problem. There's nothing wrong with telling your customers that they have to properly configure their servers for optimal performance. If they know what they're doing, they'll have everything correct to begin with.
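If you did want to experiment with the "glue every class into a single include" idea mentioned above, a minimal deploy-time sketch could look like the following; the directory and output path are hypothetical, and files with namespaces, closing ?> tags, or tricky inheritance ordering would need extra care:

<?php
// build_combined.php - hypothetical one-off script run at deploy time
$combined = "<?php\n";
$iterator = new RecursiveIteratorIterator(
    new RecursiveDirectoryIterator('/path/to/library') // adjust to your class directory
);
foreach ($iterator as $file) {
    if ($file->isFile() && pathinfo($file->getPathname(), PATHINFO_EXTENSION) === 'php') {
        // Strip the opening tag so the concatenated file stays valid PHP
        $code = file_get_contents($file->getPathname());
        $combined .= preg_replace('/^<\?php/', '', $code, 1) . "\n";
    }
}
file_put_contents('/path/to/combined_classes.php', $combined);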
Yes, so many files can be a problem.
No, it is probably not a problem in your case, since this is only a back-end, which is probably accessed by a few people, and not too often.
In general, I would discourage having more than 20 PHP files included on each page. This is because even if the website and the server are highly optimized, for every page the server must go and look at every file, at least to see whether it changed since the last request (if there is no cache at this level).
Even if the time to access a file is tiny, it is time you are losing on each request. This tiny period multiplied by 148 can become an issue (and a huge scalability problem).
When I worked on a PHP framework project, I used a trick to reduce the number of files. Several files were combined into one minified file, and this single file was cached. Then, if there was a need to update the framework or the website, the cached file was automatically removed and rebuilt.
Even though I personally discourage minifying source code (because it is difficult to do, difficult to test, and creates a bunch of problems, such as meaningless line numbers in errors), you can probably do the same thing by combining all your files into a single file.
Be careful: if page A uses half of those files and page B the other half, combining everything will probably decrease performance, since the PHP engine will have to parse more code.
Are the includes themselves doing something fancy, like db queries? And are they all at the top of the page, or are they included as-needed?
Those stats don't look bad, so if admin access is infrequent, you may be OK. But you should examine this from a design angle: can things be organized in a way that would prevent you from having to maintain so many includes? Separate from any performance issues, there is a risk here of creating hard-to-track dependency bugs.
(It could be as MainMa said, related to a framework, in which case you may have no control over the above. I only mention it in case you do.)
A couple things in case you didn't know already:
If it's just text or static HTML, you can get the contents with file_get_contents(), readfile(), etc. This is somewhat faster because the loaded file doesn't need parsing. But obviously if it contains PHP code this won't help.
You can use include_once() to prevent the same file from being included twice (if, for instance, it's included by two files that are themselves included by the top level file).
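A quick illustration of both points (the file names are made up):

<?php
// Static fragments can skip the PHP parser entirely:
echo file_get_contents('templates/footer.html');

// include_once() guards against pulling the same file in twice:
include_once 'lib/Database.php';
include_once 'lib/Database.php'; // second call does nothing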
Disk I/O won't be your problem. The system will cache frequently accessed files in RAM, or if they aren't that frequently accessed, it won't matter.
Load times may be an issue, as each file has to be requested and interpreted by the server separately.
I don't know how the web server will cope with the many requests; it may not care. If the client doesn't do pipelined requests, though, you'll pay for many, many TCP connections being built up and torn down, which also costs a goodly amount of latency.
Honestly, don't worry about it - 148 is nothing. Even if zero caching happened on the PHP side, you're going to be hitting filesystem caches almost every time, and in the grand scheme of things virtually every open-source anything out there has way more files without a problem (Drupal, WordPress, Joomla, Elgg, anything).
Really, there's no problem here - even if you managed to shave off a millisecond here or there, it's so far down the list of places where you can make speed gains that it's barely worth considering for more than a second.
Caveat: do try to use require_once and include_once where suited, and ensure you only load the classes/files that are needed to process a given request.
I'm studying high-performance coding for websites in PHP, and this idea popped into my mind:
We know that accessing a database uses a significant amount of CPU, so we cache such data by saving it to the HDD. But I was wondering: can't it sit in the server's RAM instead, so I can access it even faster?
You might want to check out memcached:
http://www.php.net/manual/en/intro.memcache.php
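For what it's worth, a minimal sketch using the old Memcache extension from that manual page; the host, port, key and helper function are just examples:

<?php
$memcache = new Memcache();
$memcache->connect('127.0.0.1', 11211); // default memcached port

// Keep an expensive query result in RAM for 10 minutes
$rows = $memcache->get('recent_posts');
if ($rows === false) {
    $rows = run_expensive_query();                 // hypothetical helper
    $memcache->set('recent_posts', $rows, 0, 600); // flags = 0, expire = 600 seconds
}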
PHP normally comes with APC as a bytecode cache. You can also use it as a local cache. If you need something in a distributed/clustered environment, then memcached (plus possibly beanstalkd) is the way to go.
XCache, eAccelerator, APC and memcache allow you to save items to semi-persistent memory (in most cases you don't necessarily know when an item will expire). It isn't the same as a database; it's more like a key/value store. The downside is that it requires a third-party library, so you might be a bit limited depending on your environment.
I think you might be able to get the same effect using shared memory (via PHP's shmop_ functions), but I have never used them and don't know whether they are included with PHP's standard library, so feel free to bash me or edit out this mention.
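A rough, untested sketch of what the shmop_ route might look like (the key derivation and segment size are arbitrary):

<?php
// Untested sketch of caching data in a System V shared memory segment
$key = ftok(__FILE__, 'a');                // derive an IPC key from this file
$shm = shmop_open($key, 'c', 0644, 1024);  // create/attach a 1 KB segment

// Values must be strings, so serialize before writing
shmop_write($shm, serialize(array('hits' => 42)), 0);

// Later, possibly in another request: read it back
$raw  = shmop_read($shm, 0, shmop_size($shm));
$data = unserialize(rtrim($raw, "\0"));    // strip the segment's trailing null padding
shmop_close($shm);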
If your server is ANY good, then it will already do so. But of course, it may be the case that your server is serving a few thousand other tasks besides yours as well, meaning you don't have that server's cache all for yourself.
And if there really are a few thousand others being served besides you, then the probability is that much higher that there is at least one nutcase among those thousands who is doing something he really shouldn't be doing - something the server has not been programmed to detect or stop, only to try and make the best of, at the expense of resource availability for the x999 "responsible" users.