Ignore caching of a specific file with APC - php

Is there a way to prevent a specific file from being opcode cached with APC? The use case is as follows:
An application that sits in the cloud and dynamically resizes itself (spinning servers up and down as required). The config.php script must know the new IPs as servers become available or unavailable.
Since these changes happen frequently enough, and the config.php file is fairly basic, it would be ideal to not have to worry about clearing APC just for the one file.
Clearing the one file out of APC is definitely a possibility, but since you can't access APC via the command line, the solution ends up being rather inelegant.

I have a similar use case. I've asked myself the same question many times and have not been able to find a solution. My workaround has been to create a quick script that clears the APC cache on each server: every time I rebuild the app, I hit that script on each server to clear the opcode cache using apc_clear_cache(). If you only have to clear one file, you may be better off with apc_compile_file().
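For reference, a minimal sketch of such a clear-cache script; the token check and the config path are assumptions for illustration, not part of APC:

    <?php
    // clear-cache.php -- hit this on each server after a rebuild.
    // The token check and the config path are assumptions for illustration.
    if (!isset($_GET['token']) || $_GET['token'] !== 'change-me') {
        header('HTTP/1.0 403 Forbidden');
        exit('Forbidden');
    }

    // Option 1: drop the entire opcode cache.
    apc_clear_cache();

    // Option 2: refresh just one file by recompiling it in place.
    apc_compile_file('/var/www/app/config.php');

    echo "APC cache refreshed\n";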
Hope this helps.

Yes, you should check out the apc.filters configuration directive. See: Another Question | PHP Docs.
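For example, something along these lines in php.ini; the exact pattern is an assumption and should be tested, since the match is performed against the name passed to include/require rather than the absolute path:

    ; php.ini -- keep config.php out of the opcode cache
    ; apc.filters is a comma-separated list of POSIX regexes;
    ; a pattern starting with "-" means "do not cache matching files"
    apc.filters = "-config\.php$"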

I don't know of a way to do what you're suggesting, but you should be able to engineer your way around it.
The obvious solution is to not store the data in a PHP file. Since you've already got APC, why not just keep the configuration data in APC (as cached data, not opcodes)?
So whatever modifies config.php would now do something like this:
Modify some non-php file (something.ini, or something like that)
Invalidate the APC cache entry.
When config.php needed the data, it would typically read from the cache. If the cache has been invalidated, it reads/parses the data from the ini file, updates the cache, and proceeds as usual.
At the end of the day, you're using an opcode cache to cache data. You should use a data cache instead. Luckily, APC provides both.
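A minimal sketch of that flow; the cache key, ini path, and apc_delete() invalidation step are assumptions for illustration:

    <?php
    // config.php -- read the server list from the APC data cache,
    // falling back to a plain ini file on a miss.
    $config = apc_fetch('app.servers', $hit);

    if ($hit === false) {
        // Cache miss or invalidated entry: re-read and re-cache the ini file.
        $config = parse_ini_file('/var/www/app/servers.ini', true);
        apc_store('app.servers', $config);
    }

    // Whatever rewrites servers.ini just calls apc_delete('app.servers') afterwards,
    // so the next request picks up the fresh list.

The nice part is that config.php itself never changes, so its opcodes can stay cached indefinitely.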

Related

In apache - are php files read from disk each time they are called?

I think this is a pretty big bottleneck for big sites. Is there a way to store them in memory?
Yes, PHP files are by default read and executed on every page request.
You should look into something like APC, Zend Accelerator, or another PHP opcode cache.
You may already have these installed; however, most of the time they will need some edits to php.ini to get them doing their job.
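If APC turns out to be installed but disabled, enabling it is usually a couple of php.ini lines like the following sketch (extension file name, paths and sizes vary by build):

    ; php.ini -- enable the APC opcode cache (values are illustrative)
    extension=apc.so        ; on Windows builds this is typically php_apc.dll
    apc.enabled=1
    apc.shm_size=64M        ; older APC releases expect a plain number of MB, e.g. 64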

php Apc caching or File caching for semi-static website?

I'm new to PHP and want to try caching (for the first time), so I've made a website that has:
dynamic home page
dynamic portfolio page
dynamic contact page
static about page
static admin page
I read a tutorial about caching and tried to make my own caching system:
It uses a file cache based on which page is requested: when a page is requested, the cache system checks whether there's a cache file in the cache directory. If there's no cache file yet, it writes all the output (HTML) from the PHP script (in this case, the output from the output buffer); if there is a cache file that corresponds to the specific id (based on the URI), it just include_once()s the HTML file.
Then I read that CodeIgniter (I built this website using CI) supports APC for caching, so I read up on APC. What I read is that APC caches DB results, but now I'm confused about which I should use.
What I've gathered so far:
file caching would probably be slower if there are a lot of requests (I don't know if this is true, but I read it somewhere via a search engine)
APC is fast
But I'm still confused about which I should use; I'm on shared hosting.
The levels of caching most relevant in a PHP application:
File / Script caching - The operating system will actually do this to a large extent. When a file is opened it's added to an OS-level cache. It stays there until the file is touched or the OS needs to free memory for other processes. A homegrown PHP solution isn't a good replacement for this.
Opcode caching - In order to function, PHP needs to parse and compile a script into opcodes. A mechanism like APC will cache the opcodes of every PHP script executed by Apache, provided that the cache doesn't overflow. A homegrown PHP solution built on top of APC can partially do this, but APC already does it ... so don't bother.
Query caching - If your script accesses a lot of data that doesn't change very frequently, or wherein some latency between updates and the visibility of those updates is acceptable, caching the results from complex queries is beneficial. A homegrown PHP solution built on APC is acceptable and beneficial at this level. But a database level solution is also appropriate here, and often more appropriate.
Output caching - If your page is largely deterministic and/or the same sort of latency applicable to query caching is acceptable, you can cache the entire output of the script using output buffering and APC. A homegrown PHP solution built on APC is acceptable here, but generally not necessary. If the page is static, you're probably not saving yourself any re-computation. And if it's dynamic, it's usually preferable to just re-render the page anyway.
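As an illustration of that last level, a minimal output-caching sketch with ob_start() and APC; the key scheme, TTL and included view are assumptions, not CodeIgniter's built-in caching API:

    <?php
    // Whole-page output caching with APC -- a sketch, not a drop-in solution.
    $key = 'page:' . md5($_SERVER['REQUEST_URI']);

    $html = apc_fetch($key);
    if ($html === false) {
        ob_start();
        include __DIR__ . '/views/portfolio.php'; // hypothetical view that builds the page
        $html = ob_get_clean();
        apc_store($key, $html, 300);              // keep the rendered page for 5 minutes
    }

    echo $html;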
In a dedicated or virtual-dedicated environment you'd need to install APC (or something similar) yourself. But in a shared hosting environment, it's very likely that APC is already installed. And if it weren't, you couldn't install it yourself anyway.
And, due to my own uncertainty, I'd recommend not performing any query or output caching with APC in a shared environment -- I'm not sure whether APC segregates caches by virtual host. Even if it does, I wouldn't assume that my site is truly a separate virtual host.

Selectively disable APC caching

I installed APC on my VPS and it works great with the W3 Cache wordpress plugin. My problem is that there is one MySQL database which is pinged by the client end every few seconds to see if there are new updates. This DB contains time-sensitive information, so it can't be part of the cached data.
How can I disable APC for this database/these files? Or can I set a very short expiry for certain types of data?
Any help is highly appreciated.
APC does two things. It provides a transparent cache of PHP bytecode, and it can cache data at the request of the application.
There is no reason at all to attempt to disable the bytecode cache, but that's not what you seem to be talking about here. The bytecode cache just caches bytecode, not data.
If the application you are using asks APC to cache certain data, and it does not contain an option to disable this caching if APC is installed and available, you are going to need to modify that application. Look for calls to apc_store and apc_fetch and alter the code as required.
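If you do end up altering the code, note that apc_store() takes an optional TTL in seconds, so a hedged middle ground is to keep caching but expire the time-sensitive entry almost immediately; the key name and lookup function here are assumptions:

    <?php
    // Short-TTL caching for time-sensitive data -- a sketch.
    $updates = apc_fetch('latest_updates', $hit);

    if ($hit === false) {
        $updates = fetch_updates_from_mysql();    // hypothetical: your existing DB query
        apc_store('latest_updates', $updates, 2); // expires after 2 seconds
    }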
As mentioned in the comments, your real problem is probably with the Wordpress caching plugin that you've chosen, not with APC. APC just stores data. If it can not disable itself for selected pages, you may need to find a solution that can, or find another way to get to the data you need that bypasses it.

cache methods in php?

What are the available cache methods I could use in PHP?
Cache HTML output
Cache some variables
It would be great to implement more than one caching method, so I need them all, every one available out there (I currently do caching with files; any other ideas?)
Most PHP builds don't have a caching mechanism built in. There are extensions, though, that can take care of caching for you.
Have a look at APC or Memcache.
If you are using a framework, then most come with some form of caching mechanism that you can use e.g. Zend Framework's Zend_Cache.
If you are not using a framework, then APC or Memcache, as Pelle ten Cate mentioned, can be used. The correct approach does depend on your situation though: do you have your website or application running on more than one server, and does the information in the cache need to be shared between those servers? (If yes, then something like Memcache is your answer, or maybe a database or a distributed NoSQL solution if you are feeling brave.)
If your code is only running on the one server, you could try something simple like serializing your variables and writing them to disk; then on every request afterwards, check whether the file exists, and if it does, open it and unserialize the string into the variable you need.
This, though, is only worth it if it would take a long time to generate the variable normally (e.g. longer than it takes to open, read and unserialize the file on disk).
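Something like this minimal sketch, where the cache path, expiry and the slow computation are assumptions:

    <?php
    // Cache one expensive variable on disk between requests -- a sketch.
    $cacheFile = __DIR__ . '/cache/expensive.dat';
    $maxAge    = 600; // seconds

    if (is_file($cacheFile) && (time() - filemtime($cacheFile)) < $maxAge) {
        $value = unserialize(file_get_contents($cacheFile));
    } else {
        $value = compute_expensive_value(); // hypothetical slow generation step
        file_put_contents($cacheFile, serialize($value), LOCK_EX);
    }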
For HTML caching you are generally going to get the most mileage from using a proxy like Varnish or Squid to do it for you, but I realise that this may not be an option for you.
If it's not, then you could use the write-to-disk approach I mentioned above and save chunks of HTML to files. Look in the PHP manual for ob_start and its friends.
Since every PHP run starts from scratch on page request, there is nothing that would persist between calls, making caching moot.
Well, that's the basic view. Of course there are ways to implement caching, of a sort, and a few packages and extensions do so (like Zend extensions and APC). However, you should have a very close look at whether it actually improves performance. Other methods like memcache (for DB results), or switching from PHP to e.g. Java, will often yield better results.
You can store variables in the $_SESSION, but you shouldn't keep larger HTML there.
Please check what you are actually trying to do. "Bytecode caching" (that is, saving PHP parsing time) needs to be done by the PHP runtime executable. For caching database (SQL) request/reply pairs, there is memcache. Caching HTML output can be done, but is often not a good idea.
See also an earlier answer on a similar question.

Page cache in PHP that handles concurrency?

I've read previous answers here about caching in PHP, and the articles they link to. I've checked out the oft-recommended PEAR Cache_Lite, QuickCache, and WordPress Super Cache. (Sorry - apparently I'm allowed to hyperlink only once.)
Either none deal with concurrency issues, or none explicitly call out that they do in their documentation.
Can anyone point me in the direction of a PHP page cache that handles concurrency?
This is on a shared host, so memcache and opcode caches are unfortunately not an option. I don't use a templating engine and would like to avoid taking a dependency on one. WP Super Cache's approach is preferable - i.e. storing static files under wwwroot to let Apache serve them - but not a requirement.
Thanks!
P.S. Examples of things that should be handled automatically:
Apache / the PHP cache is in the middle of reading a cached file. The cached file becomes obsolete and deletion is attempted.
A cached file was deleted because it was obsolete. A request for that file comes in, and the file is in the process of being recreated. Another request for the file comes in during this.
It seems PEAR::Cache_Lite has some kind of security to deal with concurrency issues.
If you take a look at the manual of the constructor Cache_Lite::Cache_Lite, you have these options:
fileLocking: enable / disable file locking. Can avoid cache corruption under bad circumstances.
writeControl: enable / disable write control. Enabling write control will slightly slow cache writing but not cache reading. Write control can detect some corrupt cache files, but maybe it's not a perfect control.
readControl: enable / disable read control. If enabled, a control key is embedded in the cache file and this key is compared with the one calculated after the reading.
readControlType: type of read control (only if read control is enabled). Must be 'md5' (an md5 hash control, best but slowest), 'crc32' (a crc32 hash control, slightly less safe but faster) or 'strlen' (a length-only test, fastest).
Which one to use is still up to you, and will depend on what kind of performance you are ready to sacrifice and on the risk of concurrent access that probably exists in your application.
You might also want to take a look at Zend_Cache_Frontend_Output, to cache a page, using something like Zend_Cache_Backend_File as backend.
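From memory of the ZF1 manual, the usage looks roughly like the sketch below; the lifetime, cache_dir and page id are assumptions, so double-check against the Zend_Cache documentation:

    <?php
    // Zend_Cache output frontend with a file backend -- a sketch based on the ZF1 docs.
    require_once 'Zend/Cache.php';

    $cache = Zend_Cache::factory(
        'Output',
        'File',
        array('lifetime' => 300),
        array('cache_dir' => '/tmp/zend_cache/')
    );

    // start() echoes the cached page and returns true on a hit;
    // on a miss it returns false and begins output buffering.
    if (!$cache->start('homepage')) {
        echo render_homepage(); // hypothetical page-building code
        $cache->end();          // stores the buffered output
    }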
Zend_Cache seems to support some of the same kind of protection that Cache_Lite already gives you (so I won't copy-paste it a second time).
As a sidenote, if your website runs on a shared host, I suppose it doesn't have that many users? So the risks of concurrent access are probably not that high, are they?
Anyway, I probably would not search any further than what those two frameworks propose: it is probably already more than enough for the needs of your application :-)
(I've never seen any caching mechanism "more secure" than what those allow you to do... and I've never run into a catastrophic concurrency problem of that sort yet, in 3 years of PHP development.)
Anyway : have fun !
I would be tempted to modify one of the existing caches. Zend Framework's cache should be able to do the trick. If not, I would change it.
You could create a really primitive locking strategy. The database could be used to track all of the cached items, allow locking for update, allow people to wait for someone else's update to complete, ...
That would handle your ACID issues. You could set the lock for someone else's update to a very short period, or possibly have it just skip the cache altogether for that round trip depending on your server load/capacity and the cost of producing the cached content.
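One hedged way to sketch that idea with MySQL's named locks; the DSN, lock name and rebuild step are assumptions:

    <?php
    // A primitive regeneration lock using MySQL's GET_LOCK() -- a sketch of the idea above.
    $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass'); // hypothetical DSN

    // Wait up to 5 seconds for the lock; GET_LOCK() returns 1 when we own it.
    $got = $pdo->query("SELECT GET_LOCK('cache:homepage', 5)")->fetchColumn();

    if ($got == 1) {
        // We hold the lock: rebuild and write the cached page here, then release it.
        $pdo->query("SELECT RELEASE_LOCK('cache:homepage')");
    } else {
        // Someone else is rebuilding: serve stale content or skip the cache this time.
    }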
Jacob
Concurrent resource creation, a.k.a. cache slamming / thread race, can be a serious issue on busy websites. That's why I've created a cache library that synchronizes read/write processes/threads.
It has an elegant and clear structure: interfaces -> adapters -> classes, for easy extension. On the GitHub page I explain in detail what the problem with slamming is and how the library resolves it.
Check it here:
https://github.com/tztztztz/php-no-slam-cache
Under Linux, generally, a file will remain "open" for reading even after it is "deleted", until the process closes it. This is built into the system, and can sometimes cause huge discrepancies in disk usage (deleting a 3 GB file while it's still "open" means it stays allocated on disk until the process closes it). I'm unsure as to whether the same is true on other systems.
Assuming a journalling filesystem (most Linux filesystems, and NTFS), the file should not be seen as "created" until the process closes it. It should show up as a non-existent file!
Assuming a journalling filesystem (most Linux filesystems, and NTFS), the file should not be seen as "created" until the process closes it. It should show up as a non-existent file!
Nope, it is visible as soon as it is created, you have to lock it.
Rename is atomic though. So you could open(), write(), close(), rename(), but this will not prevent the same cache item being re-created twice at the same time.
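A sketch of that write-then-rename pattern (paths and the rendered HTML are placeholders); it keeps readers from ever seeing a half-written file, though as noted it does not stop two processes regenerating the same item at once:

    <?php
    // Atomic cache replacement: write to a temp file, then rename() over the old one.
    $final = __DIR__ . '/cache/page.html';
    $tmp   = $final . '.' . uniqid('', true) . '.tmp';

    $html = '<html>...</html>'; // hypothetical freshly rendered page

    file_put_contents($tmp, $html);
    rename($tmp, $final); // atomic on the same filesystem; readers see old or new, never half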
A cached file was deleted because it was obsolete.
A request for that file comes in, and the file is in the process of being recreated. Another request for the file comes in during this.
If it is not locked, a half-complete file will be served, or two processes will try to regenerate the same file at the same time, giving "interesting" results.
You could cache pages in the database: just create a simple "name, value" table and store the cached pages in it.
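A minimal sketch of that table and its use via PDO; the schema, DSN and key name are assumptions:

    <?php
    // Page cache backed by a simple name/value table -- a sketch.
    $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass'); // hypothetical DSN
    $pdo->exec("CREATE TABLE IF NOT EXISTS page_cache (
        name  VARCHAR(255) PRIMARY KEY,
        value MEDIUMTEXT
    )");

    // Store: REPLACE is a single statement, so concurrent writers can't interleave halves.
    $stmt = $pdo->prepare("REPLACE INTO page_cache (name, value) VALUES (?, ?)");
    $stmt->execute(array('home', '<html>...</html>')); // hypothetical rendered page

    // Fetch.
    $stmt = $pdo->prepare("SELECT value FROM page_cache WHERE name = ?");
    $stmt->execute(array('home'));
    $cached = $stmt->fetchColumn(); // false if nothing is cached yet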
