I'm working on a PHP project in a team. The team members have their own working directory on a CentOS/apache server, like this.
/home/user1/public_html/project/xxxxx.php
/home/user2/public_html/project/xxxxx.php
and so on.
We write and upload php files there and test our work by accessing the server from a browser.
The problem is that APC caches those php files without distinguishing their directories. So, after accessing user1/project/xxxxx.php, it is cached, then accessing user2/project/xxxxx.php produces a result from user1's php.
I think this is because APC shares cache between different processes and/or paths. Is there any way to turn this feature off? For some reason we cannot simply turn off APC, we need it.
Thank you very much in advance.
Try clearing the APC cache. You can use PHP's built-in function apc_clear_cache() to clear the system (opcode) cache.
There's also apc_clear_cache('user'). Calling that will clear the user cache.
Hope that helps!
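A minimal sketch of a clear-cache script you could drop on the server (the filename is hypothetical; it assumes the APC extension is loaded, and the guard avoids a fatal error if it isn't — restrict access to it, since anyone who can reach it can flush your cache):

```php
<?php
// clear_apc.php - hypothetical helper; protect it so only your team can call it.
if (!function_exists('apc_clear_cache')) {
    die('APC extension is not loaded');
}

apc_clear_cache();        // clears the system/opcode cache
apc_clear_cache('user');  // clears the user (data) cache

echo 'APC caches cleared';
```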
Related
I have a music site developed on CodeIgniter with youtube API without database. Recently I have noticed that too many files are getting generated in system/cache folder.
How can I stop these cache files from being generated? Note that I am not using any cache method.
First of all, system/cache is NOT the default CodeIgniter cache directory. I would not store cache there, as it is the framework's main folder; the default is application/cache.
By default, CI does NOT cache anything. So your application was built with caching.
You said you don't use a database, so I assume it's not DB cache.
Check in your app for something like "$this->load->driver('cache'".
Caching can be loaded WITHOUT additional parameters like
$this->load->driver('cache'); OR with parameters like
$this->load->driver('cache', array('adapter' => 'xxx'));
https://www.codeigniter.com/userguide3/libraries/caching.html
Now, search your app for $this->cache->save OR $this->cache->file->save.
If you find either, it means you are using CI caching.
The problem is you cannot just remove the cache loading: the app initiates a cache object, and your app will fail unless you rewrite every place where caching is used.
Now, you have a few choices:
1. Just clean the cache dir periodically with a cron script.
2. Change the cache folder permissions to non-writable, which will generate warnings in your logs, so logging should be disabled. This is not the right way IMHO, as it can cause fatal errors/blank pages, but it is one possible solution. If file caching is used this should not cause issues, while in other cases it could.
3. Extend the caching library and simply create an empty cache save function. Your cache files will then never be written.
4. Cache to memcached, if you have it on your server. If your caching is written like $this->cache->file->{operation}, you will need to update all those calls to $this->cache->memcached->{operation}. If it is written like $this->cache->{operation}, you can just adjust the configuration, something like
$this->load->driver('cache', array('adapter' => 'memcached'));
and set the memcached server info in the config file (config/memcached.php).
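For option 4, a minimal config/memcached.php might look like this (a sketch in the CodeIgniter 3 format; the hostname and port are assumptions for a local memcached instance, so adjust them to your server):

```php
<?php
// application/config/memcached.php (CodeIgniter 3 format)
$config = array(
    'default' => array(
        'hostname' => '127.0.0.1', // assumed local memcached instance
        'port'     => 11211,       // default memcached port
        'weight'   => 1,
    ),
);
```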
You said you are not using any caching method, so you should not find any of the code above.
The last thing I can think of is
$this->output->cache(xxx);
where xxx is the cache time in minutes.
It forces the entire generated page to be cached.
If you find such lines, try commenting them out and see what happens.
https://www.codeigniter.com/user_guide/general/caching.html
there is a good note: If you change configuration options that might affect your output, you have to manually delete your cache files.
If absolutely none of the examples above is found, you might be using some custom-made caching.
Good luck!
Put this in your common controller
$this->output->delete_cache();
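For instance (a sketch; the MY_Controller base class is hypothetical — delete_cache() with no argument removes the cached copy of the current page, or you can pass it a specific URI):

```php
<?php
// Hypothetical base controller (application/core/MY_Controller.php)
class MY_Controller extends CI_Controller {

    public function __construct()
    {
        parent::__construct();

        // Remove the cached copy of the current page, if one exists,
        // so every request is regenerated fresh.
        $this->output->delete_cache();

        // Or target a specific cached URI:
        // $this->output->delete_cache('blog/posts');
    }
}
```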
I've been struggling to put SilverStripe behind a load balancer and I've been fixing multiple problems with rsyncing the instances and using shared storage and have almost got it stable, however I've found another issue which breaks the CMS.
Specifically when you try to add a link in the CMS in the TinyMCE editor, when the pop-up screen shows to select the page/file the JavaScript throws an exception that tinyMCE.activeEditor returns null.
I've rsynced the cache directory silverstripe-cache between the two servers, and still there is a discrepancy of only a few seconds in the m=timestamp, but I'm guessing this is enough to force tiny_mce_gzip.php to load again.
I have a shared redis cache for session storage, shared db, have rsynced the cache directory and use CodeDeploy to deploy the app so it should all be in sync. What other storage areas could cause the different m timestamp? Has anyone had success with SilverStripe CMS being used behind a load balancer without sticky sessions?
You can disable the gzip version of the HTMLEditor. I've seen this happen before. Try adding the following to your config/config.yml:
HTMLEditorField:
  use_gzip: false
After that, do a full flush and try again?
Another option is that the JavaScript is not syncing correctly. For that, you'll need to change the way the ?m=12345 suffix is built. By default, it's based on the file's timestamp.
I'll see if I can dig up the md5-based one, which might otherwise solve your problem.
*edit
Here ya go, try creating this somewhere in your project, and add the following to _config.php
Requirements::set_backend(new MysiteRequirementsBackend());
https://gist.github.com/Firesphere/794dc0b5a8508cd4c192a1fc88271bbf
Actual work is by one of my colleagues, when we ran into the same issue.
I am working on changing a Drupal 7 site and have run into strange behavior where an old version of a file I've changed keeps re-appearing. I've flushed caches via the admin interface as well as truncating the cache_ tables.
On my staging server (which I have access to), things work fine. On our production server (which I do not have SSH access to and cannot easily get access to), they do not and I have limited ability to debug, so I have to guess. I suspect there is some Drupal or Apache setting that is causing these old files to be shown because the filesystem has identical contents. The behavior is almost as if Drupal will look for any file named the same (even if it is in the wrong directory) and show that.
In my case, I have all my files under /var/www/html (standard LAMP setup). At one point, I tar cfz the entire thing and kept that at /var/www/html/archive.tgz (not removing it by mistake). So now I'm wondering if somehow Drupal is reading the contents of that archive and using the old file. Sounds crazy, but has anyone run into something like that?
The other possibility is that my cache cleaning is still limited in some way. Outside of truncating cache_ tables in the database, is there any way to forcibly remove all cached entries? Any insight into this mystery would be appreciated.
Most likely, your production server runs an additional caching proxy such as Varnish. You need to clear the cache there as well.
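On the Drupal side, if truncating the cache_ tables isn't enough, you can force a full flush from code. drupal_flush_all_caches() is the Drupal 7 API call that clears every cache bin and rebuilds the registry, menu, and theme data; a sketch of a one-off script (the filename is hypothetical — protect or delete it after use):

```php
<?php
// flush.php - hypothetical one-off script placed in the Drupal root.
// Protect or delete it after use!
define('DRUPAL_ROOT', getcwd());
require_once DRUPAL_ROOT . '/includes/bootstrap.inc';
drupal_bootstrap(DRUPAL_BOOTSTRAP_FULL);

// Clears every cache bin and rebuilds menu, theme and registry data.
drupal_flush_all_caches();

echo 'All Drupal caches flushed';
```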
I am running the site at www.euroworker.no, it's a linux server and the site has a backend editor. It's a smarty/php site, and when I try to update a few of the .tpl's (two or three) they don't update. I have tried uploading through FTP and that doesn't work either.
It runs on the livecart system.
any ideas?
Thanks!
Most likely, Smarty is fetching the template from the cache and not rebuilding it. If it's a one-time thing, just empty the compile directory (templates_c). If it happens more often, you may have to adjust Smarty's caching behaviour in the configuration (among others, $smarty->caching and $smarty->cache_lifetime).
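A sketch of doing the same from code (this assumes a Smarty 2-style API, which LiveCart-era apps typically use; $smarty stands for whatever Smarty instance the application exposes):

```php
<?php
// Hypothetical cleanup snippet - run once, then remove.
// $smarty is the application's Smarty instance.
$smarty->clear_compiled_tpl(); // wipe compiled templates in templates_c
$smarty->clear_all_cache();    // wipe cached output, if caching is enabled
```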
Are you saying that when you attempt to upload a new version it isn't updating the file? Or it's updating the file but the browser output does not conform to the new standards?
If it's the latter problem, delete all the files in your templates_c directory. If it's the former problem, er, you might want to check out ServerFault or SuperUser.
We've recently enabled APC on our servers, and occasionally when we publish new code or changes we discover that the source files that were changed start throwing errors that aren't reflected in the code, usually parse errors describing a token that doesn't exist. We have verified this by running php -l on the files the error logs say are affected. Usually a republish fixes the problem. We're using PHP 5.2.0 and APC 3.01.9. My question is, has anyone else experienced this problem, or does anyone recognize what our problem is? If so, how did you fix it or how could we fix it?
Edit: I should probably add in some details about our publishing process. The content is being pushed to the production servers via rsync from a staging server. We enabled apc.stat_ctime because it said this helps things run smoother with rsync. apc.write_lock is on by default and we haven't disabled it. Ditto for apc.file_update_protection.
Sounds like a part-published file is being read and cached as broken. apc.file_update_protection is designed to help stop this.
From the php.ini documentation for apc.file_update_protection (integer):
apc.file_update_protection puts a delay on caching brand-new files. The default is 2 seconds, which means that if the modification timestamp (mtime) on a file shows that it is less than 2 seconds old when it is accessed, it will not be cached. The unfortunate person who accessed this half-written file will still see weirdness, but at least it won't persist.
Following the question being edited: one reason I don't see these kinds of problems is that I push a whole new copy of the site (with SVN export). Only after that is fully completed does it become visible to Apache/mod_php (see my answer to How to get started deploying PHP applications from a subversion repository?).
The other thing that may happen, of course, is that if you are updating in place, you may be updating files that depend on others that have not yet been uploaded. Rsync can only guarantee atomic updates for individual files, not for the entire collection being changed/uploaded. Another reason, I think, to upload the site en masse and only then put it into use.
It sounds like APC isn't performing the file stat or getting the correct file stat info. You could check that the apc.stat configuration setting is set correctly. Another thing you could do is force the cache to clear with apc_clear_cache() when you publish new code.
Never saw that before, even though I'm a heavy user of APC.
Maybe trigger a script that empties the APC opcode cache every time you push new code to the server?
When you get a file with a parse error, back it up, then republish. Take that same file that now works and diff it against the backed-up file with the parse error.
Note that ctime is the inode change time, not the creation time. You will want to manually flush your entire cache every time you do updates.
You can easily do this, by putting the apc.php script somewhere on your server. This script gives you cache statistics, and will allow you to drop the cache altogether.
The script comes with APC.
Hope this helps,
Evert
This is probably happening because there's a mismatch between your code, and the cached versions of the code.
For example, APC has a cached version of User.php, but you made changes to User.php or to the data that User uses. The cached version is still running even after your deploy, because it hasn't expired yet.
If you clear your APC cache entries when you deploy, this issue should disappear.
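For example, a minimal deploy hook (a sketch; the endpoint name is hypothetical, and it assumes the APC extension is loaded). You would call it over HTTP at the end of your deploy rather than from the CLI, because the CLI runs with a separate APC cache from Apache/mod_php:

```php
<?php
// apc_reset.php - hypothetical endpoint hit at the end of each deploy.
// Must run under Apache/mod_php: the CLI has a separate APC cache.
// Restrict access (IP allowlist, secret token, etc.) before using.
if (!function_exists('apc_clear_cache')) {
    die('APC is not loaded');
}

apc_clear_cache();        // drop cached opcodes
apc_clear_cache('user');  // drop cached user data, if you use it

echo 'APC cache cleared';
```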