mod_php APC confused by symlinks, includes same files twice - why?

I have an Apache vhost set up for a folder (say website) which is a symlink to the folder holding the current release (say website_N, where N is the release number: website -> website_123). When a new release is deployed, another folder named website_N+1 is created, and once its content is ready the website symlink is recreated to point at that new folder (website -> website_124).
This setup seems to confuse APC's cache of includes. Sometimes (not always, which is annoying) after a new deployment and the following symlink switch, include and require statements in the application start to result in redeclaration errors:
Fatal error: Cannot redeclare class Foobar in /absolute/path/to/deployment/physical/folder/website_N/include_foobar.php
The website_N folder in that message is usually one of the old build folders, sometimes one that no longer exists. But sometimes the error shows the correct physical location of the most recent release folder. What stays the same is the "cannot redeclare" error for classes that are being loaded for the first time.
I'm pretty confident this is an APC issue, because every time it happens, adding apc_clear_cache() to the application bootstrap resolves the problem.
My guess is that this happens because different releases reside under the same symlinked folder and so share the same "unresolved" path. As a result, old include relationships may be loaded from precompiled images, and another attempt to include a dependency is then made via its "resolved" path, so it appears to be a new file, gets included twice, and triggers the redeclaration error. This theory may not make much sense, though; I don't understand APC's internals very well.
There are many ways around this (clearing the cache as part of the deployment process being the obvious one), but if anyone can explain the mechanism behind the error, i.e. what in this setup breaks APC's behaviour and at which point (and why paths of physically removed folders sometimes appear in those error messages), that would be great.
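For reference, the obvious workaround mentioned above can be a tiny script requested over HTTP right after the symlink switch (apc_clear_cache() has to run inside the web server's PHP process, not the CLI). A minimal sketch, with a hypothetical clear_apc.php:

<?php
// clear_apc.php - hypothetical post-deploy hook; request it over HTTP after
// the symlink switch so it runs inside mod_php, where the cache lives.
if (function_exists('apc_clear_cache')) {
    apc_clear_cache();        // drop the opcode (system) cache
    apc_clear_cache('user');  // drop the user cache as well, if it is used
    echo "APC cache cleared\n";
} else {
    echo "APC is not loaded\n";
}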

I use TYPO3 and had the same problem. TYPO3 caches quite a lot. When changing the symlink to another source, it is necessary to go into the TYPO3 installation folder with a shell and manually delete the cached files.
After that, all is good. It took a day to figure out.

Related

PHP lists files that don't exist

I'm running Grav CMS on a Linode Ubuntu 16.04 server where PHP7 (php-fpm + nginx) returns cached results when listing directory contents. I first encountered the problem with FilesystemIterator, but it isn't limited to that class - the same problem appears when I use scandir.
Basically, what happens is that any time I sync new content to the server, whether I use rsync or FTP, PHP returns the old contents of a particular folder. I've tried calling clearstatcache, but it didn't help – even when I called it from the appropriate PHP file, just before scanning the directory.
touch'ing the files to update their mtime doesn't help either. Restarting the php-fpm service does work, however.
Is it possible that PHP caches the contents of the directory in some other way? Could it be the file system that is fooling PHP somehow?
There is a way, but it's not something I would use in production.
Run sync first: $ sync
This command writes out to disk any cached data that hasn't been written yet.
Free pagecache: echo 1 > /proc/sys/vm/drop_caches
Free dentries and inodes: echo 2 > /proc/sys/vm/drop_caches
Free pagecache, dentries and inodes: echo 3 > /proc/sys/vm/drop_caches
That being said, I would also take a look at realpath_cache.
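If you want to rule out PHP's own stat cache and realpath cache while debugging this, both can be cleared from within the script. A minimal sketch with an assumed directory path (note that directory listings themselves are not served from these caches, so this only rules them out):

<?php
// Clear PHP's stat cache and, by passing true, the realpath cache as well,
// then list the directory. The path below is just an assumed example.
$dir = '/var/www/grav/user/pages';
clearstatcache(true);
foreach (new FilesystemIterator($dir) as $entry) {
    echo $entry->getFilename(), "\n";
}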
It turns out the problem I was having wasn't due to the OS or PHP, but to the Grav CMS itself – specifically, how it caches page objects. It doesn't invalidate cached pages if all that has changed are the associated media files. Turning off the global cache setting for Grav worked around the issue, but I've also opened an issue on the Grav repo to see whether this is intended behaviour or not.

OpenX Admin not loading

I have two OpenX 2.8 servers running here. The issue is that trying to open the admin page redirects me to the full path www.myserver.com/www/admin/index.php with a blank screen. I can still access www.myserver.com/www/images/, delivery, etc. I have tried a lot of things without any success.
This one doesn't work for me: OpenX Admin not accessible
I had something similar happen to me last week. Check your configuration file in the /var folder of your OpenX installation, probably called something like www.yoursite.com.conf.php. Mine was corrupted. I suspect outside malicious activity, especially since there are known vulnerabilities in OpenX. I'm currently looking at upgrading to Revive Adserver, which is the successor and is supposed to offer a complete upgrade path.

require() errors while target file is being written

I use vi to edit a live file on the server. This is a core file required by virtually every page on a moderately busy website. Everything runs fine while I am editing, but when I save my changes, about half the time the logs show that a user got a "Failed opening required 'common.php'" error.
I can only assume the page request came in while the file was being written, that vi holds an exclusive lock on the file during writing, and that PHP just gives up immediately instead of waiting for the lock to be released. I can't find any discussion of this issue, though. Anybody know? Is there a way to fix this situation? I'm guessing that doing it the "proper" way (editing locally, pushing to the repository, then updating the changes on the production site) will have the same problem, since svn seems to take longer to run than vi takes to write.
The proper fix would involve separate development, testing, and production environments, along with a sane deployment process.
But a simple trick to atomically update a file is to rename the new version over the target. Workflow:
cd temporary-working-dir
cp your/web/stuff/common.php common.php
vi common.php # make your changes
mv common.php your/web/stuff/common.php
As long as the source and target files are on the same device / partition, mv should be instant, and every request should see either the old or the new version of the file, with no weirdness in between.
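The same trick works from PHP itself, since rename() maps to an atomic rename when source and target are on the same filesystem. A rough sketch with assumed paths:

<?php
// Stage the new version as a temporary file in the target's own directory
// (same partition), then atomically rename it over the live file.
$target = '/var/www/stuff/common.php';     // live file (assumed path)
$staged = '/tmp/common.php.new';           // new version prepared elsewhere (assumed)
$tmp    = $target . '.tmp.' . getmypid();  // temp name on the same partition as $target

copy($staged, $tmp);    // this may take a moment, but nothing includes $tmp
rename($tmp, $target);  // atomic swap: requests see either the old or the new file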

Joomla 3.1 Upgrade, issue with extension manager

Now before anyone says "No one can help you with that" please understand that the upgrade worked, this is just a weird bug.
Ok so I upgraded Joomla from 2.5.14 successfully and everything works, nothing out of the ordinary. Every component/module/plugin seems to be working as it should.
HOWEVER, the extension manager has a strange behavior: it constantly says:
-1 Copy file failed
I would think file permission problems, but everything is writable. The weirdest thing is that it actually installed perfectly; I just get this error rather than it saying it worked. (Meaning that when I install the component, nothing actually goes wrong.)
This also only happens with components.
Just strange; it might be worth diving deeper into in case others run into this after an upgrade. I don't know enough about how the extension manager works to identify the problem either, and the lack of a real (or accurate) error message makes it even harder. (The files did copy, so that error message seems out of place.)
I will try to look a little deeper into it and see if I can't isolate it. For those who want to try to recreate it, you can do so by upgrading from 2.5.14 to 3.1.5 through the update manager. The main components I have are NoNumber Extension Manager, Akeeba and Admin Tools, which I feel might have something to do with it.
That most likely is a permissions issue. That error is a RuntimeException thrown by the JFolder class method copy() while trying to copy a file into a folder - src_folder/file to dest_folder/file.
Check your FTP settings in the global configuration and then the directory permissions.
I had this error with my module and I ran several tests. I removed one level from my image directories in my module and the upgrade finally worked...

How to avoid crashing your web app during replacing a file?

Let's say you have a big web app with a lot of traffic, but you don't want your app to crash and you don't want people to see the PHP or MySQL errors that happen while files are being replaced over FTP. How do you avoid that? How do you keep executing the old version of a file until the replacement is done?
Thanks
You can follow at least one of these two approaches:
Use an accelerator (like APC) with mtime checking turned off, so that old versions keep being served from memory until you clear the cache manually.
Use a virtual host symlinked to the directory with your project. Let's say you keep your project at /home/project/www, and /home/project/public_html is your real webroot and is symlinked to www. Create /home/project/www2, check out the files there, set everything up and do whatever you want; after that, just change the symlink (see the sketch below).
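A sketch of that symlink switch in PHP, with assumed paths. An existing symlink cannot be re-pointed in place, so the usual trick is to create a new link under a temporary name and rename() it over the old one, which is atomic on the same filesystem:

<?php
// Hypothetical deploy step: atomically re-point the webroot symlink.
$link    = '/home/project/public_html';   // what the vhost serves (assumed)
$newRel  = '/home/project/www2';          // freshly deployed release (assumed)
$tmpLink = $link . '.switching';

@unlink($tmpLink);            // remove any leftover from a previous failed switch
symlink($newRel, $tmpLink);   // new symlink under a temporary name
rename($tmpLink, $link);      // rename() replaces the old symlink in one step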
I use git to upload my changes to a staging website on the same server; after testing, I then push to the production website. None of the files are changed until they have all been received. On the plus side, it only sends the changes, compressed, so I don't even have to send entire files.
The staging area isn't required, but I work with a lot of servers and sometimes hit issues with the specific configuration on a given server (mostly just finding that an extension isn't installed).
I'm sure you can do the same with another version control system. You need to be careful though. The tutorial I linked specifically stores the git information OUTSIDE the document root. Otherwise someone can just clone all the source code for your website.
If you like SVN, the .svn being in every directory can be a little annoying. Make sure that people can't download what they shouldn't be able to.
Deploy your app into a temporary directory. Then, after you're done, rename the original app directory to app.old and the directory you deployed into to app.
Note that this should work fine in Unix environments. It will also only work if all of the above directories are on the same file system. In rare cases users might see a 404 error if they happen to access the app after you renamed the original app directory to app.old but before you renamed the temporary directory to the original app directory.
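A rough illustration of that rename dance and the small window it leaves, with assumed directory names:

<?php
// Each rename() is atomic on its own, but between the two calls the live
// path briefly does not exist, which is the window where a 404 can happen.
$app = '/var/www/app';       // live application directory (assumed)
$new = '/var/www/app.new';   // fully deployed new version (assumed)
$old = '/var/www/app.old';

rename($app, $old);   // the live directory disappears here...
rename($new, $app);   // ...and reappears here, now containing the new code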
