I have multiple virtual hosts running PHP 5.2.17. Periodically a script will seemingly randomly stop working properly. The script will silently fail and the browser will attempt to download the file. The only solution I've been able to find is ridiculous.
I have to go through all the files executed for the request in question. In each file I make an arbitrary change such as a blank line or extra space and save the file. I refresh after every save and eventually I find the file causing the issue and all is well again until the next occurrence.
My only hunch is that it has something to do with the function_exists function. It seems to be a common denominator in the affected files, and that function is otherwise fairly uncommon in our code. I can't find anything to back up my guess, and I can't find any direct evidence of it myself.
You have an issue with some sort of PHP optimizer/cacher/accelerator (Zend Optimizer, eAccelerator, APC, XCache, IonCube, etc.).
That is the only time that changing the file by that small amount would make a difference.
Normally every time you access a PHP file it is converted into bytecode and then processed by PHP.
An accelerator (among other things) saves the bytecode the first time you access the file to speed things up, and would only refresh the bytecode cache if it thinks the file has changed.
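A quick way to see what is loaded (a minimal sketch; extension names vary between builds, and some loaders register as Zend extensions, so cross-check with phpinfo()):
<?php
// Report which common opcode caches/accelerators are loaded in this PHP build.
$candidates = array('apc', 'eAccelerator', 'XCache', 'Zend Optimizer', 'ionCube Loader');
foreach ($candidates as $ext) {
    echo $ext, ': ', extension_loaded($ext) ? 'loaded' : 'not loaded', "\n";
}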
Related
I recently ditched XAMPP on my Windows 10 machine and re-installed Apache (2.4), PHP 7 and MySQL manually (I followed the instructions given here in order to be able to switch between PHP versions easily).
Everything works fine, except that now when I make a change in a PHP file and hit Refresh in the browser, the change often doesn't appear immediately. No matter how hard I hit F5 (or Ctrl+F5), I still get the unmodified source code, and I have to wait a couple of minutes before those changes are finally visible in the browser.
Needless to say, it's quite annoying when developing. And it didn't happen when I was using XAMPP.
So there seems to be some kind of cache somewhere, but I can't find where it is. I don't know if it's Apache or PHP, although I suspect it might be PHP, because the CSS or JS files are not affected by this problem (as far as I can tell).
Any idea what's causing this behavior and how to disable it?
EDIT: I did some more testing.
I created the simplest PHP file possible. Just:
<?php
echo 'test1';
I can confirm that the problem occurs even in this simple case (changing "test1" to "test2": the browser still shows "test1" for a while).
Opening the same page in another browser still shows the outdated code (test1 instead of test2).
Clearing the browser cache doesn't help.
So the problem doesn't seem to happen on the client side.
However, if I do the same test with an HTML file instead of a PHP file, then the problem doesn't occur. Any change done to that file is visible immediately in the browser (of course I'm still accessing this file via Apache, so http://localhost/some-path/test.html)
So the problem seems to affect only PHP files.
It seems the problem was caused by the OPcache module, which I had to enable in order to work on another (Drupal 8) project.
In php.ini, the following line:
; How often (in seconds) to check file timestamps for changes to the shared
; memory storage allocation. ("1" means validate once per second, but only
; once per request. "0" means always validate)
opcache.revalidate_freq=60
Changing 60 to 1 (and restarting Apache) basically solved the problem.
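For development, a common alternative (a sketch using the stock OPcache directives) is to validate timestamps on every request, or to disable the cache entirely:
; php.ini -- development settings: re-check file timestamps on every request
opcache.validate_timestamps=1
opcache.revalidate_freq=0
; or, more drastically, turn OPcache off altogether:
; opcache.enable=0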
Even though a few duplicate questions seem to exist, I think this one is unique. I'm not asking whether there are any limits; it's only about performance drawbacks in the context of Apache, or the Unix file system in general.
Let's say I request a file from an Apache server:
http://example.com/media/example.jpg
Does it matter how many files there are in the same directory, "media"?
The reason I'm asking is that my PHP application generates images on the fly.
Once an image is created, the script places it at the exact location that mod_rewrite would otherwise route to the PHP script. If the file exists, Apache skips the whole PHP execution and serves the static image directly instead; a kind of gateway cache, if you want to call it that (see the sketch after the list below).
Apache has basically two things to do:
Check if the file exists
Serve the file or forward the request to PHP
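A minimal sketch of the PHP side of this pattern (the file name, the query parameter, and the rewrite setup are all hypothetical; it assumes mod_rewrite forwards a request here only when the image does not exist yet, and that the GD extension is available):
<?php
// generate.php -- hypothetical generator behind the rewrite rule.
$name = basename($_GET['file']);        // crude sanitisation, enough for the sketch
$path = __DIR__ . '/media/' . $name;

$img = imagecreatetruecolor(200, 200);  // stand-in for the real image generation
imagejpeg($img, $path);                 // write to the rewrite target...
imagedestroy($img);

header('Content-Type: image/jpeg');
readfile($path);                        // ...and serve this first request directly
Every later request for the same URL finds the file on disk, so Apache serves it statically and PHP never runs.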
So far, I have about 25,000 files totalling about 8 GB in this single directory. I expect it to grow at least tenfold in the next few years.
While I don't face any issues managing these files, I have the slight feeling that requesting them via HTTP keeps getting slower. So I wonder whether this is really happening or whether it's just my subjective impression.
Most file systems based on the Berkeley FFS will degrade in performance with large numbers of files in one directory due to multiple levels of indirection.
I don't know about other file systems like HFS or NTFS, but my suspicion is that they may well suffer from the same issue.
I once had to deal with a similar issue and ended up using a map for the files.
I think it was something like md5("myfilename-00001") yielding (for example) e5948ba174d28e80886a48336dcdf4a4, which I then stored in a file named e5/94/8ba174d28e80886a48336dcdf4a4. A map file then mapped 'myfilename-00001' to 'e5/94/8ba174d28e80886a48336dcdf4a4'. This not-quite-elegant solution worked for my purposes, and it only took a little bit of code.
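A minimal sketch of that mapping in PHP (the function name is mine; the split widths follow the example above):
<?php
// Turn a logical file name into a sharded on-disk path.
function shardedPath($name) {
    $hash = md5($name);                 // e.g. e5948ba174d28e80886a48336dcdf4a4
    return substr($hash, 0, 2) . '/'    // e5/
         . substr($hash, 2, 2) . '/'    // 94/
         . substr($hash, 4);            // 8ba174d28e80886a48336dcdf4a4
}
echo shardedPath('myfilename-00001'), "\n";
This caps each of the first two directory levels at 256 entries and spreads the files over 65,536 leaf directories, which sidesteps the large-directory slowdown.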
I'm currently testing a web application and kept noticing my changes weren't being updated after each save of the PHP file.
As far as I'm aware, I'm using no current PHP caching solution. I'm running PHP 5.5.3 with a fresh copy of CodeIgniter.
Here's what happened:
I noticed PHP didn't seem to be loading the latest code changes as I made them
After noticing the problem, I output <?php echo time();?> into all my view files
Now, the time updates on every page load - indicating PHP is processing each page's code
But still sometimes the page code updates don't take place until 3-5 page loads later
I've tried disabling and emptying my browser cache multiple times
Am I right in thinking this is a PHP issue rather than a browser one, given the fact that the time() output is being updated?
PHP doesn't cache anything.
If time() is being updated, you know that there isn't a traditional cache issue.
It's possible you are dealing with filesystem caching if you are using something like NFS. That would explain the 3-5 second delay.
Well, that was frustrating. It turns out that MAMP now installs and enables Zend OPcache by default; see Stop caching for PHP 5.5.3 in MAMP
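A quick way to confirm that OPcache is the culprit (a sketch; it assumes the standard functions exposed by the Zend OPcache extension):
<?php
// Print the OPcache status, omitting the per-script details, if it is loaded.
var_dump(function_exists('opcache_get_status')
    ? opcache_get_status(false)
    : 'OPcache not loaded');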
Let's say I have a website running on PHP using the Kernel pattern, and 1,000 requests per second are hitting the Kernel.php file. I want to upload a new version of that file without turning on maintenance mode. Is it safe to do? Can I just upload the new file so that, at some point, requests start being handled by the new one?
Kernel.php is error free for sure
the file is included by require_once() in index.php
forget about maintenance mode in this case, please
I was told to add some information about why I even thought about that approach.
We are trying to develop a system that makes it possible to update any part of a webpage driven by our engine. The Kernel is just an example: if this file can be modified without maintenance mode, then, in your opinion, any less important file could be as well.
Sometimes the update is so simple that turning on maintenance mode is like halting a military invasion of a country because one of the privates sneezed.
Since we are talking about blowing things up and inter-process communication: none of us would risk uploading the core files to a running website without freezing requests for a few seconds, but what about template files? It's of course a rhetorical question, but now I think you fully understand the situation.
First let me say that this is probably not a very good idea.
Are you running on a Linux server? If so, renaming files is an atomic operation, and the best way to accomplish this is to upload the new file under a different name, then rename it over the old file.
If not, renaming it over the old file is probably still a better approach than just uploading it in place, since you will probably get some requests while the file is being written, which will cause errors.
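A minimal sketch of that swap (the paths are hypothetical; note that rename() is only atomic when source and target are on the same filesystem):
<?php
// The new version was uploaded next to the old file first; now swap it in.
// On POSIX filesystems this rename is atomic: every request sees either the
// complete old file or the complete new one, never a partial write.
rename('/var/www/app/Kernel.php.new', '/var/www/app/Kernel.php');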
Turn on PHP opcode caching for your web server, and set the interval to 5 minutes or more.
You can now copy files over the top of running PHP code, and the next time the interval expires the server will check for modifications and recompile the opcode. You'll have to wait a few minutes before you notice the change, because the server will continue to use the cached code until it expires.
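With Zend OPcache, for example, that interval would look like this (a sketch using the standard directives):
; php.ini -- re-check file mtimes at most once every 5 minutes
opcache.validate_timestamps=1
opcache.revalidate_freq=300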
I cannot say what would happen if you break dependencies among PHP files, or if the server updates one file but still has different cached copies of the others.
For the most reliable method, you need a feature in your web server that allows you to hot-swap the directory for a host. You then install a complete new copy of all your PHP code into a new directory and hot-swap the host to this new location. No requests should be interrupted.
We've recently enabled APC on our servers, and occasionally when we publish new code or changes we discover that the source files that were changed start throwing errors that aren't reflected in the code, usually parse errors describing a token that doesn't exist. We have verified this by running php -l on the files the error logs say are affected. Usually a republish fixes the problem. We're using PHP 5.2.0 and APC 3.01.9. My question is, has anyone else experienced this problem, or does anyone recognize what our problem is? If so, how did you fix it or how could we fix it?
Edit: I should probably add in some details about our publishing process. The content is being pushed to the production servers via rsync from a staging server. We enabled apc.stat_ctime because it said this helps things run smoother with rsync. apc.write_lock is on by default and we haven't disabled it. Ditto for apc.file_update_protection.
Sounds like a part-published file is being read and cached as broken. apc.file_update_protection is designed to help stop this.
From the APC documentation, in php.ini: apc.file_update_protection (integer) puts a delay on caching brand-new files. The default is 2 seconds, which means that if the modification timestamp (mtime) on a file shows that it is less than 2 seconds old when it is accessed, it will not be cached. The unfortunate person who accessed this half-written file will still see weirdness, but at least it won't persist.
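If individual rsync transfers can take longer than that default window, raising the delay may help (a sketch; the right value depends on how long your uploads actually take):
; php.ini -- give slow uploads more headroom before APC caches the file
apc.file_update_protection = 5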
Following the question being edited: one reason I don't see these kinds of problems is that I push a whole new copy of the site (with SVN export). Only after that has fully completed does it become visible to Apache/mod_php (see my answer to How to get started deploying PHP applications from a subversion repository?).
The other thing that may happen, of course, is that if you are updating in place, you may be updating files that depend on others that have not yet been uploaded. Rsync can only guarantee atomic updates for individual files, not for the entire collection being changed/uploaded. Another reason, I think, to upload the site en masse and only then put it into use.
It sounds like APC isn't performing the stat or isn't getting the correct file stat info. You could check to make sure the apc.stat configuration setting is set correctly. Another thing you could do is force the cache to clear with apc_clear_cache() when you publish new code.
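For example, a tiny deploy hook along these lines (the file name is hypothetical; apc_clear_cache() with no argument clears the opcode/system cache, and 'user' clears the user-data cache):
<?php
// clear-apc.php -- request this once, right after publishing new code.
apc_clear_cache();          // flush the opcode (system) cache
apc_clear_cache('user');    // flush the user-data cache as well
echo "APC caches cleared\n";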
I've never seen that before, even though I'm a heavy user of APC.
Maybe trigger a script that empties the APC opcode cache every time you push new code to the server?
When you get a file with a parse error, back it up, then republish. Then diff the now-working file against the backed-up copy that had the parse error.
Despite the name, ctime is the inode change time, not the creation time. You will want to manually flush your entire cache every time you do updates.
You can easily do this by putting the apc.php script somewhere on your server. That script gives you cache statistics and allows you to drop the cache altogether.
The script comes with APC.
Hope this helps,
Evert
This is probably happening because there's a mismatch between your code and the cached versions of the code.
For example, APC has a cached version of User.php, but you made changes to User.php or to the data that User uses. The cached version is still running even after your deploy, because it hasn't expired yet.
If you clear your APC cache entries when you deploy, this issue should disappear.