I have a PHP application that loads one ini file and at least 10 PHP files for every request.
Since the same files are loaded for every single request, I thought about mounting them on a RAM disk, but I have been told that the Linux file system (ext3) will cache them in such a way that a RAM disk would not improve performance.
Can anyone verify this and possibly explain what is actually happening?
Many thanks.
The virtual file system layer in Linux (and not only Linux) caches data for virtually every filesystem. So yes, that's in place for ext3 too.
But you might be interested in something like APC, which keeps the bytecode (intermediate code) for PHP's Zend Engine in memory.
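If you want to confirm that an opcode cache is actually doing something, a minimal sketch like the following can help. It assumes the APC extension; the exact keys in the info array vary between APC versions, hence the isset() check.

```php
<?php
// Sketch: confirm an opcode cache (APC here) is loaded and being hit.
// Assumes the APC extension is installed; key names vary by APC version.
if (extension_loaded('apc') && ini_get('apc.enabled')) {
    $info = apc_cache_info('', true); // summary of the opcode (system) cache
    echo "APC opcode cache is active\n";
    if (isset($info['num_hits'], $info['num_misses'])) {
        printf("hits: %d, misses: %d\n", $info['num_hits'], $info['num_misses']);
    }
} else {
    echo "No opcode cache - every request re-reads and recompiles the scripts.\n";
}
```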
In a common LAMP setup, you can keep your PHP configuration in an .ini file, which PHP reads and applies when the web server starts. But how does that compare, performance-wise, to the runtime configuration that developers usually put in the application's bootstrap file?
Since PHP uses a shared-nothing architecture, does each request start a new (sub)process, which then needs to read the *.ini files again? Or is the configuration already shared by the main PHP process? If it is re-read, then changing a lot of configuration at runtime would add much more overhead to each request than leaving it in the ini files, right?
Well, firstly, it is not PHP that forks a new process; that is entirely up to the web server that PHP is a part of. So yes, if you are using LAMP, and therefore Apache, the entire PHP module has to be loaded into memory for each process anyway (each process is upwards of 30-50 MB, which is massive!).
And again yes, it will need to read the .ini for each new process, but that is negligible compared to all of the other loading that needs to be done.
Of course, the alternative is to use ini_set(), which would have to be called on each request. Performance-wise, that would be just the same as an .ini file IF processes were recreated for every request. However, processes are often reused (which is why you should tune the minimum and maximum process counts in the Apache config).
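For reference, the runtime alternative looks roughly like this. This is only a sketch of a hypothetical bootstrap file; the directive names are standard php.ini ones.

```php
<?php
// bootstrap.php (hypothetical): apply configuration at runtime with ini_set(),
// which runs on every request instead of being parsed once per process.
ini_set('display_errors', '0');
ini_set('error_reporting', (string) E_ALL);
ini_set('memory_limit', '64M');

// The php.ini equivalents, parsed when the process starts, would be:
//   display_errors  = Off
//   error_reporting = E_ALL
//   memory_limit    = 64M
```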
So, in conclusion, there is a slight performance benefit to using a php.ini file.
However, as with all performance concerns in PHP and Apache, do what WORKS! If you are trying to optimize, the bottleneck is probably your queries!
I think this could be a pretty big bottleneck for big sites. Is there a way to store them in memory?
Yes, by default PHP files are read and executed on every page request.
You should look into something like APC, Zend Accelerator, or another PHP opcode cache.
You may already have one of these installed; however, most of the time they need some edits to php.ini to get them doing their job.
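If you are unsure whether those php.ini edits took effect, a quick sanity check from PHP itself might look like this. The directive names assume APC; other accelerators use their own prefixes.

```php
<?php
// Print a few APC-related php.ini directives as PHP actually sees them.
// ini_get() returns false for directives that are not registered at all.
foreach (array('apc.enabled', 'apc.shm_size', 'apc.stat') as $directive) {
    printf("%-14s => %s\n", $directive, var_export(ini_get($directive), true));
}
```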
My scripts need to read a small file, about 10 bytes, on every HTTP request processed by PHP (PHP-FPM), so I wonder whether the file will be cached by the OS (in my case Ubuntu) to avoid disk I/O. Or should I avoid it?
Yes. If you start a program like htop and look at the yellow part of the memory bar, that is the amount of memory currently being used for the disk cache. However, accessing the file will still cause a disk write to update the file's access time; this can be disabled by adding the "noatime" option to the relevant partition line in /etc/fstab.
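A quick way to see the page cache at work is to time repeated reads of the file. This is only a rough sketch and the path is hypothetical:

```php
<?php
// After the first read, the data is served from the OS page cache, so the
// per-read cost is essentially the syscall overhead, not disk I/O.
$path = '/var/www/app/flag.txt'; // hypothetical 10-byte file

$start = microtime(true);
for ($i = 0; $i < 10000; $i++) {
    $data = file_get_contents($path);
}
$elapsed = microtime(true) - $start;

printf("10000 reads of %d bytes in %.4f s (%.2f microseconds per read)\n",
       strlen($data), $elapsed, $elapsed * 1e6 / 10000);
```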
The answer to your question is: yes, it will be cached.
Whether you should avoid it is up to you.
The following question would be very helpful:
Does the Linux filesystem cache files efficiently?
Just wondering... does it affect performance? And by how much?
For example, including 20 .php files with classes in them, but without actually using the classes (though they might be used later).
I will give a slightly different answer to this:
If you are running on a tuned VPS or dedicated server: a trivial amount.
If you are running on a shared hosting service: it can considerably degrade performance of your script execution time.
Why? Because in the first case you should have configured a PHP opcode cache such as APC or Xcache, which can, in practical terms, eliminate script-load and compilation overheads. Even where files need to be read or stat-checked, the metadata and file data will be "hot" and therefore largely held in the file-system cache if the (virtual) server is dedicated to the application.
On a shared service everything runs in the opposite direction: PHP runs as a per-request image under the user's UID; no opcode caching solutions support this mode, so everything needs to be compiled. The killer here is that files need to be read, and many (perhaps most) shared LAMP hosting providers use a scalable server farm for the LAMP tier, with the user data on shared NFS-mounted NAS infrastructure. Since these NFS mounts will have an acregmin of less than 1 minute, the I/O requests will require RPCs off-server. My blog article gives some benchmarks here. The details for a shared IIS hosting template are different, but the net effects are similar.
I run the phpBB forum package on my shared service and I roughly halved response times by aggregating the common set of source includes as I describe here.
Yes, though by how much depends on a number of things. The performance cost isn't too high if you are using a PHP accelerator, but things will be drastically slower if you aren't. Your best bet is generally to use autoloading, so you only load things at the point of actual use, rather than loading everything just in case. That may reduce your memory consumption too.
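A minimal autoloading sketch might look like this; the classes/ directory layout and the ReportGenerator class are hypothetical, and a named function is used so it also works on older PHP 5.x:

```php
<?php
// Each class file is read only when the class is first used, instead of
// include-ing all 20 files up front.
function my_autoload($class) {
    $file = dirname(__FILE__) . '/classes/' . $class . '.php'; // hypothetical layout
    if (is_file($file)) {
        require $file;
    }
}
spl_autoload_register('my_autoload');

// No class files have been included yet; this line triggers exactly one require:
$report = new ReportGenerator(); // hypothetical class in classes/ReportGenerator.php
```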
Of course it affects the performance. Everything you do in PHP does.
How much of a performance hit is a matter of how much data is in them and how long it takes to execute them, or, in the case of classes, to read them.
If you're not using them, why include them? I assume you're using some main engine file or header file, and you should rethink your method of including files.
EDIT: Or, as @Pekka pointed out, you can autoload classes.
Short answer - yes it will.
For longer answers, a quick Google search revealed these: Will including unnecessary php files slow down website?; PHP Performance on including multiple files
Searching helps!
--Matīss
Is there a known issue leading to file modification times of cache files on Windows XP SP 3 getting arbitrarily updated, but without any actual change?
Is there some service on a standard Windows XP installation (backup, sync, versioning, virus scanner) known to touch files? The files all have a .txt extension.
If there isn't, forget it. Then I'm getting something wrong in my cache routines, and I'll debug my way through.
Background:
I'm building a simple caching wrapper around a slow web site on a Windows server.
I am comparing the filemtime() time stamp to some columns in the database to determine whether a cached file is stale.
I'm having problems with this method because the modification time of the cache files seems to get updated between operations without me doing anything. This results in stale files being displayed.
I'm the only user on the machine. The operating system is Windows XP, the web server a XAMPP Apache 2 with PHP 5.2.
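For context, the stale check is roughly the following. This is only a sketch; the table, column, and path names are hypothetical and it assumes MySQL accessed via PDO.

```php
<?php
// Compare the cache file's mtime against a timestamp from the database.
function cache_is_stale($cacheFile, PDO $db, $id) {
    if (!is_file($cacheFile)) {
        return true;
    }
    clearstatcache(); // PHP caches stat() results within a request
    $cachedAt = filemtime($cacheFile);

    // Hypothetical table/column; UNIX_TIMESTAMP() assumes MySQL.
    $stmt = $db->prepare('SELECT UNIX_TIMESTAMP(updated_at) FROM pages WHERE id = ?');
    $stmt->execute(array($id));
    $updatedAt = (int) $stmt->fetchColumn();

    return $updatedAt > $cachedAt; // source changed after the cache was written
}
```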
You could set up logging* on the machine to find out what is changing your files. From your description I take it this happens frequently, so you might find Process Monitor (German link) to be a better solution for monitoring.
* I think you can set up logging with on-board tools as well; I'm just not sure anymore how.
The only mtime issue I can think of is the dreaded DST bug. It doesn't sound quite like what you're getting though.
Certainly there are other Windows tools that might modify a file behind your back, but typically it's user-level stuff like WMP screwing with the ID3 tags, or dodgy AV... not anything I would expect to be touching your cache files.
(Maybe you could try an equality comparison of mtimes rather than greater-than/less-than, only using the cache if there is an exact match? This at least means that if some anti-social bleeder is touching files it'll just slow you down a bit, instead of making you serve stale files. FWIW this is what Python does with its bytecode cache.)
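A minimal sketch of that exact-match idea, with hypothetical function and file names: store the source mtime the cache was built from next to the cache file, and only trust the cache if it still matches exactly.

```php
<?php
// Write the cache together with the source mtime it was generated from.
function write_cache($cacheFile, $content, $sourceMtime) {
    file_put_contents($cacheFile, $content);
    file_put_contents($cacheFile . '.mtime', (string) $sourceMtime);
}

// Return the cached content only if the stored mtime matches exactly;
// otherwise return false so the caller regenerates the cache.
function read_cache_if_fresh($cacheFile, $sourceMtime) {
    $stampFile = $cacheFile . '.mtime';
    if (!is_file($cacheFile) || !is_file($stampFile)) {
        return false;
    }
    // Equality, not >= : if anything touched either timestamp, just rebuild.
    if ((int) file_get_contents($stampFile) !== (int) $sourceMtime) {
        return false;
    }
    return file_get_contents($cacheFile);
}
```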