I'm running a Linux web server that uses Apache, PHP, and suPHP. Each time a visitor requests a page, suPHP is started, the PHP interpreter is started, and the PHP file is processed, and all of these binaries are read from disk every time.
I want the suPHP and PHP programs to be cached in memory the first time they start, so that on subsequent starts they load from memory and start up much faster.
I think there is a setting inside /proc somewhere that can help me with this, but I'm not sure which one.
What you're trying to change is an aspect of application behavior, not kernel behavior, so there is nothing in /proc that will help you.
PHP opcode caching is not available under suPHP. You will need to use something else (possibly mod_php or FastCGI) to take advantage of it.
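For illustration, a minimal sketch of the FastCGI alternative, assuming a PHP-FPM pool behind mod_proxy_fcgi (the socket path is an assumption; adjust it for your distribution):

```apache
# Illustrative vhost fragment: hand .php requests to a persistent
# PHP-FPM pool instead of starting an interpreter per request (as suPHP does).
# With a persistent pool, an opcode cache such as APC becomes usable.
<FilesMatch "\.php$">
    SetHandler "proxy:unix:/run/php/php-fpm.sock|fcgi://localhost"
</FilesMatch>
```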
In a common LAMP setup, you can put your PHP configuration in an .ini file that PHP reads and applies when the web server starts. But how does that compare, performance-wise, to the runtime configuration that developers often put in the application's bootstrap file?
Since PHP uses a shared-nothing architecture, each request starts a new (sub?)process, so will it need to read the *.ini files again? Or is the configuration already shared by the main PHP process? If it is read again each time, then setting a lot of configuration at runtime would add even more overhead to each request than leaving it in the ini files, right?
Well, firstly, it is not PHP that forks a new process; that is entirely up to the web server that PHP is a part of. So yes, if you are using LAMP, and therefore Apache, the entire PHP module has to be loaded into memory for each process anyway (each process is upwards of 30-50 MB, which is massive!).
And again yes, it will need to read the .ini for each new process, but that cost is negligible compared to all of the other loading that needs to be done.
Of course, the alternative is ini_set, which would have to be called on each request. Performance-wise, it would be just the same as an .ini file if processes were recreated for every request. However, processes are often reused (which is why you should tune the minimum and maximum process counts in the Apache config).
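As a concrete sketch of the runtime alternative discussed above (the setting names are just common examples):

```php
<?php
// Runtime configuration: these ini_set() calls execute on every request,
// whereas a php.ini file is parsed once when the process starts.
ini_set('display_errors', '0');
ini_set('memory_limit', '256M');

echo ini_get('memory_limit'); // shows the runtime override
```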
So, in conclusion, there is a slight performance benefit to putting settings in a php.ini file.
However, like all performance concerns with PHP and Apache, do what WORKS! If you are trying to optimize, it's probably your queries!
Imagine there is a PHP page on http://server/page.php.
Client(s) send 100 requests from browser to the server for that page simultaneously.
Does the server run 100 separate processes of php.exe simultaneously?
Does it re-interpret the page.php 100 times?
The answer is highly variable, according to server config.
Let's answer question 1 first:
Does the server run 100 separate processes of php.exe simultaneously?
This depends on the way PHP is installed. If PHP is being run via CGI, then the answer is "Yes, each request calls a separate instance of PHP". If it's being run via an Apache module, then the answer is "No, each request starts a new PHP thread within the Apache executable".
Similar variations will exist for other web servers. Please note that for a Unix/Linux based operating system, running separate copies of the executable for each request is not necessarily a bad thing for performance; the core of the OS is designed such that in many cases, tasks are better done by a number of separate executables rather than one monolithic one.
However, no matter what you do about it, having large numbers of simultaneous requests will drain your server resources and lead to timeouts and errors for your users. This is why it is important for your PHP programs to finish running as quickly as possible. Do not write PHP programs for web consumption that are slow to run; if you're likely to have a lot of traffic, you need to test for performance as much as you do for functionality. Having your programs exit quickly will dramatically reduce the likelihood of having a significant number of simultaneous requests, which will obviously have a big impact on your site's performance.
Now your second question:
Does it re-interpret the page.php 100 times?
For a standard PHP installation, the answer here is "Yes it does, and yes it does have a performance impact."
However, PHP has several caching solutions that are designed specifically to mitigate this. The main options are APC and the Zend Cache, either of which can be installed as a standard module. Using one of these modules means that PHP caches the interpreted code, so subsequent calls run much faster.
The Zend Cache will be included as part of the standard PHP installation as of the forthcoming PHP 5.5 release.
Apache 2 can work in multiple different modes (MPMs).
In "prefork" mode (the most commonly used), Apache creates a process for every request, and each process runs its own php.exe. The config file sets a maximum number of connections (MaxClients in httpd.conf), and Apache will create at most MaxClients processes. This is to prevent memory exhaustion. Further requests are queued, waiting for a previous request to complete.
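For illustration, a prefork fragment of the kind referred to above (the numbers are placeholder examples, not recommendations):

```apache
# httpd.conf: cap the number of simultaneous Apache (and therefore PHP)
# processes; requests beyond the cap wait in the listen queue
<IfModule mpm_prefork_module>
    StartServers       5
    MinSpareServers    5
    MaxSpareServers   10
    MaxClients       150
</IfModule>
```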
If you do not install an opcode cache extension such as APC, XCache, or eAccelerator, php.exe will re-interpret page.php 100 times.
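A hedged sketch of what enabling such an extension might look like in php.ini (APC shown; the values are illustrative, not tuned recommendations):

```ini
; Load APC and give it a shared-memory segment for cached opcode
extension = apc.so
apc.enabled = 1
apc.shm_size = 64M
```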
It depends.
There are different ways of setting things up, and things can get quite complex.
The short answer is 'more or less'. A number of apache processes will be spawned, the PHP code will be parsed and run.
If you want to avoid the parsing overhead, use an opcode cache. APC (Alternative PHP Cache) is a very popular one. It has a number of neat features which are worth digging into, but even with no configuration beyond installing it, it will ensure that each PHP page is only parsed into opcode once.
To change how many Apache processes are spawned, you'll most likely be using the prefork MPM. This lets you decide how you want Apache to deal with multiple users.
For general advice, in my experience (small sites, not a huge amount of traffic), installing APC is worth doing, for everything else the defaults are not too bad.
There are a number of answers to this. In general, Apache will create a process for each incoming request, so it is possible that 100 processes are created. However, a process takes time to create, so it may well be that by the time one process has finished and died, one of those 100 connections arrives a fraction of a second later (100 connections at exactly the same moment is very rare indeed, unless you're Google).
However, let us imagine that 100 processes really do need to be held in memory simultaneously, but that there is only room for 50 in the available server RAM. In that case, 50 connections will be served, and 50 will have to wait for processes to die and be re-spawned. Thus, a random half of those requests will be delayed, though if a spawn-serve-die cycle only takes a fraction of a second, they won't have to wait very long. This is why, when improving server capacity, reducing your page load time is as important as adding more RAM: the sooner a process finishes, the sooner a new one can take its place.
One way, incidentally, to reduce load time is to spawn a number of PHP processes in advance and hold them in memory. This is the basis of FastCGI (or fcgid, which is compatible). Rather than creating and killing a process for every request, a process is spawned once and re-used for several requests. For PHP, these are usually configured to die after a certain number of page requests (e.g. 1000), as historically PHP has had quite a lot of memory leaks (the longer a process is reused, the worse the leaks get).
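As a sketch, the recycling behavior described above maps onto mod_fcgid directives roughly like this (the values are illustrative):

```apache
# mod_fcgid: keep a pool of persistent PHP processes, and recycle each
# one after 1000 requests to contain memory leaks
FcgidMaxProcesses            10
FcgidMaxRequestsPerProcess 1000
FcgidInitialEnv PHP_FCGI_MAX_REQUESTS 1000
```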
You ask if a page is re-interpreted for every request. Normally yes, but if you also run a PHP Accelerator, then no - the byte-code that PHP compiles to is cached and reused. Thus, mixing the FastCGI approach with an accelerator can make for a very speedy server indeed. Standard PHP does not come with an accelerator, but Zend Cache is scheduled for inclusion into the PHP core.
I have a fairly basic PHP script that caches data to a text file. I need to come up with a solution that prevents two running instances of the script from writing to the file at the same time. I've looked into the PHP flock function, however, the PHP manual (http://php.net/manual/en/function.flock.php) mentioned one big limitation:
On some operating systems flock() is implemented at the process level. When using a multithreaded server API like ISAPI you may not be able to rely on flock() to protect files against other PHP scripts running in parallel threads of the same server instance!
I've got two questions regarding this warning that I'm hoping someone can answer. First, how can I check if my implementation of flock is done at the process level or not? Btw, I'm running CentOS, with cPanel.
Second, if my implementation is at the process level, does that mean that one running instance of my script will not be aware of a lock done by another running instance of the same script? Or do script instances run on separate threads and not separate processes? Any clarification about this is very much appreciated.
Thanks.
The only common case would be running Apache with some kind of threaded (non-forking) MPM. In 99% of cases you are not running PHP threaded, so it's a relatively safe assumption.
Aside from that, it may be worth trying to avoid locking altogether.
The biggest issue you have is that two processes may write at the same time, or one process reads the cache while it's not fully generated. The easiest way to get around this is to let the PHP script generate the cache in a different file in a temporary location. When the file is fully written, just move it into place (with rename()). File moves are guaranteed to be atomic as long as the source and destination are on the same mount.
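A minimal sketch of that write-then-rename pattern (the file names are illustrative; it assumes the temporary file and the cache file live on the same mount, so rename() is atomic):

```php
<?php
// Build the cache in a temp file, then atomically move it into place.
// Readers either see the old file or the complete new one, never a partial write.
$cacheFile = __DIR__ . '/cache.txt';
$tmpFile   = tempnam(__DIR__, 'cache_'); // same directory => same mount

file_put_contents($tmpFile, "expensive result\n");
rename($tmpFile, $cacheFile); // atomic replacement on the same mount

echo file_get_contents($cacheFile);
```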
I know PHP is mostly an interpreted language. Does the PHP interpreter (php.exe in Windows and php file in Linux) do interpretation every time my script executes or only when I change the source? To put it another way, does the PHP interpreter cache interpreted scripts or not?
Yes, you pay a performance penalty, as PHP does the interpretation every time. However, if you have APC (Alternative PHP Cache: http://php.net/apc) installed and configured, it will keep the whole bytecode in memory and rebuild it only when the source changes.
This is in essence what happens every time a request arrives:
PHP reads the file
PHP compiles the file to a language it can process, the so called opcode
PHP runs the opcode
There is some overhead in compiling the file into opcode as many have already pointed out, and PHP by default has no cache, so it will do the "compilation" process every time a request arrives even if the file didn't change.
There are some optional modules that provide opcode caches to avoid that overhead. Of these, the most commonly recommended is APC, since it was slated to ship by default in a future PHP release.
Yes.
Being an interpreted language, you do pay a performance penalty.
However, there are tools that compile the scripts once and reuse the result.
Take a look at PHP Accelerator.
Most PHP accelerators work by caching the compiled bytecode of PHP scripts to avoid the overhead of parsing and compiling source code on each request (some or even most of which may never be executed). To further improve performance, the cached code is stored in shared memory and directly executed from there, minimizing the amount of slow disk reads and memory copying at runtime.
I have a PHP application that for every request loads 1 ini file, and at least 10 PHP files.
As these same files are loaded for every single request, I thought about mounting them on a RAM disk, but I have been told that the Linux filesystem (ext3) will cache them in such a way that a RAM disk would not improve performance.
Can anyone verify this and possibly explain what is actually happening?
Many thanks.
The virtual file system of Linux (and not only Linux) uses a cache for virtually every filesystem, so yes, that is in place for ext3 too.
But you might be interested in something like APC, which stores the bytecode/intermediate code for PHP's Zend Engine in memory.
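As a hedged sketch of how APC's user cache is typically used from code (the helper name is made up, and it falls back to recomputing when the APC extension is not loaded):

```php
<?php
// cached_config() is a hypothetical helper: fetch a value from APC shared
// memory if available, otherwise build it and (if possible) store it.
function cached_config(string $key, callable $builder, int $ttl = 300)
{
    if (function_exists('apc_fetch')) {
        $value = apc_fetch($key, $success);
        if ($success) {
            return $value; // served from shared memory, no disk access
        }
    }
    $value = $builder();
    if (function_exists('apc_store')) {
        apc_store($key, $value, $ttl); // shared across requests in one server
    }
    return $value;
}

$settings = cached_config('app.ini', function () {
    return parse_ini_string("debug = 1"); // stand-in for a real .ini file
});
```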