TL;DR – What's the simplest approach to very quickly loading large files (~tens of MB) in PHP to avoid responsiveness issues in a web page? For example, I suspect I'll need to do something like pre-loading the files and sharing the resulting large (static) associative arrays from one PHP instance to the instances created by the web server. I'm specifically trying to avoid having to deal with SQL...
Details
I've put together an online dictionary (here) with a home-brew search engine that works beautifully on my local computer, but has performance problems on the server. (Note: I have no control over the server aside from updating my own files.)
The performance problems are largely due to the index files that I need to load (large associative arrays, json encoded). These are on the order of 6-16 MB and only those required for the specific type of search are loaded. This means non-typical searches (e.g., searching specifically in quotes or etymology) show a significant performance hit due to the necessity of loading extra (large) files.
Now, I realize many people will tell me that I should be using SQL or some other DB solution instead of loading and searching my own index files. They're correct, but right now I don't have the time or energy to fight with learning how to set up my own SQL database, converting all my data structures over to SQL format, figuring out how to interface PHP with SQL, etc. I have no experience with SQL, and all the tutorials I've looked at so far make it seem ill-suited to my data structures (i.e., I can't define my records in terms of fixed-width strings because I have arbitrary-length blocks of text for every entry and arbitrary numbers of entries associated with a given headword, etc.)
Again, my current code works perfectly on my local computer - all the problems boil down to the time expensive step of loading these large index files. A "simple" fix (that allows me to keep my current design) would seem to be pre-loading these index files in a separate script and then have any PHP instances created by the web server access these static data structures that already exist in memory (again, they're just a bunch of large/complex associative arrays). These data structures/files are static, so locking/race conditions aren't an issue (i.e., the data only changes periodically when I push out updates with corrections/updated inflection lists, etc.).
I've attempted to find information on how to approach this, but the results are confusing and contradictory and are beyond the scope of things I've worked with before. I'm happy to learn new things, but it's not clear which path will even work for what I'm trying to do.
Lots of (older?) recommendations to use APC, but I see comments that this only works for instances on the same processor - so even if they're on the same server it may not work (even worse, maybe the host server will spread different instances across different virtual machines). I also see comments that some aspects of APC are deprecated, as well as discussions about re-compiling PHP with various flags - this is obviously not an option for me; I only have what's available on the server.
A recommendation to use the built-in IPC functions within PHP (link) - but it's not clear if I can carve out a ~50 MB chunk to share (the way it's written makes it sound like this is for small chunks of memory), and some of the comments suggest there can be memory collision problems, or that this would cause performance issues compared to a daemon/socket approach (see comments at the end of the linked article). Honestly, this approach looks appealing as I can just use my existing data structures. However, I'm worried this will have the same problems as those leveled at APC (e.g., will this work if the server is putting different instances on different virtual machines?)
Daemon/socket approach - a comment in the link above claims this would be best performance-wise. I haven't worked with sockets before, but I feel like it would be less painful to learn this than restructuring everything around an SQL approach (maybe that's a bad assessment on my part?) I can at least conceptualize how this would work, but I would need to learn how to get a daemon script that's always running in the background and ensure it's responsive to multiple simultaneous instances, etc.
I'm sure there are other approaches that I haven't yet stumbled across.
Again, ideally, I'd like an approach that simply lets me load all the index files and hang those associative arrays out in the wind for everyone to look at. Security's not an issue (it's just a processed version of the data in the simple web pages) and the data is static.
Note: the web server is running PHP 7.4 (Zend 3.4) and my local environment is OSX 10.14 or 10.15 with PHP 7.3 (Zend 3.3)
The easiest solution is to use memcached to store your large objects. It is well supported in PHP. You will need to increase the maximum object size (the default is 1 MB), but that is a simple configuration option.
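For illustration, here is a minimal sketch of that idea in PHP (the file path, key names and the load_index() helper are hypothetical; it assumes the memcached extension is installed and a memcached daemon is running locally with its item size limit raised, e.g. memcached -I 64m):

$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);

// Hypothetical helper: return a decoded index, touching the JSON file only on a cache miss.
function load_index(Memcached $mc, $file)
{
    $key = 'index_' . basename($file);
    $index = $mc->get($key);
    if ($index === false) {
        // Cache miss: decode the JSON index once and keep it for an hour.
        $index = json_decode(file_get_contents($file), true);
        $mc->set($key, $index, 3600);
    }
    return $index;
}

$quotesIndex = load_index($mc, '/path/to/quotes_index.json');

Note that memcached serializes the array on set and unserializes it on get, so a multi-megabyte array still carries a per-request cost; it is worth measuring whether this actually beats re-reading the JSON file from the filesystem cache.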
I know you can minify PHP, but I'm wondering if there is any point. PHP is an interpreted language so will run a little slower than a compiled language. My question is: would clients see a visible speed improvement in page loads and such if I were to minify my PHP?
Also, is there a way to compile PHP or something similar?
PHP is compiled into bytecode, which is then interpreted on top of something resembling a VM. Many other scripting languages follow the same general process, including Perl and Ruby. It's not really a traditional interpreted language like, say, BASIC.
There would be no effective speed increase if you attempted to "minify" the source. You would get a major increase by using a bytecode cache like APC.
Facebook introduced a compiler named HipHop that transforms PHP source into C++ code. Rasmus Lerdorf, one of the big PHP guys, did a presentation for Digg earlier this year that covers the performance improvements given by HipHop. In short, it's not much faster than optimizing code and using a bytecode cache. HipHop is overkill for the majority of users.
Facebook also recently unveiled HHVM, a new virtual machine based on their work making HipHop. It's still rather new and it's not clear if it will provide a major performance boost to the general public.
Just to make sure it's stated expressly, please read that presentation in full. It points out numerous ways to benchmark and profile code and identify bottlenecks using tools like xdebug and xhprof, also from Facebook.
2021 Update
HHVM diverged away from vanilla PHP a couple versions ago. PHP 7 and 8 bring a whole bunch of amazing performance improvements that have pretty much closed the gap. You now no longer need to do weird things to get better performance out of PHP!
Minifying PHP source code continues to be useless for performance reasons.
Forgo the idea of minifying PHP in favor of using an opcode cache, like PHP Accelerator, or APC.
Or something else like memcached
Yes, there is one (non-technical) point.
Your hosting provider can read your code on their server. If you minify and uglify it, it is more difficult for snoopers to steal your ideas.
So one reason for minifying and uglifying PHP may be protection from prying eyes. I think uglifying code should be one step in an automated deployment.
With some rewriting (shorter variable names) you could save a few bytes of memory, but that's seldom significant.
However, I do design some of my applications in a way that allows include scripts to be concatenated together. With php -w they can be compacted significantly, adding a small speed gain at script startup. On an opcode-cache-enabled server, however, this only saves a few file mtime checks.
This is less an answer than an advertisement. I've been working on a PHP extension that translates Zend opcodes to run on a VM with static typing. It doesn't accelerate arbitrary PHP code. It does allow you to write code that runs way faster than regular PHP allows. The key here is static typing. On a modern CPU, a dynamic language eats branch-misprediction penalties left and right. The fact that PHP arrays are hash tables also imposes a high cost: lots of branch mispredictions, inefficient use of cache, poor memory prefetching, and no SIMD optimization whatsoever. Branch mispredictions and cache misses in particular are the Achilles' heel of today's processors. My little VM sidesteps those problems by using static types and C arrays instead of hash tables. The result ends up running roughly ten times faster. This is using bytecode interpretation. The extension can optionally compile a function through gcc. In that case, you get two to five times more speed.
Here's the link for anyone interested:
https://github.com/chung-leong/qb/wiki
Again, the extension is not a general PHP accelerator. You have to write code specific for it.
There are PHP compilers... see this previous question for a list; but (unless you're the size of Facebook or are targeting your application to run client-side) they're generally a lot more trouble than they're worth.
Simple opcode caching will give you more benefit for the effort involved. Or profile your code to identify the bottlenecks, and then optimise it.
You don't need to minify PHP.
In order to get better performance, install an opcode cache; but the ideal solution would be to upgrade your PHP to version 5.5 or above, because the newer versions bundle an opcode cache by default (Zend OPcache) that performs better than the other ones: http://massivescale.blogspot.com/2013/06/php-55-zend-optimiser-opcache-vs-xcache.html.
The "point" is to make the file smaller, because smaller files load faster than bigger files. Also, removing whitespace will make parsing a tiny bit faster since those characters don't need to be parsed out.
Will it be noticeable? Almost never, unless the file is huge and there's a big difference in size.
A site I built with Kohana was slammed with an enormous amount of traffic yesterday, causing me to take a step back and evaluate some of the design. I'm curious what are some standard techniques for optimizing Kohana-based applications?
I'm interested in benchmarking as well. Do I need to setup Benchmark::start() and Benchmark::stop() for each controller-method in order to see execution times for all pages, or am I able to apply benchmarking globally and quickly?
I will be using the Cache-library more in time to come, but I am open to more suggestions as I'm sure there's a lot I can do that I'm simply not aware of at the moment.
What I will say in this answer is not specific to Kohana, and can probably apply to lots of PHP projects.
Here are some points that come to my mind when talking about performance, scalability, PHP, ...
I've used many of those ideas while working on several projects -- and they helped; so they could probably help here too.
First of all, when it comes to performance, there are many aspects/questions to consider:
configuration of the server (both Apache, PHP, MySQL, other possible daemons, and system); you might get more help about that on ServerFault, I suppose,
PHP code,
Database queries,
Whether your webserver needs to be involved at all,
Can you use any kind of caching mechanism? Or do you always need fully up-to-date data on the website?
Using a reverse proxy
The first thing that could be really useful is using a reverse proxy, like varnish, in front of your webserver: let it cache as many things as possible, so only requests that really need PHP/MySQL calculations (and, of course, some other requests, when they are not in the cache of the proxy) make it to Apache/PHP/MySQL.
First of all, your CSS/Javascript/Images -- well, everything that is static -- probably don't need to be always served by Apache
So, you can have the reverse proxy cache all those.
Serving those static files is no big deal for Apache, but the less it has to work for those, the more it will be able to do with PHP.
Remember: Apache can only serve a finite, limited number of requests at a time.
Then, have the reverse proxy serve as many PHP-pages as possible from cache: there are probably some pages that don't change that often, and could be served from cache. Instead of using some PHP-based cache, why not let another, lighter, server serve those (and fetch them from the PHP server from time to time, so they are always almost up to date)?
For instance, if you have some RSS feeds (we generally tend to forget those when trying to optimize for performance) that are requested very often, having them in cache for a couple of minutes could save hundreds/thousands of requests to Apache+PHP+MySQL!
Same for the most visited pages of your site, if they don't change for at least a couple of minutes (example: homepage?), then, no need to waste CPU re-generating them each time a user requests them.
Maybe there is a difference between pages served for anonymous users (the same page for all anonymous users) and pages served for identified users ("Hello Mr X, you have new messages", for instance)?
If so, you can probably configure the reverse proxy to cache the page that is served for anonymous users (based on a cookie, like the session cookie, typically)
It'll mean that Apache+PHP has less to deal with: only identified users -- which might be only a small part of your users.
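As a sketch of what the PHP side of that can look like, a page could declare itself cacheable only when no session cookie is present, so the reverse proxy keeps one copy for all anonymous visitors (the cookie name and lifetime here are just examples):

// Hypothetical sketch: let a reverse proxy cache pages for anonymous users only.
if (empty($_COOKIE['session_id'])) {
    // No session cookie: same page for every anonymous visitor, cacheable for 5 minutes.
    header('Cache-Control: public, max-age=300');
} else {
    // Identified user: personalised content, never cache it upstream.
    header('Cache-Control: private, no-cache');
}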
About using a reverse-proxy as cache, for a PHP application, you can, for instance, take a look at Benchmark Results Show 400%-700% Increase In Server Capabilities with APC and Squid Cache.
(Yep, they are using Squid, and I was talking about varnish -- that's just another possibility ^^ Varnish being more recent, but more dedicated to caching)
If you do that well enough, and manage to stop re-generating too many pages again and again, maybe you won't even have to optimize any of your code ;-)
At least, maybe not in any kind of rush... And it's always better to perform optimizations when you are not under too much pressure...
As a sidenote: you are saying in the OP:
A site I built with Kohana was slammed with an enormous amount of traffic yesterday,
This is the kind of sudden situation where a reverse-proxy can literally save the day, if your website can deal with not being up to date by the second:
install it, configure it, let it always -- every normal day -- run:
Configure it to not keep PHP pages in cache; or only for a short duration; this way, you always have up to date data displayed
And, the day you take a slashdot or digg effect:
Configure the reverse proxy to keep PHP pages in cache; or for a longer period of time; maybe your pages will not be up to date by the second, but it will allow your website to survive the digg-effect!
About that, How can I detect and survive being “Slashdotted”? might be an interesting read.
On the PHP side of things:
First of all: are you using a recent version of PHP? There are regularly improvements in speed, with new versions ;-)
For instance, take a look at Benchmark of PHP Branches 3.0 through 5.3-CVS.
Note that performance is quite a good reason to use PHP 5.3 (I've made some benchmarks (in French), and the results are great)...
Another pretty good reason being, of course, that PHP 5.2 has reached its end of life, and is not maintained anymore!
Are you using any opcode cache?
I'm thinking about APC - Alternative PHP Cache, for instance (pecl, manual), which is the solution I've seen used the most -- and that is used on all servers on which I've worked.
See also: Slides APC Facebook,
Or Benchmark Results Show 400%-700% Increase In Server Capabilities with APC and Squid Cache.
It can really lower the CPU-load of a server a lot, in some cases (I've seen CPU-load on some servers go from 80% to 40%, just by installing APC and activating its opcode-cache functionality!)
Basically, execution of a PHP script goes in two steps:
Compilation of the PHP source-code to opcodes (kind of an equivalent of Java's bytecode)
Execution of those opcodes
APC keeps those in memory, so there is less work to be done each time a PHP script/file is executed: only fetch the opcodes from RAM, and execute them.
You might need to take a look at APC's configuration options, by the way
there are quite a few of those, and some can have a great impact on both speed / CPU-load / ease of use for you
For instance, disabling apc.stat (https://php.net/manual/en/apc.configuration.php#ini.apc.stat) can be good for system load; but it means modifications made to PHP files won't be taken into account unless you flush the whole opcode cache; about that, for more details, see for instance To stat() Or Not To stat()?
Using cache for data
As much as possible, it is better to avoid doing the same thing over and over again.
The main thing I'm thinking about is, of course, SQL queries: many of your pages probably run the same queries, and the results of some of those are probably almost always the same... which means lots of "useless" queries made to the database, which has to spend time serving the same data over and over again.
Of course, this is true for other stuff, like Web Services calls, fetching information from other websites, heavy calculations, ...
It might be very interesting for you to identify:
Which queries are run lots of times, always returning the same data
Which other (heavy) calculations are done lots of times, always returning the same result
And store these data/results in some kind of cache, so they are easier to get -- faster -- and you don't have to go to your SQL server for "nothing".
Great caching mechanisms are, for instance:
APC: in addition to the opcode-cache I talked about earlier, it allows you to store data in memory,
And/or memcached (see also), which is very useful if you literally have lots of data and/or are using multiple servers, as it is distributed.
of course, you can think about files; and probably many other ideas.
I'm pretty sure your framework comes with some cache-related stuff; you probably already know that, as you said "I will be using the Cache-library more in time to come" in the OP ;-)
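As a rough sketch of that data-caching idea (the function, query and key names are made up, and it assumes APC's user cache is enabled and $db is a PDO connection):

// Hypothetical sketch: cache the result of an expensive query in APC's user cache.
function get_latest_articles(PDO $db, $limit = 10)
{
    $key = 'latest_articles_' . (int)$limit;
    $articles = apc_fetch($key, $success);
    if (!$success) {
        // Not in cache: run the query once, keep the result for 10 minutes.
        $stmt = $db->query('SELECT id, title FROM articles ORDER BY created DESC LIMIT ' . (int)$limit);
        $articles = $stmt->fetchAll(PDO::FETCH_ASSOC);
        apc_store($key, $articles, 600);
    }
    return $articles;
}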
Profiling
Now, a nice thing to do would be to use the Xdebug extension to profile your application: it often allows you to find a couple of weak spots quite easily -- at least, if there is any function that takes lots of time.
Configured properly, it will generate profiling files that can be analysed with some graphic tools, such as:
KCachegrind: my favorite, but works only on Linux/KDE
Wincachegrind for Windows; it does a bit less than KCacheGrind, unfortunately -- it doesn't display callgraphs, typically.
Webgrind which runs on a PHP webserver, so works anywhere -- but probably has less features.
For instance, here are a couple screenshots of KCacheGrind:
(KCacheGrind screenshots omitted; source: pascal-martin.fr)
(BTW, the callgraph presented on the second screenshot is typically something neither WinCacheGrind nor Webgrind can do, if I remember correctly ^^ )
(Thanks #Mikushi for the comment) Another possibility that I haven't used much is the xhprof extension: it also helps with profiling and can generate callgraphs -- but it is lighter than Xdebug, which means you should be able to install it on a production server.
You should be able to use it alongside XHGui, which will help with the visualisation of the data.
On the SQL side of things:
Now that we've spoken a bit about PHP, note that it is more than possible that your bottleneck isn't on the PHP side of things, but on the database side...
At least two or three things, here:
You should determine:
What are the most frequent queries your application is doing
Whether those are optimized (using the right indexes, mainly?), using the EXPLAIN instruction, if you are using MySQL
See also: Optimizing SELECT and Other Statements
You can, for instance, activate log_slow_queries to get a list of the requests that take "too much" time, and start your optimization by those.
whether you could cache some of these queries (see what I said earlier)
Is your MySQL well configured? I don't know much about that, but there are some configuration options that might have some impact.
Optimizing the MySQL Server might give you some interesting information about that.
Still, the two most important things are:
Don't go to the DB if you don't need to: cache as much as you can!
When you have to go to the DB, use efficient queries: use indexes; and profile!
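As a tiny illustration of the second point (assuming a PDO connection to MySQL; the query is invented), checking a suspect query from a throwaway script is as simple as:

// Ask MySQL how it plans to execute a suspect query.
$plan = $pdo->query('EXPLAIN SELECT * FROM articles WHERE author_id = 42')
            ->fetchAll(PDO::FETCH_ASSOC);
print_r($plan); // no "key" and a large "rows" estimate usually means a missing index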
And what now?
If you are still reading, what else could be optimized?
Well, there is still room for improvements... A couple of architecture-oriented ideas might be:
Switch to an n-tier architecture:
Put MySQL on another server (2-tier: one for PHP; the other for MySQL)
Use several PHP servers (and load-balance the users between those)
Use other machines for static files, with a lighter webserver, like:
lighttpd
or nginx -- this one is becoming more and more popular, btw.
Use several servers for MySQL, several servers for PHP, and several reverse-proxies in front of those
Of course: install memcached daemons on whatever server has any amount of free RAM, and use them to cache as much as you can / makes sense.
Use something "more efficient" than Apache?
I hear more and more often about nginx, which is supposed to be great when it comes to PHP and high-volume websites; I've never used it myself, but you might find some interesting articles about it on the net;
for instance, PHP performance III -- Running nginx.
See also: PHP-FPM - FastCGI Process Manager, which is bundled with PHP >= 5.3.3, and does wonders with nginx.
Well, maybe some of those ideas are a bit overkill in your situation ^^
But, still... Why not study them a bit, just in case ? ;-)
And what about Kohana?
Your initial question was about optimizing an application that uses Kohana... Well, I've posted some ideas that are true for any PHP application... Which means they are true for Kohana too ;-)
(Even if not specific to it ^^)
I said: use cache; Kohana seems to support some caching stuff (You talked about it yourself, so nothing new here...)
If there is anything that can be done quickly, try it ;-)
I also said you shouldn't do anything that's not necessary; is there anything enabled by default in Kohana that you don't need?
Browsing the net, it seems there is at least something about XSS filtering; do you need that?
Still, here's a couple of links that might be useful:
Kohana General Discussion: Caching?
Community Support: Web Site Optimization: Maximum Website Performance using Kohana
Conclusion?
And, to conclude, a simple thought:
How much will it cost your company to pay you 5 days? -- considering it is a reasonable amount of time to do some great optimizations
How much will it cost your company to buy (pay for?) a second server, and its maintenance?
What if you have to scale larger?
How much will it cost to spend 10 days? more? optimizing every possible bit of your application?
And how much for a couple more servers?
I'm not saying you shouldn't optimize: you definitely should!
But go for "quick" optimizations that will get you big rewards first: using some opcode cache might help you get between 10 and 50 percent off your server's CPU-load... And it takes only a couple of minutes to set up ;-) On the other side, spending 3 days for 2 percent...
Oh, and, btw: before doing anything: put some monitoring stuff in place, so you know what improvements have been made, and how!
Without monitoring, you will have no idea of the effect of what you did... Not even if it's a real optimization or not!
For instance, you could use something like RRDtool + cacti.
And showing your boss some nice graphics with a 40% CPU-load drop is always great ;-)
Anyway, and to really conclude: have fun!
(Yes, optimizing is fun!)
(Ergh, I didn't think I would write that much... Hope at least some parts of this are useful... And I should remember this answer: might be useful some other times...)
Use XDebug and WinCacheGrind or WebCacheGrind to profile and analyze slow code execution.
Profile code with XDebug.
Use a lot of caching. If your pages are relatively static, then reverse proxy might be the best way to do it.
Kohana is out of the box very, very fast, except for the use of database objects. To quote Zombor: "You can reduce memory usage by ensuring you are using the database result object instead of result arrays." This makes a HUGE performance difference on a site that is being slammed. Not only do result arrays use more memory, they slow down the execution of scripts.
Also - you must use caching. I prefer memcache and use it in my models like this:
public function get($e_id)
{
    // Try the cache first; the key includes the site domain to avoid collisions between sites.
    $event_data = $this->cache->get('event_get_'.$e_id.Kohana::config('config.site_domain'));
    if ($event_data === NULL)
    {
        // Cache miss: fetch the event from the read slave.
        $this->db_slave
            ->select('e_id,e_name')
            ->from('Events')
            ->where('e_id', $e_id);
        $result = $this->db_slave->get();
        $event_data = ($result->count() == 1) ? $result->current() : FALSE;
        // Store the result for 5 minutes.
        $this->cache->set('event_get_'.$e_id.Kohana::config('config.site_domain'), $event_data, NULL, 300);
    }
    return $event_data;
}
This will also dramatically increase performance. The above two techniques improved a site's performance by 80%.
If you gave some more information about where you think the bottleneck is, I'm sure we could give some better ideas.
Also check out yslow (google it) for some other performance tips.
Strictly related to Kohana (you probably already have done this, or not):
In production mode:
Enable internal caching (this will only cache the Kohana::find_file results, but this actually can help a lot).
Disable profiler
Just my 2 cents :)
I totally agree with the XDebug and caching answers. Don't look into the Kohana layer for optimization until you've identified your biggest speed and scale bottlenecks.
XDebug will tell you where you spend most of your time and identify 'hotspots' in your code. Keep this profiling information so you can baseline and measure performance improvements.
Example problem and solution:
If you find that you're repeatedly building up expensive objects from the database that don't really change often, then you can look at caching them with memcached or another mechanism. All of these performance fixes take time and add complexity to your system, so be sure of your bottlenecks before you start fixing them.
I found this PECL package called threads, but there is not a release yet. And nothing is coming up on the PHP website.
From the PHP manual for the pthreads extension:
pthreads is an Object Orientated API that allows user-land multi-threading in PHP. It includes all the tools you need to create multi-threaded applications targeted at the Web or the Console. PHP applications can create, read, write, execute and synchronize with Threads, Workers and Stackables.
As unbelievable as this sounds, it's entirely true. Today, PHP can multi-thread for those wishing to try it.
Since the first release of PHP 4, on 22 May 2000, PHP has shipped with a thread-safe architecture - a way for it to execute multiple instances of its interpreter in separate threads in multi-threaded SAPI (Server API) environments. Over the last 13 years, the design of this architecture has been maintained and advanced: it has been in production use on the world's largest websites ever since.
Threading in user land was never a concern for the PHP team, and it remains as such today. You should understand that in the world where PHP does its business, there is already a defined method of scaling - add hardware. Over the many years PHP has existed, hardware has got cheaper and cheaper, and so this became less and less of a concern for the PHP team. While it was getting cheaper, it also got much more powerful; today, our mobile phones and tablets have dual and quad core architectures and plenty of RAM to go with it, and our desktops and servers commonly have 8 or 16 cores and 16 or 32 gigabytes of RAM, though we may not always be able to have two within budget, and having two desktops is rarely useful for most of us.
Additionally, PHP was written for the non-programmer; it is many a hobbyist's native tongue. The reason PHP is so easily adopted is because it is an easy language to learn and write. The reason PHP is so reliable today is because of the vast amount of work that goes into its design, and every single decision made by the PHP group. Its reliability and sheer greatness keep it in the spotlight after all these years, where its rivals have fallen to time or pressure.
Multi-threaded programming is not easy for most; even with the most coherent and reliable API, there are different things to think about, and many misconceptions. The PHP group do not wish for user-land multi-threading to be a core feature; it has never been given serious attention - and rightly so. PHP should not be complex, for everyone.
All things considered, there are still benefits to be had from allowing PHP to utilize its production-ready and tested features to make the most out of what we have, when adding more isn't always an option, and for a lot of tasks is never really needed.
pthreads achieves, for those wishing to explore it, an API that does allow a user to multi-thread PHP applications. Its API is very much a work in progress, and is designated a beta level of stability and completeness.
It is common knowledge that some of the libraries PHP uses are not thread safe; it should be clear to the programmer that pthreads cannot change this, and does not attempt to try. However, any library that is thread safe is usable, as in any other thread-safe setup of the interpreter.
pthreads utilizes Posix Threads (even on Windows); what the programmer creates are real threads of execution, but for those threads to be useful, they must be aware of PHP - able to execute user code, share variables and allow a useful means of communication (synchronization). So every thread is created with an instance of the interpreter, but by design, its interpreter is isolated from all other instances of the interpreter - just like multi-threaded Server API environments. pthreads attempts to bridge the gap in a sane and safe way. Many of the concerns of the programmer of threads in C just aren't there for the programmer of pthreads; by design, pthreads is copy on read and copy on write (RAM is cheap), so no two instances ever manipulate the same physical data, but they can both affect data in another thread. The fact that PHP may use thread-unsafe features in its core programming is entirely irrelevant; user threads, and their operations, are completely safe.
Why copy on read and copy on write:
public function run() {
...
(1) $this->data = $data;
...
(2) $this->other = someOperation($this->data);
...
}
(3) echo preg_match($pattern, $replace, $thread->data);
(1) While read and write locks are held on the pthreads object data store, data is copied from its original location in memory to the object store. pthreads does not adjust the refcount of the variable; Zend is able to free the original data if there are no further references to it.
(2) The argument to someOperation references the object store; the original data stored, which is itself a copy of the result of (1), is copied again for the engine into a zval container. While this occurs a read lock is held on the object store; the lock is then released and the engine can execute the function. When the zval is created, it has a refcount of 0, enabling the engine to free the copy on completion of the operation, because no other references to it exist.
(3) The last argument to preg_match references the data store; a read lock is obtained, and the data set in (1) is copied to a zval, again with a refcount of 0. The lock is released, and the call to preg_match operates on a copy of data that is itself a copy of the original data.
Things to know:
The object store's hash table where data is stored, thread safe, is based on the TsHashTable shipped with PHP, by Zend.
The object store has a read and write lock; an additional access lock is provided for the TsHashTable such that, if required (and it is required - var_dump/print_r, direct access to properties, as the PHP engine wants to reference them), pthreads can manipulate the TsHashTable outside of the defined API.
The locks are only held while the copying operations occur, when the copies have been made the locks are released, in a sensible order.
This means:
When a write occurs, not only are a read and write lock held, but an additional access lock. The table itself is locked down; there is no possible way another context can lock, read, write or affect it.
When a read occurs, not only is the read lock held, but the additional access lock too; again, the table is locked down.
No two contexts can physically nor concurrently access the same data from the object store, but writes made in any context with a reference will affect the data read in any context with a reference.
This is shared-nothing architecture, and the only way to exist is to co-exist. Those a bit savvy will see that there's a lot of copying going on here, and they will wonder if that is a good thing. Quite a lot of copying goes on within a dynamic runtime; that's the dynamics of a dynamic language. pthreads is implemented at the level of the object, because good control can be gained over one object, but methods - the code the programmer executes - have another context, free of locking and copies: the local method scope. The object scope in the case of a pthreads object should be treated as a way to share data among contexts; that is its purpose. With this in mind you can adopt techniques to avoid locking the object store unless it's necessary, such as passing local-scope variables to other methods in a threaded object rather than having them copied from the object store upon execution.
Most of the libraries and extensions available for PHP are thin wrappers around third parties; PHP core functionality is, to a degree, the same thing. pthreads is not a thin wrapper around Posix Threads; it is a threading API based on Posix Threads. There is no point in implementing threads in PHP that its users do not understand or cannot use. There's no reason that a person with no knowledge of what a mutex is or does should not be able to take advantage of all that they have, both in terms of skill and resources. An object functions like an object, but wherever two contexts would otherwise collide, pthreads provides stability and safety.
Anyone who has worked in Java will see the similarities between a pthreads object and threading in Java; those same people will no doubt have seen an error called ConcurrentModificationException - as it sounds, an error raised by the Java runtime if two threads write the same physical data concurrently. I understand why it exists, but it baffles me that, with resources as cheap as they are, and with the runtime able to detect the concurrency at the exact and only time that safety could be achieved for the user, it chooses to throw a possibly fatal error at runtime rather than manage the execution and access to the data.
No such stupid errors will be emitted by pthreads; the API is written to make threading as stable and compatible as possible, I believe.
Multi-threading isn't like using a new database, close attention should be paid to every word in the manual and examples shipped with pthreads.
Lastly, from the PHP manual:
pthreads was, and is, an experiment with pretty good results. Any of its limitations or features may change at any time; that is the nature of experimentation. Its limitations - often imposed by the implementation - exist for good reason; the aim of pthreads is to provide a usable solution to multi-tasking in PHP at any level. In the environment in which pthreads executes, some restrictions and limitations are necessary in order to provide a stable environment.
Here is an example of what Wilco suggested:
$cmd = 'nohup nice -n 10 /usr/bin/php -c /path/to/php.ini -f /path/to/php/file.php action=generate var1_id=23 var2_id=35 gen_id=535 > /path/to/log/file.log & echo $!';
$pid = shell_exec($cmd);
Basically this executes the PHP script at the command line, but immediately returns the PID and then runs in the background. (The echo $! ensures nothing else is returned other than the PID.) This allows your PHP script to continue or quit if you want. When I have used this, I have redirected the user to another page, where every 5 to 60 seconds an AJAX call is made to check if the report is still running. (I have a table to store the gen_id and the user it's related to.) The check script runs the following:
exec('ps ' . $pid , $processState);
if (count($processState) < 2) {
// less than 2 rows in the ps, therefore report is complete
}
There is a short post on this technique here: http://nsaunders.wordpress.com/2007/01/12/running-a-background-process-in-php/
There is nothing available that I'm aware of. The next best thing would be to simply have one script execute another via CLI, but that's a bit rudimentary. Depending on what you are trying to do and how complex it is, this may or may not be an option.
In short: yes, there is multithreading in PHP, but you should use multiprocessing instead.
Background info: threads vs. processes
There is always a bit of confusion about the distinction between threads and processes, so I'll briefly describe both:
A thread is a sequence of commands that the CPU will process. The only data it consists of is a program counter. Each CPU core will only process one thread at a time but can switch between the execution of different ones via scheduling.
A process is a set of shared resources. That means it consists of a part of memory, variables, object instances, file handles, mutexes, database connections and so on. Each process also contains one or more threads. All threads of the same process share its resources, so you may use a variable in one thread that you created in another. If those threads are parts of two different processes, then they cannot access each other's resources directly. In this case you need inter-process communication through e.g. pipes, files, sockets...
Multiprocessing
You can achieve parallel computing by creating new processes (that also contain a new thread) with PHP. If your threads do not need much communication or synchronization, this is your choice, since the processes are isolated and cannot interfere with each other's work. Even if one crashes, that doesn't concern the others. If you do need a lot of communication, you should read on under "Multithreading" or - sadly - consider using another programming language, because inter-process communication and synchronization introduces a lot of complexity.
In php you have two ways to create a new process:
Let the OS do it for you: you can tell your operating system to create a new process and run a new (or the same) PHP script in it.
For Linux you can use the following, or consider Darryl Hein's answer:
$cmd = 'nice php script.php 2>&1 & echo $!';
pclose(popen($cmd, 'r'));
For Windows you may use this:
$cmd = 'start "processname" /MIN /belownormal cmd /c "script.php 2>&1"';
pclose(popen($cmd, 'r'));
Do it yourself with a fork: PHP also provides the possibility of forking through the function pcntl_fork(). A good tutorial on how to do this can be found here, but I strongly recommend not to use it, since fork is a crime against humanity and especially against OOP (a bare-bones sketch follows below anyway).
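For completeness, a minimal pcntl_fork() sketch looks roughly like this (CLI only, requires the pcntl extension; do_heavy_work() is a placeholder for your own code):

// Minimal fork sketch (CLI only; requires the pcntl extension).
$pid = pcntl_fork();
if ($pid === -1) {
    die('fork failed');
} elseif ($pid === 0) {
    // Child process: do the heavy work, then exit so it never falls through.
    do_heavy_work(); // placeholder
    exit(0);
} else {
    // Parent process: $pid is the child's PID; wait for it to finish.
    pcntl_waitpid($pid, $status);
}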
Multithreading
With multithreading all your threads share their resources so you can easily communicate between and synchronize them without a lot of overhead. On the other side you have to know what you are doing, since race conditions and deadlocks are easy to produce but very difficult to debug.
Standard PHP does not provide any multithreading, but there is an (experimental) extension that actually does - pthreads. Its API documentation even made it into php.net.
With it you can do some of the stuff you can do in real programming languages :-) like this:
class MyThread extends Thread {
    public function run() {
        // do something time consuming
    }
}

$t = new MyThread();
if ($t->start()) {
    while ($t->isRunning()) {
        echo ".";
        usleep(100);
    }
    $t->join();
}
For Linux there is an installation guide right here on Stack Overflow.
For Windows there is one now:
First you need the thread-safe version of PHP.
You need the pre-compiled versions of both pthreads and its PHP extension. They can be downloaded here. Make sure that you download the version that is compatible with your PHP version.
Copy php_pthreads.dll (from the zip you just downloaded) into your php extension folder ([phpDirectory]/ext).
Copy pthreadVC2.dll into [phpDirectory] (the root folder - not the extension folder).
Edit [phpDirectory]/php.ini and insert the following line
extension=php_pthreads.dll
Test it with the script above with some sleep or something right there where the comment is.
And now the big BUT: although this really works, PHP wasn't originally made for multithreading. There exists a thread-safe version of PHP, and as of v5.4 it seems to be nearly bug-free, but using PHP in a multi-threaded environment is still discouraged in the PHP manual (though maybe they just haven't updated the manual on this yet). A much bigger problem might be that a lot of common extensions are not thread-safe. So you might get threads with this PHP extension, but the functions you're depending on are still not thread-safe, so you will probably encounter race conditions, deadlocks and so on in code you did not write yourself...
You can use pcntl_fork() to achieve something similar to threads. Technically it's separate processes, so the communication between the two is not as simple as with threads, and I believe it will not work if PHP is called by Apache.
If anyone cares, I have revived php_threading (not the same as threads, but similar) and I actually have it to the point where it works (somewhat) well!
Project page
Download (for Windows PHP 5.3 VC9 TS)
Examples
README
pcntl_fork() is what you are searching for, but it's process forking, not threading.
So you will have the problem of data exchange. To solve it you can use PHP's semaphore functions (http://www.php.net/manual/de/ref.sem.php); message queues may be a bit easier to start with than shared memory segments.
Anyway, here is a strategy I am using in a web framework I am developing, which loads resource-intensive blocks of a web page (possibly involving external requests) in parallel:
I build a job queue to know what data I am waiting for, and then I fork off a process for every job. Once done, each one stores its data in the APC cache under a unique key the parent process can access. Once all the data is there, the parent continues.
I am using a simple usleep() to wait, because inter-process communication is not possible in Apache (children would lose the connection to their parents and become zombies...).
So this brings me to the last thing:
it's important to make every child kill itself! (A sketch of this strategy follows below.)
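A stripped-down sketch of that fork-and-poll strategy might look like this (the job list, key names and fetch_block() helper are invented; it assumes the pcntl extension and APC's user cache are available):

// Hypothetical sketch of the fork-and-poll strategy described above.
$jobs = array('block_news', 'block_weather', 'block_stocks');

foreach ($jobs as $job) {
    $pid = pcntl_fork();
    if ($pid === 0) {
        // Child: compute the block, publish it under a unique key, then self-terminate.
        apc_store('job_result_' . $job, fetch_block($job), 60); // fetch_block() is a placeholder
        exit(0); // every child must kill itself
    }
}

// Parent: poll the cache until every job has delivered its data.
$results = array();
while (count($results) < count($jobs)) {
    foreach ($jobs as $job) {
        if (!isset($results[$job]) && apc_exists('job_result_' . $job)) {
            $results[$job] = apc_fetch('job_result_' . $job);
        }
    }
    usleep(10000); // 10 ms between polls
}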
There are also classes that fork processes but keep the data; I didn't examine them, but Zend Framework has one, and such classes usually produce slow but reliable code.
You can find it here:
http://zendframework.com/manual/1.9/en/zendx.console.process.unix.overview.html
I think they use shm segments!
Well, last but not least, there is an error on that Zend page - a minor mistake in the example.
while ($process1->isRunning() && $process2->isRunning()) {
sleep(1);
}
should of course be:
while ($process1->isRunning() || $process2->isRunning()) {
sleep(1);
}
There is a Threading extension being activley developed based on PThreads that looks very promising at https://github.com/krakjoe/pthreads
Just an update: it seems that the PHP guys are working on supporting threads, and it's available now.
Here is the link to it:
http://php.net/manual/en/book.pthreads.php
I have a PHP threading class that's been running flawlessly in a production environment for over two years now.
EDIT: This is now available as a composer library and as part of my MVC framework, Hazaar MVC.
See: https://git.hazaarlabs.com/hazaar/hazaar-thread
I know this is a way old question, but you could look at http://phpthreadlib.sourceforge.net/
Bi-directional communication, support for Win32, and no extensions required.
Ever heard about appserver from techdivision?
It is written in PHP and works as an appserver managing multiple threads for high-traffic PHP applications. It is still in beta, but very promising.
There is the rather obscure, and soon to be deprecated, feature called ticks. The only thing I have ever used it for is to allow a script to capture SIGINT (Ctrl+C) and shut down gracefully.
Before you answer this I have never developed anything popular enough to attain high server loads. Treat me as (sigh) an alien that has just landed on the planet, albeit one that knows PHP and a few optimisation techniques.
I'm developing a tool in PHP that could attain quite a lot of users, if it works out right. However while I'm fully capable of developing the program I'm pretty much clueless when it comes to making something that can deal with huge traffic. So here's a few questions on it (feel free to turn this question into a resource thread as well).
Databases
At the moment I plan to use the MySQLi features in PHP5. However how should I setup the databases in relation to users and content? Do I actually need multiple databases? At the moment everything's jumbled into one database - although I've been considering spreading user data to one, actual content to another and finally core site content (template masters etc.) to another. My reasoning behind this is that sending queries to different databases will ease up the load on them as one database = 3 load sources. Also would this still be effective if they were all on the same server?
Caching
I have a template system that is used to build the pages and swap out variables. Master templates are stored in the database, and each time a template is called its cached copy (an HTML document) is used. At the moment I have two types of variable in these templates - a static var and a dynamic var. Static vars are usually things like page names or the name of the site - things that don't change often; dynamic vars are things that change on each page load.
My question on this:
Say I have comments on different articles. Which is a better solution: store the simple comment template and render comments (from a DB call) each time the page is loaded or store a cached copy of the comments page as a html page - each time a comment is added/edited/deleted the page is recached.
Finally
Does anyone have any tips/pointers for running a high load site on PHP. I'm pretty sure it's a workable language to use - Facebook and Yahoo! give it great precedence - but are there any experiences I should watch out for?
No two sites are alike. You really need to get a tool like jmeter and benchmark to see where your problem points will be. You can spend a lot of time guessing and improving, but you won't see real results until you measure and compare your changes.
For example, for many years, the MySQL query cache was the solution to all of our performance problems. If your site was slow, MySQL experts suggested turning the query cache on. It turns out that if you have a high write load, the cache is actually crippling. If you turned it on without testing, you'd never know.
And don't forget that you are never done scaling. A site that handles 10 req/s will need changes to support 1000 req/s. And if you're lucky enough to need to support 10,000 req/s, your architecture will probably look completely different as well.
Databases
Don't use MySQLi -- PDO is the 'modern' OO database access layer. The most important feature to use is placeholders in your queries. It's smart enough to use server side prepares and other optimizations for you as well.
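A quick sketch of what placeholders look like with PDO (connection details and the query are only examples):

// PDO with placeholders: the driver/server handles quoting and escaping for you.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'secret', array(
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
));

$authorId = 42; // example value
$stmt = $pdo->prepare('SELECT id, title FROM articles WHERE author_id = ? AND status = ?');
$stmt->execute(array($authorId, 'published'));
$articles = $stmt->fetchAll(PDO::FETCH_ASSOC);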
You probably don't want to break your database up at this point. If you do find that one database isn't cutting it, there are several techniques to scale up, depending on your app. Replicating to additional servers typically works well if you have more reads than writes. Sharding is a technique to split your data over many machines.
Caching
You probably don't want to cache in your database. The database is typically your bottleneck, so adding more IO's to it is typically a bad thing. There are several PHP caches out there that accomplish similar things like APC and Zend.
Measure your system with caching on and off. I bet your cache is heavier than serving the pages straight.
If it takes a long time to build your comments and article data from the db, integrate memcache into your system. You can cache the query results and store them in a memcached instance. It's important to remember that retrieving the data from memcache must be faster than assembling it from the database to see any benefit.
If your articles aren't dynamic, or you have simple dynamic changes after it's generated, consider writing out html or php to the disk. You could have an index.php page that looks on disk for the article, if it's there, it streams it to the client. If it isn't, it generates the article, writes it to the disk and sends it to the client. Deleting files from the disk would cause pages to be re-written. If a comment is added to an article, delete the cached copy -- it would be regenerated.
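A bare-bones version of that index.php idea could look like the following (paths and the generate_article() helper are hypothetical):

// Hypothetical sketch: serve a pre-rendered article from disk, or build and cache it.
$articleId = isset($_GET['id']) ? (int)$_GET['id'] : 0;
$cacheFile = '/var/cache/articles/' . $articleId . '.html';

if (is_file($cacheFile)) {
    // Cache hit: stream the stored HTML straight to the client.
    readfile($cacheFile);
} else {
    // Cache miss: build the page, store it, then send it.
    $html = generate_article($articleId); // placeholder for the expensive DB work
    file_put_contents($cacheFile, $html);
    echo $html;
}
// Adding/editing a comment would simply unlink($cacheFile) to force a rebuild.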
I'm a lead developer on a site with over 15M users. We have had very little scaling problems because we planned for it EARLY and scaled thoughtfully. Here are some of the strategies I can suggest from my experience.
SCHEMA
First off, denormalize your schemas. This means that rather than to have multiple relational tables, you should instead opt to have one big table. In general, joins are a waste of precious DB resources because doing multiple prepares and collation burns disk I/O's. Avoid them when you can.
The trade-off here is that you will be storing/pulling redundant data, but this is acceptable because data and intra-cage bandwidth is very cheap (bigger disks) whereas multiple prepare I/O's are orders of magnitude more expensive (more servers).
INDEXING
Make sure that your queries utilize at least one index. Beware though, that indexes will cost you if you write or update frequently. There are some experimental tricks to avoid this.
You can try adding additional columns that aren't indexed which run parallel to your columns that are indexed. Then you can have an offline process that writes the non-indexed columns over the indexed columns in batches. This way, you can better control when MySQL will need to recompute the index.
Avoid computed queries like the plague. If you must compute a query, try to do it once at write time.
CACHING
I highly recommend Memcached. It has been proven by the biggest players on the PHP stack (Facebook) and is very flexible. There are two methods to doing this, one is caching in your DB layer, the other is caching in your business logic layer.
The DB-layer option would mean caching the result of queries retrieved from the DB. You can hash your SQL query using md5() and use that as a lookup key before going to the database. The upside to this is that it is pretty easy to implement. The downside (depending on implementation) is that you lose flexibility, because you're treating all cached queries the same with regard to cache expiration.
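In code, that DB-layer approach boils down to something like this (a rough sketch; the connections and query shape are illustrative):

// Hypothetical DB-layer cache: the SQL text plus parameters become the cache key.
function cached_query(PDO $pdo, Memcached $mc, $sql, array $params = array(), $ttl = 300)
{
    $key = 'q_' . md5($sql . serialize($params));
    $rows = $mc->get($key);
    if ($rows === false) {
        $stmt = $pdo->prepare($sql);
        $stmt->execute($params);
        $rows = $stmt->fetchAll(PDO::FETCH_ASSOC);
        // Every query gets the same expiration policy; that's the flexibility trade-off mentioned above.
        $mc->set($key, $rows, $ttl);
    }
    return $rows;
}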
In the shop I work in, we use business layer caching, which means each concrete class in our system controls its own caching schema and cache timeouts. This has worked pretty well for us, but be aware that items retrieved from DB may not be the same as items from cache, so you will have to update cache and DB together.
DATA SHARDING
Replication only gets you so far. Sooner than you expect, your writes will become a bottleneck. To compensate, make sure to support data sharding as early as possible. You will likely want to shoot yourself later if you don't.
It is pretty simple to implement. Basically, you want to separate the key authority from the data storage. Use a global DB to store a mapping between primary keys and cluster ids. You query this mapping to get a cluster, and then query the cluster to get the data. You can cache the hell out of this lookup operation which will make it a negligible operation.
The downside to this is that it may be difficult to piece together data from multiple shards. But, you can engineer your way around that as well.
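A simplified sketch of that key-authority lookup (table, column and key names are invented; the connections are assumed to exist already):

// Hypothetical sharding lookup: a global DB maps primary keys to cluster ids.
function get_user($userId, PDO $globalDb, array $clusterDbs, Memcached $mc)
{
    // 1. Find which cluster holds this user (cache the mapping; it almost never changes).
    $cluster = $mc->get('user_cluster_' . $userId);
    if ($cluster === false) {
        $stmt = $globalDb->prepare('SELECT cluster_id FROM user_map WHERE user_id = ?');
        $stmt->execute(array($userId));
        $cluster = (int)$stmt->fetchColumn();
        $mc->set('user_cluster_' . $userId, $cluster, 86400);
    }

    // 2. Fetch the actual row from that cluster's database.
    $stmt = $clusterDbs[$cluster]->prepare('SELECT * FROM users WHERE user_id = ?');
    $stmt->execute(array($userId));
    return $stmt->fetch(PDO::FETCH_ASSOC);
}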
OFFLINE PROCESSING
Don't make the user wait for your backend if they don't have to. Build a job queue and move any processing that you can offline, doing it separate from the user's request.
I've worked on a few sites that get millions/hits/month backed by PHP & MySQL. Here are some basics:
Cache, cache, cache. Caching is one of the simplest and most effective ways to reduce load on your webserver and database. Cache page content, queries, expensive computation, anything that is I/O bound. Memcache is dead simple and effective.
Use multiple servers once you are maxed out. You can have multiple web servers and multiple database servers (with replication).
Reduce the overall number of requests to your webservers. This entails caching JS, CSS and images using expires headers. You can also move your static content to a CDN, which will speed up your users' experience.
Measure & benchmark. Run Nagios on your production machines and load test on your dev/qa server. You need to know when your server will catch on fire so you can prevent it.
I'd recommend reading Building Scalable Websites, it was written by one of the Flickr engineers and is a great reference.
Check out my blog post about scalability too, it has a lot of links to presentations about scaling with multiple languages and platforms:
http://www.ryandoherty.net/2008/07/13/unicorns-and-scalability/
Re: PDO / MySQLi / MySQLND
#gary
You cannot just say "don't use MySQLi", as they have different goals. PDO is almost like an abstraction layer (although it is not actually one) and is designed to make it easy to use multiple database products, whereas MySQLi is specific to MySQL connections. It is wrong to say that PDO is the modern access layer in the context of comparing it to MySQLi, because your statement implies that the progression has been mysql -> mysqli -> PDO, which is not the case.
The choice between MySQLi and PDO is simple - if you need to support multiple database products then you use PDO. If you're just using MySQL then you can choose between PDO and MySQLi.
So why would you choose MySQLi over PDO? See below...
#ross
You are correct about MySQLnd which is the newest MySQL core language level library, however it is not a replacement for MySQLi. MySQLi (as with PDO) remains the way you would interact with MySQL through your PHP code. Both of these use libmysql as the C client behind the PHP code. The problem is that libmysql is outside of the core PHP engine and that is where mysqlnd comes in i.e. it is a Native Driver which makes use of the core PHP internals to maximise efficiency, specifically where memory usage is concerned.
MySQLnd is being developed by MySQL themselves and has recently landed onto the PHP 5.3 branch which is in RC testing, ready for a release later this year. You will then be able to use MySQLnd with MySQLi...but not with PDO. This will give MySQLi a performance boost in many areas (not all) and will make it the best choice for MySQL interaction if you do not need the abstraction like capabilities of PDO.
That said, MySQLnd is now available in PHP 5.3 for PDO and so you can get the advantages of the performance enhancements from ND into PDO, however, PDO is still a generic database layer and so will be unlikely to be able to benefit as much from the enhancements in ND as MySQLi can.
Some useful benchmarks can be found here although they are from 2006. You also need to be aware of things like this option.
There are a lot of considerations that need to be taken into account when deciding between MySQLi and PDO. In reality, it is not going to matter until you get to ridiculously high request numbers, and in that case it makes more sense to be using an extension that has been specifically designed for MySQL rather than one which abstracts things and happens to provide a MySQL driver.
It is not a simple matter of which is best because each has advantages and disadvantages. You need to read the links I've provided and come up with your own decision, then test it and find out. I have used PDO in past projects and it is a good extension but my choice for pure performance would be MySQLi with the new MySQLND option compiled (when PHP 5.3 is released).
General
Do not try to optimize before you start to see real world load. You might guess right, but if you don't, you've wasted your time.
Use jmeter, xdebug or another tool to benchmark the site.
If load starts to be an issue, either object or data caching will likely be involved, so generally read up on caching options (memcached, MySQL caching options)
Code
Profile your code so that you know where the bottleneck is, and whether it's in code or the database
Databases
Use MySQLi if portability to other databases is not vital, PDO otherwise
If benchmarks reveal the database is the issue, check the queries before you start caching. Use EXPLAIN to see where your queries are slowing down (a short example follows this list).
After the queries are optimized and the database is cached in some way, you may want to use multiple databases. Either replicating to multiple servers or sharding (splitting the data over multiple databases/servers) may be appropriate, depending on the data, the queries, and the kind of read/write behavior.
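As a rough illustration of the EXPLAIN advice above, you can run it from PHP like any other query; the articles table and the existing PDO connection here are made-up examples:

```php
<?php
// Assumes an existing PDO connection $pdo and a hypothetical 'articles' table.
$rows = $pdo->query('EXPLAIN SELECT * FROM articles WHERE author_id = 7')
            ->fetchAll(PDO::FETCH_ASSOC);

foreach ($rows as $row) {
    // 'type' = ALL and a large 'rows' estimate usually point to a missing index.
    printf("table=%s type=%s key=%s rows=%s\n",
           $row['table'], $row['type'], $row['key'], $row['rows']);
}
```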
Caching
Plenty of writing has been done on caching code, objects, and data. Look up articles on APC, Zend Optimizer, memcached, QuickCache, JPCache. Do some of this before you really need to, and you'll be less concerned about starting off unoptimized.
APC and Zend Optimizer are opcode caches; they speed up PHP by avoiding the reparsing and recompilation of code. They are generally simple to install and worth setting up early.
Memcached is a generic cache that you can use to cache queries, PHP functions or objects, or entire pages (see the sketch after this list). Code must be specifically written to use it, which can be an involved process if there are no central points to handle creation, update and deletion of cached objects.
QuickCache and JPCache are file caches, otherwise similar to Memcached. The basic concept is simple, but also requires code and is easier with central points of creation, update and deletion.
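As a sketch of the memcached idea mentioned above, assuming the pecl/memcache extension is installed and using a made-up "popular articles" query:

```php
<?php
// Minimal read-through cache. The query, key and TTL are illustrative only.
$memcache = new Memcache();
$memcache->connect('127.0.0.1', 11211);

function get_popular_articles($memcache, $pdo) {
    $key  = 'popular_articles_v1';        // version the key so you can invalidate on deploy
    $data = $memcache->get($key);
    if ($data !== false) {
        return $data;                     // cache hit
    }
    // Cache miss: run the (hypothetical) expensive query and store the result.
    $data = $pdo->query('SELECT id, title FROM articles ORDER BY views DESC LIMIT 10')
                ->fetchAll(PDO::FETCH_ASSOC);
    $memcache->set($key, $data, 0, 300);  // cache for five minutes
    return $data;
}
```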
Miscellaneous
Consider alternative web servers for high load. Servers like lighttpd and nginx can handle large amounts of traffic in much less memory than Apache, if you can sacrifice Apache's power and flexibility (or if you just don't need those things, which often you don't).
Remember that hardware is surprisingly cheap these days, so be sure to cost out the effort to optimize a large block of code versus "let's buy a monster server."
Consider adding the "MySQL" and "scaling" tags to this question
APC is an absolute must. Not only does it make for a great caching system, but the gain from the auto-cached PHP files is a godsend. As for the multiple database idea, I don't think you would get much out of having different databases on the same server. It may give you a bit of a gain in speed during query time, but I doubt the effort it would take to deploy and maintain the code for all three while making sure they are in sync would be worth it.
I also highly recommend running Xdebug to find bottlenecks in your program. It made optimization a breeze for me.
Firstly, as I think Knuth said, "Premature optimization is the root of all evil". If you don't have to deal with these issues right now then don't; focus on delivering something that works correctly first. That being said, if the optimizations really can't wait:
Try profiling your database queries, figure out what's slow and what happens a lot, and come up with an optimization strategy from that.
I would investigate Memcached as it's what a lot of the higher load sites use for efficiently caching content of all types, and the PHP object interface to it is quite nice.
Splitting up databases among servers and using some sort of load balancing technique (e.g. generate a random number between 1 and the number of redundant database servers holding the necessary data, and use it to decide which server to connect to) can also be an excellent way to increase efficiency.
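A crude version of that random selection might look like the following; the hostnames and credentials are placeholders:

```php
<?php
// Pick one of several redundant read-only database servers at random.
// All of them are assumed to hold the same data (hostnames are made up).
$readServers = array('db1.example.com', 'db2.example.com', 'db3.example.com');
$host        = $readServers[array_rand($readServers)];

$pdo = new PDO("mysql:host=$host;dbname=mydb", 'reader', 'secret');
```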
These have all worked out pretty well in the past for some fairly high load sites. Hope this helps to get you started :-)
Profiling your app with something like Xdebug (as tj9991 recommended) is definitely going to be a must. It doesn't make a whole lot of sense to just go around optimizing things blindly. Xdebug will help you find the real bottlenecks in your code so you can spend your optimization time wisely and fix the chunks of code that are actually causing slowdowns.
If you're using Apache, another utility that can help in testing is Siege. It will help you anticipate how your server and application will react to high loads by really putting it through its paces.
Any kind of opcode cache for PHP (like APC or one of the many others) will help a lot as well.
I run a website with 7-8 million page views a month. Not terribly much, but enough that our server felt the load. The solution we chose was simple: Memcache at the database level. This solution works well if the database load is your main problem.
We started out using Memcache to cache entire objects and the database results that were most frequently used. It did work, but it also introduced bugs (we might have avoided some of those if we had been more careful).
So we changed our approach. We built a database wrapper (with the exact same methods as our old database, so it was easy to switch), and then we subclassed it to provide memcached database access methods.
Now all you have to do is decide whether a query can use cached (and possibly out-of-date) results or not. Most of the queries run by the users are now fetched directly from Memcache. The exceptions are updates and inserts, which for the main website only happen because of logging. This rather simple measure reduced our server load by about 80%.
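A sketch of the wrapper approach described above; the class and method names are invented, and a real implementation would mirror whatever interface the original database class exposes:

```php
<?php
// Plain wrapper with the same interface as the old database class.
class Database {
    protected $pdo;
    public function __construct(PDO $pdo) { $this->pdo = $pdo; }

    public function fetchAll($sql, array $params = array()) {
        $stmt = $this->pdo->prepare($sql);
        $stmt->execute($params);
        return $stmt->fetchAll(PDO::FETCH_ASSOC);
    }
}

// Subclass that adds an explicitly cached variant; callers decide per query
// whether possibly stale results are acceptable.
class CachedDatabase extends Database {
    protected $memcache;

    public function __construct(PDO $pdo, Memcache $memcache) {
        parent::__construct($pdo);
        $this->memcache = $memcache;
    }

    public function fetchAllCached($sql, array $params = array(), $ttl = 300) {
        $key  = 'sql_' . md5($sql . serialize($params));
        $rows = $this->memcache->get($key);
        if ($rows === false) {
            $rows = parent::fetchAll($sql, $params);     // cache miss: hit the database
            $this->memcache->set($key, $rows, 0, $ttl);
        }
        return $rows;
    }
}
```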
For what it's worth, caching is DIRT SIMPLE in PHP even without an extension/helper package like memcached.
All you need to do is create an output buffer using ob_start().
Create a global cache function that looks for a cached version of the page; if one exists, serve it and end. If there is no cached copy, call ob_start() and pass a store function as the callback.
The script will then continue processing as normal. When it reaches the matching ob_end_flush(), PHP calls the function you specified. At that point, you just take the contents of the output buffer, drop them in a file, save the file, and end.
Add in some expiration/garbage collection.
And many people don't realize you can nest ob_start()/ob_end_flush() calls. So if you're already using an output buffer to, say, parse in advertisements or do syntax highlighting or whatever, you can just nest another ob_start/ob_end_flush call.
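A bare-bones version of that pattern might look like this; the cache path and the ten-minute lifetime are arbitrary choices for illustration:

```php
<?php
// Very small file-based page cache built on output buffering.
$GLOBALS['cache_file'] = '/tmp/cache_' . md5($_SERVER['REQUEST_URI']) . '.html';
$cache_life = 600;   // ten minutes, arbitrary

// Serve the cached copy if it is fresh enough, and stop right there.
if (file_exists($GLOBALS['cache_file'])
    && time() - filemtime($GLOBALS['cache_file']) < $cache_life) {
    readfile($GLOBALS['cache_file']);
    exit;
}

// Cache miss: buffer the page and store whatever gets generated.
function store_page_in_cache($html) {
    file_put_contents($GLOBALS['cache_file'], $html);
    return $html;             // still send the page to the browser
}
ob_start('store_page_in_cache');

// ... normal page generation happens here ...

ob_end_flush();               // flushing the buffer calls the callback above
```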
Thanks for the advice on PHP's caching extensions - could you explain reasons for using one over another? I've heard great things about memcached through IRC but have never heard of APC - what are your opinions on them? I assume using multiple caching systems is pretty counter-effective.
Actually, many do use APC and memcached together...
It looks like I was wrong. MySQLi is still being developed. But according to the article, PDO_MySQL is now being contributed to by the MySQL team. From the article:
The MySQL Improved Extension - mysqli - is the flagship. It supports all features of the MySQL Server including Charsets, Prepared Statements and Stored Procedures. The driver offers a hybrid API: you can use a procedural or object-oriented programming style based on your preference. mysqli comes with PHP 5 and up. Note that the End of life for PHP 4 is 2008-08-08.
The PHP Data Objects (PDO) are a database access abstraction layer. PDO allows you to use the same API calls for various databases. PDO does not offer any degree of SQL abstraction. PDO_MYSQL is a MySQL driver for PDO. PDO_MYSQL comes with PHP 5. As of PHP 5.3 MySQL developers actively contribute to it. The PDO benefit of a unified API comes at the price that MySQL specific features, for example multiple statements, are not fully supported through the unified API.
Please stop using the first MySQL driver for PHP ever published: ext/mysql. Since the introduction of the MySQL Improved Extension - mysqli - in 2004 with PHP 5 there is no reason to still use the oldest driver around. ext/mysql does not support Charsets, Prepared Statements and Stored Procedures. It is limited to the feature set of MySQL 4.0. Note that the Extended Support for MySQL 4.0 ends at 2008-12-31. Don't limit yourself to the feature set of such old software! Upgrade to mysqli, see also Converting_to_MySQLi. mysql is in maintenance only mode from our point of view.
To me, it seems the article is biased towards MySQLi. I suppose I'm biased towards PDO.
I really like PDO over MySQLi. It's straightforward to me. The API is a lot closer to other languages I've programmed in, and OO database interfaces seem to work better for me.
I haven't come across any specific MySQL features that weren't available through PDO. I would be surprised if I ever did.
PDO is also very slow and its API is pretty complicated. No one in their right mind should use it if portability is not a concern. And let's face it, in 99% of all webapps it is not. You just stick with MySQL or PostgreSQL, or whatever it is you are working with.
As for the PHP question and what to take into account: I think premature optimization is the root of all evil. ;) Get your application done first, try to keep the code clean, do a little documentation, and write unit tests. With all of the above you will have no trouble refactoring when the time comes. But first you want to finish and push it out to see how people react to it.
Sure, PDO is nice, but there has been some controversy about its performance versus mysql and mysqli, although it seems fixed now.
You should use PDO if you envision portability, but if not, MySQLi should be the way to go. It has an OO interface, prepared statements, and most of what PDO offers (except, well, portability).
Plus, if performance is really needed, look forward to the (native MySQL) MySQLnd driver in PHP 5.3, which will be much more tightly integrated with PHP, with better performance and improved memory usage (and statistics for performance tuning).
Memcache is nice if you have clustered servers (and YouTube-like load), but I'd try out APC first too.
A lot of good answers were given already, but I would like to point you to an alternate opcode cache called XCache. It is created by a lighty contributor.
Also, if you may need load balancing your database server in future, MySQL Proxy could very well help you to achieve this.
Both of those tools should plug into an existing application quite easily, so this optimization can be done when you need it, without too much hassle.
First question is how big do you really expect it to be? And how much do you plan on investing in your infrastructure. Since you feel the need to ask the question here, I'm guessing that you expect to start small on a limited budget.
Performance is irrelevant if the site is not available, and for availability you need horizontal scaling. The minimum you can sensibly get away with is 2 servers, both running Apache, PHP and MySQL. Set up one DBMS as a slave to the other. Do all the writes on the master, and all the reads on the local database (whichever that is) - unless for some reason you need to read back data you've just written (in which case use the master). Make sure you've got the machinery in place to automatically promote the slave and fence the master. Use round-robin DNS for the webserver addresses to give more affinity to the slave node.
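A minimal way to express that read/write split in application code, with placeholder hostnames, credentials, and table names:

```php
<?php
// Two connections: every write goes to the master, ordinary reads go to the
// local slave. Hostnames, credentials and the comments table are made up.
$master = new PDO('mysql:host=db-master;dbname=app', 'app', 'secret');
$slave  = new PDO('mysql:host=localhost;dbname=app', 'app', 'secret');

// Writes always hit the master.
$stmt = $master->prepare('INSERT INTO comments (post_id, body) VALUES (?, ?)');
$stmt->execute(array(123, 'Nice post'));

// Reads use the slave; read from the master only when you must immediately
// see data you have just written (replication lag).
$stmt = $slave->prepare('SELECT body FROM comments WHERE post_id = ?');
$stmt->execute(array(123));
$comments = $stmt->fetchAll(PDO::FETCH_COLUMN);
```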
Partitioning your data across different database nodes at this stage is a very bad idea - however you might want to consider splitting it across different databases on the same server (which will facilitate partitioning across nodes when you overtake Facebook).
Do make sure you've got the monitoring and data analysis tools in place to measure your site's performance and identify bottlenecks. Most performance problems can be fixed by writing better SQL / fixing the database schema.
Keeping your template cache in the database is a dumb idea - the database should be a central common repository for structured data. Keep your template cache on the local filesystem of your webservers - it will be available faster and won't slow down your database access.
Do use a op-code cache.
Spend plenty of time studying your site and its logs to understand why it's going so slow.
Push as much caching as possible onto the client (see the sketch after these tips).
Use mod_gzip to compress everything you can.
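To make the client-side caching and compression tips concrete, here is a small sketch; the one-day lifetime is an arbitrary example, and ob_gzhandler is only a PHP-level fallback if mod_gzip/mod_deflate is not available:

```php
<?php
// Tell browsers and proxies they may reuse this response for a day
// (the lifetime is an arbitrary example; tune it per resource).
$maxAge = 86400;
header('Cache-Control: public, max-age=' . $maxAge);
header('Expires: ' . gmdate('D, d M Y H:i:s', time() + $maxAge) . ' GMT');

// If mod_gzip/mod_deflate is not available, PHP can compress the output itself.
ob_start('ob_gzhandler');

echo 'page content...';
```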
C.
My first piece of advice is to think about this issue and keep it in mind when designing the site, but don't go overboard. It's often difficult to predict the success of a new site, and I think your time will be better spent getting it finished early and optimising it later.
In general, Simple is fast.
Templates slow you down. Databases slow you down. Complex libraries slow you down. Layer templates over each other, retrieve them from databases, and parse them in a complex library, and the delays multiply.
Once you have the basic site up and running, do tests to show you where to spend your efforts. It's difficult to guess where to target. Often, to speed things up you will have to unravel the complexity of the code, which makes it larger and harder to maintain, so you only want to do it where necessary.
In my experience, establishing the database connection is relatively expensive. If you can get away with it, don't connect to the database for general visitors on the most trafficked pages, like the front page of the site. Creating multiple database connections is madness with very little benefit.
#Gary
Don't use MySQLi -- PDO is the 'modern' OO database access layer. The most important feature to use is placeholders in your queries. It's smart enough to use server-side prepares and other optimizations for you as well.
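To make the placeholder point concrete, a small sketch using named placeholders; the table and column names are invented:

```php
<?php
// Named placeholders keep the query and the data separate; PDO (or the server)
// handles quoting and escaping. The posts table is made up for illustration.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'app', 'secret');

$stmt = $pdo->prepare(
    'SELECT id, title FROM posts WHERE author = :author AND created > :since'
);
$stmt->execute(array(
    ':author' => $_GET['author'],
    ':since'  => '2008-01-01',
));

foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $post) {
    echo $post['id'], ': ', $post['title'], "\n";
}
```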
I'm looking over PDO at the moment and it looks like you're right - however, I know that MySQL are developing the MySQLnd extension for PHP - I think to succeed either mysql or MySQLi - what do you think about that?
#Ryan, Eric, tj9991
I will definitely be sorting out some profiling testers - thank you very much for your recommendations on those.
I don't see myself switching from MySQL anytime soon - so I guess I don't need the abstraction capabilities of PDO. Thanks for those articles DavidM, they've helped me a lot.
Look into mod_cache, an output cache for the Apache web server, similar to the output caching in ASP.NET.
Yes, I can see that it's still experimental but it will be final someday.
I can't believe no one has already mentioned this: Modularisation and Abstraction. If you think your site is going to have to grow to lots of machines, you must design it so it can! That means stupid things like not assuming the database is on localhost. It also means things that are going to be a bother at first, like writing a database abstraction layer (like PDO, but much, much lighter because it only does what you need it to do).
And it means things like working with a framework. You will need layers to your code so that you can later gain performance by refactoring the data-abstraction layer, for example, by teaching it that some objects are in a different database -- and the code doesn't have to know or care.
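One way to sketch that kind of thin layer; the DataStore class and its mapping are invented for illustration, the point being that calling code never knows which server an object lives on:

```php
<?php
// A deliberately thin data-access layer: the rest of the code asks for objects
// by name and never knows which server (or how many) sits behind it.
class DataStore {
    private $connections = array();
    private $map;

    public function __construct(array $map) {
        // $map says which logical object type lives behind which DSN, e.g.
        // array('user' => 'mysql:host=db1;dbname=users',
        //       'post' => 'mysql:host=db2;dbname=posts')
        $this->map = $map;
    }

    private function connectionFor($type) {
        $dsn = $this->map[$type];
        if (!isset($this->connections[$dsn])) {
            $this->connections[$dsn] = new PDO($dsn, 'app', 'secret');
        }
        return $this->connections[$dsn];
    }

    public function find($type, $id) {
        $pdo  = $this->connectionFor($type);
        // Naive table naming ("{$type}s") purely to keep the sketch short.
        $stmt = $pdo->prepare("SELECT * FROM {$type}s WHERE id = ?");
        $stmt->execute(array($id));
        return $stmt->fetch(PDO::FETCH_ASSOC);
    }
}

// Later, moving 'post' objects to another server is a one-line config change.
```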
Finally, be careful of memory-intensive operations, for example, unnecessary string copying. If you can keep PHP's memory usage down, then you will get more performance out of your webserver and this is something that will scale when you go to a load-balanced solution.
If you are working with large amounts of data, and caching isn't cutting it, look into Sphinx. We've had great results using SphinxSearch not only for better text searching, but also as a data retrieval replacement for MySQL when dealing with larger tables. If you use SphinxSE (the MySQL plugin), it surpassed the performance gains we had from caching several times over, and application implementation is a cinch.
The points made about cache are spot-on; it is the least complicated and most important part of building an efficient application. I'd like to add that while memcached is great, APC is about five times faster if your application lives on a single server.
The "Cache Performance Comparison" post at the MySQL performance blog has some interesting benchmarks on the subject - http://www.mysqlperformanceblog.com/2006/08/09/cache-performance-comparison/.