Question of PHP cache vs compile - php

From my understanding, if you use a PHP caching program like APC, eAccelerator, etc., then opcodes will be stored in memory for faster execution upon subsequent requests. My question is, why wouldn't it ALWAYS be better/faster to compile your scripts, assuming you're using a compiler like phc or even HPHP (although I know they have issues with dynamic constructs)? Why bother storing opcodes that have to be re-read by the Zend Engine, which uses C functions to execute them, when you can just compile and skip that step?

You cannot simply compile to C and have your PHP script execute the same way. HPHP does real compilation, but it doesn't support the full set of PHP features.
Other compilers actually just embed a PHP interpreter in the binary, so you aren't really compiling the code anyway.
PHP is not meant to be compiled. Opcode caching is very fast and good enough for 99% of applications out there. If you have Facebook-level traffic, and you have already optimized your back-end DB, compilation might be the only way to increase performance.
PHP is not a thin layer over the standard C library.

If PHP didn't have eval(), it probably would be possible to do a straight PHP->compiled binary translation with (relative) ease. But since PHP can itself dynamically build/execute scripts on the fly via eval(), it's not possible to do a full-on binary. Any binary would necessarily have to contain the entirety of PHP because the compiler would have no idea what your dynamic code could do. You'd go from a small 1 or 2k script into a massive multi-megabyte binary.
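To make the eval() problem concrete, here is a minimal sketch (the variable names and data are illustrative only): the code exists only as a string until runtime, so an ahead-of-time compiler has no way of knowing what it will do and would have to bundle the whole PHP engine just in case.

<?php
// The snippet below is assembled as a string and only becomes code at
// runtime, which is exactly what a static PHP-to-binary compiler cannot
// anticipate.
$column = 'name';                                    // could come from config, a DB, user input...
$code   = 'return strtoupper($row["' . $column . '"]);';
$row    = array('name' => 'alice', 'age' => 31);
$upper  = eval($code);                               // source is compiled and executed on the fly
echo $upper, "\n";                                   // prints "ALICE"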

Related

Why does PHP use opcode caches while Java compiles to bytecode files?

From my point of view, both PHP and Java have a similar structure. At first you write some high-level code, which then must be translated into a simpler code format to be executed by a VM. One difference is that PHP works directly from the source code files, while Java stores the bytecode in .class files, from where the VM can load them.
Nowadays the requirements for speedy PHP execution are growing, which leads people to believe that it would be better to work directly with the opcodes and not go through the compiling step each time a user hits a file.
The solution seems to be a raft of so-called accelerators, which basically store the compiled results in a cache and then use the cached opcodes instead of compiling again.
Another approach, taken by Facebook, is to compile the PHP code completely to a different language.
So my question is, why is nobody in the PHP world doing what Java does? Are there some dynamic elements that really need to be recompiled each time, or something like that? Otherwise it would be smarter to compile everything when the code goes into production and then just work with that.
The most important difference is that the JVM has an explicit specification that covers the bytecode completely. That makes bytecode files portable and useful for more than just execution by a specific JVM implementation.
PHP doesn't even have a language specification. PHP opcodes are an implementation detail of a specific PHP engine, so you can't really do anything interesting with them and there's little point in making them more visible.
PHP opcodes are not the same as Java classfiles. Java classfiles are well specified, and are portable between machines. PHP opcodes are not portable in any way. They have memory addresses baked into them, for example. They are strictly an implementation detail of the PHP interpreter, and shouldn't be considered anything like Java bytecode.
Does it have to be this way? No, probably not. But the PHP source code is a mess, and there is neither the desire, nor the political will in the PHP internals community to make this happen. I think there was talk of baking an opcode cache into PHP 6, but PHP 6 died, and I don't know the status of that idea.
Reference: I wrote phc so I was pretty knee deep in PHP implementation/compilation for a few years.
It's not quite true that nobody in the PHP world is doing what Java does. Projects such as Alexey Zakhlestin's appserver provide a degree of persistence more akin to a Java servlet container (though his inspiration is more Ruby's Rack and Python's WSGI than Java).
PHP does not use a standard mechanism for opcodes. I wish it had either stuck to a stack VM (Python, Java) or a register VM (x86, Perl 6, etc.). But it uses something absolutely homegrown, and therein lies the rub.
It uses a pointer-connected list in memory, which results in each opcode having an ->op1, ->op2 and ->result. Each of those is either a constant or an entry in a temp table, and so on. These pointers cannot be serialized in any sane fashion.
Now, people have accomplished this using tools like pecl/bcompiler, which does dump the stream to disk.
But classes make this even more complicated, because there are potential code fragments like:
if ($condition)
{
    class XYZ { }
}
else
{
    class XYZ { }
}

class ABC extends XYZ { }
This means that a large number of decisions about classes & functions can only be made at runtime - something like Java would choke on two classes with the same name defined conditionally at runtime. Basically, APC's inheritance & class caching code is perhaps the most complicated & bug-prone part of the codebase. Whenever a class is cached, all parent-inherited members have to be scrubbed out before it can be saved to the opcode cache.
The pointer problem is not insurmountable. There is an apc_bindump which I have never bothered to fix up, to load entire cache entries off disk directly whenever a restart is done. But it's painful to debug all of that to get something that still needs to locate all the system pointers - the Apache case is the easy one, because all PHP processes have the same system pointers thanks to the fork behaviour. The old FastCGI versions were slower because they used to fork first & init PHP later - php-fpm fixed that by doing it the other way around.
But eventually, what's really missing in PHP is the will to invent a bytecode format, throw away the current engine & all modules, and rewrite it using a stack VM & build a JIT. I wish I had the time - the FB guys are almost there with their HipHop HHVM, which sacrifices eval() for faster performance - a fair sacrifice :)
PS: I'm the guy who can't find time to update APC for 5.4 properly
I think all of you are misinformed. HHVM is not a compiler to another language; it is a virtual machine itself. The confusion is because Facebook used to compile to C++, but that approach was too slow for the developers' requirements (ten minutes of compiling just to test some tiny things).

Is there a point to minifying PHP?

I know you can minify PHP, but I'm wondering if there is any point. PHP is an interpreted language so will run a little slower than a compiled language. My question is: would clients see a visible speed improvement in page loads and such if I were to minify my PHP?
Also, is there a way to compile PHP or something similar?
PHP is compiled into bytecode, which is then interpreted on top of something resembling a VM. Many other scripting languages follow the same general process, including Perl and Ruby. It's not really a traditional interpreted language like, say, BASIC.
There would be no effective speed increase if you attempted to "minify" the source. You would get a major increase by using a bytecode cache like APC.
Facebook introduced a compiler named HipHop that transforms PHP source into C++ code. Rasmus Lerdorf, one of the big PHP guys, did a presentation for Digg earlier this year that covers the performance improvements given by HipHop. In short, it's not much faster than optimizing code and using a bytecode cache. HipHop is overkill for the majority of users.
Facebook also recently unveiled HHVM, a new virtual machine based on their work making HipHop. It's still rather new and it's not clear if it will provide a major performance boost to the general public.
Just to make sure it's stated expressly, please read that presentation in full. It points out numerous ways to benchmark and profile code and identify bottlenecks using tools like xdebug and xhprof (the latter also from Facebook).
2021 Update
HHVM diverged away from vanilla PHP a couple versions ago. PHP 7 and 8 bring a whole bunch of amazing performance improvements that have pretty much closed the gap. You now no longer need to do weird things to get better performance out of PHP!
Minifying PHP source code continues to be useless for performance reasons.
Forgo the idea of minifying PHP in favor of using an opcode cache like PHP Accelerator or APC.
Or something else like memcached (though that's a data cache for your application, not an opcode cache).
Yes, there is one (non-technical) point.
Your host can read your code on their server. If you minify and uglify it, it becomes harder for prying eyes to steal your ideas.
So one reason for minifying and uglifying PHP may be protection against snooping. I think uglifying code could be one step in an automated deployment.
With some rewriting (shorter variable names) you could save a few bytes of memory, but that's also seldom significant.
However, I do design some of my applications in a way that allows include scripts to be concatenated together. With php -w they can be compacted significantly, adding a little speed gain for script startup. On an opcode-cache-enabled server, however, this only saves a few file mtime checks.
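For instance, a minimal sketch of that compaction step using php_strip_whitespace(), which performs the same stripping as php -w on the command line (the file names here are placeholders):

<?php
// Strip comments and whitespace from an include script, without executing it,
// and write the compacted copy that production will load.
$compacted = php_strip_whitespace('includes/functions.php');
file_put_contents('build/functions.php', $compacted);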
This is less an answer than an advertisement. I've been working on a PHP extension that translates Zend opcodes to run on a VM with static typing. It doesn't accelerate arbitrary PHP code. It does allow you to write code that runs way faster than what regular PHP allows. The key here is static typing. On a modern CPU, a dynamic language eats branch-misprediction penalties left and right. The fact that PHP arrays are hash tables also imposes a high cost: lots of branch mispredictions, inefficient use of cache, poor memory prefetching, and no SIMD optimization whatsoever. Branch mispredictions and cache misses in particular are the Achilles' heel of today's processors. My little VM sidesteps those problems by using static types and C arrays instead of hash tables. The result ends up running roughly ten times faster. This is using bytecode interpretation. The extension can optionally compile a function through gcc. In that case, you get two to five times more speed.
Here's the link for anyone interested:
https://github.com/chung-leong/qb/wiki
Again, the extension is not a general PHP accelerator. You have to write code specific for it.
There are PHP compilers... see this previous question for a list; but (unless you're the size of Facebook or are targeting your application to run client-side) they're generally a lot more trouble than they're worth.
Simple opcode caching will give you more benefit for the effort involved. Or profile your code to identify the bottlenecks, and then optimise it.
You don't need to minify PHP.
In order to get better performance, install an opcode cache; but the ideal solution would be to upgrade your PHP to version 5.5 or above, because the newer versions ship with an opcode cache by default (Zend OPcache, formerly Zend Optimizer+) that performs better than the other ones: http://massivescale.blogspot.com/2013/06/php-55-zend-optimiser-opcache-vs-xcache.html.
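As a quick sanity check after upgrading (just a sketch), you can confirm the bundled cache is actually loaded and enabled:

<?php
// opcache_get_status() is provided by the Zend OPcache extension; it returns
// false when the cache is not enabled for the current SAPI.
if (function_exists('opcache_get_status') && opcache_get_status() !== false) {
    echo "Zend OPcache is enabled\n";
} else {
    echo "Zend OPcache is not enabled - check opcache.enable in php.ini\n";
}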
The "point" is to make the file smaller, because smaller files load faster than bigger files. Also, removing whitespace will make parsing a tiny bit faster since those characters don't need to be parsed out.
Will it be noticeable? Almost never, unless the file is huge and there's a big difference in size.

Use APC to cache source files in PHP, does it work?

I was browsing through the docs of APC (Alternative PHP Cache) and I've seen that it has a function called apc_compile_file. The docs say that this function:
Stores a file in the bytecode cache, bypassing all filters.
Is this like HipHop's idea of storing PHP code in a more optimized form? If it's not, can someone educate me, because I'm kind of lost here. If it is indeed like that, then why is APC, which is older than HipHop, not getting all the buzz that HipHop gets?
Best regards!
The two are very, very different.
APC isn't a bytecode optimizer, just a bytecode cache. It saves the need for the PHP script to be parsed (or even read from the .php file on disk) on subsequent accesses, but it's still being executed as PHP bytecode.
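For example (a sketch, with a placeholder path), the apc_compile_file() function from the question is typically used to warm that cache at deploy time, so even the very first request skips the parse/compile step:

<?php
// Pre-compile every script in the application directory and store the
// resulting opcodes in APC's bytecode cache, bypassing filters.
foreach (glob('/var/www/myapp/*.php') as $file) {
    apc_compile_file($file);
}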
HipHop doesn't just optimize PHP code, it transforms it into compilable C++ code, then compiles that into a native executable on the server. By its nature as compiled code, it then runs significantly faster than any scripted language.

What makes PHP slower than Java or C#?

This is something I've always wondered: Why is PHP slower than Java or C#, if all 3 of these languages get compiled down to bytecode and then executed from there? I know that normally PHP recompiles each file with each request, but even when you bring APC (a bytecode cache) into the picture, the performance is nowhere near that of Java or C# (although APC greatly improves it).
Edit:
I'm not even talking about these languages on the web level. I am talking about the comparison of them when they're number crunching. Not even including startup time or anything like that.
Also, I am not making some kind of decision based on the replies here. PHP is my language of choice; I was simply curious about its design.
One reason is the lack of a JIT compiler in PHP, as others have mentioned.
Another big reason is PHP's dynamic typing. A dynamically typed language is always going to be slower than a statically typed language, because variable types are checked at run-time instead of compile-time. As a result, statically typed languages like C# and Java are going to be significantly faster at run-time, though they typically have to be compiled ahead of time. A JIT compiler makes this less of an issue for dynamically typed languages, but alas, PHP does not have one built-in. (Edit: PHP 8 will come with a built-in JIT compiler.)
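As a tiny illustration of that point (the function and values here are made up), the engine cannot know the types of $a and $b until the call actually happens, so every operation has to check and possibly convert types at runtime:

<?php
// The same function is called with different types; the type checks and
// conversions happen on every call, at runtime.
function add($a, $b) {
    return $a + $b;
}

echo add(1, 2), "\n";         // 3   (integer addition)
echo add(1.5, "2.5"), "\n";   // 4   (the string is converted to a float at runtime)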
I'm guessing you're comparing apples and oranges a little bit here - assuming that you are using all these languages to create web applications, there is quite a bit more to it than just the language. (And a lot of the time it is the database that is slowing you down ;-)
I would never suggest choosing one of these languages over the other on the basis of a speed argument.
Both Java and C# have JIT compilers, which take the bytecode and compile into true machine code. The act of compiling it can take time, hence C# and Java can suffer from slower startup times, but once the code is JIT compiled, its performance is in the same ballpark as any "truly compiled" language like C++.
The biggest single reason is that Java's HotSpot JVM and C#'s CLR both use Just-In-Time (JIT) compilation. JIT compilation compiles the bytecodes down to native code that runs directly on the processor.
Also, I think Java bytecode and CIL are lower-level than PHP's internal bytecode, which might make a lot of JIT optimizations easier and more effective.
A wild guess might be that Java depends on some kind of "application" server, while PHP doesn't -- which means a new environment has to be created each time a PHP page is called.
(This was especially true when PHP was/is used as a CGI, and not as an Apache module or via FastCGI)
Another idea might be that C# and Java compilers can do some heavy optimisations at compile time -- on the other hand, as PHP scripts are compiled (at least, if you don't "cheat" with an opcode cache) each time a page is called, the compilation phase has to be really quick, which means it's not possible to spend much time optimizing.
Still: each version of PHP generally comes with some performance improvements; for instance, you can gain between 15% and 25% of CPU when switching from PHP 5.2 to 5.3.
For instance, take a look at those benchmarks :
Benchmark of PHP Branches 3.0 through 5.3-CVS
Performance PHP 5.2 vs PHP 5.3 - huge gain
Bench PHP 5.2 vs PHP 5.3 -- disclaimer: it's in French, and I'm the one who did it.
One important thing, also, is that PHP is quite easy to scale: just add a couple of web servers, and voilà!
The problem you often meet when going from one server to several is with sessions -- store those in a DB or memcached (very easy), and problem solved!
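A minimal sketch of the memcached-session idea, assuming the PECL memcached extension is installed (the host and port are placeholders):

<?php
// Store sessions in a shared memcached instance instead of the local
// filesystem, so any web server in the pool can read them.
ini_set('session.save_handler', 'memcached');
ini_set('session.save_path', '10.0.0.5:11211');

session_start();
$_SESSION['user_id'] = 42;   // now visible from every server behind the load balancer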
As a side note: I would not recommend choosing a technology because of a couple of percent difference in speed on some benchmark: there are far more important factors, like how well your team knows each technology -- or even the algorithms you are going to use!
There is no way an interpreted language can be faster than a compiled or even a JIT-compiled language outside of trivial cases.
Unless your test program consists of printing out "Hello World", if you are concerned about speed, stick with C# or Java.
Depends on what you want to do. In some cases, PHP is definitely faster. PHP is (pretty) good at file manipulation and other basic stuff (also XML stuff). Java or C# might be slower in those cases (though I didn't benchmark).
Also, the PHP output (HTML or whatever) needs to be downloaded to the browser, which also consumes time.
Also, the speed of Java / C# depends very much on the machine it runs on (which could be many different machines). Java / C# could be slow on your computer, while PHP just runs on the one server from which it is served and is always as fast as that server is (except for download times, etc.).
I don't think they are comparable in a general manner. I think you need to take a task which could be accomplished with each of those three programming languages, and then compare that. That is basically what you should always do when choosing a programming language: find the one that fits the task. Don't shape the task until it fits the programming language.
According to Wikipedia, PHP uses the Zend Engine, which does not have a JIT.

Simple Facebook HipHop Performance Question

If I write a hello world app using a PHP web framework such as CodeIgniter and then I compile it and run it using HipHop. Will it run faster than if I write the same hello world app in django or rails?
HipHop converts PHP code into C++ code, which needs to be compiled to run. Since pre-compiled code runs faster and uses less memory than scripting languages like Python/PHP, it will probably run faster in the example you have given.
However, HipHop does not convert all code. A lot of code in PHP is dynamic and cannot be converted to C++, which means you will have to write your code with this in mind. Whether CodeIgniter can even be compiled using HipHop is another question.
Terry Chay wrote a big article about HipHop, covering when to use it, its limitations and its future. I would recommend reading it, as it will most likely answer most of your questions and give you some insight into how it works :)
http://terrychay.com/article/hiphop-for-faster-php.shtml
At that point the run time is inconsequential. HipHop was designed for scaling... meaning billions of requests. There's absolutely no need to use something like HipHop for even a medium size website.
But more to the point of your question... I don't think there have been comparison charts available for us to see, but I doubt the run time would be faster at that level.
I don't know about Django or Rails, so this is a bit off-topic.
With plain PHP, the request goes to Apache, then to mod_php. mod_php loads the helloworld.php script from disk, parses & tokenizes it, compiles it to bytecode, then interprets the bytecode, passes the output back to Apache, and Apache serves it to the user.
With PHP and an opcode cache, the first run is about the same as with plain PHP, but the compiled bytecode is stored in RAM. Then, for the second request: it goes to Apache, Apache to mod_php, APC loads the bytecode from RAM, interprets it, passes the output back to Apache, and back to the user.
With HipHop there is no Apache, just HipHop itself, and there's no interpreter, so the request goes directly to HipHop and back to the user. So yes, it's faster, for several reasons:
Faster startup, because there's no bytecode compilation needed - the program is already machine code. So no per-request compilation and no source file reading.
No interpreter. Machine code is not necessarily faster - that depends on the quality of the source translation (HipHop) and the quality of the static compiler (g++). HipHop-translated code is not fast compared to hand-written C code, because there's some overhead from type handling and such.
With node.js, there's also no Apache. The script is started and directly compiled to machine code (because the V8 compiler does that), so it's kind of AOT (ahead-of-time) compiling (or is it still called JIT? I don't really know). Every request is then handled directly by the already-compiled machine code, so node.js is actually very comparable to HipHop. I assume HipHop is multithreaded or something like that, while node does evented I/O.
Facebook claims a 50% speed gain, which is not really that much; if you compare the results of the language shootout, you'll see that for the execution speed of assorted algorithms, PHP is 5 to 250 times slower.
So why only 50%? Because...
Web apps depend on much more than just execution speed, e.g. I/O.
PHP's type system prevents HipHop from making the best use of C++'s static types.
In practice, a lot of PHP is already C, because most of the functionality is either built in or comes from extensions. Extensions are programmed in C and statically compiled (see the rough sketch below).
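A rough micro-benchmark sketch of that last point: array_sum() is implemented in C inside the engine, so it typically beats the equivalent hand-written PHP loop by a wide margin (the exact numbers will vary by machine and PHP version):

<?php
// Compare a built-in, C-implemented function against the same work done
// opcode by opcode in userland PHP.
$data = range(1, 1000000);

$t0 = microtime(true);
$sum1 = array_sum($data);            // built-in, runs as compiled C
$t1 = microtime(true);

$sum2 = 0;
foreach ($data as $n) {              // same computation, interpreted
    $sum2 += $n;
}
$t2 = microtime(true);

printf("array_sum: %.4fs, foreach: %.4fs\n", $t1 - $t0, $t2 - $t1);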
I'm not sure there would be a huge performance gain for hello world, because hello world, even with a good framework, is still so small that execution speed could be negligible compared to all the other overhead (network latency and such).
IMO: if you want speed and ease of use, go for node.js :)
Running a simple application is always fast in any language. When it becomes as complex as Facebook, you will face numerous problems. PHP's slowness will show its face. At the same time, converting existing code to another language is not an option, since all the logic and code is not easy to translate into another language's syntax. That's why Facebook's developers decided to keep the old code and make PHP faster. That's the reason they created their own PHP compiler, called HipHop.
Read this story from the perspective of one of Facebook's developers, so you know the history of HipHop.
That is not really an apples-to-apples comparison. On the most level playing field you might have something like:
Django running behind apache
Django rendering an HTML template to say hello world (no caching)
AND
HPHP running behind apache
HPHP rendering an HTML template to say hello world (again, no caching)
There is no database, almost no file I/O, and no caching. If you hit the page 10,000 times with a load generator at varying concurrency levels, you will probably find that HPHP outperforms Django or Rails - that is to say, it can render more pages per second and keep up with your traffic a bit better.
The question is, will you ever have this many concurrent users? If you will, will they likely be hitting a database or a cached page?
HPHP sounds cool, but IMHO there is no reason to jump ship just yet (unless you are getting lots of traffic, in which case it might make sense to check it out).
Will it run faster than if I write the same hello world app in django or rails?
It probably will, but don't fret. If we're talking prospective speed improvements from yet unreleased projects, Pythonistas have pypy-jit and unladen-swallow to look forward to ;)
