As my web projects are getting bigger, I wonder whether PHP interprets this code,
<?php
function helloWorldOutput($helloworldVariable) {
echo 'Hello World' . $helloworldVariable;
}
helloWorldOutput("I am PHP");
?>
slower than this:
<?php
function a($b) { echo 'Hello World' . $b; }
a("I am PHP");
?>
Because PHP is an interpreted language without a compiled binary, I think the second sample should be a bit faster. Is that true, and is there any kind of pre-interpreting mechanism which caches a faster version of the code in PHP?
Yes, it will take some extra time to parse/compile the larger code to bytecode. The time is usually negligible, so you should probably not worry about it, since there are better ways to deal with the time spent compiling.
What you could do for quite a bit more of a performance boost is to use a PHP accelerator, such as APC, which caches compiled code and eliminates the whole compile step except for the first access to a page.
Using an accelerator will remove any possible downside with keeping your code commented and clear, and lets you concentrate on functionality instead of shortening your code.
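As a rough illustration, a microtime()-based comparison (a hedged sketch, not a rigorous benchmark; the functions return rather than echo so the timing loop stays quiet) tends to show the two versions within noise of each other:

```php
<?php
// Hypothetical micro-benchmark: same work, long vs. short identifiers.
// Identifier length only affects parse time, not execution speed.
function helloWorldOutput($helloworldVariable) {
    return 'Hello World' . $helloworldVariable;
}
function a($b) {
    return 'Hello World' . $b;
}

$start = microtime(true);
for ($i = 0; $i < 100000; $i++) {
    helloWorldOutput('I am PHP');
}
$longNames = microtime(true) - $start;

$start = microtime(true);
for ($i = 0; $i < 100000; $i++) {
    a('I am PHP');
}
$shortNames = microtime(true) - $start;

// The two timings typically differ only by measurement noise.
printf("long: %.4fs, short: %.4fs\n", $longNames, $shortNames);
```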
Parsing the first version and the calls to it will take longer. So if you use the first version and call a function with a name that long many, many times, the second version will be slightly faster purely because of the parsing. As for the actual function execution: no, both functions will be equally fast.
Still my advice is do not ever attempt to do such micro-optimizations. Performance will improve just slightly, readability will suffer greatly.
The second example has fewer characters, which means it's faster to parse;
PHP runs some sort of bytecode internally, so execution speed will not differ much.
The slowest bit is probably reading the file from disk, and shorter code will win that race easily.
Related
I know this question has been asked before but I haven't been able to find a definitive answer.
Does heavy use of the echo statement slow down end-user load times?
By having more echo statements in the file the file size increases so I know this would be a factor. Correct me if I'm wrong.
I know after some research that using PHP's ob_start() function along with upping Apache's SendBufferSize can help decrease load times, but from what I understand this is more of a decrease in PHP execution time, by allowing PHP to finish/exit sooner, which in turn allows Apache to exit sooner.
With that being said, php does exit sooner, but does that mean php actually took less time to execute and in turn speed things up on the end user side ?
To be clear, what I mean by this is: if I had 2 files, same content, and one made use of the echo statement for every HTML tag while the other file used the standard method of breaking in and out of PHP, then aside from the difference in file size from the heavy use of the echo statement (within reason, I'm guessing?), which one would be faster? Or would there really not be any difference?
Maybe I'm going about this or looking at this wrong?
Edit: I have done a bit of checking around and found a way to create a stopwatch to check the execution time of a script, and it seems to work quite well. If anybody is interested in doing the same, here is the link to the method I have chosen to use for now.
http://www.phpjabbers.com/measuring-php-page-load-time-php17.html
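The stopwatch technique from the link boils down to calling microtime(true) before and after the work; a minimal version (my own sketch, not the article's exact code):

```php
<?php
// Minimal page-generation stopwatch using microtime().
$pageStart = microtime(true);

// ... page code would run here; usleep() stands in for real work ...
usleep(10000); // 10 ms

$elapsed = microtime(true) - $pageStart;
printf("Page generated in %.4f seconds\n", $elapsed);
```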
Does heavy use of the echo statement slow down end-user load times?
No.
By having more echo statements in the file the file size increases so I know this would be a factor. Correct me if I'm wrong.
You are wrong.
does that mean php actually took less time to execute and in turn speed things up on the end user side?
No.
Or would there really not be any difference?
Yes.
Maybe I'm going about this or looking at this wrong?
Definitely.
There is a common problem with performance-related questions: most of them come up not from real needs but from imagination, while one should solve only real problems, not imaginary ones.
This is not an issue.
You are overthinking things.
This is an old question, but the problem with the logic presented here is that it assumes "more commands equals slower performance," when, in terms of modern programming and modern systems, this is an utterly irrelevant issue. These concerns are only the concerns of someone who, for some reason, programs at an extremely low level in something like assembler.
The reason why is that there might be a slowdown, but nothing anyone would ever humanly be able to perceive: a slowdown of such a small fraction of a second that any effort you make to optimize that code would not produce anything worthwhile.
That said, speed and performance should always be a concern when programming, but not in terms of how many uses of a given command you have.
As someone who uses PHP with echo statements, I would recommend that you organize your code for readability. A pile of echo statements is simply hard to read and edit. Depending on your needs you should concatenate the contents of those echo statements into a string that you then echo later on.
Or—a nice technique I use—is to create an array of values I need to echo and then run echo implode('', $some_array); instead.
The benefit of an array over string concatenation is that it's naturally easier to understand that $some_array[] = 'Hello!'; adds a new item to that array, whereas something like $some_string .= 'Hello!'; might seem simple but can be confusing to debug when you have tons of concatenation happening.
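A self-contained sketch of the array-plus-implode approach described above ($some_array and the list items are made-up examples):

```php
<?php
// Collect output pieces in an array, then emit them in one echo.
$some_array = [];
$some_array[] = '<ul>';
foreach (['one', 'two', 'three'] as $item) {
    $some_array[] = '<li>' . htmlspecialchars($item) . '</li>';
}
$some_array[] = '</ul>';

echo implode('', $some_array); // prints <ul><li>one</li><li>two</li><li>three</li></ul>
```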
But at the end of the day, clean code that is easy to read is more important to all involved than shaving fractions of a second off of a process. If you are a modern programmer, program with an eye towards readability as a first draft and then—if necessary—think about optimizing that code.
Do not worry about having 10 or 100 calls to echo. When optimizing, these shouldn't even be taken into consideration.
Consider that on a normal server a simple echo call runs in less than 1/100,000 of a second.
Always worry about code readability and maintenance rather than those X extra echo calls.
I didn't make any benchmark. All I can say is that when you echo strings (HTML or not) using double quotes (") it's slower than single quotes (').
For strings with double quotes, PHP has to parse those strings. You may know that you can interpolate variables into a double-quoted string by simply inserting them:
echo "you're $age years old!";
PHP has to parse your string to look up those variables and automatically replace them. When you're sure you don't have any variables inside your string, use single quotes.
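The two quoting styles side by side ($age is a made-up example variable); both produce identical output, and the difference is only whether PHP scans the string body for variables:

```php
<?php
$age = 30;
echo "you're $age years old!\n";               // double quotes: interpolated
echo 'you\'re ' . $age . ' years old!' . "\n"; // single quotes: concatenated
// Both lines print: you're 30 years old!
```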
Hope this helps you.
Even when you use a bunch of echo calls, I don't think it would slow down your loading time. Loading time depends on the reaction time of your server and the execution time. If your loading time is too high for the given task, check the whole code, not just whether the echoes could be slowing down your server; I think there would be something wrong elsewhere in your code.
I am working on a website and I am trying to make it as fast as possible, especially the small things that can make my site a little bit quicker.
So, to my question: I have a loop that runs 5 times and each time it echoes something. If I make a variable, have the loop append the text I want to echo to that variable, and only echo the variable at the end, will it be faster?
loop 1 (with the echo inside the loop)
for ($i = 0; $i < 5; $i++) {
    echo "test";
}
loop 2 (with the echo outside [when the loop finish])
$echostr = "";
for ($i = 0; $i < 5; $i++) {
    $echostr .= "test";
}
echo $echostr;
I know that loop 2 will increase the file size a bit and therefore the user will have to download more bytes, but if I have a huge loop, will it be better to use the second loop or not?
Thanks.
The difference is negligible. Do whatever is more readable (which in this case is definitely the first case). The first approach is not a "naive" approach so there will be no major performance difference (it may actually be faster, I'm not sure). The first approach will also use less memory. Also, in many languages (not sure about PHP), appending to strings is expensive, and therefore so is concatenation (because you have to seek to the end of the string, reallocate memory, etc.).
Moreover, file size does not matter because PHP is entirely server-side -- the user never has to download your script (in fact, it would be scary if they did/could). These types of things may matter in Javascript but not in PHP.
Long story short -- don't write code constantly trying to make micro-optimizations like this. Write the code in the style that is most readable and idiomatic, test to see if performance is good, and if performance is bad then profile and rewrite the sections that perform poorly.
I'll end on a quote:
"premature emphasis on efficiency is a big mistake which may well be the source of most programming complexity and grief."
- Donald Knuth
This is a classic case of premature optimization. The performance difference is negligible.
I'd say that in general you're better off constructing a string and echoing it at the end, but because it leads to cleaner code (side effects are bad, mkay?) not because of performance.
If you optimize like this, from the ground up, you're at risk of obfuscating your code for no perceptible benefit. If you really want your script to be as fast as possible then profile it to find out where the real bottlenecks are.
Someone else mentioned that using string concatenation instead of an immediate echo will use more memory. This isn't true unless the size of the string exceeds the size of the output buffer. In any case, to actually echo immediately you'd need to call flush() (perhaps preceded by ob_flush()), which adds the overhead of a function call*. The web server may still keep its own buffer, which would thwart this anyway.
If you're spending a lot of time on each iteration of the loop then it may make sense to echo and flush early so the user isn't kept waiting for the next iteration, but that would be an entirely different question.
Also, the size of the PHP file has no effect on the user - it may take marginally longer to parse but that would be negated by using an opcode cache like APC anyway.
To sum up, while it may be marginally faster to echo each iteration, depending on circumstance, it makes the code harder to maintain (think Wordpress) and it's most likely that your time for optimization would be better spent elsewhere.
* If you're genuinely worried about this level of optimization then a function call isn't to be sniffed at. Flushing in pieces also implies extra protocol overhead.
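For completeness, echoing and flushing early inside a slow loop (the "entirely different question" above) might look like this sketch; whether the browser actually sees chunks early still depends on server buffering:

```php
<?php
// Emit each chunk as soon as it's ready, paying flush overhead per iteration.
for ($i = 1; $i <= 5; $i++) {
    echo "chunk $i\n";
    // usleep(200000); // stand-in for slow per-iteration work
    if (ob_get_level() > 0) {
        ob_flush(); // push PHP's own output buffer, if one is active
    }
    flush();        // ask the SAPI/web server to send what it has so far
}
```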
The size of your PHP file does not increase the size of the download by the user. The output of the PHP file is all that matters to the user.
Generally, you want to do the first option: echo as soon as you have the data. Assuming you are not using output buffering, this means that the user can stream the data while your PHP script is still executing.
The user does not download the PHP file, but only its output, so the second loop has no effect on the user's download size.
It's best not to worry about small optimizations, but instead focus on quickly delivering working software. However, if you want to improve the performance of your site, Yahoo! has done some excellent research: developer.yahoo.com/performance/rules.html
The code you identify as "loop 2" wouldn't result in any larger a file size for users to download. String concatenation is faster than repeated calls to echo, so I'd go with loop 2. For only 5 iterations of a loop, I don't think it really matters all that much.
Overall, there are a lot of other areas to focus on such as compiling PHP instead of running it as a scripted language.
http://phplens.com/lens/php-book/optimizing-debugging-php.php
Your first example would, in theory, be fastest. Because your provided code is so extremely simplistic, I doubt any performance increase over your second example would be noticed or even useful.
In your first example the only variable PHP needs to initialize and utilize is $i.
In your second example PHP must first create an empty string variable. Then create the loop and its variable, $i. Then append the text to $echostr and then finally echo $echostr.
So I have 16 GB worth of XML files to process (about 700 files total), and I already have a functional PHP script to do that (With XMLReader) but it's taking forever. I was wondering if parsing in Python would be faster (Python being the only other language I'm proficient in, I'm sure something in C would be faster).
I think that both of them can rely on wrappers around fast C libraries (mostly libxml2), so there shouldn't be too much difference in parsing per se.
You could test whether there are differences caused by overhead; then it depends on what you're going to do with that XML. Parsing it for what?
There are actually three differing performance problems here:
The time it takes to parse a file, which depends on the size of individual files.
The time it takes to handle the files and directories in the filesystem, if there's a lot of them.
Writing the data into your databases.
Where you should look for performance improvements depends on which one of these is the biggest bottleneck.
My guess is that the last one is the biggest problem, because writes are almost always the slowest: they can't be cached, they require writing to disk, and if the data is sorted it can take a considerable time to find the right spot to write it.
You presume that the bottleneck is the first alternative, the XML parsing. If that is the case, changing language is not the first thing to do. Instead you should see if there's some sort of SAX parser for your language: SAX parsing is much faster and more memory-efficient than DOM parsing.
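For what it's worth, the asker's XMLReader is already a streaming (pull) parser in this spirit; a minimal skeleton (the element name `record` is made up) that never holds a whole document tree in memory:

```php
<?php
// Count <record> elements without building a DOM. For real files,
// replace the inline string with $reader->open($path).
$xml = '<records><record id="1"/><record id="2"/></records>';

$reader = new XMLReader();
$reader->XML($xml);

$count = 0;
while ($reader->read()) {
    if ($reader->nodeType === XMLReader::ELEMENT && $reader->name === 'record') {
        $count++;
    }
}
$reader->close();
echo "records: $count\n"; // prints records: 2
```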
I can't tell you for sure if Python will end up performing better than PHP (because I'm not terribly familiar with the performance characteristics of PHP). I can, however, give you a few suggestions.
If there's a huge difference between your understanding of Python and PHP (i.e. you know way more PHP than Python), stick with PHP. The worst thing for performance in any language is a lack of mastery.
If you want to implement a Python solution, there's a lot in the library to work with, and depending on what you're looking for, you can find it here.
Write a Python script to process the XML, and then use it on one item. Compare that script's running time to the PHP script. If the Python script is much faster and you have faith that it is bugfree, use Python.
Also, if you have some knowledge of C, in Python you can identify bottlenecks in the code and easily reimplement them in C (though I suspect you won't have a chance to do this).
So I'm working on a project written in old-style (no OOP) PHP with no full rewrite in the near future. One of the problems with it currently is that it's slow: much of the time is spent require-ing over 100 files, depending on where it is in the boot process.
I was wondering if we could condense this (on deployment, not development of course) into a single file or two with all the require'd text just built in. However, since there are so many lines of code that aren't used for each page, I'm wondering if doing this would backfire.
At its core, I think, it's a question of whether:
<?php
echo 'hello world!';
?>
is any faster than
<?php
if(FALSE) {
// thousands of lines of code here
}
echo 'hello world!';
?>
And if so, how much slower?
(Also, if what I've outlined above is a bad idea for some other reasons, please let me know.)
The difference between the two will be negligible. If most of the execution time is currently spent requiring files, you're likely to see a significant boost by using an opcode cache like APC, if you are not already.
Other than that - benchmark, find out exactly where the bottlenecks are. In my experience requires are often the slowest part of an old-style procedural PHP app, but even with many included files I'd be surprised if these all added up to a 'slow' app.
Edit: ok, a quick 'n dirty benchmark. I created three 'hello world' PHP scripts like the example. The first (basic.php) was just echoing the string. The second (complex.php) included an if false statement that contained ~5000 lines of PHP code pasted in from another app. The third (require.php) included the same if statement but required in the ~5000 lines of code from another file.
Page generation time (as measured by microtime()) between basic.php and complex.php was around ~0.000004 seconds, so really not significant. Some more comprehensive results from apache bench:
              without APC          with APC
              req/sec  avg (ms)    req/sec  avg (ms)
basic.php:    7819.87  1.277       6960.49  1.437
complex.php:   346.82  2.883        352.12  2.840
require.php:  6819.24  1.446       5995.49  1.668
APC's not doing a lot here but using up memory, but it's likely to be a different picture in a real world app.
require does have some overhead, and 100 requires is probably a lot. Parsing an entire file that contains the 100 includes inline is probably slow too. The overhead from require might cost you more, but it is hard to say; it might not cost you enough to matter.
All benchmarks are evil, but here is what I did:
I ran a single include of a file that was about 8000 lines (each line did nothing useful, it just declared a variable), and compared it to the time it takes to run an include of an 80-line file (same declarations) 100 times. The results were inconclusive.
Is the including of the files really causing the problem? Is there not something in the script execution that can be optimized? Caching may be an option.
Keep in mind that PHP will parse all the code it sees, even if it's not run.
It will still take a relatively long time to process a large file, and from experience, lots of code will eat up considerable amounts of memory even though it's never executed.
Opcode caching, as suggested by @Tim, should be your first port of call.
If that is out of the question (e.g. due to server limitations): If the functions are somehow separable into categories, one possibility to make things a bit faster and lighter could be (ab)using PHP's Autoloading by putting the functions into separate files as methods of static classes.
function xyz() { ... }
would become
class generic_tools
{
public static function xyz() { ... }
}
and any call to xyz() is replaced by generic_tools::xyz();
The call would then trigger the inclusion of (e.g.) generic_tools.class.php on demand, instead of including everything at once.
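The on-demand trigger can be sketched with spl_autoload_register(); to keep the example self-contained it writes generic_tools.class.php into a temp directory, whereas in the real app the file would already exist:

```php
<?php
// Generate the class file for the demo (the real app would ship it).
$dir = sys_get_temp_dir();
file_put_contents(
    $dir . '/generic_tools.class.php',
    '<?php class generic_tools { public static function xyz() { return "xyz ran"; } }'
);

// Register an autoloader mapping class names to *.class.php files.
spl_autoload_register(function ($class) use ($dir) {
    $file = $dir . '/' . $class . '.class.php';
    if (is_file($file)) {
        require $file; // included only when the class is first used
    }
});

echo generic_tools::xyz(), "\n"; // first use triggers the include
```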
This would require rewriting the function calls to static method calls, which may be dead easy or a bit more difficult (if function calls are cooked up dynamically or something). But beyond that, no refactoring would be needed, because you're not really using any OOP mechanisms.
How much this will actually help strongly depends on the app's architecture and how intertwined the functions are with each other.
What kind of ideas or tips and tricks do you have to boost PHP performance?
Something like,
I use:
$str = 'my string';
if (isset($str[3]))
Instead of:
if(strlen($str) > 3)
Which is a bit faster.
Or storing values as keys instead of values in an array, which makes checking whether a key exists much faster; hence using isset($arr[$key]) instead of array_key_exists($key, $arr).
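Both tricks as a runnable sketch, including one caveat worth knowing: isset() and array_key_exists() disagree when the stored value is null:

```php
<?php
$str = 'my string';
var_dump(isset($str[3]));    // true, equivalent to strlen($str) > 3
var_dump(strlen($str) > 3);  // true

$arr = ['foo' => 1, 'bar' => null];
var_dump(isset($arr['foo']));             // true
var_dump(array_key_exists('foo', $arr));  // true
// Caveat: the two differ for null values.
var_dump(isset($arr['bar']));             // false
var_dump(array_key_exists('bar', $arr));  // true
```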
Shoot your ideas, I would love to hear them.
Use a profiler and measure your performance.
Optimise the areas that need it.
Typical areas that will give you the most bang for effort in a typical php website.
Think about database queries carefully. They often take up most of the execution time.
Don't include code you don't need
Don't write your own versions of the built-in functions - the built-in ones are compiled C and will be faster than your L33T PHP version.
Use an OpCode cache.
Most PHP accelerators work by caching the compiled bytecode of PHP scripts to avoid the overhead of parsing and compiling source code on each request (some or all of which may never even be executed). To further improve performance, the cached code is stored in shared memory and directly executed from there, minimizing the amount of slow disk reads and memory copying at runtime.
Don't do a list of these things; you'll make your code unreadable, or at least harder to read, even by yourself.
Leave these things to the Zend Engine, or to the accelerator of your choice (actually an opcode cache).
Optimizations like these may be faster now, but they may actually get slower if the Zend Engine developers start to auto-optimize things like these.
Ex: one might speed up the strlen() function a lot by giving up on z-strings and using l-strings (the length being stored in the first char, or word). This in turn would end up making your (pre-optimized) script slower if you optimize like this.
Use parameterized SQL instead of mysql_query(), and reduce the overall number of database queries. Everything else is a shallow optimization.
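A hedged sketch of a parameterized query with PDO, using an in-memory SQLite database so it is self-contained (with MySQL only the DSN and credentials would differ):

```php
<?php
// Prepared statement keeps user input out of the SQL string entirely.
$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->exec('CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)');
$pdo->exec("INSERT INTO users (name) VALUES ('alice'), ('bob')");

$stmt = $pdo->prepare('SELECT id FROM users WHERE name = :name');
$stmt->execute([':name' => 'alice']);
$id = $stmt->fetchColumn();
echo $id, "\n"; // prints 1 (alice's row id)
```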