Function calls in PHP are known to be relatively expensive. This script demonstrates the problem:
// Plain variable assignment.
$time = microtime(true);
$i = 100000;
while ($i--)
{
    $x = 'a';
}
echo microtime(true) - $time . "\n\n";
// 0.017973899841309

// The same loop, but with a function call.
$time = microtime(true);
function f() { $a = "a"; return $a; }
$i = 100000;
while ($i--)
{
    $x = f();
}
echo microtime(true) - $time . "\n\n";
// 0.18558096885681
By the way, anonymous functions are the worst offenders; in my tests they are about 10 times slower.
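For completeness, here is a minimal sketch (not from the original test; the loop size and names are arbitrary) to compare a named function against an equivalent closure yourself:

<?php
// Hypothetical micro-benchmark: named function vs. closure.
function named() { return 'a'; }
$closure = function () { return 'a'; };

$time = microtime(true);
for ($i = 0; $i < 100000; $i++) {
    $x = named();
}
printf("named:   %f\n", microtime(true) - $time);

$time = microtime(true);
for ($i = 0; $i < 100000; $i++) {
    $x = $closure();
}
printf("closure: %f\n", microtime(true) - $time);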
Is there a PHP script optimizer that reduces the number of function calls and minifies the script?
There is also a related post: Why are PHP function calls *so* expensive?
You only really call the functions you actually need at any given time, so no, there isn't.
What you could do to optimize your code is use as few anonymous functions as possible, reduce the amount of whitespace (e.g. use a PHP minifier), and rename your functions to one-letter names;
at best this shrinks the script so it can be read and tokenized slightly faster.
But as optimizations go you are better off not doing so, because readability goes completely down the drain.
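If all you want is to strip comments and whitespace, PHP already ships with a helper for that; a minimal sketch (the file names are hypothetical), noting that it does not rename identifiers or reduce function calls:

<?php
// php_strip_whitespace() returns the source of a PHP file with
// comments and whitespace removed; the original file is untouched.
$minified = php_strip_whitespace('my-script.php'); // hypothetical input file
file_put_contents('my-script.min.php', $minified); // hypothetical output file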
Related
So I need to check whether the number of characters from a specific set in a string is higher than some number. What's the fastest way to do that?
For example, I have a long string "some text & some text & some text + a lot more + a lot more ... etc." and I need to check whether it contains more than 3 of the following symbols: [&, ., +]. So when I encounter the 4th occurrence of one of these characters I just need to return false and stop the loop. I'm thinking of writing a simple function like that, but I wonder: is there a native method in PHP to do such a thing? I need a function that won't waste time parsing the string to the very end, because the string may be pretty long, so I suspect regexps and functions like count_chars are not suited for that kind of job...
Any suggestions?
I don't know about a native method; I think count_chars is probably as close as you're going to get. However, rolling a custom solution would be relatively simple:
$str = 'your text here';
$chars = ['&', '.', '+'];
$count = array_fill_keys($chars, 0); // pre-initialise to avoid undefined-index notices
$length = strlen($str);
$limit = 3;
for ($i = 0; $i < $length; $i++) {
    if (in_array($str[$i], $chars)) {
        $count[$str[$i]] += 1;
        if ($count[$str[$i]] > $limit) {
            break;
        }
    }
}
Where the data is actually coming from might also make a difference. For example, if it's from a file, you could take advantage of fread's second parameter to read only a set number of bytes at a time within a while loop.
Finding the fastest way might be too broad a question, as PHP has a lot of string-related functions; other solutions might use strstr, strpos, etc.
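As a rough sketch of that fread() idea (the file name, chunk size, and limit below are assumptions, not from the original answer), reading in chunks lets you stop as soon as the limit is exceeded:

<?php
// Hypothetical chunked scan: stop reading as soon as more than $limit
// occurrences of &, . or + have been seen.
$limit = 3;
$count = 0;
$found = false;

$fh = fopen('data.txt', 'rb'); // hypothetical input file
while (!feof($fh)) {
    $chunk = fread($fh, 8192); // read 8 KB at a time
    // Count the target characters in this chunk by comparing lengths.
    $count += strlen($chunk) - strlen(str_replace(['&', '.', '+'], '', $chunk));
    if ($count > $limit) {
        $found = true;
        break; // no need to read the rest of the file
    }
}
fclose($fh);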
I haven't benchmarked the other solutions, but str_replace (http://php.net/manual/en/function.str-replace.php) with an array of search values should be fast. It has an optional fourth parameter that returns the count of replacements; check that number:
str_replace(['&', '.', '+'], '', $subject, $count);
if ($count > $number) {
    // more than $number of those characters were found
}
Well, all my thoughts were wrong and my expectations were crushed by real tests. RegExp seems to work from 2 to 7 times faster (with different strings) than a self-made function with a simple symbol-checking loop.
The code:
// Self-made function:
function chk_occurs($str, $chrs, $limit) {
    $r = false;
    $count = 0;
    $length = strlen($str);
    for ($i = 0; $i < $length; $i++) {
        if (in_array($str[$i], $chrs)) {
            $count++;
            if ($count > $limit) {
                $r = true;
                break;
            }
        }
    }
    return $r;
}
// RegExp I've used for the tests:
preg_match('/([&\\.\\+]|[&\\.\\+][^&\\.\\+]+?){3,}?/', $str);
Of course it works faster because it's a single call to a native function, but even the same code wrapped into a function works from 2 to ~4.8 times faster.
// RegExp wrapped into a function ($chrs is passed as a string here, e.g. '&.+'):
function chk_occurs_preg($str, $chrs, $limit) {
    $chrs = preg_quote($chrs, '/'); // escape the characters and the pattern delimiter
    return preg_match('/([' . $chrs . ']|[' . $chrs . '][^' . $chrs . ']+?){' . $limit . ',}?/', $str);
}
P.S. I didn't bother checking CPU time; I was just measuring wall time via microtime(true) over a 200k-iteration loop, but that's enough for me.
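For reference, a minimal wall-time harness along the lines described, assuming the two functions above are defined (the test string and iteration count are made up, not the original test data):

$str   = str_repeat('some text & some text + a lot more. ', 100); // hypothetical test string
$chrs  = ['&', '.', '+'];
$iters = 200000;

$t = microtime(true);
for ($i = 0; $i < $iters; $i++) {
    chk_occurs($str, $chrs, 3);
}
printf("loop:   %f s\n", microtime(true) - $t);

$t = microtime(true);
for ($i = 0; $i < $iters; $i++) {
    chk_occurs_preg($str, '&.+', 3); // $chrs as a string for the preg version
}
printf("regexp: %f s\n", microtime(true) - $t);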
I have this really newbie question :)
Setting aside the fact that
$lastInvoiceNumber
$lastInvNum
or:
last_invoice_number (int 10)
last_inv_num (int 10)
save a bit of typing time: do the names have any benefit (even the slightest) performance-wise?
Long vs. short?
Is there any chance that PHP, and more importantly MySQL, will consume less memory if the query uses a shorter table column name? For example, if I have to fetch 500 rows with a single query, I imagine the column name gets handled 500 times, so using last_invoice_number 500 times vs. last_inv_num might save some memory or make things slightly faster.
Thanks.
No, there is really no noticeable difference in performance whatsoever, and you'll gain a huge improvement in readability by using descriptive variable names. Internally, these variables are referred to by memory addresses (to put it simply), not by their ASCII/Unicode names. The impact it may have on performance, in nearly any language, is so infinitesimal that it would never be noticed.
Edit:
I've added a benchmark. It shows that there is really no difference at all between using a single letter as a variable name and using a 17-character variable name. The single letter might even be a tiny bit slower. However, I do notice a slight consistent increase in time when using a 90-character variable name, but again, the difference is too small to ever notice for practical purposes. Here's the benchmark and output:
<?php
# To prevent any startup-costs from skewing results of the first test.
$start = microtime(true);
for ($i = 0; $i < 1000; $i++)
{
    $noop = null;
}
$end = microtime(true);

# Let's benchmark!
$start = microtime(true);
for ($i = 0; $i < 1000000; $i++)
{
    $thisIsAReallyLongAndReallyDescriptiveVariableNameInFactItIsJustWayTooLongHonestlyWtf = mt_rand(0, 1000);
}
$end = microtime(true);
printf("Using a long name took %f seconds.\n", ($end - $start));

$start = microtime(true);
for ($i = 0; $i < 1000000; $i++)
{
    $thisIsABitTooLong = mt_rand(0, 1000);
}
$end = microtime(true);
printf("Using a medium name took %f seconds.\n", ($end - $start));

$start = microtime(true);
for ($i = 0; $i < 1000000; $i++)
{
    $t = mt_rand(0, 1000);
}
$end = microtime(true);
printf("Using a short name took %f seconds.\n", ($end - $start));
Output:
$ php so-test.php
Using a long name took 0.148200 seconds.
Using a medium name took 0.142286 seconds.
Using a short name took 0.145952 seconds.
The same should be true for MySQL as well; I would almost guarantee it, but it's not as easy to benchmark. With MySQL, you will have far more overhead from the network and IO than anything to do with symbol naming in the code. Just as with PHP, internally, column names aren't just strings that are iterated over; data is stored in memory-efficient formats.
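If you did want to measure the MySQL side, a rough sketch could look like the following (the DSN, credentials, table and column names are all assumptions); in practice the timings are dominated by network and I/O rather than identifier length:

<?php
// Hypothetical comparison: the same data behind a long vs. a short column name.
$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass'); // assumed credentials
$pdo->exec('CREATE TEMPORARY TABLE t_long  (last_invoice_number INT)');
$pdo->exec('CREATE TEMPORARY TABLE t_short (last_inv_num INT)');

$insert = $pdo->prepare('INSERT INTO t_long (last_invoice_number) VALUES (?)');
for ($i = 0; $i < 500; $i++) {
    $insert->execute([$i]);
}
$pdo->exec('INSERT INTO t_short SELECT last_invoice_number FROM t_long');

foreach (['SELECT last_invoice_number FROM t_long',
          'SELECT last_inv_num FROM t_short'] as $sql) {
    $start = microtime(true);
    for ($i = 0; $i < 1000; $i++) {
        $pdo->query($sql)->fetchAll();
    }
    printf("%-40s %f s\n", $sql, microtime(true) - $start);
}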
Regarding performance, is there any difference between doing:
$message = "The request $request has $n errors";
and
$message = sprintf('The request %s has %d errors', $request, $n);
in PHP?
I would say that calling a function involves more work, but I don't know what PHP does behind the scenes to expand variable names.
Thanks!
It does not matter.
Any performance gain would be so minuscule that you would see it (as an improvement in the hundredths of seconds) only with tens or hundreds of thousands of iterations, if even then.
For specific numbers, see this benchmark. You can see it has to generate 1 MB+ of data using 100,000 function calls to achieve a measurable difference in the hundreds of milliseconds. Hardly a real-life situation. Even the slowest method ("sprintf() with positional params") takes only 0.00456 milliseconds vs. 0.00282 milliseconds with the fastest. For any operation requiring 100,000 string output calls, you will have other factors (network traffic, for example) that will be an order of magnitude slower than the 100 ms you may be able to save by optimizing this.
Use whatever makes your code most readable and maintainable for you and others. To me personally, the sprintf() method is a neat idea - I have to think about starting to use that myself.
In any case, the second won't be faster if you supply a double-quoted format string, which has to be parsed for variables as well. If you are going for micro-optimization, the proper way is a single-quoted format string:
$message = sprintf('The request %s has %d errors', $request, $n);
Still, I believe the second is slower (as @Pekka pointed out, the difference doesn't actually matter) because of the overhead of a function call, parsing the format string, converting values, etc. But note that the two lines of code are not equivalent, since in the second case $n is converted to an integer by %d. If $n is "no error", then the first line will output:
The request $request has no error errors
While the second one will output:
The request $request has 0 errors
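A tiny snippet to see the difference for yourself (the values below are made up for illustration):

<?php
$request = 'ABC123'; // hypothetical value
$n = 'no error';

echo "The request $request has $n errors\n";                // ... has no error errors
echo sprintf('The request %s has %d errors', $request, $n); // ... has 0 errors (%d casts to int)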
A performance analysis of "variable expansion vs. sprintf" was done here.
As @Pekka says, use whatever "makes your code most readable and maintainable for you and others". When the performance gains are "low" (roughly less than a factor of two), ignore them.
Summarizing the benchmark: PHP is optimized for double-quoted and heredoc resolution. The percentages below are relative to the average time for building a very long string using only:
double-quoted resolution: 75%
heredoc resolution: 82%
single-quote concatenation: 93%
sprintf formating: 117%
sprintf formating with indexed params: 133%
Note that only sprintf does any actual formatting work (see the benchmark's '%s%s%d%s%f%s'), and, as @Darhazer shows, that makes a difference in the output. A better test would be two benchmarks: one comparing only concatenation times (the '%s' formatter), the other including the formatting process, for example '%3d%2.2f' and its functional equivalents applied before expanding variables into double quotes... plus one more benchmark combination using short template strings.
PROS and CONS
The main advantage of sprintf is, as shown by the benchmarks, its very low-cost formatting (!). For generic templating I suggest the vsprintf function (a short sketch follows after these notes).
The main advantages of double-quoted strings (and heredoc) are somewhat better performance, plus the readability and maintainability of named placeholders, which grows with the number of parameters (beyond one) compared with sprintf's positional markers.
Using indexed placeholders is a halfway point for maintainability with sprintf.
NOTE: do not use single-quote concatenation unless it's really necessary. Remember that PHP provides safe syntax like "Hello {$user}_my_brother!" and references like "Hello {$this->name}!".
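As mentioned above, a minimal vsprintf sketch (the template and values are made up for illustration):

<?php
// vsprintf() takes a format string plus an array of values,
// which is handy when the values are assembled dynamically.
$template = 'The request %s has %d errors'; // hypothetical template
$values   = ['ABC123', 3];

echo vsprintf($template, $values), "\n"; // The request ABC123 has 3 errors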
I am surprised, but for PHP 7.x "$variable replacement" is the fastest approach:
$message = "The request {$request} has {$n} errors";
You can simply prove it yourself:
$request = "XYZ";
$n = "0";
$mtime = microtime(true);
for ($i = 0; $i < 1000000; $i++) {
$message = "The request {$request} has {$n} errors";
}
$ctime = microtime(true);
echo '
"variable $replacement timing": '. ($ctime-$mtime);
$request = "XYZ";
$n = "0";
$mtime = microtime(true);
for ($i = 0; $i < 1000000; $i++) {
$message = 'The request '.$request.' has '.$n.' errors';
}
$ctime = microtime(true);
echo '
"concatenation" . $timing: '. ($ctime-$mtime);
$request = "XYZ";
$n = "0";
$mtime = microtime(true);
for ($i = 0; $i < 1000000; $i++) {
$message = sprintf('The request %s has %d errors', $request, $n);
}
$ctime = microtime(true);
echo '
sprintf("%s", $timing): '. ($ctime-$mtime);
The result for PHP 7.3.5:
"variable $replacement timing": 0.091434955596924
"concatenation" . $timing: 0.11175799369812
sprintf("%s", $timing): 0.17482495307922
You have probably already found recommendations like "use sprintf instead of variables contained in double quotes, it's about 10x faster" (e.g. in What are some good PHP performance tips?).
That was true once, namely before PHP 5.2.*.
Here is a sample of how it looked in those days, on PHP 5.1.6:
"variable $replacement timing": 0.67681694030762
"concatenation" . $timing: 0.24738907814026
sprintf("%s", $timing): 0.61580610275269
For injecting multiple string variables into a string, the first one will be faster:
$message = "The request $request has $n errors";
And for a single injection, dot (.) concatenation will be faster:
$message = 'The request '.$request.' has 0 errors';
Run the loop a billion times and find the difference.
For example:
<?php
$request = "XYZ";
$n = "0";
$mtime = microtime(true);
for ($i = 0; $i < 1000000; $i++) {
    $message = "The request {$request} has {$n} errors";
}
$ctime = microtime(true);
echo ($ctime - $mtime);
?>
Ultimately the first is the fastest when considering the context of a single variable assignment, as can be seen by looking at various benchmarks. Perhaps, though, using the sprintf flavor of core PHP functions could allow for more extensible code and be better suited to bytecode-level caching mechanisms like OPcache or APC. In other words, a given application could use less code when utilizing the sprintf approach. The less code you have to cache into RAM, the more RAM you have for other things or more scripts. However, this only matters if your scripts wouldn't otherwise fit into RAM.
Is there a PHP equivalent to setting timeouts in JavaScript?
In JavaScript you can execute code after a certain amount of time has elapsed using the setTimeout function.
Would it be possible to do this in PHP?
PHP is single-threaded and, in general, focused on the HTTP request cycle, so it would be tricky to allow a timeout to run code, potentially after the request is done.
I can suggest you look into Gearman as a solution to delegate work to other PHP processes.
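A minimal sketch of the Gearman approach (assuming the PECL gearman extension and a running gearmand server; the job name 'send_reminder' and the payload are hypothetical): the web request hands the work off, and a separate long-running worker process performs it later.

<?php
// In the web request: fire-and-forget a background job.
$client = new GearmanClient();
$client->addServer('127.0.0.1', 4730); // default gearmand host/port
$client->doBackground('send_reminder', json_encode(['user_id' => 42]));

// worker.php, run as a separate long-running CLI process:
// $worker = new GearmanWorker();
// $worker->addServer('127.0.0.1', 4730);
// $worker->addFunction('send_reminder', function (GearmanJob $job) {
//     $payload = json_decode($job->workload(), true);
//     sleep(10); // crude delay before doing the work
//     // ... send the reminder for $payload['user_id'] ...
// });
// while ($worker->work());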
You can use the sleep() function:
int sleep ( int $seconds )
// Delays the program execution for the given number of seconds.
Example:
public function sleep() {
    sleep(1);
    return 'slept for 1 second';
}
This is ugly, but basically works:
<?php
declare(ticks=1);

function setInterval($callback, $ms, $max = 0)
{
    $last = microtime(true);
    $seconds = $ms / 1000;
    register_tick_function(function() use (&$last, $callback, $seconds, $max)
    {
        static $busy = false;
        static $n = 0;
        if ($busy) return;
        $busy = true;
        $now = microtime(true);
        while ($now - $last > $seconds)
        {
            if ($max && $n == $max) break;
            ++$n;
            $last += $seconds;
            $callback();
        }
        $busy = false;
    });
}

function setTimeout($callback, $ms)
{
    setInterval($callback, $ms, 1);
}

// user code:
setInterval(function() {
    echo microtime(true), "\n";
}, 100); // every 10th of a second

while (true) usleep(1);
The interval callback function will only be called after a tickable PHP statement. So if you try to call a function 10 times per second, but you call sleep(10), you'll get 100 executions of your tick function in a batch after the sleep has finished.
Note that there is an additional parameter to setInterval that limits the number of times it is called. setTimeout just calls setInterval with a limit of one.
It would be better if unregister_tick_function was called after it expired, but I'm not sure if that would even be possible unless there was a master tick function that monitored and unregistered them.
I didn't attempt to implement anything like that because this is not how PHP is designed to be used. It's likely that there's a much better way to do whatever it is you want to do.
Without knowing a use-case for your question it's hard to answer it:
If you want to send additional data to the client a bit later you can do a JS timeout on the client side with a handler that will make a new HTTP request to PHP.
If you want to schedule some task for a later time, you can store it in a database and poll the DB at regular intervals. It's not the best-performing solution, but it is relatively easy to implement.
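A rough sketch of that store-and-poll approach (the table name, columns, DSN and credentials are all assumptions); a cron job or CLI loop would run this script every minute and execute whatever is due:

<?php
// Poll a hypothetical scheduled_tasks table and run anything whose time has come.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass'); // assumed credentials

$due = $pdo->query(
    'SELECT id, payload FROM scheduled_tasks WHERE run_at <= NOW() AND done = 0'
)->fetchAll(PDO::FETCH_ASSOC);

foreach ($due as $task) {
    // ... do the actual work based on $task['payload'] ...
    $pdo->prepare('UPDATE scheduled_tasks SET done = 1 WHERE id = ?')
        ->execute([$task['id']]);
}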
if ($currenturl != $urlto)
exit( wp_redirect( $urlto ) );
You can replace the above two lines with the code below inside your function:
if ($currenturl != $urlto)
header( "refresh:10;url=$urlto" );
This is just a performance question.
What is faster: accessing a local PHP variable or accessing a session variable?
I do not think that this makes any measurable difference. $_SESSION is filled by PHP before your script actually runs, so this is like accessing any other variable.
Superglobals will be slightly slower to access than non-superglobal variables. However, this difference will only be noticeable if you are doing millions of accesses in a script and, even then, such difference doesn't warrant change in your code.
session_start(); // make $_SESSION an actual session superglobal rather than an ordinary array

$_SESSION['a'] = 1;
$arr['a'] = 1;
$start = 0; $end = 0;

// A: plain local array
$start = microtime(true);
for ($a = 0; $a < 1000000; $a++) {
    $arr['a']++;
}
$end = microtime(true);
echo $end - $start . "<br />\n";

// B: superglobal
$start = microtime(true);
for ($b = 0; $b < 1000000; $b++) {
    $_SESSION['a']++;
}
$end = microtime(true);
echo $end - $start . "<br />\n";
/* Outputs:
0.27223491668701
0.40177798271179
0.27622604370117
0.37337398529053
0.3008668422699
0.39706206321716
0.27507615089417
0.40228199958801
0.27182102203369
0.40200400352478
*/
It depends: are you talking about copying the $_SESSION value into a local variable for use throughout the file, or simply asking about the inherent differences between the two types of variables?
One is declared by you and the other is core functionality. It will always be just a smidge slower to copy the $_SESSION value into a local variable, but the difference is negligible compared to the ease of reading and re-use.
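For clarity, this is all that is meant by copying into a local variable (the key name is arbitrary):

<?php
session_start();
$userId = $_SESSION['user_id'] ?? null; // one-time copy into a local variable

// ... use $userId below instead of $_SESSION['user_id'] ...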
There is nothing performance-related in this question.
The only real performance questions are those that come with a profiling report.
Otherwise it's just empty chatter.
Actually such a difference will never be a bottleneck.
And there is no difference at all.