PDO object performance - php

Today I tested PDO performance like this:
<?php
$start = microtime(true);
new PDO("mysql:host=localhost;dbname=mw", 'root', '');
$stop = microtime(true);
echo $stop - $start;
And the result was pretty surprising (running locally on my Windows 8.1 laptop):
ELAPSED: 1.0117259025574
During the script's execution, I cache the PDO object in a static variable so I don't have to create a new one for each subsequent query.
But this caching method only works during the script execution.
My script runs in 1.25 seconds, of which 1.01 are used to create the PDO object.
Is there a way to cache the PDO object for the whole session, or across multiple users?
Am I missing something ?
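For reference, the static caching mentioned above can be sketched like this (the `getConnection` name is illustrative, the DSN and credentials are the placeholders from the example, and the optional `$factory` parameter exists only so the caching behaviour can be exercised without a live database):

```php
<?php
// Returns one shared connection per script run. The connection is
// created lazily on first call and reused for every later call.
function getConnection(callable $factory = null)
{
    static $conn = null;
    if ($conn === null) {
        if ($factory === null) {
            // Placeholder DSN and credentials from the question.
            $factory = function () {
                return new PDO("mysql:host=localhost;dbname=mw", 'root', '');
            };
        }
        $conn = $factory();
    }
    return $conn;
}
```

This avoids repeated construction within one request, but the object still dies with the script; sharing it across requests needs persistent connections (see the answer below the benchmarks).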

I tested the same code on my server, connecting to the DB via PDO.
$start time is 1414590258.7735
$end time is 1414590258.7736
And $end time - $start time is 0.0001530647277832
My code is:
$start = microtime(true);
new PDO("mysql:host=$mysqlnd_appname1;dbname=$db1",$user1,$pass1);
$stop = microtime(true);
echo($start);
echo "<br />";
echo $stop;
echo "<br />";
echo $stop - $start;
So PDO connection performance is good; it also depends on your server and on how the MySQL server responds to your PHP server.
For more details, see http://archive.jnrbsn.com/2010/06/mysqli-vs-pdo-benchmarks

If connection overhead is an issue, you can do this:
$dbh = new PDO("mysql:host=localhost;dbname=mw", 'root', '', array(
    PDO::ATTR_PERSISTENT => true
));
http://php.net/manual/en/pdo.connections.php

Related

How can I get the time of a remote server?

I am currently working on a web server that will create a quick diagnostic dashboard of other servers.
My requirement is to display the time of these remote servers; it seems that NTP creates some issues and I would like to see that.
I currently have a bat file on my desktop that simply sends
net time \\SRV*******
I have also tried:
echo exec('net time \\\\SRV****');
=> the result is '0'
But I would like to find a better solution in PHP so that anybody on the team can read it on a webpage.
Any idea what I could do?
Note: this is not related to How to get date and time from server, as I want to get the time of a REMOTE server and not the local server.
You can use the Time protocol (RFC 868, which this code queries on port 37) to retrieve the datetime from a remote server.
Try this code:
error_reporting(E_ALL ^ E_NOTICE);
ini_set("display_errors", 1);
date_default_timezone_set("Europe/...");

function query_time_server($timeserver, $socket)
{
    # parameters: server, port, error code, error text, timeout
    $fp = fsockopen($timeserver, $socket, $err, $errstr, 5);
    if ($fp) {
        fputs($fp, "\n");
        $timevalue = fread($fp, 49);
        fclose($fp); # close the connection
    } else {
        $timevalue = " ";
    }
    $ret = array();
    $ret[] = $timevalue;
    $ret[] = $err;     # error code
    $ret[] = $errstr;  # error text
    return $ret;
}

$timeserver = "10.10.10.10"; # server IP or host
$timercvd = query_time_server($timeserver, 37);

// if no error from query_time_server
if (!$timercvd[1]) {
    $timevalue = bin2hex($timercvd[0]);
    $timevalue = abs(HexDec('7fffffff') - HexDec($timevalue) - HexDec('7fffffff'));
    $tmestamp = $timevalue - 2208988800; # convert to UNIX epoch time stamp
    $datum = date("Y-m-d (D) H:i:s", $tmestamp - date("Z", $tmestamp)); /* incl. time zone offset */
    $doy = date("z", $tmestamp) + 1;
    echo "Time check from time server ", $timeserver, " : [<font color=\"red\">", $timevalue, "</font>]";
    echo " (seconds since 1900-01-01 00:00.00).<br>\n";
    echo "The current date and universal time is ", $datum, " UTC. ";
    echo "It is day ", $doy, " of this year.<br>\n";
    echo "The unix epoch time stamp is $tmestamp.<br>\n";
    echo date("d/m/Y H:i:s", $tmestamp);
} else {
    echo "Unfortunately, the time server $timeserver could not be reached at this time. ";
    echo "$timercvd[1] $timercvd[2].<br>\n";
}
You can use a PowerShell script along with WMI (assuming your servers are Windows machines).
This will be much faster than the old net time approach.
The script needs to be executed by a domain admin (or any other user that has WMI query permissions):
$servers = 'dc1', 'dc2', 'dhcp1', 'dhcp2', 'wsus', 'web', 'file', 'hyperv1', 'hyperv2'
ForEach ($server in $servers) {
    $time = "---"
    $time = ([WMI]'').ConvertToDateTime((gwmi win32_operatingsystem -computername $server).LocalDateTime)
    $server + ', ' + $time
}
You can run this as a scheduled task every 5 minutes.
If you want to display the time down to seconds or minutes (I wouldn't query each server too often, as time does not change THAT fast), you could add a little maths:
When executing the script, don't store the actual time, just the offset of each server compared to your web server's time (e.g. +2.2112, -15.213).
When your users visit the dashboard, load these offsets and display the web server's current time plus or minus the offset per server. (It could be "outdated" by some microseconds, but do you care?)
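A minimal PHP sketch of that offset approach (the `remoteTime` function name and the stored offset values are illustrative):

```php
<?php
// Offsets recorded by the scheduled task: each remote server's clock
// relative to the web server, in seconds (illustrative values).
$offsets = array(
    'dc1'   => 2.2112,
    'dhcp1' => -15.213,
);

// At dashboard render time, apply a stored offset to the current time.
// $now can be injected for testing; it defaults to the real clock.
function remoteTime($offset, $now = null)
{
    $now = ($now === null) ? microtime(true) : $now;
    return $now + $offset;
}

foreach ($offsets as $server => $offset) {
    echo $server, ': ', date('H:i:s', (int) remoteTime($offset)), "<br>\n";
}
```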
Why so complicated?
I just found that this works:
exec('net time \\\\SRV****', $output);
echo '<pre>';
print_r($output);
echo '</pre>';

Slow query using MongoDB and PHP

I have two collections. The first one, $bluescan, has 345 documents, with some ['company'] values missing. The second one, $maclistResults, has 20285 documents.
I'm running $bluescan against $maclistResults to fill missing ['company'] values.
I created three indexes for $maclistResults using the following PHP code:
$db->$jsonCollectionName->ensureIndex(array('mac' => 1));
$db->$jsonCollectionName->ensureIndex(array('company' => 1));
$db->$jsonCollectionName->ensureIndex(array('mac'=>1, 'company'=>1));
I'm using a MongoLab free account for the DB and running my PHP application locally.
The code below demonstrates the process. It works, does what I need but it takes almost 64 seconds to perform the task.
Is there anything I can do to improve this execution time?
Code:
else {
    $maclist = 'maclist';
    $time_start = microtime(true);
    foreach ($bluescan as $key => $value) {
        $maclistResults = $db->$maclist->find(
            array('mac' => substr($value['mac'], 0, 8)),
            array('mac' => 1, 'company' => 1)
        );
        foreach ($maclistResults as $value2) {
            if (empty($value['company'])) {
                $bluescan[$key]['company'] = $value2['company'];
            }
        }
    }
}
$time_end = microtime(true);
$time = $time_end - $time_start;
echo "<pre>";
echo "maclistResults execution time: " . $time . " seconds";
echo "</pre>";
Echo output from $time:
maclistResults execution time: 63.7298750877 seconds
Additional info: PHP version 5.6.2 (MAMP PRO on OSX Yosemite)
As stated by SolarBear, you're actually executing find as many times as there are elements in $bluescan.
You should fetch the list from the server once and then iterate over that result set instead, basically the exact opposite order of your current code.
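A sketch of that rework (the `fillCompanies` helper is illustrative; the merge logic is pulled into a plain function so it operates on arrays, and the single database query is shown as it would look with the legacy Mongo driver used in the question):

```php
<?php
// Fills missing 'company' values in $bluescan using a prefix-to-company
// map built once, so only a single query hits the database.
function fillCompanies(array $bluescan, array $companies)
{
    foreach ($bluescan as $key => $value) {
        $prefix = substr($value['mac'], 0, 8);
        if (empty($value['company']) && isset($companies[$prefix])) {
            $bluescan[$key]['company'] = $companies[$prefix];
        }
    }
    return $bluescan;
}

// With the legacy Mongo driver from the question, build the map once:
//   $companies = array();
//   $cursor = $db->maclist->find(array(), array('mac' => 1, 'company' => 1));
//   foreach ($cursor as $doc) {
//       $companies[$doc['mac']] = $doc['company'];
//   }
//   $bluescan = fillCompanies($bluescan, $companies);
```

This replaces 345 round-trips to MongoLab with one query plus in-memory lookups, which is where the 60-plus seconds were going.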

Slow performance for rabbitmq amqp_login (PHP client)

I'm using php-amqp to read and write from a local RabbitMQ server. This is for a high-traffic website. Following the example at http://code.google.com/p/php-amqp/, I haven't found a way to avoid calling amqp_login with every web request. The call to amqp_login is, by far, the slowest in the sequence. Is there an easy way to bypass the need to call this with every web request? We're using Apache on SuSE Linux.
$time = microtime(true);
$connection = amqp_connection_popen('localhost', 5672);
print "connect: ".(microtime(true) - $time) . "\n";
$time = microtime(true);
amqp_login($connection, 'guest', 'guest', '/');
print "login: ".(microtime(true) - $time) . "\n";
$time = microtime(true);
$channel = 1;
amqp_channel_open($connection, $channel);
print "channel open: ".(microtime(true) - $time) . "\n";
$time = microtime(true);
$exchange = 'amq.fanout';
amqp_basic_publish($connection, $channel, $exchange, $routing_key, 'junk', $mandatory = false, $immediate = false, array());
print "publish: ".(microtime(true) - $time) . "\n";
Example Results:
connect: 0.00019311904907227
login: 0.041217088699341
channel open: 0.00034213066101074
publish: 5.6028366088867E-5
Is there an easy way to bypass the need to call this with every web request?
Not specifically familiar with this package, however....
It's surprising that it allows persistent connections but not persistent authentication. One way to fix this would be to key the connection pool on server+port+username+password rather than just server+port, which wouldn't require too much change to the code.
If you don't feel like modifying the C code or arguing with the maintainers, you could run a proxy daemon which maintains an authenticated connection.

php slow lag when login

I am developing a new section on my site and I've noticed a small latency when logging in. On my computer it works great, but when I put it on the server it is slower; the login process is slower on the server, not on my computer.
It is half a second to one second slower.
I suspect my hosting is not as fast as they claim, since on my computer it's fast.
Is there a way I can monitor the speed of the server, with a command-line tool or a PHP script I can run, to find out what's wrong?
Put these three lines of code in various places in your script (replacing "foo" with a description of where you place it in the code):
$h = fopen('log.txt', 'a');
fwrite($h, 'foo: ' . microtime(true) . "\n");
fclose($h);
Then, run your script, and you can see which part is slow.
At the top of the script, put
<?php
function microtime_float()
{
list($usec, $sec) = explode(" ", microtime());
return ((float)$usec + (float)$sec);
}
$start_time = microtime_float();
and at the end
$exec_time = microtime_float() - $start_time;
echo 'Page loaded in: ' . $exec_time . ' seconds';
?>
Compare your local copy with the remote copy.
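As a side note, since PHP 5.0 microtime(true) already returns the timestamp as a float, so the explode-and-cast helper above can be reduced to:

```php
<?php
// microtime(true) returns seconds as a float directly, so no
// string-splitting helper is needed.
$start_time = microtime(true);
// ... page work ...
$exec_time = microtime(true) - $start_time;
echo 'Page loaded in: ' . $exec_time . ' seconds';
```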

Is accessing a local PHP variable faster than accessing a session PHP variable?

This is just a performance question.
Which is faster: accessing a local PHP variable or accessing a session variable?
I do not think this makes any measurable difference. $_SESSION is filled by PHP before your script actually runs, so accessing it is like accessing any other variable.
Superglobals are slightly slower to access than non-superglobal variables. However, the difference only becomes noticeable if you are doing millions of accesses in a script, and even then it doesn't warrant changing your code.
$_SESSION['a'] = 1;
$arr['a'] = 1;
$start = 0; $end = 0;

// A
$start = microtime(true);
for ($a = 0; $a < 1000000; $a++) {
    $arr['a']++;
}
$end = microtime(true);
echo $end - $start . "<br />\n";

// B
$start = microtime(true);
for ($b = 0; $b < 1000000; $b++) {
    $_SESSION['a']++;
}
$end = microtime(true);
echo $end - $start . "<br />\n";
/* Outputs:
0.27223491668701
0.40177798271179
0.27622604370117
0.37337398529053
0.3008668422699
0.39706206321716
0.27507615089417
0.40228199958801
0.27182102203369
0.40200400352478
*/
It depends: are you talking about assigning the $_SESSION value to a local variable for use throughout the file, or simply about the inherent differences between the two types of variables?
One is declared by you and the other is core functionality. It will always be just a smidge slower to copy the $_SESSION value into a local variable, but the difference is negligible compared to the ease of reading and re-use.
There is nothing performance-related in this question.
The only real performance questions are those that include a profiling report; otherwise it's just empty chatter.
In practice, such a difference will never be a bottleneck.
And there is no difference at all anyway.
