Executing MySQL Queries and Creating Instances Several Times Within a Loop - PHP

I have a web application that runs SELECT and INSERT queries against a MySQL database and instantiates a PHP class with the new operator more than a thousand times within a loop. Maybe there are alternatives to my present logic, but my question is: is there any harm if I carry on with this logic? I am not bothered about the time complexity of the algorithm at present, but I am worried about what happens if anything goes wrong during the transaction, and about memory usage. I am giving the piece of code for reference:
$stm_const = "select ce.TIMETAKEN, qm.QMATTER as STRING1, ce.SMATTER as STRING2 from w_clkexam ce, clkmst cm, qsmst qm where ce.QID=qm.QID and cm.ROLLNO=ce.ROLLNO";
for ($c=0; $c < count($rollnos); $c++) {
$stm3 =$stm_const." "."and ce.ROLLNO='$rollnos[$c]'";
$qry3 = mysql_query($stm3) or die("ERROR 3:".mysql_error());
while($row1 = mysql_fetch_array($qry3)) {
echo $string1=$row1['STRING1'];
echo $string2=$row1['STRING2'];
$phpCompareStrings=new PhpCompareStrings($string2, $string1);
$percent=$phpCompareStrings->getSimilarityPercentage();
$percent2=$phpCompareStrings->getDifferencePercentage();
echo '$string1 and $string2 are '.$percent.'% similar and '.$percent2.'% differnt<br/>';
}// end while
}// end for
Please help; I am waiting for your opinions so that I can move forward. Thanks in advance.

I don't see any problem there. You just fetch all the rows from the database and compare the strings for each row. As you assign the object to the same variable each time, the old object is destroyed before the new object is created, so you only ever have one instance of the object in memory. The question is what you want to do with the results: only print them as in your example, or store them for further processing?
Anyway, I think it is not possible to optimize your code without modifying the class. If you are using this class, you can modify it so that it can accept new strings after construction. That way you create only one instance of the class and avoid destroying and creating an object for every row. It will save you some CPU time, but no memory (at any given time, only one instance of the class is alive either way).
Untested modification below:
Modify this function inside the class:
function __construct($str1, $str2) {
    $str1 = trim($str1);
    $str2 = trim($str2);
    if ($str1 == "") { trigger_error("First parameter can not be left blank", E_USER_ERROR); }
    elseif ($str2 == "") { trigger_error("Second parameter can not be left blank", E_USER_ERROR); }
    else {
        $this->str1 = $str1;
        $this->str2 = $str2;
        $this->arr1 = explode(" ", $str1);
        $this->arr2 = explode(" ", $str2);
    }
}
Into these two functions:
function init($str1, $str2) {
    $str1 = trim($str1);
    $str2 = trim($str2);
    if ($str1 == "") { trigger_error("First parameter can not be left blank", E_USER_ERROR); }
    elseif ($str2 == "") { trigger_error("Second parameter can not be left blank", E_USER_ERROR); }
    else {
        $this->str1 = $str1;
        $this->str2 = $str2;
        $this->arr1 = explode(" ", $str1);
        $this->arr2 = explode(" ", $str2);
    }
}
function __construct($str1, $str2) { $this->init($str1, $str2); }
Then create the object outside the loop and only call $phpCompareStrings->init($string2,$string1) inside the loop.
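For clarity, here is a minimal sketch of the question's loop rewritten around init(); it assumes the modified class above and the same $stm_const / $rollnos setup as in the question:

$phpCompareStrings = new PhpCompareStrings("a", "b"); // placeholder strings; real ones are set via init()
for ($c = 0; $c < count($rollnos); $c++) {
    $qry3 = mysql_query($stm_const." and ce.ROLLNO='$rollnos[$c]'") or die("ERROR 3: ".mysql_error());
    while ($row1 = mysql_fetch_array($qry3)) {
        // Reuse the single instance instead of constructing a new object per row
        $phpCompareStrings->init($row1['STRING2'], $row1['STRING1']);
        $percent = $phpCompareStrings->getSimilarityPercentage();
        $percent2 = $phpCompareStrings->getDifferencePercentage();
        echo $row1['STRING1']." and ".$row1['STRING2']." are $percent% similar and $percent2% different<br/>";
    }
}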

Related

Running PHP & Mysqli queries in Parallel

I'm trying to extract data from thousands of premade SQL files. I have a script that does what I need using the mysqli driver in PHP, but it's really slow since it processes one SQL file at a time. I modified the script to create unique temp database names, and each SQL file is loaded into one of them. Data is extracted to an archive database table, then the temp database is dumped.

In an effort to speed things up, I created 4 scripts similar to the one below, where each for loop is stored in its own unique PHP file (the code below is only a quick demo of what's going on across the 4 separate files); they are set up so each grabs only 1/4 of the files from the source file folder. All of this works perfectly: the scripts run, and there is zero interference with file handling. The issue is that I seem to get almost zero performance boost. Maybe 10 seconds faster :( I quickly refreshed my phpMyAdmin database listing page and could see the 4 different databases loaded at any time, but I also noticed that it looked like everything was still running more or less sequentially, as the DB names were changing on the fly. I went the extra step of creating a unique user for each script with its own connection. No improvement.

Can I get this to work with mysqli / PHP, or do I need to look into some other options? I'd prefer to do this all in PHP if I can (version 7.0). I tested by running the PHP scripts in my browser. Is that the issue? I haven't written any code to execute them on the command line and set them to the background yet. One last note: none of the users in my MySQL database have limits on connections, etc.
$numbers = array('0','1','2','3','4','5','6','7','8','9','10','11','12','13','14','15','16','17','18','19','20');
$numCount = count($numbers);
$a = '0';
$b = '1';
$c = '2';
$d = '3';
$rebuild = array();
echo "<br>";
for ($a; $a <= $numCount; $a += 4) {
    if (array_key_exists($a, $numbers)) {
        echo $numbers[$a]."<br>";
    }
}
echo "<br>";
for ($b; $b <= $numCount; $b += 4) {
    if (array_key_exists($b, $numbers)) {
        echo $numbers[$b]."<br>";
    }
}
echo "<br>";
for ($c; $c <= $numCount; $c += 4) {
    if (array_key_exists($c, $numbers)) {
        echo $numbers[$c]."<br>";
    }
}
echo "<br>";
for ($d; $d <= $numCount; $d += 4) {
    if (array_key_exists($d, $numbers)) {
        echo $numbers[$d]."<br>";
    }
}
Try this:
<?php
// Requires the pthreads extension (thread-safe build of PHP)
class BackgroundTask extends Thread {
    public $output;
    protected $input;

    public function run() {
        /* Processing here, use $output for... well... outputting data */
        // Here you would implement your for() loops, for example, using $this->input as their data

        // Some dumb value to demonstrate (note: assign to the property, not a local variable)
        $this->output = "SOME DATA!";
    }

    function __construct($input_data) {
        $this->input = $input_data;
    }
}

// Create instances with different input data
// Each "quarter" will be a quarter of your data, as you're trying to do right now
$job1 = new BackgroundTask($first_quarter);
$job1->start();
$job2 = new BackgroundTask($second_quarter);
$job2->start();
$job3 = new BackgroundTask($third_quarter);
$job3->start();
$job4 = new BackgroundTask($fourth_quarter);
$job4->start();

// ==================
// "join" each job, i.e. wait until it's finished
$job1->join();
echo "First output: " . $job1->output;
$job2->join();
echo "Second output: " . $job2->output;
$job3->join();
echo "Third output: " . $job3->output;
$job4->join();
echo "Fourth output: " . $job4->output;
?>
When you fire four HTTP calls at your own script, you're tying up server connections for no useful reason; you're taking away slots from other users who may be trying to access your website.
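If pthreads isn't available (it needs a thread-safe PHP build), a simpler route, which the question already hints at, is launching the four worker scripts as separate background processes from the command line. A minimal sketch, where worker1.php through worker4.php are hypothetical files each handling one quarter of the SQL files:

<?php
// Start each worker as its own PHP process, then wait for all of them.
$procs = array();
for ($i = 1; $i <= 4; $i++) {
    // Empty descriptor spec: the children inherit this script's stdout/stderr
    $procs[$i] = proc_open("php worker{$i}.php", array(), $pipes);
}
foreach ($procs as $i => $proc) {
    $exit = proc_close($proc); // blocks until that worker exits
    echo "worker{$i} finished with exit code {$exit}\n";
}
?>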

Pass By Reference to COM Object in PHP

So I'm hoping someone can help, and I'm sure this is probably something simple I'm missing. I'm using PHP to access a .NET API for third-party software.
Based on the very minimalist documentation for the API, I have a working VBScript that connects to the object, performs a login, and then runs a query whose output is dumped to a message box.
Here's the VBScript sample:
'Test device status
Set Xxx = CreateObject("The.API.Object.Goes.Here")

'Login
Result = Xxx.LoginToHost("xxx.xxx.xxx.xxx", "8989", "Administrator", "")
if (Result = true) then
    MsgBox("OK")
else
    MsgBox("Error - " & Xxx.LastError)
    WScript.Quit
end if

'Get Status
Result = Xxx.GetDeviceStatus("", out)
if (Result = true) then
    MsgBox(out)
else
    MsgBox("Error - " & Xxx.LastError)
end if

'Logout
Result = Xxx.Logout()
if (Result = true) then
    MsgBox("Logout OK")
else
    MsgBox("Error - " & Xxx.LastError)
end if
Xxx.GetDeviceStatus has two parameters: the first is a device target (if left blank, it returns all devices); the second is the string variable the result is dumped into.
When the script executes, the second message box contains a list of all devices, as I would expect.
In PHP I have:
$obj = new DOTNET("XxxScripting, Version=1.0.XXXX.XXXXXX, Culture=neutral, PublicKeyToken=XXXXXXXXXXXXXXXX","Here.Goes.The.Api");
$obj->LoginToHost('xxx.xxx.xxx.xxx','8989','Administrator','');
$result = $obj->GetDeviceStatus('','out');
echo $result."<br />";
Echoing $result gives 1, because the value of $result is a boolean and GetDeviceStatus succeeds. What I can't figure out is how to get the value of 'out', which is the actual query result.
Any help would be greatly appreciated.
According to the VBScript, the second parameter of the GetDeviceStatus() method call should be a variable that gets populated with the output.
However, in the PHP example you are just passing the string 'out', which isn't equivalent to what the VBScript does.
Instead, try passing a PHP variable to the method and then echoing that variable, like this:
$result = $obj->GetDeviceStatus('', $out);
if ($result)
    echo $out."<br />";
After a bit of digging, it appears from the PHP reference that you need to pass by-reference variables to COM using the VARIANT data type.
Quote from ferozzahid [at] usa [dot] com on PHP - COM Functions:
"To pass a parameter by reference to a COM function, you need to pass VARIANT to it. Common data types like integers and strings will not work for it."
With this in mind, maybe this will work:
$out = new VARIANT;
$result = $obj->GetDeviceStatus('', $out);
if ($result)
    echo $out."<br />";

PHP: mysqli clearing stored results

I was using this function to clear stored results after calling a procedure etc.
function clearStoredResults($mysqli_link) {
    #------------------------------------------
    while ($mysqli_link->next_result()) {
        if ($l_result = $mysqli_link->store_result()) {
            $l_result->free();
        }
    }
}
But I keep getting the following message/error:
mysqli::next_result(): There is no next result set. Please, call mysqli_more_results()/mysqli::more_results() to check whether to call this function/method in
If I change next_result() to more_results(), the page keeps loading for a while and I get a timeout error:
Maximum execution time of 30 seconds exceeded
Any ideas on how to fix this?
You need to use both methods:
function clearStoredResults($mysqli_link) {
    #------------------------------------------
    while ($mysqli_link->more_results()) {
        $mysqli_link->next_result();
        if ($l_result = $mysqli_link->store_result()) {
            $l_result->free();
        }
    }
}
Use more_results to control the loop, and next_result to actually move the cursor.
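A minimal usage sketch, assuming an open $mysqli connection and a hypothetical stored procedure myProc (a CALL leaves an extra status result set behind, which is exactly what this helper discards):

$result = $mysqli->query("CALL myProc()");
if ($result) {
    while ($row = $result->fetch_assoc()) {
        // ... use the rows you care about ...
    }
    $result->free();
}
clearStoredResults($mysqli); // discard the leftover result set(s) so the next query doesn't fail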

Array Insert Time Jump

While digging into the hash and zval structures and how PHP arrays are built on them, I came across strange insert times.
Here is an example:
$array = array();
$someValueToInsert = 100;
for ($i = 0; $i < 10000; ++$i) {
    $time = microtime(true);
    array_push($array, $someValueToInsert);
    echo $i . " : " . (int)((microtime(true) - $time) * 100000000) . "<br/>";
}
So I found that every 1024th, 2048th, 4096th... element takes much more time to insert (roughly 10x).
It doesn't depend on whether I use array_push, array_unshift, or simply $array[] = $someValueToInsert.
I was thinking it's related to this field in the HashTable structure:
typedef struct _hashtable {
    ...
    uint nNumOfElements;
    ...
} HashTable;
nNumOfElements has a default max value, but that doesn't answer why inserting takes more time at those particular counts (1024, 2048, ...).
Any thoughts?
While I would suggest double-checking my answer on the PHP internals list, I believe the answer lies in zend_hash_do_resize(). When more elements are needed in the hash table, this function is called and the existing hash table is doubled in size. Since the table starts life at 1024, this doubling explains the results you've observed. Code:
} else if (ht->nTableSize < HT_MAX_SIZE) { /* Let's double the table size */
    void *old_data = HT_GET_DATA_ADDR(ht);
    Bucket *old_buckets = ht->arData;

    HANDLE_BLOCK_INTERRUPTIONS();
    ht->nTableSize += ht->nTableSize;
    ht->nTableMask = -ht->nTableSize;
    HT_SET_DATA_ADDR(ht, pemalloc(HT_SIZE(ht), ht->u.flags & HASH_FLAG_PERSISTENT));
    memcpy(ht->arData, old_buckets, sizeof(Bucket) * ht->nNumUsed);
    pefree(old_data, ht->u.flags & HASH_FLAG_PERSISTENT);
    zend_hash_rehash(ht);
    HANDLE_UNBLOCK_INTERRUPTIONS();
I am uncertain whether the reallocation is the performance hit, the rehashing is the hit, or the fact that the whole block is uninterruptible. It would be interesting to put a profiler on it. I think some people might have already done that for PHP 7.
Side note: the thread-safe version does things differently. I'm not overly familiar with that code, so there may be a different issue going on if you're using ZTS.
I think it is related to the implementation of dynamic arrays.
See "Geometric expansion and amortized cost" at http://en.wikipedia.org/wiki/Dynamic_array :
"To avoid incurring the cost of resizing many times, dynamic arrays resize by a large amount, such as doubling in size, and use the reserved space for future expansion."
You can read about arrays in PHP here as well: https://nikic.github.io/2011/12/12/How-big-are-PHP-arrays-really-Hint-BIG.html
It is standard practice for dynamic arrays. For example, see a C++ dynamic array increasing its capacity:
capacity = capacity * 2; // doubles the capacity of the array
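As a side experiment, the spikes disappear with a structure that allocates its storage up front. A minimal sketch using SplFixedArray (part of PHP's SPL), assuming the element count is known in advance:

$n = 10000;
$fixed = new SplFixedArray($n); // backing store allocated once, up front
for ($i = 0; $i < $n; ++$i) {
    $time = microtime(true);
    $fixed[$i] = 100;
    // the ~10x spikes at 1024, 2048, 4096... should no longer appear
    echo $i . " : " . (int)((microtime(true) - $time) * 100000000) . "<br/>";
}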

PHP returns only half of my results from mysql_query

I have a fairly large query that joins 4 tables and returns 50 rows. When I run it on my PHP server, the pull takes 0.0013 seconds, which is good. But when I call the PHP from my iOS device to get the records, sometimes I get them all and sometimes I get half.
Here is my PHP SQL code.
$sql = "SELECT NF.id, NF.type, NF.groupid, NF.classid, NF.videoid, NF.lessonid, NF.text, NF.path, NF.datetime, Groups.name as groupname, Groups.imagepath as groupimage, Classes.title as classname, Classes.imagepath as classimage, Videos.title as videoname, Videos.videopath as videopath, Lessons.title as lessonname, Lessons.imagepath as lessonimage
FROM NewsFeed as NF
LEFT OUTER JOIN Groups on (NF.groupid=Groups.id)
LEFT OUTER JOIN Classes on (NF.classid=Classes.id)
LEFT OUTER JOIN Videos on (NF.videoid=Videos.id)
LEFT OUTER JOIN Lessons on (NF.lessonid=Lessons.id)
WHERE $searchstring
ORDER BY NF.datetime DESC LIMIT 50";
$request = mysql_query($sql) or die(mysql_error());
$arr = array();
if (mysql_num_rows($request)) {
    while ($obj = mysql_fetch_object($request)) {
        $arr[] = $obj;
    }
}
// Return Array
echo '{"newsfeed":'.json_encode($arr).'}';
mysql_close($link);
Now, when I echo the JSON-encoded array, sometimes I get the entire payload starting from "newsfeed": and other times I get something like this:
2014-04-24 23:07:09.574 ProgramName[6573:60b] our programming theory...","path":"http://site.com/LMS/NewsFeedPictures/IMG2014170220465189.PNG","datetime":"2014-02-17 20:46:56","groupname":"LMS for iOS","groupimage":"http://site.com/LMS/GroupThumbnails/LMS.jpg","classname":null,"classimage":null,"videoname": ... (and more and more records until the end.)
As you can see, the spot where that content string starts is in the middle of my data!
So the question is: why is the PHP only returning partial data?
I have seen this before.
My main culprit at the time was a double quote (") in a string within the data returned from MySQL; it invalidated the JSON string by the time it arrived at the JS call.
Try validating the JSON generated by the PHP through a JSON validator. My favourite is JSLint.
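You can also check the payload server-side before it ever leaves PHP. A minimal sketch using json_decode() and json_last_error(), mirroring the manual wrapping from the question:

$payload = '{"newsfeed":'.json_encode($arr).'}';
if (json_decode($payload) === null && json_last_error() !== JSON_ERROR_NONE) {
    error_log('Invalid JSON payload: '.json_last_error_msg());
}
echo $payload;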
After doing some checking, double-checking, and some more testing, I came to the conclusion that it was not my PHP after all; it was my NSURLConnection.
-(void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data {
You CANNOT rely on this method delivering the whole response in a single call; it hands the data over in chunks and will break on some occasions. Therefore you must keep an NSMutableData instance variable (arrData below) and append each chunk like this...
    if (arrData != nil) {
        [arrData appendData:data];
    }
And in the connectionDidFinishLoading: delegate method, do this...
-(void)connectionDidFinishLoading:(NSURLConnection *)connection {
    if (arrData != nil) {
        // Some method to return or use arrData goes here
        arrData = nil;
    }
}
That way you will always receive the full data. Thanks #Robotys for the help and new tools!
