Good evening all.
I'm currently working on a small personal project. Its purpose is to retrieve numerous values from a database on my backend and store them as variables. These variables are then used to modify the appearance of some HTML5 Canvas objects (in this case, I'm using arcs).
Please note that the values in the database are Text, and thus my bind statements refer to that. The queries I'm calling (AVG, MIN, MAX) work fine with the values I've got, as the fields store numerical data (this is merely due to another script that deals with adding or updating the data -- that's already running MySQLi, and using Text was the best solution for my situation).
Now, I achieved what I wanted with standard MySQL queries, but the code is messy and its performance could prove terrible as the database grows. For that reason, I want to use loops. I also feel that MySQLi's bind_param would be much better for security. The page doesn't accept ANY user input, it's merely for display, so injection is less of a concern, but at some point in the future I'll be looking to expand it to allow users to control what is displayed.
Here's a sample of my original MySQL PHP code:
$T0A = mysql_query('SELECT AVG(Temp0) FROM VTempStats'); // Average
$T0B = mysql_query('SELECT MIN(Temp0) FROM VTempStats'); // Bottom/MIN
$T0T = mysql_query('SELECT MAX(Temp0) FROM VTempStats'); // Top/MAX
$T1A = mysql_query('SELECT AVG(Temp1) FROM VTempStats'); // Average
$T1B = mysql_query('SELECT MIN(Temp1) FROM VTempStats'); // Bottom/MIN
$T1T = mysql_query('SELECT MAX(Temp1) FROM VTempStats'); // Top/MAX
$r_T0A = mysql_result($T0A, 0);
$r_T0T = mysql_result($T0T, 0);
$r_T0B = mysql_result($T0B, 0);
$r_T1A = mysql_result($T1A, 0);
$r_T1T = mysql_result($T1T, 0);
$r_T1B = mysql_result($T1B, 0);
if ($r_T0A == "" ) {$r_T0A = 0;}
if ($r_T1A == "" ) {$r_T1A = 0;}
if ($r_T0B == "" ) {$r_T0B = 0;}
if ($r_T1B == "" ) {$r_T1B = 0;}
if ($r_T0T == "" ) {$r_T0T = 0;}
if ($r_T1T == "" ) {$r_T1T = 0;}
That's shorter than the original, as there are 4x3 sets of queries (Temp0, Temp1, Temp2, Temp3, with MIN, MAX and AVG for each). Note that the last six if statements are merely there to ensure that fields that are NULL are automatically set to 0 before my canvas script attempts to work with them (see below).
To show that value on the arc, I'd use this in my canvas script (for example):
var endAngle = startAngle + (<?= $r_T0A ?> / 36+0.02);
It worked for me, and what was displayed was exactly what I expected.
Now, in trying to clean up my code and move to a loop and MySQLi, I'm running into problems. Being very new to both SQL and PHP, I could use some assistance.
This is what I tried:
$q_avg = "SELECT AVG(Temp?) FROM VTempStats";
for ($i_avg = 0; $i_avg <= 3; ++$i_avg)
{
    if ($s_avg = $mysqli->prepare($q_avg))
    {
        $s_avg->bind_param('s', $i_avg);
        $s_avg->execute();
        $s_avg->bind_result($avg);
        $s_avg->fetch();
        echo $avg;
    }
}
Note: $mysqli is the MySQLi connection. I've cut the code down to only show the AVG query loop, but the MIN and MAX loops are nearly identical.
Obviously, that won't work: for one thing, a bound parameter can't stand in for part of a column name like Temp?, and it's also only assigning one variable for each set of queries, instead of 4 variables for each loop.
As you can imagine, what I want to do is assign all 12 values to individual variables so that I can work with them in my canvas script. I'm not entirely sure how to go about this, though.
I can echo individual values out through MySQLi, and I can query the database to change or add data through MySQLi, but making a loop that does what I intend with MySQLi (or even MySQL) is something I need help with.
From my reading of your code, you have a fixed number of columns and know their names, and you are applying the AVG(), MIN(), MAX() aggregates to the same table over the same aggregate group, with no WHERE clause applied. Therefore, they can all be done in one query from which you just need to fetch one single row.
SELECT
AVG(Temp0) AS a0,
MIN(Temp0) AS min0,
MAX(Temp0) AS max0,
AVG(Temp1) AS a1,
MIN(Temp1) AS min1,
MAX(Temp1) AS max1,
AVG(Temp2) AS a2,
MIN(Temp2) AS min2,
MAX(Temp2) AS max2,
AVG(Temp3) AS a3,
MIN(Temp3) AS min3,
MAX(Temp3) AS max3
FROM VTempStats
This can be done in a single call to $mysqli->query(), and no parameter binding is necessary so you don't need the overhead of prepare(). One call to fetch_assoc() is needed to retrieve a single row, with columns aliased like a0, min0, max0, etc... as I have done above.
// Run the combined query and fetch one row
$result = $mysqli->query($sql); // $sql holds the combined SELECT above
$values = $result->fetch_assoc();
print_r($values);
printf("Avg 0: %s, Min 0: %s, Max 0: %s... etc....", $values['a0'], $values['min0'], $values['max0']);
These can be pulled into the global scope with extract(), but I recommend against that. Keeping them in their $values array makes their source more explicit.
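Incidentally, the six blank checks from your original code can be folded into the query itself with COALESCE, which substitutes 0 whenever an aggregate comes back NULL (e.g. on an empty table):
SELECT
COALESCE(AVG(Temp0), 0) AS a0,
COALESCE(MIN(Temp0), 0) AS min0,
COALESCE(MAX(Temp0), 0) AS max0
-- ... repeat for Temp1 through Temp3 ...
FROM VTempStats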
As you can imagine, what I want to do is assign all 12 values to individual variables so that I can work with them in my canvas script. I'm not entirely sure how to go about this, though.
Understood. Here is what I would do.
<?php // RAY_temp_scottprichard.php
error_reporting(E_ALL);
echo '<pre>';
// RANGE OF TEMPS
$temps = range(0,3);
// RANGE OF VALUES
$funcs = array
( 'A' => 'AVG'
, 'B' => 'MIN'
, 'T' => 'MAX'
)
;
// CONSTRUCT THE QUERY STRING
$query = 'SELECT ';
foreach ($temps as $t)
{
foreach ($funcs as $key => $func)
{
$query .= PHP_EOL
. $func
. '(Temp'
. $t
. ') AS '
. 'T'
. $t
. $key
. ', '
;
}
}
// REMOVE THE UNWANTED TRAILING COMMA
$query = rtrim($query, ', ');
// ADD THE TABLE NAME
$query .= ' FROM VTempStats';
// ADD ANY ORDER, LIMIT, WHERE CLAUSES HERE
$query .= ' WHERE 1=1';
// SHOW THE WORK PRODUCT
var_dump($query);
See the output query string here: http://www.laprbass.com/RAY_temp_scottpritchard.php
When you run this query, you will fetch one row with mysqli_fetch_assoc() or equivalent, and it will have all the values you want in that row, with named keys. Then you can use something like this to inject the variable names and values into your script: http://php.net/manual/en/function.extract.php
PHP extract() allows the use of a prefix, so you should be able to avoid having to make too many changes to your existing script.
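For instance, here is a minimal sketch of that last step, assuming $query is the string generated above and $mysqli is the open connection. Since the aliases come out as T0A, T0B, T0T and so on, extracting with the prefix 'r' reproduces the exact $r_T0A-style variable names from the original script:
// Run the generated query and fetch the single row of aggregates
$result = $mysqli->query($query);
$row = $result->fetch_assoc(); // e.g. ['T0A' => '21.4', 'T0B' => '18.0', ...]
// AVG/MIN/MAX return NULL on an empty table; default to 0 as before
$row = array_map(function ($v) { return $v === null ? 0 : $v; }, $row);
// EXTR_PREFIX_ALL prepends the prefix plus an underscore to every key,
// yielding $r_T0A, $r_T0B, $r_T0T, etc.
extract($row, EXTR_PREFIX_ALL, 'r');
echo $r_T0A; // same variable name the original canvas script used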
HTH, ~Ray
Related
I'm wondering whether this kind of logic would improve query performance. Say, for example, rather than checking whether a user likes a post for each element in an array and firing a query for each,
I could instead push the primary IDs into an array and then perform an IN query on them. This would reduce 15 per-item queries down to 2 queries in total, including the initial one.
I'm using PHP PDO and MySQL.
Any advice? Am I on the right track, people? :D
$items is the result set from the database; in this case they are questions that users are asking. I get a response in about 140ms, and I've set a limit on how many items are loaded at once with pagination.
$questionIds = [];
foreach ($items as $item) {
array_push($questionIds, $item->question_id);
}
$items = loggedInUserLikesQuestions($questionIds, $items, $user_id);
The IN clause is definitely faster to execute. However, you will only see significant wall-clock benefits once the number of items in your IN clause (on average) gets high.
The reason there is a speed difference, even though each individual update may be lightning-fast, is the setup, execution, tear-down, and response of every query sent to and received from the server. When you are doing thousands (or millions) of these as fast as you can, I've seen throughput go from 500/sec to 200,000/sec by batching. That may give you some idea.
However, with the IN-clause method, you need to make sure your IN clause does not become so big that you hit the maximum query size (see the max_allowed_packet variable).
Here is a simple set of functions that will automatically batch up into IN clauses of 1000 items each:
<?php
$db = new PDO('...');
$__q = [];
$flushQueue = function() use ($db, &$__q) {
if ( count($__q) > 0 ) {
$sanitized_ids = [];
foreach ( $__q as $id ) { $sanitized_ids[] = (int) $id; }
$db->query("UPDATE question SET linked = 1 WHERE id IN (". join(',',$sanitized_ids) .")");
$__q = [];
}
};
$queuedUpdate = function($question_id) use (&$__q, $flushQueue){
$__q[] = $question_id;
if ( count($__q) >= 1000 ) { $flushQueue(); }
};
// Then your code...
foreach ($items as $item) {
$queuedUpdate($item->question_id);
}
$flushQueue();
Obviously, you don't have to use anonymous functions if you are in a class. But the above will work anywhere (assuming you are on PHP >= 5.3).
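As a side note, since you are on PDO anyway: if the IDs were not safe to cast to int, you could keep the same batching but bind placeholders instead of interpolating values. A minimal sketch of that variation, using the same hypothetical question/linked schema as above:
$flushQueue = function () use ($db, &$__q) {
    if ( count($__q) > 0 ) {
        // One "?" per queued id: ?,?,?,...
        $placeholders = implode(',', array_fill(0, count($__q), '?'));
        $stmt = $db->prepare("UPDATE question SET linked = 1 WHERE id IN ($placeholders)");
        $stmt->execute(array_values($__q));
        $__q = [];
    }
};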
I have a mysql query:
$analyse_ot1="SELECT COUNT(*) AS ot_count FROM first, base, users
WHERE base.base_id=$base
AND
users.base_id=$base
AND
users.id=first.user_id
AND
first.$sub='OT'";
$result_ot1=mysqli_query($con,$analyse_ot1);
$row_ot1= mysqli_fetch_assoc($result_ot1);
$total_ot1= $row_ot1['ot_count'];
$otper1=($total_ot1/$total)*100;
I will be re-using this code many times on a web-page and want to be able to run it as part of a loop.
I don't want to have to rename my variables each time (where $result_ot1 becomes $result_ot2, etc...).
I've tried introducing an indexing variable $x and then appending it to another variable name:
$x=2;
$result_ot.$x=....
But it doesn't want to work.
Any suggestions? I have an idea that arrays might be needed, but I'm concerned that the queries will be throwing up arrays, and I'm not sure an array in an array isn't going to set my computer on fire...
A hint for using your query everywhere:
// declare this in a file, 'functions.php', for example
function analyse($base, $sub, $con)
{
$analyse_ot1 = 'SELECT COUNT(*) AS ot_count FROM first, base, users'
." WHERE base.base_id=$base AND users.base_id=$base"
." AND users.id=first.user_id AND first.$sub='OT'";
$result_ot1 = mysqli_query($con, $analyse_ot1);
$row_ot1 = mysqli_fetch_assoc($result_ot1);
return $row_ot1['ot_count'];
}
And in any other place of your project:
include 'functions.php';
$total_ot1 = analyse($base, $sub, $con);
$otper1 = ($total_ot1 / $total) * 100;
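One caveat, as a sketch under assumptions: $base is interpolated straight into the SQL, so a parameterized version would be safer. A column name like $sub cannot be bound as a parameter, so it has to be checked against a whitelist instead. Something like this, where the whitelist values are hypothetical and mysqli_stmt_get_result() requires the mysqlnd driver:
function analyse($base, $sub, $con)
{
    // A column name cannot be bound, so validate it against a whitelist
    $allowed = array('sub1', 'sub2'); // replace with your real column names
    if (!in_array($sub, $allowed, true)) {
        return 0;
    }
    $sql = 'SELECT COUNT(*) AS ot_count FROM first, base, users'
         . ' WHERE base.base_id = ? AND users.base_id = ?'
         . " AND users.id = first.user_id AND first.$sub = 'OT'";
    $stmt = mysqli_prepare($con, $sql);
    mysqli_stmt_bind_param($stmt, 'ii', $base, $base);
    mysqli_stmt_execute($stmt);
    $row = mysqli_stmt_get_result($stmt)->fetch_assoc();
    return $row['ot_count'];
}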
I did not understand the loop part well. But in order to save your results in an array, you could just do this:
$totalCounts = [];
foreach ($subs as $sub) { // loop over whatever varies per query
    $total_current = analyse($base, $sub, $con);
    $totalCounts[] = $total_current;
}
And you will have $totalCounts as an array of the results of the query inside our function. That could be a way of saving your results in an array.
You could also make it associative:
$x = 0;
foreach ($subs as $sub) { // same loop as above
    $total_current = analyse($base, $sub, $con);
    $totalCounts['count_' . $x] = $total_current;
    $x++;
}
So, if you now access $totalCounts['count_3'], you would be accessing the result of the query from the fourth iteration of the loop (since $x starts at 0).
I don't know if this is the answer you need, but maybe it helps a bit.
Unfortunately I can't show you the code, but I can give you an idea of what it looks like, what it does, and what problem I have...
<?php
include('db.php');
include('tools.php');
$c = new GetDB(); // Connection to DB
$t = new Tools(); // Classes to clean, prevent XSS and others
if(isset($_POST['var'])){
$nv = json_decode($_POST['var']);
foreach($nv as $k) {
$id = $t->clean($k->id);
// ... goes on for about 10 keys
// this might seems redundant or insufficient
$id = $c->real_escape_string($id);
// ... goes on for the rest of keys...
$q = $c->query("SELECT * FROM table WHERE id = '$id'");
$r = $q->fetch_row();
if ($r[1] > 0) {
// Item exist in DB then just UPDATE
$q1 = $c->query("UPDATE TABLE1 ...");
$q4 = $c->query("UPDATE TABLE2 ...");
if ($x == 1) {
$q2 = $c->query("SELECT ...");
$rq = $q2->fetch_row();
if ($rq[0] > 0) {
// Item already in table just update
$q3 = $c->query("UPDATE TABLE3 ...");
} else {
// Item not in table then INSERT
$q3 = $c->query("INSERT INTO TABLE3 ...");
}
}
} else {
// Item not in DB then Insert
$q1 = $c->query("INSERT INTO TABLE1 ...");
$q4 = $c->query("INSERT INTO TABLE2 ...");
$q3 = $c->query("INSERT INTO TABLE4 ...");
if($x == 1) {
$q5 = $c->query("INSERT INTO TABLE3 ...");
}
}
}
}
As you can see, it's a very basic INSERT/UPDATE tables script. Before we release to full production, we did some tests to see that the script was working as it should, and the "results" were excellent...
So we ran this code against 100 requests and everything went just fine: less than 1.7 seconds for the 100 requests. But then we saw the amount of data that needed to be sent/POSTed, and it was a jaw-drop for me: over 20K items. It takes about 3 to 5 minutes to send the POST, but the script always crashes. The "data" is an array in JSON:
array (
[0] => array (
[id] => 1,
[val2] => 1,
[val3] => 1,
[val4] => 1,
[val5] => 1,
[val6] => 1,
[val7] => 1,
[val8] => 1,
[val9] => 1,
[val10] => 1
),
[1] => array (
[id] => 2,
[val2] => 2,
[val3] => 2,
[val4] => 2,
[val5] => 2,
[val6] => 2,
[val7] => 2,
[val8] => 2,
[val9] => 2,
[val10] => 2
),
//... about 10 to 20K depend on the day and time
)
...but in JSON. Anyway, sending this information is not the problem; like I said, it can take about 3 to 5 minutes. The problem is the code that receives the data and runs the queries. On normal shared hosting we get a 503 error, which a debug showed to be a timeout. For our VPS we can raise max_execution_time to whatever we need; processing 10K+ items takes about 1 hour there. But on shared hosting we can't use max_execution_time. So I asked the other developer, the one sending the information, to send batches of 1K instead of 10K+ in one blow, let it rest for a second, then send another batch, and so on. So far I haven't got any answer. I was thinking of doing the "pause" on my end instead: say, after processing 1K items, wait for a second, then continue. But I don't see that as being as efficient as receiving the data in batches... how would you solve this?
Sorry, I don't have enough reputation to comment everywhere yet, so I have to write this in an answer. I would recommend zedfoxus' method of batch processing above. In addition, I would highly recommend figuring out a way to process those queries faster. Keep in mind that every single PHP function call, etc. gets multiplied by every row of data. Here are just a couple of the ways you might be able to get better performance:
Use prepared statements. This will allow MySQL to cache the memory operation for each successive query. This is really important (there is a sketch of this after these suggestions).
If you use prepared statements, then you can drop the $c->real_escape_string() calls. I would also scratch my head to see what you can safely leave out of the $t->clean() method.
Next, I would question the cost of handling every single row individually. I'd have to benchmark it to be sure, but I think running a few PHP statements beforehand will be faster than making umpteen unnecessary MySQL SELECT and UPDATE calls. MySQL is much faster when inserting multiple rows at a time. If you expect multiple rows of your input to be changing the same row in the database, then you might want to consider the following:
a. Think about creating a temporary, precompiled array (depending on memory usage involved) that stores the unique rows of data. I would also consider doing the same for the secondary TABLE3. This would eliminate needless "update" queries, and make part b possible.
b. Consider a single query that selects every id from the database that's in the array. This will be the list of items to use an UPDATE query for. Update each of these rows, removing them from the temporary array as you go. Then, you can create a single, multi-row insert statement (prepared, of course), that does all of the inserts at a single time.
Take a look at optimizing your MySQL server parameters to better handle the load.
I don't know if this would speed up a prepared INSERT statement at all, but it might be worth a try. You can wrap the INSERT statement within a transaction as detailed in an answer here: MySQL multiple insert performance
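To make the prepared-statement and transaction suggestions concrete, here is a minimal sketch using plain mysqli. The table, column names, and credentials are placeholders since the real schema wasn't shown, and your GetDB class may expose this differently:
$mysqli = new mysqli('localhost', 'user', 'pass', 'db'); // placeholder credentials
$mysqli->begin_transaction(); // PHP 5.5+; otherwise $mysqli->query('START TRANSACTION')
// Prepare once, execute many: MySQL parses the statement a single time
$stmt = $mysqli->prepare(
    "INSERT INTO TABLE1 (id, val2) VALUES (?, ?)
     ON DUPLICATE KEY UPDATE val2 = VALUES(val2)"
);
foreach ($nv as $k) {
    $id   = (int) $k->id;
    $val2 = (int) $k->val2;
    $stmt->bind_param('ii', $id, $val2);
    $stmt->execute();
}
$stmt->close();
$mysqli->commit(); // one commit instead of one per row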
I hope that helps, and if anyone else has some suggestions, just post them in the comments and I'll try to include them.
Here's a look at the original code with just a few suggestions for changes:
<?php
/* You can make sure that the connection type is persistent and
* I personally prefer using the PDO driver.
*/
include('db.php');
/* Definitely think twice about each tool that is included.
* Only include what you need to evaluate the submitted data.
*/
include('tools.php');
$c = new GetDB(); // Connection to DB
/* Take a look at optimizing the code in the Tools class.
* Avoid any and all kinds of loops–this code is going to be used in
* a loop and could easily turn into O(n^2) performance drain.
* Minimize the amount of string manipulation requests.
* Optimize regular expressions.
*/
$t = new Tools(); // Classes to clean, prevent XSS and others
if(isset($_POST['var'])){ // !empty() catches more cases than isset()
$nv = json_decode($_POST['var']);
/* LOOP LOGIC
* Definitely test my hypothesis yourself, but this is similar
* to what I would try first.
*/
//Row in database query
$inTableSQL = "SELECT id FROM TABLE1 WHERE id IN("; //keep adding to it
foreach ($nv as $k) {
/* I would personally use specific methods per data type.
* Here, I might use a type cast, plus valid int range check.
*/
$id = $t->cleanId($k->id); //I would include a type cast: (int)
// Similarly for other values
//etc.
// Then save validated data to the array(s)
$data[$id] = array($values...);
/* Now would also be a good time to add the id to the SELECT
* statement
*/
$inTableSQL .= "$id,";
}
$inTableSQL = rtrim($inTableSQL, ',') . ");"; // trim the trailing comma before closing
// Execute query here
// Then step through the query ids returned, perform UPDATEs,
// remove the array element once UPDATE is done (use prepared statements)
foreach (.....
/* Then, insert the remaining rows all at once...
* You'll have to step through the remaining array elements to
* prepare the statement.
*/
foreach(.....
} //end initial POST data if
/* Everything below here becomes irrelevant */
foreach($nv as $k) {
$id = $t->clean($k->id);
// ... goes on for about 10 keys
// this might seems redundant or insufficient
$id = $c->real_escape_string($id);
// ... goes on for the rest of keys...
$q = $c->query("SELECT * FROM table WHERE id = '$id'");
$r = $q->fetch_row();
if ($r[1] > 0) {
// Item exist in DB then just UPDATE
$q1 = $c->query("UPDATE TABLE1 ...");
$q4 = $c->query("UPDATE TABLE2 ...");
if ($x == 1) {
$q2 = $c->query("SELECT ...");
$rq = $q2->fetch_row();
if ($rq[0] > 0) {
// Item already in table just update
$q3 = $c->query("UPDATE TABLE3 ...");
} else {
// Item not in table then INSERT
$q3 = $c->query("INSERT INTO TABLE3 ...");
}
}
} else {
// Item not in DB then Insert
$q1 = $c->query("INSERT INTO TABLE1 ...");
$q4 = $c->query("INSERT INTO TABLE2 ...");
$q3 = $c->query("INSERT INTO TABLE4 ...");
if($x == 1) {
$q5 = $c->query("INSERT INTO TABLE3 ...");
}
}
}
}
The key is to minimize queries. Often, where you are looping over data doing one or more queries per iteration, you can replace it with a constant number of queries. In your case, you'll want to rewrite it into something like this:
include('db.php');
include('tools.php');
$c = new GetDB(); // Connection to DB
$t = new Tools(); // Classes to clean, prevent XSS and others
if(isset($_POST['var'])){
$nv = json_decode($_POST['var']);
$table1_data = array();
$table2_data = array();
$table3_data = array();
$table4_data = array();
foreach($nv as $k) {
$id = $t->clean($k->id);
// ... goes on for about 10 keys
// this might seems redundant or insufficient
$id = $c->real_escape_string($id);
// ... goes on for the rest of keys...
$table1_data[] = array( ... );
$table2_data[] = array( ... );
$table4_data[] = array( ... );
if ($x == 1) {
$table3_data[] = array( ... );
}
}
$values = array_to_sql($table1_data);
$c->query("INSERT INTO TABLE1 (...) VALUES $values ON DUPLICATE KEY UPDATE ...");
$values = array_to_sql($table2_data);
$c->query("INSERT INTO TABLE2 (...) VALUES $values ON DUPLICATE KEY UPDATE ...");
$values = array_to_sql($table3_data);
$c->query("INSERT INTO TABLE3 (...) VALUES $values ON DUPLICATE KEY UPDATE ...");
$values = array_to_sql($table4_data);
$c->query("INSERT IGNORE INTO TABLE4 (...) VALUES $values");
}
While your original code executed between 3 and 5 queries per row of your input data, the above code only executes 4 queries in total.
I leave the implementation of array_to_sql to the reader, but hopefully this should explain the idea. TABLE4 is an INSERT IGNORE here since you didn't have an UPDATE in the "found" clause of your original loop.
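For what it's worth, here is one possible sketch of array_to_sql, assuming every row has the same keys. I pass the connection in explicitly for escaping (the one-argument calls above would have to capture it some other way), and GetDB is assumed to expose real_escape_string() like mysqli does, as it did earlier in this thread:
// Turn array(array('a' => 1, 'b' => 'x'), ...) into "('1','x'),('2','y')" for a multi-row INSERT
function array_to_sql(array $rows, $c)
{
    $tuples = array();
    foreach ($rows as $row) {
        $escaped = array();
        foreach ($row as $value) {
            $escaped[] = $value === null
                ? 'NULL'
                : "'" . $c->real_escape_string((string) $value) . "'";
        }
        $tuples[] = '(' . implode(',', $escaped) . ')';
    }
    return implode(',', $tuples);
}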
I have a bunch of photos on a page and using jQuery UI's Sortable plugin, to allow for them to be reordered.
When my sortable function fires, it writes a new order sequence:
1030:0,1031:1,1032:2,1040:3,1033:4
Each item of the comma delimited string, consists of the photo ID and the order position, separated by a colon. When the user has completely finished their reordering, I'm posting this order sequence to a PHP page via AJAX, to store the changes in the database. Here's where I get into trouble.
I have no problem getting my script to work, but I'm pretty sure it's the incorrect way to achieve what I want, and will suffer hugely in performance and resources - I'm hoping somebody could advise me as to what would be the best approach.
This is my PHP script that deals with the sequence:
if ($sorted_order) {
$exploded_order = explode(',',$sorted_order);
foreach ($exploded_order as $order_part) {
$exploded_part = explode(':',$order_part);
$part_count = 0;
foreach ($exploded_part as $part) {
$part_count++;
if ($part_count == 1) {
$photo_id = $part;
} elseif ($part_count == 2) {
$order = $part;
}
$SQL = "UPDATE article_photos ";
$SQL .= "SET order_pos = :order_pos ";
$SQL .= "WHERE photo_id = :photo_id;";
... rest of PDO stuff ...
}
}
}
My concerns arise from the nested foreach functions and also running so many database updates. If a given sequence contained 150 items, would this script cry for help? If it will, how could I improve it?
** This is for an admin page, so it won't be heavily abused **
You can use one UPDATE with some clever code, like so:
Create the array $data['order'] in the loop, then:
$q = "UPDATE article_photos SET order_pos = (CASE photo_id ";
foreach($data['order'] as $sort => $id){
$q .= " WHEN {$id} THEN {$sort}";
}
$q .= " END ) WHERE photo_id IN (".implode(",",$data['order']).")";
A little clearer, perhaps:
UPDATE article_photos SET order_pos = (CASE photo_id
    WHEN 1 THEN 999
    WHEN 2 THEN 1000
    WHEN 3 THEN 1001
END)
WHERE photo_id IN (1,2,3)
I use this approach for exactly what you're doing: updating sort orders.
No need for the second foreach: you know it's going to be two parts if your data passes validation (I'm assuming you validated this; if not, you should =) so just do:
if (count($exploded_part) == 2) {
$id = $exploded_part[0];
$seq = $exploded_part[1];
/* rest of code */
} else {
/* error - data does not conform despite validation */
}
As for update hammering: do your DB updates in a transaction. Your db will queue the ops, but not commit them to the main DB until you commit the transaction, at which point it'll happily do the update "for real" at lightning speed.
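As a rough sketch of both points together, assuming $pdo is the connection behind your "... rest of PDO stuff ...": prepare the statement once, then run every update inside a single transaction.
$pdo->beginTransaction();
// Prepare once; PDO re-uses the parsed statement for every pair
$stmt = $pdo->prepare(
    'UPDATE article_photos SET order_pos = :order_pos WHERE photo_id = :photo_id'
);
foreach (explode(',', $sorted_order) as $order_part) {
    list($photo_id, $order) = explode(':', $order_part);
    $stmt->execute(array(
        ':photo_id'  => (int) $photo_id,
        ':order_pos' => (int) $order,
    ));
}
$pdo->commit(); // all 150 updates land in one commit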
I suggest making your script even simpler and changing the names of the variables, so the code is much more readable:
$parts = explode(',', $sorted_order);
foreach ($parts as $part) {
    list($id, $position) = explode(':', $part);
    // Now you can work with $id and $position
}
More info about list: http://php.net/manual/en/function.list.php
Also, about performance and your data structure:
The way you store your data is not perfect. But you will not suffer any performance issues that way: you need to send less data, with less overhead overall.
However, the drawback of your data structure is that you will most probably be unable to establish relationships between tables, make joins, or alter the table structure in a correct way.
I've got a script which is supposed to run through a MySQL database and perform a certain 'test' on the cases. Simplified: the database contains records which represent trips that have been made by persons. Each record is a single trip. But I want to use only round trips. So I need to search the database and match two trips to each other: the trip to, and the trip from, a certain location.
The script is working fine. The problem is that the database contains more than 600,000 cases. I know this should be avoided if possible, but for the purpose of this script and the use of the database records later on, everything has to stick together.
Executing the script takes hours right now, running on my iMac under MAMP. Of course, I made sure that it can use a lot of memory, etcetera.
My question is how could I speed things up, what's the best approach to do this?
Here's the script I have right now:
$table = $_GET['table'];
$output = '';
//Select all cases that has not been marked as invalid in previous test
$query = "SELECT persid, ritid, vertpc, aankpc, jaar, maand, dag FROM MON.$table WHERE reasonInvalid != '1' OR reasonInvalid IS NULL";
$result = mysql_query($query)or die($output .= mysql_error());
$totalCountValid = 0;
$totalCountInvalid = 0;
$totalCount = 0;
//For each record:
while($row = mysql_fetch_array($result)){
$totalCount += 1;
//Do another query, get all the rows for this persons ID and that share postal codes. Postal codes revert between the two trips
$persid = $row['persid'];
$ritid = $row['ritid'];
$pcD = $row['vertpc'];
$pcA = $row['aankpc'];
$jaar = $row['jaar'];
$maand = $row['maand'];
$dag = $row['dag'];
$thecountquery = "SELECT * FROM MON.$table WHERE persid=$persid AND vertpc=$pcA AND aankpc=$pcD AND jaar = $jaar AND maand = $maand AND dag = $dag";
$thecount = mysql_num_rows(mysql_query($thecountquery));
if($thecount >= 1){
//No worries, this person ID has multiple trips attached
$totalCountValid += 1;
}else{
//Ow my, the case is invalid!
$totalCountInvalid += 1;
//Call the markInvalid from functions.php
markInvalid($table, '2', 'ritid', $ritid);
}
}
//Echo the result
$output .= 'Total cases: '.$totalCount.'<br>Valid: '.$totalCountValid.'<br>Invalid: '.$totalCountInvalid; echo $output;
Your basic problem is that you are doing the following.
1) Getting all cases that haven't been marked as invalid.
2) Looping through the cases obtained in step 1).
What you can easily do is combine the queries written for 1) and 2) into a single query and loop over the data. This will speed things up considerably; see the sketch below.
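As a sketch of that idea, using the same column names as your script (MON.YourTable standing in for MON.$table): a single LEFT JOIN can find every trip that has no matching return leg, so the per-row lookup query disappears entirely.
-- Trips with no return leg for the same person on the same day are invalid
SELECT a.ritid
FROM MON.YourTable a
LEFT JOIN MON.YourTable b
       ON b.persid = a.persid
      AND b.vertpc = a.aankpc   -- the return departs where the outbound arrived
      AND b.aankpc = a.vertpc   -- and arrives where the outbound departed
      AND b.jaar   = a.jaar
      AND b.maand  = a.maand
      AND b.dag    = a.dag
WHERE (a.reasonInvalid != '1' OR a.reasonInvalid IS NULL)
  AND b.ritid IS NULL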
Also bear in mind the following tips.
1) Selecting all columns is not a good thing to do at all; it takes an ample amount of time for the data to traverse the network. I would recommend replacing the wild-card with only the columns that you really need:
SELECT <only_the_columns_you_need> -- instead of SELECT *
2) Use indexes - sparingly, efficiently and appropriately. Understand when to use them and when not to.
3) Use views if you can.
4) Enable MySQL slow query log to understand which queries you need to work on and optimize.
log_slow_queries = /var/log/mysql/mysql-slow.log
long_query_time = 1
log-queries-not-using-indexes
5) Use correct MySQL field types and the storage engine (Very very important)
6) Use EXPLAIN to analyze your query. EXPLAIN is a useful command in MySQL which can give you great detail about how a query runs: which index is used, how many rows it needs to check through, and whether it needs file sorts, temporary tables, and other nasty things you want to avoid.
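For example, run it against the inner lookup from your loop (the literal values here are placeholders):
EXPLAIN SELECT * FROM MON.YourTable
WHERE persid = 123 AND vertpc = 1234 AND aankpc = 5678
  AND jaar = 2013 AND maand = 5 AND dag = 14;
If the plan shows a full table scan, a composite index on (persid, vertpc, aankpc, jaar, maand, dag) is the first thing to try.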
Good luck.