Queries slow down my website and I must make them execute faster - PHP

I have 2 queries in WordPress that significantly slow my website. The first one is:
global $wpdb;
$tbl = $wpdb->prefix . "my_tournament_matches_events";
$result = $wpdb->get_var("SELECT count(*) FROM $tbl WHERE match_id = '$match' AND player_id = '$player' AND event_id = 'app'");
if ($result) { return 'checked="checked"'; }
The problem is that it checks every single player (0.33 sec each) to see if he attended the game, which means 20 queries for each team (6.6 sec, and for both teams 13.2 sec). I wonder if I could do it with only two queries (one for each team).
The second query is:
$output .= "<td colspan='2'>" . my_fetch_data_from_id($homeplayer->player_id, 'data_value') . "</td>";
foreach ($events as $event) {
    $evcount = $wpdb->get_results("SELECT count(*) as total_events
        FROM $table2
        WHERE match_id='$mid'
        AND player_id='$homeplayer->player_id'
        AND event_id='$event->id'");
    $total = $evcount[0]->total_events;
    $output .= "<td><input type='text' name=hm-$homeplayer->player_id-$event->id value='$total'/></td>";
}
In the second query it checks each player too, but since I have about 11 events available, it executes 11 events * 20 players * 2 teams = 440 queries. Is it possible to alter the code to reduce the number of queries and their total time? The average time of each of the second queries is about 0.3 sec, so every improvement will be noticeable.

To sum it up:
Add LIMIT 1 to the queries; this makes sure that once the query finds a result it doesn't continue looking for others.
Avoid using * in COUNT; count only the required fields.
Add indexes to columns where possible.
For readability, consistency and some errors you can change the single and double quotes:
$output .= '<td colspan="2">' . my_fetch_data_from_id($homeplayer->player_id, 'data_value') . '</td>';
$output .= '<td><input type="text" name="hm-' . $homeplayer->player_id . '-' . $event->id . '" value="' . $total . '"/></td>';
// Note the quoted name attribute; concatenating with . instead of - avoids an accidental subtraction

I suggest adding two indexes in the database (and of course checking first whether they are already present):
ALTER TABLE my_tournament_matches_events
ADD INDEX index1 (`player_id`, `match_id`, `event_id`);
The second index belongs on the table behind $table2 in the second query (note that total_events is the COUNT alias in that query, not a table name):
ALTER TABLE ...  -- the table $table2 refers to
ADD INDEX index2 (`player_id`, `match_id`, `event_id`);
This should speed things up considerably. 0.33 s is a slow query response time.
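If the goal is to get all of the counts in one trip instead of 440, a single GROUP BY query per match could fetch every player/event combination at once. A minimal sketch, assuming the $table2 layout from the question (match_id, player_id, event_id); the $lookup array is an illustrative helper, not from the original code:

// One query per match instead of 11 events x 20 players x 2 teams
$counts = $wpdb->get_results($wpdb->prepare(
    "SELECT player_id, event_id, COUNT(*) AS total_events
     FROM $table2
     WHERE match_id = %s
     GROUP BY player_id, event_id", $mid));

// Index the rows so the existing output loop can do O(1) lookups
$lookup = array();
foreach ($counts as $row) {
    $lookup[$row->player_id][$row->event_id] = $row->total_events;
}

// Inside the foreach ($events as $event) loop, replace the per-event query with:
$total = isset($lookup[$homeplayer->player_id][$event->id])
    ? $lookup[$homeplayer->player_id][$event->id] : 0;

The same lookup also answers the first query's attendance check: a player attended if $lookup for that player has an 'app' entry.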


Using count_all_results or get_compiled_select together with $this->db->get('table') lists the table twice in the query?

How do I use get_compiled_select or count_all_results before running the query without getting the table name added twice? When I use $this->db->get('tblName') after either of those, I get the error:
Not unique table/alias: 'tblProgram'
SELECT * FROM (`tblProgram`, `tblProgram`) JOIN `tblPlots` ON `tblPlots`.`programID`=`tblProgram`.`pkProgramID` JOIN `tblTrees` ON `tblTrees`.`treePlotID`=`tblPlots`.`id` ORDER BY `tblTrees`.`id` ASC LIMIT 2000
If I don't use a table name in count_all_results or $this->db->get(), then I get an error that no table is used. How can I get it to set the table name just once?
public function get_download_tree_data($options = array(), $rand = "") {
    // join tables and order by tree id
    $this->db->reset_query();
    $this->db->join('tblPlots', 'tblPlots.programID=tblProgram.pkProgramID');
    $this->db->join('tblTrees', 'tblTrees.treePlotID=tblPlots.id');
    $this->db->order_by('tblTrees.id', 'ASC');
    // get number of results to return
    $allResults = $this->db->count_all_results('tblProgram', false);
    // chunk data and write to CSV to avoid reaching memory limit
    $offset = 0;
    $chunk = 2000;
    $treePath = $this->config->item('temp_path') . "$rand/trees.csv";
    $tree_handle = fopen($treePath, 'a');
    while ($offset < $allResults) {
        $this->db->limit($chunk, $offset);
        $result = $this->db->get('tblProgram')->result_array();
        foreach ($result as $row) {
            fputcsv($tree_handle, $row);
        }
        $offset = $offset + $chunk;
    }
    fclose($tree_handle);
    return array('resultCount' => $allResults);
}
To count how many rows would be returned by a query, essentially all the work must be performed. That is, it is impractical to get the count, then perform the query; you may as well just do the query.
If your goal is to "paginate" by getting some of the rows, plus the total count, that is essentially two separate actions (that may be combined to look like one.)
If the goal is to estimate the number of rows, then SHOW TABLE STATUS or SELECT Rows FROM information_schema.TABLES WHERE ... gives you an estimate.
If you want to see if there are, say "at least 100 rows", then this may be practical:
SELECT 1 FROM ... WHERE ... ORDER BY ... LIMIT 99,1
and see if you get a row back. However, this may or may not be efficient, depending on the indexes and the WHERE and the ORDER BY. (Show us the query and I can elaborate.)
Using OFFSET for chunking is grossly inefficient. If there is no usable index, then it performs essentially the entire query for each chunk. If there is a usable index, the chunks get slower and slower. Here is a discussion of why OFFSET is not good for "pagination", plus an efficient workaround: Pagination. It talks about how to "remember where you left off" as an efficient technique for chunking (see the sketch below). Fetch between 100 and 1000 rows per chunk.
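A minimal sketch of that "remember where you left off" chunking, assuming an indexed id column (the table and column names here are illustrative, not from the question):

// Keyset ("seek") chunking: each query starts where the previous one ended,
// so MySQL never has to skip over OFFSET rows.
$lastId = 0;
do {
    $stmt = $pdo->prepare(
        "SELECT id, payload FROM items WHERE id > ? ORDER BY id ASC LIMIT 1000");
    $stmt->execute(array($lastId));
    $rows = $stmt->fetchAll(PDO::FETCH_ASSOC);
    foreach ($rows as $row) {
        // ... process $row ...
        $lastId = $row['id'];  // remember where we left off
    }
} while (count($rows) === 1000);

Because the WHERE id > ? predicate uses the index, every chunk costs the same regardless of how deep into the table it is.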
The flaw in your code is that it aims to select a subset of some records and their total count in the same query. This is impossible in MySQL, so you cannot generate such a query; hence you get the error mentioned above. The problem is that if you do a
select ... from t where ... limit 0, 2000
then you get at most 2000 records, so if the total number of records matching the criteria is greater than the limit, the count you get from the above is not accurate; in that case you need a
select count(1) from t where ...
This means that you need to build your actual query (the code below your count_all_results call) and see whether the number of results reaches the limit. If it does not reach the limit, then you do not need a separate query to get the count, because you can compute it as $offset plus the last chunk's row count. However, if you get as many records as the limit allows, then you will need to build another query, without the order_by call (since the count is independent of your sort), and get the count from that.
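A sketch of that logic in CodeIgniter terms, assuming a make_query($options) helper (as in the asker's later code) that builds the shared JOIN/WHERE clauses:

// Fetch one chunk first; only run a COUNT query when the chunk is full.
$this->make_query($options);               // assumed helper: joins + filters
$this->db->order_by('tblTrees.id', 'ASC');
$this->db->limit($chunk, $offset);
$rows = $this->db->get('tblProgram')->result_array();

if (count($rows) < $chunk) {
    // Last page reached: the total is already known without a second query.
    $total = $offset + count($rows);
} else {
    // Chunk is full, so the real total must be counted separately
    // (no order_by needed: ordering cannot change the count).
    $this->db->reset_query();
    $this->make_query($options);
    $total = $this->db->count_all_results('tblProgram');
}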
$this->db->count_all_results()
Counting the number of returned results with count_all_results()
It's useful to count the number of results returned. Bugs can often arise when a section of code that expects at least one row is passed zero rows; without handling the eventuality of a zero result, an application may become unpredictably unstable and may give away hints about the app's architecture to a malicious user. Ensuring correct handling of zero results is what we're going to focus on here.
Permits you to determine the number of rows in a particular Active Record query. Queries will accept Query Builder restrictors such as where(), or_where(), like(), or_like(), etc. Example:
echo $this->db->count_all_results('my_table'); // Produces an integer, like 25
$this->db->like('title', 'match');
$this->db->from('my_table');
echo $this->db->count_all_results(); // Produces an integer, like 17
However, this method also resets any field values that you may have passed to select(). If you need to keep them, you can pass FALSE as the second parameter:
echo $this->db->count_all_results('my_table', FALSE);
get_compiled_select()
The method $this->db->get_compiled_select() was introduced in CodeIgniter v3.0 and compiles an Active Record query without actually executing it. This is not a completely new method: in older versions of CI it existed as $this->db->_compile_select(), but that method was made protected in later versions, making it impossible to call directly.
// Note that the second parameter of the get_compiled_select method is FALSE
$sql = $this->db->select(array('field1', 'field2'))
                ->where('field3', 5)
                ->get_compiled_select('mytable', FALSE);
// ...
// Do something crazy with the SQL code... like add it to a cron script for
// later execution or something...
// ...
$data = $this->db->get()->result_array();
// Would execute and return an array of results of the following query:
// SELECT field1, field2 FROM mytable WHERE field3 = 5;
NOTE: Double calls to get_compiled_select() while you're using the Query Builder Caching functionality and NOT resetting your queries will result in the cache being merged twice. That in turn means that, for example, if you're caching a select(), the same field will be selected twice.
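For example (table and field names hypothetical): get_compiled_select() resets the builder by default, but with FALSE as the second argument you must reset it yourself before the next build:

$this->db->select('field1');
$sql_a = $this->db->get_compiled_select('mytable');        // builder is reset here

$this->db->select('field2');
$sql_b = $this->db->get_compiled_select('mytable', FALSE); // state is kept...
$this->db->reset_query();                                  // ...so clear it manually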
Rick James got me on the right track. I ended up having to chunk the results using pagination AND a nested query. Using LIMIT on even 1 chunk of 2000 records was timing out. This is the code I ended up with, which uses get_compiled_select('tblProgram') and then get('tblTrees O1'). Since I didn't use FALSE as the second argument to get_compiled_select, the query was cleared before the get() was run.
// grab the data in chunks, write it to CSV chunk by chunk
$offset = 0;
$chunk = 2000;
$i = 10; // counter for the progress bar
$this->db->limit($chunk);
$this->db->select('tblTrees.id');
// nesting the limited query and then joining the other fields later improved performance significantly
$query1 = ' (' . $this->db->get_compiled_select('tblProgram') . ') AS O2';
$this->db->join($query1, 'O1.id=O2.id');
$result = $this->db->get('tblTrees O1')->result_array();
$allResults = count($result);
$putHeaders = 0;
$treePath = $this->config->item('temp_path') . "$rand/trees.csv";
$tree_handle = fopen($treePath, 'a');
// while select limit returns the limit
while (count($result) === $chunk) {
    $highestID = max(array_column($result, 'id'));
    // update progress bar with estimate
    if ($i < 90) {
        $this->set_runStatus($qcRunId, $status = "processing", $progress = $i);
        $i = $i + 1;
    }
    // only write the headers the first time
    foreach ($result as $row) {
        if ($offset === 0 && $putHeaders === 0) {
            fputcsv($tree_handle, array_keys($row));
            $putHeaders = 1;
        }
        fputcsv($tree_handle, $row);
    }
    // get the next chunk
    $offset = $offset + $chunk;
    $this->db->reset_query();
    $this->make_query($options);
    $this->db->order_by('tblTrees.id', 'ASC');
    $this->db->where('tblTrees.id >', $highestID);
    $this->db->limit($chunk);
    $this->db->select('tblTrees.id');
    $query1 = ' (' . $this->db->get_compiled_select('tblProgram') . ') AS O2';
    $this->db->join($query1, 'O1.id=O2.id');
    $result = $this->db->get('tblTrees O1')->result_array();
    $allResults = $allResults + count($result);
}
// write out the last (partial) chunk
foreach ($result as $row) {
    fputcsv($tree_handle, $row);
}
fclose($tree_handle);
return array('resultCount' => $allResults);

Function render makes website 500% slow! Can anyone fix that please?

Someone told me:
because it sends a database request on each iteration of the loop (it's not the only problem with this chunk of code but it's the most taxing one)
Yes I understand what that means. His way is:
you need to get all of the data before you start building the menu,
then you just insert the data instead of requesting more data on each
iteration
But I don't know how I must do it!
<?php
$menu_html = '';
function render_menu($parent_id, $actmenuid)
{
    $obj = new Database();
    $con = $obj->dbconnectt();
    global $menu_html;
    $result = mysqli_query($con, "select * from tbl_menu where parent_id='$parent_id'");
    if (mysqli_num_rows($result) == 0) return;
    if ($parent_id == 0) {
        $menu_html .= '<ul class="topnav">';
    } else {
        $menu_html .= '<ul>';
    }
    while ($row = mysqli_fetch_array($result)) {
        $childnum = $obj->recordcount("SELECT * FROM tbl_menu WHERE parent_id='" . $row['id'] . "'");
        if ($childnum == 0) {
            $linkvalue = '/category/' . $row['id'] . '.html';
        } else {
            $linkvalue = '#';
        }
        if ($row['id'] == $actmenuid && $actmenuid != NULL) {
            $actv = 'class="active"';
        } else {
            $actv = '';
        }
        $menu_html .= '<li ' . $actv . '><a href="' . $linkvalue . '">' . $row['title'] . '</a>';
        render_menu($row['id'], $actmenuid);
        $menu_html .= '</li>';
    }
    $menu_html .= '</ul>';
    return $menu_html;
}
if ($isDsh == false) {
    echo render_menu(0, $actmenuid);
}
?>
Depending on how many records you have, try removing this query from inside the loop, since it currently runs once for every record returned by the first query:
$childnum = $obj->recordcount("SELECT * FROM tbl_menu WHERE parent_id='".$row['id']."'");
Change it to a single query like this, which returns the counts for every parent id, and place it outside of the loop (see the sketch below):
$parentcount = mysqli_query($con, "SELECT parent_id, COUNT(*) FROM tbl_menu GROUP BY parent_id");
There may be other issues, so please post the database structure and the number of records you're working with too.
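A hedged sketch of how that single query could replace the per-row recordcount() calls; the $childCounts array is an illustrative structure, not from the original code:

// Run the GROUP BY query once, before the loop
$parentcount = mysqli_query($con,
    "SELECT parent_id, COUNT(*) AS cnt FROM tbl_menu GROUP BY parent_id");

// Index the counts by parent_id for O(1) lookups
$childCounts = array();
while ($countRow = mysqli_fetch_assoc($parentcount)) {
    $childCounts[$countRow['parent_id']] = (int)$countRow['cnt'];
}

// Inside the menu loop, replace the recordcount() query with:
$childnum = isset($childCounts[$row['id']]) ? $childCounts[$row['id']] : 0;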
Don't make recursive queries.
Having "more than 1000" rows is not too big. You can simply read the whole table into PHP, then perform the recursive HTML build in PHP (see the sketch below). This has a memory overhead, but far less processing overhead, because you only ever make one trip to the DB.
Alternatively (when your DB table is prohibitively large), you can avoid gathering rows unnecessarily by adding a new column. The new column stores all "descendants" for the respective row when the row is INSERTed, and is updated when the row is UPDATEd. Then you only need to reference this column when you need specific rows. In other words, do the recursive processing only once (when writing to the DB), not every time the data is displayed. This again produces a finite result set in one query, which can then be recursively traversed to build the desired output.
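A minimal sketch of the fetch-once approach, assuming the tbl_menu layout visible in the question (id, parent_id, title):

// One query: load the whole menu table into memory, grouped by parent_id
$byParent = array();
$result = mysqli_query($con, "SELECT id, parent_id, title FROM tbl_menu");
while ($row = mysqli_fetch_assoc($result)) {
    $byParent[$row['parent_id']][] = $row;
}

// Recurse over the in-memory array instead of querying per node
function build_menu($byParent, $parent_id, $actmenuid) {
    if (empty($byParent[$parent_id])) return '';
    $html = ($parent_id == 0) ? '<ul class="topnav">' : '<ul>';
    foreach ($byParent[$parent_id] as $row) {
        $hasChildren = !empty($byParent[$row['id']]);
        $link = $hasChildren ? '#' : '/category/' . $row['id'] . '.html';
        $actv = ($row['id'] == $actmenuid) ? ' class="active"' : '';
        $html .= '<li' . $actv . '><a href="' . $link . '">' . $row['title'] . '</a>';
        $html .= build_menu($byParent, $row['id'], $actmenuid);
        $html .= '</li>';
    }
    return $html . '</ul>';
}

echo build_menu($byParent, 0, $actmenuid);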
Basically you need to do what #spudly has suggested. But there is a small catch in his solution: depending on the number of rows in your tbl_menu table, you may use a big chunk of memory to fetch all the records. You can optimise it further by using his solution but changing the query to:
select
    parent_tbl_menu.id,
    count(child_tbl_menu.id) as cnt
from tbl_menu as parent_tbl_menu
left join tbl_menu as child_tbl_menu
    on parent_tbl_menu.id = child_tbl_menu.parent_id
where parent_tbl_menu.parent_id = ?
group by parent_tbl_menu.id
This way you will only fetch the child records of a specific parent.
And please consider using prepared statements, as your code has an SQL injection vulnerability.
Connect (from PHP to MySQL) only once for the entire web page.
Don't put a SELECT inside a loop if you can do all the work in a single SELECT, such as with a JOIN. (Exception: a "hierarchical" table needs the nested SELECT. Exception to the exception: MySQL 8.0 and MariaDB 10.2 can do it with a "recursive CTE"; see the sketch after this list.)
Don't fetch all the columns (SELECT *) when all you want is a record count. Instead, SELECT COUNT(*) ... and use the number returned.
1000 of anything is probably excessive for a web page. Re-think the UI.
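For completeness, a sketch of that recursive-CTE exception (MySQL 8.0+ / MariaDB 10.2+), using the tbl_menu layout from the question; the depth column is illustrative:

-- Fetch the entire menu tree in one statement
WITH RECURSIVE menu_tree AS (
    SELECT id, parent_id, title, 0 AS depth
    FROM tbl_menu
    WHERE parent_id = 0
    UNION ALL
    SELECT m.id, m.parent_id, m.title, t.depth + 1
    FROM tbl_menu AS m
    JOIN menu_tree AS t ON m.parent_id = t.id
)
SELECT id, parent_id, title, depth
FROM menu_tree
ORDER BY depth, id;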

MySQL and PDO, speed up query and get result/output from MySQL function (routine)?

Getting the Value:
I've got the levenshtein_ratio function, from here, queued up in my MySQL database. I run it in the following way:
$stmt = $db->prepare("SELECT r_id, val FROM table WHERE levenshtein_ratio(:input, someval) > 70");
$stmt->execute(array('input' => $input));
$result = $stmt->fetchAll();
if (count($result)) {
    foreach ($result as $row) {
        $out .= $row['r_id'] . ', ' . $row['val'];
    }
}
And it works a treat, exactly as expected. But I was wondering, is there a nice way to also get the value that levenshtein_ratio() calculates?
I've tried:
$stmt = $db->prepare("SELECT levenshtein_ratio(:input, someval), r_id, val FROM table WHERE levenshtein_ratio(:input, someval) > 70");
$stmt->execute(array('input' => $input));
$result = $stmt->fetchAll();
if (count($result)) {
    foreach ($result as $row) {
        $out .= $row['r_id'] . ', ' . $row['val'] . ', ' . $row[0];
    }
}
and it does technically work (I get the percentage from the $row[0]), but the query is a bit ugly, and I can't use a proper key to get the value, like I can for the other two items.
Is there a way to somehow get a nice reference for it?
I tried:
$stmt = $db->prepare("SELECT r_id, val SET output=levenshtein_ratio(:input, someval) FROM table WHERE levenshtein_ratio(:input, someval) > 70");
modelling it after something I found online, but it didn't work, and ends up ruining the whole query.
Speeding It Up:
I'm running this query for an array of values:
foreach ($parent as $input) {
    $stmt = ...
    $stmt->execute...
    $result = $stmt->fetchAll();
    ... etc
}
But it ends up being remarkably slow: about 20 s for an array of only 14 inputs and a DB with about 350 rows, which is expected to be in the 10,000s soon. I know that putting queries inside loops is naughty business, but I'm not sure how else to get around it.
EDIT 1
When I use
$stmt = $db->prepare("SELECT r_id, val SET output=levenshtein_ratio(:input, someval) FROM table WHERE levenshtein_ratio(:input, someval) > 70");
surely that's costing twice the time compared to calculating it only once? Similar to having $i < sizeof($arr); in a for loop?
To clean up the column names you can use "as" to rename the column of the function. At the same time you can speed things up by using that column name in your where clause so the function is only executed once.
$stmt = $db->prepare("SELECT r_id, levenshtein_ratio(:input, someval) AS val FROM table HAVING val > 70");
If it is still too slow you might consider a C library like https://github.com/juanmirocks/Levenshtein-MySQL-UDF
doh - forgot to switch "where" to "having", as spencer7593 noted.
I'm assuming that `someval` is an unqualified reference to a column in the table. While you may understand that without looking at the table definition, someone else reading the SQL statement can't tell. As an aid to future readers, consider qualifying your column references with the name of the table or (preferably) a short alias assigned to the table in the statement.
SELECT t.r_id
, t.val
FROM `table` t
WHERE levenshtein_ratio(:input, t.someval) > 70
That function in the WHERE clause has to be evaluated for every row in the table. There's no way to get MySQL to build an index on that. So there's no way to get MySQL to perform an index range scan operation.
It might be possible to get MySQL to use an index for the query, for example, if the query had an ORDER BY t.val clause, or if there is a "covering index" available.
But that doesn't get around the issue of needing to evaluate the function for every row. (If the query had other predicates that excluded rows, then the function wouldn't necessarily need to be evaluated for the excluded rows.)
Adding the expression to the SELECT list really shouldn't be too expensive if the function is declared to be DETERMINISTIC. A second call to a DETERMINISTIC function with the same arguments can reuse the value returned for the previous execution. (Declaring a function DETERMINISTIC essentially means that the function is guaranteed to return the same result when given the same argument values. Repeated calls will return the same value; that is, the return value depends only on the argument values and doesn't depend on anything else.)
SELECT t.r_id
, t.val
, levenshtein_ratio(:input, t.someval) AS lev_ratio
FROM `table` t
WHERE levenshtein_ratio(:input2, t.someval) > 70
(Note: I used a distinct bind placeholder name for the second reference because PDO doesn't handle "duplicate" bind placeholder names as we'd expect. It's possible that this has been corrected in more recent versions of PDO; the first "fix" for the issue was an update to the documentation noting that bind placeholder names should appear only once in a statement. If you need two references to the same value, use two different placeholder names and bind the same value to both.)
If you don't want to repeat the expression, you could move the condition from the WHERE clause to the HAVING, and refer to the expression in the SELECT list by the alias assigned to the column.
SELECT t.r_id
, t.val
, levenshtein_ratio(:input, t.someval) AS lev_ratio
FROM `table` t
HAVING lev_ratio > 70
The big difference between WHERE and HAVING is that the predicates in the WHERE clause are evaluated when the rows are accessed. The HAVING clause is evaluated much later, after the rows have been accessed. (That's a brief explanation of why the HAVING clause can reference columns in the SELECT list by their alias, but the WHERE clause can't do that.)
If that's a large table, and a large number of rows are being excluded, there might be a significant performance difference using the HAVING clause: a much larger intermediate set may be created.
To get an "index used" for the query, a covering index is the only option I see.
ON `table` (r_id, val, someval)
With that, MySQL can satisfy the query from the index, without needing to lookup pages in the underlying table. All of the column values the query needs are available from the index.
FOLLOWUP
To get an index created, we would need to create a column, e.g.
lev_ratio_foo FLOAT
and pre-populate with the result from the function
UPDATE `table` t
SET t.lev_ratio_foo = levenshtein_ratio('foo', t.someval)
;
Then we could create an index, e.g.
... ON `table` (lev_ratio_foo, val, r_id)
And re-write the query
SELECT t.r_id
, t.val
, t.lev_ratio_foo
FROM `table` t
WHERE t.lev_ratio_foo > 70
With that query, MySQL can make use of an index range scan operation on an index with lev_ratio_foo as the leading column.
Likely, we would want to add BEFORE INSERT and BEFORE UPDATE triggers to maintain the value, when a new row is added to the table, or the value of the someval column is modified.
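A hedged sketch of what those triggers might look like; the lev_ratio_foo column and the 'foo' literal follow the example above, and the trigger names are hypothetical:

-- Keep lev_ratio_foo in sync when rows are added or someval changes
CREATE TRIGGER trg_table_bi BEFORE INSERT ON `table`
FOR EACH ROW SET NEW.lev_ratio_foo = levenshtein_ratio('foo', NEW.someval);

CREATE TRIGGER trg_table_bu BEFORE UPDATE ON `table`
FOR EACH ROW SET NEW.lev_ratio_foo = levenshtein_ratio('foo', NEW.someval);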
That pattern could be extended; additional columns could be added for values other than 'foo', e.g. 'bar':
UPDATE `table` t
SET t.lev_ratio_bar = levenshtein_ratio('bar', t.someval)
Obviously that approach isn't going to be scalable for a broad range of input values.

Joomla/PHP MySQL not updating a record with data from previous query

I'm counting the right-answers field of a table and saving that calculated value in another table. For this I'm using two queries: the first one is the count query, and I retrieve its value using loadResult(). After that I update another table with this value and the date/time. The problem is that in some cases the calculated value is not being saved, only the date/time.
The queries look something like this:
$sql = 'SELECT count(answer)
        FROM #_questionsTable
        WHERE answer = 1
        AND testId = ' . $examId;
$db->setQuery($sql);
$rightAnsCount = $db->loadResult();

$sql = 'UPDATE #__testsTable
        SET finish = "' . date('Y-m-d H:i:s') . '", rightAns=' . $rightAnsCount . '
        WHERE testId = ' . $examId;
$db->setQuery($sql);
$db->Query();
answer = 1 means that the question was answered ok.
I think that when the 2nd query is executed the first one has not finished yet, but everywhere I read it says that it waits for the first query to finish before going on to the 2nd, and I don't know how to make the 2nd query wait for the 1st one to end.
Any help will be appreciated. Thanks!
A PHP MySQL query is synchronous, i.e. it completes before returning; Joomla!'s database class doesn't implement any sort of asynchronous or call-back functionality.
While you are missing a ';', that wouldn't account for it working only some of the time.
How is the rightAns column defined? E.g. what happens when your $rightAnsCount is 0?
Turn on Joomla!'s debug mode and check the SQL that's generated in the profile section; it looks something like this:
eg.
Profile Information
Application afterLoad: 0.002 seconds, 1.20 MB
Application afterInitialise: 0.078 seconds, 6.59 MB
Application afterRoute: 0.079 seconds, 6.70 MB
Application afterDispatch: 0.213 seconds, 7.87 MB
Application afterRender: 0.220 seconds, 8.07 MB
Memory Usage
8511696
8 queries logged.
SELECT *
FROM jos_session
WHERE session_id = '5cs53hoh2hqi9ccq69brditmm7'
DELETE
FROM jos_session
WHERE ( TIME < '1332089642' )
etc...
You may need to add a semicolon to the end of your SQL queries:
...testId = '.$examId.';';
Ah, something cppl mentioned is the key, I think. You may need to account for null values from your first query.
Changing this line:
$rightAnsCount = $db->loadResult();
To this might make the difference:
$rightAnsCount = (int) $db->loadResult();
Basically setting it to 0 if there is no result; the cast also avoids calling loadResult() a second time, which would run the query again.
I am pretty sure you can do this in one query instead:
$sql = 'UPDATE #__testsTable
        SET finish = NOW(),
            rightAns = (
                SELECT count(answer)
                FROM #_questionsTable
                WHERE answer = 1
                AND testId = ' . $examId . '
            )
        WHERE testId = ' . $examId;
$db->setQuery($sql);
$db->Query();
You can also update all values in all rows in your table this way by slightly modifying your query, so you can do all rows in one go (a sketch follows). Let me know if this is what you are trying to achieve and I will rewrite the example.
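A hedged sketch of that all-rows variant, assuming testId ties the two tables together as in the queries above; a correlated subquery replaces the per-exam value:

// Recalculate rightAns for every test in one statement
$sql = 'UPDATE #__testsTable AS t
        SET t.finish = NOW(),
            t.rightAns = (
                SELECT COUNT(q.answer)
                FROM #_questionsTable AS q
                WHERE q.answer = 1
                AND q.testId = t.testId
            )';
$db->setQuery($sql);
$db->query();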

Debugging a MySQL insert fail in PHP

I'm having problems debugging a failing MySQL 5.1 insert under PHP 5.3.4. I can't seem to see anything in the MySQL error log or PHP error logs.
Based on a Yahoo presentation on efficient pagination, I was adding order numbers to posters on my site (order rank, not order sales).
I wrote a quick test app and asked it to create the order numbers on one category. There are 32,233 rows in that category, and each and every time I run it I get 23,304 rows updated. Each and every time. I've increased memory usage, I've put ini settings in the script, I've run it from the PHP CLI and PHP-FPM. Each time it doesn't get past 23,304 rows updated.
Here's my script, which I've added massive timeouts to.
include 'common.inc'; // database connection stuff
ini_set("memory_limit", "300M");
ini_set("max_execution_time", "3600");
ini_set('mysql.connect_timeout', '3600');
ini_set('mysql.trace_mode', 'On');
ini_set('max_input_time', '3600');

$sql1 = "SELECT apcatnum FROM poster_categories_inno LIMIT 1";
$result1 = mysql_query($sql1);
while ($cats = mysql_fetch_array($result1)) {
    $sql2 = "SELECT poster_data_inno.apnumber, poster_data_inno.aptitle
             FROM poster_prodcat_inno, poster_data_inno
             WHERE poster_prodcat_inno.apcatnum = '$cats[apcatnum]'
             AND poster_data_inno.apnumber = poster_prodcat_inno.apnumber
             ORDER BY aptitle ASC";
    $result2 = mysql_query($sql2);
    $ordernum = 1;
    while ($order = mysql_fetch_array($result2)) {
        $sql3 = "UPDATE poster_prodcat_inno SET catorder='$ordernum' WHERE apnumber='$order[apnumber]' AND apcatnum='$cats[apcatnum]'";
        $result3 = mysql_query($sql3);
        $ordernum++;
    } // end of 2nd while
}
I'm at a head-scratching loss. Just did a test on a smaller category and only 13,199 out of 17,662 rows were updated. For the two experiments only 72-74% of the rows are getting updated.
I'd say your problem lies with your 2nd query. Have you done an EXPLAIN on it? Because of the ORDER BY clause a filesort is required, and if you don't have appropriate indices that can slow things down further. Try this syntax, and substitute a valid integer for your apcatnum variable during testing:
SELECT d.apnumber, d.aptitle
FROM poster_prodcat_inno p
JOIN poster_data_inno d ON d.apnumber = p.apnumber
WHERE p.apcatnum = '{$cats['apcatnum']}'
ORDER BY d.aptitle ASC;
Secondly, since catorder is just an integer version of the combination of apcatnum and aptitle, it's a denormalization for convenience's sake. This isn't necessarily bad, but it does mean that you have to update it every time you add a new title or category. Perhaps it might be better to partition your poster_prodcat_inno table by apcatnum and just do the JOIN with poster_data_inno when you actually need the catorder.
Please escape your query input, even if it does come from your own database (quotes and other characters will get you every time). Your SQL statement is also fragile because you're not interpolating the array variables safely; use braces around them, for example:
while ($order = mysql_fetch_array($result2)) {
    $order = array_map('mysql_real_escape_string', $order);
    $sql3 = "UPDATE poster_prodcat_inno SET catorder='$ordernum' WHERE apnumber='{$order['apnumber']}' AND apcatnum='{$cats['apcatnum']}'";
}
