I need to come up with a way to make a large task faster to beat the timeout.
I have very limited access to the server due to the restrictions of the hosting company.
I have a system set up where a cron visits a PHP file that grabs a csv that contains data on some products. The csv does not contain all of the fields that the product would have. Just a handful of essential ones.
I've read a fair number of articles on timeouts and handling CSVs, and currently (in an attempt to shave time) I have made a table (let's call it csv_data) to hold the csv data. I have a script that truncates the csv_data table and then inserts data from the csv, so each night the latest recordset from the csv is in that table (the csv file gets updated nightly). So far, no timeout problems; the task only takes about 4-5 seconds.
The timeouts occur when I have to sift through the data to make updates to the products table. The steps it runs right now are like this:
1. Get the sku from csv_data table (that holds thousands of records)
2. Select * from Products where products.sku = csv.sku (products table also holds thousands of records to loop through)
3. Get numrows.
If numrows==0 {no record in products, so skip}.
If numrows>1{duplicate entries, don't change anything, but later on report the sku}
If numrows==1{Update selected fields in the products table with csv data}
4. Go to the next record in csv_data all over again
(I figured outlining the process is shorter and easier than dropping in the code.)
I looked into MySQL views and stored procedures but I am not skilled enough in them to know whether they can handle the 'if' statement portion.
Is there anything I can do to make this faster to avoid the timeouts?
edit:
I should mention that set_time_limit(0); isn't doing it. And if it helps, the server uses IIS7 and FastCGI.
Thanks for your help.
Update after using suggestions from Jakob and Shawn:
I'm doing something wrong. The speed is definitely faster and the csv sku is incrementing, but when I tried to implement Shawn's solution, the query gave me a "PHP Warning: mysql_result() expects parameter 1 to be resource, boolean given" error.
Can you help me spot what I am doing wrong?
Here is the section of code:
$csvdata="SELECT * FROM csv_update";
$csvdata_result=mysql_query($csvdata);
mysql_query($csvdata);
$csvdata_num = mysql_num_rows($csvdata_result);
$i=0;
while($i<$csvdata_num){
$csv_code=#mysql_result($csvdata_result,$i,"skucode");
$datacheck=NULL;
$datacheck=substr($csv_code,0,1);
if($datacheck>='0' && $datacheck<='9'){
$csv_price=#mysql_result($csvdata_result,$i,"price");
$csv_retail=#mysql_result($csvdata_result,$i,"retail");
$csv_stock=#mysql_result($csvdata_result,$i,"stock");
$csv_weight=#mysql_result($csvdata_result,$i,"weight");
$csv_manufacturer=#mysql_result($csvdata_result,$i,"manufacturer");
$csv_misc1=#mysql_result($csvdata_result,$i,"misc1");
$csv_misc2=#mysql_result($csvdata_result,$i,"misc2");
$csv_selectlist=#mysql_result($csvdata_result,$i,"selectlist");
$csv_level5=#mysql_result($csvdata_result,$i,"level5");
$csv_frontpage=#mysql_result($csvdata_result,$i,"frontpage");
$csv_level3=#mysql_result($csvdata_result,$i,"level3");
$csv_minquantity=#mysql_result($csvdata_result,$i,"minquantity");
$csv_quantity1=#mysql_result($csvdata_result,$i,"quantity1");
$csv_discount1=#mysql_result($csvdata_result,$i,"discount1");
$csv_quantity2=#mysql_result($csvdata_result,$i,"quantity2");
$csv_discount2=#mysql_result($csvdata_result,$i,"discount2");
$csv_quantity3=#mysql_result($csvdata_result,$i,"quantity3");
$csv_discount3=#mysql_result($csvdata_result,$i,"discount3");
$count_check="SELECT COUNT(*) AS totalCount FROM products WHERE skucode = '$csv_code'";
$count_result=mysql_query($count_check);
mysql_query($count_check);
$totalCount=#mysql_result($count_result,0,'totalCount');
$loopCount = ceil($totalCount / 25);
for($j = 0; $j < $loopCount; $j++){
$prod_check="SELECT skucode FROM products WHERE skucode = '$csv_code' LIMIT ($loopCount*25), 25;";
$prodresult=mysql_query($prod_check);
mysql_query($prod_check);
$prodnum =#mysql_num_rows($prodresult);
$prod_id=#mysql_result($prodresult,0,"catalogid");
if($prodnum<1){
echo "NOT FOUND:$csv_code<br>";
$count_sku_not_found=$count_sku_not_found+1;
$list_sku_not_found=$list_sku_not_found." $csv_code";}
if($prodnum>1){
echo "DUPLICATE:$csv_ccode<br>";
$count_duplicate_skus=$count_duplicate_skus+1;
$list_duplicate_skus=$list_duplicate_skus." $csv_code";}
if ($prodnum==1){
///This prevents an overwrite from happening if the csv file doesn't produce properly
if ($csv_price!="" OR $csv_price!=NULL)
{$sql_price='price="'.$csv_price.'"';}
if ($csv_retail!="" OR $csv_retail!=NULL)
{$sql_retail=',retail="'.$csv_retail.'"';}
if ($csv_stock!="" OR $csv_stock!=NULL)
{$sql_stock=',stock="'.$csv_stock.'"';}
if ($csv_weight!="" OR $csv_weight!=NULL)
{$sql_weight=',weight="'.$csv_weight.'"';}
if ($csv_manufacturer!="" OR $csv_manufacturer!=NULL)
{$sql_manufacturer=',manufacturer="'.$csv_manufacturer.'"';}
if ($csv_misc1!="" OR $csv_misc1!=NULL)
{$sql_misc1=',misc1="'.$csv_misc1.'"';}
if ($csv_misc2!="" OR $csv_misc2!=NULL)
{$sql_pother2=',pother2="'.$csv_misc2.'"';}
if ($csv_selectlist!="" OR $csv_selectlist!=NULL)
{$sql_selectlist=',selectlist="'.$csv_selectlist.'"';}
if ($csv_level5!="" OR $csv_level5!=NULL)
{$sql_level5=',level5="'.$csv_level5.'"';}
if ($csv_frontpage!="" OR $csv_frontpage!=NULL)
{$sql_frontpage=',frontpage="'.$csv_frontpage.'"';}
$import="UPDATE products SET $sql_price $sql_retail $sql_stock $sql_weight $sql_manufacturer $sql_misc1 $sql_misc2 $sql_selectlist $sql_level5 $sql_frontpage $sql_in_stock WHERE skucode='$csv_code'";
mysql_query($import) or die(mysql_error("error updating in products table"));
echo "Update ".$csv_code." successful ($i)<br>";
$count_success_update_skus=$count_success_update_skus+1;
$list_success_update_skus=$list_success_update_skus." $csv_code";
//empty out variables
$sql_price='';
$sql_retail='';
$sql_stock='';
$sql_weight='';
$sql_manufacturer='';
$sql_misc1='';
$sql_misc2='';
$sql_selectlist='';
$sql_level5='';
$sql_frontpage='';
$sql_in_stock='';
$prodnum=0;
}
}
$i++;
}
Is it timing out before the first row is returned, or between rows during the read? One good practice would be to handle your query in chunks: do a count first to see how many records you are dealing with for the SKU, then loop through smaller chunks (the size of these chunks would depend on how many things you have to do with each row). Your updated workflow would look more like this:
Get next SKU from CSV
Get a total count: SELECT COUNT(*) AS totalCount FROM products WHERE products.sku = csv.sku
Determine chunk size (using 25 for this demo)
loopCount = ceil(totalCount / 25)
Loop through all results using a loop like this: for($i = 0; $i < loopCount; $i++)
Inside your loop you should be running a query like this: SELECT * FROM products WHERE products.sku = csv.sku LIMIT (i*25), 25 (compute the i*25 offset in PHP before building the query; MySQL will not evaluate an expression inside LIMIT)
You will want to use a constant order for your SELECT chunks; your unique ID would probably be best.
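A minimal sketch of steps 3-5 in PHP, assuming the legacy mysql_* API used in the question; $csvSku and the id ordering column are placeholders:

$loopCount = ceil($totalCount / 25);
for ($i = 0; $i < $loopCount; $i++) {
    $offset = $i * 25;   //LIMIT cannot evaluate (i*25) itself
    $res = mysql_query("SELECT * FROM products WHERE sku = '$csvSku' ORDER BY id LIMIT $offset, 25");
    while ($row = mysql_fetch_assoc($res)) {
        //per-row update/report logic goes here
    }
}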
I think you can solve this problem with cron (http://en.wikipedia.org/wiki/Cron). A job run from cron has no timeout of its own.
How do I use get_compiled_select or count_all_results before running the query without getting the table name added twice? When I use $this->db->get('tblName') after either of those, I get the error:
Not unique table/alias: 'tblProgram'
SELECT * FROM (`tblProgram`, `tblProgram`) JOIN `tblPlots` ON `tblPlots`.`programID`=`tblProgram`.`pkProgramID` JOIN `tblTrees` ON `tblTrees`.`treePlotID`=`tblPlots`.`id` ORDER BY `tblTrees`.`id` ASC LIMIT 2000
If I don't use a table name in count_all_results or $this->db->get(), then I get an error that no table is used. How can I get it to set the table name just once?
public function get_download_tree_data($options = array(), $rand = "") {
    //join tables and order by tree id
    $this->db->reset_query();
    $this->db->join('tblPlots', 'tblPlots.programID=tblProgram.pkProgramID');
    $this->db->join('tblTrees', 'tblTrees.treePlotID=tblPlots.id');
    $this->db->order_by('tblTrees.id', 'ASC');
    //get number of results to return
    $allResults = $this->db->count_all_results('tblProgram', false);
    //chunk data and write to CSV to avoid reaching memory limit
    $offset = 0;
    $chunk = 2000;
    $treePath = $this->config->item('temp_path') . "$rand/trees.csv";
    $tree_handle = fopen($treePath, 'a');
    while (($offset < $allResults)) {
        $this->db->limit($chunk, $offset);
        $result = $this->db->get('tblProgram')->result_array();
        foreach ($result as $row) {
            fputcsv($tree_handle, $row);
        }
        $offset = $offset + $chunk;
    }
    fclose($tree_handle);
    return array('resultCount' => $allResults);
}
To count how many rows would be returned by a query, essentially all the work must be performed. That is, it is impractical to get the count, then perform the query; you may as well just do the query.
If your goal is to "paginate" by getting some of the rows, plus the total count, that is essentially two separate actions (that may be combined to look like one.)
If the goal is to estimate the number of rows, then SHOW TABLE STATUS or SELECT Rows FROM information_schema.TABLES WHERE ... gives you an estimate.
If you want to see if there are, say "at least 100 rows", then this may be practical:
SELECT 1 FROM ... WHERE ... ORDER BY ... LIMIT 99,1
and see if you get a row back. However, this may or may not be efficient, depending on the indexes and the WHERE and the ORDER BY. (Show us the query and I can elaborate.)
Using OFFSET for chunking is grossly inefficient. If there is not a usable index, then it is performing essentially the entire query for each chunk. Even if there is a usable index, the chunks get slower and slower as the offset grows. Here is a discussion of why OFFSET is not good for "pagination", plus an efficient workaround: Pagination . It talks about how to "remember where you left off" as an efficient technique for chunking. Fetch between 100 and 1000 rows per chunk.
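A minimal sketch of "remember where you left off", assuming an indexed unique id column (table and column names are placeholders); each chunk seeks straight to its starting row instead of re-reading everything OFFSET would have skipped:

SELECT id, col1, col2
FROM tbl
WHERE id > 0          -- replace 0 with the largest id seen so far
ORDER BY id
LIMIT 1000;

Save the largest id from the rows just fetched and repeat until fewer than 1000 rows come back.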
The flaw in your code is that it aims to select a subset of the records and their total count in the same query. This is impossible in MySQL, so you cannot generate such a query; hence the error you mentioned. The problem is that if you do a
select ... from t where ... limit 0, 2000
then you get at most 2000 records, so if the number of records matching the criteria is greater than the limit, you will not get an accurate count from the query above; in that case you need a
select count(1) from t where ...
This means that you need to build your actual query (the code below your count_all_results call) and see whether the number of results reaches the limit. If it does not reach the limit, then you do not need a separate query for the count, because you can compute $offset + $recordCount. However, if you get as many records as the limit allows, then you will need to build another query, without the order_by call (since the count is independent of your sort), and get the count from that.
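A rough sketch of that decision, reusing the $chunk/$offset variables from the question's code (the filter-rebuilding step is assumed to be the same code that built the chunk query):

$result = $this->db->get('tblProgram')->result_array(); //with limit($chunk, $offset)
if (count($result) < $chunk) {
    //last page: the total is already known without a COUNT query
    $allResults = $offset + count($result);
} else {
    //rebuild the same joins/filters WITHOUT order_by(), then:
    $allResults = $this->db->count_all_results('tblProgram');
}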
$this->db->count_all_results()
Counting the number of returned results with count_all_results()
It's useful to count the number of results returned—often bugs can arise if a section of code which expects to have at least one row is passed zero rows. Without handling the eventuality of a zero result, an application may become unpredictably unstable and may give away hints to a malicious user about the architecture of the app. Ensuring correct handling of zero results is what we're going to focus on here.
Permits you to determine the number of rows in a particular Active Record query. Queries will accept Query Builder restrictors such as where(), or_where(), like(), or_like(), etc. Example:
echo $this->db->count_all_results('my_table'); // Produces an integer, like 25
$this->db->like('title', 'match');
$this->db->from('my_table');
echo $this->db->count_all_results(); // Produces an integer, like 17
However, this method also resets any field values that you may have passed to select(). If you need to keep them, you can pass FALSE as the second parameter:
echo $this->db->count_all_results('my_table', FALSE);
get_compiled_select()
The method $this->db->get_compiled_select() was introduced in CodeIgniter v3.0 and compiles an Active Record query without actually executing it. But this is not a completely new method. In older versions of CI it existed as $this->db->_compile_select(), but that method was later made protected, making it impossible to call directly.
// Note that the second parameter of the get_compiled_select method is FALSE
$sql = $this->db->select(array('field1','field2'))
->where('field3',5)
->get_compiled_select('mytable', FALSE);
// ...
// Do something crazy with the SQL code... like add it to a cron script for
// later execution or something...
// ...
$data = $this->db->get()->result_array();
// Would execute and return an array of results of the following query:
// SELECT field1, field2 from mytable where field3 = 5;
NOTE: Double calls to get_compiled_select() while you're using the Query Builder Caching functionality and NOT resetting your queries will result in the cache being merged twice. That means, for example, that if you're caching a select(), the same field will be selected twice.
Rick James got me on the right track. I ended up having to chunk the results using pagination AND a nested query. Using LIMIT on even 1 chunk of 2000 records was timing out. This is the code I ended up with, which uses get_compiled_select('tblProgram') and then get('tblTrees O1'). Since I didn't use FALSE as the second argument to get_compiled_select, the query was cleared before the get() was run.
//grab the data in chunks, write it to CSV chunk by chunk
$offset = 0;
$chunk = 2000;
$i = 10; //counter for the progress bar
$this->db->limit($chunk);
$this->db->select('tblTrees.id');
//nesting the limited query and then joining the other fields later improved performance significantly
$query1 = ' (' . $this->db->get_compiled_select('tblProgram') . ') AS O2';
$this->db->join($query1, 'O1.id=O2.id');
$result = $this->db->get('tblTrees O1')->result_array();
$allResults = count($result);
$putHeaders = 0;
$treePath = $this->config->item('temp_path') . "$rand/trees.csv";
$tree_handle = fopen($treePath, 'a');
//while select limit returns the limit
while (count($result) === $chunk) {
    $highestID = max(array_column($result, 'id'));
    //update progress bar with estimate
    if ($i < 90) {
        $this->set_runStatus($qcRunId, $status = "processing", $progress = $i);
        $i = $i + 1;
    }
    //only write the headers the first time
    foreach ($result as $row) {
        if ($offset === 0 && $putHeaders === 0) {
            fputcsv($tree_handle, array_keys($row));
            $putHeaders = 1;
        }
        fputcsv($tree_handle, $row);
    }
    //get the next chunk
    $offset = $offset + $chunk;
    $this->db->reset_query();
    $this->make_query($options);
    $this->db->order_by('tblTrees.id', 'ASC');
    $this->db->where('tblTrees.id >', $highestID);
    $this->db->limit($chunk);
    $this->db->select('tblTrees.id');
    $query1 = ' (' . $this->db->get_compiled_select('tblProgram') . ') AS O2';
    $this->db->join($query1, 'O1.id=O2.id');
    $result = $this->db->get('tblTrees O1')->result_array();
    $allResults = $allResults + count($result);
}
//write out last chunk
foreach ($result as $row) {
    fputcsv($tree_handle, $row);
}
fclose($tree_handle);
return array('resultCount' => $allResults);
My render function makes the website 500% slower! Can anyone fix that, please?
Someone told me :
because it sends a database request on each iteration of the loop (it's not the only problem with this chunk of code but it's the most taxing one)
Yes I understand what that means. His way is:
you need to get all of the data before you start building the menu,
then you just insert the data instead of requesting more data on each
iteration
But I don't know how I should do it!
<?php
$menu_html = '';
function render_menu($parent_id, $actmenuid)
{
    $obj = new Database();
    $con = $obj->dbconnectt();
    global $menu_html;
    $result = mysqli_query($con, "select * from tbl_menu where parent_id='$parent_id'");
    if (mysqli_num_rows($result) == 0) return;
    if ($parent_id == 0) {
        $menu_html .= '<ul class="topnav">';
    } else {
        $menu_html .= '<ul>';
    }
    while ($row = mysqli_fetch_array($result)) {
        $childnum = $obj->recordcount("SELECT * FROM tbl_menu WHERE parent_id='" . $row['id'] . "'");
        if ($childnum == 0) {
            $linkvalue = '/category/' . $row['id'] . '.html';
        } else {
            $linkvalue = '#';
        }
        if ($row['id'] == $actmenuid && $actmenuid != NULL) {
            $actv = 'class="active"';
        } else {
            $actv = '';
        }
        $menu_html .= '<li ' . $actv . '><a href="' . $linkvalue . '">' . $row['title'] . '</a>';
        render_menu($row['id'], $actmenuid);
        $menu_html .= '</li>';
    }
    $menu_html .= '</ul>';
    return $menu_html;
}
if ($isDsh == false) {
    echo render_menu(0, $actmenuid);
}
?>
Depending on how many records you have, try removing this query from inside the loop since it's running for every record on the first query.
$childnum = $obj->recordcount("SELECT * FROM tbl_menu WHERE parent_id='".$row['id']."'");
Change it to a single query like this, which returns counts for each parent id, and place it outside of the loop:
$parentcount = mysqli_query($con, "SELECT parent_id, COUNT(*) AS cnt FROM tbl_menu GROUP BY parent_id");
There may be other issues, so please post the database structure and number of records that you're working with too.
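A minimal sketch of the idea, assuming the question's mysqli connection $con: build a parent_id => child-count map once, then read the map inside the render loop instead of issuing a COUNT query per row.

$childCounts = array();
$res = mysqli_query($con, "SELECT parent_id, COUNT(*) AS cnt FROM tbl_menu GROUP BY parent_id");
while ($r = mysqli_fetch_assoc($res)) {
    $childCounts[$r['parent_id']] = (int)$r['cnt'];
}
//later, inside the while loop:
$childnum = isset($childCounts[$row['id']]) ? $childCounts[$row['id']] : 0;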
Don't make recursive queries.
Having "more than 1000" rows is not too big. You can simply call everything from the table into php, then perform the recursive html build in php this will have a memory overhead, but far less processing overhead because you only ever make one trip to the db.
Alternatively (when your db table is prohibitively large), you should avoid gathering rows unnecessarily by adding a new column. The new column will store all "descendants" for the respective row when the row is INSERTed or update it when it is UPDATEd. Then you only need to reference this column when needing to call specific rows. In other words, do the recursive processing only once (when writing to the db) AND not when needing to display the data. This will, again, produce a finite result set in one query which can then be recursively traversed to build the desired output.
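A minimal sketch of the one-trip approach, assuming the tbl_menu columns used in the question (id, parent_id, title) and the same mysqli connection:

$rows = array();
$res = mysqli_query($con, "SELECT id, parent_id, title FROM tbl_menu");
while ($r = mysqli_fetch_assoc($res)) {
    $rows[$r['parent_id']][] = $r;   //group children under their parent id
}
function build_menu(array $rows, $parent_id = 0) {
    if (!isset($rows[$parent_id])) return '';
    $html = '<ul>';
    foreach ($rows[$parent_id] as $r) {
        $html .= '<li>' . $r['title'] . build_menu($rows, $r['id']) . '</li>';
    }
    return $html . '</ul>';
}
echo build_menu($rows);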
Basically you need to do what @spudly has suggested.
But there is a small catch in his solution: depending on the number of rows in your tbl_menu table, you may use a big chunk of memory to fetch all the records.
You can optimise it further by using his solution but changing the query to:
select
parent_tbl_menu.id,
count(child_tbl_menu.id) as cnt
from
tbl_menu as parent_tbl_menu
left join
tbl_menu as child_tbl_menu
on parent_tbl_menu.id = child_tbl_menu.parent_id
where
parent_tbl_menu.parent_id = ?
group by
parent_tbl_menu.id
This way you will only fetch the child records of a specific parent.
And please consider using prepared statements, as your code has an SQL injection vulnerability.
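A sketch of the query above as a mysqli prepared statement (mysqli_stmt_get_result() assumes the mysqlnd driver is available):

$stmt = mysqli_prepare($con,
    "SELECT parent_tbl_menu.id, COUNT(child_tbl_menu.id) AS cnt
     FROM tbl_menu AS parent_tbl_menu
     LEFT JOIN tbl_menu AS child_tbl_menu
       ON parent_tbl_menu.id = child_tbl_menu.parent_id
     WHERE parent_tbl_menu.parent_id = ?
     GROUP BY parent_tbl_menu.id");
mysqli_stmt_bind_param($stmt, 'i', $parent_id); //bind the user-supplied value
mysqli_stmt_execute($stmt);
$result = mysqli_stmt_get_result($stmt);
while ($row = mysqli_fetch_assoc($result)) {
    $childCounts[$row['id']] = (int)$row['cnt'];
}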
Connect (from PHP to MySQL) only once for the entire web page.
Don't put a SELECT inside a loop if you can do all the work in a single SELECT, such as with a JOIN. (Exception: a "hierarchical" table needs the nested SELECT. Exception to the exception: MySQL 8.0 and MariaDB 10.2 can do it with a "recursive CTE"; a sketch follows this list.)
Don't fetch all the columns (SELECT *) when all you want is a record count. Instead, use SELECT COUNT(*) ... and read the number returned.
1000 of anything is probably excessive for a web page. Re-think the UI.
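A minimal sketch of the recursive-CTE exception, assuming MySQL 8.0+ or MariaDB 10.2+ and the question's tbl_menu(id, parent_id, title); one round trip fetches the whole tree along with each row's depth:

WITH RECURSIVE menu_tree AS (
    SELECT id, parent_id, title, 0 AS depth
    FROM tbl_menu
    WHERE parent_id = 0
    UNION ALL
    SELECT m.id, m.parent_id, m.title, t.depth + 1
    FROM tbl_menu AS m
    JOIN menu_tree AS t ON m.parent_id = t.id
)
SELECT * FROM menu_tree ORDER BY depth, id;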
I am doing an example project for University and got a problem that I can't solve.
In general, the project is to create an automated pizza order system in PHP and MySQL on Apache. The system works through the following steps:
- Customer places order -> Baker receives order, proceeds -> Driver receives order at certain state, proceeds
- Customer can view order at all time through session
Now I'm hung up on the last step: the driver can see a page with a table containing the information that the baker worked with and passed on (all changes are on the database side). The driver can only see a whole package (once all pizzas in it are marked with a certain status, also saved in the DB).
For this, I have the following SQL statement:
SELECT PizzaID, BestellungID, Adresse, PizzaName, Preis, Status
FROM angebot, bestelltepizza, bestellung
WHERE bestellung.bestellungid = bestelltepizza.fbestellungid
  AND angebot.PizzaName = bestelltepizza.fPizzaName
  AND (SELECT MIN(status) FROM bestelltepizza
       WHERE bestellung.bestellungid = fbestellungid) > 2
ORDER BY Status, BestellungID
Now, when I use var_dump() to check the mysqli_num_rows() output, I get no errors and the following output: int 26. Compared to the database rows, that's the correct number. I fetch the result like this:
while ($row = mysqli_fetch_array($this->result)) {
    var_dump(mysqli_num_rows($this->result));
    var_dump($row);
    ...
}
The while() loop contains another query:
$this->query = "SELECT fPizzaName FROM bestelltepizza WHERE fBestellungID = '$BestellID'";
var_dump($this->query);
$tmpResult = $this->_database->query($this->query);
$count = mysqli_num_rows($tmpResult);
Now here is the problem: the while() loop skips a seemingly random $BestellID, which can contain any number of rows of data. Yet when I count the var_dump() output, everything adds up. However, var_dump($this->query) does not show the query statement for the skipped ID either.
Any ideas what this could be? Full link to pastebin below.
To not extend this question to the fullest, I uploaded the whole code to pastebin here: http://pastebin.com/u888CPLw
Off-topic: I appreciate any help, thanks. If I failed to make my exact problem clear, or if any questions pop up about my question, please comment and I will clarify. Thanks.
while ($row = mysqli_fetch_array($this->result)) {
    $count = mysqli_num_rows($tmpResult);
    for ($i = 0; $i < $count; $i++) {
        $tmpVar = mysqli_fetch_array($this->result);
I've snipped the code to show the problem: $count is based on $tmpResult, but you are then doing a fetch array on $this->result; you should be doing it on $tmpResult.
As Marc B says, it's simple to add either an INNER JOIN or a LEFT JOIN to the query instead; using the join would be better.
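A minimal sketch of the one-line fix in context:

while ($row = mysqli_fetch_array($this->result)) {
    $count = mysqli_num_rows($tmpResult);
    for ($i = 0; $i < $count; $i++) {
        $tmpVar = mysqli_fetch_array($tmpResult); //fetch from $tmpResult, not $this->result
        // ...
    }
}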
Having some difficulty pinpointing exactly what is wrong with this block of code. I am expecting it to run through a loop a set number of times and update some rows in the table tbl_games with some values received from the form.
I have tried running the code in phpMyAdmin without variables, which works fine (updates specified row). I assume the problem is something to do with the string in $insert_q.
gamecount will always be an int<30, game_ID will be a unique primary key integer value in tbl_games.
A little background: this code is part of a bigger project - which is centered around football games. An admin adds games to tbl_games (coded and finished), this current file now displays games to the admin which are unplayed (scores for team1 and team2 are NULL) and gives them a space to input scores for each team. This code takes those 2 scores, and the game_ID and updates each row.
It's having no effect on the DB rows though. Please point me in the right direction.
<?php
$lim = $_SESSION['gamecount'];
for ($i = 1; $i < $lim; $i++) {
    $game_ID = ${"_SESSION['game".$i."_ID']"};
    $score_team_1 = ${"_REQUEST['".$i."_team1-score']"};
    $score_team_2 = ${"_REQUEST['game".$i."_team2-score']"};
    $insert_q = "UPDATE tbl_games SET team1_score = '$score_team_1', team2_score = '$score_team_2' WHERE game_ID = '$game_ID';";
    mysql_query($insert_q);
}
session_destroy();
?>
I think the problem is with these lines:
$game_ID = ${"_SESSION['game".$i."_ID']"};
That syntax looks up a variable literally named _SESSION['game1_ID'] rather than reading the superglobal array. It should be something like this:
$game_ID = $_SESSION['game'.$i.'_ID']; //much cleaner
and likewise $_REQUEST['game'.$i.'_team1-score'] for the score fields.
You need to make use of MySQL's error reporting: get it to output any errors, plus the number of affected rows. While you may think the affected-rows count will be zero, it might not be (always good to check when debugging, just so you cover everything).
Does your PHP error log have any warnings or other notices that might point to your query being an issue etc?
What is the value you're updating (echo out the var/session) and what is the DB value (look at it in phpmyadmin or mysql command line).
Could be there's nothing to update.
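A minimal sketch of that kind of reporting, using the legacy mysql_* API from the question:

$result = mysql_query($insert_q);
if ($result === false) {
    echo "Query failed: " . mysql_error() . " -- $insert_q<br>";
} else {
    echo "Rows affected: " . mysql_affected_rows() . "<br>";
    //0 affected rows with no error usually means the WHERE matched nothing
    //or the new values were identical to the old ones
}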
I'm having problems debugging a failing mysql 5.1 insert under PHP 5.3.4. I can't seem to see anything in the mysql error log or php error logs.
Based on a Yahoo presentation on efficient pagination, I was adding order numbers to posters on my site (order rank, not order sales).
I wrote a quick test app and asked it to create the order numbers on one category. There are 32,233 rows in that category, and each and every time I run it I get 23,304 rows updated. Each and every time. I've increased memory usage, I've put ini settings in the script, I've run it from the PHP CLI and PHP-FPM. Each time it doesn't get past 23,304 rows updated.
Here's my script, which I've added massive timeouts to.
include 'common.inc'; //database connection stuff
ini_set("memory_limit", "300M");
ini_set("max_execution_time", "3600");
ini_set('mysql.connect_timeout', '3600');
ini_set('mysql.trace_mode', 'On');
ini_set('max_input_time', '3600');

$sql1 = "SELECT apcatnum FROM poster_categories_inno LIMIT 1";
$result1 = mysql_query($sql1);
while ($cats = mysql_fetch_array($result1)) {
    $sql2 = "SELECT poster_data_inno.apnumber, poster_data_inno.aptitle
             FROM poster_prodcat_inno, poster_data_inno
             WHERE poster_prodcat_inno.apcatnum = '$cats[apcatnum]'
               AND poster_data_inno.apnumber = poster_prodcat_inno.apnumber
             ORDER BY aptitle ASC";
    $result2 = mysql_query($sql2);
    $ordernum = 1;
    while ($order = mysql_fetch_array($result2)) {
        $sql3 = "UPDATE poster_prodcat_inno SET catorder='$ordernum' WHERE apnumber='$order[apnumber]' AND apcatnum='$cats[apcatnum]'";
        $result3 = mysql_query($sql3);
        $ordernum++;
    } // end of 2nd while
}
I'm at a head-scratching loss. Just did a test on a smaller category and only 13,199 out of 17,662 rows were updated. For the two experiments only 72-74% of the rows are getting updated.
I'd say your problem lies with your 2nd query. Have you done an EXPLAIN on it? Because of the ORDER BY clause a filesort will be required. If you don't have appropriate indices that can slow things down further. Try this syntax and sub in a valid integer for your apcatnum variable during testing.
SELECT d.apnumber, d.aptitle
FROM poster_prodcat_inno AS p
JOIN poster_data_inno AS d ON d.apnumber = p.apnumber
WHERE p.apcatnum = '{$cats['apcatnum']}'
ORDER BY d.aptitle ASC;
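If EXPLAIN shows no usable index, something like the following would let the WHERE and the JOIN be satisfied from indexes (the index names are illustrative):

ALTER TABLE poster_prodcat_inno ADD INDEX idx_cat_ap (apcatnum, apnumber);
ALTER TABLE poster_data_inno ADD INDEX idx_ap (apnumber);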
Secondly, since catorder is just an integer version of the combination of apcatnum and aptitle, it's a denormalization for convenience's sake. This isn't necessarily bad, but it does mean that you have to update it every time you add a new title or category. Perhaps it might be better to partition your poster_prodcat_inno table by apcatnum and just do the JOIN with poster_data_inno when you actually need the catorder.
Please escape your query input, even if it does come from your own database (quotes and other characters will get you every time). Your SQL statement is also fragile because you're not referencing the array variables correctly; use the curly-brace syntax, as in:
while ($order = mysql_fetch_array($result2)) {
    //array_map runs the escaper over every value in the row
    $order = array_map('mysql_real_escape_string', $order);
    $sql3 = "UPDATE poster_prodcat_inno SET catorder='$ordernum' WHERE apnumber='{$order['apnumber']}' AND apcatnum='{$cats['apcatnum']}'";
    mysql_query($sql3);
    $ordernum++;
}