Updating the summary table based on triggers and stored procedures - php

I have a typical LAMP site built with Zend Framework, where I have a base table and a summary table. The summary table is used to display data in reports.
Base table -
ID | Status
1 | 1
2 | 1
3 | 2
4 | 2
5 | 1
6 | 1
Summary table -
Status | Count
1 | 4
2 | 2
The base table will be changed (insert, update, delete) about 20 times per day on average.
Currently, I am using triggers to call a stored procedure which will update the summary table based on the base table.
This is the stored procedure.
CREATE PROCEDURE UpdateSummary()
BEGIN
UPDATE summary a
INNER JOIN
(SELECT status, count(*) c from base group by status) b
ON a.status = b.status
SET a.count = b.c;
END
And I have 3 triggers (one each for Insert, Delete and Update). I have shown only the insert one below; the others are similar.
CREATE TRIGGER S_T_TRIGGER_I
AFTER INSERT ON base
FOR EACH ROW
CALL UpdateSummary();
I want the summary table to be updated to the latest values always.
Is using triggers and a stored procedure like this the best way, or is there a more elegant way to do this?

Well, you are re-querying the DB over and over for data that you already know.
Why not update the summary with only the changes?
DELIMITER $$
CREATE TRIGGER ai_base_each AFTER INSERT ON base FOR EACH ROW
BEGIN
INSERT INTO summary (status, count) VALUES (NEW.status,1)
ON DUPLICATE KEY UPDATE count = count + 1;
END $$
CREATE TRIGGER ad_base_each AFTER DELETE ON base FOR EACH ROW
BEGIN
UPDATE summary s
SET s.count = s.count - 1
WHERE s.status = OLD.status;
END $$
CREATE TRIGGER au_base_each AFTER UPDATE ON base FOR EACH ROW
BEGIN
UPDATE summary s
SET s.count = s.count - 1
WHERE s.status = OLD.status;
INSERT INTO summary (status, count) VALUES (NEW.status,1)
ON DUPLICATE KEY UPDATE count = count + 1;
END $$
DELIMITER ;
This will be much faster and, more to the point, much more elegant.
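To check that the delta-maintenance idea really keeps the summary in sync, here is a sketch in Python with SQLite (an assumption for illustration only — SQLite uses ON CONFLICT where MySQL uses ON DUPLICATE KEY UPDATE, and its triggers need no delimiter juggling):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE base (id INTEGER PRIMARY KEY, status INTEGER);
CREATE TABLE summary (status INTEGER PRIMARY KEY, count INTEGER);

-- After an insert, bump the counter for the new row's status.
CREATE TRIGGER ai_base AFTER INSERT ON base BEGIN
  INSERT INTO summary (status, count) VALUES (NEW.status, 1)
  ON CONFLICT(status) DO UPDATE SET count = count + 1;
END;

-- After a delete, decrement the counter for the old status.
CREATE TRIGGER ad_base AFTER DELETE ON base BEGIN
  UPDATE summary SET count = count - 1 WHERE status = OLD.status;
END;

-- An update is a decrement on the old status plus an increment on the new.
CREATE TRIGGER au_base AFTER UPDATE ON base BEGIN
  UPDATE summary SET count = count - 1 WHERE status = OLD.status;
  INSERT INTO summary (status, count) VALUES (NEW.status, 1)
  ON CONFLICT(status) DO UPDATE SET count = count + 1;
END;
""")

# Reproduce the sample data from the question, then mutate it.
conn.executemany("INSERT INTO base (status) VALUES (?)",
                 [(1,), (1,), (2,), (2,), (1,), (1,)])
conn.execute("UPDATE base SET status = 2 WHERE id = 1")
conn.execute("DELETE FROM base WHERE id = 3")
print(dict(conn.execute("SELECT status, count FROM summary")))
```

Each trigger touches only the one or two affected counter rows, which is the point: no full `GROUP BY` re-scan of the base table on every change.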

Why don't you use a view, like:
CREATE VIEW Summary AS
SELECT status, count(*) AS count
FROM Base
GROUP BY status;
Each time you need it, just do:
SELECT *
FROM Summary
And you'll get your result in real time (recomputed on each call).
Views can be used in Zend Framework the same way tables are; you just need to specify a primary key explicitly, as explained here.
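For illustration, a minimal sketch of the view approach in Python with SQLite (an assumption; MySQL views behave the same way for this query) — the summary is recomputed from the base table on every read, so it can never go stale:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE base (id INTEGER PRIMARY KEY, status INTEGER);
CREATE VIEW summary AS
  SELECT status, COUNT(*) AS count
  FROM base
  GROUP BY status;
""")

conn.executemany("INSERT INTO base (status) VALUES (?)",
                 [(1,), (1,), (2,), (2,), (1,), (1,)])
print(conn.execute("SELECT * FROM summary ORDER BY status").fetchall())

# The view reflects changes immediately, with no trigger machinery:
conn.execute("DELETE FROM base WHERE id = 1")
print(conn.execute("SELECT * FROM summary ORDER BY status").fetchall())
```

At ~20 changes per day this recompute-on-read cost is negligible; the trade-off only matters when the base table or the read volume grows large.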

Mysql single query with subquery or two separate queries

I've a table with the following structure:
id | storeid | code
where id is the primary key.
I want to insert data into this table in incremental order, like this:
id | storeid | code
1 | 2 | 1
2 | 2 | 2
3 | 2 | 3
4 | 2 | 4
I've two solutions for this task.
1) Fire a query to get the last record (code) from the table, increment the code value by 1 using PHP, and then run a second query to insert the incremented value into the database.
2) This single query : "INSERT INTO qrcodesforstore (storeid,code) VALUES (2,IFNULL((SELECT t.code+1 FROM (select code from qrcodesforstore order by id desc limit 1) t),1))"
I just want a suggestion as to which approach is better for performance, and why.
I'm currently using the second method, but I'm unsure about its performance since it uses a three-level subquery.
You can simply use INSERT with SELECT and MAX():
INSERT INTO qrcodesforstore
(storeid, code)
(SELECT 2, IFNULL((MAX(code)+1),1) FROM qrcodesforstore)
SQLFiddle
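A quick sketch of the same MAX()+1 pattern, translated to SQLite in Python for illustration (an assumption, not the original MySQL form). Because the read and the write happen in one atomic INSERT ... SELECT, two concurrent inserts cannot read the same MAX the way the two-query PHP approach can:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE qrcodesforstore
                (id INTEGER PRIMARY KEY, storeid INTEGER, code INTEGER)""")

def add_code(storeid):
    # IFNULL(MAX(code) + 1, 1) starts the sequence at 1 on an empty table.
    conn.execute("""INSERT INTO qrcodesforstore (storeid, code)
                    SELECT ?, IFNULL(MAX(code) + 1, 1) FROM qrcodesforstore""",
                 (storeid,))

for _ in range(4):
    add_code(2)
print(conn.execute("SELECT storeid, code FROM qrcodesforstore ORDER BY id").fetchall())
```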
Wrapping it up in a trigger:
DELIMITER $$
DROP TRIGGER IF EXISTS bi_qrcodesforstore$$
CREATE TRIGGER bi_qrcodesforstore BEFORE INSERT ON qrcodesforstore
FOR EACH ROW
BEGIN
DECLARE max_code INT;
SELECT MAX(code) INTO max_code FROM qrcodesforstore;
IF max_code IS NULL THEN
SET max_code := 1;
ELSE
SET max_code := max_code + 1;
END IF;
SET NEW.code := max_code;
END$$
DELIMITER ;
You can declare the field as a primary key with AUTO_INCREMENT; the values will then be incremented automatically.
You can set the code column as AUTO_INCREMENT, you don't need to set it as primary key, see this.
Anyway, the second solution would be better: one query is better than two.

Limiting maximum records per key in MySQL table

I've a table for storing products as per the following structure...
id shop_id product_id product_title
Every shop selects a plan, and accordingly it can store N products in this table. N is different for every shop.
Problem Statement: While doing insert operation in the table, total number of entries per shop_id can't be more than N.
I can count the number of products before every insert operation and then decide whether the new entry should go into the table or be ignored. The operations are triggered by incoming events, which may number in the millions, so that doesn't seem efficient. Performance is key.
Is there a better way?
I think you should use a stored procedure so you can delegate the validation to MySQL instead of PHP. Here is an example of what you might need; just be sure to replace the table and column names appropriately.
If you are worried about performance, you should check the tables' indexes.
Procedure
DELIMITER //
DROP PROCEDURE IF EXISTS storeProduct//
CREATE PROCEDURE storeProduct(IN shopId INT, IN productId INT, IN productTitle VARCHAR(255))
LANGUAGE SQL MODIFIES SQL DATA SQL SECURITY INVOKER
BEGIN
/*Here we get the plan for the shop*/
SET @N = (SELECT plan FROM planTable WHERE shop_id = shopId);
/*NOW WE COUNT THE PRODUCTS THAT ARE STORED WITH THE shop_id*/
SET @COUNT = (SELECT COUNT(id) FROM storing_products WHERE shop_id = shopId);
/*NOW WE CHECK IF WE CAN STORE THE PRODUCTS OR NOT*/
IF @COUNT < @N THEN
/*YES WE CAN INSERT*/
INSERT INTO storing_products(shop_id, product_id, product_title)
VALUES (shopId, productId, productTitle);
/*1 means that the insert according to the plan is ok*/
SELECT 1 AS 'RESULT';
ELSE
/*NO WE CAN NOT INSERT*/
/*0 means that the insert according to the plan is not ok*/
SELECT 0 AS 'RESULT';
END IF;
END//
DELIMITER ;
Now you can call it from PHP like this (note the quotes around the string parameter; a prepared statement would be safer against SQL injection):
<?php
//....
$result = $link->query("CALL storeProduct($shop_id, $product_id, '$product_title');");
//....
?>
or however you run your queries.
The result will look like
+--------+
| RESULT |
+--------+
| 1 |
+--------+
if it's ok, or
+--------+
| RESULT |
+--------+
| 0 |
+--------+
if not.
I hope it will help
Greetings
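The procedure's check-then-insert logic can also be sketched outside MySQL; here is a hypothetical Python/SQLite version for illustration (table and column names follow the procedure above, and the 1/0 result convention is kept):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE planTable (shop_id INTEGER PRIMARY KEY, plan INTEGER);
CREATE TABLE storing_products (id INTEGER PRIMARY KEY,
    shop_id INTEGER, product_id INTEGER, product_title TEXT);
INSERT INTO planTable VALUES (1, 2);  -- shop 1 may store at most 2 products
""")

def store_product(shop_id, product_id, product_title):
    """Insert only while the shop is under its plan limit; return 1 or 0."""
    n = conn.execute("SELECT plan FROM planTable WHERE shop_id = ?",
                     (shop_id,)).fetchone()[0]
    count = conn.execute(
        "SELECT COUNT(id) FROM storing_products WHERE shop_id = ?",
        (shop_id,)).fetchone()[0]
    if count < n:
        conn.execute("""INSERT INTO storing_products
                        (shop_id, product_id, product_title)
                        VALUES (?, ?, ?)""",
                     (shop_id, product_id, product_title))
        return 1
    return 0

# Third insert exceeds the plan of 2 and is rejected.
results = [store_product(1, p, f"product {p}") for p in (10, 11, 12)]
print(results)
```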
If you don't want to count at insertion time, you can maintain the count in another table that can be consulted during insertion:
shop_id | max_product | product_count | insertion_allowed
---------------------------------------------------------
1 | 1000 | 50 | 1
2 | 2000 | 101 | 1
3 | 100 | 100 | 0
Two approaches:
Compare product_count with max_product and insert only when product_count is smaller than max_product. After a successful insertion, increment product_count for the corresponding shop_id.
Alternatively, you may use the insertion_allowed flag to check the condition, and after each successful insertion increment product_count by 1 for the corresponding shop_id.
Hope this will help you.
Please share performance statistics for whichever approach you choose (if you can); it may help others pick the better one.
Another approach, without a stored procedure.
CREATE TABLE `test` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`prod_id` int(11) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=19 DEFAULT CHARSET=utf8;
You can do an insert with a select; it's a little more efficient than two separate queries. The trick is that if the select returns no rows, then no insert happens:
$prod_id = 12;
$N = 3;
$qry = "insert into test (prod_id)
(select $prod_id FROM test WHERE prod_id=$prod_id HAVING count(id) < $N )";
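The conditional-insert trick above can be sketched in Python with SQLite (an illustration, not the original MySQL form; a scalar subquery replaces the HAVING clause, since HAVING without GROUP BY is not portable across SQLite versions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE test (id INTEGER PRIMARY KEY AUTOINCREMENT,
                                   prod_id INTEGER)""")

N = 3  # hypothetical per-key cap

def insert_capped(prod_id):
    # The SELECT yields a row (and thus an insert) only while the
    # per-prod_id count is still below N.
    conn.execute("""INSERT INTO test (prod_id)
                    SELECT ? WHERE (SELECT COUNT(*) FROM test
                                    WHERE prod_id = ?) < ?""",
                 (prod_id, prod_id, N))

for _ in range(5):          # try five inserts; only N should succeed
    insert_capped(12)
print(conn.execute("SELECT COUNT(*) FROM test WHERE prod_id = 12").fetchone()[0])
```

The guard and the insert run as one statement, so there is no window for a competing insert to slip past the cap the way a separate count-then-insert pair allows.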

Updating MySql Column based on another Column

I am still pretty new to MySQL and I just ran into a problem that I can't seem to figure out.
Say I have a table called "tracks" with the following columns and sample data
track_hash | track_order
abc | 1
abc | 2
abc | 3
abc | 4
def | 1
def | 2
ghi | 1
So the point is that when I display the tracks, they should be ordered by track order. So if I want to display all tracks from abc, they will display based on the track order (1, 2, 3). Track hash "def" has two tracks, etc.
Currently my DB just has an empty track_order column. How would I go about filling the track_order column with the correct data?
You can do this with update and a user defined variable. However, you have a fundamental problem. SQL tables represent unordered sets. So, there is no inherent ordering in the table, unless a column specifies the order.
Let me assume there is a column called id. Then the following does what you want:
update tracks t
set t.track_order = if(@th = t.track_hash, (@rn := coalesce(@rn, 0) + 1),
if(@th := t.track_hash, @rn := 1, @rn := 1)
)
order by t.track_hash, t.id;
You don't have to initialize the variables for this to work, but you can initialize them before the update.
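If your MySQL version supports window functions (8.0+), the same per-hash numbering can be done without user variables. Here is a sketch in Python with SQLite, which shares the ROW_NUMBER() syntax; the correlated-subquery UPDATE is my own adaptation, not taken from the answer above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tracks (id INTEGER PRIMARY KEY, track_hash TEXT,
                     track_order INTEGER);
INSERT INTO tracks (track_hash) VALUES
  ('abc'), ('abc'), ('abc'), ('abc'), ('def'), ('def'), ('ghi');
""")

# ROW_NUMBER() restarts at 1 for each track_hash, ordered by id.
conn.execute("""
UPDATE tracks SET track_order = (
  SELECT rn FROM (
    SELECT id, ROW_NUMBER() OVER
      (PARTITION BY track_hash ORDER BY id) AS rn
    FROM tracks) numbered
  WHERE numbered.id = tracks.id)
""")
print(conn.execute(
    "SELECT track_hash, track_order FROM tracks ORDER BY id").fetchall())
```

This fills every track_hash group in one statement, which sidesteps the per-hash manual runs the variable-based approach needs.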
I think you might be looking for user defined variables.
You could do something like this:
SET @t1 = 0;
update `tablename` set track_order = (@t1 := @t1 + 1) where track_hash = 'some_hash' order by id;
I'm not entirely sure how you would go about doing it for every record in your database in one go. I think this should work, but it does it on a per-track_hash basis. Not sure if that suits you?
You'll have to run it manually for every track_hash this way, so if you have a lot of different track_hash records it might be worth figuring out how to do them all in one go. But I'm unsure of how to do that.

Add column of data to an existing mySQL table

I have a basic SQL problem that's been driving me mad. Say I have the MySQL table below.
How would I add another 80+ values to Column 2, starting from the first empty row (in this example, row 3)?
I've been trying a number of queries using INSERT or UPDATE, but the closest I've got is adding the values to Column 2 starting from the last defined ID value (e.g. around row 80).
ID | Column 2 |
--------------------------------
1 | value |
2 | value |
3 | |
4 | |
5 | |
etc
The real table has around 10 columns, all with data in them, but I just need to add content (a list of around 80 different strings in CSV format) to one of the columns.
I'd appreciate it if anyone could point me in the right direction.
I'd load the data into a separate table with the same structure and then update the target table, using a join or subquery to determine which rows are currently empty.
I.e. load the interim table and then:
update target_table set column2 = (select column2 from interim_table where ...)
where column2 is null
(slow but intuitive)
update target_table, interim_table
set target_table.column2 = interim_table.column2
where target_table... = interim_table...
and target_table.column2 is null
(better performance)
Why don't you first run a query to find out the first empty row's ID? You can use SELECT COUNT(*) FROM TABLE_NAME for that.
Then create a loop and run an INSERT query inside it, starting with the value returned by the previous query. Just a sketch:
for (var id = last; id < totalOfQueries; id++)
{
    var query = new MysqlCommand("INSERT INTO table VALUES ('" + id + "', ....)");
}

How do I assign a rotating category to database entries in the order the records come in?

I have a table which gets entries from a website, and as those entries go into the database, they need to be assigned the next category from a list of categories that may be changed at any time.
Because of this I can't do something simple like mapping the first of the 5 categories to IDs 1, 6, 11, 16.
I've considered reading in the list of currently possible categories, checking the value of the last one inserted, and then giving the new record the next category. But I imagine that if two requests come in at the same moment, I could assign them both the same category rather than the next one in sequence.
So, my current round of thinking is the following:
lock the tables ( categories and records )
insert the newest row into records
get the newest row's ID
select the row previous to the insert (by using ORDER BY auto_inc_name DESC LIMIT 0, 1)
take the previous row's category and grab the next one from the category list
update the new inserted row
unlock the table
I'm not 100% sure this will work right, and there's possibly a much easier way to do it, so I'm asking:
A. Will this work as I described in the original problem?
B. Do you have a better/easier way to do this?
Thanks ~
I would do it way simpler... just make a table with one entry, last_category (unsigned tinyint, not null). Every time you do an insert, just increment that value, and reset it as necessary.
I'm not sure I understand your problem, but as I understand it you would like to have something like
category | data
-----------------
0 | lorem
1 | ipsum
.... | ...
4 | dolor
0 | sit
... | ...
How about having a unique auto_increment column, and let category be the MOD 5 of this column?
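A tiny sketch of the MOD idea in Python with SQLite (for illustration; `(id - 1) % 5` assumes 1-based, gap-free auto-increment ids, which deletions would break, and a fixed category count of 5):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE entries (id INTEGER PRIMARY KEY, data TEXT)")
# item3/item4 are filler for the rows elided in the sample above.
conn.executemany("INSERT INTO entries (data) VALUES (?)",
                 [("lorem",), ("ipsum",), ("item3",),
                  ("item4",), ("dolor",), ("sit",)])

# Derive the category on read instead of storing it at all:
rows = conn.execute(
    "SELECT data, (id - 1) % 5 AS category FROM entries ORDER BY id"
).fetchall()
print(rows)
```

Since the category is computed rather than stored, concurrent inserts can never collide on it; the flip side is that changing the category count renumbers existing rows.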
If you need 100% correct behaviour, it sounds like you will need to lock something somewhere so that all your inserts line up properly. You might be able to avoid locking the category table if you use a single SQL statement to insert your data. I'm not sure how MySQL differs, but in Oracle I can do this:
insert into my_table (id, col1, col2, category_id)
select :1, :2, :3, :4, c.id -- :1, :2, etc are bind variables. :1 corresponds to the ID.
from
(select
id, -- category id
count(*) over (partition by 1) cnt, -- count of how many categories there are
row_number() over (partition by 1 order by category.id) rn -- row number for current row in result set
from category) c
where c.rn = mod(:1, cnt) + 1 -- rn is 1-based while mod() is 0-based, hence the +1
This way, in one statement, I insert the next record based on the categories that existed at that moment. The insert automatically locks the my_table table until you commit. It picks the category based on the modulus of the ID. This link shows you how to do a row number in MySQL. I'm not sure if count(*) requires GROUP BY in MySQL; in Oracle it does, so I used a partition to count the whole result set instead.
