Show item of the day - php

I am looking to create a function that gets me a random item from a MySQL table, but lets me keep the returned item as the "item of the day". In other words, the item that was the "item of the day" yesterday should not be shown again until all other items have been shown as item of the day.
Any suggestions on how to do this in an elegant way?
Thanks

Add a bool column "UsedAsItemOfTheDay" set to false (0). Update it to true when an item is picked. Exclude already-used items from the picking process.
SELECT * FROM `table`
WHERE UsedAsItemOfTheDay = 0
ORDER BY RAND() LIMIT 1;
(Note: this is not the fastest way to return a random row in MySQL; it will be slow on huge tables.)
See also: quick selection of a random row from a large table in mysql
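A minimal PHP sketch of that flow with mysqli, assuming a table named items with an id primary key (both placeholder names) and the UsedAsItemOfTheDay column above; persisting which item is today's pick (e.g. in a separate table or cache) is left out:
<?php
// Placeholder connection details.
$mysqli = new mysqli('localhost', 'user', 'password', 'database');

// Pick a random item that has not been used yet.
$result = $mysqli->query(
    'SELECT * FROM `items` WHERE UsedAsItemOfTheDay = 0 ORDER BY RAND() LIMIT 1'
);

if ($result->num_rows === 0) {
    // Every item has had its turn: clear the flags and pick again.
    $mysqli->query('UPDATE `items` SET UsedAsItemOfTheDay = 0');
    $result = $mysqli->query(
        'SELECT * FROM `items` WHERE UsedAsItemOfTheDay = 0 ORDER BY RAND() LIMIT 1'
    );
}

$item = $result->fetch_assoc();

// Flag the chosen row so it is excluded until the cycle restarts.
$stmt = $mysqli->prepare('UPDATE `items` SET UsedAsItemOfTheDay = 1 WHERE id = ?');
$stmt->bind_param('i', $item['id']);
$stmt->execute();
?>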

SELECT <fields> FROM <table> WHERE <some logic to exclude already used> ORDER BY RAND() LIMIT 1 will get you a random row from the table.

Add a column to store whether the item has been used:
ALTER TABLE your_table ADD COLUMN isused BOOL DEFAULT 0;
Get a random item of the day:
SELECT t.*
FROM your_table t
WHERE t.isused = 0
ORDER BY RAND()
LIMIT 1
Now update that record so it can't be used in the future:
UPDATE your_table
SET isused = 1
WHERE id = id_from_select_random_statement

People who "know" SQL will look for declarative solutions and will shun procedural code. Flagging rows is a "smell" for procedural code.
Is the set of Items static (never changes) or stable (rarely changes)? If yes, it would be easier to do a one-off exercise of generating a lookup table of values from now until the end of time, rather than scheduling a proc to run daily to look for unused flags, update the flag for today, and clear all flags once all have been used, etc.
Create a table of sequential dates between today and a far-future date representing the lifetime of your application (you could consider omitting non-business days, of course). Add a column referencing the key in your Items table (ensure you opt for the ON DELETE NO ACTION referential action, just in case those Items prove not to be static!). Then randomly assign the whole set of Items, one per day, until each has been used once. Repeat again for the whole set of Items until the table is full. You could easily generate this data using a spreadsheet and import it (or pure SQL if you are hardcore ;)
Quick example using Standard SQL:
Say there are only five Items in the set:
CREATE TABLE Items
(
item_ID INTEGER NOT NULL UNIQUE
);
INSERT INTO Items (item_ID)
VALUES (1),
(2),
(3),
(4),
(5);
Your lookup table would be as simple as this:
CREATE TABLE ItemsOfTheDay
(
cal_date DATE NOT NULL UNIQUE,
item_ID INTEGER NOT NULL
REFERENCES Items (item_ID)
ON DELETE NO ACTION
ON UPDATE CASCADE
);
Starting with today, add the whole set of Items in random order:
INSERT INTO ItemsOfTheDay (cal_date, item_ID)
VALUES ('2010-07-13', 2),
('2010-07-14', 4),
('2010-07-15', 5),
('2010-07-16', 1),
('2010-07-17', 3);
Then, starting with the next unfilled date, add the whole set of Items again in a (hopefully different) random order:
INSERT INTO ItemsOfTheDay (cal_date, item_ID)
VALUES ('2010-07-18', 1),
('2010-07-19', 3),
('2010-07-20', 4),
('2010-07-21', 5),
('2010-07-22', 2);
...and again...
INSERT INTO ItemsOfTheDay (cal_date, item_ID)
VALUES ('2010-07-23', 2),
('2010-07-24', 3),
('2010-07-25', 5),
('2010-07-26', 1),
('2010-07-27', 4);
...and so on until the table is full.
It would then simply be a case of looking up today's date in the lookup table as and when required.
If the set of Items changes then the lookup table would obviously need to be regenerated, so you need to balance out the simplicity of design against the need for manual maintenance.
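If a spreadsheet feels clunky, here is a rough PHP sketch that fills ItemsOfTheDay the same way: shuffle the full Item set, assign one item per day, and reshuffle whenever the set is exhausted. The PDO connection details and the five-year horizon are placeholders.
<?php
$pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'password');

$itemIDs = $pdo->query('SELECT item_ID FROM Items')->fetchAll(PDO::FETCH_COLUMN);
$insert  = $pdo->prepare('INSERT INTO ItemsOfTheDay (cal_date, item_ID) VALUES (?, ?)');

$date = new DateTime('today');
$end  = new DateTime('+5 years');      // the "far future date" from the answer
$pool = [];

while ($date <= $end) {
    if (empty($pool)) {                // start a fresh randomised cycle
        $pool = $itemIDs;
        shuffle($pool);
    }
    $insert->execute([$date->format('Y-m-d'), array_pop($pool)]);
    $date->modify('+1 day');
}
?>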

If you have fixed items you can add a column:
ALTER TABLE your_table ADD COLUMN item_day INT DEFAULT 0;
Then, when selecting the item, use:
WHERE item_day = DATE_FORMAT(NOW(), '%j')
If you get an empty result, you can build a new list of day items:
<?php
$qry = "UPDATE your_table SET item_day = 0";
$db->execute($qry);
// DATE_FORMAT(NOW(), '%j') runs from 1 to 366, so number up to 366 items
for ($i = 0; $i < 366; $i++) {
    $qry = "UPDATE your_table SET item_day = ".($i+1)." WHERE item_day = 0 ORDER BY RAND() LIMIT 1";
    $rs = $db->execute($qry);
    // If no items are left, stop updating
    if (!$rs) { break; }
}
?>
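For completeness, a minimal sketch of the daily lookup that goes with this scheme, using mysqli and the table/column names above; connection details are placeholders:
<?php
$mysqli = new mysqli('localhost', 'user', 'password', 'database');

// Fetch the item numbered for today (day of year, 1-366).
$result = $mysqli->query(
    "SELECT * FROM your_table WHERE item_day = DATE_FORMAT(NOW(), '%j') LIMIT 1"
);

if ($result->num_rows === 0) {
    // Nothing numbered for today yet: re-run the renumbering loop shown above,
    // then repeat this SELECT.
}

$item = $result->fetch_assoc();
?>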

Here's a stored procedure which selects a random row without using ORDER BY RAND(), and which resets the used flag once all items have been used:
DELIMITER //
DROP PROCEDURE IF EXISTS random_iotd//
CREATE PROCEDURE random_iotd()
BEGIN
    # Reset the used flag if all the rows have been used.
    SELECT COUNT(*) INTO @used FROM iotd WHERE used = 1;
    SELECT COUNT(*) INTO @rows FROM iotd;
    IF (@used = @rows) THEN
        UPDATE iotd SET used = 0;
    END IF;
    # Pick a random offset between 0 and the number of unused rows - 1.
    SELECT FLOOR(RAND() * (@rows - @used)) INTO @rand;
    # Select the id of the row at position @rand.
    PREPARE stmt FROM 'SELECT id INTO @id FROM iotd WHERE used = 0 LIMIT ?,1';
    EXECUTE stmt USING @rand;
    # Select the row where id = @id.
    PREPARE stmt FROM 'SELECT id, item FROM iotd WHERE id = ?';
    EXECUTE stmt USING @id;
    # Update the row where id = @id.
    PREPARE stmt FROM 'UPDATE iotd SET used = 1 WHERE id = ?';
    EXECUTE stmt USING @id;
    DEALLOCATE PREPARE stmt;
END;
//
DELIMITER ;
To use:
CALL random_iotd();
The procedure assumes a table structure like this:
CREATE TABLE `iotd` (
    `id` int(11) NOT NULL AUTO_INCREMENT,
    `item` varchar(255) NOT NULL,
    `used` BOOLEAN NOT NULL DEFAULT 0,
    INDEX `used` (`used`),
    PRIMARY KEY (`id`)
) ENGINE=InnoDB;
Here's one way to get the result from PHP (to keep things simple, error checking has been removed):
$mysqli = new mysqli('localhost', 'root', 'password', 'database');
$stmt = $mysqli->prepare('CALL random_iotd()');
$stmt->execute();
$stmt->bind_result($id, $item);
$stmt->fetch();
echo "$id, $item\n";
// 4, Item 4
UPDATE:
This version should return the same result repeatedly on a given date. I've not really had time to test this, so be sure to do some testing of your own...
DELIMITER //
DROP PROCEDURE IF EXISTS random_iotd//
CREATE PROCEDURE random_iotd()
BEGIN
    # Get today's item.
    SET @id := NULL;
    SELECT id INTO @id FROM iotd WHERE ts = CURRENT_DATE();
    IF ISNULL(@id) THEN
        # Reset the used flag if all the rows have been used.
        SELECT COUNT(*) INTO @used FROM iotd WHERE used = 1;
        SELECT COUNT(*) INTO @rows FROM iotd;
        IF (@used = @rows) THEN
            UPDATE iotd SET used = 0;
        END IF;
        # Pick a random offset between 0 and the number of unused rows - 1.
        SELECT FLOOR(RAND() * (@rows - @used)) INTO @rand;
        # Select the id of the row at position @rand.
        PREPARE stmt FROM 'SELECT id INTO @id FROM iotd WHERE used = 0 LIMIT ?,1';
        EXECUTE stmt USING @rand;
        # Update the row where id = @id.
        PREPARE stmt FROM 'UPDATE iotd SET used = 1, ts = CURRENT_DATE() WHERE id = ?';
        EXECUTE stmt USING @id;
    END IF;
    # Select the row where id = @id.
    PREPARE stmt FROM 'SELECT id, item FROM iotd WHERE id = ?';
    EXECUTE stmt USING @id;
    DEALLOCATE PREPARE stmt;
END;
//
DELIMITER ;
And the table structure:
CREATE TABLE `iotd` (
    `id` int(11) NOT NULL AUTO_INCREMENT,
    `item` varchar(255) NOT NULL,
    `used` BOOLEAN NOT NULL DEFAULT 0,
    `ts` DATE DEFAULT NULL,
    INDEX `used` (`used`),
    INDEX `ts` (`ts`),
    PRIMARY KEY (`id`)
) ENGINE=InnoDB;

Why don't you use a sequence?
A sequence would serve your purpose easily...

Related

mysql insert record not immediately available, select count(*) doesn't see it right away

In my PHP code, I have a MySQL query:
SELECT COUNT(*)
to see if the record already exists, then if it doesn't exist I do an:
INSERT INTO <etc>
But if someone hits reload within a second or so, the SELECT COUNT(*) doesn't see the inserted record.
$ssql="SELECT COUNT(*) as counts FROM `points` WHERE `username` LIKE '".$lusername."' AND description LIKE '".$desc."' AND `info` LIKE '".$key."' AND `date` LIKE '".$today."'";
$result = mysql_query($ssql);
$row=mysql_fetch_array($result);
if ($row['counts'] == 0) // no points for this design before
{
    $isql = "INSERT INTO `points` (`datetime`,`username`,`ip`,`description`,`points`,`info`, `date`,`uri`) ";
    $isql = $isql."VALUES ('".date("Y-m-d H:i:s")."','".$lusername."',";
    $isql = $isql."'".$_SERVER['REMOTE_ADDR']."','".$desc."','".$points."',";
    $isql = $isql."'".$key."','".$today."','".$_SERVER['REQUEST_URI']."')";
    $iresult = mysql_query($isql);
    return(true);
}
else
    return(false);
I was using the MyISAM storage engine.
Instead of running two separate queries just use REPLACE INTO.
From the documentation:
REPLACE works exactly like INSERT, except that if an old row in the table has the same value as a new row for a PRIMARY KEY or a UNIQUE index, the old row is deleted before the new row is inserted.
For example if your key field is id then:
REPLACE INTO my_table SET id = 4, other_field = 'foobar'
will insert a new row if there is no record with id 4; if there is one, it will replace it, setting other_field to 'foobar'.
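A hedged sketch of that REPLACE INTO applied to the question's points table with mysqli prepared statements, assuming a UNIQUE index over (username, description, info, date) so a reload replaces the earlier row rather than duplicating it; $lusername, $desc, $points, $key and $today come from the question's code, and connection details are placeholders:
<?php
$mysqli = new mysqli('localhost', 'user', 'password', 'database');

$stmt = $mysqli->prepare(
    'REPLACE INTO `points`
         (`datetime`, `username`, `ip`, `description`, `points`, `info`, `date`, `uri`)
     VALUES (NOW(), ?, ?, ?, ?, ?, ?, ?)'
);
$stmt->bind_param(
    'sssisss',
    $lusername,
    $_SERVER['REMOTE_ADDR'],
    $desc,
    $points,
    $key,
    $today,
    $_SERVER['REQUEST_URI']
);
$stmt->execute();
?>
As a side effect, binding the values also avoids the string-concatenation quoting issues in the original snippet.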

Get both insert_id and updated row id in insert_update query

I have a query like this:
SET @uids = '';
INSERT INTO tbl1 (name, used, is_active)
VALUES (1,0,0), (2,0,0), (24,0,0)
ON DUPLICATE KEY UPDATE
    id = LAST_INSERT_ID(id),
    used = (SELECT @uids := CONCAT_WS(',', LAST_INSERT_ID(), @uids)),
    used = used + 1,
    is_active = CASE WHEN used > 3 THEN 1 ELSE 0 END;
SELECT @uids;
See here to figure out how to get the updated row id.
I get the updated row ids in @uids if it updates any rows, but if a row is inserted I can't get its id. So how do I get both the inserted row ids and the updated row ids?
Or how can I execute (SELECT @uids := CONCAT_WS(',', LAST_INSERT_ID(), @uids)) in the insert, before ON DUPLICATE KEY...?
Short answer
You can't do it, because there is no way to fill @uids while inserting: that requires a SELECT clause, and you are not allowed to use a SELECT clause within an INSERT statement unless the query can be transformed into an INSERT ... SELECT.
Long answer
As long as you don't try to insert mixed values that may result in both updating and inserting (which you probably do), there is a nasty but safe way you can go:
SET @uids := '';
INSERT INTO `tbl1` (name, used, is_active)
VALUES (1,0,0), (2,0,0), (24,0,0)
ON DUPLICATE KEY UPDATE
    is_active = CASE WHEN used > 3 THEN 1 ELSE 0 END,
    id = LAST_INSERT_ID(id),
    used = used + 1,
    id = (SELECT @uids := CONCAT_WS(',', LAST_INSERT_ID(), @uids));
SELECT @uids, LAST_INSERT_ID() AS f, MAX(id) AS l FROM `tbl1`;
Without being too tricky, you end up with two values:
LAST_INSERT_ID() as f is the first inserted row ID
MAX(id) as l is the last inserted row ID
So with those two boundaries you surely have all the inserted row IDs. That said, there is a drawback: you always get a LAST_INSERT_ID() value even if rows were only affected by the update part. However, as you tagged your question with php, there was a chance to benefit from mysqli_affected_rows while doing a multi_query, but I couldn't produce the return values from mysqli_affected_rows that MySQL documents:
For INSERT ... ON DUPLICATE KEY UPDATE statements, the affected-rows
value per row is 1 if the row is inserted as a new row, 2 if an
existing row is updated, and 0 if an existing row is set to its
current values.
You can try it yourself and see if it works. If you get the expected return values, then you can tell whether your query has done updates or inserts, and read the results based on that.
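For reference, a sketch of that per-row check with mysqli, issuing one single-row upsert at a time so the affected-rows value is unambiguous (1 = inserted, 2 = updated, 0 = unchanged); connection details are placeholders and name is assumed to be the unique key, as in the code further below:
<?php
$mysqli = new mysqli('localhost', 'user', 'password', 'database');

$rows = [[1, 0, 0], [2, 0, 0], [24, 0, 0]];
$insertIDs = [];
$updateIDs = [];

$stmt = $mysqli->prepare(
    'INSERT INTO `tbl1` (name, used, is_active) VALUES (?, ?, ?)
     ON DUPLICATE KEY UPDATE
         id = LAST_INSERT_ID(id),
         used = used + 1,
         is_active = CASE WHEN used > 3 THEN 1 ELSE 0 END'
);

foreach ($rows as [$name, $used, $isActive]) {
    $stmt->bind_param('iii', $name, $used, $isActive);
    $stmt->execute();
    if ($stmt->affected_rows === 1) {
        $insertIDs[] = $stmt->insert_id;   // a new row was inserted
    } elseif ($stmt->affected_rows === 2) {
        $updateIDs[] = $stmt->insert_id;   // id = LAST_INSERT_ID(id) exposes the updated row's id
    }
}
?>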
As in my short answer, there is no correct way to do it within the same query context, but maybe doing it programmatically is neater? (Though I wouldn't bet on its performance.)
$values = [[1, 0, 0], [2, 0, 0], [24, 0, 0]];
$insertIDs = [];
$updateIDs = [];
foreach ($values as $v) {
    $insert = $mysqli->prepare("INSERT INTO `tbl1` (name, used, is_active) VALUES (?, ?, ?)");
    $insert->bind_param('iii', $v[0], $v[1], $v[2]);
    $insert->execute();
    if ($insert->affected_rows == -1) {
        // The insert failed (duplicate key), so update instead,
        // considering `name` as a unique column.
        $update = $mysqli->prepare("UPDATE `tbl1` SET id = LAST_INSERT_ID(id), used = used + 1, is_active = CASE WHEN used > 3 THEN 1 ELSE 0 END WHERE name = ?");
        $update->bind_param('i', $v[0]);
        $update->execute();
        if ($update->affected_rows == 1) {
            $updateIDs[] = $update->insert_id;
        }
    } else {
        $insertIDs[] = $insert->insert_id;
    }
}
var_dump($updateIDs);
var_dump($insertIDs);
Example output:
array(1) {
[0]=>
int(140)
}
array(1) {
[0]=>
int(337)
}
Another workaround could be using MySQL triggers. By creating an AFTER INSERT trigger on table tbl1, you are able to store the IDs for later use:
CREATE TRIGGER trigger_tbl1
AFTER INSERT ON `tbl1` FOR EACH ROW
BEGIN
    -- NEW.id is this row's auto-generated id
    UPDATE `some_table` SET last_insert_ids = CONCAT_WS(',', NEW.id, last_insert_ids) WHERE id = 1;
END;

pdo update multiple rows in one query [duplicate]

I know that you can insert multiple rows at once, is there a way to update multiple rows at once (as in, in one query) in MySQL?
Edit:
For example I have the following
Name id Col1 Col2
Row1 1 6 1
Row2 2 2 3
Row3 3 9 5
Row4 4 16 8
I want to combine all the following Updates into one query
UPDATE table SET Col1 = 1 WHERE id = 1;
UPDATE table SET Col1 = 2 WHERE id = 2;
UPDATE table SET Col2 = 3 WHERE id = 3;
UPDATE table SET Col1 = 10 WHERE id = 4;
UPDATE table SET Col2 = 12 WHERE id = 4;
Yes, that's possible - you can use INSERT ... ON DUPLICATE KEY UPDATE.
Using your example:
INSERT INTO table (id,Col1,Col2) VALUES (1,1,1),(2,2,3),(3,9,3),(4,10,12)
ON DUPLICATE KEY UPDATE Col1=VALUES(Col1),Col2=VALUES(Col2);
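A possible way to run that upsert from PHP with bound parameters (mysqli); the table and column names follow the question and the connection details are placeholders:
<?php
$mysqli = new mysqli('localhost', 'user', 'password', 'database');

$rows = [
    [1, 1, 1],
    [2, 2, 3],
    [3, 9, 3],
    [4, 10, 12],
];

// One (?,?,?) group per row, all values bound as integers.
$placeholders = implode(',', array_fill(0, count($rows), '(?,?,?)'));
$sql = "INSERT INTO `table` (id, Col1, Col2) VALUES $placeholders
        ON DUPLICATE KEY UPDATE Col1 = VALUES(Col1), Col2 = VALUES(Col2)";

$flat  = array_merge(...$rows);
$types = str_repeat('i', count($flat));

$stmt = $mysqli->prepare($sql);
$stmt->bind_param($types, ...$flat);
$stmt->execute();
?>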
Since you have dynamic values, you need to use an IF or CASE for the columns to be updated. It gets kinda ugly, but it should work.
Using your example, you could do it like:
UPDATE table SET Col1 = CASE id
WHEN 1 THEN 1
WHEN 2 THEN 2
WHEN 4 THEN 10
ELSE Col1
END,
Col2 = CASE id
WHEN 3 THEN 3
WHEN 4 THEN 12
ELSE Col2
END
WHERE id IN (1, 2, 3, 4);
The question is old, yet I'd like to extend the topic with another answer.
My point is, the easiest way to achieve it is just to wrap multiple queries with a transaction. The accepted answer INSERT ... ON DUPLICATE KEY UPDATE is a nice hack, but one should be aware of its drawbacks and limitations:
As said above, if you happen to run the query with rows whose primary keys don't exist in the table, the query inserts new "half-baked" records, which is probably not what you want.
If you have a table with a NOT NULL field without a default value and don't want to touch this field in the query, you'll get a "Field 'fieldname' doesn't have a default value" MySQL warning even if you don't insert a single row at all. It will get you into trouble if you decide to be strict and turn MySQL warnings into runtime exceptions in your app.
I made some performance tests for three of the suggested variants, including the INSERT ... ON DUPLICATE KEY UPDATE variant, a variant with a CASE / WHEN / THEN clause, and a naive approach with a transaction. You may get the Python code and results here. The overall conclusion is that the variant with the CASE statement turns out to be twice as fast as the two other variants, but it's quite hard to write correct and injection-safe code for it, so I personally stick to the simplest approach: using transactions.
Edit: Findings of Dakusan prove that my performance estimations are not quite valid. Please see this answer for another, more elaborate research.
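A minimal sketch of the transaction approach with PDO prepared statements, using the question's example data; the DSN and the column whitelist are assumptions:
<?php
$pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'password', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

$updates = [               // id => [column => value]
    1 => ['Col1' => 1],
    2 => ['Col1' => 2],
    3 => ['Col2' => 3],
    4 => ['Col1' => 10, 'Col2' => 12],
];

$pdo->beginTransaction();
try {
    foreach ($updates as $id => $cols) {
        foreach ($cols as $col => $value) {
            // $col comes from our own whitelist above, so interpolating it is safe;
            // the values themselves are bound.
            $stmt = $pdo->prepare("UPDATE `table` SET `$col` = ? WHERE id = ?");
            $stmt->execute([$value, $id]);
        }
    }
    $pdo->commit();
} catch (Throwable $e) {
    $pdo->rollBack();
    throw $e;
}
?>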
Not sure why another useful option is not yet mentioned:
UPDATE my_table m
JOIN (
SELECT 1 as id, 10 as _col1, 20 as _col2
UNION ALL
SELECT 2, 5, 10
UNION ALL
SELECT 3, 15, 30
) vals ON m.id = vals.id
SET col1 = _col1, col2 = _col2;
All of the following applies to InnoDB.
I feel knowing the speeds of the 3 different methods is important.
There are 3 methods:
INSERT: INSERT with ON DUPLICATE KEY UPDATE
TRANSACTION: Where you do an update for each record within a transaction
CASE: In which you use a CASE/WHEN for each different record within an UPDATE
I just tested this, and the INSERT method was 6.7x faster for me than the TRANSACTION method. I tried on a set of both 3,000 and 30,000 rows.
The TRANSACTION method still has to run each query individually, which takes time, though it batches the results in memory, or something, while executing. The TRANSACTION method is also pretty expensive in both replication and query logs.
Even worse, the CASE method was 41.1x slower than the INSERT method w/ 30,000 records (6.1x slower than TRANSACTION). And 75x slower in MyISAM. INSERT and CASE methods broke even at ~1,000 records. Even at 100 records, the CASE method is BARELY faster.
So in general, I feel the INSERT method is both best and easiest to use. The queries are smaller and easier to read and only take up 1 query of action. This applies to both InnoDB and MyISAM.
Bonus stuff:
The solution for the INSERT non-default-field problem is to temporarily turn off the relevant SQL modes: SET SESSION sql_mode=REPLACE(REPLACE(@@SESSION.sql_mode,"STRICT_TRANS_TABLES",""),"STRICT_ALL_TABLES",""). Make sure to save the sql_mode first if you plan on reverting it.
As for other comments I've seen that say the auto_increment goes up using the INSERT method, this does seem to be the case in InnoDB, but not MyISAM.
Code to run the tests is as follows. It also outputs .SQL files to remove PHP interpreter overhead.
<?php
//Variables
$NumRows = 30000;

//These 2 functions need to be filled in
function InitSQL()
{
}
function RunSQLQuery($Q)
{
}

//Run the 3 tests
InitSQL();
for($i=0; $i<3; $i++)
    RunTest($i, $NumRows);

function RunTest($TestNum, $NumRows)
{
    $TheQueries=Array();
    $DoQuery=function($Query) use (&$TheQueries)
    {
        RunSQLQuery($Query);
        $TheQueries[]=$Query;
    };

    $TableName='Test';
    $DoQuery('DROP TABLE IF EXISTS '.$TableName);
    $DoQuery('CREATE TABLE '.$TableName.' (i1 int NOT NULL AUTO_INCREMENT, i2 int NOT NULL, primary key (i1)) ENGINE=InnoDB');
    $DoQuery('INSERT INTO '.$TableName.' (i2) VALUES ('.implode('), (', range(2, $NumRows+1)).')');

    if($TestNum==0)
    {
        $TestName='Transaction';
        $Start=microtime(true);
        $DoQuery('START TRANSACTION');
        for($i=1;$i<=$NumRows;$i++)
            $DoQuery('UPDATE '.$TableName.' SET i2='.(($i+5)*1000).' WHERE i1='.$i);
        $DoQuery('COMMIT');
    }

    if($TestNum==1)
    {
        $TestName='Insert';
        $Query=Array();
        for($i=1;$i<=$NumRows;$i++)
            $Query[]=sprintf("(%d,%d)", $i, (($i+5)*1000));
        $Start=microtime(true);
        $DoQuery('INSERT INTO '.$TableName.' VALUES '.implode(', ', $Query).' ON DUPLICATE KEY UPDATE i2=VALUES(i2)');
    }

    if($TestNum==2)
    {
        $TestName='Case';
        $Query=Array();
        for($i=1;$i<=$NumRows;$i++)
            $Query[]=sprintf('WHEN %d THEN %d', $i, (($i+5)*1000));
        $Start=microtime(true);
        $DoQuery("UPDATE $TableName SET i2=CASE i1\n".implode("\n", $Query)."\nEND\nWHERE i1 IN (".implode(',', range(1, $NumRows)).')');
    }

    print "$TestName: ".(microtime(true)-$Start)."<br>\n";
    file_put_contents("./$TestName.sql", implode(";\n", $TheQueries).';');
}
UPDATE table1, table2 SET table1.col1='value', table2.col1='value' WHERE table1.col3='567' AND table2.col6='567'
This should work for ya.
There is a reference in the MySQL manual for multiple tables.
Use a temporary table
// Reorder items
function update_items_tempdb(&$items)
{
    shuffle($items);
    $table_name = uniqid('tmp_test_');
    $sql = "CREATE TEMPORARY TABLE `$table_name` ("
        ." `id` int(10) unsigned NOT NULL AUTO_INCREMENT"
        .", `position` int(10) unsigned NOT NULL"
        .", PRIMARY KEY (`id`)"
        .") ENGINE = MEMORY";
    query($sql);

    $i = 0;
    $sql = '';
    foreach ($items as &$item)
    {
        $item->position = $i++;
        $sql .= ($sql ? ', ' : '')."({$item->id}, {$item->position})";
    }

    if ($sql)
    {
        query("INSERT INTO `$table_name` (id, position) VALUES $sql");
        $sql = "UPDATE `test`, `$table_name` SET `test`.position = `$table_name`.position"
            ." WHERE `$table_name`.id = `test`.id";
        query($sql);
    }

    query("DROP TABLE `$table_name`");
}
Why does no one mention multiple statements in one query?
In PHP, you use the multi_query method of the mysqli instance.
From the PHP manual:
MySQL optionally allows having multiple statements in one statement string. Sending multiple statements at once reduces client-server round trips but requires special handling.
Here is the result compared to the other 3 methods when updating 30,000 rows. The code can be found here, which is based on the answer from @Dakusan:
Transaction: 5.5194580554962
Insert: 0.20669293403625
Case: 16.474853992462
Multi: 0.0412278175354
As you can see, the multiple-statements query is more efficient than the highest-voted answer.
If you get an error message like this:
PHP Warning: Error while sending SET_OPTION packet
you may need to increase max_allowed_packet in the MySQL config file, which on my machine is /etc/mysql/my.cnf, and then restart mysqld.
There is a setting you can alter called 'multi statement' that disables MySQL's 'safety mechanism' implemented to prevent (more than one) injection command. Typical of MySQL's 'brilliant' implementation, it also prevents the user from doing efficient queries.
Here (http://dev.mysql.com/doc/refman/5.1/en/mysql-set-server-option.html) is some info on the C implementation of the setting.
If you're using PHP, you can use mysqli to do multi statements (I think php has shipped with mysqli for a while now)
$con = new mysqli('localhost','user1','password','my_database');
$query = "Update MyTable SET col1='some value' WHERE id=1 LIMIT 1;";
$query .= "UPDATE MyTable SET col1='other value' WHERE id=2 LIMIT 1;";
//etc
$con->multi_query($query);
$con->close();
Hope that helps.
You can alias the same table to give you the ids you want to update by (if you are doing a row-by-row update):
UPDATE table1 tab1, table1 tab2 -- alias references the same table
SET
col1 = 1
,col2 = 2
. . .
WHERE
tab1.id = tab2.id;
Additionally, it should be clear that you can also update from other tables as well. In this case, the update doubles as a "SELECT" statement, giving you the data from the table you are specifying. You are explicitly stating the update values in your query, so the second table is unaffected.
You may also be interested in using joins on updates, which is possible as well.
Update someTable Set someValue = 4 From someTable s Inner Join anotherTable a on s.id = a.id Where a.id = 4
-- Only updates someValue in someTable who has a foreign key on anotherTable with a value of 4.
Edit: If the values you are updating aren't coming from somewhere else in the database, you'll need to issue multiple update queries.
No one has yet mentioned what for me would be a much easier way to do this: use an SQL editor that allows you to execute multiple individual queries, such as Sequel Ace; I'd assume that Sequel Pro and probably other editors have similar functionality. (This of course assumes you only need to run this as a one-off thing rather than as an integrated part of your app/site.)
And now the easy way
update my_table m, -- let create a temp table with populated values
(select 1 as id, 20 as value union -- this part will be generated
select 2 as id, 30 as value union -- using a backend code
-- for loop
select N as id, X as value
) t
set m.value = t.value where t.id=m.id -- now update by join - quick
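A rough sketch of generating that derived-table part from PHP; the values are forced to integers with sprintf because they end up in raw SQL, the table/column names follow the example above, and the connection details are placeholders:
<?php
$mysqli = new mysqli('localhost', 'user', 'password', 'database');

$pairs = [1 => 20, 2 => 30, 3 => 15];   // id => new value

$selects = [];
foreach ($pairs as $id => $value) {
    $selects[] = sprintf('SELECT %d AS id, %d AS value', $id, $value);
}

$sql = 'UPDATE my_table m
        JOIN (' . implode(' UNION ALL ', $selects) . ') t ON t.id = m.id
        SET m.value = t.value';

$mysqli->query($sql);
?>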
Yes, it is possible using the INSERT ... ON DUPLICATE KEY UPDATE SQL statement.
syntax:
INSERT INTO table_name (a,b,c) VALUES (1,2,3),(4,5,6)
ON DUPLICATE KEY UPDATE a=VALUES(a),b=VALUES(b),c=VALUES(c)
use
REPLACE INTO `table` (`id`,`col1`,`col2`) VALUES
(1,6,1),(2,2,3),(3,9,5),(4,16,8);
Please note:
id has to be a primary or unique key
if you use foreign keys to reference the table, REPLACE deletes then inserts, so this might cause an error
I took the answer from @newtover and extended it using the new JSON_TABLE function in MySQL 8. This allows you to create a stored procedure to handle the workload rather than building your own SQL text in code:
drop table if exists `test`;
create table `test` (
`Id` int,
`Number` int,
PRIMARY KEY (`Id`)
);
insert into test (Id, Number) values (1, 1), (2, 2);
DROP procedure IF EXISTS `Test`;
DELIMITER $$
CREATE PROCEDURE `Test`(
p_json json
)
BEGIN
update test s
join json_table(p_json, '$[*]' columns(`id` int path '$.id', `number` int path '$.number')) v
on s.Id=v.id set s.Number=v.number;
END$$
DELIMITER ;
call `Test`('[{"id": 1, "number": 10}, {"id": 2, "number": 20}]');
select * from test;
drop table if exists `test`;
It's a few ms slower than pure SQL but I'm happy to take the hit rather than generate the sql text in code. Not sure how performant it is with huge recordsets (the JSON object has a max size of 1Gb) but I use it all the time when updating 10k rows at a time.
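Calling the procedure above from PHP is then just a matter of json_encode-ing the rows and passing a single string parameter; a short PDO sketch (the DSN is a placeholder):
<?php
$pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'password');

$rows = [
    ['id' => 1, 'number' => 10],
    ['id' => 2, 'number' => 20],
];

$stmt = $pdo->prepare('CALL `Test`(?)');
$stmt->execute([json_encode($rows)]);
?>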
The following will update all rows in one table
Update Table Set
Column1 = 'New Value'
The next one will update all rows where the value of Column2 is more than 5
Update Table Set
Column1 = 'New Value'
Where
Column2 > 5
There is also Unkwntech's example of updating more than one table:
UPDATE table1, table2 SET
table1.col1 = 'value',
table2.col1 = 'value'
WHERE
table1.col3 = '567'
AND table2.col6='567'
UPDATE tableName SET col1='000' WHERE id='3' OR id='5'
This should achieve what you're looking for. Just add more ids. I have tested it.
UPDATE `your_table` SET
`something` = IF(`id`="1","new_value1",`something`), `smth2` = IF(`id`="1", "nv1",`smth2`),
`something` = IF(`id`="2","new_value2",`something`), `smth2` = IF(`id`="2", "nv2",`smth2`),
`something` = IF(`id`="4","new_value3",`something`), `smth2` = IF(`id`="4", "nv3",`smth2`),
`something` = IF(`id`="6","new_value4",`something`), `smth2` = IF(`id`="6", "nv4",`smth2`),
`something` = IF(`id`="3","new_value5",`something`), `smth2` = IF(`id`="3", "nv5",`smth2`),
`something` = IF(`id`="5","new_value6",`something`), `smth2` = IF(`id`="5", "nv6",`smth2`)
// You just build it in PHP like this:
$q = 'UPDATE `your_table` SET ';
foreach ($data as $dat) {
    $q .= '
    `something` = IF(`id`="'.$dat->id.'","'.$dat->value.'",`something`),
    `smth2` = IF(`id`="'.$dat->id.'", "'.$dat->value2.'",`smth2`),';
}
$q = substr($q, 0, -1); // strip the trailing comma
So you can update the whole table with one query.
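For reference, the loop above expects $data to be a list of objects with id, value and value2 properties, e.g. (values made up):
<?php
$data = [
    (object) ['id' => 1, 'value' => 'new_value1', 'value2' => 'nv1'],
    (object) ['id' => 2, 'value' => 'new_value2', 'value2' => 'nv2'],
];
?>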

How to optimise this temporary table query?

I have written a stored procedure that takes a comma-separated value as input, plus another value. I traverse the values from the comma-separated string and run a query for each value. Now I need to return the result, so I thought to store the results in a temporary table and then select the values from that temporary table. But it is taking too long.
For a query with 163 comma-separated values it is taking 7 seconds.
For a query with 295 comma-separated values it is taking 12 seconds.
Here is the procedure:
DELIMITER $
create procedure check_fbid_exists(IN myArrayOfValue TEXT, IN leaderID INT(11))
BEGIN
    DECLARE status TINYINT(1);
    DECLARE value INT(11);
    DECLARE pos INT(11);
    CREATE TEMPORARY TABLE fbid_exists_result (userID INT(11), status TINYINT(1));
    WHILE (CHAR_LENGTH(myArrayOfValue) > 0)
    DO
        SET pos = LOCATE(',', myArrayOfValue);
        IF pos > 0 THEN
            SET value = LEFT(myArrayOfValue, pos-1);
            SET myArrayOfValue = SUBSTRING(myArrayOfValue, pos+1);
        ELSE
            SET value = myArrayOfValue;
            SET myArrayOfValue = '';
        END IF;
        SELECT EXISTS(SELECT 1 FROM users_followings WHERE UserID=value AND LeaderUserID=leaderID LIMIT 1) INTO status;
        INSERT INTO fbid_exists_result VALUES(value, status);
    END WHILE;
    SELECT * FROM fbid_exists_result;
    DROP TEMPORARY TABLE IF EXISTS fbid_exists_result;
END$

How to Pass Variable into a MySQL Stored Procedure from PHP

I have the following stored procedure:
proc_main:begin
    declare done tinyint unsigned default 0;
    declare dpth smallint unsigned default 0;

    create temporary table hier(
        AGTREFERRER int unsigned,
        AGTNO int unsigned,
        depth smallint unsigned default 0
    )engine = memory;

    insert into hier values (p_agent_id, p_agent_id, dpth);

    /* http://dev.mysql.com/doc/refman/5.0/en/temporary-table-problems.html */
    create temporary table tmp engine=memory select * from hier;

    while done <> 1 do
        if exists( select 1 from agents a inner join hier on a.AGTREFERRER = hier.AGTNO and hier.depth = dpth) then
            insert into hier
                select a.AGTREFERRER, a.AGTNO, dpth + 1 from agents a
                inner join tmp on a.AGTREFERRER = tmp.AGTNO and tmp.depth = dpth;
            set dpth = dpth + 1;
            truncate table tmp;
            insert into tmp select * from hier where depth = dpth;
        else
            set done = 1;
        end if;
    end while;

    select
        a.AGTNO,
        a.AGTLNAME as agent_name,
        if(a.AGTNO = b.AGTNO, null, b.AGTNO) as AGTREFERRER,
        if(a.AGTNO = b.AGTNO, null, b.AGTLNAME) as parent_agent_name,
        hier.depth,
        a.AGTCOMMLVL
    from
        hier
        inner join agents a on hier.AGTNO = a.AGTNO
        inner join agents b on hier.AGTREFERRER = b.AGTNO
    order by
        -- dont want to sort by depth but by commission instead - i think ??
        -- hier.depth, hier.agent_id;
        a.AGTCOMMLVL desc;

    drop temporary table if exists hier;
    drop temporary table if exists tmp;
end proc_main
While the procedure does its job well, it currently only allows sorting by AGTCOMMLVL in descending order. The stored procedure's purpose is to match a memberID with their parentID and associated COMMLVL. Once paired appropriately, I use the memberID in a second query to return information about that particular member.
I would like to be able to sort by any number of filters but have the following problems:
I can't seem to find a way to pass a variable into the stored procedure altering its sorting by field.
Even if I could - the sort may actually only contain data from the second query (such as first name, last name, etc)
Running a sort in the second query does nothing even though syntax is correct - it always falls back to the stored procedure's sort.
any ideas?
EDIT
My php uses mysqli with code:
$sql = sprintf("call agent_hier2(%d)", $agtid);
$resulta = $mysqli->query($sql, MYSQLI_STORE_RESULT) or exit(mysqli_error($mysqli));
If you want to sort by an input parameter of the stored procedure, you need to use prepared statements.
For example,
DELIMITER //
CREATE PROCEDURE `test1`(IN field_name VARCHAR(40) )
BEGIN
    SET @qr = CONCAT("SELECT * FROM table_name ORDER BY ", field_name);
    PREPARE stmt FROM @qr;
    EXECUTE stmt;
    DEALLOCATE PREPARE stmt;
END //
$stmt = $dbh->prepare("CALL sp_takes_string_returns_string(?)");
$value = 'hello';
$stmt->bindParam(1, $value, PDO::PARAM_STR|PDO::PARAM_INPUT_OUTPUT, 4000);
// call the stored procedure
$stmt->execute();
print "procedure returned $value\n";
This also works in MySQL 5.6:
DELIMITER //
CREATE PROCEDURE `test1`(IN field_name VARCHAR(40))
BEGIN
    SET @qr = CONCAT("SELECT * FROM table_name ORDER BY ", field_name);
    PREPARE stmt FROM @qr;
    EXECUTE stmt;
    DEALLOCATE PREPARE stmt;
END //
