I need to update a row in my table.
The task is essentially to replace the value of a specific column in the table.
I'm not using a PRIMARY KEY or UNIQUE KEY on that column (because then I couldn't insert duplicate values into it).
+----+------+------+
| id | col1 | col2 |
+----+------+------+
| 1  | a    | 404  |
| 2  | b    | 22   |
+----+------+------+
Now I update my table:
UPDATE table_name SET col2 = 0 WHERE col2 = 404;
UPDATE table_name SET col2 = 404 WHERE id = 2;
This is the result I want:
+----+------+------+
| id | col1 | col2 |
+----+------+------+
| 1  | a    | 0    |
| 2  | b    | 404  |
+----+------+------+
I have used two queries to get it done.
Is it possible to do this in a single query?
Or is there a simpler method?
You could use a CASE expression:
UPDATE table_name
SET col2 = CASE WHEN col2 = 404 THEN 0 WHEN id = 2 THEN 404 END
WHERE col2 = 404 OR id = 2;
But frankly, using two separate statements seems clearer to me.
As Mureinik has already said, you can use a CASE expression. A single UPDATE is preferable, especially if you are currently issuing two separate requests for the two updates. However, if the order of execution matters for some reason, then you cannot combine them like that.
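If the order of the two updates does matter, one option is to keep both statements but run them atomically in a transaction. A minimal sketch, assuming an InnoDB table named table_name as in the question:

START TRANSACTION;
UPDATE table_name SET col2 = 0   WHERE col2 = 404;
UPDATE table_name SET col2 = 404 WHERE id = 2;
COMMIT;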
I can't find a clear answer for this anywhere in the MySQL documentation.
When I run a query, something like:
Code Block 1
$stmt = $db->prepare('SELECT id, name FROM table WHERE status=1');
does the search start at the beginning of the table, at row 0 (or the lowest available row)?
What I'm trying to do is go through a table one row at a time, and then exit when I get to the end:
Code Block 2
$curRow = 0;
while (true) {
    // Grab one unprocessed row with an id greater than the last one handled.
    $stmt = $db->prepare('SELECT id, name FROM table WHERE status=? AND id>? LIMIT 1');
    $stmt->execute(array(0, $curRow));
    $result = $stmt->fetchAll();
    if (count($result)) {
        $curRow = $result[0]['id'];
        // Mark the row as processed.
        $stmt2 = $db->prepare('UPDATE table SET status=? WHERE id=?');
        $stmt2->execute(array(1, $curRow));
        // ... do some other stuff ...
    } else {
        exit();
    }
}
And so far, in testing, this has worked exactly as intended. But will it always be so?
Possible erroneous case:
Start out with the following table:
table
id | name | status
-- | ---- | ------
1 | ... | 0
2 | ... | 0
3 | ... | 0
4 | ... | 0
5 | ... | 0
6 | ... | 0
And run the query in Code Block 2. Say it starts at the first row, so now we have $curRow=1, and the table looks as follows:
table
id | name | status
-- | ---- | ------
1 | ... | 1
2 | ... | 0
3 | ... | 0
4 | ... | 0
5 | ... | 0
6 | ... | 0
All is well. The code does whatever it needs to, and then continues with the loop. Any of the remaining rows will satisfy the conditions in $stmt (i.e. status=0 and id>$curRow).
Will the statement always look at consecutive rows when checking the conditions? If not, it could end up at any arbitrary row, say the third:
table
id | name | status
-- | ---- | ------
1 | ... | 1
2 | ... | 0
3 | ... | 1
4 | ... | 0
5 | ... | 0
6 | ... | 0
And now we have $curRow=3, which means the query will never go back and look at the second row.
I know it's tricky business speaking in absolutes (always, never, every time, ...), but is there a way to ensure that the query begins at the lowest available row? Or does MySQL handle this automatically?
There is no guarantee of any reliable order unless you explicitly request one on a key. It might appear ordered for now, but over time, with more data, more servers, partitioned data, or UNIONed data, it can quickly change to something unexpected.
Better to use ORDER BY:
$stmt = $db->prepare('SELECT id, name FROM table WHERE status=1 ORDER BY id ASC');
Make sure you have an index on the column you order by; it will speed things up!
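For example, a minimal sketch of that advice applied to the question's table (the index name and the composite (status, id) layout are my assumptions):

-- Composite index so the ordered lookup below does not need a full scan.
ALTER TABLE `table` ADD INDEX idx_status_id (status, id);

-- Always picks the lowest-id row that is still unprocessed.
SELECT id, name
FROM `table`
WHERE status = 0 AND id > 0
ORDER BY id ASC
LIMIT 1;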
You should not write code that makes such assumptions about your database. It will become less maintainable and harder to debug when something changes in the database, and that will be a real headache for you. Think about other, more robust mechanisms to get the job done.
You might want to add a column that gives the rows a defined order, for example a date or an ID.
Look up the ORDER BY clause.
I have the following MySQL table named users
+-----------+-------+------+----------+
| uniquenum | state | type | custid   |
+-----------+-------+------+----------+
| 00001     | 03    | 1    | 10300001 |
| 00002     | 02    | 3    | 30200002 |
+-----------+-------+------+----------+
The three columns uniquenum, state and type are concatenated and shown in the custid column. I was able to achieve this by running the following SQL query:
UPDATE users SET custid = CONCAT(type, state, uniquenum);
What I am trying to achieve is to make the custid column automatically pick up the values of the other three columns when new rows are inserted or existing ones are updated. So, I tried to create triggers as follows:
CREATE TRIGGER insert_custid
BEFORE INSERT ON users
FOR EACH ROW
SET new.custid = CONCAT(new.type, new.state, new.uniquenum);

CREATE TRIGGER update_custid
BEFORE UPDATE ON users
FOR EACH ROW
SET new.custid = CONCAT(new.type, new.state, new.uniquenum);
When I do this and insert into the table, the custid column does not store the correct value. Instead, it contains all zeros in place of the value of uniquenum.
uniquenum is an AUTO_INCREMENT column with ZEROFILL, starting at 00001 with an interval of 1. Could this be causing the issue? Any help is greatly appreciated. Thank you :)
I know that this title is overused, but it seems that my kind of question is not answered yet.
So, the problem is like this:
I have a table structure made of four tables (tables, rows, cols, values) that I use to recreate the behavior of the information_schema (in a way).
In PHP I am generating queries to retrieve the data, and the result still looks like a normal table:
SELECT
(SELECT value FROM `values` WHERE `col` = "3" and row = rows.id) as "col1",
(SELECT value FROM `values` WHERE `col` = "4" and row = rows.id) as "col2"
FROM rows WHERE `table` = (SELECT id FROM tables WHERE name = 'table1')
HAVING (col2 LIKE "%4%")
OR
SELECT * FROM
(SELECT
(SELECT value FROM `values` WHERE `col` = "3" and row = rows.id) as "col1",
(SELECT value FROM `values` WHERE `col` = "4" and row = rows.id) as "col2"
FROM rows WHERE `table` = (SELECT id FROM tables WHERE name = 'table1')) d
WHERE col2 LIKE "%4%"
Note that the part where I define the columns of the result is generated by a PHP script. Why I am doing this is less important; the point is that I want to extend the algorithm that generates these queries for broader use.
And now to the core problem: I have to decide whether to generate a WHERE or a HAVING clause for the query. I know when each should be used, but my algorithm doesn't, and I would have to add extra checks for it. The two queries above are equivalent: I can always put the query in a subquery, give it an alias, and use WHERE on the new derived table. But I wonder whether this will cause performance problems, or come back to bite me in some unexpected way.
I know how they both work, and that WHERE is supposed to be faster, but that is exactly why I came here to ask. Hopefully I have made myself understood; please excuse my English and the long turns of phrase.
EDIT 1
I already know the difference between the two, and everything it implies. My only dilemma is this: using custom columns from other tables, with a variable number and size, while trying to achieve the same result as a normally created table means I must use HAVING to filter on the derived columns, while I also have the option of wrapping the query in a subquery and using WHERE normally. The latter will probably create a temporary table that is filtered afterwards. Will this affect performance on a large database? Unfortunately I cannot test this right now, as I cannot afford to fill the database with over 1 billion entries (it would be something like: 1 billion rows in the rows table, 5 billion in the values table since every row has 5 columns, 5 rows in the cols table and 1 row in the tables table, roughly 6,000,000,006 entries in total).
Right now my database looks like this (the tables `tables`, `cols`, `rows`, and `values`, in that order):
+----+--------+-----------+------+
| id | name | title | dets |
+----+--------+-----------+------+
| 1 | table1 | Table One | |
+----+--------+-----------+------+
+----+-------+------+
| id | table | name |
+----+-------+------+
| 3 | 1 | col1 |
| 4 | 1 | col2 |
+----+-------+------+
where `table` is a foreign key from table `tables`
+----+-------+-------+
| id | table | extra |
+----+-------+-------+
| 1 | 1 | |
| 2 | 1 | |
+----+-------+-------+
where `table` is a foreign key from table `tables`
+----+-----+-----+----------+
| id | row | col | value |
+----+-----+-----+----------+
| 1 | 1 | 3 | 13 |
| 2 | 1 | 4 | 14 |
| 6 | 2 | 4 | 24 |
| 9 | 2 | 3 | asdfghjk |
+----+-----+-----+----------+
where `row` is a foreign key from table `rows`
where `col` is a foreign key from table `cols`
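For reference, a rough sketch of the four tables as described above (column types and lengths are assumptions; only the columns shown in the question are included):

CREATE TABLE `tables` (
    id    INT PRIMARY KEY AUTO_INCREMENT,
    name  VARCHAR(64),
    title VARCHAR(64),
    dets  TEXT
);

CREATE TABLE cols (
    id      INT PRIMARY KEY AUTO_INCREMENT,
    `table` INT,             -- FK -> tables.id
    name    VARCHAR(64)
);

CREATE TABLE `rows` (
    id      INT PRIMARY KEY AUTO_INCREMENT,
    `table` INT,             -- FK -> tables.id
    extra   TEXT
);

CREATE TABLE `values` (
    id    INT PRIMARY KEY AUTO_INCREMENT,
    `row` INT,               -- FK -> rows.id
    `col` INT,               -- FK -> cols.id
    value VARCHAR(255)
);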
EDIT 2
The conditions are there just for demonstration purposes!
EDIT 3
For only two rows, there already seems to be a difference between the two: the one using HAVING takes about 0.0008 and the one using WHERE takes 0.0014-0.0019. I wonder whether this will affect performance for large numbers of rows and columns.
EDIT 4
The result of the two queries is identical, and that is:
+----------+------+
| col1 | col2 |
+----------+------+
| 13 | 14 |
| asdfghjk | 24 |
+----------+------+
HAVING is specifically for conditions on grouped/aggregated results (GROUP BY), while WHERE provides row-level conditions. See also WHERE vs HAVING.
I believe the having clause would be faster in this case, as you're defining specific values, as opposed to reading through the values and looking for a match.
See: http://database-programmer.blogspot.com/2008/04/group-by-having-sum-avg-and-count.html
Basically, WHERE filters rows before they are passed to any aggregate function, while HAVING filters the aggregate function's results after grouping.
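For illustration, a minimal sketch on a hypothetical orders table (the table and column names are made up) showing the two filters side by side:

-- WHERE removes rows before grouping; HAVING removes groups afterwards.
SELECT customer_id, SUM(amount) AS total
FROM orders
WHERE order_date >= '2013-01-01'    -- row filter, can use an index on order_date
GROUP BY customer_id
HAVING SUM(amount) > 100;           -- group filter, evaluated after aggregation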
You could do it like this:
WHERE col2 IN (14, 24)
Your condition WHERE col2 LIKE "%4%" is a bad idea: what about col2 = 34? It would also be selected.
How much faster (in %) will SQL be if I avoid using the built-in MySQL date and time functions?
What do I mean? For example: SELECT id FROM table WHERE WEEKOFYEAR(inserted)=WEEKOFYEAR(CURDATE())
MySQL has a lot of built-in functions to work with date and time, and they are convenient. But what about performance?
The SQL above can be rewritten without built-in functions, like: SELECT id FROM table WHERE inserted BETWEEN 'date for 1 day of particular week 00:00:00' AND 'last day of particular week 23:59:59'. The server-side code becomes uglier :( but on the database side we can use indexes.
I see two problems with using built-in functions:
1. Indexes
I did a small test:
mysql> explain extended select id from table where inserted between '2013-07-01 00:00:00' and '2013-07-01 23:59:59';
+----+-------------+-------+-------+---------------+------+---------+------+------+----------+--------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+-------+-------+---------------+------+---------+------+------+----------+--------------------------+
| 1 | SIMPLE | table | range | ins | ins | 4 | NULL | 7 | 100.00 | Using where; Using index |
+----+-------------+-------+-------+---------------+------+---------+------+------+----------+--------------------------+
mysql> explain extended select id from table where date(inserted)=curdate();
+----+-------------+-------+-------+---------------+------+---------+------+--------+----------+--------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+-------+-------+---------------+------+---------+------+--------+----------+--------------------------+
| 1 | SIMPLE | table | index | NULL | ins | 4 | NULL | 284108 | 100.00 | Using where; Using index |
+----+-------------+-------+-------+---------------+------+---------+------+--------+----------+--------------------------+
The first query took 0.00 sec; the second, run right after it, took 0.15 sec. Everything was done with a small amount of data.
And the second problem is:
2. The time it takes to call those functions
If the table has 1 billion records, WEEKOFYEAR, DATE, or whatever would be called once per record, i.e. as many times as there are records, right?
So the question is: will it bring a real benefit if I stop using the MySQL built-in date and time functions?
Using a function of a column in a WHERE clause or in a JOIN condition will prevent the use of indexes on the column(s), if such indexes exist. This is because the raw value of the column is indexed, as opposed to the computed value.
Notice the above does not apply for a query like this:
SELECT id FROM atable WHERE inserted = CURDATE(); -- the raw value of "inserted" is used in the comparison
And yes, on top of that, the function will be executed for each and every row scanned.
The second query runs the date function on every row in the table, while the first query can just use the index to find the rows it needs. That's where the biggest slowdown is. Look at the rows column in the EXPLAIN output.
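To make that concrete, here is a minimal sketch of the range rewrite the question itself proposes (the boundary dates are placeholders that would normally be computed in application code):

-- Sargable version of the WEEKOFYEAR comparison: an index on `inserted`
-- can be used because the column is not wrapped in a function.
SELECT id
FROM `table`
WHERE inserted >= '2013-07-01 00:00:00'   -- first day of the target week
  AND inserted <  '2013-07-08 00:00:00';  -- first day of the following week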
So I have this query:
SELECT * FROM cars {$statement} AND deleted = 'no' AND carID NOT IN (SELECT carID FROM reservations WHERE startDate = '".$sqlcoldate."') GROUP BY model
It basically checks the reservations table and, if there are reservations, it gets those carIDs and excludes them from the results.
This is cool: if there are three Dodge Vipers and two are booked out, it will only display the remaining one, and it only displays one per model anyway because I group the results by model.
All that is good; however, when it runs out of entries, i.e. all cars of a model are booked out, that car does not appear in the list of cars at all (as is clear from the query).
I would like a way to say: if no rows of a certain car model are in the results, display a placeholder that says something like 'UNAVAILABLE'.
Is this possible at all? It's mainly so users can see that the company owns that car, but know it's not available on that date.
You should probably handle this in the PHP, checking the number of rows returned and replacing the 0 with "UNAVAILABLE".
Based on the OP's comment:
In this case you want to look at
http://dev.mysql.com/doc/refman/5.1/en/case.html
This would need to go into the SELECT list, something like:
SELECT
    CASE car_count WHEN 0 THEN 'UNAVAILABLE' ELSE car_count END
    ...
WHERE ...
Without seeing some of your data, it's hard to give you a query, but if you move your subquery to your SELECT expression, you could return the count available (which would be 0 when they are all reserved). Then when you display your data, you can check whether the count is 0 and display your unavailable message.
Edit:
Given the table cars:
+----+----------+
| id | model |
+----+----------+
| 1 | viper |
| 2 | explorer |
| 3 | viper |
| 4 | explorer |
+----+----------+
and the table reservations:
+-------+------------+
| carid | date |
+-------+------------+
| 1 | 2013-03-07 |
| 3 | 2013-03-07 |
+-------+------------+
A query similar to yours above will return:
+----+----------+
| id | model |
+----+----------+
| 2 | explorer |
+----+----------+
If you change it to something like:
SELECT
`outer`.`model`,
(
SELECT COUNT(*)
FROM
`cars` AS `inner`
WHERE
`inner`.`model` = `outer`.`model` AND
`inner`.`id` NOT IN(
SELECT `carid`
FROM `reservations`
WHERE `date` = '2013-03-07'
)
GROUP BY `inner`.`model`
) AS `count`
FROM cars AS `outer`
GROUP BY `outer`.`model`;
then you would get results like:
+----------+-------+
| model | count |
+----------+-------+
| explorer | 2 |
| viper | NULL |
+----------+-------+
If you then needed the NULL value to come back as a 0, you could use COALESCE, as Liv mentioned previously.
It's not pretty, and I'm sure it could be done a much cleaner way, but it does work.
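For example, a minimal sketch (using the same hypothetical data as above) that wraps the correlated subquery in COALESCE so a fully booked model reports 0 instead of NULL:

SELECT
    `outer`.`model`,
    COALESCE(
        (
            SELECT COUNT(*)
            FROM `cars` AS `inner`
            WHERE
                `inner`.`model` = `outer`.`model` AND
                `inner`.`id` NOT IN(
                    SELECT `carid`
                    FROM `reservations`
                    WHERE `date` = '2013-03-07'
                )
            GROUP BY `inner`.`model`
        ),
        0
    ) AS `count`
FROM cars AS `outer`
GROUP BY `outer`.`model`;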
There was a similar question asked here that might get you headed in the right direction. Check out the COALESCE() function.
The built-in function COALESCE() returns the first non-null value among its arguments. This lets you structure queries like SELECT COALESCE(foo, 'bar') [...] such that the result will be the value in column foo if it is not null, or the value 'bar' if it is.