I have a table with more than 300 000 rows and I need to select the highest value for the column 'id'. Usually I would do it like this:
SELECT id FROM my_table ORDER BY id DESC
... but this causes slow queries and I don't want to use it. Is there a different way to solve this? id is an auto-increment primary key.
Later edit: It seems my full code is quite badly written, as I gather from your comments. Below I've posted a sample of the code I'm working on and the tables. Can you suggest a proper way to insert the last ID+1 of table_x into two tables (including table_x itself)? I should mention that the script will be run more than once.
TABLE_X
id_x | value
------------
1    | A
2    | B
3    | C

TABLE_Y
id_y | id_x
-----------
1    | 3
<?php
for ($i = 0; $i < 10; $i++) {
    $result_x = mysql_query('SELECT id_x FROM table_x ORDER BY id_x DESC');
    $row_x = mysql_fetch_array($result_x);
    $next = $row_x['id_x'] + 1;
    mysql_query('INSERT INTO table_x(id_x) VALUES("'.$next.'")');
    mysql_query('INSERT INTO table_y(id_x) VALUES("'.$next.'")');
}
?>
Slightly better:
SELECT id FROM my_table ORDER BY id DESC LIMIT 1
Significantly better:
SELECT MAX(id) FROM my_table
Here is the code you should use instead:
mysql_query('INSERT INTO table_x(id_x) VALUES(NULL)');
$id = mysql_insert_id();
mysql_query("INSERT INTO table_y(id_x) VALUES($id)");
Inserting NULL lets MySQL assign the next auto-increment value itself, and mysql_insert_id() returns the id generated by your own connection's last INSERT, so two copies of the script running at once cannot grab the same id.
Depending on the context either
SELECT id FROM my_table ORDER BY id DESC LIMIT 1
or mysql_insert_id() in PHP or (SELECT LAST_INSERT_ID()) in MySQL
As others have said, you should use MAX():
SELECT MAX(id) FROM my_table
(There's no point in adding ORDER BY here, since the result is a single aggregated row.)
As a general rule of thumb, always reduce the number of records returned from the database. The database is almost always faster than your application code at operating on result sets.
In case of slow queries, please give EXPLAIN a try:
EXPLAIN SELECT id FROM my_table ORDER BY id DESC
vs.
EXPLAIN SELECT MAX(id) FROM my_table
EXPLAIN asks MySQL's query optimizer how it sees the query. Look at the documentation to learn how to read its output.
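For example, on a table where id is the auto-increment primary key, the EXPLAIN for the MAX(id) version typically reports something like this (abbreviated; the exact output depends on your server version), meaning MySQL answered the query from the index alone without touching any rows:
id: 1
select_type: SIMPLE
table: NULL
Extra: Select tables optimized away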
PS: I really wonder why you need MAX(id) at all. Even if your application gets the value back from the database, it is already useless: another process may have inserted a new record in the very next instant, and your MAX(id) is no longer valid.
I guess it is slow because you retrieve all 300 000 rows. Add LIMIT 1 to the query.
SELECT id FROM my_table ORDER BY id DESC LIMIT 1
Or use the MAX() function.
Related
I have a MySQL query that results in something like this:
person | some_info
==================
bob | pphsmbf24
bob | rz72nixdy
bob | rbqqarywk
john | kif9adxxn
john | 77tp431p4
john | hx4t0e76j
john | 4yiomqv4i
alex | n25pz8z83
alex | orq9w7c24
alex | beuz1p133
etc...
(This is just a simplified example. In reality there are about 5000 rows in my results).
What I need to do is go through each person in the list (bob, john, alex, etc...) and pull out a row from their set of results. The row I pull out is sort of random but sort of also based on a loose set of conditions. It's not really important to specify the conditions here so I'll just say it's a random row for the example.
Anyways, using PHP, this solution is pretty simple. I make my query and get 5000 rows back and iterate through them pulling out my random row for each person. Easy.
However, I'm wondering if it's possible to get what I want from a single MySQL query, so that I don't have to use PHP to iterate through the results and pull out my random rows.
I have a feeling it might involve a BUNCH of subselects, like one for each person, in which case that solution would be more time, resource and bandwidth intensive than my current solution.
Is there a clever query that can accomplish this all in one command?
Here is an SQLFiddle that you can play with.
To get a random value for each distinct name, use:
SELECT r.name,
(SELECT r1.some_info FROM test AS r1 WHERE r.name=r1.name ORDER BY rand() LIMIT 1) AS 'some_info'
FROM test AS r
GROUP BY r.name ;
Put this query as it stands into your SQLFiddle and it will work.
I'm using r and r1 as table aliases; the correlated subquery picks a random some_info for each name.
SQL Fiddle is here
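Against the sample data from the question, one possible result looks like this (which some_info you get for each name changes on every run):
name | some_info
-----|----------
alex | orq9w7c24
bob  | rz72nixdy
john | hx4t0e76j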
My first response would be to use PHP to generate a random number:
$randId = rand($min, $max);
Then run a SQL query that only gets the record where your index equals $randId.
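A rough sketch of that idea in SQL (this assumes the table has a numeric auto-increment id column, which the question doesn't show, and borrows the fiddle's table name test; using >= tolerates gaps in the ids, but it does not handle the one-row-per-person requirement, so treat it as a starting point only):
SELECT person, some_info
FROM test
WHERE id >= $randId
ORDER BY id
LIMIT 1;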
Here is one possible solution:
select person, acting from personel where id in (
    select lim from (
        select count(person) c, min(id) i,
               cast(rand()*(count(person)-1) + min(id) as unsigned) lim
        from personel group by person order by i
    ) t1
)
Note that this relies on each person's ids being contiguous (as they are in the sample data below); with gaps in the id sequence it could pick a row belonging to a different person.
The table used in the example is below:
create table personel (
id int(11) not null auto_increment,
person char(16),
acting char(19),
primary key(id)
);
insert into personel (person,acting) values
('john','abd'),('john','aabd'),('john','adbd'),('john','abfd'),
('alex','ab2d'),('alex','abd3'),('alex','ab4d'),('alex','a6bd'),
('max','ab2d'),('max','abd3'),('max','ab4d'),('max','a6bd'),
('jimmy','ab2d'),('jimmy','abd3'),('jimmy','ab4d'),('jimmy','a6bd');
You can limit the number of rows returned and order by rand() to get your desired result.
Perhaps if you tried something like this:
SELECT name, some_info
FROM test
WHERE name = 'tara'
ORDER BY rand()
LIMIT 1
I am trying to update fields in my DB, but I got stuck on a simple problem: I want to update just the one row in the table with the biggest id. I would do something like this:
UPDATE table SET name='test_name' WHERE id = max(id)
Unfortunately it doesn't work. Any ideas?
Table Structure
id | name
---|------
1 | ghost
2 | fox
3 | ghost
I want to update only the last row, because its ID is the greatest one.
MAX() cannot be used in that position, but you can do this:
UPDATE table SET name='test_name' ORDER BY id DESC LIMIT 1;
For multiple tables, as in #Euthyphro's question, qualify the columns as table.column; the error means that the column id is ambiguous. Note that MySQL does not allow ORDER BY or LIMIT in a multiple-table UPDATE, so target the highest id with a derived-table subquery instead.
Example:
UPDATE table1 AS t1
LEFT JOIN table2 AS t2
    ON t2.id = t1.colref_t2
SET t1.name = t2.nameref_t2
WHERE t1.id = (
    SELECT maxid FROM (SELECT MAX(id) AS maxid FROM table1) AS tmp
);
UPDATE table SET name='test_name' WHERE id = (SELECT max(id) FROM table)
This query will return an error as you can not do a SELECT subquery from the same table you're updating.
Try using this:
UPDATE table SET name='test_name' WHERE id = (
    SELECT max_id FROM (
        SELECT MAX(id) AS max_id FROM table
    ) AS tmp
)
Wrapping MAX(id) in a derived table forces MySQL to materialize the result first, which is what allows the same table to be used for both the UPDATE and the SELECT, at the cost of a little performance.
I think iblue's method is probably your best bet; but another solution might be to set the result as a variable, then use that variable in your UPDATE statement.
SET #max = (SELECT max(`id`) FROM `table`);
UPDATE `table` SET `name` = "FOO" WHERE `id` = #max;
This could come in handy if you're expecting to run multiple queries with the same ID, but it's not really ideal to run two statements if you're only performing one update.
UPDATE table_NAME
SET COLUMN_NAME='COLUMN_VALUE'
ORDER BY ID DESC
LIMIT 1;
You can't directly SELECT from the same table inside a DELETE or UPDATE. ORDER BY ID DESC LIMIT 1 targets the row with the maximum ID, which is what you were trying to do with MAX(ID); MAX(ID) by itself in the WHERE clause will not work.
Old Question, but for anyone coming across this you might also be able to do this:
UPDATE
`table_name` a
JOIN (SELECT MAX(`id`) AS `maxid` FROM `table_name`) b ON (b.`maxid` = a.`id`)
SET a.`name` = 'test_name';
We can update the record using the MAX() function; maybe it will help you (note that this example uses SQL Server syntax, with [brackets] and GETDATE()):
UPDATE MainTable
SET [Date] = GETDATE()
WHERE [ID] = (SELECT MAX([ID]) FROM MainTable)
It works perfectly for me.
I have to update a table with consecutive numbers.
This is how I do it:
UPDATE pos_facturaciondian fdu
SET fdu.idfacturacompra = '".$resultado["afectados"]."',
fdu.fechacreacion = '".$fechacreacion."'
WHERE idfacturaciondian =
(
SELECT min(idfacturaciondian) FROM
(
SELECT *
FROM pos_facturaciondian fds
WHERE fds.idfacturacompra = ''
ORDER BY fds.idfacturaciondian
) as idfacturaciondian
)
Using PHP, I tend to run a query for the highest id, put the result into a variable, and then run an UPDATE statement saying WHERE id = the newly created variable. Some people have posted that there is no need to use LIMIT 1 at the end; however, I like to add it, since it causes no noticeable delay and can prevent unforeseen actions from being taken.
If you have only just inserted the row, you can use PHP's mysqli_insert_id() function to get this id back automatically, without needing to run the extra query.
Select the max id first, then update:
UPDATE table SET name='test_name'
WHERE id = (SELECT maxid FROM (SELECT MAX(id) AS maxid FROM table) AS t)
(The derived-table wrapper is needed because MySQL won't let you SELECT directly from the table you are updating.)
MySQL: fetch the previous or next record ordered by another field name, not by using ORDER BY id
select * from table where id > $id order by name asc limit 1
select * from table where id < $id order by name desc limit 1
I am able to get the next and previous records, but in this case how can I update the next and previous records?
ID Links orderID
14 Google.com 1
15 Yahoo.com 2
20 gmail.com 3
25 facebook.com 4
What if I use + and - buttons in front of each link to move it up or down, and then rearrange the menu ordered by orderID?
Well, if you really want to do it in a single query, you can use a subquery to find out the ID you need to update. The problem lies in the fact that MySQL cannot update the same table that you're trying to subquery, for obvious data integrity reasons. So you'll need to use some workarounds for that, such as creating a temporary table in a subquery.
UPDATE table AS t
SET [...]
WHERE t.`id` = (
    SELECT * FROM (
        SELECT `id` FROM table WHERE `id` > $id ORDER BY `id` ASC LIMIT 1
    ) AS sq
)
There is absolutely no need to do the two selects.
You can do the following (note that this only works if the ids are consecutive with no gaps, which is not the case in your example data):
UPDATE table SET field='some value' WHERE id=$id+1
UPDATE table SET field='some value' WHERE id=$id-1
There you go :)
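Another option for the + / - buttons is to swap the orderID values of the clicked row and its neighbour. This is only a sketch: the table name links is my guess from the question's columns, and it assumes orderID carries no UNIQUE index:
-- look up the orderID of the row being moved down and of the row just below it
SET @cur  = (SELECT orderID FROM links WHERE ID = $id);
SET @next = (SELECT MIN(orderID) FROM links WHERE orderID > @cur);
-- swap the two values; nothing happens if the row is already last (@next IS NULL)
UPDATE links
SET orderID = IF(orderID = @cur, @next, @cur)
WHERE orderID IN (@cur, @next) AND @next IS NOT NULL;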
There are about 1 million records.
The query is needed for a pagination system.
Query looks like this:
SELECT field1, field2, field3 FROM table
WHERE field4 = '$value'
ORDER BY field5 ASC limit $offset, 30;
There are indexes on field4 and field5.
All fields are varchar.
Field5 was also tested as an int, but no real improvement was noticed.
Right now the query with LIMIT 0, 30 takes about 1 second, but the query with LIMIT 119970, 30 takes about 20 seconds.
Is it realistic to get results in under 0.1 seconds on paginated pages? That kind of loading time is needed for the website to offer a good user experience.
EXPLAIN (select with limit 0,30)
id: 1
select_type: SIMPLE
table: table
type: index
possible_keys: NULL
key: field5
key_len: 768
ref: NULL
rows: 223636
Extra: Using where
You could try table partitioning:
http://dev.mysql.com/doc/refman/5.1/en/partitioning.html
Depending on how much your data is changing, you could use memcached to reduce the amount of times a certain query needs to get run and reduce the response time for most users.
It should have index "B-TREE (field4, field5)".
This index will allow MySQL to use field4 to find the records and limit them with order on field5.
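For example (the index name is arbitrary):
ALTER TABLE `table` ADD INDEX idx_field4_field5 (field4, field5);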
My suggestion is not to load all of the data for the pagination at once; instead, fire an AJAX query for each page when the user clicks it, so that each page only fetches the data for its own page number. Hope it helps, and good luck.
Does this table have a PK?
SELECT field1, field2, field3 FROM table AS a
JOIN (
    SELECT PK FROM table
    WHERE field4 = '$value'
    ORDER BY field5 ASC LIMIT $offset, 30
) AS b ON a.PK = b.PK;
The inner query can be satisfied from the index alone (given the (field4, field5) index suggested above), so skipping over the $offset rows is cheap and only the 30 rows actually returned need a full row lookup.
Have you considered adding another column (field6, say) that holds an indexed hash of field4? Searching numbers instead of text is going to be a lot faster, so the query becomes something like:
SELECT field1, field2, field3 FROM table FORCE INDEX (idx_field6)
WHERE field6 = '$hashvalue' AND field4 = '$value'
ORDER BY field5 ASC LIMIT $offset, 30;
It should eliminate 99.99% of the data before any text comparison has to happen, and should speed up your queries regardless of the offset...
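A minimal sketch of how that hash column could be set up, using CRC32() as the hash; the column and index names are only examples, and new rows would also need field6 populated on INSERT (or via a trigger):
ALTER TABLE `table`
    ADD COLUMN field6 INT UNSIGNED,
    ADD INDEX idx_field6 (field6);
UPDATE `table` SET field6 = CRC32(field4);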
PHP
$total_points = mysql_result(mysql_query("SELECT COUNT(*) as Num FROM account WHERE id='$id'"),0)
MySQL account table
|user_id|
MySQL points table
|id | user_id | points |
or
PHP
$total_points = mysql_query("SELECT points FROM account WHERE id='$id'");
MySQL account table
|user_id| points |
MySQL points table
|id | user_id | points |
Storing the variable would probably be faster, but that also means constantly updating it, which can be less efficient / slower in the long run.
Using COUNT(id) will be much better than COUNT(*), however. So my vote would be to use COUNT(id) :)
First off, for the first line you had, I believe it's faster to use FOUND_ROWS() than COUNT(*). FOUND_ROWS() reports the row count of the previous SELECT on the same connection, so it takes two queries:
$result = mysql_query("SELECT SQL_CALC_FOUND_ROWS * FROM account WHERE id='$id'");
$total_points = mysql_result(mysql_query("SELECT FOUND_ROWS()"), 0);
The second approach would be faster once that points table grows large, but you need to make sure you increment the account table properly so those values are in sync. It could become inaccurate if you forget to add the points in some places, or forget to delete them, etc.
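For example, every time a point is awarded you would do both writes together (table and column names taken from the question's schema; wrap them in a transaction if your storage engine supports it):
INSERT INTO points (user_id, points) VALUES ('$id', 1);
UPDATE account SET points = points + 1 WHERE user_id = '$id';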
The COUNT(*) query version should be faster because you are not going further down to mysql_num_rows. For counting you don't need all the fields (*); you can simply do COUNT(fieldID), which can be a little faster.
A few points to note:
Note that you are only getting one row anyway, because you are using a WHERE clause on the id; in other words, the result will be either one row, or no row if nothing is found:
$total_points = mysql_result(mysql_query("SELECT COUNT(*) as Num FROM account WHERE id='$id'"),0)
Normally you count when you are expecting more than one record, e.g.:
$total_points = mysql_result(mysql_query("SELECT COUNT(*) as Num FROM account"),0)
Again, for a more optimized query, count a single field:
$total_points = mysql_result(mysql_query("SELECT COUNT(fieldID) as Num FROM account"),0)