mysql auto-increment - reassign after a record has been deleted - php

I'm using auto-increment to assign an id to every new entry in my database.
I've also got a PHP script which selects 10 entries from this database, according to the current page number.
For example, if the current page is 2, the script should select every entry between id=20 and id=29.
Now, if I delete an entry, the id is lost. Therefore the script will only show 9 entries when id=28 has been deleted.
Is there a way to reassign the auto-increment values after a record has been deleted?

This is usually considered desirable: if entry number 28 is deleted, there will never again be an entry 28. If someone ever tries to refer to it, they're clearly trying to refer to the one that's been deleted, and you can report an error.
If you went back and reassigned 28 to some new entry, now you have no idea whether the person meant the old 28 or the new record.
Let's take a step back and revisit what you want to do. Your goal is not to show ten entries between 20 and 30. Your goal is to show ten entries that meet your criteria. For that, you can use the built-in LIMIT and OFFSET terms:
SELECT ...
FROM ...
WHERE ...
LIMIT 10
OFFSET 30
The OFFSET tells MySQL how many rows to skip and LIMIT tells it how many to return. MySQL will put together the whole result set, skip the first 30 rows, and give you the next 10. That's your fourth page of results. It does not remotely matter what ids those rows happen to have.

You should change the way you select entries for a page.
Using the LIMIT .. OFFSET clause, you can select 10 entries starting after the Nth entry:
SELECT *
FROM table
ORDER BY id
LIMIT 10
OFFSET 10
LIMIT 10 means return at most 10 rows.
OFFSET 10 means skip the first 10 rows.
This way it doesn't matter whether some ids have been removed and the ids are no longer sequential.
Just change the OFFSET:
Page 1 => OFFSET 0
Page 2 => OFFSET 10
Page 3 => OFFSET 20
Page N => OFFSET (N-1)*10
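The offset-based approach above can be sketched as follows. This is an illustration using Python and SQLite in place of PHP and MySQL; the table name `entries` and the page size are assumptions, not from the original thread.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE entries (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)")
conn.executemany("INSERT INTO entries (name) VALUES (?)",
                 [(f"entry {i}",) for i in range(1, 31)])
conn.execute("DELETE FROM entries WHERE id = 15")  # leave a gap in the id sequence

PER_PAGE = 10

def fetch_page(page):
    # OFFSET (page - 1) * PER_PAGE skips the rows of the earlier pages.
    # Rows are counted by position, so gaps in the id sequence don't matter.
    offset = (page - 1) * PER_PAGE
    cur = conn.execute(
        "SELECT id, name FROM entries ORDER BY id LIMIT ? OFFSET ?",
        (PER_PAGE, offset))
    return cur.fetchall()

page2 = fetch_page(2)
print(len(page2))   # still a full page of 10 rows despite the deleted id
print(page2[0])     # (11, 'entry 11')
```

Note that page 2 still contains exactly 10 rows even though id 15 was deleted, because LIMIT/OFFSET count positions, not id values.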

Your question suggests a query like
SELECT columns FROM table WHERE id BETWEEN 20 AND 29;
or, less elegantly,
SELECT columns FROM table WHERE id >= 20 AND id <= 29;
If so, I suggest you read up on the LIMIT clause and do something along the lines of
SELECT columns FROM table ORDER BY id LIMIT 20, 10;

You can, but you shouldn't. If id=29 exists and you reset the auto-increment to 28, there will be problems when the auto-increment wants to use id=29 but a record with that id already exists.
You'd be better off writing a query like so:
select * from table order by id LIMIT n, 10;
where n is (page number - 1) * 10

Reassigning auto-increment values is bad practice, because the id may be used in relations with other tables. Use the OFFSET / LIMIT construction and forget about reassigning auto-increment values :)

As a direct answer to your question (auto_increment), you can change its value with the following query:
ALTER TABLE `your_table` AUTO_INCREMENT = 200
The next created item would get 200 as its AUTO_INCREMENT column value. This is useful in some cases, although I agree you would be better off using LIMIT with an offset.
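For illustration, here is the same counter-reset idea in SQLite, where the AUTOINCREMENT counter lives in the internal sqlite_sequence table rather than being set via ALTER TABLE as in MySQL. The table name `your_table` echoes the query above; everything else is an assumption for the demo.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE your_table (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)")
conn.execute("INSERT INTO your_table (name) VALUES ('first')")   # gets id 1

# Bump the counter so the next insert starts at 200 (SQLite's analogue of
# MySQL's ALTER TABLE `your_table` AUTO_INCREMENT = 200).
conn.execute("UPDATE sqlite_sequence SET seq = 199 WHERE name = 'your_table'")
conn.execute("INSERT INTO your_table (name) VALUES ('second')")

rows = conn.execute("SELECT id, name FROM your_table ORDER BY id").fetchall()
print(rows)  # [(1, 'first'), (200, 'second')]
```

The counter can only be moved forward safely; moving it below an existing id invites the duplicate-key collision described in the answer above.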

Related

Mariadb: Pagination using OFFSET and LIMIT is skipping one row

I have this MariaDB table:
id, name
The id column has these attributes: Primary, auto_increment, unique.
The table has 40,000 rows.
I'm using PHP & MariaDB to load rows from this table.
This is the PHP code:
$get_rows = $conn->prepare("SELECT * FROM my_table where id> 0 ORDER BY id ASC LIMIT 30 OFFSET ?");
$get_rows->bind_param('i', $offset);
//etc.
The query returned everything correctly the first time, but the next query (made through AJAX) returned the next 30 rows with a gap of one row between the two result sets. And this goes on and on.
In the table, the row #1 had been deleted. So, I restored it, and now the query works. However, I will definitely have to delete more rows in the future. (I don't have the option of soft-deleting).
Is there any way I can keep deleting rows, and have these queries return correct results (without skipping any row)?
EDIT
Here's an example of the range of the ids in the first 2 queries:
Query 1:
247--276
Query 2:
278--307
(277 is missing)
NB I asked ChatGPT, but it couldn't help. :')
LIMIT and OFFSET query rows by position, not by value. So if you deleted a row in the first "page," then the position of all subsequent rows moves down by one.
One solution to ensure you don't miss a row is to define pages by the greatest id value on the preceding page, instead of by the offset.
$get_rows = $conn->prepare("
SELECT * FROM my_table WHERE id > ?
ORDER BY id ASC LIMIT 30");
$get_rows->bind_param('i', $lastId);
This only works if your previous query viewed the preceding page, so you can save the value of the last id in that page.
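The keyset approach described above can be sketched like this, again using Python and SQLite as a stand-in for the PHP/MariaDB code; the table and column names follow the question, the row counts are assumptions.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE my_table (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO my_table VALUES (?, ?)",
                 [(i, f"row {i}") for i in range(1, 101)])

def fetch_after(last_id, limit=30):
    # Pages are defined by the last id seen, not by an offset, so deleting
    # earlier rows can never shift the page boundaries.
    cur = conn.execute(
        "SELECT id, name FROM my_table WHERE id > ? ORDER BY id ASC LIMIT ?",
        (last_id, limit))
    return cur.fetchall()

page1 = fetch_after(0)
conn.execute("DELETE FROM my_table WHERE id = 1")  # delete a row from page 1

# The next page starts right after the last id of the previous page,
# so no row is skipped even though the positions of all rows shifted.
page2 = fetch_after(page1[-1][0])
print(page1[-1][0], page2[0][0])  # 30 31
```

With OFFSET-based paging, the same deletion would have shifted every subsequent row up by one position and a row would have been skipped, which is exactly the symptom in the question.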

From a table get the last 100 rows in mysql

I want to retrieve the last 100 values. I've used this query:
SELECT * FROM values WHERE ID BETWEEN max(ID)-100 and max(ID);
but I receive this message:
ERROR 1111 (HY000): Invalid use of group function
Order by the ID in descending order and take only the first 100 records of the result
SELECT * FROM values
order by id desc
limit 100
This is the more reliable version, since there can be gaps in the ID sequence, which would make your query inaccurate (besides it being syntactically invalid).
Your question is not very clear.
What are the last 100 values? The last 100 ids inserted? Or the last 100 rows updated?
Assuming that you are looking for the last 100 rows inserted, your approach has issues. First, know that ids are not necessarily committed to the DB sequentially.
For example, the ID of a row can be 5 at some time and the ID of a row inserted later can be 4. How this happens is beyond the scope here; just know that it is possible.
Coming to the solution, just do
SELECT * FROM `values` ORDER BY id DESC LIMIT 100;
(Note that SELECT TOP 100 is SQL Server syntax; MySQL uses LIMIT. Also, values is a reserved word, so the table name needs backticks.)
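A common follow-up is wanting those last 100 rows back in ascending order; a subquery handles that. Below is a sketch in Python/SQLite; the table is named `vals` here as an assumption, since `values` is a reserved word.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE vals (id INTEGER PRIMARY KEY, v TEXT)")
conn.executemany("INSERT INTO vals VALUES (?, ?)",
                 [(i, f"v{i}") for i in range(1, 251)])

# The inner query picks the newest 100 ids; the outer one restores
# ascending order for display.
last_100 = conn.execute("""
    SELECT * FROM (
        SELECT id, v FROM vals ORDER BY id DESC LIMIT 100
    ) ORDER BY id ASC
""").fetchall()
print(last_100[0][0], last_100[-1][0])  # 151 250
```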

Best way to have random order of elements in the table

I have a table with 300,000 rows. There is a specially dedicated field (called order_number) in this table to store a number, which is later used to present the data from this table ordered by the order_number field. What is the best and easiest way to assign a random number/hash to each of the records in order to select the records ordered by these numbers? The number of rows in the table is not stable and may grow to 1,000,000, so the rand method should take that into account.
Look at this tutorial on selecting random rows from a table.
If you don't want to use MySQL's built-in RAND() function, you could use something like this:
select max(id) from table;
-- pick $random_number between 1 and max(id) in your application
select * from table where id >= $random_number limit 1;
That should be a lot quicker.
UPDATE table SET order_number = SHA2(id, 256)
or
UPDATE table SET order_number = RAND()
(SHA2() in MySQL takes a second argument giving the hash length. It produces a fixed, well-distributed ordering derived from the id, while RAND() assigns a fresh random value to each row.)
I know you've got enough answers, but I'll tell you how we did it in our company.
The first approach we use is an additional column storing a random number generated for each record/row. We have an INDEX on this column, allowing us to order records by it.
id, name , ordering
1 , zlovic , 14
2 , silvas , 8
3 , jouzel , 59
SELECT * FROM table ORDER BY ordering ASC/DESC
PROS: you have an index, and ordering is very fast
CONS: you depend on new records to keep the randomization of the records
The second approach we have used is what Karl Roos gave in his answer. We retrieve the number of records in our database and, using > (greater than) and some math, we retrieve randomized rows. We work with binary ids, so we need to keep an auto-increment column to avoid random writes in InnoDB; sometimes we perform two or more queries to retrieve all of the data, keeping it randomized enough. (If you need 30 random items from 1,000,000 records, you can run 3 simple SELECTs, each for 10 items, with different offsets.)
Hope this helps you. :)
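The first approach (a dedicated, indexed random-ordering column) can be sketched as below, using Python/SQLite for illustration; the table name `items` and the use of a REAL column for the random value are assumptions.

```python
import random
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT, ordering REAL)")
# Each row stores a random number at insert time...
conn.executemany(
    "INSERT INTO items VALUES (?, ?, ?)",
    [(i, f"item {i}", random.random()) for i in range(1, 11)])
# ...and an index on that column makes ordered reads cheap.
conn.execute("CREATE INDEX idx_ordering ON items (ordering)")

shuffled = conn.execute("SELECT id FROM items ORDER BY ordering").fetchall()
print([r[0] for r in shuffled])  # all ids, in a stable shuffled order
```

Every read of this query returns the same shuffled order until the ordering column is regenerated, which is both the PRO (fast, index-backed) and the CON (new rows need their own random value) mentioned above.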

Select is much slower when selecting latest records

A table with about 70K records is displayed on a site, showing 50 records per page.
Pagination is done with limit offset,50 on the query, and the records can be ordered on different columns.
Browsing the latest pages (so the offset is around 60,000) makes the queries much slower than when browsing the first pages (about 10x)
Is this an issue of using the limit command?
Are there other ways to get the same results?
With large offsets, MySQL needs to browse more records.
Even if the plan uses filesort (which means that all records should be browsed), MySQL optimizes it so that only $offset + $limit top records are sorted, which makes it much more efficient for lower values of $offset.
The typical solution is to index the columns you are ordering on, record the last value of the columns and reuse it in the subsequent queries, like this:
SELECT *
FROM mytable
ORDER BY
value, id
LIMIT 0, 10
which outputs:
value id
1 234
3 57
4 186
5 457
6 367
8 681
10 366
13 26
15 765
17 345 -- this is the last one
To get to the next page, you would use:
SELECT *
FROM mytable
WHERE (value, id) > (17, 345)
ORDER BY
value, id
LIMIT 0, 10
, which uses the index on (value, id).
Of course this won't help with arbitrary access pages, but helps with sequential browsing.
Also, MySQL has certain issues with late row lookup. If the columns are indexed, it may be worth trying to rewrite your query like this:
SELECT *
FROM (
SELECT id
FROM mytable
ORDER BY
value, id
LIMIT $offset, $limit
) q
JOIN mytable m
ON m.id = q.id
See this article for more detailed explanations:
MySQL ORDER BY / LIMIT performance: late row lookups
It's how MySQL deals with limits. If it can sort on an index (and the query is simple enough), it can stop searching after finding the first offset + limit rows. So LIMIT 0, 10 means that, if the query is simple enough, it may only need to scan 10 rows. But LIMIT 1000, 10 means that at minimum it needs to scan 1010 rows. The actual number of rows that need to be scanned depends on a host of other factors, but the point is that the lower the offset + limit, the lower the lower bound on the number of rows that need to be scanned.
As for workarounds, I would optimize your queries so that the query itself, without the LIMIT clause, is as efficient as possible. EXPLAIN is your friend in this case...
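The late-row-lookup rewrite from the answer above can be sketched like this, with Python/SQLite standing in for MySQL; the table name `mytable` and columns follow the answer, while the row counts and the wide `payload` column are assumptions made to illustrate why deferring the lookup helps.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (id INTEGER PRIMARY KEY, value INTEGER, payload TEXT)")
conn.executemany("INSERT INTO mytable VALUES (?, ?, ?)",
                 [(i, i % 50, "x" * 100) for i in range(1, 1001)])
conn.execute("CREATE INDEX idx_value_id ON mytable (value, id)")

offset, limit = 900, 10
# The inner query can be satisfied from the narrow (value, id) index alone;
# only the 10 surviving ids trigger a lookup of the wide row via the join.
rows = conn.execute("""
    SELECT m.id, m.value
    FROM (
        SELECT id FROM mytable ORDER BY value, id LIMIT ? OFFSET ?
    ) q
    JOIN mytable m ON m.id = q.id
    ORDER BY m.value, m.id
""", (limit, offset)).fetchall()
print(len(rows))  # 10
```

The win comes from skipping 900 rows in the small index instead of in the full table; the full-width rows are fetched only for the final page.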

Is it possible to have 2 limits in a MySQL query?

Ok, here is the situation (using PHP/MySQL): you are getting results from a large MySQL table.
Let's say your MySQL query returns 10,000 matching results and you have a paging script to show 20 results per page. Your query might look like this:
So page 1 query
SELECT column
FROM table_name
WHERE userId=1
AND somethingelse=something else
LIMIT 0,20
So page 2 query
SELECT column
FROM table_name
WHERE userId=1
AND somethingelse=something else
LIMIT 20,40
Now you are grabbing 20 results at a time, but there are a total of 10,000 rows that match your search criteria.
How can you return only 3,000 of the 10,000 results and still do your paging of 20 per page with a LIMIT 20 in your query?
I thought this was impossible, but Myspace does it on their browse page somehow. I know they aren't using PHP/MySQL, but how can it be achieved?
UPDATE
I see some people have replied with a couple of methods; it seems none of these would actually improve the performance by limiting the number to 3,000.
Program your PHP so that when it finds itself ready to issue a query that ends with LIMIT 3000, 20 or higher, it would just stop and don't issue the query.
Or I am missing something?
Update:
MySQL treats the LIMIT clause nicely.
Unless you have SQL_CALC_FOUND_ROWS in your query, MySQL just stops parsing results, sorting etc. as soon as it finds enough records to satisfy your query.
When you have something like that:
SELECT column
FROM table_name
WHERE userId=1
AND somethingelse='something else'
LIMIT 0, 20
, MySQL will fetch the first 20 records that satisfy the criteria and stop.
It doesn't matter how many records match the criteria: 50 or 1,000,000, performance will be the same.
If you add an ORDER BY to your query and don't have an index, then MySQL will of course need to browse all the records to find out the first 20.
However, even in this case it will not sort all 10,000: it will have a "running window" of top 20 records and sort only within this window as soon as it finds a record with value large (or small) enough to get into the window.
This is much faster than sorting the whole myriad.
MySQL, however, is not good at pipelining recordsets. This means that this query:
SELECT column
FROM (
SELECT column
FROM table_name
WHERE userId=1
AND somethingelse='something else'
LIMIT 3000
)
LIMIT 0, 20
is worse performance-wise than the first one.
MySQL will fetch 3,000 records, cache them in a temporary table (or in memory) and apply the outer LIMIT only after that.
Firstly, the LIMIT parameters are offset and number of records, so the second parameter should always be 20 - you don't need to increment it.
Surely, if you know the upper limit of rows you want to retrieve, you can just put this into the logic which runs the query, i.e. check that offset + limit <= 3000.
As Sohnee said, or (depending on your requirements) you can get all the 3000 records by SQL and then use array_slice in php to get chunks of the array.
You could achieve this with a subquery...
SELECT name FROM (
SELECT name FROM tblname LIMIT 0, 3000
) `Results` LIMIT 20, 20
Or with a temporary table, whereby you select all 3000 rows into a temp table, then page by the temporary row id, which will be sequential.
You can specify the LIMIT as a function of the page number (20*(p-1), 20) in your PHP code, and limit the value of the page number to 150.
Or you could get all 3,000 records and then, using jQuery tabs, split them into 20 records per page.
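The "check the window before issuing the query" idea from the answers above can be sketched as a small helper; the page size of 20 and the 3,000-row cap come from the question, the function name is an assumption.

```python
PER_PAGE = 20
MAX_ROWS = 3000

def page_window(page):
    """Return (offset, limit) for a 1-based page number, or None if the
    requested window would extend past the MAX_ROWS cap."""
    offset = (page - 1) * PER_PAGE
    if offset + PER_PAGE > MAX_ROWS:
        return None  # beyond the allowed 3,000 rows: don't run the query
    return offset, PER_PAGE

print(page_window(1))    # (0, 20)
print(page_window(150))  # (2980, 20) -- the last allowed page
print(page_window(151))  # None
```

This caps the result set in application logic with zero query overhead, which is why it performs better than wrapping the query in a LIMIT 3000 subquery that MySQL would have to materialize first.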
