I'm trying to add page numbers to my table, and I need to know the row count.
For example, my query is SELECT * FROM tbl and the result is 100 rows.
I need to limit the query to 10 rows per page,
but if I do that I can't get the total row count.
I'd also like to know whether there is any difference in processing speed between limiting the rows in MySQL and in PHP.
Please tell me if you have any ideas.
You should use LIMIT in MySQL. Why?
*Because it takes less time to fetch and transmit a small amount of data than to fetch everything.
*Because it consumes less memory.
*Because working with smaller arrays is generally faster.
Personally, I use PHP as a filter only when I cannot perform the filtering in MySQL.
Just to update my answer, as Joachim Isaksson already posted:
SQL_CALC_FOUND_ROWS will help you count all the rows, so you can perform a correct pagination.
The best way is probably to use FOUND_ROWS() and combine them both;
SELECT SQL_CALC_FOUND_ROWS * FROM tbl LIMIT 10
...to get the first 10 rows as a result. Then you can just do;
SELECT FOUND_ROWS()
...to get the total number of rows you would have got if you hadn't used LIMIT.
What happens when you have 100k rows? You will still get the row count, but performance will decrease significantly (and memory usage will increase significantly). Just use a second query to obtain the row count, and limit the rows in MySQL.
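The two-query approach can be sketched like this, using Python's sqlite3 as a stand-in for MySQL (table name and data are made up; the same COUNT/LIMIT pattern applies unchanged with a MySQL driver):

```python
import sqlite3

# In-memory demo table with 100 rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO tbl (name) VALUES (?)",
                 [(f"row{i}",) for i in range(100)])

page, per_page = 1, 10

# Query 1: total row count, for rendering the page numbers.
total = conn.execute("SELECT COUNT(*) FROM tbl").fetchone()[0]

# Query 2: only the rows for the current page.
rows = conn.execute("SELECT * FROM tbl ORDER BY id LIMIT ? OFFSET ?",
                    (per_page, (page - 1) * per_page)).fetchall()

print(total, len(rows))  # 100 10
```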
As I understand your problem, the solution might look like this (strictly in MySQL):
Use two queries: the first to count the total records,
the second to get the results.
select count(1) as total from table_name
select col1,col2.. from table_name limit $start , $slot_size
Here $slot_size is the number of records you want to show per page, say 10,
and $start is a value that changes according to the page.
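The $start arithmetic above, sketched in Python for illustration (the function name is made up):

```python
def start_offset(page, slot_size=10):
    """Offset for `LIMIT start, slot_size`: page 1 -> 0, page 2 -> 10, ..."""
    return (page - 1) * slot_size

print(start_offset(1))      # 0
print(start_offset(2))      # 10
print(start_offset(5, 20))  # 80
```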
Related
I have a very large database (~10 million rows) and I want to list these rows in a table as fast as possible. I have a few options:
I can limit the rows in MySQL - not preferred, as I want to count the rows with a specific type of data, say attachments.
Fetch all rows and use a while loop to show 1000 records at a time - I think this would work, but loading 10 million rows into memory looks insane, and I am quite sure it would perform worse.
Count the total rows and then list using LIMIT - but MySQL's count is a deal breaker; despite a unique, indexed id, I have had a bad time with MySQL count.
What is the best way to do this?
If I just want to list 10 million rows, is it a bad idea to parse the data with PHP, stopping to display 1000 rows at a time?
There are some things to consider:
Is the database optimized? If yes, skip this step.
Index the columns you want to filter the search on.
Select only the columns you require (instead of SELECT *).
If you want to count the total and the id is sequential, you can select the latest row and count based on the id, if counting really is 'that slow'.
If you're looking at some sort of pagination, you can count the rows and select only a few records based on user input (SELECT with LIMIT 1000, skip 1000 when it's page 2, etc.).
You wouldn't want 10 million rows in memory when you'd be using 0.1% of them, right?
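A minimal sketch of the checklist above (index the filtered column, select only the needed columns, paginate), using sqlite3 as a stand-in for MySQL; all table, column, and index names here are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE msgs (id INTEGER PRIMARY KEY, type TEXT, body TEXT)")
conn.executemany(
    "INSERT INTO msgs (type, body) VALUES (?, ?)",
    [("attachment" if i % 3 == 0 else "text", f"body {i}") for i in range(10_000)],
)

# Index the column used in the WHERE clause.
conn.execute("CREATE INDEX idx_msgs_type ON msgs (type)")

# Counting only the rows of a specific type uses the index.
total = conn.execute("SELECT COUNT(*) FROM msgs WHERE type = ?",
                     ("attachment",)).fetchone()[0]

page, per_page = 2, 1000
# Select only the columns you need, never SELECT *.
rows = conn.execute(
    "SELECT id, body FROM msgs WHERE type = ? LIMIT ? OFFSET ?",
    ("attachment", per_page, (page - 1) * per_page),
).fetchall()
print(total, len(rows))  # 3334 1000
```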
I am using the ActiveRecord library that comes with the PHP framework CodeIgniter. I frequently find myself needing to query the database for a number of rows using a LIMIT clause, but also needing to know the total number of rows I would have pulled had I not included the LIMIT clause. This is frequently the case when I provide pagination for many results: I only want to pull 20 records at a time, but I also need to know how many rows in total match my query's WHERE clause. I need to create two slightly different queries, a count query:
SELECT * FROM table WHERE [where_clause];
And a 'paged' query:
SELECT * FROM table WHERE [where_clause] LIMIT 0,20;
Is there an elegant solution to this problem with ActiveRecord? There doesn't seem to be anything out of the box which will help me. Obviously I can write around the problem with my own PHP but it would be ideal if I could take advantage of some aspect of the library to not have to duplicate code, etc.
To answer my own question: the best way is to use SQL_CALC_FOUND_ROWS, like so. Add this select call to your query:
$this->db->select('SQL_CALC_FOUND_ROWS *',false);
And then immediately afterwards to get the count:
$count = $this->db->query("SELECT FOUND_ROWS() AS count")->row('count');
I am making a simple message board in PHP with a MySQL database. I have limited messages to 20 a page with the 'LIMIT' operation.
An example of my URL is: http://www.example.com/?page=1
If page is not specified, it defaults to 1. As I mentioned earlier, there is a limit of 20 per page, however if that is out of a possible 30 and I wish to view page 2, I only end up with 10 results. In this case, the LIMIT part of my query resembles LIMIT 20,40 - How can I ensure in this case that 20 are returned?
I would prefer to try and keep this as much on the MySQL side as possible.
EDIT:
To clarify, if I am on page 2, I will be fetching rows 20-30, however this is only 10 rows, so I wish to select 10-30 instead.
EDIT:
I am currently using the following queries:
SELECT MOD(COUNT(`ID`),20) AS lmt FROM `msg_messages` WHERE `threadID`=2;
SELECT * FROM `msg_messages` WHERE `threadID`=2 LIMIT 20-(20-lmt) , 40-(20-lmt) ;
There are 30 records that match. (Note that MySQL's LIMIT clause only accepts literal integers, or placeholders in a prepared statement, not expressions or column references, so the arithmetic has to be done before the query is built.)
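The adjustment described above (sliding the window back so a partial last page still returns a full 20 rows) is easier to compute in application code than inside the LIMIT clause itself. A minimal sketch in Python, for illustration only (the function name is made up):

```python
def page_window(total_rows, page, page_size=20):
    """Return (offset, count) for LIMIT, sliding the last page back
    so it still contains page_size rows when enough rows exist."""
    offset = (page - 1) * page_size
    if offset + page_size > total_rows:
        offset = max(0, total_rows - page_size)
    return offset, page_size

print(page_window(30, 1))  # (0, 20)
print(page_window(30, 2))  # (10, 20) -> rows 10-30, as the question wants
```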
I'm not sure I really understand the question, but if I do, I think the best practice would be to prevent users from going to a page with no results. To do so, you can easily check how many rows you have in total, even when using the LIMIT clause, with SQL_CALC_FOUND_ROWS.
For example you could do:
Select SQL_CALC_FOUND_ROWS * from blog_posts where blablabla...
Then you have to run another query like this:
Select FOUND_ROWS() as posts_count
It will return the total rows very quickly. With this result and knowing the current page, you can decide if you display next/prev links to the user.
You could do a:
SELECT COUNT(*)/20 AS pages FROM tbl;
...to get the max number of pages, then work out whether you're going to be left with a partial set and adjust your paging query accordingly.
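Since COUNT(*)/20 yields a fraction whenever the last page is partial, you want the ceiling so that partial page still counts as a page. In application code, for example:

```python
import math

def page_count(total_rows, page_size=20):
    """Number of pages needed, counting a partial last page as a full page."""
    return math.ceil(total_rows / page_size)

print(page_count(100))  # 5
print(page_count(101))  # 6
print(page_count(30))   # 2
```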
I have a file that goes through a large data set and displays the rows in a paginated manner. The dataset contains about 210k rows, which isn't even that much; it will grow to 3M+ in a few weeks, but it's already slow.
I have a first query that gets the total number of items in the DB for a particular WHERE clause combination, the most basic one looks like this:
SELECT count(v_id) as num_items FROM versions
WHERE v_status = 1
It takes 0.9 seconds to run.
The second query is a LIMIT query that gets the actual data for that page. This query is really quick (less than 0.001 s).
SELECT
v_id,
v_title,
v_desc
FROM versions
WHERE v_status = 1
ORDER BY v_dateadded DESC
LIMIT 0, 25
There is an index on v_status, v_dateadded
I use PHP. I cache the result into memcached, so subsequent requests are really fast, but the first request is laggy. Especially once I throw a fulltext search in there, it starts taking 2-3 seconds for the two queries.
I don't think this is the cause, but try making it COUNT(*); I think COUNT(x) has to go through every row and count only the ones that don't have a NULL value (so it has to examine all the rows).
Given that v_id is a PRIMARY KEY it should not have any NULLs, so try COUNT(*) instead...
But I don't think it will help, since you have a WHERE clause.
Not sure if this is the same for MySQL, but in MS SQL Server COUNT(*) is almost always faster than COUNT(column). The parser determines the fastest column to count and uses that.
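The COUNT(*) vs. COUNT(column) difference discussed above is about NULL handling, not just speed; a quick demonstration with sqlite3 standing in for MySQL (the schema is invented to mirror the question's table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE versions (v_id INTEGER PRIMARY KEY, v_title TEXT)")
conn.executemany("INSERT INTO versions (v_title) VALUES (?)",
                 [("a",), (None,), ("c",)])

# COUNT(*) counts all rows; COUNT(column) skips rows where the column is NULL.
count_star = conn.execute("SELECT COUNT(*) FROM versions").fetchone()[0]
count_col = conn.execute("SELECT COUNT(v_title) FROM versions").fetchone()[0]
print(count_star, count_col)  # 3 2
```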
Run an explain plan to see how the optimizer is running your queries.
That'll probably tell you what Andreas Rehm told you: you'll want to add indices that cover your where clauses.
EDIT: For me FOUND_ROWS() was the fastest way of doing this:
SELECT
SQL_CALC_FOUND_ROWS
v_id,
v_title,
v_desc
FROM versions
WHERE v_status = 1
ORDER BY v_dateadded DESC
LIMIT 0, 25;
Then in a secondary query just do:
SELECT FOUND_ROWS();
If you are outputting to PHP you do this:
$totalnumber = mysql_result(mysql_query($secondquery), 0, 0);
I was previously trying to do the same thing as the OP, putting COUNT(column) on the first query, but it took about three times longer than even the slowest WHERE and ORDER BY query that I could run (with a LIMIT set). I tried changing to COUNT(*) and it improved a lot. But results in my case were even better using MySQL's FOUND_ROWS().
I am testing in PHP with microtime and repeating the query. In the OP's case, if he ran COUNT(*) I think he would save some time, but it is not the fastest way of doing this. I ran some tests on COUNT(*) vs. FOUND_ROWS(), and FOUND_ROWS() is quite a bit faster.
Using FOUND_ROWS() was nearly twice as fast in my case.
I first started doing EXPLAIN on the COUNT(*) query. In OP's case you would see that MySQL still checks a total of 210k rows in the first query. It checks every row before even starting the LIMIT query and doesn't seem to get any performance benefit from doing this.
If you run EXPLAIN on the LIMIT query it will probably check less than 100 rows as you have limited the results to 25. But this is still overlap and there will be some cases where you can't afford this or at the least you should still compare performance with FOUND_ROWS().
I thought this might only save time on large LIMIT requests, but when I ran EXPLAIN on my LIMIT query it was actually only checking 25 rows to get 15 values. However, there was still a very noticeable difference in query time: on average I got down from 0.25 to 0.14 seconds with the same results.
For my application most of my SQL queries return a specified number of rows. I'd also like to get the maximum possible number of results i.e. how many rows would be returned if I wasn't setting a LIMIT.
Is there a more efficient way to do this (using just SQL?) than returning all the results, getting the size of the result set and then splicing the set to return just the first N rows.
You can use SELECT COUNT(*), but this isn't ideal for large data sets.
A more efficient solution is to use SQL_CALC_FOUND_ROWS and FOUND_ROWS():
http://dev.mysql.com/doc/refman/5.0/en/information-functions.html#function_found-rows
First query:
SELECT SQL_CALC_FOUND_ROWS id, name, etc FROM table LIMIT 10;
Second query:
SELECT FOUND_ROWS();
You'll still need two queries, but you run the main query once, saving resources.
I'd also like to get the maximum possible number of results i.e. how many rows would be returned if I wasn't setting a LIMIT.
Use:
SELECT COUNT(*)
FROM YOUR_TABLE
...to get the number of rows that currently exist in YOUR_TABLE.
Is there a more efficient way to do this (using just SQL?) than returning all the results, getting the size of the result set and then splicing the set to return just the first N rows.
Only fetch the rows/information you need.
Getting everything means a lot of data going over the wire that likely won't be used at all. It also means caching data in the application, and cached data can get stale because it isn't fresh from the database.
This sounds like pagination...
To get the total number of rows you could use SELECT COUNT(*). Is that what you are looking for?