Imagine you have a table with 1 million records and you want to cap the search results at, say, 10,000 and no more than that. So what do I use for that? Well, the answer is: use the LIMIT clause.
Example:
select recid from mytable order by recid asc limit 10000
This is going to give me the first 10,000 records entered into this table.
So far no paging.
But the LIMIT clause is already in use.
That takes the question to the next level.
What if I want to page through this particular record set 100 records at a time? Since the LIMIT clause is already part of the original query, how do I use it again, this time to take care of the paging?
If the original query did not have a LIMIT clause to begin with, I'd adjust it as limit 0,100, then limit 100,100, then limit 200,100 and so on as the paging takes its course. But as it stands, I cannot.
You'd almost think you want to use two LIMIT clauses one after the other - which is not going to work:
limit 10000 limit 0,100
That would for sure error out.
So what's the solution in this particular case?
[EDIT]
After I posted this question, I realized that the limit 10000 in the original query becomes meaningless once the paging routine kicks in. I somehow confused myself and thought that order by recid asc limit 10000 in its entirety was part of the WHERE clause. In reality, the limit 10000 portion has no bearing on the recordset content - other than confining the recordset to the requested limit. So there is no point in keeping the limit 10000 once the paging starts. I'm sorry for wasting your time with this. :(
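In other words, once the paging starts, each page simply supplies its own LIMIT. A minimal sketch of what the EDIT describes, using the table and column from the question:
// Sketch: the original "limit 10000" is dropped; each page supplies
// its own offset/count pair instead
$perpage = 100;
$page = 0;   // zero-based page number
$sql = "select recid from mytable order by recid asc "
     . "limit " . ($page * $perpage) . "," . $perpage;
// page 0 -> limit 0,100; page 1 -> limit 100,100; and so on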
I'd say get rid of the first limit. Then either don't bother doing a count of the table, or take the lesser of the count and your limit (i.e. 10,000), and do the pagination based on that.
i.e.
$perpage = 100;
$pages = ceil($totalcount / $perpage);   // total number of pages
$page = isset($_GET['Page']) ? (int)$_GET['Page'] : 0;
if($page >= $pages || $page < 0)
{
    $page = 0;   // fall back to the first page
}
$limit = "LIMIT " . ($page * $perpage) . ", " . $perpage;
To calculate totalcount, do
SELECT COUNT(*) FROM mytable
then check it against your limit, i.e.
if($totalcount > 10000)
{
$totalcount = 10000;
}
The reason to do a dedicated count query is that it requires very little DB-to-PHP data transfer, and many DBMSs can optimize the crap out of it compared to a full table SELECT query.
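Putting the pieces together, roughly (a sketch; mytable and recid come from the question):
// Sketch: count, cap at 10,000, then run the paged query
$result = mysql_query("SELECT COUNT(*) FROM mytable");
$row = mysql_fetch_row($result);
$totalcount = min((int)$row[0], 10000);   // lesser of the count and the cap
// ... compute $limit as shown above ...
$result = mysql_query("SELECT recid FROM mytable ORDER BY recid ASC $limit");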
LIMIT can have two arguments, the first being the offset and the second being how many records to return. So
LIMIT 5,10
Will skip the first 5 records then fetch the next 10.
You will have to set your limit based on the current page. Something like
LIMIT (CURRENT_PAGE -1) * PAGE_SIZE , PAGE_SIZE
So if you had ten records per page and were on page 2 you would skip the first ten records and grab the next ten.
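One thing to watch: MySQL won't evaluate arithmetic inside the LIMIT clause itself, so the offset has to be computed in the application first. A quick sketch:
// Sketch: compute the offset in PHP, then splice it into the query
$pageSize = 10;
$currentPage = 2;                            // 1-based page number
$offset = ($currentPage - 1) * $pageSize;    // 10 for page 2
$sql = "SELECT recid FROM mytable ORDER BY recid ASC LIMIT $offset, $pageSize";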
The offset suggestion is a great one and you should probably use that. But if for some reason offset doesn't fit your needs (say, someone inserting a new record would shift your page slightly) you could also add a where recid > #### clause to your query. This is how you would paginate when working with the Twitter API.
Here is an example in PHP.
<?php
$query = 'select recid from mytable';
if (isset($_GET['recid']) && $_GET['recid'] != '') {
    // cast to int so user input can't break out of the query
    $query = $query . ' where recid > ' . (int)$_GET['recid'];
}
$query = $query . ' order by recid asc limit 10000';
//LOG INTO MYSQL
$result = mysql_query($query);
$last_id = '';
while ($row = mysql_fetch_assoc($result)) {
    //DO YOUR DISPLAY WORK
    $last_id = $row['recid'];
}
echo '<a href="?recid=' . $last_id . '">Next Page</a>';
?>
Again, it's a bit more complicated than it needs to be, but it will return set pages.
You can use the offset variable. If this is your full query, then you could use:
select recid from mytable order by recid asc limit 100 offset 300
This, for example, would give you records 300-399, and you increase the offset by 100 for every page: for the first page offset = 0, for the second page offset = 100, and so on.
In general: offset = (page - 1) * 100.
And as @Mathieu Lmbert said, you can make sure the offset doesn't go past 9900.
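A quick sketch of that bookkeeping (assuming a 1-based page parameter):
// Sketch: clamp the offset so it never goes past 9900
$perPage = 100;
$page = isset($_GET['page']) ? (int)$_GET['page'] : 1;   // 1-based
$page = max(1, $page);
$offset = min(($page - 1) * $perPage, 9900);             // stay inside the 10,000 cap
$sql = "select recid from mytable order by recid asc limit $perPage offset $offset";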
Effectively there can be only one limit, and it must be in the query. So the only solution will be to adjust the LIMIT clause in your query.
The thing you shouldn't do is read all 10,000 entries and throw away the 9,900 that you do not want.
Related
I am using COUNT and GROUP BY to only show unique results in a database query. EX:
$result = $db->query("SELECT *,COUNT(clientname)
FROM recruitingForm
GROUP BY clientname LIMIT $offset, $rowsperpage");
There are also the beginnings of pagination going on, but it is giving me the results of the entire database and not my smaller list.
How do I set up my db query in order to do that?
Fixed - by removing COUNT(clientname). LIMIT was never in my pagination code to begin with.
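For reference, a sketch of what the fixed query looks like (same table and variables as above):
// Sketch: with COUNT(clientname) dropped, GROUP BY + LIMIT paginate
// the unique client names as intended
$result = $db->query("SELECT * FROM recruitingForm
    GROUP BY clientname LIMIT $offset, $rowsperpage");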
I have a MySQL table with columns including id, names of people and a random number. What I would like to do is every week, collect the names into random groups of 5 each. It's for 5 aside football tournaments and it's getting too big to do it by hand now and I'd like it to be automatic.
The way I think it'll work is to get num_rows, divide by 5 to get the total number of groups ($divide), then make a loop where 5 rows are selected at random and a random number between 1 and $divide is assigned to them. It has to change every week, so it can't be a one-off task. I'd also like it to handle situations where num_rows doesn't divide exactly by 5, by creating a last group from the remainder.
This is as far as I've got -
$num_rows = mysql_num_rows($data);
$divide = $num_rows / 5;
$rannum = rand(1, $divide);
$sql = mysql_query("UPDATE allpeep SET rannumber = $rannum")
    or die(mysql_error());
but as you may have guessed, this just inserts the same random number into all the rows. I think this requires a while loop, but I can't quite work out how it'll work.
SELECT * FROM TABLE ORDER BY RAND() LIMIT 5
But this still isn't going to protect from duplicates if multiple queries are made, so ...
SELECT * FROM TABLE ORDER BY RAND()
Get the whole set, then parse it into 5's in your server script. I'm assuming you want these static until next week, so for each set of 5, update the SQL server telling them what 'group' they are in this week.
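A sketch of that server-side pass (table and column names are taken from the question; an integer id column is assumed):
// Sketch: shuffle once, then assign group numbers in chunks of 5;
// leftover players end up together in a final, smaller group
$result = mysql_query("SELECT id FROM allpeep ORDER BY RAND()")
    or die(mysql_error());
$group = 1;
$i = 0;
while ($row = mysql_fetch_assoc($result)) {
    mysql_query("UPDATE allpeep SET rannumber = $group WHERE id = {$row['id']}")
        or die(mysql_error());
    if (++$i % 5 == 0) {
        $group++;   // start a new group after every 5th player
    }
}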
Alternatively, you could use the former syntax if you use a seed for the RAND() function. Using the year/week-of-year combination will give you a reproducible random sequence:
SELECT * FROM TABLE ORDER BY RAND(YEARWEEK(CURDATE()))
Then you avoid having to update anything on the server, and only have to output the results
$result = mysql_query("SELECT * FROM {tablename} ORDER BY RAND(YEARWEEK(CURDATE()))")
or die(mysql_error());
$group = 1;
$row = true;
while ($row) {
    echo "Group $group";
    for ($i = 0; $i < 5 && ($row = mysql_fetch_assoc($result)); $i++) {
        // output current row
        echo "team {$row['team']}";
    }
    $group++;   // move on to the next group of 5
}
Here's a similar thread that should help you. Assign a Unique Random Number to Each Row
If you set up a script to run this query, you can run it using a cronjob anytime you want.
So I have a database, and I would like to display the last ten entries.
So far I have tried several ways but have run into a couple of issues. I tried using a for-loop starting at the row count of the database and linking that to the ID, but of course if I delete an entry, the loop will look for IDs that no longer exist. For example, if I have deleted lots of entries so that the first entry starts at id 20, then with only 10 rows it will try to display ids 10 to 1, which don't exist.
Any thoughts?
You could try, depending on your implementation,
SELECT * FROM `table_name` WHERE conditions ORDER BY `id` DESC LIMIT 10;
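If you then want those ten rows displayed oldest-first, you can re-sort them in an outer query (a sketch; the last10 alias is made up):
// Sketch: grab the last ten rows, then flip them back into ascending order
$sql = "SELECT * FROM
        (SELECT * FROM `table_name` WHERE conditions ORDER BY `id` DESC LIMIT 10) AS last10
        ORDER BY `id` ASC";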
Try this:
mysql_connect("localhost", "usernm", "pwd");
mysql_select_db("database");
$rs = mysql_query("SELECT id, col1, col2, coln FROM table_name WHERE condt ORDER BY id DESC LIMIT 10")
    or die(mysql_error());
while ($row = mysql_fetch_row($rs)) {
    print_r($row);
}
To randomly select records from one table, do I always have to set a temporary variable in PHP? I need some help with selecting random rows within a CodeIgniter model, and then displaying three different ones in a view every time my homepage is viewed. Does anyone have any thoughts on how to solve this? Thanks in advance!
If you don't have a ton of rows, you can simply:
SELECT * FROM myTable ORDER BY RAND() LIMIT 3;
If you have many rows, this will get slow, but for smaller data sets it will work fine.
As Steve Michel mentions in his answer, this method can get very ugly for large tables. His suggestion is a good place to jump off from. If you know the approximate maximum integer PK on the table, you can do something like generating a random number between one and your max PK value, then grab random rows one at a time like:
$q="SELECT * FROM table WHERE id >= {$myRandomValue}";
$row = $db->fetchOne($q); //or whatever CI's interface to grab a single is like
Of course, if you need 3 random rows, you'll have three queries here, but as they're entirely on the PK, they'll be fast(er than randomizing the whole table).
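A sketch of that loop (duplicate hits aren't handled here; $maxId is an assumed, roughly-known maximum id):
// Sketch: three cheap PK-based probes instead of one full-table sort
$rows = array();
for ($i = 0; $i < 3; $i++) {
    $r = rand(1, $maxId);
    $rows[] = $db->fetchOne("SELECT * FROM table WHERE id >= $r LIMIT 1");
}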
I would do something like:
SELECT * FROM table ORDER BY RAND() LIMIT 1;
This will put the data in a random order and then return only the first row from that random order.
I have this piece of code in production to get a random quote. Using MySQL's RAND function was super slow. Even with 100 quotes in the database, I was noticing a lag time on the website. With this, there was no lag at all.
$result = mysql_query('SELECT COUNT(*) FROM quotes');
$count = mysql_fetch_row($result);
$id = rand(1, $count[0]);   // assumes ids run from 1 to count with no gaps
$result = mysql_query("SELECT author, quote FROM quotes WHERE id=$id");
You need a query like this:
SELECT *
FROM tablename
WHERE somefield='something'
ORDER BY RAND() LIMIT 3
It is taken from the second result of
http://www.google.com/search?q=mysql+random
and it should work ;)
Ordering a table by RAND() can be very expensive if the table is large: MySQL will need to build a temporary table and sort it. If you have a primary key and you know how many rows are in the table, use LIMIT x,1 to grab a random row, where x is the (zero-based) number of the row you want to get.
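A sketch of that approach (the row count is fetched first, so two small queries total):
// Sketch: jump to a random row offset instead of sorting the whole table
$result = mysql_query('SELECT COUNT(*) FROM tablename');
$count = mysql_fetch_row($result);
$x = rand(0, $count[0] - 1);   // zero-based row number
$result = mysql_query("SELECT * FROM tablename LIMIT $x, 1");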
I'm currently displaying a random row from all the entries and that works fine.
SELECT * FROM $db_table where live = 1 order by rand() limit 1
Now I'd like to limit it to the last 100 entries in the db.
Every row in the db has an ID and a timestamp.
It's a small database, so overhead minimization is not a priority.
thanks!
EDIT:
Still can't get it running... I get a mysql_fetch_array error:
"Warning: mysql_fetch_array(): supplied argument is not a valid MySQL result resource
Here's all of my code:
<?php $sql = "SELECT * FROM
(SELECT * FROM $db_table ORDER BY $datetime DESC LIMIT 100)
ORDER BY rand() LIMIT 1";
$query = mysql_query($sql);
while($row = mysql_fetch_array($query)) {
echo "".$row['familyname']."";
} ?>
Thanks again!
This is what I came up with off the top of my head. I've tested it and it works in SQLite, so you shouldn't have much trouble with MySQL. The only change was that SQLite's random function is random() not rand():
SELECT * FROM
(SELECT * FROM $db_table ORDER BY $timestamp DESC LIMIT 100) AS latest
ORDER BY rand() LIMIT 1
(Note: unlike SQLite, MySQL requires an alias on the derived table - hence the AS latest. Without it the query fails, mysql_query() returns false, and you get exactly the "not a valid MySQL result resource" warning shown above.)
This page has a pretty detailed writeup on how to optimize an ORDER BY RAND()-type query. It's actually too involved for me to explain adequately on SO (also, I don't fully understand some of the SQL commands used, though the general concept makes sense), but the final optimized query makes use of several optimizations:
First, ORDER BY RAND(), which uses a filesort algorithm on the entire table, is dropped. Instead, a query is constructed to simply generate a single random id.
At this stage, an index scan is being used, which is even less efficient than a filesort in many cases, so this is optimized away with a subquery.
The WHERE clause is replaced with a JOIN to reduce the number of rows fetched by the outer SELECT, and the number of times the subquery is executed, to just 1.
In order to account for holes in the ids (from deletions) and to ensure an equal distribution, a mapping table is created to map row numbers to ids.
Triggers are used to automatically update & maintain the mapping table.
Lastly, stored procedures are created to allow multiple rows to be selected at once. (Here, ORDER BY is reintroduced, but used only on the result rows.)
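Stripped of the mapping-table and stored-procedure refinements, the core of the optimized query looks roughly like this (a sketch; id and tablename are placeholders):
// Sketch: generate one random id in a subquery, then JOIN so the
// outer SELECT fetches a single row via the primary key
$sql = "SELECT t.*
        FROM tablename AS t
        JOIN (SELECT CEIL(RAND() * (SELECT MAX(id) FROM tablename)) AS rid) AS r
          ON t.id >= r.rid
        ORDER BY t.id
        LIMIT 1";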
Here are the performance figures:
Q1. ORDER BY RAND()
Q2. RAND() * MAX(ID)
Q3. RAND() * MAX(ID) + ORDER BY ID
rows:  100         1,000       10,000      100,000     1,000,000
Q1     0:00.718s   0:02.092s   0:18.684s   2:59.081s   58:20.000s
Q2     0:00.519s   0:00.607s   0:00.614s   0:00.628s   0:00.637s
Q3     0:00.570s   0:00.607s   0:00.614s   0:00.628s   0:00.637s