I am using the ActiveRecord library that comes with the PHP framework CodeIgniter. I frequently find myself needing to query the database for a number of rows using a LIMIT clause, while also needing to know the total number of rows I would have pulled had I not included the LIMIT clause. This comes up frequently when I provide pagination for many results: I only want to pull 20 records at a time, but I also need to know how many rows in total match my WHERE clause. I need to create two slightly different queries, a count query:
SELECT COUNT(*) FROM table WHERE [where_clause];
And a 'paged' query:
SELECT * FROM table WHERE [where_clause] LIMIT 0,20;
Is there an elegant solution to this problem with ActiveRecord? There doesn't seem to be anything out of the box which will help me. Obviously I can write around the problem with my own PHP but it would be ideal if I could take advantage of some aspect of the library to not have to duplicate code, etc.
Answering my own question: the best way is to use SQL_CALC_FOUND_ROWS, like so. Add this select call to your query:
$this->db->select('SQL_CALC_FOUND_ROWS *',false);
And then immediately afterwards to get the count:
$count = $this->db->query("SELECT FOUND_ROWS() AS count")->row('count');
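Note that SQL_CALC_FOUND_ROWS has since been deprecated (as of MySQL 8.0.17) in favor of running a separate COUNT(*) query. The underlying two-query pattern is easy to demonstrate self-contained; here is a sketch using Python's sqlite3 (no server needed) with an invented `items` table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO items (name) VALUES (?)",
                 [("item%d" % i,) for i in range(50)])

# The 'paged' query: one page of 20 rows.
page = conn.execute("SELECT id, name FROM items LIMIT 20 OFFSET 0").fetchall()

# The count query: same WHERE clause (none here), but no LIMIT.
total = conn.execute("SELECT COUNT(*) FROM items").fetchone()[0]

print(len(page), total)  # 20 50
```

The same shape works in CodeIgniter: run the paged query for the page of rows, then a count query with an identical WHERE clause for the total.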
Related
I'd like to have an SQL SELECT statement (in PHP) that grabs all the data in my table. Then I'd like to be able to look for certain criteria and get a count of the rows that meet those criteria.
I can already do it by using mysql_num_rows and a separate SELECT per criterion, but that seems inefficient (I'd have dozens of SELECT statements going). Is there a way to do it with just one SELECT statement, then use PHP to count the various things I want to count?
Edit:
I don't have any relevant code to post, since the only code I do have is an SQL SELECT that does the filtering for me and then uses mysql_num_rows to count the results. This is what I am trying to avoid doing.
Example Select statement:
SELECT MDate, Type, MW, Region, Status FROM Scheduler WHERE Status = 'Complete';
In the above example I am looking for Status = 'Complete', but I have several statuses that I'd like to get counts of individually, without having to write a separate SELECT for each status.
You'll need to write some SQL along the lines of ...
SELECT `status`, COUNT(*) FROM Table WHERE `status` IN ("Complete", ...)
GROUP BY `status`
Without specifics of your database schema or the code you already have, that's the best answer you'll likely get.
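To make the idea concrete, here is a sketch of that single GROUP BY query returning a count per status (illustrated with Python's sqlite3 so it runs self-contained; the Scheduler table and status values come from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Scheduler (id INTEGER PRIMARY KEY, Status TEXT)")
conn.executemany("INSERT INTO Scheduler (Status) VALUES (?)",
                 [("Complete",)] * 3 + [("Pending",)] * 2 + [("Failed",)])

# One query, one pass over the table: a count for every status at once.
counts = dict(conn.execute(
    "SELECT Status, COUNT(*) FROM Scheduler GROUP BY Status").fetchall())

print(counts["Complete"], counts["Pending"], counts["Failed"])  # 3 2 1
```

In PHP you would loop over the result set once and index the counts by the status column, rather than running one SELECT per status.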
My MySQL query is very slow when I use GROUP BY; if I remove the GROUP BY, the query is very fast. How can I solve this problem?
My query code:
$myquery1 = mysql_query("SELECT * FROM konucuklar
                         WHERE status = ''
                           AND category = 'football'
                         GROUP BY matchhour
                         ORDER BY id ASC");
while ($myquery1record = mysql_fetch_array($myquery1)) {
    $myquery2 = mysql_query("SELECT * FROM konucuklar
                             WHERE mactarihi = '$bugunt'
                               AND status = ''
                               AND category = 'football'
                               AND matchhour = '{$myquery1record['matchhour']}'
                             ORDER BY id ASC");
    $toplams = mysql_num_rows($myquery2);
    while ($myquery2record = mysql_fetch_array($myquery2)) {
        // code
    }
}
Your first query selects non-aggregated columns alongside a GROUP BY, which does not comply with the SQL standard; MySQL will process it only if the ONLY_FULL_GROUP_BY SQL mode is disabled.
You are issuing the 2nd query in a loop based on the results returned by the 1st query. So, if the 1st query returns 10 rows, then you will execute the 2nd query 10 times. This is very slow. You should rewrite the 2 queries as one, since both queries query the same table and have almost the same where criteria.
No idea what the 2nd while loop does, as its body isn't shown.
The slowdown might not be related to the GROUP BY clause. Try adding an index on the columns you filter, group, and sort by.
Link to understand index : http://www.tutorialspoint.com/sql/sql-indexes.htm
MySQL Profiling might also help you in your endeavour
Your queries are not optimized and could probably be done better another way, including a single combined query (a JOIN) to fetch all the data at once.
Also, if your tables hold lots of rows, it is good practice to create INDEXES on the fields used in common query filters, to make searches faster.
For example, your first SELECT has this form (and is probably not well formed):
SELECT * FROM konucuklar WHERE status=''
and category='football'
GROUP BY matchhour
ORDER BY id asc
But it is used only to get the matchhour values for the second query. The minimal optimization is to fetch only the required field:
SELECT DISTINCT matchhour FROM konucuklar WHERE status='' and category='football'
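Going further, the 1 + N query pattern can usually be collapsed into a single query that fetches all matching rows sorted by matchhour, with the grouping done in application code. A rough sketch of that approach (Python's sqlite3 for a self-contained demo; the schema is guessed from the question):

```python
import sqlite3
from itertools import groupby

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE konucuklar ("
             "id INTEGER PRIMARY KEY, status TEXT, category TEXT, matchhour TEXT)")
conn.executemany(
    "INSERT INTO konucuklar (status, category, matchhour) VALUES (?, ?, ?)",
    [("", "football", "18:00"), ("", "football", "18:00"),
     ("", "football", "20:00"), ("done", "football", "20:00")])

# One query instead of 1 + N: every matching row, sorted by hour.
rows = conn.execute(
    "SELECT matchhour, id FROM konucuklar"
    " WHERE status = '' AND category = 'football'"
    " ORDER BY matchhour, id").fetchall()

# Group in application code; each group replaces one inner query.
per_hour = {hour: [rid for _, rid in group]
            for hour, group in groupby(rows, key=lambda r: r[0])}

print(per_hour)  # {'18:00': [1, 2], '20:00': [3]}
```

One round trip to the database instead of one per distinct matchhour, and the per-hour row counts fall out of the group sizes for free.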
I have searched but can't find an answer which suits the exact needs of this MySQL query.
I have the following queries on multiple tables to generate "stats" for an application:
SELECT COUNT(id) as count FROM `mod_**` WHERE `published`='1';
SELECT COUNT(id) as count FROM `mod_***` WHERE `published`='1';
SELECT COUNT(id) as count FROM `mod_****`;
SELECT COUNT(id) as count FROM `mod_*****`;
Pretty simple: it just counts the rows, sometimes based on a status.
However, in the pursuit of performance, I would love to get this into one query to save resources.
I'm using PHP to fetch this data with a simple mysql_fetch_assoc and retrieving $res['count'], if it makes a difference (PDO isn't guaranteed to be available, so plain old mysql here).
The overhead of sending a query and getting a single-row response is very small.
There is nothing to gain here by combining the queries.
If you don't have indexes yet, an INDEX on the published column will greatly speed up the first two queries.
You can use something like
SELECT SUM(published=1)
for some of that. MySQL will take the boolean result of published=1 and translate it to an integer 0 or 1, which can be summed up.
But it looks like you're dealing with MULTIPLE tables (if that's what the **, *** etc... are), in which case you can't really. You could use a UNION query, e.g.:
SELECT ...
UNION ALL
SELECT ...
UNION ALL
SELECT ...
etc...
That can be fired off as one single query to the DB, but it'll still execute each sub-query as its own query, and simply aggregate the individual result sets into one larger set.
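Both ideas can be sketched together: SUM over a boolean condition within one table, and UNION ALL with a marker column across tables. The following runs self-contained under Python's sqlite3 (the mod_a/mod_b tables are stand-ins for the obfuscated table names):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mod_a (id INTEGER PRIMARY KEY, published INTEGER)")
conn.execute("CREATE TABLE mod_b (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO mod_a (published) VALUES (?)",
                 [(1,), (1,), (0,)])
conn.execute("INSERT INTO mod_b DEFAULT VALUES")

# SUM over a boolean: published = 1 evaluates to 0 or 1, so the sum counts.
published = conn.execute(
    "SELECT SUM(published = 1) FROM mod_a").fetchone()[0]

# UNION ALL with a marker column: one round trip, one count per table.
counts = dict(conn.execute("""
    SELECT 'mod_a' AS tbl, COUNT(id) FROM mod_a WHERE published = 1
    UNION ALL
    SELECT 'mod_b', COUNT(id) FROM mod_b
""").fetchall())

print(published, counts)  # 2 {'mod_a': 2, 'mod_b': 1}
```

The marker column is what lets the PHP side tell the counts apart when the rows come back in a single result set.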
Disagreeing with @Halcyon, I think there is an appreciable difference, especially if the MySQL server is on a different machine, since every single query costs at least one network round trip.
I recommend you UNION the queries with a marker field to protect against the unexpected.
As @Halcyon said, there is not much to gain here. You can anyway do several UNIONs to get all the results in one query.
Can someone explain how SQLite3 is used to give the number of rows found in a query?
MySQL has mysql_num_rows and SQLite2 has a similar function, but someone in SQLite3 development seems to have forgotten to add that function.
The examples that I have seen here and on other sites do not answer the question but instead only create more questions.
SO...
I have a query like $queryStr = $d->query("SELECT * FROM visitors WHERE uid='{$userid}' AND account='active';");
What I want to know is: how do I find out how many results are in $queryStr?
I am not looking for the number of rows in the database, just the number of rows in the "query results"
SQLite computes query results on the fly (because this is more efficient for an embedded database where there is no network communication).
Therefore, it is not possible to find out how many records a query returns without actually stepping through all those records.
You should, if at all possible, structure your algorithm so that you do not need to know the number of records before you have read them.
Otherwise, you have to execute a separate query that uses SELECT COUNT(*) to return the number of records, like this:
$countQuery = $d->query("SELECT COUNT(*) FROM visitors WHERE uid='{$userid}' AND account='active';");
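A self-contained sketch of both approaches, using Python's sqlite3 and a visitors table guessed from the question (note the parameterized queries, which also avoid the SQL injection risk of interpolating $userid directly into the string):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE visitors (uid TEXT, account TEXT)")
conn.executemany("INSERT INTO visitors VALUES (?, ?)",
                 [("u1", "active"), ("u1", "active"),
                  ("u1", "closed"), ("u2", "active")])

# Option 1: step through (fetch) the results and count them yourself.
rows = conn.execute(
    "SELECT * FROM visitors WHERE uid = ? AND account = 'active'",
    ("u1",)).fetchall()
num_rows = len(rows)

# Option 2: a separate COUNT(*) query; no result rows are transferred.
count = conn.execute(
    "SELECT COUNT(*) FROM visitors WHERE uid = ? AND account = 'active'",
    ("u1",)).fetchone()[0]

print(num_rows, count)  # 2 2
```

If you need the rows anyway, option 1 costs nothing extra; if you only need the number, option 2 avoids fetching the data at all.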
For example, if I have to count the comments belonging to an article, it's obvious I don't need to cache the comments total.
But what if I want to paginate a gallery (WHERE status = 1) containing 1 million photos? Should I save that number in a table called counts, or is SELECT COUNT(id) AS total every time fine?
Are there other solutions?
Please advise. Thanks.
For MySQL, you don't need to store the counts, you can use SQL_CALC_FOUND_ROWS to avoid two queries.
E.g.,
SELECT SQL_CALC_FOUND_ROWS *
FROM Gallery
WHERE status = 1
LIMIT 10;
SELECT FOUND_ROWS();
From the manual:
In some cases, it is desirable to know how many rows the statement
would have returned without the LIMIT, but without running the
statement again. To obtain this row count, include a
SQL_CALC_FOUND_ROWS option in the SELECT statement, and then invoke
FOUND_ROWS() afterward.
Sample usage here.
It depends a bit on the number of queries that are run against that table with 1 million records. Consider first taking care of good indexes, especially multi-column indexes (because they are easily forgotten). That will do a lot. Also, make sure the query results are cached well on your server.
If you need this count very regularly, consider storing it (if it can't be cached by MySQL), as things could become slow. But most of the time good indexing will take care of it.
Best bet: set up some tests to find out whether the query stays fast and performance doesn't drop when you execute it many times in a row.
EXPLAIN [QUERY]
Use that command (in MySQL) to get information about how the query is executed and whether it can be improved.
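As a rough illustration of what an index changes in the reported plan (shown with SQLite's EXPLAIN QUERY PLAN, since it runs self-contained; MySQL's EXPLAIN output looks different but serves the same purpose):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE gallery (id INTEGER PRIMARY KEY, status INTEGER)")

query = "EXPLAIN QUERY PLAN SELECT COUNT(*) FROM gallery WHERE status = 1"

# Without an index, the count has to scan the whole table.
plan_before = conn.execute(query).fetchall()[0][-1]

conn.execute("CREATE INDEX idx_gallery_status ON gallery (status)")

# With the index, only the matching index entries are read.
plan_after = conn.execute(query).fetchall()[0][-1]

print(plan_before)  # e.g. "SCAN gallery"
print(plan_after)   # e.g. "SEARCH gallery USING COVERING INDEX idx_gallery_status"
```

The exact plan text varies by SQLite version, but the scan-versus-index distinction is the thing to look for in either database.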
Doing the count every time would be OK.
During paging, you can use SQL_CALC_FOUND_ROWS anyway
Note:
A denormalized count will become stale
No one will page through that many items anyway