So my MySQL table looks like this:
`Table` columns: p_id, name, rank, branch (ENUM('Air Force', 'Army', 'Marines', 'Navy', 'None')), billet, etc.
I am trying to grab the branch with the lowest number of members, while avoiding None and Navy (at this time). The main branches I want to pull from are Air Force, Army, and Marines, showing the branch with the fewest personnel/members.
So this is what I have done
$stats['lowest'] = DB::run('SELECT MIN(branch) as min_branch FROM personnel')->fetchColumn();
Recruits should go to : <?php echo $stats['lowest']; ?>
It shows Air Force even though it has 3 members, while Army and Marines have 0 rows with that branch value.
You are probably looking for a group and count as a subquery:
SELECT
    `a`.`branch`
FROM (
    SELECT
        `personnel`.`branch`,
        COUNT(*) AS `c`
    FROM
        `personnel`
    WHERE
        `personnel`.`branch` IN ('Air Force', 'Army', 'Marines')
    GROUP BY
        `personnel`.`branch`
) AS `a`
ORDER BY
    `a`.`c` ASC
LIMIT 1
BUT
Since you are trying to get Army and Marines, which don't have any records, they won't be included in your query. So any result that is returned will never give you a 0 value.
It appears you are trying to count records that don't exist, which won't work without setting up a database view (although someone else might have a better answer).
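One common workaround (an untested sketch, assuming the branch list is known up front and the personnel table from the question) is to supply the branch names as a derived table and LEFT JOIN the personnel rows onto it, so empty branches count as 0:

```sql
SELECT b.branch, COUNT(p.p_id) AS c   -- COUNT(p.p_id) ignores NULLs, so an empty branch yields 0
FROM (
    SELECT 'Air Force' AS branch
    UNION ALL SELECT 'Army'
    UNION ALL SELECT 'Marines'
) AS b
LEFT JOIN personnel AS p ON p.branch = b.branch
GROUP BY b.branch
ORDER BY c ASC
LIMIT 1
```

With this shape, a branch with zero members can actually win the ORDER BY ... LIMIT 1.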
Related
I have an orders grid holding 1 million records. The page has pagination, sort, and search options. If the sort order is set by customer name with a search key and the page number is 1, it works fine:
SELECT * FROM orders WHERE customer_name LIKE '%Henry%'
ORDER BY customer_name DESC LIMIT 10 OFFSET 0
It becomes a problem when the user clicks on the last page:
SELECT * FROM orders WHERE customer_name LIKE '%Henry%'
ORDER BY customer_name DESC LIMIT 10 OFFSET 100000
The above query takes forever to load. Indexes are set on the order id, customer name, and date of order columns.
I could use this solution https://explainextended.com/2009/10/23/mysql-order-by-limit-performance-late-row-lookups/ if I didn't have a non-primary-key sort option, but in my case the sort column is user-selected. It will change between order id, customer name, date of order, etc.
Any help would be appreciated. Thanks.
Problem 1:
LIKE '%...' -- The leading wildcard requires a full scan of the data, or at least a scan until it finds the 100000+10 rows. Even
... WHERE ... LIKE '%qzx%' ... LIMIT 10
is problematic, since there are probably not 10 such names. So: a full scan of your million names.
... WHERE name LIKE 'James%' ...
will at least start in the middle of the table, if there is an index starting with name. But still, the LIMIT and OFFSET might conspire to require reading the rest of the table.
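One hedged alternative for the leading-wildcard problem (not from the question; it assumes word-level matching is acceptable and that a FULLTEXT index is available, which InnoDB supports since MySQL 5.6) is a FULLTEXT search, which avoids the full scan but matches whole words rather than arbitrary substrings:

```sql
ALTER TABLE orders ADD FULLTEXT INDEX ft_customer_name (customer_name);

SELECT * FROM orders
WHERE MATCH(customer_name) AGAINST('Henry')   -- word match, not a substring match
ORDER BY customer_name DESC
LIMIT 10;
```

Because the semantics differ from LIKE '%Henry%', this is only a fit when users are really searching for words or name parts, not fragments inside words.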
Problem 2: (before you edited your Question!)
If you leave out the WHERE, do you really expect the user to page through a million names looking for something?
This is a UI problem.
If you have a million rows, and the output is ordered by Customer_name, that makes it easy to see the Aarons and the Zywickis, but not anyone else. How would you get to me (James)? Either you have 100K links and I am somewhere near the middle, or the poor user would have to press [Next] 'forever'.
My point is that no amount of query tuning will fix this; the database is not where the efficiency can come from, the UI needs rethinking.
In some other situations, it is meaningful to go to the [Next] (or [Prev]) page. In these situations, "remember where you left off", then use that to efficiently reach into the table. OFFSET is not efficient. More on Pagination
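A minimal sketch of "remember where you left off" (keyset pagination), assuming an `id` primary key and that the application passes back the last (customer_name, id) pair it displayed; the literal values below are hypothetical:

```sql
-- Page 1: descending by name, with id as a tie-breaker
SELECT * FROM orders
ORDER BY customer_name DESC, id DESC
LIMIT 10;

-- Next page: seek past the last row shown, instead of using OFFSET
SELECT * FROM orders
WHERE (customer_name, id) < ('Henry Smith', 12345)   -- hypothetical last row of page 1
ORDER BY customer_name DESC, id DESC
LIMIT 10;
```

With an index on (customer_name, id), each page starts where the previous one ended, no matter how deep the user pages.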
I use a special concept for this. First, I have a table called pager. It contains a primary key pager_id and some values to identify a user (user_id, session_id), so that the pager data can't be stolen.
Then I have a second table called pager_filter. It consists of 3 ids:
pager_id int unsigned not NULL # id of table pager
order_id int unsigned not NULL # store the order here
reference_id int unsigned not NULL # reference into the data table
primary key(pager_id,order_id);
As the first operation, I select all records matching the filter rules and insert them into pager_filter:
DELETE FROM pager_filter WHERE pager_id = $PAGER_ID;
INSERT INTO pager_filter (pager_id, order_id, reference_id)
SELECT $PAGER_ID, ROW_NUMBER() OVER (ORDER BY $ORDERING), data_id
FROM data_table
WHERE $CONDITIONS
After filling the filter table you can use an inner join for pagination:
SELECT d.*
FROM pager_filter f
INNER JOIN data_table d ON d.data_id = f.reference_id
WHERE f.pager_id = $PAGER_ID AND f.order_id BETWEEN 100000 AND 100099
ORDER BY f.order_id
or
SELECT d.*
FROM pager_filter f
INNER JOIN data_table d ON d.data_id = f.reference_id
WHERE f.pager_id = $PAGER_ID
ORDER BY f.order_id
LIMIT 100 OFFSET 100000
Hint: all the code above is untested pseudocode.
Essentially I want to order by "title", while grouping movies with the same, non-NULL "series_id" together within that order. Titles where series_id IS NULL should order by title only, while non-NULL series_ids should order by "series_order" and be listed in the results based on "series"."title" within the original column sort. I would prefer to achieve this within the query rather than loading the entire database and sorting from there.
What I have tried (MySQL 5.7.17):
ORDER BY title, series_id
SELECT * FROM media
ORDER BY title, series_id, series_order
LIMIT 0, 30
This does not account for titles in a series that are not alphabetically the same (see sample below).
ORDER BY CASE using CONCAT to sort by 'series'.'title' + 'media'.'series_order'
SELECT * FROM media
ORDER BY CASE
WHEN series_id IS NULL THEN title
ELSE CONCAT((SELECT title FROM series WHERE id = media.series_id), series_order)
END
LIMIT 0, 30
The results are correctly ordered in an SQL Fiddle, but not on the dev server. To be fair, this is still not the desired result, as the 'series'.'title' may differ from the original movie's title.
LEFT JOIN to include the 'series' table for sorting using the same idea
SELECT media.*, series.title, series.id FROM media
LEFT JOIN series ON media.series_id = series.id
LIMIT 0, 30
This does not order the data correctly, either.
Sample Data:

title                   series_id   series_order
88 Minutes              NULL        NULL
Live Free or Die Hard   100094      4
3rd Rock from the Sun   100000      2
2 Guns                  NULL        NULL
Die Hard                100094      1
Evil Dead               NULL        NULL
A Good Day to Die Hard  100094      5
3rd Rock from the Sun   100000      1
Desired Result:

title                   series_order
2 Guns                  NULL
3rd Rock from the Sun   1
3rd Rock from the Sun   2
88 Minutes              NULL
Die Hard                1
Live Free or Die Hard   4
A Good Day to Die Hard  5
Evil Dead               NULL
Primary Table: media
Relevant Columns: title, series_id, series_order
Series Table: series
Relevant Columns: id, title
Fiddle: http://sqlfiddle.com/#!9/efca7c/3
On the fiddle, option 2 appears to be working. On the dev version it only partially orders things.
EDIT: As it turns out, the PHP code I was using to store the data before converting it to JSON was inadvertently re-ordering the results by the primary ID.
Given the convolution required to achieve your desired sorting, if this were my application I'd probably create a new column which contains the "base title" for the series and populate that value during insertion; then you could sort on that without any voodoo or eye-strain.
In the absence of modifying your table structure, I managed to downgrade a solution that was using ROW_NUMBER() and PARTITION (MySQL 8.0 Demo) into a couple of nested subqueries; it's not what I consider beautiful.
SQL (Demo)
SELECT m2.title grouping_title, m1.title, COALESCE(m2.title, m1.title), m1.series_order
FROM media m1
LEFT JOIN (
SELECT series_id, title
FROM media m3
WHERE series_order = (SELECT MIN(series_order) FROM media WHERE series_id = m3.series_id)
) m2 ON m1.series_id = m2.series_id
ORDER BY COALESCE(m2.title, m1.title), m1.series_order
You can modify the outer SELECT as you wish, but I just wanted to show what the COALESCE() function was generating. Effectively, I'm joining the media table onto itself so that I can obtain the lowest series_order value for a given series_id. The title in that row represents the "base title" to be used in the first rule of the sorting algorithm, unless it is NULL, in which case we just use the title from the parent query.
For your application output, you will want to use the m1.title and the m1.series_order.
Use a CASE expression to check whether the series_order value is null; if it is null, take only the title, otherwise concatenate the title with the series_order.
Query
SELECT CASE
           WHEN series_order IS NULL THEN title
           ELSE CONCAT(title, ' : Season ', series_order)
       END AS title
FROM table_name
ORDER BY 1;
So I have the following query, which I use to get some analytics stats:
SELECT COUNT(*) AS total,
       CONCAT(YEAR(created), '-', MONTH(created), '-', DAY(created)) AS date_only
FROM logs
WHERE action = 'banner view'
  AND created BETWEEN '2015-07-03 21:03' AND '2017-08-02 21:03'
GROUP BY date_only
ORDER BY created ASC
This works, and it gives me one row per day with its count.
What I actually need is the total count of the rows returned, in this case 20 (this is a dummy example). I need to use this count to check, before showing the stats, whether the data is too big to be displayed on a graphic.
Can this be achieved?
Edit:
So the process will be like this:
1. Get a count of the total rows; if the count is smaller than X (the number will be in a config, and it will be a basic if statement), then go ahead and run the above query.
More info:
I actually use this query to display the stats; I just need to adapt it in order to show the total row count.
So the result of the query should be:
total | 20 (in this case)
I think you would want to use a derived table. Just wrap your original query in parentheses after the FROM and then give the derived table an alias (in this case tmp). Like so:
SELECT COUNT(*) FROM (
    SELECT COUNT(*) AS total,
           CONCAT(YEAR(created), '-', MONTH(created), '-', DAY(created)) AS date_only
    FROM logs
    WHERE action = 'banner view'
      AND created BETWEEN '2015-07-03 21:03' AND '2017-08-02 21:03'
    GROUP BY date_only
) AS tmp;
If I understand what you want to do correctly, this should work. It should return the actual number of results from your original query.
What's happening is that the results of the parenthesized query are getting used as a sort of virtual table to query against. The parenthesized query returns 20 rows, so the "virtual" table has 20 rows. The outer count(*) just counts how many rows there are in that virtual table.
Based on the PHP tag, I assume you are using PHP to send the queries to MySQL. If so, you can use mysqli_num_rows to get the answer.
If your query result is in $result then:
$total = mysqli_num_rows($result);
Slightly different syntax for Object Oriented style instead of procedural style.
The best part is you don't need an extra query. You perform the original query and get mysqli_num_rows as an extra without running another query. So you can figure out pagination or font size or whatever and then display without doing the query again.
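For completeness, MySQL itself can report the row count of the previous result. This is only a sketch, and note that SQL_CALC_FOUND_ROWS / FOUND_ROWS() are deprecated as of MySQL 8.0.17, where a separate COUNT(*) query is the recommended replacement:

```sql
SELECT SQL_CALC_FOUND_ROWS
       COUNT(*) AS total,
       CONCAT(YEAR(created), '-', MONTH(created), '-', DAY(created)) AS date_only
FROM logs
WHERE action = 'banner view'
  AND created BETWEEN '2015-07-03 21:03' AND '2017-08-02 21:03'
GROUP BY date_only;

SELECT FOUND_ROWS();   -- number of rows the previous SELECT produced
```

Since the original query has no LIMIT, FOUND_ROWS() here is simply the number of grouped rows the query returned.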
This is a small query but it works fine and gives the total number of rows; you just need to add your conditions.
SELECT COUNT(*) FROM table WHERE field LIKE '%condition%'
The GROUP BY, I think, you need to eliminate, because instead of counting all the records it counts within each group. For example, with 4 records and a GROUP BY you get:
1
1
1
1
I hope this helps.
You can try it this way:
SELECT COUNT(*) FROM (
    SELECT COUNT(*) AS total,
           CONCAT(YEAR(created), '-', MONTH(created), '-', DAY(created)) AS date_only
    FROM logs
    WHERE action = 'banner view'
      AND created BETWEEN '2015-07-03 21:03' AND '2017-08-02 21:03'
    GROUP BY date_only
    HAVING total >= 20
) temp
Currently we sort products in a grid by the columns of the grid: first column A, then B, then C. Column A can hold the revenue (or the profit) and column B can hold the date the product was added. If we sort by revenue first and then date added, the result is a list of high-revenue products; products with the same revenue are then sorted by date. The other way around, with date added first and then revenue, results in a list sorted by new products first.
The problem we face is the following: if we add 100 new products and sort by newest first, then the bestsellers are nowhere to be seen on page 1 (but far, far down the list). When we sort by bestsellers, new products never get a chance, because they are hardly ever seen and never climb up based on revenue.
So my question is: how does one sort an array based on attribute 1, then 2, etc., but after that 'mingle or mix' the sorting in the above case so we see, for example: 1 bestseller, 1 new, 1 bestseller, 1 new, ... or similarly 1 bestseller, 2 new, 1 bestseller, 2 new, ... so you kind of promote one of the attributes?
Can this be done?
Below is an example with the original data, then the 'normal' sort by revenue; the last grid contains the sort we would like to achieve.
How would one do this? And can this be done in PHP, in MySQL, or both? Does MySQL have something for this?
I appreciate your help
You can try a purely MySQL based solution:
SELECT * FROM (
    (SELECT @inc1 := IF(@inc1 IS NULL, 1, @inc1 + 1) AS `inc`, `product_id`, ... FROM `products` ORDER BY `sales` DESC)
    UNION
    (SELECT @inc2 := IF(@inc2 IS NULL, 1, @inc2 + 1) AS `inc`, `product_id`, ... FROM `products` WHERE `date_added` > 'yesterday' ORDER BY `date_added` DESC)
) AS `tbl` GROUP BY `product_id` ORDER BY `inc` ASC
This gives you an idea, but basically what we're trying to achieve is this: we get all the best sellers first and give them a generated increment number; secondly, we do the same with the latest products. Because both data sets increment separately, we get the same numbers. Grouping by the product id will get rid of any duplicates (if, for example, your new product becomes an instant success), and the ORDER BY `inc` will sort them nicely to achieve your mixed result...
Well, that's the theory anyway! Hopefully this gives you a good start ;)
If you want to order the result set in a custom order, MySQL has the FIELD() function:
ORDER BY FIELD(column,'value1','value2','value3')
I want to ignore the duplicates in my database when I set LIMIT 0, 50, then LIMIT 50, 50, then LIMIT... I need to check for duplicates on only one column of my table, not all the columns at once. I can't merge the duplicates, because they differ in one way: these duplicates have different prices.
More precisely, I need to show a list of these items, with their different prices shown next to each one.
I need a precise number (50) per page, so I can't load fewer and go to the next page. I could therefore load more from the beginning (changing the max and previous offsets if I'm on a far page), so that if I ignore the duplicates I still get exactly 50 per page and the right number of pages shown at the end.
I'm a bit of a beginner with PHP and I have no idea how to do that. Maybe pre-scan the whole table and then start writing my code, being flexible with my scan's LIMIT variables and everything? What functions do I need? How?
Or can something pre-programmed, or a PHP function I don't know exists, solve this problem? Or do I really need to get a headache? xD
I am not entirely certain of what you are asking, but I think you might want to do an aggregate statement along these lines:
select
itemID,
group_concat(itemPrice)
from
yourTable
group by
itemID
limit 50
This will bring back a list of 50 items and a second column where all the prices are grouped together. Then in your PHP code, you can either explode() that second column or keep it as is.
Edit: if you select every field, you can't just use an aggregate function. If you want to select other columns that won't differ within a group, add them to both the SELECT and the GROUP BY sections, like this:
select
itemID,
itemName,
itemSomething,
itemSomethingElse,
group_concat(itemPrice)
from
yourTable
group by
itemID,
itemName,
itemSomething,
itemSomethingElse
limit 50
Probably you can group by item and use GROUP_CONCAT to show the list of different prices? That way you can still use LIMIT 50. If the price column is numeric, cast it to VARCHAR.
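As a small sketch on top of that idea (using the hypothetical yourTable/itemPrice names from the other answer), GROUP_CONCAT also accepts ORDER BY and SEPARATOR, which keeps each price list sorted and readable:

```sql
SELECT itemID,
       GROUP_CONCAT(itemPrice ORDER BY itemPrice ASC SEPARATOR ', ') AS prices
FROM yourTable
GROUP BY itemID
LIMIT 50;
```

One thing to keep in mind is group_concat_max_len, which truncates long concatenated lists at 1024 characters by default.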
I admit I borrowed the group_concat() function from the other answers :)
After reading this paragraph from the docs:
The default behavior for UNION is that duplicate rows are removed from the result.
The optional DISTINCT keyword has no effect other than the default because it also
specifies duplicate-row removal. With the optional ALL keyword, duplicate-row removal
does not occur and the result includes all matching rows from all the SELECT statements.
Assume the following table (testdb.test):
ID Name Price
1 Item-A 10
2 Item-A 15
3 Item-A 9.5
4 Item-B 5
5 Item-B 4
6 Item-B 4.5
7 Item-C 50
8 Item-C 55
9 Item-C 40
You can page this table by rows (9 rows) or by groups (3 groups, based on the item's name).
If you would like to page your items based on the item groups, this should help:
(SELECT
    name, group_concat(price)
FROM
    testdb.test
GROUP BY name
LIMIT 1, 3)
UNION
(SELECT
    name, group_concat(price)
FROM
    testdb.test
GROUP BY name
LIMIT 0, 3); -- Constant 0; the range is the same as the first LIMIT's
If you would like to page your items based on all the items (I don't think that's what you were asking for, but just in case it helps someone else), this should help:
(SELECT
    name, price
FROM
    testdb.test
LIMIT 1, 5)
UNION
(SELECT
    name, price
FROM
    testdb.test
LIMIT 0, 5); -- Constant 0; the range is the same as the first LIMIT's
A very important thing to note is how you will have to modify the limits. The first LIMIT is your key: you can start from any offset you'd like, as long as it's <= COUNT(*), but it has to have the same range as the second LIMIT (i.e. 3 in the first example and 5 in the second example). And the second LIMIT will always start from 0, as shown.
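One caveat worth adding: without an ORDER BY, MySQL makes no guarantee about which rows fall into each LIMIT window, so pages can overlap or skip rows between runs. A safer sketch of the group-based paging fixes the order explicitly:

```sql
SELECT name, group_concat(price)
FROM testdb.test
GROUP BY name
ORDER BY name        -- deterministic page boundaries
LIMIT 3 OFFSET 0;    -- page 1; use OFFSET 3 for page 2, and so on
```

With a stable ORDER BY, plain LIMIT/OFFSET paging over the groups already behaves predictably, without the UNION trick.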
I enjoyed working on this, hope this helps.