I'm using the following query to get a list of users with their maximum 'speed'.
With GROUP BY:
SELECT users.email, speed.speed
FROM users INNER JOIN speed ON users.email=speed.email
GROUP BY users.email
ORDER BY speed.speed DESC LIMIT 15
However, when I run the query, the ORDER BY is not working; the results are not returned in descending order of speed.
If I remove the GROUP BY, I get the ordered list, but then there are multiple rows with the same email. I only want one row per email. How can I apply GROUP BY here?
Without GROUP BY:
SELECT users.email, speed.speed
FROM users INNER JOIN speed ON users.email=speed.email
ORDER BY speed.speed DESC LIMIT 15
Without an aggregate function, GROUP BY returns the grouped column together with an indeterminate value from the non-grouped columns (whichever row MySQL happens to pick from each group). You'll want to use an aggregate function on your speed.speed column if you want the result sorted correctly.
If you want each user with their maximum speed, you'll want to do something like
SELECT users.email, MAX(speed.speed) AS maxspeed
FROM users INNER JOIN speed ON users.email = speed.email
GROUP BY users.email
ORDER BY maxspeed DESC LIMIT 15
Or if you wanted the minimum
SELECT users.email, MIN(speed.speed) AS minspeed
FROM users INNER JOIN speed ON users.email = speed.email
GROUP BY users.email
ORDER BY minspeed DESC LIMIT 15
Or both
SELECT users.email, MAX(speed.speed) AS maxspeed, MIN(speed.speed) AS minspeed
FROM users INNER JOIN speed ON users.email = speed.email
GROUP BY users.email
ORDER BY maxspeed DESC LIMIT 15
If you just want the 15 highest speeds:
SELECT s.email, s.speed
FROM speed s
ORDER BY s.speed DESC
LIMIT 15;
If you want the users with the highest speed:
SELECT s.email, s.speed
FROM speed s
WHERE s.speed = (SELECT MAX(s2.speed) FROM speed s2)
LIMIT 15;
Note that joins are not needed for these queries.
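And if the goal is one row per email showing that user's top speed (which appears to be what the original GROUP BY was after), that doesn't need the join either; a minimal sketch:
SELECT s.email, MAX(s.speed) AS max_speed
FROM speed s
GROUP BY s.email
ORDER BY max_speed DESC
LIMIT 15;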
I'm trying to speed up the following query, as it takes quite a long time to run: right now it's 'only' about 1.5 seconds, but it will certainly get slower with more rows (the row count will roughly 10x over the next period).
Basically, I want to show all the rows from the orders table for the user, and for each row show the total order amount (which is a SUM over the orders_products table).
SELECT
orders.order_number,
orders.order_date,
companies.company_name,
COALESCE(SUM(orders_products.product_price * orders_products.product_quantity),0) AS order_value
FROM orders
LEFT JOIN companies ON companies.id = orders.company_id
LEFT JOIN orders_products ON orders_products.order_id = orders.id
LEFT JOIN users ON users.id = orders.user_id
WHERE orders.user_id = '$user_id'
AND companies.user_id = '$user_id'
GROUP BY orders.id ORDER BY orders.order_date DESC, orders.order_number DESC
I've tried adding another condition, AND orders_products.user_id = '$user_id'. Speed-wise the query was about 12x faster (yeah!), but the problem is that not all orders have products in them. In that case, the orders without products are not returned.
How do I change my query so that an order without products is still returned (with a total order value of 0), while also speeding up the query?
Thank you in advance for your help!
You might find it faster to use a correlated subquery:
SELECT o.order_number, o.order_date, c.company_name,
(SELECT COALESCE(SUM(op.product_price * op.product_quantity), 0)
FROM orders_products op
WHERE op.order_id = o.id
) AS order_value
FROM orders o LEFT JOIN
companies c
ON c.id = o.company_id AND c.user_id = o.user_id
WHERE o.user_id = '$user_id'
ORDER BY o.order_date DESC, o.order_number DESC
This gets rid of the outer aggregation, which is often a performance win.
Then for performance you want the following indexes:
orders(user_id, order_date desc, order_number desc, company_id)
companies(id, user_id, company_name)
orders_products(order_id, product_price, product_quantity)
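If you want to create them explicitly, a sketch (the index names here are made up; the DESC key parts take effect on MySQL 8.0+ and are parsed but ignored on older versions):
CREATE INDEX idx_orders_user_date ON orders (user_id, order_date DESC, order_number DESC, company_id);
CREATE INDEX idx_companies_id_user ON companies (id, user_id, company_name);
CREATE INDEX idx_orders_products_order ON orders_products (order_id, product_price, product_quantity);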
I have 3 tables - users, journals, journaltags. I select data from the 3 tables using the chosen tags.
$sqltheme="SELECT users.id as uid, users.name, users.surname, users.avatar, journals.id, journals.author_id, journals.title, journals.text, journals.create_date, journaltags.name as jname FROM users
INNER JOIN journals ON users.id=journals.author_id
INNER JOIN journaltags ON journaltags.journal_id = journals.id WHERE journals.create_date LIKE ? AND journals.author_id=? AND (".$expression.") ORDER BY journals.id DESC LIMIT 10";
$stmtheme=$conn->prepare($sqltheme);
$stmtheme->execute($array);
But if two tags are the same for one journal, then the same journal is selected twice. How can I make journals.id DISTINCT? I tried GROUP BY journals.id but it didn't help.
If I understand correctly, your problem is that the journaltags table may have one or more rows with a duplicated journal_id and name column value, right?
You can simply add the DISTINCT keyword to your select statement, right after the word SELECT:
SELECT DISTINCT users.id as uid, users.name, users.surname, users.avatar, journals.id, journals.author_id, journals.title, journals.text, journals.create_date, journaltags.name as jname FROM users
INNER JOIN journals ON users.id=journals.author_id
INNER JOIN journaltags ON journaltags.journal_id = journals.id WHERE journals.create_date LIKE ? AND journals.author_id=? AND (".$expression.") ORDER BY journals.id DESC LIMIT 10
The reason that your GROUP BY journals.id did not work is that you had other columns that needed to be included in the grouping as well. Adding DISTINCT is essentially a short way of writing GROUP BY [all selected columns].
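In other words, the DISTINCT query above behaves roughly like grouping by every selected column (the dynamic tag expression from $expression is omitted here for brevity):
SELECT users.id AS uid, users.name, users.surname, users.avatar,
       journals.id, journals.author_id, journals.title, journals.text, journals.create_date,
       journaltags.name AS jname
FROM users
INNER JOIN journals ON users.id = journals.author_id
INNER JOIN journaltags ON journaltags.journal_id = journals.id
WHERE journals.create_date LIKE ? AND journals.author_id = ?
GROUP BY users.id, users.name, users.surname, users.avatar,
         journals.id, journals.author_id, journals.title, journals.text, journals.create_date,
         journaltags.name
ORDER BY journals.id DESC LIMIT 10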
The following query
SELECT
invoices.id,
SUM(payments.amount)
FROM adass.invoices
INNER JOIN adass.payments ON invoices.id=payments.id_invoice
GROUP BY invoices.id ORDER BY 1 desc LIMIT 0, 25
returns the following result
However, when I join another table to the query like so:
SELECT
invoices.id,
SUM(payments.amount)
FROM adass.invoices
INNER JOIN adass.invoice_items ON invoices.id=invoice_items.id_invoice
INNER JOIN adass.payments ON invoices.id=payments.id_invoice
GROUP BY invoices.id ORDER BY 1 desc LIMIT 0, 25
It duplicates the payment amount for invoice id 13919, effectively doubling the payment of 100 in the table
What is causing this?
I've added the table contents below
INVOICE ITEMS TABLE
PAYMENTS TABLE
UPDATE: Larger query as follows
SELECT SQL_CALC_FOUND_ROWS invoices.id,
COALESCE(SUM(invoice_items.gross),0) AS gross,
COALESCE(SUM(invoice_items.net) - SUM(invoice_items.gross),0) AS vat,
COALESCE(SUM(invoice_items.net),0) AS net,
COALESCE(SUM(payments.amount),0),
COALESCE(SUM(payments.amount) - SUM(invoice_items.net),0) AS outstanding
FROM adass.invoices
LEFT JOIN adass.invoice_items ON invoices.id=invoice_items.id_invoice
LEFT JOIN adass.payments ON invoices.id=payments.id_invoice
GROUP BY invoices.id ORDER BY 1 desc LIMIT 0, 25
RESULT:
It's clear what's going on. Your INVOICE #13919 has two INVOICE ITEMS rows, so you get double the payment amount because the join generates two rows for that invoice. If it had three INVOICE ITEMS rows, the amount would have been tripled.
You need to remove this join from your query: INNER JOIN adass.invoice_items ON invoices.id=invoice_items.id_invoice, since you are not using any columns from adass.invoice_items in that query anyway.
Your query then will be:
SELECT invoices.id,
SUM(payments.amount)
FROM adass.invoices
INNER JOIN adass.payments ON invoices.id=payments.id_invoice
GROUP BY invoices.id ORDER BY 1 desc LIMIT 0, 25
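For the larger query in your update, where you do need columns from both invoice_items and payments, removing the join isn't an option. A common fix is to aggregate each child table separately in a derived table before joining, so neither one multiplies the other; a sketch along those lines (column names taken from your query, not tested against your schema):
SELECT SQL_CALC_FOUND_ROWS invoices.id,
    COALESCE(ii.gross, 0) AS gross,
    COALESCE(ii.net - ii.gross, 0) AS vat,
    COALESCE(ii.net, 0) AS net,
    COALESCE(p.amount, 0),
    COALESCE(p.amount - ii.net, 0) AS outstanding
FROM adass.invoices
LEFT JOIN (SELECT id_invoice, SUM(gross) AS gross, SUM(net) AS net
           FROM adass.invoice_items GROUP BY id_invoice) ii ON ii.id_invoice = invoices.id
LEFT JOIN (SELECT id_invoice, SUM(amount) AS amount
           FROM adass.payments GROUP BY id_invoice) p ON p.id_invoice = invoices.id
ORDER BY invoices.id DESC LIMIT 0, 25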
I have a set of tracks that need to be played.
There are something like 70 tracks in the database, and my script needs to generate a new ID in order to start the next track.
Current query ($row['v_artist'] is the artist currently playing):
SELECT *
FROM t_tracks
WHERE v_artist NOT LIKE '%".$row['v_artist']."%'
ORDER BY RAND()
LIMIT 1;
Now I wish to add a subquery so that it picks a random id, but not one of the 50 most recently played tracks (NOT IN?).
Subquery:
SELECT *
FROM `t_playlist`
ORDER BY pl_last_played DESC
LIMIT 50, 1
How can I get a random ID from t_tracks that does not exist in the query for t_playlist?
Conceptually, I think you want this:
SELECT *
FROM t_tracks
WHERE v_artist NOT LIKE '%".$row['v_artist']."%' AND
track_id NOT IN (SELECT track_id FROM t_playlist ORDER BY pl_last_played DESC LIMIT 50)
ORDER BY RAND()
LIMIT 1;
However, MySQL doesn't permit LIMIT inside an IN subquery, so use a LEFT JOIN instead:
SELECT t.*
FROM t_tracks t LEFT JOIN
(SELECT track_id
FROM t_playlist
ORDER BY pl_last_played DESC
LIMIT 50
) p
ON t.track_id = p.track_id
WHERE t.v_artist NOT LIKE '%".$row['v_artist']."%' AND
p.track_id IS NULL
ORDER BY RAND()
LIMIT 1;
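As an aside, the restriction only applies to LIMIT used directly in the IN subquery; wrapping it in a derived table is a common workaround if you'd rather keep the NOT IN form (a sketch, same assumed track_id column):
SELECT *
FROM t_tracks
WHERE v_artist NOT LIKE '%".$row['v_artist']."%' AND
      track_id NOT IN (SELECT track_id
                       FROM (SELECT track_id
                             FROM t_playlist
                             ORDER BY pl_last_played DESC
                             LIMIT 50
                            ) recent
                      )
ORDER BY RAND()
LIMIT 1;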
I have a problem. I would like to get only 300 rows from a table without touching LIMIT, because I need LIMIT for pagination. Is this possible in MySQL?
My current query:
SELECT a.title, a.askprice, a.picture, a.description, a.userid, a.id
FROM mm_ads AS a WHERE a.category = 227 AND a.status = 1
ORDER BY id DESC LIMIT 40,20
Edit:
Simple explanation: I need to get the last 300 ads from the system, but I need to keep pagination because I don't want 300 rows listed on one page.
SELECT *
FROM (
SELECT a.title, a.askprice, a.picture, a.description, a.userid, a.id
FROM mm_ads AS a
WHERE a.category = 227 AND a.status = 1
ORDER BY id DESC
LIMIT 300
) t
ORDER BY t.id DESC
LIMIT 40,20
If the purpose is to speed up the query, then you can create a composite index:
ALTER TABLE `mm_ads`
ADD INDEX `mm_ads_index` (`category` ASC, `status` ASC, `id` DESC);
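You can verify that the index is actually picked up by running EXPLAIN against your original query (a quick check; the output will vary with your data):
EXPLAIN SELECT a.title, a.askprice, a.picture, a.description, a.userid, a.id
FROM mm_ads AS a WHERE a.category = 227 AND a.status = 1
ORDER BY id DESC LIMIT 40,20;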
Use SQL_CALC_FOUND_ROWS after your SELECT:
SELECT SQL_CALC_FOUND_ROWS *
EDIT:
And in PHP, run this to get the number of rows:
list($int_rows) = mysql_fetch_row(mysql_query("SELECT FOUND_ROWS()"));
This makes MySQL go through all matching rows and count them, but without fetching them all.
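Put together with your query, the flow looks like this (a sketch; note that FOUND_ROWS() returns the full match count, not one capped at 300, and that SQL_CALC_FOUND_ROWS is deprecated as of MySQL 8.0.17 in favour of a separate COUNT(*) query):
SELECT SQL_CALC_FOUND_ROWS a.title, a.askprice, a.picture, a.description, a.userid, a.id
FROM mm_ads AS a
WHERE a.category = 227 AND a.status = 1
ORDER BY id DESC LIMIT 40,20;

SELECT FOUND_ROWS();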
EDIT2:
I may have misunderstood your question; however, this is a common solution for pagination.
A simple solution is:
First, count only the number of results you need and use that count in your pagination (a sketch of the count query follows below),
then use LIMIT in your query to load the data for each page:
SELECT a.title, a.askprice, a.picture, a.description, a.userid, a.id
FROM mm_ads AS a WHERE a.category = 227 AND a.status = 1
ORDER BY id DESC LIMIT 40,20
It doesn't matter how large your database is; it only gives you the 20 results (although it still searches the full table).
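For the counting step mentioned above, a minimal sketch that also caps the pager at the 300 ads you want (the LEAST() wrapper is just one way to do it):
SELECT LEAST(COUNT(*), 300) AS pager_total
FROM mm_ads AS a
WHERE a.category = 227 AND a.status = 1;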
One more thing you can do is just fetch all 300 rows from the database, save them in an array, and then paginate over the array indexes.