Getting data from a table where one field is the maximum, in one query - PHP

I want to get data from my database where one field is the maximum. At the moment I do this in 2 queries. The thing is I don't want to overload the server, so I am looking for a way to do it in 1 query. Any suggestions?
As you can see, I am looking for the entry where the timestamp is the maximum.
$query = "SELECT MAX(TIMESTAMP) AS timestamp FROM `data`";
$run_query = mysql_query($query);
$highest = mysql_result($run_query,'0','timestamp');
$query = "SELECT * FROM `data` where `timestamp`='$highest'";
$run_query = mysql_query($query);
thanks in advance.

An alternative, if you can guarantee that there will never be two records with the same timestamp:
SELECT *
FROM data
ORDER BY timestamp DESC
LIMIT 1
If you can have duplicate timestamps, then the other answers with the sub-select are the better solution.

This will simply work as you desired.
SELECT *
FROM data
WHERE timestamp = (SELECT MAX(timestamp) FROM data)
Backticks are optional in this case. timestamp is a MySQL keyword, but it is still permitted as a column name even without escaping it with backticks.

SELECT * FROM `data` WHERE `timestamp` = (SELECT MAX(`timestamp`) FROM `data`)
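Since the question uses the legacy mysql_* API, here is a minimal sketch of running that combined query from PHP and iterating the result (mysqli or PDO would be the modern choice; the mysql_* functions were removed in PHP 7):
$query = "SELECT * FROM `data` WHERE `timestamp` = (SELECT MAX(`timestamp`) FROM `data`)";
$run_query = mysql_query($query);
// Loop in case several rows share the maximum timestamp
while ($row = mysql_fetch_assoc($run_query)) {
    // use $row['timestamp'] and the other columns here
}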

If your goal is to minimize server resources, the difference between one query and two queries is really minor. The engine needs to do pretty much the same work. The difference would be the slight overhead of compiling two queries rather than one.
Regardless of the solution, you will minimize server resources by building an index on data(timestamp).
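If that index does not exist yet, it can be added like this (a sketch using the table and column names from the question; the index name is illustrative):
ALTER TABLE `data` ADD INDEX idx_data_timestamp (`timestamp`);
With this index in place, both the MAX(timestamp) lookup and the subquery form can be answered from the index instead of scanning the whole table.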


How to optimize query to avoid timeout on mysql/php connection

I recently designed a PHP/MySQL website.
On the index page I used these SQL queries:
SELECT * FROM upload WHERE aid = '206' AND pin = '0' AND format like '%video%' ORDER BY created DESC
SELECT * FROM upload WHERE aid = '206' AND pin = '0' AND format like '%image%' ORDER BY created DESC
SELECT * FROM upload WHERE aid = '206' AND pin = '0' AND format like '%image%' ORDER BY created DESC
I do not have big data yet... maybe 1000 records in the DB.
But when I load the page, it takes 8-20 seconds to load.
When I remove those SQL queries, the load time is normal.
Please advise me on the best method to resolve this.
You are using 3 different SQL queries to get the data from the same table.
You can get the same result using only one SQL query.
The combined query would be as below.
SELECT * FROM `upload` WHERE `aid` = '206' AND `pin` = '0' AND (`format` like '%video%' OR `format` like '%image%') ORDER BY `created` DESC
There is also a chance that you have included many CSS, image, or JavaScript resources.
You can check your page's performance using the site below:
http://tools.pingdom.com/fpt/
Things that might be worth trying to speed things up:
add appropriate indexes to your table (perhaps aid should be indexed? see the sketch after this list)
combine these three queries into a single query (as per Ali Khanusiya's answer)
add an appropriate LIMIT clause to your query (e.g. LIMIT 1 or LIMIT 10) to fetch a subset of the results (you can use an OFFSET to paginate the results)
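A hedged sketch of the index and LIMIT suggestions, using the column names from the question (the index name and the 10-row limit are illustrative assumptions):
-- Index on the equality filters plus the ORDER BY column
ALTER TABLE `upload` ADD INDEX idx_upload_aid_pin_created (`aid`, `pin`, `created`);

-- Combined query, fetching only the first page of results
SELECT * FROM `upload`
WHERE `aid` = '206' AND `pin` = '0'
  AND (`format` LIKE '%video%' OR `format` LIKE '%image%')
ORDER BY `created` DESC
LIMIT 10;
Note that the leading-wildcard LIKE '%...%' filters cannot use an index themselves, but the index still narrows the rows by aid and pin and helps the sort on created.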

MYSQL from the 20th row to the last row [duplicate]

I would like to construct a query that displays all the results in a table, but is offset by 5 from the start of the table. As far as I can tell, MySQL's LIMIT requires a limit as well as an offset. Is there any way to do this?
From the MySQL Manual on LIMIT:
To retrieve all rows from a certain offset up to the end of the result set, you can use some large number for the second parameter. This statement retrieves all rows from the 96th row to the last:
SELECT * FROM tbl LIMIT 95, 18446744073709551615;
As you mentioned, LIMIT is required, so you need to use the biggest limit possible, which is 18446744073709551615 (the maximum of an unsigned BIGINT):
SELECT * FROM somewhere LIMIT 18446744073709551610 OFFSET 5
As noted in other answers, MySQL suggests using 18446744073709551615 as the number of records in the limit, but consider this: What would you do if you got 18,446,744,073,709,551,615 records back? In fact, what would you do if you got 1,000,000,000 records?
Maybe you do want more than one billion records, but my point is that there is some limit on the number you want, and it is less than 18 quintillion. For the sake of stability, optimization, and possibly usability, I would suggest putting some meaningful limit on the query. This would also reduce confusion for anyone who has never seen that magical looking number, and have the added benefit of communicating at least how many records you are willing to handle at once.
If you really must get all 18 quintillion records from your database, maybe what you really want is to grab them in increments of 100 million and loop 184 billion times.
Another approach would be to select an autoincremented column and then filter it using HAVING.
SET @a := 0;
select @a := @a + 1 AS counter, table.* FROM table
HAVING counter > 4
But I would probably stick with the high limit approach.
As others mentioned, from the MySQL manual: in order to achieve that, you can use the maximum value of an unsigned BIGINT, which is this awful number (18446744073709551615). But to make it a little bit less messy you can use the tilde "~" bitwise operator:
LIMIT 95, ~0
It works as a bitwise negation: the result of "~0" is 18446744073709551615.
You can use a MySQL statement with LIMIT:
START TRANSACTION;
SET @my_offset = 5;
SET @rows = (SELECT COUNT(*) FROM my_table);
PREPARE statement FROM 'SELECT * FROM my_table LIMIT ? OFFSET ?';
EXECUTE statement USING @rows, @my_offset;
COMMIT;
Tested in MySQL 5.5.44. Thus, we can avoid the insertion of the number 18446744073709551615.
Note: the transaction makes sure that the variable @rows agrees with the table considered in the execution of the statement.
I ran into a very similar issue when practicing LC#1321, in which I have to select all the dates but the first 6 dates are skipped.
I achieved this in MySQL with the help of ROW_NUMBER() window function and subquery. For example, the following query returns all the results with the first five rows skipped:
SELECT
fieldname1,
fieldname2
FROM(
SELECT
*,
ROW_NUMBER() OVER() row_num
FROM
mytable
) tmp
WHERE
row_num > 5;
You may need to add some more logic in the subquery, especially in OVER(), to fit your needs. In addition, the RANK()/DENSE_RANK() window functions may be used instead of ROW_NUMBER() depending on your real offset logic.
Reference:
MySQL 8.0 Reference Manual - ROW_NUMBER()
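To illustrate the point about OVER() above, here is a hedged variant that defines the row order explicitly; created_at is a hypothetical column standing in for whatever column determines your ordering:
SELECT
    fieldname1,
    fieldname2
FROM (
    SELECT
        *,
        ROW_NUMBER() OVER (ORDER BY created_at) AS row_num
    FROM
        mytable
) tmp
WHERE
    row_num > 5;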
Just today I was reading about the best way to get huge amounts of data (more than a million rows) from a MySQL table. One way is, as suggested, using LIMIT x,y where x is the offset and y the maximum number of rows you want returned. However, as I found out, it isn't the most efficient way to do so. If you have an autoincrement column, you can just as easily use a SELECT statement with a WHERE clause saying from which record you'd like to start.
For example,
SELECT * FROM table_name WHERE id > x;
It seems that MySQL fetches all results when you use LIMIT and then only shows you the records past the offset: not the best for performance.
Source: an answer to this question on the MySQL Forums. Just take note that the question is about 6 years old.
I know that this is old, but I didn't see a similar response, so this is the solution I would use.
First, I would execute a count query on the table to see how many records exist. This query is fast and normally the execution time is negligible. Something like:
SELECT COUNT(*) FROM table_name;
Then I would build my query using the result I got from count as my limit (since that is the maximum number of rows the table could possibly return). Something like:
SELECT * FROM table_name LIMIT count_result OFFSET desired_offset;
Or possibly something like:
SELECT * FROM table_name LIMIT desired_offset, count_result;
Of course, if necessary, you could subtract desired_offset from count_result to get an actual, accurate value to supply as the limit. Passing the "18446744073709551610" value just doesn't make sense if I can actually determine an appropriate limit to provide.
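A minimal PHP sketch of that approach, assuming a mysqli connection in $link, a hypothetical table my_table, and an offset of 5:
// Step 1: count the rows, so the count can serve as the LIMIT
$count_result = mysqli_query($link, "SELECT COUNT(*) AS total FROM my_table");
$total = (int) mysqli_fetch_assoc($count_result)['total'];

// Step 2: fetch everything from the desired offset to the end
$offset = 5;
$result = mysqli_query($link, "SELECT * FROM my_table LIMIT $total OFFSET $offset");
while ($row = mysqli_fetch_assoc($result)) {
    // process $row
}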
WHERE .... AND id > <YOUROFFSET>
id can be any autoincremented or unique numerical column you have...

How to improve Mysql database performance without changing the db structure

I have a database that is already in use and I have to improve the performance of the system that's using this database.
There are 2 major queries running about 1000 times in a loop, and these queries have inner joins to 3 other tables each. This in turn is making the system very slow.
I actually tried to remove the queries from the loop, fetch all the data only once, and process it in PHP. But this puts too much load on the memory (RAM), and the system hangs if 2 or more clients try to use it.
There is a lot of data in the tables even after removing the expired data.
I have attached the query below.
Can anyone help me with this issue ?
select * from inventory
where (region_id = 38 or region_id = -1)
and (tour_opp_id = 410 or tour_opp_id = -1)
and room_plan_id = 141 and meal_plan_id = 1 and bed_type_id = 1 and hotel_id = 1059
and FIND_IN_SET(supplier_code, 'QOA,QTE,QM,TEST,TEST1,MQE1,MQE3,PERR,QKT')
and ( ('2014-11-14' between from_date and to_date) )
order by hotel_id desc ,supplier_code desc, region_id desc,tour_opp_id desc,inventory.inventory_id desc
SELECT * ,pinfo.fri as pi_day_fri,pinfoadd.fri as pa_day_fri,pinfochld.fri as pc_day_fri
FROM `profit_markup`
inner join profit_markup_info as pinfo on pinfo.profit_id = profit_markup.profit_markup_id
inner join profit_markup_add_info as pinfoadd on pinfoadd.profit_id = profit_markup.profit_markup_id
inner join profit_markup_child_info as pinfochld on pinfochld.profit_id = profit_markup.profit_markup_id
where profit_markup.hotel_id = 1059 and (`booking_channel` = 1 or `booking_channel` = 2)
and (`rate_region` = -1 or `rate_region` = 128)
and ( ( period_from <= '2014-11-14' and period_to >= '2014-11-14' ) )
ORDER BY profit_markup.hotel_id DESC,supplier_code desc, rate_region desc,operators_list desc, profit_markup_id DESC
Since we have not seen your SHOW CREATE TABLE output and EXPLAIN EXTENDED plan, it is hard to give you one definitive answer.
But generally speaking, in regard to your first query (which I re-wrote below):
SELECT
hotel_id, supplier_code, region_id, tour_opp_id, inventory_id
FROM
inventory
WHERE
region_id IN (38, -1)
AND tour_opp_id IN (410, -1)
AND room_plan_id = 141
AND meal_plan_id = 1
AND bed_type_id = 1
AND hotel_id = 1059
AND supplier_code IN ('QOA', 'QTE', 'QM', 'TEST', 'TEST1', 'MQE1', 'MQE3', 'PERR', 'QKT')
AND ('2014-11-14' BETWEEN from_date AND to_date )
ORDER BY
hotel_id DESC, supplier_code DESC, region_id DESC, tour_opp_id DESC, inventory_id DESC
Do not use * to get all the columns. You should list only the columns that you really need. Using * is just a lazy way of writing a query; limiting the columns will limit the amount of data that is selected.
How often are the records in the inventory table updated/inserted/deleted? If not too often, then you can consider using SQL_CACHE. However, caching a query will cause you problems if you use it and the inventory table is updated very often. In addition, to use the query cache you must check the value of query_cache_type on your server: SHOW GLOBAL VARIABLES LIKE 'query_cache_type';. If this is set to 0 then the cache feature is disabled and SQL_CACHE will be ignored. If it is set to 1 then the server will cache all queries unless you tell it not to using SQL_NO_CACHE. If the option is set to 2 then MySQL will cache the query only where the SQL_CACHE clause is used. Here is the documentation about query_cache_type.
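For example, a sketch only (SQL_CACHE has an effect only when query_cache_type is set to 2, and the query cache was removed entirely in MySQL 8.0):
SELECT SQL_CACHE hotel_id, supplier_code, region_id
FROM inventory
WHERE hotel_id = 1059;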
If you have an index on those following column in this order it will help you (hotel_id, supplier_code, region_id, tour_opp_id, inventory_id)
ALTER TABLE inventory
ADD INDEX (hotel_id, supplier_code, region_id, tour_opp_id, inventory_id);
If possible, increase sort_buffer_size on your server, as most likely your issue here is that you are doing too much sorting.
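A hedged example of checking the current value and raising it for the current session (the 4 MB figure is illustrative; the right size depends on your workload and available RAM):
SHOW VARIABLES LIKE 'sort_buffer_size';
SET SESSION sort_buffer_size = 4 * 1024 * 1024; -- 4 MB, for this session only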
As for the second query (which I re-wrote below):
SELECT
*, pinfo.fri as pi_day_fri,
pinfoadd.fri as pa_day_fri,
pinfochld.fri as pc_day_fri
FROM
profit_markup
INNER JOIN
profit_markup_info AS pinfo ON pinfo.profit_id = profit_markup.profit_markup_id
INNER JOIN
profit_markup_add_info AS pinfoadd ON pinfoadd.profit_id = profit_markup.profit_markup_id
INNER JOIN
profit_markup_child_info AS pinfochld ON pinfochld.profit_id = profit_markup.profit_markup_id
WHERE
profit_markup.hotel_id = 1059
AND booking_channel IN (1, 2)
AND rate_region IN (-1, 128)
AND period_from <= '2014-11-14'
AND period_to >= '2014-11-14'
ORDER BY
profit_markup.hotel_id DESC, supplier_code DESC, rate_region DESC,
operators_list DESC, profit_markup_id DESC
Again, eliminate the use of * in your query.
Make sure that the following columns have the same type/collation and the same size: pinfo.profit_id, profit_markup.profit_markup_id, pinfoadd.profit_id, pinfochld.profit_id, and each one has to have an index in its table. If the columns have different types then MySQL will have to convert the data every time to join the records; even if you have an index, it will be slower. Also, if those columns are character types (i.e. VARCHAR()), make sure they are CHAR() with a collation of latin1_general_ci, as this will be faster for finding IDs, but if you are using INT() it is even better.
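A hedged sketch of aligning one of the join columns and indexing it (this assumes the IDs are numeric; verify the real types with SHOW CREATE TABLE before running anything like this):
ALTER TABLE profit_markup_info
    MODIFY profit_id INT UNSIGNED NOT NULL,
    ADD INDEX idx_profit_id (profit_id);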
Use the 3rd and 4th trick I listed for the previous query
Try using STRAIGHT_JOIN ("you must know what you're doing here or it will bite you!"). Here is a good thread about this: When to use STRAIGHT_JOIN with MySQL
I hope this helps.
For the first query, I am not sure if you can do much (assuming you have already indexed the fields you are ordering by) apart from replacing the * with column names (don't expect this to increase performance drastically).
For the second query, before you go through the loop and put in selection arguments, you could create a view with all the tables joined and ordered, then make a prepared statement that selects from the view and bind the arguments in the loop.
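A rough sketch of that idea, with hypothetical names (v_profit_markup, $loopItems) standing in for the real view and loop data; the column list and filters would come from the second query above, and mysqli_stmt_get_result() requires the mysqlnd driver:
-- One-time setup: a view with the joins and ordering baked in
CREATE VIEW v_profit_markup AS
SELECT pm.*, pinfo.fri AS pi_day_fri
FROM profit_markup pm
INNER JOIN profit_markup_info pinfo ON pinfo.profit_id = pm.profit_markup_id
ORDER BY pm.hotel_id DESC;

Then, in PHP, prepare once and execute many times inside the loop:
$stmt = mysqli_prepare($link, "SELECT * FROM v_profit_markup WHERE hotel_id = ? AND rate_region IN (-1, ?)");
mysqli_stmt_bind_param($stmt, "ii", $hotelId, $rateRegion); // bound by reference
foreach ($loopItems as $item) {
    $hotelId = $item['hotel_id'];
    $rateRegion = $item['rate_region'];
    mysqli_stmt_execute($stmt);
    $result = mysqli_stmt_get_result($stmt);
    while ($row = mysqli_fetch_assoc($result)) {
        // process $row
    }
}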
Also, if your php server and the database server are in two different places, it is better if you did the selection through a stored procedure in the database.
(If nothing works out, then memcache is the way to go... Although I have personally never done this)
What you need here is to increase query performance, not database performance in general.
For both queries, first check whether indexes are available on the WHERE and ON (join) clause columns; if an index is missing, you have to add one to improve query performance.
Check the explain plan before creating an index.
If possible, show us the explain plan of both queries; that will help.

Join two tables, then Order By date, BUT combining both tables

Alright, I'm trying to figure out why I can't understand how to do this well...
I have two tables:
invoices:
id
userID
amount
date
payments:
id
userID
amount
date
So, the goal here is to join both tables, where the userID matches whatever I want it to be - and then return everything ordered by date (most recent at the top). However, because there is a date field in each of the tables, I'm not sure how MySQL will handle things... will it sort by both dates automatically? Here's what I was thinking...
"SELECT DISTINCT *
FROM invoices,payments
WHERE {$userID} = invoice.userID
OR {$userID} = payments.userID
ORDER BY date DESC";
But it's starting to become clear to me that maybe this isn't even the right use of a join... maybe I need to just get all the data from each table alone, then try to sort it somehow with PHP? If that's the better method, what's a good way to do this type of date sort while keeping all the row data intact?
I should add, the time inside the Unix timestamp (that's how "date" is stored) is NOT negligible - it should sort by the date and time.
Thanks all...
If the columns of both tables are the same, you can use a UNION
SELECT X.*
FROM ( SELECT `id`,
`userID`,
'INVOICE' AS PTYPE,
`amount`,
`date`
FROM `invoices`
WHERE {$userID} = userID
UNION
SELECT `id`,
`userID`,
'PAYMENT' AS PTYPE,
`amount`,
`date`
FROM `payments`
WHERE {$userID} = userID
) X
ORDER BY X.`date` DESC
EDIT
Read the relevant section of the MySQL manual on UNION. There are other ways of phrasing this, but this is my preferred style - it should be clear to anybody reading that the ORDER BY clause applies to the result of both sides of the UNION. A carelessly written UNION - even with an ORDER BY - may still leave the final result set in an indeterminate order.
The purpose of PTYPE is that this query returns an extra column called PTYPE, which indicates whether each individual row is an INVOICE or a PAYMENT, i.e. which of the two tables it comes from. It's not mandatory, but it can often be useful within a union.
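For instance, a hedged PHP sketch of consuming that extra column (assuming a mysqli connection in $link and the union query above in $sql):
$result = mysqli_query($link, $sql);
while ($row = mysqli_fetch_assoc($result)) {
    if ($row['PTYPE'] === 'INVOICE') {
        // render as an invoice line
    } else {
        // render as a payment line
    }
}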
Because you have two identical fields named date, MySQL will not know which one you're trying to order by.
"SELECT DISTINCT *
FROM invoices,payments
WHERE {$userID} = invoice.userID
OR {$userID} = payments.userID
ORDER BY invoices.date, payments.date DESC";
This would sort on the invoice date, then the payment date - if that's what you are trying to find out
If your data type is DATE, TIMESTAMP, or anything related, the DBMS will order it properly - if that was what you asked.
But if the data type is a string, even when dates are stored in it, it will not sort the way you want.

How to use MySQL Found_Rows() in PHP?

I try to avoid doing COUNT() because of performance issues (i.e. SELECT COUNT() FROM Users).
If I run the followings in phpMyAdmin, it is ok:
SELECT SQL_CALC_FOUND_ROWS * FROM Users;
SELECT FOUND_ROWS();
It will return # of rows. i.e. # of Users.
However, if I run it in PHP, I cannot do this:
$query = 'SELECT SQL_CALC_FOUND_ROWS * FROM Users;
SELECT FOUND_ROWS(); ';
mysql_query($query);
It seems like PHP doesn't like having two queries passed in. So, how can I do that?
SQL_CALC_FOUND_ROWS is only useful if you're using a LIMIT clause, but still want to know how many rows would've been found without the LIMIT.
Think of how this works:
SELECT SQL_CALC_FOUND_ROWS * FROM Users;
You're forcing the database to retrieve/parse ALL the data in the table, and then you throw it away. Even if you aren't going to retrieve any of the rows, the DB server will still start pulling actual data from the disk on the assumption that you will want that data.
In human terms, you bought the entire contents of the super grocery store, but threw away everything except the pack of gum from the stand by the cashier.
Whereas, doing:
SELECT count(*) FROM users;
lets the DB engine know that while you want to know how many rows there are, you couldn't care less about the actual data. On most any intelligent DBMS, the engine can retrieve this count from the table's metadata, or a simple run through the table's primary key index, without ever touching the on-disk row data.
It's two queries:
$query = 'SELECT SQL_CALC_FOUND_ROWS * FROM Users';
mysql_query($query);
$query = 'SELECT FOUND_ROWS()';
mysql_query($query);
PHP can only issue a single query per mysql_query call
It's a common misconception that SQL_CALC_FOUND_ROWS performs better than COUNT(). See this comparison from the Percona guys: http://www.mysqlperformanceblog.com/2007/08/28/to-sql_calc_found_rows-or-not-to-sql_calc_found_rows/
To answer your question: only one query is allowed per mysql_query call, as described in the manual: mysql_query() sends a unique query (multiple queries are not supported).
Multiple queries are supported when using ext/mysqli as your MySQL extension:
http://www.php.net/manual/en/mysqli.multi-query.php
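A hedged sketch using mysqli_multi_query to send both statements in one call (assuming a mysqli connection in $link; each result set must be consumed and advanced with next_result before issuing further queries):
mysqli_multi_query($link, "SELECT SQL_CALC_FOUND_ROWS * FROM Users LIMIT 10; SELECT FOUND_ROWS()");
// First result set: the rows themselves
$rows = mysqli_store_result($link);
// ... consume $rows here ...
mysqli_free_result($rows);
// Second result set: the total row count ignoring the LIMIT
mysqli_next_result($link);
$countResult = mysqli_store_result($link);
$total = mysqli_fetch_row($countResult)[0];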
Only this code worked for me, so I want to share it with you.
$Result=mysqli_query($i_link,"SELECT SQL_CALC_FOUND_ROWS id From users LIMIT 10");
$NORResult=mysqli_query($i_link,"Select FOUND_ROWS()");
$NORRow=mysqli_fetch_array($NORResult);
$NOR=$NORRow["FOUND_ROWS()"];
echo $NOR;
Use 'union' and empty columns:
$sql="(select sql_calc_found_rows tb.*, tb1.title
from orders tb
left join goods tb1 on tb.goods_id=tb1.id
where {$where}
order by created desc
limit {$offset}, {$page_size})
union
(select found_rows(), '', '', '', '', '', '', '', '', '')
";
$rs=$db->query($sql)->result_array();
$total=array_pop($rs);
$total=$total['id'];
This is an easy way & works for me :
$query = "
SELECT SQL_CALC_FOUND_ROWS *
FROM tb1
LIMIT 5";
$result = mysqli_query($link, $query);
$query = "SELECT FOUND_ROWS() AS count";
$result2 = mysqli_query($link, $query);
$row = mysqli_fetch_array($result2);
echo $row['count'];
Do you really think that selecting ALL rows from the table is faster than counting them?
MyISAM stores the number of records in the table's metadata, so SELECT COUNT(*) FROM table doesn't have to access the data at all.
