I have a large amount of data from a data-logging device stored in a MySQL DB that I want to put on a graph. I want to show a month's worth of data; the logging is per second.
I'm using PHP and the Google Charts library to draw the graph as an image client-side.
There is no point trying to display 2,628,000 points on a graph on a screen, so I want an SQL query that gives an averaged data point for, say, each hour (3600 rows down to 1) instead of each second, unless a value is out of bounds.
The whole point of the graph is to show whether the value has gone out of bounds and when it did.
The current SQL query to get the data required for last month, for example, is below. The first problem is that PHP hits its memory limit before it's able to return the data:
SELECT Tms, Hz FROM log WHERE Tms >= ".$start." AND Tms <=".$finish." ORDER BY Tms ASC
The average value should be around 60, for example; the upper limit is 61.5 and the lower limit is 58.5. Any value outside these limits should be returned as-is; otherwise the hour's worth of data should be returned as an average for that hour.
EDIT: To answer the comments:
DB structure is:
ID - double - AUTO_INCREMENT
Tms - timestamp
Hz - float
Example Data is:
ID | Tms | Hz
1 | 1559347082 | 59.91
2 | 1559347083 | 59.98
3 | 1559347084 | 60.53
4 | 1559347085 | 62.03
5 | 1559347086 | 61.11
6 | 1559347087 | 60.93
7 | 1559347088 | 60.88
.......
3606 | 1559350686 | 59.99
The expected result would be an array in which all of the values within an hour are averaged, unless a value is out of bounds.
So for the data above, items 1,2,3 would be returned with the average Tms: 1559347083 and average Hz: 60.14, but the next value in the array of results would be Tms: 1559347085 and Hz: 62.03.
Results:
Tms: 1559347083 | Hz: 60.14
Tms: 1559347085 | Hz: 62.03
Tms: 1559348886 | Hz: 60.17
The maximum number of points to be averaged or grouped together would be 3600 rows = 1 hour, so the graph still shows some movement.
One of the current errors when trying to select a large amount of data:
Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 20480 bytes)
This is happening because the result is being placed into an array so I can add the bound values, giving a clear line on the graph:
while ($row = $result->fetch_assoc()) {
    // timestamp, lower bound, measured value, upper bound
    $dataPoint = array($row['Tms'], '58.5', $row[$graph], '61.5');
    // ....
    array_push($dataPoints, $dataPoint);
}
This array ($dataPoints) then gets passed to a function that outputs it either as JSON or as CSV using fputcsv.
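The output function itself isn't shown in the question; purely as an illustration, the CSV branch might look something like the sketch below (the function name, the header row, and the php://output target are assumptions, not code from the question):

function outputAsCsv(array $dataPoints)
{
    // Stream straight to the HTTP response instead of building a string.
    $out = fopen('php://output', 'w');
    fputcsv($out, array('Tms', 'Lower', 'Hz', 'Upper')); // header row
    foreach ($dataPoints as $dataPoint) {
        fputcsv($out, $dataPoint); // one row per data point
    }
    fclose($out);
}

Note that if fputcsv were called inside the fetch loop above instead of after building $dataPoints, the full month would never have to sit in memory at once, which is one way around the memory limit error quoted earlier.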
It is not practical, or useful, to have one query that gives both hourly averages and individual out-of-bounds values; this requires two queries. So let's start with the first, the hourly average:
SELECT
COUNT(ID) AS CountID,
DATE(Tms) AS DateTms,
HOUR(Tms) AS HourTms,
AVG(Hz) AS AvgHz
FROM
log
WHERE
Tms >= '2019-01-01 12:00:00' AND
Tms <= '2019-12-12 12:00:00'
GROUP BY
DATE(Tms), HOUR(Tms)
ORDER BY
DateTms ASC, HourTms ASC
I've put real dates in the WHERE conditions, instead of the undocumented variables $start and $finish, but these can, of course, be replaced. I've added a counter, because it is always useful, and finally, because we report for each hour of each day, I have added a date. The GROUP BY DATE(Tms), HOUR(Tms) does the grouping by whole hours; grouping by HOUR(Tms) alone would lump together the same hour of different days.
The second query is about the out-of-bounds values. It is simply:
SELECT
ID,
Tms,
Hz
FROM
log
WHERE
Tms >= '2019-01-01 12:00:00' AND
Tms <= '2019-12-12 12:00:00' AND
(Hz < 58.5 OR Hz > 61.5)
ORDER BY
Tms ASC
You can easily combine the results of these two queries into one array with PHP; a minimal sketch follows.
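As an illustration only, assuming both result sets have already been fetched into $hourlyRows and $outOfBoundsRows with fetch_all(MYSQLI_ASSOC) (those variable names, and the choice of the middle of the hour as the representative timestamp, are assumptions):

$points = array();

foreach ($hourlyRows as $row) {
    // Represent each hourly average by the middle of its hour.
    $ts = strtotime($row['DateTms'] . ' ' . $row['HourTms'] . ':30:00');
    $points[] = array('Tms' => $ts, 'Hz' => (float) $row['AvgHz']);
}

foreach ($outOfBoundsRows as $row) {
    // Tms comes back as 'YYYY-MM-DD HH:MM:SS' for a TIMESTAMP column;
    // adjust this if you store raw Unix timestamps instead.
    $points[] = array('Tms' => strtotime($row['Tms']), 'Hz' => (float) $row['Hz']);
}

// Sort the combined array chronologically before graphing it.
usort($points, function ($a, $b) {
    return $a['Tms'] <=> $b['Tms'];
});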
However, I am worried that the last query might produce too much data when there are too many out-of-bounds values - and that's probably what you're saying in your later addition to the question. To solve this you could work with an hourly average of the out-of-bounds values. You would have to use two queries for this: one for values below the lower limit and one for those above the upper limit. I'll show the first one here:
SELECT
COUNT(ID) AS CountID,
DATE(Tms) AS DateTms,
HOUR(Tms) AS HourTms,
AVG(Hz) AS AvgHz
FROM
log
WHERE
Tms >= '2019-01-01 12:00:00' AND
Tms <= '2019-12-12 12:00:00' AND
Hz < 58.5
GROUP BY
DATE(Tms), HOUR(Tms)
ORDER BY
DateTms ASC, HourTms ASC
This looks very much like the first query, which is a good thing. The only addition is the range limiting of the Hz value. The other query simply has Hz > 61.5 instead. The results of the three queries can be collected in an array and displayed in a graph.
The three queries could be forced into one query, but I don't see the advantage of that. With three separate queries you could, for instance, write a PHP function that does the query and gets the results, and all you need to vary, using function parameters, is range limiting and the start/finish times.
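As a rough sketch of such a function (the function name, the mysqli-style connection, and the way the Hz condition is passed in are all illustrative choices, not code from the question):

function hourlyAverages(mysqli $db, $start, $finish, $hzCondition = null)
{
    // $hzCondition is appended verbatim, so only pass fixed literals
    // such as '< 58.5' or '> 61.5' here - never user input.
    $sql = "SELECT COUNT(ID) AS CountID,
                   DATE(Tms) AS DateTms,
                   HOUR(Tms) AS HourTms,
                   AVG(Hz)   AS AvgHz
            FROM log
            WHERE Tms >= ? AND Tms <= ?"
         . ($hzCondition !== null ? " AND Hz " . $hzCondition : "")
         . " GROUP BY DATE(Tms), HOUR(Tms)
             ORDER BY DateTms, HourTms";

    $stmt = $db->prepare($sql);
    $stmt->bind_param('ss', $start, $finish);
    $stmt->execute();

    return $stmt->get_result()->fetch_all(MYSQLI_ASSOC); // get_result() needs the mysqlnd driver
}

The three result sets would then simply be:

$all  = hourlyAverages($db, '2019-01-01 12:00:00', '2019-12-12 12:00:00');
$low  = hourlyAverages($db, '2019-01-01 12:00:00', '2019-12-12 12:00:00', '< 58.5');
$high = hourlyAverages($db, '2019-01-01 12:00:00', '2019-12-12 12:00:00', '> 61.5');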
Finally, a bit about your database: I see you use a double for the ID; that should probably be an integer. Also, don't forget to put indexes on Tms and Hz, otherwise your queries might be very slow.
I'm not sure how to ask this properly as I'm a little green at this, and since I can't phrase it properly I haven't been able to Google the answer.
Backstory: I manage an apartment complex. Every apartment has a digital electrical meter. Every day I can download a CSV file of all units and their readings.
Using PHP and SQL I can pull the UNIT # from a table called tenants. Then I can reference the specific unit # in a search in my browser from a specific date and it will automatically calculate the usage for the month (or whatever range I select).
I have that part down! What I'm trying to do now is create a one-button pull where I can see all usage from all tenants in one easy table.
Right now the database looks like this
|UNIT|KWH|DATE |
|101 |100|01/01/2022|
|102 |80 |01/01/2022|
|103 |110|01/01/2022|
|104 |108|01/01/2022|
|101 |110|01/02/2022|
|102 |90 |01/02/2022|
|103 |125|01/02/2022|
|104 |128|01/01/2022|
ETC
It just keeps growing as I import the CSV file daily into the database
What I want to be able to quickly see is:
|UNIT|TOTAL KWH|DATE RANGE             |
|101 |10 |01/01/2022 - 01/30/2022|
|102 |10 |01/01/2022 - 01/30/2022|
|103 |15 |01/01/2022 - 01/30/2022|
|104 |20 |01/01/2022 - 01/30/2022|
The below code gives me the specific unit
SELECT Max(KWH)-Min(KWH) AS TOTALKWH,UNIT AS UNIT
FROM testdb
WHERE UNIT = 'Unit_220'
AND Date >='11/01/2022' AND Date <='11/30/2022'
I'm stuck on how to select all units and not just a specific unit. Any thoughts how to do this easily? Or perhaps a better way than I am currently?
You can achieve this with MySQL alone:
SELECT
tb1.UNIT,
(
tb1.KWH -
(
SELECT
tb3.KWH
FROM
kwh AS tb3
WHERE
tb3.DATE = DATE_ADD(tb1.DATE, INTERVAL -1 MONTH) AND tb3.UNIT = tb1.UNIT
)
) AS "TOTAL KWH",
CONCAT(
DATE_ADD(tb1.DATE, INTERVAL -1 MONTH),
" ~ ",
DATE_ADD(tb1.DATE, INTERVAL -1 DAY)
) AS "DATE RANGE"
FROM
kwh AS tb1
WHERE
(
SELECT
COUNT(tb2.DATE)
FROM
kwh AS tb2
WHERE
tb2.DATE < tb1.DATE
) >= 1
ORDER BY
tb1.DATE;
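If the meter readings are cumulative per unit (which is what the Max(KWH)-Min(KWH) query in the question implies), another option is simply to keep that approach and group by UNIT. A sketch in PHP, with an assumed $mysqli connection and the table/column names from the question:

$sql = "SELECT UNIT,
               MAX(KWH) - MIN(KWH) AS TOTALKWH
        FROM testdb
        WHERE Date >= ? AND Date <= ?
        GROUP BY UNIT
        ORDER BY UNIT";

$stmt   = $mysqli->prepare($sql);
$start  = '11/01/2022';
$finish = '11/30/2022';
$stmt->bind_param('ss', $start, $finish);
$stmt->execute();

foreach ($stmt->get_result() as $row) {
    echo $row['UNIT'] . ': ' . $row['TOTALKWH'] . "\n";
}

The date range column from the desired output is then just the $start - $finish pair you queried with. The WHERE clause mirrors the question's query; if Date is a real DATE column, use 'YYYY-MM-DD' literals ('2022-11-01', '2022-11-30') instead.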
I created a system to input results from a school basketball tournament. The idea is that after each game the operators will input the result, which the system fetches and saves in the DB in a format like the one below:
Date | Team | Score 1Q | Score 2Q | Score 3Q | Score 4Q | Score OT | Final Score | W | L | Won over Team | Lost to Team | Regular Season? | Finals?
I created a PHP page that calculates many stats from the table above, like Total Wins, Win%, Avg Points, Avg Points per Quarter, % Turn-Around Games when losing at Half Time or 3Q, % Finals games disputed, Times became champions, etc., and many more deep stats.
But I was thinking of creating a View with this information calculated in the DB in real time, instead of having the script handle it.
But how can I turn the selects needed on the first table into a working second table with all the calculations done whenever we make the selection?
Thanks
#decio, I think your idea about creating a view to calculate those stats is not a bad one. You might be able to do so with something similar to the following SQL script:
CREATE VIEW result_stats_view AS SELECT SUM(W) as total_wins, SUM(L) as total_losses FROM precalculate_stats_table_name;
This shows the total wins and losses for the season, but you probably get the idea. Check out MySQL aggregate functions (like average, sum, etc.) here:
https://dev.mysql.com/doc/refman/8.0/en/aggregate-functions.html
Once you have your calculations added to the view, you can simply do a query like this to get your calculated data:
SELECT * from result_stats_view
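Since the stats are already rendered by a PHP page, reading the view from there is just an ordinary query; a small sketch (the $mysqli connection variable is assumed, and the column names follow the example view above):

$result = $mysqli->query('SELECT * FROM result_stats_view');
$stats  = $result->fetch_assoc(); // this view returns a single summary row

echo 'Total wins: '   . $stats['total_wins']   . "\n";
echo 'Total losses: ' . $stats['total_losses'] . "\n";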
So I have a table that looks like this:
Person Product Date Quantity
1 A 1/11/2014 1
2 A 1/11/2014 2
1 A 1/20/2014 2
3 A 1/21/2014 1
3 B 1/21/2014 1
1 A 1/25/2014 1
I want to find the Count of Quantity where Product is A and Person has a Count > 1 WITHIN ANY SLIDING 30 DAY RANGE. Another key is that once two records meet the criteria, they should not add to the count again. For example, Person 1 will have a count of 3 for 1/11 and 1/20, but will not have a count of 3 for 1/20 and 1/25. Person 2 will have a count of 2. Person 3 will not show up in the results, because the second product is B. This query will run within a specific date range also (e.g, 1/1/2014 - 10/27/2014).
My product is written in MySQL and PHP and I would prefer to do this exclusively in MySQL, but this seems more like an OLAP problem. I greatly appreciate any guidance.
Another key is that once two records meet the criteria, they should not add to the count again.
This is not relational. In order for this to be meaningful, we have to define the order in which records are evaluated. While SQL does have ORDER BY, that's for display purposes only. It does not affect the order in which the query is computed. The order of evaluation is not meant to matter.
I do not believe this can be expressed as a SELECT query at all. If I am correct, that leaves you with procedural SQL (stored procedures) or a non-SQL language.
If you're willing to drop this requirement (and perhaps implement it in post-processing, see below), this becomes doable. Start with a view of all the relevant date ranges:
CREATE VIEW date_ranges(
start_date, -- DATE
end_date -- DATE
) AS
SELECT DISTINCT date, DATE_ADD(date, INTERVAL 30 day)
FROM your_table;
Now, create a view of relevant counts:
CREATE VIEW product_counts(
person, -- INTEGER REFERENCES your_table(person)
count, -- INTEGER
start_date, -- DATE
end_date -- DATE
) AS
SELECT y.person,
sum(y.quantity),
r.start_date,
r.end_date
FROM date_ranges r
JOIN your_table y
ON y.date BETWEEN r.start_date AND r.end_date
GROUP BY y.person, r.start_date, r.end_date
HAVING sum(y.quantity) > 1;
For post-processing, you need to look at each row in the product_counts view and look up the purchase orders (rows of your_table) which correspond to it. Check whether you've seen any of those orders before (using a hash set), and if so, exclude them from consideration, reducing the count of the current item and possibly eliminating it entirely. This is best done in a procedural language other than SQL.
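A rough sketch of that post-processing in PHP, assuming each row of your_table has a unique id to key on; $db is an assumed database connection, and fetchProductCounts() and fetchOrdersFor() are hypothetical helpers standing in for SELECTs against the product_counts view and your_table:

$seenOrderIds = array(); // hash set of order ids already counted
$results      = array();

// Iterate the windows in chronological order so earlier windows claim
// their orders first (this is where the evaluation order, which SQL
// cannot express, is made explicit).
foreach (fetchProductCounts($db) as $window) {
    $orders = fetchOrdersFor($db, $window['person'], $window['start_date'], $window['end_date']);

    $count = 0;
    foreach ($orders as $order) {
        if (isset($seenOrderIds[$order['id']])) {
            continue; // already counted in an earlier window
        }
        $seenOrderIds[$order['id']] = true;
        $count += $order['quantity'];
    }

    if ($count > 1) { // keep the "count > 1" rule after deduplication
        $results[] = array(
            'person' => $window['person'],
            'count'  => $count,
            'start'  => $window['start_date'],
            'end'    => $window['end_date'],
        );
    }
}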
So I have a single table inside which I have a score system for points. It looks something along this line:
Columns:
ID Name Date Points
1 Peter 2014-07-15 5
2 John 2014-07-15 6
3 Bill 2014-07-15 3
and so on...
Every day, the new results are put into the table with the total number of points accumulated; however, in order to keep historic values, the results are put into new rows. So on 2014-07-16, the table will look like this:
ID Name Date Points
1 Peter 2014-07-15 5
2 John 2014-07-15 6
3 Bill 2014-07-15 3
4 Peter 2014-07-16 11
5 John 2014-07-16 12
6 Bill 2014-07-16 3
However, sometimes a player doesn't take part for the whole day and doesn't get any points; he will still be added, but the points will remain the same (shown here by the case of Bill).
My question is how to count the number of each type of player: active (Peter and John, i.e. when the points value changes from one date to another) and inactive (Bill, i.e. when the points value stays the same).
I have managed to get this query to select only players who do have the same value, but it's giving me the list of players rather than the count. Although I could potentially be wrong with this query:
SELECT Points, name, COUNT(*)
FROM points
WHERE DATE(Date) = '2014-07-15' OR DATE(Date) = '2014-07-16'
GROUP BY Points
HAVING COUNT(*)>1
I'm not sure how to count the number of rows (I could do a bypass trick with PHP, getting the number of rows, but I'm interested in SQL only) or how to invert it to get a count of players who have a different score (again, I could get the total number of rows and then subtract the above number, but I'm not interested in that either - I'd prefer the SQL).
Regards and thanks in advance.
You are pretty close.
If you have at most one row per "player" per "date", you could do something like this:
SELECT SUM(IF(c.cnt_distinct_points<2,1,0)) AS cnt_inactive
, SUM(IF(c.cnt_distinct_points>1,1,0)) AS cnt_active
FROM ( SELECT p.name
, COUNT(DISTINCT p.points) AS cnt_distinct_points
FROM points p
WHERE DATE(p.Date) IN ('2014-07-15','2014-07-16')
GROUP BY p.name
) c
The inline view query (aliased as c) gets a count of the distinct number of "points" values for each player. We need to "group by" name, so we can get a distinct list of players, along with an indication whether the points value was different or not. If all of the non-NULL "points" values for a given player are the same, COUNT(DISTINCT ) will return a value of 1. Otherwise, we'll get a value larger than 1.
The outer query processes that list, collapsing all of the rows into a single row. The "trick" is to use expressions in the SELECT list that return 1 or 0, depending on whether the player is "inactive", and perform a SUM aggregate on that. Do the same thing, but a different expression to return a 1 if the player is "active".
If the count of distinct points for a player is 1, we'll essentially be adding 1 to cnt_inactive. Similarly, if the count of distinct points for a player is greater than 1, we'll be adding 1 to cnt_active.
If this doesn't make sense, let me know if you have questions.
NOTE: Ideally, we'd avoid using the DATE() function around the p.Date column reference, so we could enable an appropriate index.
If the Date column is defined as (MySQL datatype) DATE, then the DATE() function is unnecessary. If the Date column is defined as (MySQL datatype) DATETIME or TIMESTAMP, we could use an equivalent predicate:
WHERE p.Date >= '2014-07-15' AND p.Date < '2014-07-16' + INTERVAL 1 DAY
That looks more complicated, but a predicate of that form is sargable (i.e. MySQL can use an index range scan to satisfy it, rather than having to look at every row in the table.)
For performance, we'd probably benefit from an index with leading columns of name and date:
... ON points (`name`,`date`)
(MySQL may be able to avoid a "Using filesort" operation for the GROUP BY).
I would solve this problem by looking at the previous number of points and then doing a comparison:
select date(date), count(*) as NumActives
from (select p.*,
(select p2.points
from points p2
where p2.name = p.name and p2.date < p.date
order by p2.date desc
limit 1
) as prev_points
from points p
) p
where prev_points is NULL or prev_points <> points
group by date(date);
Of course, you can add a where clause to get the count for any particular day.
I play a lot of board games and I maintain a site/database which keeps track of several statistics. One of the tables keeps track of various times. Its structure looks like this:
gameName (text - the name of the board game)
numPeople (int - the number of people that played)
timeArrived (timestamp - the time we arrived at the house we are playing the game)
beginSetup (timestamp - the time when we begin to set up the game)
startPlay (timestamp - the time we actually start playing the game)
gameEnd (timestamp - the time the game is finished)
Basically, what I'm wanting to do is use these times to get some interesting/useful info (like what game on average takes the longest to set up, what game on average takes the longest to play, what game is the longest from arrival to finish, etc.). Normally, I rely way too much on PHP: I would just do a SELECT * ..., grab all the times, and then do some PHP calculations to find all the stats, but I know that MySQL can do all this for me with a query. Unfortunately, I get pretty lost when it comes to more complex queries, so I'd like some help.
I'd like some examples of a couple queries and hopefully I can figure out other average time queries once someone gets me started. What would the query look like for longest time on average to play a board game? What about quickest game/time to set up on average?
Additional Info:
drew010 - You have me off to a great start but I'm not getting the results I'd expected. I'll give you some real examples...
I've got a game called Harper and it's been played twice (so there are two records in the database with time entries). Here are what the times look like for it:
beginSetup(1) = 2012-07-25 12:06:03
startPlay(1) = 2012-07-25 12:47:14
gameEnd(1) = 2012-07-25 13:29:45
beginSetup(2) = 2012-08-01 12:06:30
startPlay(2) = 2012-08-01 12:55:00
gameEnd(2) = 2012-08-01 13:40:32
When I then run the query you provided me (and I convert the seconds into hours/minutes/seconds) I get these results (sorry, I don't know how to do the cool table you did):
gameName = Harper
Total Time = 03:34:32
...and other incorrect numbers.
From the numbers, the Average Total Time should be about 1 hour and 24 minutes - not 3 hours and 34 minutes. Any idea why I'd be getting incorrect numbers?
Here is a query to get the average setup time and play time for each game, hope it helps:
SELECT
gameName,
AVG(UNIX_TIMESTAMP(startPlay) - UNIX_TIMESTAMP(beginSetup)) AS setupTime,
AVG(UNIX_TIMESTAMP(gameEnd) - UNIX_TIMESTAMP(startPlay)) AS gameTime,
AVG(UNIX_TIMESTAMP(gameEnd) - UNIX_TIMESTAMP(beginSetup)) AS totalTime
FROM `table`
GROUP BY gameName
ORDER BY totalTime DESC;
Should yield results similar to:
+----------+-----------+-----------+-----------+
| gameName | setupTime | gameTime | totalTime |
+----------+-----------+-----------+-----------+
| chess | 1100.0000 | 1250.0000 | 2350.0000 |
| checkers | 466.6667 | 100.5000 | 933.3333 |
+----------+-----------+-----------+-----------+
I just inserted about 8 test rows with some random data so my numbers don't make sense, but that is the result you would get.
Note that this will scan your entire table so it could take a while depending on how many records you have in this table. It's definitely something you want to run in the background periodically if you have a considerable amount of game records.
For something like how long it took to set up, you could write something like:
SELECT TIMESTAMPDIFF(HOUR, beginSetup, startPlay) -- in hours, how long setup took