I am using the Backpack CRUD controller for PHP/Laravel.
With the CrudController provided by Backpack (the library), all I have to do is query it with Laravel Eloquent queries (raw SQL is also possible). Then the Backpack library will automatically render the list view for me.
But I am struggling with a difficult query.
The thing is that I have 4 columns:
session_id | column_id | batch | data
10         | 1         | 1     | data1
10         | 2         | 1     | data2
10         | 1         | 2     | data1*
10         | 2         | 2     | data2*
Let's say this is the data I have.
I want to display this grouped by session_id and batch, ordering the values within each row by column_id,
so the resulting output would be something like:
1 : data1 data2
2 : data1* data2*
If there is a third batch with data:
session_id | column_id | batch | data
10         | 1         | 3     | data1**
Then it would appear under the third batch as
3 : data1**
I can do this in code, but not in SQL.
I would be grateful for any advice.
This looks like a PIVOT in SQL Server. Unfortunately, MySQL does not have this feature.
I can give you an approximate raw MySQL query using GROUP_CONCAT, assuming your table name is mytable:
SELECT
session_id,
batch,
GROUP_CONCAT(data ORDER BY column_id SEPARATOR ', ') AS dataList
FROM mytable
GROUP BY session_id, batch
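For the sample rows above, this should produce something like the following:

session_id | batch | dataList
10         | 1     | data1, data2
10         | 2     | data1*, data2*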
Then you can split the aliased dataList column on the given separator (here I've used ,).
You may change the separator according to the data contained in the data column.
Hope this helps.
When I started designing my application's database schema a few months ago, I was told not to store the same data, or calculated data, in more than one place in the database (normalization). If I did, I would create a risk of bugs by updating the data in one place and leaving the other place without updating. So I made an orders table and an orderDetail table, something like this:
-- orders table
+-----+---------+----------+
| ID | clintID | date |
+-----+---------+----------+
| 1 | 1 |2018-02-22|
| 2 | 1 |2018-02-23|
| 3 | 2 |2018-02-24|
+-----+---------+----------+
-- orderDetail table
+-----+---------+------------+----------+----------+
| ID | orderID | itemNumber | quantity | unitPrice|
+-----+---------+------------+----------+----------+
| 1 | 1 | 12345 | 3 | 100.75 |
| 2 | 1 | 12346 | 3 | 100.75 |
| 3 | 2 | 12347 | 3 | 100.75 |
| 4 | 2 | 12345 | 3 | 100.75 |
| 5 | 3 | 12347 | 3 | 100.75 |
| 6 | 3 | 12345 | 3 | 100.75 |
+-----+---------+------------+----------+----------+
And to make the queries easier for me, I made a view allOrdersSummary like this:
-- allOrdersSummary
CREATE VIEW allOrdersSummary AS
SELECT
    orders.*, SUM(orderDetail.quantity * orderDetail.unitPrice) AS totalAmount
FROM orders INNER JOIN orderDetail ON orders.ID = orderDetail.orderID
GROUP BY orders.ID;
and I used this view later for my queries, but now I have started to get the MAX_JOIN_SIZE error.
So I thought of saving the calculated total order amount in the orders table (ID, clintID, date, totalAmount) and, whenever I change something in the orderDetail table, updating the calculated totalAmount column in the orders table. I don't know if this is good or bad!
This problem (I don't know if it is even considered a problem) comes up many times. For example, to know the number of unread messages for the client making the request, I have to run something like select count(*) as unread from messages where `to` = ? and isRead = 0.
A) Should I make another column for the calculated totalAmount in the orders table, or is it normal in databases to calculate the totalAmount from the orderDetail table every time I need it?
B) If you recommend making another column in the orders table, what is the best way to update it every time a change happens in the orderDetail table? Should I update it at the PHP layer whenever I update the orderDetail table, or is this something that needs a stored procedure?
Yes, it is normal to store pre-calculated values, based on other data in the database, in the database, though not necessarily for the reason you mention. I have never had a problem with MAX_JOIN_SIZE.
The main, and probably only, reason for storing calculated values is speed. So you do it for values that don't change that often and that may be used in queries that use a lot of data and may therefore be too slow if you didn't use them.
For instance: if you want to know the average value of all the orders in your database, the query would be a lot faster if you already had the order totals.
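To make that concrete, here is a sketch, assuming the totalAmount column the question proposes adding to orders:

-- fast: the totals are already stored
SELECT AVG(totalAmount) FROM orders;

-- slower: every total has to be recomputed from the detail rows first
SELECT AVG(t.total)
FROM (SELECT orderID, SUM(quantity * unitPrice) AS total
      FROM orderDetail
      GROUP BY orderID) AS t;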
Why, and how, you update the values is completely up to you. However, you have to be consistent about it. If you use the MVC pattern, it would make sense to integrate it in the controller. In simple terms: whenever a form is submitted that could change one of the values out of which the pre-calculated value is computed, you need to recompute it.
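As a minimal sketch (again assuming you add a totalAmount column to orders), the recomputation after any orderDetail change can be a single UPDATE keyed on the affected order, run from the same code path that modified the detail rows:

UPDATE orders
SET totalAmount = (
    SELECT COALESCE(SUM(quantity * unitPrice), 0)   -- 0 if no detail rows remain
    FROM orderDetail
    WHERE orderDetail.orderID = orders.ID
)
WHERE orders.ID = ?;   -- the order whose details just changed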
This is a clear case where normalization is not entirely maintained. It is not really pretty, but sometimes worth it. You could, of course, argue that the calculated value represents 'new' information and therefore does not offend against normalization.
You have an "inflate-deflate" problem.
JOIN the two tables to make a much larger temporary table.
GROUP BY to shrink back to one row per row of the original (orders) table.
This avoids the problem:
SELECT *,
    ( SELECT SUM(quantity * unitPrice)
        FROM orderDetail
        WHERE orderID = orders.ID
    ) AS totalAmount
FROM orders;
Please let me know how this works out for you; it is one of the simplest examples of the inflate-deflate problem.
I am having a bit of a problem running a SELECT query on a database. Some of the data is held as lists of comma-separated values, for example:
Table: example_tbl
| Id | standardid | subjectid |
| 1  | 1,2,3      | 8,10,3    |
| 2  | 7,6,12     | 18,19,2   |
| 3  | 10,11,12   | 4,3,7     |
And an example of the kind of thing I am trying to run:
select * from example_tbl where standardid in (7,10) and subjectid in (2,3,4)
select * from example_tbl where FIND_IN_SET(7,10,standardid) and FIND_IN_SET(2,3,4,subjectid)
Thanks in advance for anything you can tell me.
Comma-separated values in a database are inherently problematic and inefficient, and it is far, far better to normalise your database design. But if you check the syntax for FIND_IN_SET(), you'll see it looks for a single value in the set; it does not match several values in the set at once.
To use it with multiple values, you need to call the function several times:
select * from example_tbl
where (FIND_IN_SET(7,standardid)
OR FIND_IN_SET(10,standardid))
and (FIND_IN_SET(2,subjectid)
OR FIND_IN_SET(3,subjectid)
OR FIND_IN_SET(4,subjectid))
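For reference, FIND_IN_SET() returns the 1-based position of the value within the comma-separated list, or 0 if it is absent, which is why each call can only test one value:

SELECT FIND_IN_SET('7', '7,6,12');   -- 1 (found at position 1, treated as TRUE)
SELECT FIND_IN_SET('9', '7,6,12');   -- 0 (not found, treated as FALSE)

With the sample data above, the combined query returns rows 2 and 3.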
My table looks like this:
+----+--------+----------+
| id | title  | position |
+----+--------+----------+
| 1  | test 2 | 3        |
| 2  | test 3 | 1        |
| 3  | test 1 | 0        |
+----+--------+----------+
I found this query, which retrieves the rows ordered based on the position field; position holds the id of the predecessor row.
SELECT
*
FROM
mytable AS t1
LEFT JOIN
mytable AS t2
ON t2.position = t1.id
I wonder why this works, because there is no ORDER BY clause and the database shouldn't know that position 0 marks the row to start at.
The result is dependent on the order in which you inserted the rows into the table. If, for example, you had inserted the row with id=3 before the row with id=2, then you would have got an unsorted result.
As it stands, you are pulling the data out of t1 in the order of id because that is the order in which you put the elements into the table.
See http://sqlfiddle.com/#!2/63a925/2 and try it for yourself.
N.B. Databases are not guaranteed to work this way; it is simply that most databases do. You should not rely on this behaviour, as a minor change to the schema or query could ruin your whole day! Note also that if id is a (primary?) key, the insertion order will probably be overridden by the database pulling the rows out in index order.
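If you need a guaranteed order, one option (a sketch, assuming MySQL 8.0+ and that position = 0 marks the first row) is a recursive CTE that walks the predecessor chain explicitly:

WITH RECURSIVE ordered AS (
    SELECT id, title, position, 1 AS seq
    FROM mytable
    WHERE position = 0                 -- the row with no predecessor
    UNION ALL
    SELECT t.id, t.title, t.position, o.seq + 1
    FROM mytable t
    JOIN ordered o ON t.position = o.id
)
SELECT id, title FROM ordered ORDER BY seq;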
That query joins in the second copy of the table (t2) based on the id in t1 equalling the position in t2. Since the ids in t1 are sequential, the output appears to be sorted.
I know that this title is overused, but it seems that my kind of question has not been answered yet.
So, the problem is like this:
I have a table structure made of four tables (tables, rows, cols, values) that I use to recreate the behavior of the information_schema (in a way).
In PHP I am generating queries to retrieve the data, and the result should still look like a normal table:
SELECT
(SELECT value FROM `values` WHERE `col` = "3" and row = rows.id) as "col1",
(SELECT value FROM `values` WHERE `col` = "4" and row = rows.id) as "col2"
FROM rows WHERE `table` = (SELECT id FROM tables WHERE name = 'table1')
HAVING (col2 LIKE "%4%")
OR
SELECT * FROM
(SELECT
(SELECT value FROM `values` WHERE `col` = "3" and row = rows.id) as "col1",
(SELECT value FROM `values` WHERE `col` = "4" and row = rows.id) as "col2"
FROM rows WHERE `table` = (SELECT id FROM tables WHERE name = 'table1')) d
WHERE col2 LIKE "%4%"
Note that the part where I define the columns of the result is generated by a PHP script. It is less important why I am doing this, but I want to extend this query-generating algorithm for broader use.
And now we get to the core problem: I have to decide whether to generate a WHERE or a HAVING clause for the query. I know when to use each of them, but my algorithm doesn't, and I would have to make a few extra checks for it to do so. The two queries above are equivalent: I can always put any query in a subquery, give it an alias, and use WHERE on the new derived table. But I wonder whether I will have problems with performance, or whether this will come back to bite me in some unexpected way.
I know how they both work, and how WHERE is supposed to be faster, but that is why I came here to ask. Hopefully I have made myself understood; please excuse my English and the long, useless turns of phrase.
EDIT 1
I already know the difference between the two and all that it implies. My only dilemma is that using custom columns from other tables, with variable numbers and sizes, while trying to achieve the same result as a normally created table, means I must use HAVING to filter on the derived columns; at the same time I have the option to wrap it all in a subquery and use WHERE normally, which will probably create a temporary table that is filtered afterwards. Will this affect performance for a large database? Unfortunately I cannot test this right now, as I cannot afford to fill the database with over a million entries (that would be something like: 1 million entries in the rows table, 5 million in the values table since every row has 5 columns, 5 rows in the cols table, and 1 row in the tables table, i.e. 6,000,006 entries in total).
Right now my database looks like this:
-- tables table
+----+--------+-----------+------+
| id | name   | title     | dets |
+----+--------+-----------+------+
| 1  | table1 | Table One |      |
+----+--------+-----------+------+
-- cols table
+----+-------+------+
| id | table | name |
+----+-------+------+
| 3  | 1     | col1 |
| 4  | 1     | col2 |
+----+-------+------+
where `table` is a foreign key from table `tables`
-- rows table
+----+-------+-------+
| id | table | extra |
+----+-------+-------+
| 1  | 1     |       |
| 2  | 1     |       |
+----+-------+-------+
where `table` is a foreign key from table `tables`
-- values table
+----+-----+-----+----------+
| id | row | col | value    |
+----+-----+-----+----------+
| 1  | 1   | 3   | 13       |
| 2  | 1   | 4   | 14       |
| 6  | 2   | 4   | 24       |
| 9  | 2   | 3   | asdfghjk |
+----+-----+-----+----------+
where `row` is a foreign key from table `rows`
where `col` is a foreign key from table `cols`
EDIT 2
The conditions are there just for demonstration purposes!
EDIT 3
For only two rows, it seems there is a difference between the two: the one using HAVING takes 0.0008 and the one using WHERE takes 0.0014-0.0019. I wonder whether this will affect performance for large numbers of rows and columns.
EDIT 4
The result of the two queries is identical:
+----------+------+
| col1 | col2 |
+----------+------+
| 13 | 14 |
| asdfghjk | 24 |
+----------+------+
HAVING is specifically for use with GROUP BY; WHERE is there to provide conditional row filtering. See also WHERE vs HAVING.
I believe the HAVING clause would be faster in this case, as you are testing specific, already-computed values, as opposed to reading through all the values and looking for a match.
See: http://database-programmer.blogspot.com/2008/04/group-by-having-sum-avg-and-count.html
Basically, WHERE filters out rows before they are passed to an aggregate function, while HAVING filters the aggregate function's results.
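As a minimal illustration of that distinction (using a hypothetical sales table, not one from the schema above):

SELECT clientID, SUM(amount) AS total
FROM sales                      -- hypothetical table
WHERE amount > 0                -- per-row filter, applied before grouping
GROUP BY clientID
HAVING SUM(amount) > 1000;      -- per-group filter, applied after aggregation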
You could do it like this:
WHERE col2 IN (14, 24)
Your code WHERE col2 LIKE "%4%" is a bad idea: what about col2 = 34? It would also be selected.
I have a comma-delimited list that I'm storing in a varchar field in a MySQL table.
Is it possible to add and remove values from the list directly using SQL queries, or do I have to take the data out of the table, manipulate it in PHP, and write it back into MySQL?
There is no way to do it with the InnoDB and MyISAM engines in MySQL. It might be possible in other engines (check the CSV engine).
You could do it in a stored procedure, but that is not recommended.
What you should do to solve such an issue is refactor your code and normalize your DB:
original table
T1: id | data              | some_other_data
    1  | gg,jj,ss,ee,tt,hh | abanibi
To become:
T1: id | some_other_data
    1  | abanibi

T2: id | t1_id | data_piece
    1  | 1     | gg
    2  | 1     | jj
    3  | 1     | ss
    4  | 1     | ee
    5  | 1     | tt
    6  | 1     | hh
And if data_piece is a constant value in the system that is reused a lot, you should add a lookup table for it too.
I know it looks like more work, but it will save you issues like the one you have now, which take much more time to solve.
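With the normalized layout, adding and removing list values becomes a plain INSERT or DELETE; a minimal sketch using the T2 table above (assuming an auto-increment id column):

-- add a value to the list belonging to T1 row 1
INSERT INTO T2 (t1_id, data_piece) VALUES (1, 'kk');

-- remove a value from that list
DELETE FROM T2 WHERE t1_id = 1 AND data_piece = 'ss';

-- read the list back in its original comma-separated form if needed
SELECT GROUP_CONCAT(data_piece) FROM T2 WHERE t1_id = 1;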