I am developing a coffee ordering application in which I have three different tables:
main_menu with fields id, item_name, e.g. Cafe, Pizza bar, Breakfast, etc.
sub_menu with fields id, sub_item_name, item_name, price, e.g. Chai Latte for $2.00
sub_type with fields id, sub_item_type, sub_item_name, item_name, add_price, e.g. Chai Latte large size (+$0.50) and soy milk (+$0.50), so the total price is $3.00
What I am confused about is that a user may select more than one sub_type when ordering, just like in the above example where they may want extra milk and a size upgrade. So when I am inserting the order into the database, do I have to run the insert query twice when the user selects two sub_types? Or is there a way to do it in a single query?
And the same thing when I am displaying data on the kitchen side: how can I group the sub_types and display them in the same row, so that the coffee maker doesn't get confused and can easily tell it's the same order with extras?
I am using Android on the coffee ordering side and PHP for the kitchen-side display.
I need to display the result in a PHP table, something like this:
+----------+------------+-----------------------------------+-------------+
| Order No | Item_name  | Sub_Item_Name                     | Total Price |
+----------+------------+-----------------------------------+-------------+
| 145      | Chai Latte | Soy Milk, Large Size, Extra Sugar | $3.00       |
+----------+------------+-----------------------------------+-------------+
| 146      | Black Tea  | Regular Size, No Sugar            | $2.50       |
+----------+------------+-----------------------------------+-------------+
| 147      | Espresso   | Skim Milk, Small Size             | $3.50       |
+----------+------------+-----------------------------------+-------------+
Without trying to overcomplicate things, in my opinion the orders should go into a separate orders table with the fields you just specified, i.e. order_id, item_name, sub_item_name and total_price. Each row can contain multiple comma-separated items in the sub_item_name field (you can easily construct these in PHP). The main_menu, sub_menu and sub_type tables should be used to present the menu and add-on options to the customers. There is no need to couple the menu tables with the order table.
With this schema, you will perform only one insert for each order, and the same rows can be presented to the kitchen staff without having to join or group any other tables.
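A minimal sketch of what that could look like (the table and column names are assumptions based on the fields listed above, and the types are guesses):
-- One row per order; the add-ons are a comma-separated string built in PHP.
CREATE TABLE orders (
    order_id      INT AUTO_INCREMENT PRIMARY KEY,
    item_name     VARCHAR(100) NOT NULL,
    sub_item_name VARCHAR(255) NOT NULL,
    total_price   DECIMAL(6,2) NOT NULL
);

-- One insert per order, no matter how many sub_types were chosen:
INSERT INTO orders (item_name, sub_item_name, total_price)
VALUES ('Chai Latte', 'Soy Milk,Large Size', 3.00);

-- The kitchen display simply reads the rows back, no joins or grouping needed:
SELECT order_id, item_name, sub_item_name, total_price FROM orders;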
Will this work?
Related
When I started designing my application's database schema a few months ago, I was told not to store the same data/calculated data in more than one place in the database (normalization). If I do, I will create scope for bugs when I update the data in one place and leave the other without updating. So I made an orders table and an orderDetail table, something like this:
-- orders table
+-----+---------+----------+
| ID | clintID | date |
+-----+---------+----------+
| 1 | 1 |2018-02-22|
| 2 | 1 |2018-02-23|
| 3 | 2 |2018-02-24|
+-----+---------+----------+
-- orderDetail table
+-----+---------+------------+----------+----------+
| ID | orderID | itemNumber | quantity | unitPrice|
+-----+---------+------------+----------+----------+
| 1 | 1 | 12345 | 3 | 100.75 |
| 2 | 1 | 12346 | 3 | 100.75 |
| 3 | 2 | 12347 | 3 | 100.75 |
| 4 | 2 | 12345 | 3 | 100.75 |
| 5 | 3 | 12347 | 3 | 100.75 |
| 6 | 3 | 12345 | 3 | 100.75 |
+-----+---------+------------+----------+----------+
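For reference, the two tables above could be created roughly like this (only the names come from the sample data; the column types are assumptions):
CREATE TABLE orders (
    ID      INT AUTO_INCREMENT PRIMARY KEY,
    clintID INT  NOT NULL,
    date    DATE NOT NULL
);

CREATE TABLE orderDetail (
    ID         INT AUTO_INCREMENT PRIMARY KEY,
    orderID    INT NOT NULL,
    itemNumber INT NOT NULL,
    quantity   INT NOT NULL,
    unitPrice  DECIMAL(10,2) NOT NULL,
    FOREIGN KEY (orderID) REFERENCES orders(ID)
);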
And to make the queries easier for me, I made a view allOrdersSummary like this:
-- allOrdersSummary
SELECT
orders.*, SUM(orderDetail.quantity * orderDetail.unitPrice) totalAmount
FROM orders INNER JOIN orderDetail ON orders.ID = orderDetail.orderID
GROUP BY orders.ID;
and I used this view later for my queries, but now I started to get the MAX_JOIN_SIZE error.
So I thought of saving the calculated total order amount in the orders table (ID, clintID, date, totalAmount), and whenever I change something in the orderDetail table I would update the calculated totalAmount column in the orders table. I don't know if this is good or bad!
This problem (I don't know whether it is even considered a problem) comes up many times. For example, to know the unread messages of the client making the request I have to do something like SELECT COUNT(*) AS unread FROM messages WHERE `to` = ? AND isRead = 0.
A) Should I make another column for the calculated totalAmount in the orders table, or is it normal in databases to calculate the totalAmount from the orderDetail table every time I need it?
B) If you recommend adding another column to the orders table, what is the best way to update it every time a change happens in the orderDetail table? Should I update it in the PHP layer whenever I update the orderDetail table, or is this something that needs a stored procedure?
Yes, it is normal to store pre-calculated values, based on other data in the database, in the database, but not necessarily for the reason you mention. I have never had a problem with MAX_JOIN_SIZE.
The main, and probably only, reason for storing calculated values is speed. So you do it for values that don't change that often and that may be used in queries that use a lot of data and may therefore be too slow if you didn't use them.
For instance: If you want to know the average value of all the orders in your database the query would be a lot faster if you already have the order totals.
Why, and how, you update the values is completely up to you. However you have got to be consistent about it. If you use the MVC pattern it would make sense to integrate it in the controller. Or in simple terms: Whenever a form is submitted that could change one of the values, out of which the pre-calculated value is computed, you need to recompute it.
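A minimal sketch of what that recompute could look like (this assumes you add a totalAmount column to orders and run the statement from your PHP controller right after any change to orderDetail; the column name is an assumption):
-- Recompute the stored total for one order after its details change:
UPDATE orders
SET totalAmount = (
    SELECT COALESCE(SUM(quantity * unitPrice), 0)
    FROM orderDetail
    WHERE orderDetail.orderID = orders.ID
)
WHERE orders.ID = ?;

-- The payoff: aggregate questions no longer need to touch orderDetail at all.
SELECT AVG(totalAmount) FROM orders;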
This is a clear demonstration where 'normalization' is not entirely maintained. It's not really pretty, but sometimes it is worth it. You could, of course, argue that the calculated value represents 'new' information and therefore does not offend against 'normalization'.
You have an "inflate-deflate" problem:
the JOIN of the two tables makes a much larger intermediate table, and
the GROUP BY then shrinks it back to one row per row of the original (orders) table.
This avoids the problem:
SELECT  *,
        ( SELECT SUM(quantity * unitPrice)
            FROM orderDetail
            WHERE orderID = orders.ID
        ) AS totalAmount
    FROM orders;
Please let me know how this one works out for you. It is one of the simplest examples of the inflate-deflate problem.
I'm wondering which method below is faster?
Suppose:
Maximum 10,000 products, each product has 1 user id, 1 cat id, 3 extra fields, and 5 images.
90-99% of users come to the website just for the information, not to post.
Method 1: get all the data from a single table, with a query that has no "JOIN":
SELECT * FROM products WHERE ...
Table: products
id | name | poster_name | cat_name | code_1 | code_2 | content |
dimensions | contact | message | images |
Method 2: get all data from multiple tables with "JOIN":
SELECT ... FROM products
LEFT JOIN cats ON products.cat_id = cats.id
LEFT JOIN users ON products.poster_id = users.id
table: products
id | name | code_1 | code_2 | content | cat_id | poster_id |
table: cats
id | cat_name |
table: users
id | poster_name |
table: extra
id | product_id | extra_info | extra_data |
table: images
id | product_id | img_src |
The first method will usually be faster for reads, and the second one will help you maintain data integrity and usually will be faster for writes.
The transition from the latter form to the former is called denormalization and is usually used in data warehouses, while operational ("live") databases usually prefer the latter form (the second method).
You have not finished asking the question. Method 2 has no WHERE, so it will deliver 10K rows, plus it will have to do 20K lookups into the other tables. That makes it the loser.
Since your real question is about performance, let's discuss the WHERE clause. With that, we can optimize so that the desired data tends to be in RAM.
Back to your question... JOIN is probably the 'right' way to do it. And it is not that much of a performance hit assuming you have the proper indexes. So provide SHOW CREATE TABLE (even if tentative) and complete WHERE clauses.
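As a rough illustration only (the real WHERE clause and index choices depend on your actual queries; the filter below is an assumption), the joined version with supporting indexes might look like:
-- Hypothetical indexes on the join/filter columns:
ALTER TABLE products ADD INDEX idx_cat (cat_id), ADD INDEX idx_poster (poster_id);

-- One page of products, filtered, with the joined lookups:
SELECT p.id, p.name, c.cat_name, u.poster_name
FROM products AS p
LEFT JOIN cats  AS c ON c.id = p.cat_id
LEFT JOIN users AS u ON u.id = p.poster_id
WHERE p.cat_id = ?
ORDER BY p.id DESC
LIMIT 20;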
Don't over-normalize. For example, do not normalize datetime or any other 'continuous' values.
Normalization can save space, especially in huge tables (eg, millions or billions of rows, and large, frequently repeated, strings being normalized.) This is especially helpful when the table is too big to stay cached in RAM.
I'm working on a website which will be like a marketplace where a registered seller can sell different kinds of items. For each item there are common attributes and optional attributes. Take a look at the following; I'll try to explain.
Scenario
The seller adds a new item (e.g. iPhone 6 16 GB black).
He builds the listing by specifying item attributes (e.g. price, shipping price, condition, images, description, etc.). These attributes are required and common to every item.
Once all the required attributes are filled in, the seller has the ability to specify other attributes that relate only to that item (e.g. RAM, capacity, size, weight, model year, OS, number of cores, etc.). These attributes are optional. The seller specifies a key (e.g. capacity) and a value (e.g. 16 GB), and they relate only to that single item. Another iPhone 6 16 GB black sold by another seller may have different attributes.
Currently we have a table called items which contains all the items for sale, and another table called item_attr which contains the common item attributes. So an item can be related to 0, 1 or more optional attributes.
We are considering two approaches for storing the optional values for each item, but both could bring problems.
Case A
Create a new table called item_additional_attr where each record represents an additional attribute for a single item. There will be a one-to-many relationship between items and item_additional_attr. This seems to be the most "database-friendly" solution, but I'm worried about the size this table could reach. If items contains 100,000 records and each item is related to an average of 5 optional attributes, item_additional_attr will contain 500,000 records. Of course that will be a huge table.
Case B
Add a new TEXT or BLOB field to item_attr called optional_attributes. This field will contain an array of optional attributes and will be handled in PHP. Of course the array will be stored serialized or JSON-encoded. I think this approach could cause problems with some queries, but it could be handled without problems in PHP.
I'm giving priority to web server/DB performance, but I would also like to avoid problems with queries. Moreover, the additional attributes will be used only to show technical specs in a table, never for filtering/sorting. So, in your opinion, what is the best way to achieve this?
You may want to try using EAV (entity-attribute-value) tables. Basically, you will maintain several tables: one table stores the list of items, and the other tables hold attributes grouped by similar data types. I created a simple schema to demonstrate:
+---------+------------+
| item_id | item_name |
+---------+------------+
| 1 | Cell Phone |
| 2 | Shirt |
+---------+------------+
2 rows in set (0.00 sec)
+---------+--------------+----------------+-----------------+
| item_id | attribute_id | attribute_name | attribute_value |
+---------+--------------+----------------+-----------------+
| 1 | 2 | storage | 8GB |
| 1 | 3 | color | Gray |
| 2 | 4 | size | XL |
| 2 | 6 | shirt_color | Red |
+---------+--------------+----------------+-----------------+
4 rows in set (0.00 sec)
+---------+--------------+----------------+-----------------+
| item_id | attribute_id | attribute_name | attribute_value |
+---------+--------------+----------------+-----------------+
| 1 | 2 | price | 49 |
+---------+--------------+----------------+-----------------+
1 row in set (0.00 sec)
The first table is a list of items. The second table lists the items' attributes of type VARCHAR. The third table lists the items' attributes of type INT. This allows a scalable database that disperses attributes across multiple tables. The only drawback is the number of joins you will need to do in order to get an item and all of its attributes. A textual caching scheme could be used via PHP to store item information for an increase in performance.
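A rough sketch of the kind of query this implies (the table names items, item_attributes_varchar and item_attributes_int are assumptions standing in for the three tables shown above):
-- Pull one item together with all of its attributes from both typed tables.
SELECT i.item_id,
       i.item_name,
       a.attribute_name,
       a.attribute_value
FROM items AS i
JOIN (
    SELECT item_id, attribute_name, attribute_value
    FROM item_attributes_varchar
    UNION ALL
    SELECT item_id, attribute_name, CAST(attribute_value AS CHAR)
    FROM item_attributes_int
) AS a ON a.item_id = i.item_id
WHERE i.item_id = 1;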
I am trying to figure out how to use ONE table JOIN to get a list of vehicle MAKE, MODEL, YEAR, and TRIMS criteria, available for the customer to search from.
There are already master key tables, from which the admin selects from a range of vehicle options and enters these vehicle related details about that product to the PRODUCT table.
I want to now produce a list for the shopper, that reflects only the available vehicle details choices - based on what has been entered into the PRODUCTS table by the admin.
I have been looping / iterating over the MAKE MODEL TRIMS tables with PHP and searching the PRODUCTS table for the existence of the MAKE MODEL YEAR TRIM type in the table of PRODUCTS. But it is taking about 800 individual calls to the PRODUCTS table.
It is understood that this is not best practice and could cause all sorts of problems - being way too many calls to the database and not efficient.
I am told in another question
https://stackoverflow.com/questions/13960571/sanity-check-mysql-whats-reasonable-800-calls-to-the-database-in-one-second
that this can be done with one call using JOIN and WHERE statements.
I have used table JOINS before, but do not see how this could be done with one call on these many MAKES, MODELS, YEARS, TRIMS to produce one list of available MAKES, MODELS, YEARS, TRIMS criteria for the shopper to choose from.
I would appreciate anything I can learn about this here from your examples : )
Here is an example of the admin master key selection tables for adding vehicle related details to the product entry record:
Table: MAKES
| Id | MAKE | // Admin table for selecting products related vehicle make
------------------
| 1 | FORD |
| 2 | CHEV |
| 3 | GMC |
| 4 | HONDA |
etc.
Table: FORD
| Id | MODEL | // Admin table for selecting products related vehicle model
------------------
| 1 | F150 |
| 2 | ESCAPE |
| 3 | EXPLORER |
etc.
Table: FORD_F150_YEARS_TRIMS
| Id | YEARS| TRIMS | // Admin table for selecting products related vehicle year and trim(s)
--------------------------------------------
| 1 | 1999 | 1999_SPORT+1999_SPORTRAC+1999_XLT+1999_XLS |
| 2 | 2000 | 2000_XLT+2000_XLS+2000_LTD+2000_EDDIE_BAUER |
| 3 | 2001 | 2001_SPORTRAC+2001_XLT+2001_LTD |
etc.
Here is the products table into which the admin enters the product / vehicle details:
Table: PRODUCTS
| PRODUCT_ID | MAKE | MODELS | YEARS | TRIMS |
----------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 123456 | FORD FORD GMC | F150 ESCAPE CANYON | 2000 2001 1999 | FORD_F150_1999_SPORT+FORD_F150_1999_SPORTRAC+GMC_CANYON_1999_LTD+GMC_CANYON_1999_LTD |
| 123457 | FORD GMC CHEV | F150 EXPLORER SILVERADO | 2000 2010 2010 | FORD_F150_2001_XLT+FORD_F150_2001_LTD+GMC_CANYON_2010_XLT+CHEV_SILVERADO_1500_2010_LTD |
etc.
What I want to do is - make a query on the PRODUCTS table where I can produce a table or list of only the vehicle types that there are products for.
So, if there is NOT a product in the PRODUCTS table that fits a 2001 FORD F150 with a SPORTRAC trim - then I do not want to give the shopper the choice of SPORTRAC with 2001 FORD F150 but I do want to give them the choice of 2001 FORD F150 with XLT + LTD
So really - I just want to eliminate choices for the shopper for products vehicle details that don't exist.
I am told that this can be done in one MySQL call to the database. I am told that instead of looping through all the makes, models and trims and making individual calls to the PRODUCTS table, I can somehow use table joins and WHERE statements to get a list of all the potential MAKE MODEL TRIM choices available to the customer, based on what is in the PRODUCTS table only.
I see how I could do this by making one call to the PRODUCTS table and then looping through and weeding out duplicates in the result with PHP. But there are thousands of products and these could grow - so I am looking for the best-practice method of achieving this.
Well, it seems your first and foremost problem is that you have these different tables but you are not using them in a relational manner. You should really spend some time learning how to properly normalize your tables. As a general guideline, think about how the real-world items/properties you are modeling relate to one another, and express those relationships through proper primary and foreign key usage.
Your products should relate to the makes, models, trims, etc. via the various primary key IDs, not by duplicating the data in the products table. You also shouldn't have a 'Ford' table, for example, but rather just a single table of 'models'.
Just as a sample, I might have a schema like this
models
---------
model_id
make_id
model
makes (Make is really just a property of the model of car, and could possibly be de-normalized into the models table. Here I am showing it as a separate table to give a fully normalized example.)
---------
make_id
make
trims (Don't store combined values like 'SPORT+XLS' unless they represent a specific trim combination. Each different trim package should have its own row.)
--------
trim_id
trim
products (I am assuming that a model and year define a product, by looking at your example data)
--------
product_id
year
model_id
product_trims (many-to-many table expressing the relation of products to trims - you could have multiple rows with the same product_id and different trim_id)
-------------
product_id
trim_id
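A rough sketch of that schema as DDL (only the table and column names come from the outline above; the types, lengths and constraints are assumptions):
CREATE TABLE makes (
    make_id INT AUTO_INCREMENT PRIMARY KEY,
    make    VARCHAR(50) NOT NULL
);

CREATE TABLE models (
    model_id INT AUTO_INCREMENT PRIMARY KEY,
    make_id  INT NOT NULL,
    model    VARCHAR(50) NOT NULL,
    FOREIGN KEY (make_id) REFERENCES makes(make_id)
);

CREATE TABLE trims (
    trim_id INT AUTO_INCREMENT PRIMARY KEY,
    trim    VARCHAR(50) NOT NULL
);

CREATE TABLE products (
    product_id INT AUTO_INCREMENT PRIMARY KEY,
    year       SMALLINT NOT NULL,
    model_id   INT NOT NULL,
    FOREIGN KEY (model_id) REFERENCES models(model_id)
);

CREATE TABLE product_trims (
    product_id INT NOT NULL,
    trim_id    INT NOT NULL,
    PRIMARY KEY (product_id, trim_id),
    FOREIGN KEY (product_id) REFERENCES products(product_id),
    FOREIGN KEY (trim_id)    REFERENCES trims(trim_id)
);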
If you really want to have a product defined as a combination of year, model, and trim, you could eliminate the product_trims table and just have a revised product table like this
product
-------------
product_id
year
model_id
trim_id
You could then query across joins to get the data you need. For example, let's say the user has specified a model and a year. The query might look like the following (shown assuming use of both the products and product_trims tables):
SELECT p.product_id, p.year, ma.make, mo.model, t.trim
FROM
products AS p
INNER JOIN models AS mo ON p.model_id = mo.model_id
INNER JOIN makes AS ma ON mo.make_id = ma.make_id
INNER JOIN product_trims AS pt ON p.product_id = pt.product_id
INNER JOIN trims AS t ON pt.trim_id = t.trim_id
WHERE p.year = ? AND p.model_id = ?
Of course, you need to properly index all the fields used for joins and for any WHERE or ORDER BY conditions.
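And to produce the shopper-facing list of only the vehicle combinations that actually have products - which is what you described wanting - a single query over the same joins could look like this (a sketch against the schema above, not tested against your data):
-- Distinct make/model/year/trim combinations that have at least one product,
-- suitable for building the customer's search choices in one call.
SELECT DISTINCT ma.make, mo.model, p.year, t.trim
FROM products AS p
INNER JOIN models        AS mo ON p.model_id   = mo.model_id
INNER JOIN makes         AS ma ON mo.make_id   = ma.make_id
INNER JOIN product_trims AS pt ON p.product_id = pt.product_id
INNER JOIN trims         AS t  ON pt.trim_id   = t.trim_id
ORDER BY ma.make, mo.model, p.year, t.trim;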
I have been asked to give the administrator of the site the possibility to create attributes in the database tables. There are sellers and buyers on the website, and each seller, when adding a certain product, fills out the fields needed for that specific product and then publishes it. I am kind of confused about how this is going to work. If every product has specific fields, does that mean that if the site has 2,000 products, I will have 2,000 tables? I've never worked on such a thing, so I really don't know how to handle it.
Furthermore, about the admin feature to create attributes: let's say the product is a tomato. The admin adds a field for tomatoes called "condition", and it has options such as "frozen" and "fresh". Then, when one of the sellers tries to create a tomato product, they will need to choose whether the tomato's condition is fresh or frozen. I thought of a possible solution, such as creating a table that will hold the text of the select field, and then another table that will hold the text of its options.
product_tomato ( product_id, user_id, name, description, condition)
product_select( select_id, product_id, select_text)
product_option( option_id, select_id, option_text)
So, this is how I imagined the tables for doing this. When the admin adds a field to the product table, I will add a column to the product table, then create a new row in the product_select table, and then list the possible options in the product_option table. But then I got confused about how to display that on the product page. How am I going to deal with that in the code, when I don't know the names of the columns that the admin created?
The wording of the question is very confusing, but I believe I get the gist of what you're saying.
No, you would not make a table for every single product; that would get ridiculous very quickly. You can handle this easily for multiple products with three tables.
Tables:
Product
Product_Attribute
Seller_Product
Let's take your hypothetical example of a tomato with conditions.
The admin decides that his site will now offer tomatoes as a product. He creates the product and adds it to the Product table. Then he decides that tomatoes should have a "condition" attribute with two possible values, fresh and frozen. Therefore, he would add two rows to the Product_Attribute table, which has three fields (Product, Attribute, Value).
Therefore, your tables would now look like this.
Product Table:
Name
Tomato
Product_Attribute Table:
Product | Attribute | Value
Tomato | Condition | Fresh
Tomato | Condition | Frozen
Finally, when your sellers add items to the site store or whatever it is, you would have them enter the data into a form that grabs the attributes and potential values from the Product_Attribute table for that product. In this case there's only one attribute, so they would just fill out the condition. Let's assume that there are two sellers, Jim and Tom, who sell fresh and frozen tomatoes respectively. The final three tables would look like this.
Product Table:
Name
Tomato
Product_Attribute Table:
Product | Attribute | Value
Tomato | Condition | Fresh
Tomato | Condition | Frozen
Seller Product Table:
Seller | Product | Attribute | Value
Jim | Tomato | Condition | Fresh
Tom | Tomato | Condition | Frozen
This way, you could store a variety of custom fields about products using three tables. You should normalize or denormalize as needed; for instance, you may want a table for sellers' products only and store their conditions in a separate table. Either way, the method described above would get the job done.
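As a small sketch of how the form and the insert could use these tables (the column and table names follow the examples above; a real schema would normally key on IDs rather than names):
-- Options to show a seller who is listing a tomato:
SELECT Attribute, Value
FROM Product_Attribute
WHERE Product = 'Tomato';

-- Record the seller's choice:
INSERT INTO Seller_Product (Seller, Product, Attribute, Value)
VALUES ('Jim', 'Tomato', 'Condition', 'Fresh');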
I believe this is a good schema:
product_data - product ID, category ID, name, price, description
product_meta - product ID, attribute_name, attribute_value
product_variants - product ID, variant ID, variant value
You'd also want separate tables for variant names and category names/descriptions.
Example:
ID | Category_ID | Name | Price
251 | 14 | Tomato | 5.00
ID | Attribute | Value
251 | Condition | Fresh
251 | Color | Red
ID | Variant_ID | Name | Value
251 | 50 | Size | Small
251 | 50 | Size | Huge
So basically you'll have around 5-10 tables (Google the three steps of DB normalization). All the tables are linked together by IDs.
All you'll need to do is retrieve the values with a JOIN and a WHERE product_id condition.
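A minimal sketch of that retrieval, assuming the three tables above with the product ID as the linking column:
-- Fetch one product with its attributes and variants in a single round trip.
SELECT d.ID, d.Name, d.Price,
       m.Attribute, m.Value AS attribute_value,
       v.Name AS variant_name, v.Value AS variant_value
FROM product_data AS d
LEFT JOIN product_meta     AS m ON m.ID = d.ID
LEFT JOIN product_variants AS v ON v.ID = d.ID
WHERE d.ID = 251;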