I need help finding the product of values in a MySQL table, grouped by a group ID.
So, given the table below, what I need is 1.2*1.5 and 1.1*1.6, and to store the results in variables.
+----------+-------+
| Group_ID | Value |
+----------+-------+
| 1        | 1.2   |
| 1        | 1.5   |
| 2        | 1.1   |
| 2        | 1.6   |
+----------+-------+
As Mat said above, your data model is not good. You have two choices:
Alter your data model so that all calculations can be done by the DBMS (this is the better choice)
Fetch your data and process it on the application side (this may be slow if you're not familiar with code tuning and algorithms)
Edit: if your groups are always composed of exactly two rows, you can use the solution just above.
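For completeness, if you do keep the current layout, MySQL can compute a per-group product with the EXP(SUM(LN(...))) trick. A minimal sketch, assuming the table is named grouped_values (the question doesn't name it) and all values are positive (LN is undefined otherwise):

    -- exp(sum(ln(x))) equals the product of x for positive x,
    -- so this returns one product per Group_ID.
    SELECT Group_ID, EXP(SUM(LN(Value))) AS group_product
    FROM grouped_values
    GROUP BY Group_ID;

The two resulting rows (1.8 for group 1, 1.76 for group 2) can then be read into variables on the application side.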
I have a task that would be quite simple using a regular SQL query, but the project is built using Doctrine and I am looking for an optimal solution. Maybe someone could advise on a good way to approach this.
I have a quite complicated DB structure, but the simplified version of the objects in question looks like this:
| Category |    | Product     |    | ProductOption |
+----------+    +-------------+    +---------------+
| id       |    | id          |    | id            |
| name     |    | category_id |    | product_id    |
+----------+    | name        |    | some_data     |
                +-------------+    +---------------+
ProductOption and Product have a 1-to-1 connection, but options are created per category (I get one entity per category, but need to replicate that entity for every product and store it 1-to-1, since at some point those options will need to be edited individually). Now there are many ways to do that (the dirty way), but I would like some advice on how to do it in the most optimal way. One bulk approach is sketched below.
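One way to do the replication step in bulk, regardless of the ORM: a hedged sketch in plain SQL, assuming the per-category defaults live in a hypothetical category_option table (not shown in the schema above) and that product_option holds the per-product copies:

    -- Copy each category's default option onto every product in that
    -- category, so each product gets its own individually editable row.
    INSERT INTO product_option (product_id, some_data)
    SELECT p.id, co.some_data
    FROM product p
    JOIN category_option co ON co.category_id = p.category_id;

In Doctrine you could run this as a native SQL statement once, rather than hydrating and persisting one entity per product.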
I ran into some issues when implementing a product_description table with languages.
My process is that I have a default table, product_description_en, to store descriptions. When a client installs a new language (e.g. Chinese), the PHP script creates a new table, product_des_ch, and copies all the default data (from the English table) into it; the client can then update the translations.
My problems are:
1. Is it a security issue that we create tables dynamically while installing a new language?
2. If we use the same table for all languages (the records will be around 500,000), are there any performance issues?
3. What is the best way to store a large number of records, i.e. the same table or separate tables?
Thanx
Az
Updated:
This is a sample product_description table structure for English and Japanese. What do you think about this design (we store all the records in the same table, and when the client adds a record for a different language we only insert new rows)? Any feedback, please?
+------------+------+------+-----------+-----------+-----------+----------+
| product_id | name | desc | meta_name | meta_desc | key_words | lan_code |
+------------+------+------+-----------+-----------+-----------+----------+
| 1          | A    | D    | m1        | m_d1      | k1        | en       |
| 1          | A    | D    | m2        | m_d2      | k2        | jp       |
+------------+------+------+-----------+-----------+-----------+----------+
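Reading from that layout is a single filtered query. A minimal sketch using the column names above, fetching a product's description in the client's language and falling back to English when no translation exists yet:

    -- Prefer the Japanese row for product 1, fall back to English.
    SELECT *
    FROM product_description
    WHERE product_id = 1
      AND lan_code IN ('jp', 'en')
    ORDER BY (lan_code = 'jp') DESC
    LIMIT 1;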
Basic RDBMS design wisdom would put a huge red flag on anything that dynamically alters the table structure. Relational databases are more than flexible enough to handle pretty much any situation without requiring such measures.
My suggestion for the structure would be to create a single Languages table to store the available languages, and then a Phrases table to store all the available phrases. Then use a Translations table to provide the actual translations of those phrases into the available languages. Something that might look like this:
Language
+----+---------+
| id | name |
+----+---------+
| 1 | English |
| 2 | Chinese |
+----+---------+
Phrase
+----+-------------+
| id | label |
+----+-------------+
| 1 | header |
| 2 | description |
+----+-------------+
Translations
+-------------+-----------+-----------------+
| language_id | phrase_id | translation |
+-------------+-----------+-----------------+
| 1 | 1 | Header |
| 1 | 2 | Description |
| 2 | 1 | 头 |
| 2 | 2 | 描述 |
+-------------+-----------+-----------------+
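Looking up all phrases for a chosen language is then one join away (a minimal sketch against the tables above):

    -- Every phrase label with its Chinese translation.
    SELECT p.label, t.translation
    FROM Phrase p
    JOIN Translations t ON t.phrase_id = p.id
    JOIN Language l ON l.id = t.language_id
    WHERE l.name = 'Chinese';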
For small to medium sized databases, there should be no performance issues at all even using the default database configurations. If you get to huge sizes (where you are counting the database size in terabytes) you can optimize the database in many ways to keep the performance level acceptable.
Recently I have been planning a system that allows a user to customize and add to a web interface. The app could be compared to a quiz creating system. The problem I'm having is how to design a schema that will allow for "variable" numbers of additions to be made to the application.
The first option that I looked into was just creating an object for the additions, then serializing it and putting it in its own column. The content wouldn't be edited often, so writes would be minimal; reads, however, would be very frequent (caching could be used to cut down on them).
The other option was using something other than MySQL or PostgreSQL, such as Cassandra. I've never used other databases before, but I would be interested in learning how to use them if they would improve the design of the system.
Any input on the subject would be appreciated.
Thank you.
*edit 29/3/14
Some information on the data being changed. For my idea above of using a serialized object, you could say that in the table I would store the name of the quiz, the number of points the quiz is worth, and then a column called quiz_data that would store the serialized object containing the information on the questions. So overall the object could look like this:
Questions(Array): {
    [1](Object): Question {
        Field-type(int): 1
        Field-title(string): "What's your gender?"
        Options(Array): {"Female", "Male"}
    }
    [2](Object): Question {
        Field-type(int): 2
        Field-title(string): "What's your name?"
    }
}
The structure could vary, of course, but generally I would store an integer to determine the type of each field in the quiz, then a field to hold the label for that field, and the options (if there are any) for it.
In this scenario I would advise looking at MongoDB.
However if you want to work with MySQL you can think about the entity-attribute-value model in your design. The EAV model allows you to design for entries that contain a variable number of attributes.
edit
Following your update on the datatypes you would like to store, you could map your design as follows:
+-------------------------------------+
| QuizQuestions |
+----+---------+----------------------+
| id | type_id | question_txt |
+----+---------+----------------------+
| 1 | 1 | What's your gender? |
| 2 | 2 | What's your name? |
+----+---------+----------------------+
+-----------------------------------+
| QuestionTypes |
+----+--------------+---------------+
| id | attribute_id | description |
+----+--------------+---------------+
| 1 | 1 | Single select |
| 2 | 2 | Free text |
+----+--------------+---------------+
+----------------------------+
| QuestionValues |
+----+--------------+--------+
| id | question_id | value |
+----+--------------+--------+
| 1 | 1 | Male |
| 2 | 1 | Female |
+----+--------------+--------+
+-------------------------------+
| QuestionResponses |
+----+--------------+-----------+
| id | question_id | response |
+----+--------------+-----------+
| 1 | 1 | 1 |
| 2 | 2 | Fred |
+----+--------------+-----------+
This would then allow you to dynamically add various different questions (QuizQuestions), of different types (QuestionTypes), and then restrict them with different options (QuestionValues) and store those responses (QuestionResponses).
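A sketch of how those tables join back together at read time (column names as above; this assumes, as row 1 of QuestionResponses suggests, that response stores a QuestionValues.id for single-select questions):

    -- Resolve single-select responses to the text of the chosen option.
    SELECT q.question_txt, v.value AS answer
    FROM QuestionResponses r
    JOIN QuizQuestions q ON q.id = r.question_id
    JOIN QuestionValues v ON v.question_id = q.id
                         AND v.id = CAST(r.response AS UNSIGNED)
    WHERE q.type_id = 1;

Free-text responses (type_id = 2) can be read straight out of QuestionResponses.response without the join to QuestionValues.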
I'm using CakePHP and I need to let users design their own forms (this will only be done once, during a setup wizard stage, with about 12 forms created in real use). Tables that will always be needed are users, groups, logs, and settings. Effectively this is a user-friendly database app.
I was thinking of creating Views and having two actual tables to store the forms and the subsequent data that comes in:
Forms
+----+---------+--------------------------------------------------------------+
| id | name    | structure                                                    |
+----+---------+--------------------------------------------------------------+
| 1  | Details | {id:{type:key}q2:{type:text;validation:{select:male..}..}    |
| 2  | Car     | {id:{type:key};car:{type:text;validation:{select:Honda... }} |
+----+---------+--------------------------------------------------------------+
The column "structure" would contain a json or xml string listing the fields, type and validation rules.
Forms_data
+----+---------+-----------+-----+--------+
| id | form_id | survey_id | key | value  |
+----+---------+-----------+-----+--------+
| 1  | 1       | 1         | q1  | Male   |
| 2  | 1       | 1         | q2  | 1/1/76 |
| 3  | 2       | 1         | Car | Honda  |
| 4  | 2       | 1         | Eng | Petrol |
| 5  | 1       | 2         | q1  | Fem    |
| 6  | 1       | 2         | q2  | 2/3/81 |
| 7  | 2       | 2         | Car | Ford   |
| 8  | 2       | 2         | Eng | Diesel |
+----+---------+-----------+-----+--------+
The Forms_data table would contain the data for each field of the form; survey_id identifies the person the forms are about, and one person being surveyed can have many forms filled out about them. key will be a varchar, but value will have to be sized for the largest possible data type (e.g. "Paragraph text").
Or should I let users (via sanitised params etc.) execute a "CREATE TABLE" and create real tables in the database, so that I get the full advantage of the system's querying and optimisation, and in CakePHP get all the magic functions working?
It depends on what you want to do with the data later and how much there will be. DBMSs such as MySQL handle large amounts of data very fast, and you can query the data in far more sophisticated ways than with your other choices. The trade-off is a little more complexity. I almost always come down on the side of a database solution; it gives you the most flexibility.
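For example, even with the key/value layout you sketched, queries stay manageable with conditional aggregation. A sketch against your Forms_data table (note the backticks around key, which is a reserved word in MySQL):

    -- Pivot form 1's key/value rows back into one row per survey subject.
    SELECT survey_id,
           MAX(CASE WHEN `key` = 'q1' THEN value END) AS q1,
           MAX(CASE WHEN `key` = 'q2' THEN value END) AS q2
    FROM Forms_data
    WHERE form_id = 1
    GROUP BY survey_id;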
Both users and pages on my website have IDs. When a user goes on a certain page, their userID and the pageID will be written to a MySQL table as such:
+--------+--------+
| userID | pageID |
+--------+--------+
| 3      | 1      |
| 2      | 1      |
| 3      | 2      |
+--------+--------+
etc...
In this table, called user_pages, I would end up with a bunch of raw data that can be turned into a recommendation engine. What I mean by a recommendation engine: I want to analyze historical data and be able to predict, based on a set of viewed pages, the next pages that a user may like. Let's say there is a strong correlation between visiting pages with IDs 4, 9, and 15 and then visiting the page with ID 3. If a user goes to pages 4, 9, and 15, the engine should recommend page 3.
I think I have all of the data input code necessary for creating this. How would I write something that analyzes the data for correlations between pages (i.e. almost everyone who visited page 5 also visited page 1), and somehow use that to predict the pages a user may end up liking?
Recommendation systems are a big part of AI research. I believe you are interested in a collection of algorithms called collaborative filtering. Since the Netflix Prize in 2007 this field has developed greatly. I would recommend going here and having a read. It explains the basic concepts of recommender systems in a succinct and clear way, and also provides a link to Java source code for an approach to the Netflix project, MemReader. You could examine this source code and extrapolate the basic algorithms for building a recommendation engine.
Alternatively, if you want a more mathematical explanation of the algorithms employed, go here.
It shouldn't take too long to implement at all.
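As a starting point, a simple item-to-item co-occurrence count ("users who viewed this page also viewed...") can be computed straight from the user_pages table described in the question. A minimal sketch, not a full collaborative-filtering implementation:

    -- For page 5, rank the other pages viewed by the same users.
    SELECT b.pageID AS recommended_page, COUNT(*) AS shared_viewers
    FROM user_pages a
    JOIN user_pages b ON b.userID = a.userID
                     AND b.pageID <> a.pageID
    WHERE a.pageID = 5
    GROUP BY b.pageID
    ORDER BY shared_viewers DESC
    LIMIT 10;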
This post posed a similar question: Advanced MySQL: Find correlations between poll responses
I think you would be able to generate a similar response if your primary data table had one additional field in it, specifically the ID of the page the user visited immediately following the current one.
Something like this:
+------+----------+--------------+----------+
| id | page_id | next_page_id | user_id |
+------+----------+--------------+----------+
| 1 | 1 | 1 | 1 |
| 2 | 1 | 2 | 2 |
| 3 | 1 | 2 | 3 |
| 4 | 1 | 2 | 4 |
| 5 | 2 | 3 | 1 |
| 6 | 2 | 3 | 2 |
| 7 | 2 | 3 | 3 |
| 8 | 2 | 4 | 4 |
| 9 | 3 | 5 | 1 |
+------+----------+--------------+----------+
Then you should be able to use a modified version of one of the SQL queries suggested there to generate a list of high-correlation recommendations between the current page and the next page.
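For instance, recommending the most common follow-up pages for the page a user is currently on could look like this (a sketch; the question doesn't name the table, so page_visits here is hypothetical, with the columns shown above):

    -- Rank the pages most often visited immediately after page 1.
    SELECT next_page_id, COUNT(*) AS times_followed
    FROM page_visits
    WHERE page_id = 1
    GROUP BY next_page_id
    ORDER BY times_followed DESC;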