Foreign Keys: Yah or Nah - php

Non-FK SCHEMA

human
human_id | name

alien
alien_id | name | planet

comment
comment_id | text
1          | hello

vote
to_id | to_type | who | who_type
1     | human   | 1   | alien
1     | comment | 1   | human

FK SCHEMA

human
human_id | name

alien
alien_id | name | planet

comment
comment_id | text
1          | hello

entity_id
entity_id | id | type
1         | 1  | human
2         | 1  | comment
3         | 1  | alien

vote
to_id | who_id
1     | 3
2     | 1
I want to ask which one is better?
The first one is without a foreign key; the second one is with a foreign key.
Won't the second one (with the foreign key) be slow, since I have to insert twice and do unnecessary joins just to get the human/alien name, etc.?
And what will happen if entity_id reaches the maximum of 18446744073709551615?

I suggest you add a supertype to unify Human and Alien and use this supertype in relationships. I also suggest separating votes on comments from votes on users into separate relationships. Consider the following tables:
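A minimal sketch of one way to lay out that supertype in MySQL; the table and column definitions here are illustrative assumptions, not a prescription:

CREATE TABLE user (
    user_id BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY
);

CREATE TABLE human (
    user_id BIGINT UNSIGNED PRIMARY KEY,  # doubles as the FK to the supertype
    name VARCHAR(100) NOT NULL,
    FOREIGN KEY (user_id) REFERENCES user (user_id)
);

CREATE TABLE alien (
    user_id BIGINT UNSIGNED PRIMARY KEY,
    name VARCHAR(100) NOT NULL,
    planet VARCHAR(100) NOT NULL,
    FOREIGN KEY (user_id) REFERENCES user (user_id)
);

CREATE TABLE comment (
    comment_id BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    text TEXT NOT NULL
);

# votes on users and votes on comments kept as separate relationships
CREATE TABLE user_vote (
    voted_id BIGINT UNSIGNED NOT NULL,
    voter_id BIGINT UNSIGNED NOT NULL,
    PRIMARY KEY (voted_id, voter_id),
    FOREIGN KEY (voted_id) REFERENCES user (user_id),
    FOREIGN KEY (voter_id) REFERENCES user (user_id)
);

CREATE TABLE comment_vote (
    comment_id BIGINT UNSIGNED NOT NULL,
    voter_id BIGINT UNSIGNED NOT NULL,
    PRIMARY KEY (comment_id, voter_id),
    FOREIGN KEY (comment_id) REFERENCES comment (comment_id),
    FOREIGN KEY (voter_id) REFERENCES user (user_id)
);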
This is the basic idea, though somewhat oversimplified. It allows a User to have both Human and Alien details. If required, disjoint subtypes can be enforced with a few additional columns and triggers.
You ask whether a foreign key and joins will be slower. An argument can be made that normalized databases are likely to be more efficient, since redundant associations are eliminated. In practice, performance has much more to do with effective use of indexes than avoiding joins.
If an auto_increment column overflows, the database engine will return an error and refuse to insert more rows. In this case you can adjust the column to use a larger type. When you exceed the space of the largest types in MySQL, it's probably time for a different (or even custom) solution.
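For example, a hypothetical repair for an INT auto_increment column that is running out of room, using the entity_id table from the question:

ALTER TABLE entity_id
    MODIFY COLUMN entity_id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT;

# BIGINT UNSIGNED tops out at 18446744073709551615, the largest integer type MySQL offers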

Related

Database Design. Pivot table with 3 foreign keys or two pivot tables?

I'm currently developing an application that allows a customer to register for an event through a custom form. That custom form will be built by the event admin for specific input by the customer.
The customer will go to the form, complete the input and pick a venue that will then display the available time-slots. I'm stuck with these two database designs and wondering which one is a better approach.
Pivot table with 3 foreign keys
Table 'Customers' -
| id | name |
Table 'Events' -
| id | name | form_fields (json)
Table 'Venues' -
| id | address | event_id |
Table 'Timeslots' -
| id | datetime | slots | venue_id |
Pivot Table 'Tickets' -
|id | customer_id | timeslot_id | event_id | form_data (json)
Two pivot tables
Table 'Customers' -
| id | name |
Table 'Events' -
| id | name | form_fields (json)
Table 'Venues' -
| id | address | event_id |
Table 'Timeslots' -
| id | datetime | slots | venue_id |
Pivot Table 'Tickets' -
| id | customer_id | timeslot_id |
Pivot Table 'EventCustomers' -
| id | customer id | event_id | form_data (json)
In addition, I will store the HTML markup of the custom form built by the admin in 'form_fields' (json), and have the customer complete the form and store the values in 'form_data' (json).
Is it also sensible to have the custom form and data saved in json?
Thank you.
To answer your question (even if it's a bit off topic):
None of the above.
To model data we must ask ourselves what the constraints are. Data is often easier to define by what it cannot do than by what it can do.
For example, can you have a Tickets record that:
Does not have a customer record ( customer_id = null )
Does not have a timeslot ( timeslot_id = null )? A timeslot is related to the venue, i.e. the location and time of the event.
Does not have an event ( event_id = null )
If you answered no to all of these, then we have to bring this data all together at one time (but not necessarily in the same table).
Now in my mind, it's pretty clear you could/should not have a ticket that:
wasn't assigned to a customer
does not have an event
does not have a timeslot
does not have a venue
whose count exceeds the number of slots for the event (this is the part you mostly missed)
So I will assume these are our "basic" constraints
Problems with your second case:
you could sell a ticket to a customer for a particular timeslot (at a venue) but for an unknown event: a record in Tickets and no record in the EventCustomers table
you could also have a customer registered to an event with no ticket or timeslot/venue: a record in EventCustomers and no record in the Tickets table
To me that seems somewhat illogical, and indeed it violates the constraints I outlined above.
Problems with your first case:
On the surface the first case looks fine as far as our constraints above go. But as I worked through it, some issues popped up. To understand these, as a general rule, we always want a unique index on all the foreign keys in a pivot table (aka a unique compound key).
So in the first case we want this (ideally):
Pivot Table 'Tickets' -
|id | customer_id | timeslot_id | event_id | form_data (json)
//for this table you would want this compound, unique index
Unique Key ticket (customer_id,timeslot_id,event_id)
This led me to the number of "slots", as this would imply that a customer could only have one ticket record per event and timeslot/venue. This relates back to the part I said you mostly missed: you have no way to track how many slots have been used. At first you might want to allow duplicates in this table. "We can just add some more tickets in, right?" you think, but that easy fix is no fix at all.
Exhibit A:
Pivot Table 'Tickets' -
|id | customer_id | timeslot_id | event_id | form_data (json)
| 1 | 1 | 1 | 1 | {}
| 2 | 1 | 1 | 1 | {}
While contemplating Exhibit A consider some basic DB design rules:
In a good DB design you always want (ideally):
a surrogate primary key, a key with no relation to the data; this is id
a natural key, a unique key that is part of the data. An example would be an email field attached to customer: you could make it unique to prevent adding duplicate customers. It's a piece of the data that is by its nature unique.
The first one (surrogate keys) allows you to use the data with no knowledge of the data itself. This is good as it gives us some separation of concerns, some abstraction between our code and the data. When you join two tables on their primary key/foreign key relationship, you don't need to know anything else about the data.
The second (natural key) is essential to prevent duplicate data. In the case of a pivot table the foreign keys, which are surrogate keys in their respective tables, become a natural key in the pivot table. These are now part of the data in the context of the pivot table and they uniquely and naturally identify that data.
Why is uniqueness so important?
Once you allow duplicates with the pivot tables you will run into several issues (especially if you have accessory data like the form_data):
How to tell those records apart?
Which of the duplicates is the authoritative copy, i.e. which one is in charge?
How do you synchronize the accessory data? If you need to change form_data, which record do you change it in? Only one? Which one? Both? How do you keep all the duplicates in sync?
What if an accidental duplicate gets entered? How will you know it was accidental and not a valid record?
Even if you knew it was an accidental duplicate, how do you decide which one should be removed? This goes back to which record is authoritative.
In short order, it really becomes a mess to deal with.
Finally (what I would suggest)
Table 'customer' -
| id | name |
Table 'event' -
| id | name | form_fields (json)
Table 'venue' -
| id | address | slots |
Table 'show' -
| id | datetime | venue_id | event_id |
Table 'purchase' -
| id | show_id | customer_id | slots | created |
Table 'ticket' ( customers_shows )
| id | purchase_id | guid |
I changed quite a few things (you can use some or all of these changes):
I changed the plural names to singular. I only use plurals for pivot tables that have no accessory data; such a name would be venues_events. This is because a record from customer is a single entity: I don't need to do any joins to get useful data. A record from our hypothetical venues_events would encompass two entities, so I would know right away that I need a join no matter what, as there is no other data besides the foreign keys.
Now in the case of show, you may notice it is essentially a pivot table. So why did I not name it venues_events as described above? The reason is that we have a datetime column in there, which is what I mean by "accessory" data. So in this case I could pull data just from show if I only wanted the datetime, and I would not need a join to do it. So it can be considered a single entity that has some Many to One relationships. (A Many to Many is a Many to One plus a One to Many; that's why we need pivot tables.) More on this table later.
Letter casing and spacing. I would suggest using all lowercase and no spaces. MySQL table names are case sensitive on some platforms, and names with spaces must always be quoted. It's just easier from the standpoint of not having to remember whether we named it venuesEvents or VenuesEvents or Venuesevents, etc. Consistency in naming convention is paramount in good DB design.
The above is largely opinion based; it's my answer, so it's my opinion. Deal with it.
Table show
I moved the slots column to venue. I am assuming that the venue determines how many slots are available; in my mind this is a physical attribute of the venue itself. For example, a movie theater has only X number of seats, and the showtime doesn't change how many seats there are. If those assumptions are correct, this saves us from having to remember how many seats a venue has every time we enter a show.
The reason I changed timeslot to show is that in both your original cases, there is some disharmony in the data model. Some things that just don't tie together as well as they should. For example your timeslots have no direct relation to the event.
Exhibit B (using your structure):
Table 'Events' -
| id | name | form_fields (json) |
| 1 | "Event A" | "{}" |
| 2 | "Event B" | "{}" |
Table 'Venues' -
| id | address | event_id |
| 1 | "123 ABC SE" | 1 |
| 2 | "123 AB SE" | 2 | //address entered wrong as AB instead ABC
Table 'Timeslots' -
| id | datetime | slots | venue_id |
| 1 | "2018-01-27 04:41:23" | 200 | 1 |
| 2 | "2018-01-27 04:41:23" | 200 | 2 |
In the above exhibit, we can see right away that we have to duplicate the address to create more than one event at a given venue. So if the address was entered wrong, it could be correct in some venues and incorrect in others. This is a real issue: programmatically, how do you know that AB was supposed to be ABC when the venue ID and event ID are both different for this record? Basically, how do you tell those records apart at run time? You will find that it is very difficult. The main problem is that you have too much data in Venues; you're trying to do too much with it, and the relationship doesn't fit the constraints of the data.
That's not even the worst of it, as a further problem creeps in: now that the venue_id is different, we can corrupt our Timeslots table and have 2 records in there at the same time for the same venue. Then, because the slots are tied to this table, we can also corrupt things downstream, such as selling more tickets than we should for that time and place. Everything just starts to fracture.
Even counting the number of shows at a given venue becomes a real challenge; this "flaw" is in both data models you presented.
The same Data in my Model
#with Unique compound Key datetime_venue_id( show.datetime, show.venue_id)
Table 'event' -
| id | name | form_fields (json) |
| 1 | "Event A" | "{}" |
#| 2 | "Event B" | "{}" |
Table 'venue' -
| id | address | slots |
| 1 | "123 ABC SE" | 200 |
Table 'show' -
| id | datetime | venue_id | event_id |
| 1 | "2018-01-27 04:41:23" | 1 | 1 |
#| 2 | "2018-01-27 04:41:23" | 1 | 2 |
As you can see, you no longer have the duplicate address. And while it looks like you could enter two shows for the same venue at the same time, that would only be possible without the compound unique key covering datetime and venue_id, a.k.a. Unique Key datetime_venue_id( datetime, venue_id). If you tried inserting that data with that constraint in place, MySQL would blow up on you. And if you included both inserts (event and show) in the same transaction (which is how I would do it, with the InnoDB engine), the whole thing would fail and get rolled back, and neither the event nor the show would be inserted.
Now you could try to argue that you could apply the same unique constraint to Exhibit B, but since the venue IDs differ there, it would not help.
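A sketch of that transactional insert, assuming InnoDB and the unique key datetime_venue_id from above:

START TRANSACTION;

INSERT INTO event (name, form_fields)
VALUES ('Event B', '{}');

# LAST_INSERT_ID() picks up the event id generated by the insert above;
# if this insert violates datetime_venue_id, issue ROLLBACK instead of
# COMMIT and neither row is kept
INSERT INTO `show` (datetime, venue_id, event_id)
VALUES ('2018-01-27 04:41:23', 1, LAST_INSERT_ID());

COMMIT;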
Anyway, show is our new main pivot table with foreign keys from event and venue and then the accessory data datetime.
Besides what I went over above, this setup gives us several advantages over the old structure, in this one table we now have access to:
what and where is the event (by joining on Table event )
when is the event ( timestamp )
how many slots available for the event (by joining on Table venue)
This centers everything around the show record. We can build a "show" independent of a customer or tickets. Because really a customer is not part of the show, and including them too soon (or too late, depending on how you look at it) in the data model muddies everything up.
Exhibit C
#in your first case
Pivot Table 'Tickets' -
|id | customer_id | timeslot_id | event_id | form_data (json)
#in your second case
Pivot Table 'Tickets' -
| id | customer_id | timeslot_id |
Pivot Table 'EventCustomers' -
| id | customer id | event_id | form_data (json)
As I said above, you can't put what I am calling a show (the what, where, and when) together without having a customer ID (in either of your data models). As you build your application around this later, it will become a huge issue, and it may be insurmountable at run time. Basically, you need all that data assembled and waiting on the customer_id. In both of your models that's not the case, and there is data you may not have easy access to. For example, in the first case (of the old structure), how would you know that timeslot_id=20 AND event_id=32 plus a customer equals a valid ticket? There is no direct relationship between timeslot and event outside of the pivot table that contains the customer. timeslot_id=20 could be valid for any event, and you have no way to know that.
It's so much easier to grab say show=32 and check how many slots are left and then just do the purchase record. Everything is ready and waiting for it.
Table purchase
I also added purchase, an order table; even if the "shows" are free, this table provides us with some great utility. This is also a pivot table, but it has some accessory data just like show does (slots and created).
This table
we bind the customer table to the show table here
we have a 'created' field, so you will know when this record was created, i.e. when the tickets were purchased
we also have the number of slots the customer will use; we can do an aggregate sum of slots grouped on show_id to see how many slots we have "sold". With one join from show to venue we can find out how many total slots this "show" has, using the same integer key (show.id) we aggregated on. Then it is a simple matter to compare the two; if you wanted to get fancy, you might be able to do it all in one query (see the sketch at the end of the SQL examples below).
Table ticket
Now you may or may not even need this table. It has a many to one relationship to table purchase, so one order can have many tickets. The records in here would be generated when a purchase is made, the number depending on what is in slots. The primary use of this table is just to provide a unique record for each individual ticket. For this I have a guid column, which can just be a unique hash. Basically this gives you some tracking ability on individual tickets; I don't really have enough information to know how this will work in your case. You may even be able to replace this table with JSON data if searching on it is not a concern, which would make maintenance easier in the case that some tickets are refunded. But as I hinted, this is very dependent on your particular use case.
Some brief SQL examples
Joining Everything (just to show the relationships):
SELECT
    {some fields}
FROM
    ticket AS t
JOIN
    purchase AS p ON t.purchase_id = p.id
JOIN
    customer AS c ON p.customer_id = c.id
JOIN
    `show` AS s ON p.show_id = s.id
JOIN
    venue AS v ON s.venue_id = v.id
JOIN
    event AS e ON s.event_id = e.id
Counting the used slots for a show:
SELECT
    SUM(slots) AS used_slots
FROM
    purchase
WHERE
    show_id = :show_id
GROUP BY show_id
Get the available slots for a show:
SELECT
    v.address,
    v.slots
FROM
    venue AS v
JOIN
    `show` AS s ON s.venue_id = v.id
WHERE
    s.id = :show_id
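And the one-query comparison hinted at under Table purchase, returning total and remaining slots for a show; a sketch, assuming the schema above:

SELECT
    v.slots AS total_slots,
    v.slots - COALESCE(SUM(p.slots), 0) AS remaining_slots
FROM
    venue AS v
JOIN
    `show` AS s ON s.venue_id = v.id
LEFT JOIN
    purchase AS p ON p.show_id = s.id
WHERE
    s.id = :show_id
GROUP BY
    s.id, v.slots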
It also works out nice that all the tables start with a different letter, which makes aliasing a bit easier.
-note- Watch out for reserved words: show is actually a reserved word in MySQL (which is why it is quoted with backticks in the queries above), and event is a keyword too, though a non-reserved one, so it should work as a table name. Some keywords still work in some parts of a query based on the context they're used in, and you can always quote them or come up with a workaround. Coincidentally, this is why I named purchase that instead of order, as "order" is a reserved word.
I hope that helps and makes sense. I probably spent way more time on this than I should have, but I design things like this for a living and I really enjoy the data architecture part of it, so I can get a bit carried away at times.

insert reference in comment table that joins two sections (blog and news)

I am finalizing a comments system and have a question.
I have a table for blogs and one for news, and they accept comments.
My comments table receives the text and the id.
I wonder if I need to (or should) store some sort of reference to know where the comment comes from.
table comment
id | id_content | text | ref
1 | 1 | test | blog
2 | 1 | test | news
thanks
Depending on the number of comments you expect to receive, there are two ways of doing this:
1 - parent_tbl, parent_id - in one big comment table
2 - two tables for comments with a parent_id - one for each primary table
Either way you need to index properly. The second will always be faster, but it doesn't scale well: if you later add, say, "press_releases", you now have to duplicate code, tables, and whatnot.
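A sketch of the first option (column types are my assumption); note that a polymorphic parent_id like this cannot be a real foreign key, since it points at a different table depending on parent_tbl:

CREATE TABLE comment (
    id INT AUTO_INCREMENT PRIMARY KEY,
    parent_tbl ENUM('blog', 'news') NOT NULL,
    parent_id INT NOT NULL,
    text TEXT NOT NULL,
    INDEX parent (parent_tbl, parent_id)  # covers the lookup below
);

# all comments for blog post 1
SELECT id, text
FROM comment
WHERE parent_tbl = 'blog' AND parent_id = 1;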

Followers/following database structure

My website has a followers/following system (like Twitter's). My dilemma is creating the database structure to handle who's following who.
What I came up with was creating a table like this:
id | user_id | followers | following
1 | 20 | 23,58,84 | 11,156,27
2 | 21 | 72,35,14 | 6,98,44,12
... | ... | ... | ...
Basically, I was thinking that each user would have a row with columns for their followers and the users they're following. The followers and people they're following would have their user ids separated by commas.
Is this an effective way of handling it? If not, what's the best alternative?
That's the worst way to do it. It's against normalization. Have two separate tables: Users and User_Followers. Users will store user information. User_Followers will look like this:
id | user_id | follower_id
1 | 20 | 45
2 | 20 | 53
3 | 32 | 20
User_Id and Follower_Id will be foreign keys referring to the Id column in the Users table.
There is a better physical structure than the ones proposed by other answers so far:
CREATE TABLE follower (
    user_id INT NOT NULL,      -- references user
    follower_id INT NOT NULL,  -- references user
    PRIMARY KEY (user_id, follower_id),
    UNIQUE INDEX (follower_id, user_id)
);
InnoDB tables are clustered, so the secondary indexes behave differently than in heap-based tables and can have unexpected overheads if you are not cognizant of that. Having a surrogate primary key id just adds another index for no good reason[1] and makes indexes on {user_id, follower_id} and {follower_id, user_id} fatter than they need to be (because secondary indexes in a clustered table implicitly include a copy of the PK).
The table above has no surrogate key id and (assuming InnoDB) is physically represented by two B-Trees (one for the primary/clustering key and one for the secondary index), which is about as efficient as it gets for searching in both directions[2]. If you only need one direction, you can abandon the secondary index and go down to just one B-Tree.
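For example, both directions become simple index lookups against that one table:

-- followers of user 42 (served by the primary/clustering key)
SELECT follower_id FROM follower WHERE user_id = 42;

-- users that user 42 follows (served by the secondary index)
SELECT user_id FROM follower WHERE follower_id = 42;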
BTW what you did was a violation of the principle of atomicity, and therefore of 1NF.
[1] And every additional index takes space, lowers the cache effectiveness and impacts the INSERT/UPDATE/DELETE performance.
[2] From followee to follower and vice versa.
One weakness of that representation is that each relationship is encoded twice: once in the row for the follower and once in the row for the followed user. That makes it harder to maintain data integrity and makes updates tedious.
I would make one table for users and one table for relationships. The relationship table would look like:
id | follower | following
1 | 23 | 20
2 | 58 | 20
3 | 84 | 20
4 | 20 | 11
...
This way adding new relationships is simply an insert, and removing relationships is a delete. It's also much easier to roll up the counts to determine how many followers a given user has.
No, the approach you describe has a few problems.
First, storing multiple data points as comma-separated strings has a number of issues. It's difficult to join on (and while you can join using LIKE, it will slow down performance), difficult and slow to search on, and can't be indexed the way you would want.
Second, if you store both a list of followers and a list of people following, you have redundant data (the fact that A is following B will show up in two places), which is both a waste of space, and also creates the potential of data getting out-of-sync (if the database shows A on B's list of followers, but doesn't show B on A's list of following, then the data is inconsistent in a way that's very hard to recover from).
Instead, use a join table. That's a separate table where each row has a user id and a follower id. This allows things to be stored in one place, allows indexing and joining, and also allows you to add additional columns to that row, for example to show when the following relationship started.
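A minimal sketch of such a join table (names are illustrative), with a timestamp for when the relationship started:

CREATE TABLE user_follows (
    follower_id INT NOT NULL,  -- the user doing the following
    followed_id INT NOT NULL,  -- the user being followed
    created_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,  -- MySQL 5.6+
    PRIMARY KEY (follower_id, followed_id)
);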

Which approach is best for storing a list of words in mysql that will later be used for statistics?

DETAILS
I have a quiz (let’s call it quiz1). Quiz1 uses the same wordlist each time it is generated.
If the user needs to, they can skip words to complete the quiz. I’d like to store those skipped words in mysql and then later perform statistics on them.
At first I was going to store the missed words in one column as a string. Each word would be separated by a comma.
|testid | missedwords | score | userid |
*************************************************************************
| quiz1 | wordlist,missed,skipped,words | 59 | 1 |
| quiz2 | different,quiz,list | 65 | 1 |
The problem with this approach is that I want to show statistics at the end of each quiz about which words were most frequently missed by users who took quiz1.
I’m assuming that storing missed words in one column as above is inefficient for this purpose as I'd need to extract the information and then tally it -(probably tally using php- unless I stored that tallied data in a separate table).
I then thought perhaps I need to create a separate table for the missed words
The advantage of the table below is that it should be easy to tally the words.
|Instance| missed word |
*****************************
| 1 | wordlist |
| 1 | missed |
| 1 | skipped |
Another approach
I could create a table with tallies and update it each time quiz1 was taken.
Testid | wordlist| missed| skipped| otherword|
**************************************************
Quiz1 | 1 | 1| 1| 0 |
The problem with this approach is that I would need a different table for each quiz, because each quiz will use different words. Also, information is lost, because only the tally is kept, not related data such as which user missed which words.
Question
Which approach would you use? Why? Alternative approaches to this task are welcome. If you see any flaws in my logic please feel free to point them out.
EDIT
Users will be able to retake the quiz as many times as they like. Their information will not be updated; instead, a new instance will be created for each quiz they retake.
The best way to do this is to have the word collection completely normalized. This way, analyses will be easy and fast.
quiz_words with wordID, word
quiz_skipped_words with quizID, userID, wordID
To get all the skipped words of a user:
SELECT wordID, word
FROM quiz_words
JOIN quiz_skipped_words USING (wordID)
WHERE userID = ?;
You could add a GROUP BY clause to get counts per word.
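For example, to rank the most frequently skipped words across all users (a sketch against the two tables above):

SELECT word, COUNT(*) AS times_skipped
FROM quiz_skipped_words
JOIN quiz_words USING (wordID)
GROUP BY wordID, word
ORDER BY times_skipped DESC;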
To get the count of a specific word:
SELECT COUNT(*)
FROM quiz_skipped_words
JOIN quiz_words USING (wordID)
WHERE word = ?;
According to database normalization theory, the second approach is better: ideally, one relational table cell should store only one value, which is atomic and unsplittable. Each word is an entity instance.
Also, I might suggest not creating per-quiz word tables, but instead reserving another column in the missed-word table for the quiz the word belongs to, and using this column as a foreign key to the quiz table. That way you avoid run-time table generation (which is a "bad practice" in database design).
Why not have a quiz table and a quiz_words table? The quiz_words table would store id, quizID, word as columns. Then, for each quiz instance, create records in the quiz_words table for each word the user skipped.
You could then run MySQL counts on the quiz_words table based on quizID and/or quiz type.
The best solution (from my POV) for what you are trying to achieve is the normalized approach:
test table, which has a test_id column and other columns
missed_words table, which has id (AI PK) and word (UQ); here you can also have a hits column that is incremented each time an association to this word is made in the test_missed_words table. This way the stats you want are already compiled, and you don't need to calculate them with a SELECT query
test_missed_words, which is a link table that has test_id and missed_word_id (composite PK)
This way you do not have redundant data (missed words), and you can easily extract the stats you want.
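A sketch of how the link insert and the hits counter could be kept in sync, assuming word has a unique index; ON DUPLICATE KEY UPDATE with LAST_INSERT_ID(expr) is a standard MySQL idiom for getting the id of the existing row back from an upsert:

INSERT INTO missed_words (word, hits)
VALUES ('skipped', 1)
ON DUPLICATE KEY UPDATE hits = hits + 1, id = LAST_INSERT_ID(id);

INSERT INTO test_missed_words (test_id, missed_word_id)
VALUES (:test_id, LAST_INSERT_ID());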
Keeping as much information as possible (and being able to compile user-specific stats later as well as overall stats now), I would create a table structure similar to:
Stats
quizId | userId | type| wordId|
******************************************
1 | 1 | missed| 4|
1 | 1 | skipped| 7|
Where type can either be an int defining the different types of actions, or a string representation, depending on whether you believe there will ever be more types. ^^
Then:
Quizzes
quizId | quizName|
********************
1| Quiz 1|
With the word list made for each quiz like:
WordList (pk: wordId)
quizId | wordId| word|
***************************
1 | 1 | Cat|
1 | 2 | Dog|
You would have your user table however you want, we are just linking the id from it in to this system.
With this, all id fields will be non-unique keys in the stats table. When a user skips or misses a word, you would add the id of that word to the stats table along with the relevant quizId and type. Getting stats this way is easy on a per-user basis, a per-word basis, or a per-type basis, or any combination of the three. It also makes the word list for each quiz easily available for building the quizzes. ^^
Hope this helps!

"horizontal" vs. "vertical" table design, SQL

Apologies if this has been covered thoroughly in the past - I've seen some related posts but haven't found anything that satisfies me with regards to this specific scenario.
I've been recently looking over a relatively simple game with around 10k players. In the game you can catch and breed pets that have certain attributes (i.e. wings, horns, manes). There's currently a table in the database that looks something like this:
-------------------------------------------------------------------------------
| pet_id | wings1 | wings1_hex | wings2 | wings2_hex | horns1 | horns1_hex | ...
-------------------------------------------------------------------------------
| 1 | 1 | ffffff | NULL | NULL | 2 | 000000 | ...
| 2 | NULL | NULL | NULL | NULL | NULL | NULL | ...
| 3 | 2 | ff0000 | 1 | ffffff | 3 | 00ff00 | ...
| 4 | NULL | NULL | NULL | NULL | 1 | 0000ff | ...
etc...
The table goes on like that and currently has 100+ columns, but in general a single pet will only have around 1-8 of these attributes. A new attribute is added every 1-2 months which requires table columns to be added. The table is rarely updated and read frequently.
I've been proposing that we move to a more vertical design scheme for better flexibility as we want to start adding larger volumes of attributes in the future, i.e.:
----------------------------------------------------------------
| pet_id | attribute_id | attribute_color | attribute_position |
----------------------------------------------------------------
| 1 | 1 | ffffff | 1 |
| 1 | 3 | 000000 | 2 |
| 3 | 2 | ffffff | 1 |
| 3 | 1 | ff0000 | 2 |
| 3 | 3 | 00ff00 | 3 |
| 4 | 3 | 0000ff | 1 |
etc...
The old developer has raised concerns that this will create performance issues, as users very frequently search for pets with specific attributes (e.g. must have these attributes, must have at least one in this colour or position, must have > 30 attributes). Currently the search is quite fast as there are no JOINs required, but introducing a vertical table would presumably mean an additional join for every attribute searched, and would also roughly triple the number of rows.
The first part of my question is whether anyone has any recommendations with regards to this. I'm not particularly experienced with database design or optimisation.
I've run tests for a variety of cases but they've been largely inconclusive - the times vary quite significantly for all of the queries that I ran (i.e. between half a second and 20+ seconds), so I suppose the second part of my question is whether there's a more reliable way of profiling query times than using microtime(true) in PHP.
Thanks.
This is called the Entity-Attribute-Value model, and relational database systems are really not suited for it at all.
To quote someone who deems it one of the five errors not to make:
So what are the benefits that are touted for EAV? Well, there are none. Since EAV tables will contain any kind of data, we have to PIVOT the data to a tabular representation, with appropriate columns, in order to make it useful. In many cases, there is middleware or client-side software that does this behind the scenes, thereby providing the illusion to the user that they are dealing with well-designed data.
EAV models have a host of problems.
Firstly, the massive amount of data is, in itself, essentially unmanageable.
Secondly, there is no possible way to define the necessary constraints -- any potential check constraints will have to include extensive hard-coding for appropriate attribute names. Since a single column holds all possible values, the datatype is usually VARCHAR(n).
Thirdly, don't even think about having any useful foreign keys.
Finally, there is the complexity and awkwardness of queries. Some folks consider it a benefit to be able to jam a variety of data into a single table when necessary -- they call it "scalable". In reality, since EAV mixes up data with metadata, it is a lot more difficult to manipulate data even for simple requirements.
The solution to the EAV nightmare is simple: Analyze and research the users' needs and identify the data requirements up-front. A relational database maintains the integrity and consistency of data. It is virtually impossible to make a case for designing such a database without well-defined requirements. Period.
The table goes on like that and currently has 100+ columns, but in general a single pet will only have around 1-8 of these attributes.
That looks like a case for normalization: break the table into multiple tables, for example one for horns, one for wings, all connected by foreign key to the main entity table, as sketched below. But do make sure that every attribute still maps to one or more columns, so that you can define constraints, data types, indexes, and so on.
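A sketch of that normalization; the names and types are assumptions based on the columns shown in the question:

CREATE TABLE pet (
    pet_id INT AUTO_INCREMENT PRIMARY KEY
);

CREATE TABLE pet_wings (
    pet_id INT NOT NULL,
    position TINYINT NOT NULL,  -- the 1 in wings1, the 2 in wings2, ...
    wing_type INT NOT NULL,     -- the value formerly in wings1/wings2
    hex CHAR(6) NOT NULL,       -- formerly wings1_hex/wings2_hex
    PRIMARY KEY (pet_id, position),
    FOREIGN KEY (pet_id) REFERENCES pet (pet_id)
);

-- pet_horns, pet_manes, etc. follow the same pattern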
Do the join. The database was specifically designed to support joins for your use case. If there is any doubt, then benchmark.
EDIT: A better way to profile the queries is to run the query directly in the MySQL interpreter on the CLI. It will give you the exact time it took to run the query. The PHP microtime() function will also introduce other latencies (Apache, PHP, server resource allocation, network if connecting to a remote MySQL instance, etc.).
What you are proposing is called 'normalization'. This is exactly what relational databases were made for - if you take care of your indexes, the joins will run almost as fast as if the data were in one table.
Actually, they might even go faster: instead of loading 1 table row with 100 columns, you can just load the columns you need. If a pet only has 8 attributes, you only load those 8.
This question is very subjective. If you have the resources to update the middleware to reflect each column that gets added then, by all means, go with horizontal: there is nothing safer and easier to learn than a fixed structure. One thing to remember: anytime you update a table's structure, you have to update each of its dependencies, unless there is some catch-all like SELECT *, which I suggest you stay away from unless you are just dumping data to a screen and the order of columns is irrelevant.
With that said, vertical is the way to go if you don't have all of your requirements in place or don't want to update code in n number of places. Most of the time you just need storage containers to store data. I would segregate things like numbers, dates, binary, and text into separate columns to preserve some data integrity, but there is nothing wrong with vertical storage, as long as you know how to formulate queries to bring the data back in the appropriate format.
FYI, WordPress uses vertical data storage for the majority of the dynamic content it stores for its millions of users.
The first thing, from a database point of view, is that your data should grow vertically, not horizontally, so adding a new column is not good design at all. Second, this is a very common scenario in DB design, and the way to solve it is to create three tables: the 1st for Pets, the 2nd for Attributes, and a 3rd mapping table between these two. Here is the example:
Table 1 (Pet)
Pet_ID | Pet_Name
1 | Dog
2 | Cat
Table 2 (Attribute)
Attribute_ID | Attribute_Name
1 | Wings
2 | Eyes
Table 3 (Pet_Attribute)
Pet_ID | Attribute_ID | Attribute_Value
1 | 1 | 0
1 | 2 | 2
About Performance:
Pet_ID and Attribute_ID form a composite primary key, which is indexed (http://developer.mimer.com/documentation/html_92/Mimer_SQL_Engine_DocSet/Basic_concepts4.html), so the search is very fast. This is the right way to solve the problem. I hope it is clear to you now.
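For example, a sketch of the attribute search, which touches only the indexed key columns plus the value:

SELECT p.Pet_ID, p.Pet_Name
FROM Pet AS p
JOIN Pet_Attribute AS pa ON pa.Pet_ID = p.Pet_ID
WHERE pa.Attribute_ID = 1  -- Wings
  AND pa.Attribute_Value = 2;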
