Normalization or Alternative with MySQL - php

I'm building a site using PHP and MySQL that needs to store a lot of properties about users (for example their DOB, height, weight, etc.), which is fairly simple: a single table with lots of columns, almost all of them required.
However, the system also needs to store other information, such as their spoken languages, instrumental abilities, etc. All in all there are over a dozen such characteristics. By default I assumed I'd create a separate table for each (called maybe languages) and then a link table with a composite primary key (user_id, language_id).
The problem I foresee, though, is when visitors attempt to search for users using these criteria. The dataset we're looking to use will have over 15,000 users at the time of launch, and the primary function will be searching and refining users. That means hundreds of queries daily, and the prospect of using queries with up to a dozen or more JOINs in them is not appealing.
So my question is: is there an alternative that's going to be more efficient? One way I was thinking of is storing the M2M values as a CSV of IDs in the user table and then running a LIKE query against it. I know LIKE isn't the best, but is it better than a join?
Any possible solutions will be much appreciated.

Do it with joins. Then, if your performance goals are not met, try something else.

Start with a normalized database (e.g. a languages table, linked to the users table by a mapping table) to make sure your data is represented cleanly and logically.
If you have performance problems, examine your queries and make sure you have suitable indexes.
If you dislike repeatedly coding up queries with many joins, define some views.
If views are very slow to query, consider materialized views (MySQL has no native support for these, but they can be emulated with summary tables refreshed on a schedule).
If you have several thousand records and a few hundred queries per day (really, that's pretty small and low-usage), these techniques will allow your site to run at full speed, with no compromise on data integrity. If you need to scale to many millions of records and millions of queries per day, even these techniques may not be enough; in that case, investigate caching and denormalization.
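As a minimal sketch of that first step (table, column, and id values here are illustrative, not prescribed):

CREATE TABLE users (
    user_id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    dob     DATE NOT NULL,
    height  SMALLINT UNSIGNED NOT NULL
    -- ...the other required single-valued properties...
);

CREATE TABLE languages (
    language_id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    name        VARCHAR(50) NOT NULL UNIQUE
);

-- Mapping table with a composite primary key, as proposed in the question.
CREATE TABLE user_languages (
    user_id     INT UNSIGNED NOT NULL,
    language_id INT UNSIGNED NOT NULL,
    PRIMARY KEY (user_id, language_id),
    FOREIGN KEY (user_id)     REFERENCES users (user_id),
    FOREIGN KEY (language_id) REFERENCES languages (language_id)
);

-- A search only joins the mapping tables for the criteria the visitor
-- actually picked; the id 3 here is a hypothetical value for "French".
SELECT u.*
FROM users u
JOIN user_languages ul ON ul.user_id = u.user_id
WHERE ul.language_id = 3;

Note that a visitor who filters on three characteristics generates three joins, not a dozen; the worst case only occurs when every single criterion is used at once.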

Related

MySQL table separation for virtually same data but different user groups

First of all, I apologize if a similar question has been asked and answered. I searched and found similar questions, but not one quite close enough.
My question is basically whether or not it is a good idea to separate tables of virtually the same data in my particular circumstance. The tables track data for two very different groups: product licensing data for individual users and product licensing data for enterprise users. I am thinking of separating them into two tables so that the user verification process runs faster, especially for individual users, since the number of records is significantly lower (e.g. ~500 individual records vs ~10,000 enterprise records). Lastly, there is a significant difference between the user types that isn't apparent in the table structure: individual users all have a fixed number of activations, while enterprise users may have unlimited activations, and the purpose of tracking there is more for activation stats.
The reason I think separating the tables would be a good idea is that each table would be smaller, resulting in faster queries (at least I think it would...). On the other hand, I will have to do two queries to obtain analytical data. Additionally, I may wish to change the data I am tracking from time to time, and obviously this is more of a pain with two duplicate tables.
I am guessing the query time difference is probably insignificant, even with tens of thousands of records? However, I would like to hear people's thoughts on this (mainly regarding efficiency and overall best practices) if they would be so kind as to share.
Thanks in advance!
When designing your database structure you should try to normalize your data as much as possible. So, to answer your question:
"whether or not it is a good idea to separate tables of virtually the same data in my particular circumstance."
If you normalize your database correctly, the answer is no, it's not a good idea to create two tables with almost identical information. With normalization you should be able to separate similar data out into mapping tables, which will allow you to create more complex queries that will run faster.
A very basic example of this kind of normalization would be: you have a table of users, and in it a column for role. Instead of storing the literal word "admin" or "member", you store an id that maps to another table called roles, where 1 = admin and 2 = member. The idea is that it's more efficient to store a repeated id than repeated words like "admin" and "member".
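As a rough sketch of that role lookup (names and ids are illustrative):

CREATE TABLE roles (
    role_id TINYINT UNSIGNED PRIMARY KEY,
    name    VARCHAR(20) NOT NULL
);

INSERT INTO roles (role_id, name) VALUES (1, 'admin'), (2, 'member');

CREATE TABLE users (
    user_id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    role_id TINYINT UNSIGNED NOT NULL,
    FOREIGN KEY (role_id) REFERENCES roles (role_id)
);

-- The word is resolved from the id only when it needs to be displayed:
SELECT u.user_id, r.name AS role
FROM users u
JOIN roles r ON r.role_id = u.role_id;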

How to scale mysql tables for growth

So I'm working on a site that will replace an older site with a lot of traffic, and I will also have a lot of data in the DB, so my question to you guys is: what is the best way to design MySQL tables for growth?
I was thinking of splitting, let's say, a table with 5,000,000 rows into 5 tables with 1,000,000 rows each and creating relationships between them, but I guess this isn't a good option, since I will spend a lot of resources and time figuring out which table my data is in.
Or can you guys give me some tips, maybe some useful articles?
No, you're absolutely right about the relationships. This technique is called normalization, where you define separate tables because those individual tables change over time independently of the other tables.
So if you have a hotel database that keeps track of rooms and guests, you know normalization is necessary because rooms and guests are independent of each other.
But you will have foreign keys/surrogate keys in each table (for instance, room_id) that relate the particular guest to the particular room they occupy.
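A minimal version of that hotel example might look like this (columns are invented for illustration):

CREATE TABLE rooms (
    room_id     INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    room_number SMALLINT UNSIGNED NOT NULL
);

CREATE TABLE guests (
    guest_id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    name     VARCHAR(100) NOT NULL,
    room_id  INT UNSIGNED NULL,  -- the foreign key relating a guest to a room
    FOREIGN KEY (room_id) REFERENCES rooms (room_id)
);

Rooms and guests each change independently, and the room_id key ties a particular guest to a particular room only when needed.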
Normalization, in your case, could help you optimize those 5,000,000 rows, as it would not be efficient to loop over millions of rows and retrieve the entire data set each time.
Here is a strong example of why normalization is essential in database management.
Partitioning, as mentioned in a comment, is one way to go, but the first path to check out is whether you can break the tables with large amounts of data down into workable chunks based on something in the data itself.
For instance, let's say you have a huge table of contacts. You can break the data down into contacts whose names start with a-d, e-j, and so on. Then when you add records you just make sure each one goes into the correct table (I'd suggest checking out stored procedures for handling this, so that the logic is kept inside the database). You'd probably also set up stored procedures to read data from those same tables. By doing this, however, you have to realize that auto-incrementing IDs won't work correctly, as you won't be able to maintain unique IDs across all of the tables without doing some of that work yourself.
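A rough sketch of that routing logic, assuming hypothetical contacts_a_d and contacts_e_j tables that you have created with identical structures:

CREATE TABLE contacts_a_d (
    contact_id BIGINT UNSIGNED PRIMARY KEY,  -- id generated by the application,
                                             -- since AUTO_INCREMENT can't stay
                                             -- unique across several tables
    last_name  VARCHAR(100) NOT NULL
);
-- contacts_e_j etc. would be defined the same way.

DELIMITER //
CREATE PROCEDURE add_contact(IN p_id BIGINT UNSIGNED, IN p_last_name VARCHAR(100))
BEGIN
    -- Route the row to whichever table owns its alphabetic range.
    IF LEFT(p_last_name, 1) BETWEEN 'a' AND 'd' THEN
        INSERT INTO contacts_a_d (contact_id, last_name) VALUES (p_id, p_last_name);
    ELSEIF LEFT(p_last_name, 1) BETWEEN 'e' AND 'j' THEN
        INSERT INTO contacts_e_j (contact_id, last_name) VALUES (p_id, p_last_name);
    END IF;
END //
DELIMITER ;

Matching read procedures would pick the table the same way, so the routing rule lives in exactly one place.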
These, of course, are the simple solutions. There are tons of options for large data sets, including looking at other storage solutions, clustering, partitioning, and so on. Trying some of these things manually yourself can give you a bit of an understanding of what the "manual solutions" involve.

MySQL many tables or few tables

I'm building a very large website; currently it uses around 13 tables, and by the time it's done it should be about 20.
I came up with the idea of changing the preferences table to use ID, Key, Value instead of many columns; however, I have recently thought I could also store other data inside that table.
Would it be efficient / smart to store almost everything in one table?
Edit: Here is some more information. I am building a social network that may end up with thousands of users. MySQL Cluster will be used when the site is launched; for now I am testing on a development VPS, though everything will be moved to a dedicated server before launch. I know barely anything about NDB, so this should be fun :)
This model is called EAV (entity-attribute-value).
It is usable for some scenarios; however, it's less efficient due to larger records, a larger number of joins, and the impossibility of creating composite indexes on multiple attributes.
Basically, it's used when entities have lots of attributes that are extremely sparse (rarely filled) and/or cannot be predicted at design time, like user tags, custom fields, etc.
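A minimal EAV sketch (names are illustrative) that shows where the cost comes from:

CREATE TABLE user_attributes (
    user_id   INT UNSIGNED NOT NULL,
    attribute VARCHAR(50)  NOT NULL,   -- e.g. 'height', 'spoken_language'
    value     VARCHAR(255) NOT NULL,   -- every value is stored as a string
    PRIMARY KEY (user_id, attribute)
);

-- Filtering on two attributes already requires a self-join, and no single
-- composite index can cover both conditions at once:
SELECT a.user_id
FROM user_attributes a
JOIN user_attributes b ON b.user_id = a.user_id
WHERE a.attribute = 'height' AND a.value = '180'
  AND b.attribute = 'spoken_language' AND b.value = 'French';

Each extra search criterion adds another self-join, which is why EAV tends to be reserved for sparse, unpredictable attributes rather than core data.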
Granted, I don't know too much about large database designs, but from what I've seen, even extremely large applications store their data in a very small number of tables (20GB per table).
For me, I would rather have more info in one table, as it means the data is not littered everywhere and I don't have to perform operations on multiple tables. Though one table can also get messy; usually, for me, each object gets its own table, where an object is something from your application logic, like a User class or a BlogPost class.
I guess what I'm trying to say is: do whatever makes sense. Don't put information about the same thing in two different tables, and don't put information about two things in one table. Stick to one table describing one kind of object (this is difficult to explain, but if you do object-oriented programming, you should understand).
Nope. Preferences should be stored as they are, in the users table.
Other kinds of data are different; private messages, for example, can't be stored in the users table...
This way you don't have to think about joining different tables...
I would first say that 20 tables is not a lot.
In general (it's hard to say from the limited info you give), the key-value model is not as efficient speed-wise, though it can be more efficient space-wise.
I would definitely not do this. Basically, if you have a large set of data stored in a single table, you will see performance issues pretty quickly when constantly querying that same table. Then think about the joins and the complexity of the queries you're going to need (depending on your site)... not a task I would personally like to undertake.
Using multiple tables splits the data into smaller sets, so the resources required for a query are lower, and as an extra bonus it's easier to program!
There are some applications for doing this, but they are rare: more or less when you have a large table with a ton of columns, most of which aren't going to have a value.
I hope this helps :-)
I think 20 tables in a project is not a lot. I do see your point and your interest in using EAV, but I don't think it's necessary. I would stick to tables in 3NF with proper FK relationships etc. and you should be OK :)
The simple answer is that 20 tables won't make it a big DB, and MySQL won't need any special optimization for that. So focus on clean DB structures and normalization instead.

Will more MySql tables slow down searches on MySql database?

I have a classifieds website, and I am thinking about redesigning the database a bit.
Currently I have 7 tables in the db. One table for each "MAIN CATEGORY".
For example, I have a "VEHICLES" table which holds all information about the following categories of classifieds:
cars
mc
mopeds/scooters
trucks
boats
etc etc
However, users on the website usually search in specific categories. For example, the user chooses the "cars" category to search in, and enters a keyword.
My code today will search the entire VEHICLES table for all records with the field "category" equal to "cars", and then get their details:
"SELECT * FROM vehicles WHERE category='cars' AND ..." // plus a lot of other conditions; just an example, not tested
I am thinking about making a table now, for each of these "sub-categories".
Ie, one for cars, one for mc, one for trucks etc, so that search isn't done through information which isn't needed.
Will this increase search speed? Because I have calculated that I will need at least 30 or so tables for this.
Thanks
With a properly indexed table and a "reasonable" number of rows, you will not gain much speed from this approach. Anything you gain in speed of execution you will lose in time-to-market because your programming will become more complicated.
Do not perform this optimization unless and until you encounter a performance problem in testing with a representative set of data.
It will increase the speed of a search within the same category. It will potentially slow down queries where you need aggregate information from the different categories. You need to decide which is the best option for your site.
How many records do you have in total in the vehicles table? It's quite likely that adding proper indexes will greatly increase the speed of your searches.
Check out the EXPLAIN query option in MySQL. Understanding it will help you optimize your database a lot with indexes.
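For example, assuming the category column from the question:

-- An index matching the common filter:
CREATE INDEX idx_vehicles_category ON vehicles (category);

-- Prefix a query with EXPLAIN to see whether MySQL actually uses it:
EXPLAIN SELECT * FROM vehicles WHERE category = 'cars';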
Performance optimization is as much art as science, and to really understand what's the best option requires that you do some benchmarking; anyone offering a definitive answer given the available information is just wrong. That said, a few thoughts on your situation:
You don't say what type your category column is now, but if it's a string type, it's probably using more space than other options would, thus making the table larger. Proper indexing can help tremendously with speed, but a larger table with larger indexes will always work to do just the opposite.
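One common way to shrink it, sketched under the assumption that category is currently a VARCHAR:

CREATE TABLE categories (
    category_id TINYINT UNSIGNED PRIMARY KEY,
    name        VARCHAR(30) NOT NULL UNIQUE
);

-- A one-byte category_id replaces the repeated string in every row,
-- shrinking both the table and any index on the column:
ALTER TABLE vehicles
    ADD COLUMN category_id TINYINT UNSIGNED NOT NULL,
    ADD FOREIGN KEY (category_id) REFERENCES categories (category_id);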
As already mentioned by someone else, your queries within a category will be faster in the simple case of a category search. How much faster depends on how much data you have in your current table, and the increases may be negated if you have to join in other tables to satisfy the need for all the other conditions to which you alluded. OTOH, it may actually speed things up in certain join cases (e.g., if you were doing self-joins with your all-encompassing table).
If you're working with a lot of data, splitting into multiple tables can greatly ease backups.
Splitting into multiple tables may also make it easier to shard your data across multiple servers for performance reasons. Similarly, it may make replication setups easier to keep running.
If you're tracking data that's category-specific, separate tables enable you to better normalize your database, and you'll likely reap some nice performance as a result of using much smaller tables.
Splitting obviously means modifying your code. If your code is of the old, creaky type, you may very well achieve a performance gain from the clean-up. Of course, there's also the risk that you'll break something....
Check your indexes. Bad indexes are a very common cause of poor performance but are relatively easy to fix with a bit of quality time spent on self-education. MySQL's EXPLAIN can tell you whether your queries are using the indexes, and the index stats (look in the docs) can tell you how efficiently your indexes are working.
Finally, speaking of code, check yours. Try experimenting with a few approaches, regardless of how the database is set up. For example, it may be quicker to do a couple of separate queries and join the results in code than to do the join in the database. Likewise, it's often quicker to do things like sorts in code, particularly in cases where a join or something means the database would have to create a temporary file/table. Again, check the EXPLAIN output, and if you can't eliminate a problem area in your queries, see if it helps to simplify the queries and do more work in the code. This can be particularly beneficial in the common case where the web server has more resources to spare than the database server.
There are many more factors to consider. Ultimately, though, the best way to make these decisions is not to spend time pondering theories but to put both methods to the test. Create some test databases and benchmark the sort of queries you'd run most often, with and without simulated load. You'll get your answer.
If you are using PHP, try something like this (using the old mysql_* API; here $sql is assumed to hold whatever SELECT statement you are testing):
$query = mysql_query($sql);
$tempvalue = array();
while ($row = mysql_fetch_assoc($query)) {
    $tempvalue[] = $row; // collect all of the rows first
}
and then loop over the results with a foreach:
foreach ($tempvalue as $key => $value) {
    // write out the table row here using the columns in $value
}
Maybe MySQL isn't slow and the problem is in the code.
Testing doesn't kill anyone =)

Need Help regarding Optimization

First of all, I am an autodidact, so I don't have great know-how about optimization and such. I created a social networking website.
It contains 29 tables right now. I want to extend its functionality by adding things like yellow pages, events, etc., to make it more like a portal.
Now the question is should I simply add the tables in the same database or should I use a different database?
And in case I create a new database, I also want users to be able to comment on business listings etc., just like reviews. So how will I be able to pull out entries, since the reviews will be in one database and the user details in the other?
Is it possible to join tables across 2 different databases?
You can join tables in separate databases by fully qualifying the name, but the real question is why you want the information in separate databases. If the information you are storing all relates together, it should go in one database unless there is a compelling (usually performance-related) reason against it.
The main reason I could see for separating your YellowPages out is if you wished to have one YellowPages accessible to several different, non-interacting websites. That said, presumably you wouldn't want cross-talk comments on the listings, so comments would need to be stored in the website databases rather than the YellowPages database. And that just sounds like a maintenance nightmare.
Don't optimize until you need to.
If performance is ok, go for the easiest to maintain solution.
Monitor the performance of your site and if it starts to get slow, figure out exactly what is causing the slowdown and focus on performance on that section only.
You definitely can query and join tables from two different databases - you just need to specify the tables in a dbname.tablename format.
SELECT a.username, b.post_title
FROM dbOne.users a INNER JOIN dbTwo.posts b USING (user_id)
However, it might make management and maintenance a lot more complicated. For example, you'll have to track which table belongs in which database, and you'll continually need to add the database names into all your queries. When it comes time to back up the data, your work will increase there as well. A single MySQL database can easily contain hundreds of tables, so I see no benefit in splitting it up; just stick with one.
You can prove an algorithm is the fastest it can be; math.h and the C libraries have been heavily optimized for half a century, and Perl's data structures are another example of very advanced optimization. Just avoid putting everything on one line, to make debugging easier. There are conventions; try to keep every programmer on the team following the same one, because which convention is "right" matters less than being consequent and consistent. Performance is the last thing you work on; security and intelligibility are the top priorities. And remember that big-O notation depends on the software alone: suboptimal software can be faster than optimal software on different hardware, and a totally bug-infested spaghetti mess with no structure can respond many times faster than the most provably optimal program, depending on the hardware it runs on.
