How should I store trivial data in a database? - php

I have a web application which allows people to upload flipbook animations. There are always a lot of requests for new features such as:
Tagging users (Like tagging a person in a Facebook post)
Tagging their flipnotes (think: Tagging YouTube videos with categories, or tagging Stack Exchange questions: database-design)
Linking their flipnotes to multiple relevant channels for a better chance at finding viewers
For things like follows/subscriptions, I have a table called follows.
+---------------+-------------+------+-----+---------+----------------+
| Field         | Type        | Null | Key | Default | Extra          |
+---------------+-------------+------+-----+---------+----------------+
| followID      | int(11)     | NO   | PRI | NULL    | auto_increment |
| followingUser | varchar(16) | NO   |     | NULL    |                |
| followedUser  | varchar(16) | NO   |     | NULL    |                |
+---------------+-------------+------+-----+---------+----------------+
I'm rather hesitant to start creating dozens of tables to deal with metadata, however. There's just too much of it. I'm also hesitant about using TEXT datatypes to store, say, arrays of tags. I've heard bad things about their efficiency, and I'm dealing with hundreds of thousands of rows in one part of the site, and almost four million in a single table in another. Small inefficiencies don't always stay small when you consider scalability. Take ORDER BY RAND() as an example.
So, what approaches might I consider for storing and organizing trivial information in my database? I could significantly improve the user experience if I were able to keep track of more information.
I'm using PHP and MySQL.

The simplest and most efficient way to do tagging is to create a master list of tags and then use a many-to-many relationship to record which tags are applied to each of your FLIPBOOKS. Consider this ERD:
The FLIPNOTE_TAG table is just a simple intersection that contains foreign keys to your FLIPNOTE table and your TAG master list. How you get tags depends on your business rules. In Stack Exchange, tags are a moderated list of items. On YouTube, they are just dumb strings that can be added pretty much at will by users.
Either way, having a master list of tags makes searching for distinct tags to follow or view much easier.
Also, unlike doing a partial text match search on arrays of strings, which is painfully slow at any reasonable scale, searching the foreign key index of an intersection table for one or more tag keys is very fast and scalable.
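A minimal sketch of that layout in MySQL (the table and column names below are assumptions, not taken from the ERD):

CREATE TABLE tag (
  tag_id INT AUTO_INCREMENT PRIMARY KEY,
  name   VARCHAR(64) NOT NULL UNIQUE
);

CREATE TABLE flipnote_tag (
  flipnote_id INT NOT NULL,     -- foreign key to the FLIPNOTE table
  tag_id      INT NOT NULL,     -- foreign key to the TAG master list
  PRIMARY KEY (flipnote_id, tag_id),
  KEY idx_tag (tag_id)          -- makes "all flipnotes with tag X" fast
);

-- Finding everything with a given tag walks two indexes instead of scanning text:
SELECT ft.flipnote_id
FROM flipnote_tag ft
JOIN tag t ON t.tag_id = ft.tag_id
WHERE t.name = 'animation';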

I think the follows table is quite well structured, to be honest, but you only need either followingUser or followedUser (I would go for the latter and call it userBeingFollowed for better clarity), as if Person A is following Person B then it's automatically true that Person B is being followed by Person A, so you don't need both. Also, you need a timestamp column to record the time that the follow took place, and you should store it as a long (or BIGINT).
The SQL statement is a simple INSERT query which is very easy to understand.
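As a rough sketch, using the follows table from the question plus the suggested timestamp column (the column name followedAt is hypothetical):

ALTER TABLE follows ADD COLUMN followedAt BIGINT NOT NULL;

INSERT INTO follows (followingUser, followedUser, followedAt)
VALUES ('personA', 'personB', UNIX_TIMESTAMP());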

Related

PHP & MySQL performance - One big query vs. multiple small

For a MySQL table I am using the InnoDB engine and the structure of my tables looks like this:
Table user
 id | username   | etc...
----|------------|--------
  1 | bruce      | ...
  2 | clark      | ...
  3 | tony       | ...
Table user-emails
 id | person_id | email
----|-----------|--------------------------
  1 | 1         | bruce@wayne-ent.com
  2 | 1         | ceo@wayne-ent.com
  3 | 2         | clark.k@daily-planet.com
To fetch data from the database I've written a tiny framework. E.g. on __construct($id) it checks whether there is a person with the given id; if so, it creates the corresponding model and saves only the id field to an array. During runtime, if I need another field from the model, it fetches only that value from the database, saves it to the array and returns it. The same goes for the emails field: the code accesses the user-emails table and gets all the emails for the corresponding user.
For small models this works alright, but now I am working on another project where I have to fetch a lot of data at once for a list and that takes some time. Also I know that many connections to MySQL and many queries are quite stressful for the server, so..
My question now is: Should I fetch all data at once (with left joins etc.) while constructing the model and save the fields as an array or should I use some other method?
Why do people insist on referring to entities and domain objects as "models"?
Unless your entities are extremely large, I would populate the entire entity, when you need it. And, if "email list" is part of that entity, I would populate that too.
As I see it, the question is more related to "what to do with tables that are related by foreign keys".
Let's say you have Users and Articles tables, where each article has a specific owner associated via a user_id foreign key. In this case, when populating the Article entity, I would only retrieve the user_id value instead of pulling in all the information about the user.
But in your example with Users and UserEmails, the emails seem to be a part of the User entity, and something that you would often call via $user->getEmailList().
TL;DR
I would do this in two queries when populating the User entity:
select all you need from the Users table and apply it to the User entity
select all the user's emails from the UserEmails table and apply them to the User entity.
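As a rough sketch in SQL, using the table and column names from the question (the ? placeholders would be bound from PHP, e.g. with a PDO or mysqli prepared statement):

SELECT id, username FROM user WHERE id = ?;
SELECT email FROM `user-emails` WHERE person_id = ?;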
P.S.
You might want to look at the data mapper pattern for the "how" part.
In my opinion you should fetch all your fields at once, and divide queries in a way that makes your code easier to read/manage.
When we're talking about one query or two, the difference is usually negligible unless the combined query (with JOINs or whatever) is overly complex. Usually an index or two is the solution to a very slow query.
If we're talking about one vs hundreds or thousands of queries, that's when the connection/transmission overhead becomes more significant, and reducing the number of queries can make an impact.
It seems that your framework suffers from premature optimization. You are hyper-concerned about fetching too many fields from a row, but why? Do you have thousands of columns or something?
The time-consuming part of your query is almost always the lookup, not the transmission of data. You are causing the database to do the "hard" part over and over again as you pull one field at a time.
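To illustrate with the question's user table (created_at is a hypothetical extra column, just for the example): each per-field fetch repeats the same indexed lookup, while a single SELECT does that lookup once.

-- What fetching one field at a time effectively does: one lookup per field
SELECT username   FROM user WHERE id = 42;
SELECT created_at FROM user WHERE id = 42;

-- The same data with a single lookup
SELECT username, created_at FROM user WHERE id = 42;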

Multitenancy with unknown dynamic data per tenant

I am working on a system, where among the requirements are:
PHP + PostgreSql
Multitenant system, using a single database for all the tenants (tenantId).
Each tenant's data is unknown, so they should have the flexibility to add whatever data they want:
e.g. for an accounts table,
tenant 1 > account_no | date_created | due_date
tenant 2 > account_holder | start_date | end_date | customer_name | ...
The only solution I can see for this case is using the key-value pair database structure:
- e.g.
accounts table
id | tenant_id | key        | value
 1 | 1         | account_no | 12345

accounts_data table
account_id | key          | value
         1 | date_created | 01-01-2014
         1 | due_date     | 30-02-2014
The drawbacks I see for this approach in the long run:
- Monster queries
- Inefficient with large data
- Lots of coding to handle data validation, since there are no data types and everything is saved as strings
- Filtering can be lots of work
Having said that, I would appreciate suggestions, as well as any other approach I could use to achieve this.
Warning: you're walking into the inner-platform effect and Enterprisey design.
Stop and back away slowly, then revisit your assumptions about why you have to do things this way.
Something has to give here; either:
Use a schemaless free-form database for schemaless, free-form data;
Allow tenant users to define useful schema for their data based on their needs; or
Compromise with something like hstore or json storage
Please, please, please don't create a database within a database with an EAV model. Developers everywhere in the world will cry, and your design will soon end up being talked about on The Daily WTF.
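For the third option, here is a rough sketch using a PostgreSQL jsonb column (plain json or hstore on older versions would be similar); the table and field names are assumptions, not part of the original requirements:

CREATE TABLE accounts (
  id        serial PRIMARY KEY,
  tenant_id integer NOT NULL,
  data      jsonb NOT NULL DEFAULT '{}'
);

-- Tenant 1 stores whatever fields it needs:
INSERT INTO accounts (tenant_id, data)
VALUES (1, '{"account_no": "12345", "date_created": "2014-01-01", "due_date": "2014-02-28"}');

-- Filtering on a tenant-defined field:
SELECT id, data->>'account_no' AS account_no
FROM accounts
WHERE tenant_id = 1
  AND data->>'due_date' < '2014-03-01';

An expression index (for example on (data->>'due_date')) can keep such filters fast, while the fixed columns stay strongly typed.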

MySQL and PHP - multiple rows or concatenated into 1 row - which is quicker?

When storing relationship data for a user (potentially a thousand friends per user), would it be faster to create a new row for each relationship, or to concatenate all of their friends into a string and then parse that later?
I.e.
Primary id | Friend1ID | Friend2ID
         1 |       234 |      5789
         2 |      5789 |       234
Where the IDs are references to primary IDs in a 'Users' table.
Or for the 'Users' table to just have a column called friends which may look like this:
Primary id | Friends
       234 | 5789.123.8474
      5789 | 234
I'm of the understanding that string concatenation and parsing is generally quite slow, so I'd be tempted to lean towards the first method. However, as the number of users grows, this then becomes a case of selecting one row and parsing it vs. searching millions of rows for rows which match the WHERE criteria.
Is one method distinctly faster than the other? Particularly as the number of users grows.
You should use a second table to store the friends.
Users Table
----------
userid | username
1 | Bob
2 | Mike
3 | John
Users Friends Table
--------------------
userid | friend_id
1 | 2
3 | 2
Here you can see that Mike is friends with both Bob and John. This is of course a very simple demonstration.
Your second option will not scale. Some people may have hundreds of thousands of friends, and storing each ID in a single field is going to cause a headache further down the line: adding friends, removing friends, working out complex relationships between people. Lots of overhead.
Querying millions of records with a WHERE clause on a properly indexed table should take no more than a second; the first option is the better one.
The "correct" way would probably be keeping multiple rows. This allows for much easier statistical analysis and more complex queries (like friends of friends) without any hacky stuff. Integer storage size is also often smaller than string storage, even though you're repeating one ID - especially if you use an appropriately sized integer store (like mediumint).
It's also more maintainable, more scalable (if they start getting a damn lot of friends), and easier to export and import. The speed gain from concatenation, if any, wouldn't be worth giving up the rest of the benefits.
If you wanted for instance to search if Bob was a friend of Jane, this would be a single row lookup in the multiple row implementation, or in the single row implementation: get Bob's row, decode field, loop through field looking for Jane - found Jane. DBMS optimisation and indexing would make the multiple row implementation much faster in this case - if you had the primary key as (id, friendid) then it'd be pretty much instantaneous as the table would probably be hashed on that key.
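A sketch of that multiple-row layout with the composite primary key (the table name user_friends and the secondary index are assumptions):

CREATE TABLE user_friends (
  user_id   INT NOT NULL,
  friend_id INT NOT NULL,
  PRIMARY KEY (user_id, friend_id),   -- "is X a friend of Y" becomes one key lookup
  KEY idx_friend (friend_id)          -- also lets you ask "who has friended X?"
);

-- "Is Bob (id 1) friends with Mike (id 2)?" using the ids from the tables above:
SELECT 1 FROM user_friends WHERE user_id = 1 AND friend_id = 2;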
I believe the proper way to do it, which might also be faster, is to use a two-column table:
user | friend
1 | 2
1 | 3
It is simple, it will make querying and updating much easier, and you can have as many relationships as you want.
Don't overcomplicate the problem...
... Asking for the "more correct" way is itself the wrong question.
It depends on the case.
If your web application has a low access rate, having more rows won't change anything; on the other side of the coin, for medium and large applications it may be better to keep database access to the minimum possible.
To achieve this, as you've already considered, you can concatenate the values, split them when the user logs in, and then put everything into the $_SESSION superglobal.
At least this is what I think.

Making a database scalable

I've been developing a website for some time now and so far everything is fast and good, though that is with one active user. I don't know how many people will use my website in the first week, month or year.
I've been looking into what scaling means and how to achieve it, and caching seems to be a big part of it. So I started searching for ways to cache my content. Currently I've just developed the website in XAMPP. I'm using MySQL for the database and simple PHP with MySQLi to edit the data. I also do some simple logging with the built-in System Messages app in OS X Mountain Lion. So I'm thinking about using Memcache for the caching.
Is this a good approach?
How can I test it to really see the difference?
How do I know that it will work great even with many more users?
Are there any good benchmarking apps out there?
There are many ways to make sure that a database scales well, but I think the most important part is that you define proper indexes for your tables. At least the fields that are foreign keys should have an index defined.
For example, if you have a large forum, you might have a table of topics that looks like this:
topic_id | name
---------+--------------------------------
1 | "My first topic!"
2 | "Important topic"
3 | "I really like to make topics!"
... | ...
7234723 | "We have a lot of topics!"
And then another table with the actual posts in the topics:
 post_id | user       | topic_id
---------+------------+---------
       1 | "Spammer"  |        1
       2 | "Erfa"     |        2
       3 | "Erfa"     |        1
       4 | "Spammer"  |        1
     ... | ...        |      ...
87342352 | "Erfa"     |   457454
When you load a topic in your application, you want to load all posts that match the topic id. In this case, you cannot afford to look through all database rows, because there are simply too many! Fortunately, you do not have to do much: you just create an index on the topic_id field and you are done.
This is a very basic thing to do to make your database scale well, but since it is so important, I really thought someone should mention it!
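As a minimal sketch (the posts table name is an assumption; the answer only describes its columns):

CREATE INDEX idx_posts_topic ON posts (topic_id);

-- With the index in place, loading a topic no longer scans the whole posts table:
SELECT post_id, user FROM posts WHERE topic_id = 457454;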
Get and use JMeter.
With JMeter you can test how quickly responses come back and how pages load, in addition to confirming that there aren't any errors. This way you can simulate a ton of load while seeing actual performance changes when making an adjustment such as using Memcache.

High-performance multi-tier tag filtering

I have a large database of artists, albums, and tracks. Each of these items may have one or more tags assigned via glue tables (track_attributes, album_attributes, artist_attributes). There are several thousand (or even hundred thousand) tags applicable to each item type.
I am trying to accomplish two tasks, and I'm having a very hard time getting the queries to perform acceptably.
Task 1) Get all tracks that have any given tags (if provided) by artists that have any given tags (if provided) on albums with any given tags (if provided). Any set of tags may not be present (i.e. only a track tag is active, no artist or album tags)
Variation: The results are also presentable by artist or by album rather than by track
Task 2) Get a list of tags that are applied to the results from the previous filter, along with a count of how many tracks have each given tag.
What I am after is some general guidance on approach. I have tried temp tables, inner joins, and IN(); all my efforts thus far have resulted in slow responses. A good example of the results I am after can be seen here: http://www.yachtworld.com/core/listing/advancedSearch.jsp, except they only have one tier of tags, while I am dealing with three.
Table structures:
Table: attribute_tag_groups
Column | Type |
------------+-----------------------------+
id | integer |
name | character varying(255) |
type | enum (track, album, artist) |
Table: attribute_tags
Column | Type |
--------------------------------+-----------------------------+
id | integer |
attribute_tag_group_id | integer |
name | character varying(255) |
Table: track_attribute_tags
Column | Type |
------------+-----------------------------+
track_id | integer |
tag_id | integer |
Table: artist_attribute_tags
Column | Type |
------------+-----------------------------+
artist_id | integer |
tag_id | integer |
Table: album_attribute_tags
Column | Type |
------------+-----------------------------+
album_id | integer |
tag_id | integer |
Table: artists
Column | Type |
------------+-----------------------------+
id | integer |
name | varchar(350) |
Table: albums
Column | Type |
------------+-----------------------------+
id | integer |
artist_id | integer |
name | varchar(300) |
Table: tracks
Column | Type |
-------------+-----------------------------+
id | integer |
artist_id | integer |
album_id | integer |
compilation | boolean |
name | varchar(300) |
EDIT: I am using PHP, and I am not opposed to doing sorting or other hijinks in script; my #1 concern is speed of return.
If you want speed, I would suggest you look into Solr/Lucene. You can store your data and have very speedy lookups by calling Solr and parsing the result from PHP. And as an added benefit you get faceted searches as well (which is task 2 of your question, if I interpret it correctly). The downside is of course that you might have redundant information (once stored in the DB, once in the Solr document store). And it does take a while to set up (well, you could learn a lot from the Drupal Solr integration).
Just check out the PHP reference docs for Solr.
Here's an article on how to use Solr with PHP, just in case: http://www.ibm.com/developerworks/opensource/library/os-php-apachesolr/.
You should probably try to denormalize your data. Your structure is optimised for insert/update load, but not for queries. As I understand it, you will have many more select queries than insert/update queries.
For example you can do something like this:
Store your data in a normalized structure.
Create an aggregate table like this:
track_id, artist_tags, album_tags, track_tags
1 , jazz/pop/, jazz/rock, /heavy-metal/
or
track_id, artist_tags, album_tags, track_tags
1 , 1/2/, 1/3, 4/
To speed up search, you should probably create a FULLTEXT index on the *_tags columns.
Query this table with SQL like:
select * from aggregate where MATCH (artist_tags, album_tags, track_tags) AGAINST ('rock')
Rebuild this table incrementally once a day.
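A rough sketch of that aggregate table (column types and the index name are assumptions; FULLTEXT indexes need MyISAM or InnoDB on MySQL 5.6+):

CREATE TABLE aggregate (
  track_id    INT PRIMARY KEY,
  artist_tags TEXT,
  album_tags  TEXT,
  track_tags  TEXT,
  FULLTEXT KEY ft_tags (artist_tags, album_tags, track_tags)
);

-- The MATCH column list must correspond to the FULLTEXT index definition:
SELECT track_id
FROM aggregate
WHERE MATCH (artist_tags, album_tags, track_tags) AGAINST ('rock');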
I think the answer greatly depends on how much money you wish to spend on your project - there are some tasks that are even theoretically impossible to accomplish given strict conditions (for example, that you must use only one weak server). I will assume that you are ready to upgrade your system.
First of all - your table structure forces JOINs - I think you should avoid them if possible when writing high-performance applications. I don't know what "attribute_tag_groups" is, so I propose a table structure: tag (varchar 255), id (int), id_type (enum (track, album, artist)). Id can be artist_id, track_id or album_id depending on id_type. This way you will be able to look up all your data in one table, but of course it will use much more memory.
Next - you should consider using several databases. It will help even more if each database contains only part of your data (each lookup will be faster). Deciding how to spread your data between databases is usually a rather hard task: I suggest you gather some statistics about tag length, find ranges of length that give similar track/artist result counts, and hard-code them into your lookup code.
Of course you should consider MySQL tuning (I am sure you did that, but just in case) - all your tables should reside in RAM - if that is impossible try to get SSD disks, RAID, etc. Proper indexing and database types/settings are really important too (MySQL may even show some bottlenecks in its internal statistics).
This suggestion may sound mad - but sometimes it is good to let PHP do some calculations that MySQL could do itself. MySQL databases are much harder to scale, while a server for PHP processing can be added in a matter of minutes. And different PHP threads can run on different CPU cores - MySQL has problems with that. You can increase your PHP performance by using some advanced modules (you can even write them yourself - profile your PHP scripts and hard-code the bottlenecks in fast C code).
Last, but I think the most important - you must use some type of caching. I know that it is really hard, but I don't think there has been any big project without a really good caching system. In your case some tags will surely be much more popular than others, so caching should greatly increase performance. Caching is a form of art - depending on how much time you can spend on it and how many resources are available, you can make 99% of all requests use the cache.
Using other databases/indexing tools may help you, but you should always consider a theoretical query-speed comparison (O(n), O(n log n), ...) to understand whether they can really help you - using these tools sometimes gives you only a small performance gain (like a constant 20%), but they may complicate your application design, and most of the time it is not worth it.
From my experience, most 'slow' MySQL databases don't have correct indexes and/or queries. So I would check these first:
Make sure all data tables' id fields are primary keys. Just in case.
For all data tables, create an index on the foreign id fields and then the id, so that MySQL can use it in searches.
For your glue tables, set a primary key on the two fields, first the subject, then the tag. This is for normal browsing. Then create a normal index on the tag id. This is for searching (see the sketch below).
Still slow? Are you using MyISAM for your tables? It is designed for quick queries.
If still slow, run an EXPLAIN on a slow query and post both the query and result in the question. Preferably with an importable sql dump of your complete database structure.
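A sketch of the glue-table indexing suggested above, for one of the tables (MySQL syntax; the index name is an assumption):

ALTER TABLE track_attribute_tags
  ADD PRIMARY KEY (track_id, tag_id),   -- subject first: browsing one track's tags
  ADD KEY idx_tag (tag_id);             -- tag first: searching by tag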
Things you may give a try:
Use a query analyzer to explore the bottlenecks of your queries. (Most of the time the underlying DBMS does an amazing job of optimizing.)
Your table structure is well normalized, but personal experience has shown me that you can achieve much greater performance with structures that let you avoid joins & subqueries. For your case I would suggest storing the tag information in one field. (This requires support from the underlying DBMS.)
So far.
Check your indices, and whether they are used correctly. Maybe MySQL isn't up to the task. PostgreSQL should be similar to use but has better performance in complex situations.
On a completely different track, google MapReduce and use one of these new fancy NoSQL databases for really, really large data sets. These can do distributed search on multiple servers in parallel.
