Storage - Logic/Performance - PHP/MySQL

Okay, here's a standard users table:
Full Name | Birthday | E-Mail | Username | Password | Facebook | MySpace | Twitter | LinkedIn
Nothing unusual about this; it's fairly standard and textbook. However, instead of a separate column for each social network, it could be stored like so:
Full Name | Birthday | E-Mail | Username | Password | Social
The difference is that the information would be stored in the Social column as an imploded array rather than in separate columns. It seems quite sensible: once you get into the thousands of users, surely it would be quicker to process via script and a lighter hit on the database.
Can anyone think of any DISADVANTAGES of using the suggested method instead of the text book method?

The two disadvantages I can think of:
It will be more difficult to query for social details specific to a certain user. If you knew their Facebook username was fbuser123 then you might have to query for something like SELECT * FROM users WHERE social LIKE '%fbuser123%'.
It will be slightly more difficult to use the information once it's been selected from the database; for example, the field must be json_decoded before it can be used.
Other than that, I can't think of anything else.
I would imagine that if you did this, the most efficient way of storing the data would be in a TEXT column, with the data json_encoded.
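For illustration, here is a minimal sketch of that approach; the connection credentials and column values are assumptions, not from the original post:

<?php
// Assumed PDO connection; credentials are placeholders.
$db = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');

// Writing: collapse the profiles into one TEXT column as JSON.
$social = ['facebook' => 'fbuser123', 'twitter' => 'example'];
$stmt = $db->prepare('UPDATE users SET social = ? WHERE username = ?');
$stmt->execute([json_encode($social), 'someuser']);

// Reading: the field must be json_decoded before use, and looking a user
// up by one profile degrades to the slow LIKE scan mentioned above.
$stmt = $db->query("SELECT social FROM users WHERE social LIKE '%fbuser123%'");
$social = json_decode($stmt->fetchColumn(), true);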

Related

Storing a long array inside of a MySQL database

I have a database which stores user data such as username, email, password, and so on.
I am currently rewriting the system and trying to improve it and make it more efficient.
Currently I need to access an array for each user in the database.
Structure:
username | memberID | email | password | ... | ... | skinsInventory
Now, skinsInventory is where I want to store this array.
This array could become very big for some users.
Is there a better way to do it or is this the recommended way?
For example, some users already have a large skinsInventory, and I can expect this to get quite big.
Any better ways to do this?
Any help is very much appreciated.
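In the spirit of the previous thread, a minimal sketch of the serialized-column approach might look like this; the helper names and the users table name are hypothetical, and $db is an assumed PDO connection:

<?php
// Hypothetical helpers for the serialized-column approach; the column names
// come from the question, the table name is assumed.
function saveSkins(PDO $db, int $memberID, array $skins): void {
    $stmt = $db->prepare('UPDATE users SET skinsInventory = ? WHERE memberID = ?');
    $stmt->execute([json_encode($skins), $memberID]);
}

function loadSkins(PDO $db, int $memberID): array {
    $stmt = $db->prepare('SELECT skinsInventory FROM users WHERE memberID = ?');
    $stmt->execute([$memberID]);
    $json = $stmt->fetchColumn();
    return $json ? json_decode($json, true) : [];
}

The same caveat as above applies: MySQL cannot query or index inside the serialized column, so if the inventory needs to be searched or grows very large, a separate table with one row per skin is the more scalable choice.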

How should I store trivial data in a database?

I have a web application which allows people to upload flipbook animations. There are always a lot of requests for new features such as:
Tagging users (Like tagging a person in a Facebook post)
Tagging their flipnotes (think: Tagging YouTube videos with categories, or tagging Stack Exchange questions: database-design)
Linking their flipnotes to multiple relevant channels for a better chance at finding viewers
For things like follows/subscriptions, I have a table called follows.
+---------------+-------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+---------------+-------------+------+-----+---------+----------------+
| followID | int(11) | NO | PRI | NULL | auto_increment |
| followingUser | varchar(16) | NO | | NULL | |
| followedUser | varchar(16) | NO | | NULL | |
+---------------+-------------+------+-----+---------+----------------+
I'm rather hesitant to start creating dozens of tables to deal with metadata, however; there's just too much of it. I'm also hesitant about using TEXT datatypes to store, say, arrays of tags: I've heard bad things about their efficiency, and I'm dealing with hundreds of thousands of rows in one part of the site and almost four million in a single table in another. Small inefficiencies don't always stay small when you consider scalability; take ORDER BY RAND() as an example.
So, what approaches might I consider for storing and organizing trivial information in my database? I could significantly improve the user experience if I were able to keep track of more information.
I'm using PHP and MySQL.
The simplest and most efficient way to do tagging is to create a master list of tags and then use a many-to-many relationship to record which tags are applied to each of your flipnotes. Consider an ERD in which a FLIPNOTE_TAG intersection table sits between FLIPNOTE and a TAG master list:
The FLIPNOTE_TAG table is just a simple intersection that contains foreign keys to your FLIPNOTE table and your TAG master list. How you get tags depends on your business rules. In Stack Exchange, tags are a moderated list of items. On YouTube, they are just dumb strings that can be added pretty much at will by users.
Either way, having a master list of tags makes searching for distinct tags to follow or view much easier.
Also, unlike doing a partial text match search on arrays of strings, which is painfully slow at any reasonable scale, searching the foreign key index of an intersection table for one or more tag keys is very fast and scalable.
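A minimal sketch of that design; the table and column names are illustrative, not from the original post, and $db is an assumed PDO connection:

<?php
// Assumed PDO connection; credentials are placeholders.
$db = new PDO('mysql:host=localhost;dbname=flipnotes', 'user', 'pass');

// Master tag list plus an intersection table (one row per flipnote/tag pair).
$db->exec('CREATE TABLE tag (
    tagID int NOT NULL AUTO_INCREMENT PRIMARY KEY,
    name  varchar(64) NOT NULL UNIQUE
)');
$db->exec('CREATE TABLE flipnote_tag (
    flipnoteID int NOT NULL,
    tagID      int NOT NULL,
    PRIMARY KEY (flipnoteID, tagID),
    KEY idx_tag (tagID)
)');

// Finding flipnotes by tag is then an indexed join, not a partial text match.
$stmt = $db->prepare(
    'SELECT f.* FROM flipnote f
     JOIN flipnote_tag ft ON ft.flipnoteID = f.flipnoteID
     JOIN tag t ON t.tagID = ft.tagID
     WHERE t.name = ?'
);
$stmt->execute(['database-design']);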
I think the follows table is quite well structured, to be honest. One caveat: a follow is directional, so each row really does need both followingUser and followedUser (I would rename the latter userBeingFollowed for better clarity). What you should avoid is storing a second, mirrored row, since "Person A follows Person B" already implies "Person B is followed by Person A". Also, add a timestamp column recording when the follow took place, either as a TIMESTAMP/DATETIME or as a Unix timestamp in a BIGINT column.
The SQL statement is a simple INSERT query which is very easy to understand.
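For example, a minimal sketch assuming a mysqli connection with placeholder credentials; followedAt is the suggested timestamp column, not part of the original table:

<?php
// Assumed mysqli connection matching the follows table above.
$db = new mysqli('localhost', 'user', 'pass', 'flipnotes');

// Record that $follower is now following $followed.
$follower = 'alice';
$followed = 'bob';
$now = time(); // bind_param takes variables by reference
$stmt = $db->prepare(
    'INSERT INTO follows (followingUser, followedUser, followedAt) VALUES (?, ?, ?)'
);
$stmt->bind_param('ssi', $follower, $followed, $now);
$stmt->execute();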

Multitenancy with unknown dynamic data per tenant

I am working on a system, where among the requirements are:
PHP + PostgreSql
Multitenant system, using a single database for all the tenants (tenantId).
Each tenant's data is unknown, so they should have the flexibility to add whatever data they want:
e.g. for an accounts table,
tenant 1 > account_no | date_created | due_date
tenant 2 > account_holder | start_date | end_date | customer_name | ...
The only solution I can see for this case is using the key-value pair database structure:
For example:
accounts table
id | tenant_id | key        | value
 1 |         1 | account_no | 12345
accounts_data table
account_id | key          | value
         1 | date_created | 01-01-2014
         1 | due_date     | 28-02-2014
The drawbacks I see for this approach in the long run:
- Monster queries
- Inefficient with large data
- Lots of coding to handle data validation, since there are no data types and everything is saved as a string
- Filtering can be a lot of work
That said, I would appreciate suggestions, as well as any other approach I could use to achieve this.
Warning: you're walking into the inner-platform effect and Enterprisey design.
Stop and back away slowly, then revisit your assumptions about why you have to do things this way.
Something has to give here; either:
Use a schemaless free-form database for schemaless, free-form data;
Allow tenant users to define useful schema for their data based on their needs; or
Compromise with something like hstore or json storage (see the sketch below)
Please, please, please don't build a database-within-a-database using an EAV model. Developers everywhere will cry, and your design will soon end up being talked about on The Daily WTF.
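A minimal sketch of that compromise in PostgreSQL, assuming PHP with PDO; the table, column, and field names are illustrative:

<?php
// Assumed PDO connection to PostgreSQL; credentials are placeholders.
// jsonb needs PostgreSQL 9.4+; on older versions use json or hstore instead.
$db = new PDO('pgsql:host=localhost;dbname=app', 'user', 'pass');

// One fixed column set shared by all tenants, plus a free-form column for
// whatever extra fields each tenant defines.
$db->exec("CREATE TABLE accounts (
    id        serial PRIMARY KEY,
    tenant_id integer NOT NULL,
    extra     jsonb NOT NULL DEFAULT '{}'
)");

// Tenant 2 stores fields tenant 1 never defined, with no schema change.
$stmt = $db->prepare('INSERT INTO accounts (tenant_id, extra) VALUES (?, ?)');
$stmt->execute([2, json_encode(['account_holder' => 'ACME', 'customer_name' => 'Bob'])]);

// PostgreSQL can filter (and index) inside the jsonb column.
$rows = $db->query(
    "SELECT * FROM accounts WHERE tenant_id = 2 AND extra->>'customer_name' = 'Bob'"
)->fetchAll();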

Making a database scalable

I've been developing a website for some time now and so far everything is fast and good, though that is with one active user. I don't know how many people will use my website in the first week, month or year.
I've been looking into what scaling means and how to achieve it, and caching seems to be a big part of it. So I started searching for ways to cache my content. Currently I've just developed the website in XAMPP. I'm using MySQL for the database and simple PHP with MySQLi to edit the data. I also do some simple logging with the built-in System Messages app in OS X Mountain Lion. So I'm thinking about using Memcache for the caching.
Is this a good approach?
How can I test it to really see the difference?
How do I know that it will work great even with many more users?
Are there any good benchmarking apps out there?
There are many ways to make sure that a database scales well, but I think the most important part is that you define proper indexes for your tables. At least the fields that are foreign keys should have an index defined.
For example, if you have a large forum, you might have a table of topics that looks like this:
topic_id | name
---------+--------------------------------
1 | "My first topic!"
2 | "Important topic"
3 | "I really like to make topics!"
... | ...
7234723 | "We have a lot of topics!"
And then another table with the actual posts in the topics:
post_id | user | topic_id
---------+------------+---------
1 | "Spammer" | 1
2 | "Erfa" | 2
3 | "Erfa" | 1
4 | "Spammer" | 1
... | ... | ...
87342352 | "Erfa" | 457454
When you load a topic in your application, you want to load all posts that match the topic id. In this case, you cannot afford to look through all the database rows, because there are simply too many. Fortunately, you do not have to do much to make sure of this: just create an index on the topic_id field and you are done.
This is a very basic thing to do to make your database scale well, but since it is so important, I really thought someone should mention it!
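As a minimal sketch (assuming the posts table above and a mysqli connection with placeholder credentials), you can create the index and then check with EXPLAIN that it is actually used:

<?php
// Assumed mysqli connection, as the question mentions PHP with MySQLi.
$db = new mysqli('localhost', 'user', 'pass', 'forum');

// Create the index once...
$db->query('ALTER TABLE posts ADD INDEX idx_topic (topic_id)');

// ...then verify that lookups by topic actually use it.
$result = $db->query('EXPLAIN SELECT * FROM posts WHERE topic_id = 457454');
print_r($result->fetch_assoc()); // the "key" column should report idx_topic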
Get and use JMeter.
With JMeter you can test how quickly responses come back and how pages load, in addition to confirming that there aren't currently any errors. This way you can simulate a ton of load while seeing actual performance changes as you make an adjustment such as adding Memcache.

High-performance multi-tier tag filtering

I have a large database of artists, albums, and tracks. Each of these items may have one or more tags assigned via glue tables (track_attributes, album_attributes, artist_attributes). There are several thousand (or even hundred thousand) tags applicable to each item type.
I am trying to accomplish two tasks, and I'm having a very hard time getting the queries to perform acceptably.
Task 1) Get all tracks that have any given tags (if provided) by artists that have any given tags (if provided) on albums with any given tags (if provided). Any set of tags may not be present (i.e. only a track tag is active, no artist or album tags)
Variation: The results are also presentable by artist or by album rather than by track
Task 2) Get a list of tags that are applied to the results from the previous filter, along with a count of how many tracks have each given tag.
What I am after is some general guidance in approach. I have tried temp tables, inner joins, IN(), all my efforts thus far result in slow responses. A good example of the results I am after can be seen here: http://www.yachtworld.com/core/listing/advancedSearch.jsp, except they only have one tier of tags, I am dealing with three.
Table structures:
Table: attribute_tag_groups
Column | Type |
------------+-----------------------------+
id | integer |
name | character varying(255) |
type | enum (track, album, artist) |
Table: attribute_tags
Column | Type |
--------------------------------+-----------------------------+
id | integer |
attribute_tag_group_id | integer |
name | character varying(255) |
Table: track_attribute_tags
Column | Type |
------------+-----------------------------+
track_id | integer |
tag_id | integer |
Table: artist_attribute_tags
Column | Type |
------------+-----------------------------+
artist_id | integer |
tag_id | integer |
Table: album_attribute_tags
Column | Type |
------------+-----------------------------+
album_id | integer |
tag_id | integer |
Table: artists
Column | Type |
------------+-----------------------------+
id | integer |
name | varchar(350) |
Table: albums
Column | Type |
------------+-----------------------------+
id | integer |
artist_id | integer |
name | varchar(300) |
Table: tracks
Column | Type |
-------------+-----------------------------+
id | integer |
artist_id | integer |
album_id | integer |
compilation | boolean |
name | varchar(300) |
EDIT: I am using PHP, and I am not opposed to doing sorting or other hijinks in script; my #1 concern is speed of return.
If you want speed, I would suggest you look into Solr/Lucene. You can store your data and have very speedy lookups by calling Solr and parsing the result from PHP. As an added benefit, you get faceted searches as well (which is Task 2 of your question, if I interpret it correctly). The downside, of course, is that you might have redundant information (once stored in the DB, once in the Solr document store). And it does take a while to set up (well, you could learn a lot from the Drupal Solr integration).
Just check out the PHP reference docs for Solr.
Here's an article on how to use Solr with PHP, just in case: http://www.ibm.com/developerworks/opensource/library/os-php-apachesolr/.
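A minimal sketch with the PECL Solr extension; the core name and the track_tags field are assumptions about how the documents might be indexed:

<?php
// Query Solr for tracks tagged 'rock' and ask for tag facets (Task 2).
$client = new SolrClient([
    'hostname' => 'localhost',
    'port'     => 8983,
    'path'     => '/solr/tracks', // assumed core name
]);

$query = new SolrQuery('track_tags:rock');
$query->setRows(50);
$query->setFacet(true);
$query->addFacetField('track_tags');

$response = $client->query($query)->getResponse();
$tracks = $response->response->docs;                         // matching tracks
$counts = $response->facet_counts->facet_fields->track_tags; // tag => count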
You should probably try to denormalize your data. Your structure is optimised for insert/update load, but not for queries. As I understand it, you will have many more SELECT queries than INSERT/UPDATE queries.
For example, you can do something like this:
store your data in the normalized structure;
create an aggregate table like this:
track_id | artist_tags | album_tags  | track_tags
       1 | /jazz/pop/  | /jazz/rock/ | /heavy-metal/
or, with tag ids instead of names:
track_id | artist_tags | album_tags | track_tags
       1 | /1/2/       | /1/3/      | /4/
To speed up searches, you should probably create a FULLTEXT index on the *_tags columns.
Query this table with SQL like:
SELECT * FROM aggregate WHERE MATCH (album_tags) AGAINST ('rock')
Rebuild this table incrementally once a day.
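A minimal sketch of that aggregate table; the connection details are placeholders, and note that FULLTEXT indexes required the MyISAM engine before MySQL 5.6 (InnoDB supports them from 5.6 on):

<?php
// Assumed PDO connection; credentials are placeholders.
$db = new PDO('mysql:host=localhost;dbname=music', 'user', 'pass');

// One denormalized row per track, with the tags flattened into text columns.
$db->exec('CREATE TABLE aggregate (
    track_id    int PRIMARY KEY,
    artist_tags text,
    album_tags  text,
    track_tags  text,
    FULLTEXT KEY ft_artist (artist_tags),
    FULLTEXT KEY ft_album  (album_tags),
    FULLTEXT KEY ft_track  (track_tags)
) ENGINE=MyISAM');

// All tracks whose album is tagged "rock".
$stmt = $db->prepare(
    'SELECT track_id FROM aggregate WHERE MATCH (album_tags) AGAINST (?)'
);
$stmt->execute(['rock']);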
I think the answer greatly depends on how much money you wish to spend on your project - some tasks are even theoretically impossible to accomplish under strict conditions (for example, that you must use only one weak server). I will assume that you are ready to upgrade your system.
First of all, your table structure forces JOINs - I think you should avoid them where possible when writing high-performance applications. I don't know what "attribute_tag_groups" is, so I propose a table structure: tag (varchar 255), id (int), id_type (enum (track, album, artist)). The id can be an artist_id, track_id or album_id depending on id_type. This way you will be able to look up all your data in one table, though of course it will use much more memory.
Next, you should consider using several databases. It will help even more if each database contains only part of your data (each lookup will be faster). Deciding how to spread your data between databases is usually a rather hard task: I suggest you gather some statistics about tag length, find ranges of length that yield similar track/artist result counts, and hard-code them into your lookup code.
Of course you should consider MySQL tuning (I am sure you did that, but just in case) - all your tables should reside in RAM; if that is impossible, try to get SSDs, RAID arrays, etc. Proper indexing and table types/settings are really important too (MySQL may even reveal some bottlenecks in its internal statistics).
This suggestion may sound mad, but sometimes it is good to let PHP do calculations that MySQL could do itself. MySQL databases are much harder to scale, while a server for PHP processing can be added in a matter of minutes. And different PHP threads can run on different CPU cores - MySQL has problems with that. You can increase your PHP performance by using some advanced modules (you can even write them yourself - profile your PHP scripts and hard-code the bottlenecks in fast C code).
Last, but I think most important: you must use some type of caching. I know that it is really hard, but I don't think there has been any big project without a really good caching system. In your case some tags will surely be much more popular than others, so caching should greatly increase performance. Caching is a form of art: depending on how much time you can spend on it and what resources are available, you can make 99% of all requests hit the cache.
Other databases and indexing tools may help you, but you should always compare theoretical query speeds (O(n), O(n log n), ...) to understand whether they can really help - sometimes these tools give you only a small constant gain (say 20%) while complicating your application design, and most of the time that is not worth it.
From my experience, most 'slow' MySQL databases don't have correct indexes and/or queries. So I would check these first:
Make sure every data table's id field is a primary key. Just in case.
For all data tables, create an index on the foreign id fields and then the id, so that MySQL can use it in searches.
For your glue tables, set a composite primary key on the two fields, first the subject id, then the tag id; this is for normal browsing. Then create a normal index on the tag id; this is for searching.
Still slow? Are you using MyISAM for your tables? It is designed for quick queries.
If it is still slow, run an EXPLAIN on a slow query and post both the query and the result in the question, preferably with an importable SQL dump of your complete database structure.
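A minimal sketch of those indexes, using the glue-table names from the question and an assumed PDO connection with placeholder credentials:

<?php
$db = new PDO('mysql:host=localhost;dbname=music', 'user', 'pass');

// Composite primary key (subject first) for browsing; extra index on tag_id
// for searching by tag.
$db->exec('ALTER TABLE track_attribute_tags
    ADD PRIMARY KEY (track_id, tag_id), ADD INDEX idx_tag (tag_id)');
$db->exec('ALTER TABLE album_attribute_tags
    ADD PRIMARY KEY (album_id, tag_id), ADD INDEX idx_tag (tag_id)');
$db->exec('ALTER TABLE artist_attribute_tags
    ADD PRIMARY KEY (artist_id, tag_id), ADD INDEX idx_tag (tag_id)');

// Then inspect a slow query with EXPLAIN.
$sql = 'EXPLAIN SELECT t.* FROM tracks t
        JOIN track_attribute_tags tat ON tat.track_id = t.id
        WHERE tat.tag_id IN (1, 2, 3)';
foreach ($db->query($sql) as $row) {
    print_r($row);
}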
Things you might try:
Use a query analyzer to explore the bottlenecks of your queries. (Most of the time the underlying DBMS does an amazing job of optimizing.)
Your table structure is well normalized, but personal experience has shown me that you can achieve much greater performance with structures that let you avoid joins and subqueries. In your case I would suggest storing the tag information in one field (this requires support from the underlying DBMS).
That's all so far.
Check your indices, and whether they are used correctly. Maybe MySQL isn't up to the task; PostgreSQL should be similar to use but has better performance in complex situations.
On a completely different track, google map-reduce and the new fancy NoSQL databases for really, really large data sets. These can do distributed searches on multiple servers in parallel.
