Using an RDBMS for users and linking to NoSQL items - PHP

I am planning an application and want to maintain both a relational and a non-relational database: MySQL and MongoDB, respectively.
The idea is that users are kept in the relational database, while the content those users generate, which involves geo queries, is kept in the non-relational one.
The problem now is how to *create a link between a user and their items* across both databases while maintaining performance. Or am I adopting the wrong approach?
My idea is to create a product table in MySQL with a foreign key to the user and the ObjectId of the product in the non-relational database.
Example MySQL Table:
table: products_relationship
| account_id | product_objectid |
| ---------- | -------------------------------- |
| 1          | 0b694fc34c9663883a5d4b32371f8333 |
| 1          | 0b694fc34c9663883a5d4b32371f9837 |
| 2          | 0b694fc34c9663883a5d4b32371f9bfc |
| 5          | 0b694fc34c9663883a5d4b32371fcb5f |
| 1          | 0b694fc34c9663883a5d4b32371fd809 |
So the user with account_id = 1 has a first name, an email, and other data, and owns three products.
Should I adopt a different approach? Will I gain performance this way? Am I losing the benefits of NoSQL by doing this?
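For reference, the read path for this link could look roughly like this in PHP. This is only a sketch: it assumes PDO plus the mongodb/mongodb Composer library, made-up connection details, and that the product ObjectIds are stored as hex strings in MySQL.

<?php
require 'vendor/autoload.php'; // mongodb/mongodb via Composer

// 1) Fetch the user's product ObjectIds from MySQL.
$pdo  = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$stmt = $pdo->prepare(
    'SELECT product_objectid FROM products_relationship WHERE account_id = ?'
);
$stmt->execute([1]);
$ids = array_map(
    fn ($hex) => new MongoDB\BSON\ObjectId($hex),
    $stmt->fetchAll(PDO::FETCH_COLUMN)
);

// 2) Load all matching product documents from MongoDB in one query.
$mongo    = new MongoDB\Client('mongodb://localhost:27017');
$products = $mongo->app->products->find(['_id' => ['$in' => $ids]]);

Note that every lookup costs two round trips, one per data store; whether that is acceptable depends on the traffic.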

I work on a system that does just this. We have relational databases and NoSQL databases and we use a product called Mule (http://www.mulesoft.org/) to integrate them.
I would strongly recommend picking either MySQL or Mongo and doing all your PHP work against one of those databases. You can move data in near real time from MySQL->Mongo or from Mongo->MySQL. Mule is good at that.
You aren't going to be able to efficiently do "joins" across systems.
Mule will also help you do transformations on the data when you move it. As an example, you can take normalized data in MySQL and denormalize it for storing in Mongo.
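Whichever tool does the moving, the transformation itself is straightforward. Here is a hand-rolled sketch of that denormalization in plain PHP rather than Mule; the table, field, and connection names are all assumptions:

<?php
require 'vendor/autoload.php'; // mongodb/mongodb via Composer

// Join the normalized MySQL rows once...
$pdo  = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$rows = $pdo->query(
    'SELECT u.account_id, u.email, p.product_objectid
       FROM users u
       JOIN products_relationship p ON p.account_id = u.account_id'
)->fetchAll(PDO::FETCH_ASSOC);

// ...fold them into one denormalized document per user...
$docs = [];
foreach ($rows as $r) {
    $docs[$r['account_id']]['account_id'] = $r['account_id'];
    $docs[$r['account_id']]['email']      = $r['email'];
    $docs[$r['account_id']]['products'][] = $r['product_objectid'];
}

// ...and upsert each document into MongoDB.
$users = (new MongoDB\Client('mongodb://localhost:27017'))->app->users;
foreach ($docs as $doc) {
    $users->replaceOne(['account_id' => $doc['account_id']], $doc, ['upsert' => true]);
}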

I realized that I had adopted a complex architecture when I did not need that much complexity. There is no reason to insist that my users live in a relational database. They can stay within MongoDB without any problem, with the products inside each user; that is a relationship I can model directly.
If I want, I can still make the connection using the ObjectId.
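Under that single-database design, the embedded layout and a geo query could look like this. A sketch with the mongodb/mongodb library; all field names and coordinates are made up:

<?php
require 'vendor/autoload.php'; // mongodb/mongodb via Composer

$users = (new MongoDB\Client('mongodb://localhost:27017'))->app->users;

// Products are embedded in the user document; index their locations.
$users->createIndex(['products.location' => '2dsphere']);

$users->insertOne([
    'name'     => 'Alice',
    'email'    => 'alice@example.com',
    'products' => [[
        'title'    => 'Bike',
        'location' => ['type' => 'Point', 'coordinates' => [-46.63, -23.55]],
    ]],
]);

// Users owning a product within 5 km of a given point.
$nearby = $users->find([
    'products.location' => [
        '$near' => [
            '$geometry'    => ['type' => 'Point', 'coordinates' => [-46.64, -23.56]],
            '$maxDistance' => 5000,
        ],
    ],
]);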

Related

MySQL - Database with multiple schemas where table and column names are different

Is there a way to have one database that can be represented with different table and column names?
I'm doing a large refactoring of an API, where it makes sense to rename quite a few table and column names in the MySQL database, but the old version of the API still needs to read and write to the existing database. The data is largely the same and the versions can easily co-exist. There is no chance of having two databases running in parallel.
The new API's domain layer is fully separated from the persistence layer, but in the persistence layer I would like to build it properly, using the new table and column names, instead of having to convert the names for each query.
Is there a way to represent the database structure in multiple ways, effectively making the naming variable? Can you suggest a solution?
Is there, e.g., a way to solve this by replacing a database schema?
I would like the two versions of the API to read and write to a database where tables can exist with the difference in naming shown below, but where the table and its data are the same.
+----------------------------+  +-------------------------+
|         old_items          |  |        new_items        |
+----+----------+------------+  +----+----------+---------+
| id | meta_key | meta_value |  | id | item_key | item_id |
+----+----------+------------+  +----+----------+---------+
The application is written in PHP.
Thanks.
Why do you want to rename the columns when you can use an alias when selecting them? For example:
select metavalue as itemvalue from tablename
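If the renaming needs to be persistent rather than repeated in every query, the same aliasing can be baked into a view. This is a sketch, assuming MySQL and that old_items remains the real table; simple single-table views in MySQL are updatable, so the new API can write through it too:

-- Expose the legacy table under the new naming scheme.
CREATE VIEW new_items AS
SELECT id,
       meta_key   AS item_key,
       meta_value AS item_id
FROM old_items;

-- The new API then reads and writes the new names transparently:
SELECT item_key, item_id FROM new_items WHERE id = 42;
UPDATE new_items SET item_id = 7 WHERE item_key = 'colour';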
Or, alternatively, you can create two separate tables for each of your entities. That makes sense too.

"horizontal" vs. "vertical" table design, SQL

Apologies if this has been covered thoroughly in the past - I've seen some related posts but haven't found anything that satisfies me with regards to this specific scenario.
I've recently been looking over a relatively simple game with around 10k players. In the game you can catch and breed pets that have certain attributes (e.g. wings, horns, manes). There's currently a table in the database that looks something like this:
-------------------------------------------------------------------------------
| pet_id | wings1 | wings1_hex | wings2 | wings2_hex | horns1 | horns1_hex | ...
-------------------------------------------------------------------------------
|      1 |      1 | ffffff     |   NULL | NULL       |      2 | 000000     | ...
|      2 |   NULL | NULL       |   NULL | NULL       |   NULL | NULL       | ...
|      3 |      2 | ff0000     |      1 | ffffff     |      3 | 00ff00     | ...
|      4 |   NULL | NULL       |   NULL | NULL       |      1 | 0000ff     | ...
etc...
The table goes on like that and currently has 100+ columns, but in general a single pet will only have around 1-8 of these attributes. A new attribute is added every 1-2 months which requires table columns to be added. The table is rarely updated and read frequently.
I've been proposing that we move to a more vertical design scheme for better flexibility as we want to start adding larger volumes of attributes in the future, i.e.:
----------------------------------------------------------------
| pet_id | attribute_id | attribute_color | attribute_position |
----------------------------------------------------------------
|      1 |            1 | ffffff          |                  1 |
|      1 |            3 | 000000          |                  2 |
|      3 |            2 | ffffff          |                  1 |
|      3 |            1 | ff0000          |                  2 |
|      3 |            3 | 00ff00          |                  3 |
|      4 |            3 | 0000ff          |                  1 |
etc...
The old developer has raised concerns that this will create performance issues, as users very frequently search for pets with specific attributes (e.g. must have these attributes, must have at least one in this colour or position, must have > 30 attributes). Currently the search is quite fast as there are no JOINs required, but introducing a vertical table would presumably mean an additional join for every attribute searched, and would also roughly triple the number of rows.
The first part of my question is if anyone has any recommendations with regards to this? I'm not particularly experienced with database design or optimisation.
I've run tests for a variety of cases but they've been largely inconclusive - the times vary quite significantly for all of the queries that I ran (i.e. between half a second and 20+ seconds), so I suppose the second part of my question is whether there's a more reliable way of profiling query times than using microtime(true) in PHP.
Thanks.
This is called the Entity-Attribute-Value model, and relational database systems are really not suited for it at all.
To quote someone who deems it one of the five errors not to make:
So what are the benefits that are touted for EAV? Well, there are none. Since EAV tables will contain any kind of data, we have to PIVOT the data to a tabular representation, with appropriate columns, in order to make it useful. In many cases, there is middleware or client-side software that does this behind the scenes, thereby providing the illusion to the user that they are dealing with well-designed data.
EAV models have a host of problems.
Firstly, the massive amount of data is, in itself, essentially unmanageable.
Secondly, there is no possible way to define the necessary constraints -- any potential check constraints will have to include extensive hard-coding for appropriate attribute names. Since a single column holds all possible values, the datatype is usually VARCHAR(n).
Thirdly, don't even think about having any useful foreign keys.
Finally, there is the complexity and awkwardness of queries. Some folks consider it a benefit to be able to jam a variety of data into a single table when necessary -- they call it "scalable". In reality, since EAV mixes up data with metadata, it is a lot more difficult to manipulate data even for simple requirements.
The solution to the EAV nightmare is simple: Analyze and research the users' needs and identify the data requirements up-front. A relational database maintains the integrity and consistency of data. It is virtually impossible to make a case for designing such a database without well-defined requirements. Period.
The table goes on like that and currently has 100+ columns, but in general a single pet will only have around 1-8 of these attributes.
That looks like a case for normalization: break the table into multiple tables, for example one for horns and one for wings, all connected by foreign keys to the main entity table. But do make sure that every attribute still maps to one or more real columns, so that you can define constraints, data types, indexes, and so on.
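A sketch of that normalization; the table and column names are illustrative:

-- One table per attribute family instead of one giant wide table.
-- Every column keeps a real data type, and constraints stay enforceable.
CREATE TABLE pets (
    pet_id INT PRIMARY KEY
);

CREATE TABLE pet_wings (
    pet_id   INT NOT NULL,
    position TINYINT NOT NULL,   -- wings1, wings2, ... become rows
    hex      CHAR(6) NOT NULL,   -- colour keeps a fixed-width type
    PRIMARY KEY (pet_id, position),
    FOREIGN KEY (pet_id) REFERENCES pets (pet_id)
);

CREATE TABLE pet_horns (
    pet_id   INT NOT NULL,
    position TINYINT NOT NULL,
    hex      CHAR(6) NOT NULL,
    PRIMARY KEY (pet_id, position),
    FOREIGN KEY (pet_id) REFERENCES pets (pet_id)
);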
Do the join. The database was specifically designed to support joins for your use case. If there is any doubt, then benchmark.
EDIT: A better way to profile the queries is to run them directly in the MySQL interpreter on the CLI. It will give you the exact time the query took to run. The PHP microtime() function will also include other latencies (Apache, PHP, server resource allocation, the network if connecting to a remote MySQL instance, etc.).
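For instance, against the proposed vertical table (names taken from the question; the output shown is purely illustrative):

mysql> SELECT pet_id FROM pet_attributes
    -> WHERE attribute_id = 3 AND attribute_color = '0000ff';
(... rows in set (0.02 sec))    <- the time measured by the server itself

mysql> EXPLAIN SELECT pet_id FROM pet_attributes
    -> WHERE attribute_id = 3 AND attribute_color = '0000ff';
(the "key" column of the output shows which index, if any, was used)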
What you are proposing is called 'normalization'. This is exactly what relational databases were made for - if you take care of your indexes, the joins will run almost as fast as if the data were in one table.
Actually, they might even go faster: instead of loading 1 table row with 100 columns, you can just load the columns you need. If a pet only has 8 attributes, you only load those 8.
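To make that concrete, the searches the old developer is worried about do not actually need one join per attribute. A sketch against the proposed vertical layout (the table name pet_attributes is assumed):

-- Pets that have BOTH attribute 1 and attribute 3 (relational division):
SELECT pet_id
  FROM pet_attributes
 WHERE attribute_id IN (1, 3)
 GROUP BY pet_id
HAVING COUNT(DISTINCT attribute_id) = 2;

-- Pets with more than 30 attributes:
SELECT pet_id
  FROM pet_attributes
 GROUP BY pet_id
HAVING COUNT(*) > 30;

-- One composite index covers both query shapes:
CREATE INDEX idx_attr_pet ON pet_attributes (attribute_id, pet_id);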
This question is very subjective. If you have the resources to update the middleware each time a column is added then, by all means, go horizontal; there is nothing safer and easier to learn than a fixed structure. One thing to remember: any time you update a table's structure you have to update each of its dependencies, unless there is some catch-all like SELECT *, which I suggest you stay away from unless you are just dumping data to a screen and the order of columns is irrelevant.
With that said, vertical is the way to go if you don't have all of your requirements in place or don't want to update code in n number of places. Most of the time you just need storage containers to hold data. I would segregate things like numbers, dates, binary, and text into separate columns to preserve some data integrity, but there is nothing wrong with vertical storage, as long as you know how to formulate and structure queries to bring the data back in the appropriate format.
FYI, WordPress uses vertical data storage for the majority of the dynamic content it has to store for its millions of users.
The first thing, from a database point of view, is that your data should grow vertically, not horizontally, so adding a new column for every attribute is not good design at all. Second, this is a very common scenario in DB design, and the way to solve it is to create three tables: the 1st for pets, the 2nd for attributes, and the 3rd as a mapping table between the two. Here is an example:
Table 1 (Pet)
Pet_ID | Pet_Name
1      | Dog
2      | Cat
Table 2 (Attribute)
Attribute_ID | Attribute_Name
1            | Wings
2            | Eyes
Table 3 (Pet_Attribute)
Pet_ID | Attribute_ID | Attribute_Value
1      | 1            | 0
1      | 2            | 2
About Performance:
Pet_ID and Attribute_ID form a composite primary key in the mapping table, which is indexed (http://developer.mimer.com/documentation/html_92/Mimer_SQL_Engine_DocSet/Basic_concepts4.html), so searches are very fast. This is the right way to solve the problem. I hope it is clear now.
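A sketch of those three tables as DDL (MySQL syntax assumed):

CREATE TABLE Pet (
    Pet_ID   INT PRIMARY KEY,
    Pet_Name VARCHAR(50) NOT NULL
);

CREATE TABLE Attribute (
    Attribute_ID   INT PRIMARY KEY,
    Attribute_Name VARCHAR(50) NOT NULL
);

CREATE TABLE Pet_Attribute (
    Pet_ID          INT NOT NULL,
    Attribute_ID    INT NOT NULL,
    Attribute_Value INT NOT NULL,
    PRIMARY KEY (Pet_ID, Attribute_ID),  -- composite key, indexed automatically
    FOREIGN KEY (Pet_ID)       REFERENCES Pet (Pet_ID),
    FOREIGN KEY (Attribute_ID) REFERENCES Attribute (Attribute_ID)
);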

Searching for a solution of high load access logging for PHP

I'm searching for a high-volume access-logging solution.
My "table" has this structure:
| ID      | Hits     | LastUsed         |
_________________________________________
| XYNAME  | 34566534 | LastUsedTimeHere |
| XYNAMEX | 47845534 | LastUsedTimeHere |
| XYNAMEY | 956744   | LastUsedTimeHere |
I think an often-used database system like a relational database management system isn't the right choice here; do you agree?
A single file gets accessed about 100,000-400,000 times per day, and I need to log each visit with an increment of Hits and an update of LastUsed to the current time, where the ID is some unique string I specify. I read this data only rarely.
(I have just a single server where other sites already run (with PHP & MySQL), I have no income/ads from these sites, and I'm a student, so it should also be a memory/CPU-saving solution. I want to use the solution from PHP.)
I have already thought about CouchDB and MongoDB. Do you have any experience with them, and could you recommend a solution?
If you're not going to save the individual downloads as separate records, MySQL will handle the load nicely.
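For the record, the whole counter can be a single atomic statement in MySQL. A sketch in PHP/PDO; the table name access_log and the connection details are assumptions:

<?php
// One row per ID (id is assumed to be the primary key); every hit
// bumps the counter and refreshes the timestamp in one statement.
$pdo = new PDO('mysql:host=localhost;dbname=logs', 'user', 'pass');

$stmt = $pdo->prepare(
    'INSERT INTO access_log (id, hits, last_used)
     VALUES (:id, 1, NOW())
     ON DUPLICATE KEY UPDATE hits = hits + 1, last_used = NOW()'
);
$stmt->execute([':id' => 'XYNAME']);

Because the read-modify-write happens inside MySQL, there is no race between concurrent visitors.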

Dynamic survey application logic PHP/MSSQL

Firstly, I think this question could relate to any language, but I have specified what I am using.
Excuse me if I start to bore you, but I am trying to find out the best way to build a dynamic survey management system.
My client basically has said to me that the data has to be stored in MS SQL as his client has only got MS SQL connector for SAS, which is going to do reporting.
My logic so far is this:
1st. Set up the survey itself, i.e. ask for a title, a quick overview, etc.
2nd. Define your questions.
3rd. Publish the survey.
What I have done so far is that when they "publish" the survey, I create a dedicated database table for that survey, which will house the responses.
From the admin side, they will not be able to modify the questions afterwards, maybe the question title but that is about it. They can't add or remove questions.
The question is: is creating an individual database table per survey a good thing? My only real worry is that if the admin creates, say, 30 questions, I will have 30 columns in that dedicated table. On the other hand, this way might make it easy for the SAS system to pull in data for reporting. The administrator will not see the survey responses in the admin panel, by the way.
I have done something similar for a language grading exam. I opted for a more flexible approach with the following tables:
+------+ +-------------+ +-------------+ +-------------+ +----------+
| Exam | | Question | | Choice | | Answer | | User |
+------+ +-------------+ +-------------+ +-------------+ +----------+
| id | | id | | id | | id | | id |
| name | | questionNb | | choice | | user_id | | name |
+------+ | question | | question_id | | exam_id | | email |
| exam_id | | isAnswer | | question_id | | password |
+-------------+ +-------------+ | choice_id | +----------+
| isGood |
+-------------+
This model allowed me to easily have a 15-question exam, a 30-question exam, and a 50-question exam. To adapt this model for surveys, you might just have to remove the isAnswer and isGood parts and replace the user data with anonymous general data like age, income, and sex.
Creating a column for each question is totally wrong; altering the database at runtime for business-oriented purposes is a "never ever do".
Read up on relational databases; things should look like this:
table_surveys
id
survey_name
table_questions
id
fk_survey (foreign key to table_surveys)
question_text
(question value? maybe)
table_questions_options
id
question_id(foreign key to table_questions)
option_value (this can be true/false for a test or a numeric value for a survey)
option_label
table_users
id
username
pass
name
table_answers
id
options_fk (foreign key to table_question_options)
users_fk (foreign key to table_users)
This way everything is linked together (no reusing of options or questions across different surveys).
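A sketch of that layout as DDL, in T-SQL since the client requires MS SQL (all names follow the listing above; column sizes are assumptions):

CREATE TABLE table_surveys (
    id          INT IDENTITY(1,1) PRIMARY KEY,
    survey_name NVARCHAR(255) NOT NULL
);

CREATE TABLE table_questions (
    id            INT IDENTITY(1,1) PRIMARY KEY,
    fk_survey     INT NOT NULL REFERENCES table_surveys (id),
    question_text NVARCHAR(MAX) NOT NULL
);

CREATE TABLE table_questions_options (
    id           INT IDENTITY(1,1) PRIMARY KEY,
    question_id  INT NOT NULL REFERENCES table_questions (id),
    option_value NVARCHAR(255) NULL,    -- true/false for a test, numeric for a survey
    option_label NVARCHAR(255) NOT NULL
);

CREATE TABLE table_users (
    id       INT IDENTITY(1,1) PRIMARY KEY,
    username NVARCHAR(100) NOT NULL,
    pass     NVARCHAR(255) NOT NULL,
    name     NVARCHAR(255) NULL
);

CREATE TABLE table_answers (
    id         INT IDENTITY(1,1) PRIMARY KEY,
    options_fk INT NOT NULL REFERENCES table_questions_options (id),
    users_fk   INT NOT NULL REFERENCES table_users (id)
);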
According to the comments in the documentation, MS SQL Support in PHP is iffy at best. Is PHP the only language you are allowed to use for the project? If not, you might want to consider using C#, VB.Net or something more compatible with SQL Server. Otherwise, you could initially store the data in MySQL, and export it to MS SQL Server when you needed to do analysis.
I don't know if I really understand your question, but I once built such a survey system, and it came out pretty quick and easy with roughly the following tables (if I remember right):
USER, SURVEYS, QUESTIONS, ANSWERS, [some mapping tables]
SAS will fetch the data from virtually any table. If everything is in one or two tables, it will be even easier.
With all due respect to Kibbee, PHP/MSSQL support is actually VERY good. We do it quite often, and the performance bests PHP/MySQL and matches compiled C#/MSSQL (in our very limited and unscientific testing). This is assuming you're running PHP on a Win machine. Running PHP with a TLS connector to a separate MSSQL box is another ball of wax and can be a pain to configure.
Anyway, we had a similar scenario and went with one table to manage forms (Forms w/ FormID as the primary), another to manage fields/questions (Fields w/FieldID, FieldType such as Y/N, text, select, etc.), and another to "assign" a field to a form (FormFields w/ FormFieldID, FormID, FieldID, parameters in an array for select items, etc.). Then yet another set of tables to deal with the answering of the questions.
I agree with the rest of the group. Make sure to normalize and don't create a separate column for each question. It'll be more work initially, but you'll appreciate it when you simply have to add a few rows to a table instead of re-writing your queries and re-designing your tables.

Which is the best way to bi-directionally synchronize dynamic data in real time using MySQL

Here is the scenario: two web servers in two separate locations, each with a MySQL database containing identical tables. The data within the tables is also expected to be identical in real time.
Here is the problem: if users in the two locations simultaneously enter new records into the identical tables, as illustrated in the first two tables below (where the third record in each table was entered simultaneously by different people), the data in the tables is no longer identical. What is the best way to ensure the data stays identical in real time, as illustrated in the third table below, regardless of where the updates take place? That way, instead of ending up with 3 diverging rows in each table, the new records are replicated bi-directionally and inserted into both tables, producing two identical tables again, with 4 rows this time.
Server A in Location A
======================
Table Names
| ID | NAME  |
|----|-------|
| 1  | Tom   |
| 2  | Scott |
| 3  | John  |

Server B in Location B
======================
Table Names
| ID | NAME  |
|----|-------|
| 1  | Tom   |
| 2  | Scott |
| 3  | Peter |

Expected Scenario
=================
Table Names
| ID | NAME  |
|----|-------|
| 1  | Tom   |
| 2  | Scott |
| 3  | Peter |
| 4  | John  |
There isn't much performance to be gained from replicating your database on two masters. However, there is a nifty bit of failover if you code your application correctly.
A master-master setup is essentially the same as a master-slave setup, but with the slave threads started on both boxes and an important change to the config file on each:
Master MySQL 1:
auto_increment_increment = 2
auto_increment_offset = 1
Master MySQL 2:
auto_increment_increment = 2
auto_increment_offset = 2
These two parameters ensure that when the two servers are fighting over a primary key for some reason, they do not duplicate it and kill the replication. Instead of incrementing by 1, any auto-increment field will by default increment by 2. One box will start offset at 1 and run the sequence 1, 3, 5, 7, 9, 11, 13, etc. The second box will start offset at 2 and run along 2, 4, 6, 8, 10, 12, etc. From current testing, the auto-increment appears to take the next free number, not one that was left unused earlier.
E.g. if server 1 inserts the first 3 records (1, 3 and 5), then when server 2 inserts the 4th, it will be given the key 6 (not 2, which is left unused).
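For context, each box also needs the usual replication settings around those two parameters. A sketch; the server ID and log name here are illustrative:

# my.cnf on box 1 (box 2 mirrors this with server-id = 2 and
# auto_increment_offset = 2; each box is then pointed at the other
# with CHANGE MASTER TO and started as a slave)
[mysqld]
server-id                = 1
log_bin                  = mysql-bin
auto_increment_increment = 2
auto_increment_offset    = 1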
Once you've set that up, start both of them up as Slaves.
Then, to check that both are working OK, connect to both machines and run the command SHOW SLAVE STATUS; you should see that Slave_IO_Running and Slave_SQL_Running both say "Yes" on each box.
Then, of course, create a few records in a table and ensure that one box is only inserting odd-numbered primary keys and the other only even-numbered ones.
Then do all the tests to ensure that you can perform all the standard applications on each box with it replicating to the other.
It's relatively simple once it's going.
But as has been mentioned, MySQL does discourage it and advise that you ensure you are mindful of this functionality when writing your application code.
Edit: I suppose it's theoretically possible to add more masters if you ensure that the offsets are correct and so on. You might more realistically though, add some additional slaves.
MySQL does not support synchronous replication, however, even if it did, you would probably not want to use it (can't take the performance hit of waiting for the other server to sync on every transaction commit).
You will have to consider more appropriate architectural solutions to it - there are third party products which will do a merge and resolve conflicts in a predetermined way - this is the only way really.
Expecting your architecture to function in this way is naive - there is no "easy fix" for any database, not just MySQL.
Is it important that the UIDs are the same? Or would you entertain the thought of having a table or column mapping the remote UID to the local UID, and writing custom synchronisation code for the objects you wish to replicate, doing any necessary mapping of UIDs for foreign key columns, etc.?
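A sketch of such a mapping table (all names hypothetical):

-- Tracks which local row corresponds to which row on the remote server,
-- per replicated table, so synchronisation code can translate foreign keys.
CREATE TABLE uid_map (
    table_name VARCHAR(64) NOT NULL,
    local_id   INT NOT NULL,
    remote_id  INT NOT NULL,
    PRIMARY KEY (table_name, remote_id),
    UNIQUE KEY uq_local (table_name, local_id)
);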
The only way to ensure your tables are synchronized is to set up two-way replication between the databases.
But MySQL only permits one-way replication, so you can't simply solve your problem in this configuration.
To be clear, you can "set up" two-way replication, but MySQL AB discourages it.
