In a relational database, what is the best way to store a foreign key, or relate a key? I'm not sure how best to phrase it.
For example, if you had tables:
User:
  ID
  Name
  Email
Email:
  ID
  Email
This is not a table I am actually creating, just something simple enough to get the idea across.
In the User table, if I create a key with the ID, then when I select from the User table the value in the email field is the ID of the email, not the email itself. I could store the actual email text instead, which fixes that issue, but is that not wasteful?
I am using phpMyAdmin to set up the databases and keys.
Any advice on the best way to do this?
phpMyAdmin provides a Graphical User Interface (GUI) for MySQL operations.
The best way to do this, in the databases which I presume you will have already created, is to simply run a quick query!
MySQL looks pretty scary, but it's actually really simple once you grasp the concept of it!
Looking at your example, you'll most likely want to run a query like this one. (This is rough pseudo-SQL and may not run as-is.)
-- User.Email holds the ID of a row in the Email table
ALTER TABLE User
  ADD FOREIGN KEY (Email) REFERENCES Email(ID);
Easy peasy! :)
The best resource for finding out more about the required syntax is the W3Schools article on the SQL FOREIGN KEY constraint.
Edit
As a general database rule, I'd recommend making all your column names (fields) unique! This avoids confusion. For example, you could use UserID and EmailID.
Related
I have this MySQL table, where contact_id is unique for each user_id.
history:
- hist_id: int(11) auto_increment primary key
- user_id: int(11)
- contact_id: int(11)
- name: varchar(50)
- phone: varchar(30)
From time to time, the server will receive a new list of contacts for a specific user_id and needs to update this table, inserting, deleting or updating whatever differs from the previous information.
For example, after comparing the current data with a newly received list: the first row (John) was updated, the second row (Mary) was deleted and another row (Jeniffer) was added.
What I am doing today is deleting all rows with the specific user_id and inserting the new data. But the auto-increment field (hist_id) keeps getting bigger and bigger...
Note: the table has about 80 thousand records, and this update will occur 30 or more times a day.
I have some (related) questions:
1. In this scenario, do you think deleting all records from a specific user_id and inserting updated data is a good approach?
2. What about removing the autoincrement field? I don't need it, but I think it is not a good idea to have a table without a primary key.
3. Or maybe the better approach is to loop through the new data, selecting each user_id / contact_id pair and comparing values to decide what to update?
P.S. By better approach I mean the most efficient way.
Thank you so much for any help!
In this scenario, do you think deleting all records from a specific user_id and inserting updated data is a good approach?
Short Answer
No. You should be taking advantage of an 'upsert', which is shorthand for INSERT ... ON DUPLICATE KEY UPDATE. What this means is that if the key pair you're inserting already exists, the specified columns are updated with the specified data. You shorten your logic and stop burning through auto-increments. Here's an example, using your table structure, that should work. This assumes you have added a unique key across the user_id and contact_id fields.
INSERT INTO history (user_id, contact_id, name, phone)
VALUES
(1, 23, 'James Jr.', '(619)-543-6222')
ON DUPLICATE KEY UPDATE
name=VALUES(name),
phone=VALUES(phone);
This query should retain the contact_id but overwrite the preexisting data with the new data.
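For reference, the composite unique key that the upsert relies on could look something like this (the index name is just an example):
ALTER TABLE history
  ADD UNIQUE KEY uq_user_contact (user_id, contact_id);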
What about removing the autoincrement field? I don't need it, but I think it is not a good idea to have a table without a primary key.
Primary keys do not imply auto-incremented values. I could have a varchar field as the primary key containing names of fruits and vegetables. Is this optimized for performance? Probably not. There are many situations that might call for auto-increment and there are definite reasons to avoid it. It all depends on how you wish to access the data and how this can impact future expansion. In your situation, I would start over on the table structure and rethink how you wish to store and access the data. Do you want to write more logic to control the data, or do you want the data to flow naturally by itself? You've made a history table that, at first glance, is functioning more like a hybrid many-to-one crosswalk table. Without looking at the rest of the table structure, I can't say on a whim that it's a bad idea. What I can say is that I would do this a bit differently. I will answer this more specifically in the next question.
Or maybe the better approach is to loop new data, selecting each user_id / contact_id for comparing values to update?
I would avoid looping through the data in order to update it. That is a job for SQL and it does that job well. Sometimes we might find ourselves in a situation where we must do this, either to extract data in a specific format or to repair data in some way; however, avoid doing it for inserting or updating data. It can negatively impact performance and you will likely paint yourself into a corner.
Back to what I said toward the end of your second question, which will help you see what I am talking about. I am going to assume that user_id is an auto-incremented primary key in your user table. I will do some guesstimation here and show you an example of how you could redesign your user, contact and phone number structure. The following is a quick model I threw together that shows the foreign key relationships between the tables.
Note: The column names and overall data arrangement could be done differently, but I did this quickly to give you a decent example of a normalized database structure. The foreign keys give it a structural layout which separates your data in a way that enables you to control the flow of data as it enters and leaves your system. Here's a screenshot of the database model I threw together using MySQL Workbench (source: xonos.net).
Here's a rough version of the SQL so that you can look at the structure more closely.
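(This is only a minimal sketch; the exact column names and types in the Workbench model differ, so treat the ones below as placeholders.)
-- one row per human being; users and contacts both point here
CREATE TABLE person (
  person_id INT UNSIGNED NOT NULL AUTO_INCREMENT,
  first_name VARCHAR(50) NOT NULL,
  last_name VARCHAR(50) NOT NULL,
  PRIMARY KEY (person_id)
) ENGINE=InnoDB;

CREATE TABLE user (
  user_id INT UNSIGNED NOT NULL AUTO_INCREMENT,
  person_id INT UNSIGNED NOT NULL,
  username VARCHAR(50) NOT NULL,
  PRIMARY KEY (user_id),
  FOREIGN KEY (person_id) REFERENCES person (person_id)
    ON DELETE CASCADE ON UPDATE CASCADE
) ENGINE=InnoDB;

CREATE TABLE contact (
  contact_id INT UNSIGNED NOT NULL AUTO_INCREMENT,
  person_id INT UNSIGNED NOT NULL,
  company VARCHAR(50) NULL,
  PRIMARY KEY (contact_id),
  FOREIGN KEY (person_id) REFERENCES person (person_id)
    ON DELETE CASCADE ON UPDATE CASCADE
) ENGINE=InnoDB;

CREATE TABLE phone (
  phone_id INT UNSIGNED NOT NULL AUTO_INCREMENT,
  person_id INT UNSIGNED NOT NULL,
  phone VARCHAR(30) NOT NULL,
  PRIMARY KEY (phone_id),
  FOREIGN KEY (person_id) REFERENCES person (person_id)
    ON DELETE CASCADE ON UPDATE CASCADE
) ENGINE=InnoDB;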
You'll notice that the "person" table is extracted from users but shares data with contacts. This enables you to store all "people" in one place, all "users" in another and all "contacts" in another. Now, why would we do this? The number one reason can be explained in two scenarios.
1.) Say we have someone; in this example I'll call him "Jim Bean". Jim Bean works for the company, so he is a user of the system. But Jim Bean happens to own a side business and does contract work for the company at the same time. So he is both a contact and a user of the system. In a more "flat table" environment, we would have two records for Jim Bean containing the same data, which could quickly become outdated or incorrect.
2.) Let's say that Jim did some bad things and the company wants nothing to do with him anymore. They don't want any record of him - as if he never existed. All we have to do is delete Jim Bean from the person table. That's it. Since the foreign key relationships have CASCADE on update/delete, this automatically propagates and clears out the rows related to him in the other tables.
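With the sketch above, that cleanup is a single statement (using the placeholder column names):
-- removing the person row also removes his user, contact and phone rows via the cascades
DELETE FROM person
WHERE first_name = 'Jim' AND last_name = 'Bean';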
I highly recommend that you do some reading on normalized data structure. It has saved me many hours once I got the hang of it and I will never go back.
I'm working on an application which allows a moderator to edit a user's information.
So, at the moment, I have URLs like
http://xxx.xxx/user/1/edit
http://xxx.xxx/user/2/edit
I'm a bit worried here, as I'm directly exposing the users table's primary key (id) from the database.
I simply take the id from the URL (e.g. 1 and 2 from the URLs above), query the database with that ID and get the user information (of course, I sanitize the input, i.e. the ID from the URL).
Please note that:
I'm validating every request to check that the moderator has access to edit that user.
This is what I'm doing. Is this safe? If not, how should I be doing it?
I can think of one alternative: add a separate column to the users table holding a 25-character key, use those keys in the URLs and query the database with those keys.
But,
What difference does it make? (Since that key is exposed now instead.)
Querying by the primary key yields results faster than querying by other columns.
This is safe (and seems to be the best way to do it) as long as the validation of the admin rights is correct and you have protection against SQL injection, both of which you mention, so I'd say you're good.
The basic question is whether exposing the primary key is safe or not. I would say that in most situations it is safe, and I believe Stack Overflow is doing it the same way:
http://stackoverflow.com/users/1/
http://stackoverflow.com/users/2/
http://stackoverflow.com/users/3/
If you check the "member for" time on those profiles you can see that it is decreasing, so the number is probably the PK as well.
Anyway, obscuring the PK can be useful in situations where you want to keep a casual user from going through all the entries just by typing 1, 2, 3, etc. into the URL; in that case obscuring the PK as something like 535672571d2b4 is useful.
If you are really unsure, you could also use XOR with a nice (big) fixed value. This way you would not expose your ids. When you apply the same "secret number" to the XOR'ed value again, you get the original value back.
$YOUR_ID xor $THE_SECRET_NUMBER = $OUTPUTTED_VALUE
$OUTPUTTED_VALUE xor $THE_SECRET_NUMBER = $YOUR_ID
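For instance, the same round trip can be tried directly in MySQL with the ^ (bitwise XOR) operator; 123 and 987654321 below are just made-up example values:
SELECT 123 ^ 987654321 AS obscured_id;               -- the value you would put in the URL
SELECT (123 ^ 987654321) ^ 987654321 AS original_id; -- XOR with the secret again returns 123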
Fast answer: no.
Long answer:
You have a primary key to identify someone with, which is unique. If you add another unique key to stop people from knowing the primary key, all you achieve is that they now know a different key.
That key still needs to be unique and indexed (for fast lookups), which sounds a lot like a primary key.
If it is a matter of nice URLs, then you could use a username or something like that.
But that would be security through obscurity. So better to prevent SQL injection and validate that people have access to the right actions.
If you have plain auto-increment ids you expose your data to the world. It is not secure (it makes, for example, brute-forcing all the available data in your tables easy). But you can generate the ids of your DB entities not sequentially, but in a pseudo-random manner. E.g. in PostgreSQL:
CREATE TABLE t1 (
    -- assumes the sequence already exists: CREATE SEQUENCE id_seq;
    id bigint NOT NULL DEFAULT (((nextval('id_seq'::regclass) * 678223072849::bigint)
        % (1000000000)::bigint) + 460999999999::bigint),
    ...
    <other fields here>
);
I am working on a web application that manages the clients of the company. Details such as phone, address, email and name are saved for each client and there are corresponding fields in the database table where I save these details.
The user of the application has to be able to change the different details. For instance, he might decide that we need an extra field to save the fax number of the client or he may decide that the address field is no longer needed and delete it.
Using NoSQL is not an option. I have to use PHP and MySQL.
I have been considering using a JSON string to save database table fields but I have not come up with a solution yet.
Is altering the structure of my DB table the only solution to my problem? I would like to avoid dynamically altering the structure of the DB table, if possible.
Would it be a good idea to implement dynamic views? I guess, however, that this would not address the need to insert new fields.
Thank you in advance.
Wouldn't it make more sense to have another table, let's call it 'information', which has user_id as a foreign key?
So you have:
CREATE TABLE user (
    user_id INT NOT NULL PRIMARY KEY  /* INT used as an example type */
    /* necessary information */
);
CREATE TABLE information (
    user_id INT NOT NULL,             /* foreign key to user.user_id */
    information_type VARCHAR(50),     /* maybe enum, maybe just string, maybe int, depending how you want to do that */
    information_blob TEXT
);
You then retrieve the information with a JOIN, and do not have to alter the table every time somebody wants to add another bit of info.
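For example, fetching everything for one user could look roughly like this (using the column names from the sketch above):
SELECT u.user_id, i.information_type, i.information_blob
FROM user u
JOIN information i ON i.user_id = u.user_id
WHERE u.user_id = 1;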
What you need is a key-value pair system in MySQL. The idea of NoSQL databases is that you can create your own schema based on keys and values, using essentially anything for the value.
Create a table special_fields with a field_name column (or something named more specifically for your field names). Use this table to define the available field names, and another table to store the client_id, the special_field_id and then a value.
So client #1 would have an address (special_field record #1) value of "123 x street"
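A rough sketch of that layout (the table and column names here are only an assumption):
CREATE TABLE special_fields (
    special_field_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    field_name VARCHAR(50) NOT NULL
);

CREATE TABLE client_field_values (
    client_id INT NOT NULL,
    special_field_id INT NOT NULL,
    value TEXT,
    PRIMARY KEY (client_id, special_field_id),
    FOREIGN KEY (special_field_id) REFERENCES special_fields (special_field_id)
);

-- client #1 gets an address value
INSERT INTO special_fields (field_name) VALUES ('address');
INSERT INTO client_field_values (client_id, special_field_id, value)
VALUES (1, 1, '123 x street');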
The only other way I can think of is to actually change the schema of a table to add/remove columns. Don't do that.
In the following situation, what would the database design look like?
This is for some sort of inventory system.
I want users to be able to create an item type (let's say, a Laptop). I also want users to be able to say that the serial number and MAC address must be unique. This part confuses me as to where to check for unique values, since I have no idea how to make a table with all items in it that enforces unique values.
Let's say a user creates another item type that has no serial number or any unique fields; this means I can't just build my DB with property1 through property10 fields in the database.
I also don't want to build a table for every item type, since that would involve too much advanced table management in PHP.
Any suggestions on how to build this DB?
Just to clarify that I understand your requirements correctly: you mean to create a table where the unique rule only applies to a subset of the table instead of the entire table?
If so, I think there are two options:
Have two tables, one with unique rules and one without (see the sketch after this list), OR
Enforce the unique rules at the application level as business rules instead of at the database level.
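For the first option, a rough sketch (table and column names are only assumptions):
CREATE TABLE item_with_unique_fields (
    item_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    item_type VARCHAR(50) NOT NULL,
    serial_number VARCHAR(100) NOT NULL,
    mac_address VARCHAR(17) NOT NULL,
    UNIQUE KEY uq_serial (serial_number),
    UNIQUE KEY uq_mac (mac_address)
);

CREATE TABLE item_without_unique_fields (
    item_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    item_type VARCHAR(50) NOT NULL
);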
If I got your question correctly, I would just do something simple and maintainable, such as the following.
You can achieve it by assuming the table name is item, where the item id is the primary key; that makes it easy to pick out which item you want. Then, while posting (inserting) a serial number and MAC address, you should check with PHP that there is no duplication of data (for example, with a query like the one below).
Did I get you?
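A check along these lines could be run before the insert (serial_number and mac_address are assumed column names; you would still want unique indexes to be safe from race conditions):
SELECT COUNT(*) AS already_there
FROM item
WHERE serial_number = 'ABC123' OR mac_address = '00:11:22:33:44:55';
-- insert only if already_there = 0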
I am trying to create a site where users can register and create a profile, so I am using two MySQL tables within a database, e.g. users and user_profile.
The users table has an auto increment primary key called user_id.
The user_profile table has the same primary key, called user_id; however, it is not auto increment.
*see note for why I have multiple tables.
When a user signs up, data from the registration form is inserted into users, then the last_insert_id() is input into the user_id field of the user_profile table. I use transactions to ensure this always happens.
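Roughly, the flow looks like this (column names simplified):
START TRANSACTION;
INSERT INTO users (email, password_hash) VALUES ('user@example.com', '<hash>');
INSERT INTO user_profile (user_id, bio) VALUES (LAST_INSERT_ID(), 'Hello!');
COMMIT;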
My question is, is this bad practice?
Should I have a unique auto increment primary key for the user_profile table, even though one user can only ever have one profile?
Maybe there are other downsides to creating a database like this?
I'd appreciate it if anyone could explain why this is a problem or whether it's fine; I'd like to make sure my database is as efficient as possible.
Note: I am using separate tables for users and user_profile because user_profile contains fields that are potentially null and will also be requested much more often than the users table, due to the data being displayed on a public profile.
Maybe this is also bad practice and they should be lumped into one table?
I find this a good approach. I'd give bonus points if you use a foreign key relation, preferably cascading when deleting the user from the user table.
As to separating the core user data into one table and the optional profile data into another - good job. Nothing is more annoying than a 50-field dragon of a row with 90% empty values.
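Something along these lines (the constraint name is just an example):
ALTER TABLE user_profile
  ADD CONSTRAINT fk_user_profile_user
  FOREIGN KEY (user_id) REFERENCES users (user_id)
  ON DELETE CASCADE;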
It is generally frowned upon, but as long as you can provide the reasoning for the 1-to-1 relationship, I'm sure it is fine.
I have used them when I have hundreds of columns (and it would be more logical to split them out into separate tables),
or when I need a thinner table to speed up full scans.
In your case I would use a single table and create a couple of views.
see: http://dev.mysql.com/doc/refman/5.0/en/create-view.html
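A minimal sketch of that idea (column names are placeholders):
CREATE TABLE users (
    user_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    email VARCHAR(100) NOT NULL,
    password_hash CHAR(60) NOT NULL,
    display_name VARCHAR(50) NULL,
    bio TEXT NULL
);

CREATE VIEW user_core AS
    SELECT user_id, email, password_hash FROM users;

CREATE VIEW user_profile AS
    SELECT user_id, display_name, bio FROM users;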
In general a single table approach is more logical, quicker, simpler, and uses less space.
I don't think it's bad practice. Sometimes it's quite useful, especially if you want one class to deal with authentication and not load all the profile data. You can then modify how your authentication works, build web services and so on, with little concern about maintaining the data structures for profile information, which is likely to change as your project evolves.
This is very good practice.
It's right at the core of writing good, modular, normalised relational database structures.