How to generate an ID number like 'xxx0000', 'xxx0001'? - php

This is my table:
| ID |
| xxx0000 |
| xxx0001 |
| xxx0002 |
I want to make my ID pattern like that, but I don't know how to generate it.

You have two different pieces of data, so make two different columns.
ID INT NOT NULL AUTO_INCREMENT,
SomethingElse SomeOtherType NOT NULL
What SomethingElse is named and what data type it is would be up to you. xxx doesn't tell us much.
If both of these things combined make up your primary key, you can use a composite key of multiple columns:
PRIMARY KEY (SomethingElse, ID)
The same integrity checks for any primary key will continue to apply, the database will simply check both columns for combined uniqueness.
At this point you have the data you want, now it's just a matter of displaying it how you want. Whether you do that in SQL or in PHP is up to you. Whether you want the application to see them as a single value or see the underlying different values, also up to you.
Selecting them as a single value from SQL could be simple enough. Something like:
SELECT CONCAT(SomethingElse, ID) AS ID FROM ...
If you always want those padded zeroes, MySQL's LPAD() will handle that (see the sketch below). Other string manipulations you might want would also be tackled one at a time (and each could become its own Stack Overflow question if you need assistance with it).
But the basic idea here is that what you have is a composite value. In a relational database you would generally store the underlying values and perform the composition when querying the data.
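A minimal sketch of that idea, assuming a table named items and a single-column auto-increment key (the composite-key variant has engine-specific rules, so it is omitted here):

CREATE TABLE items (
    SomethingElse CHAR(3) NOT NULL,   -- the 'xxx' prefix part
    ID INT NOT NULL AUTO_INCREMENT,   -- the numeric part
    PRIMARY KEY (ID)
);

-- Compose the display value at query time, padding the number to 4 digits
SELECT CONCAT(SomethingElse, LPAD(ID, 4, '0')) AS ID
FROM items;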

Related

Normalizing a simple SQL Table

I have two different tables and I am not sure of the best way to get the design out of first normal form and into second normal form. The first table holds the user information while the second holds the products associated with the account. If I do it this way, I know it is only in 1NF and that the foreign key User_ID will be repeated many times in Table 2. See the tables below.
Table 1
|User_ID (primary)| Name | Address | Email | Username | Password |
Table 2
| Product_ID (Primary Key) | User_ID (Foreign Key) |
Is this a better way to make table two, in which the user ID is not repeated? I have thought about having a separate table in the database for each user, but from all of the other questions I read on Stack Overflow, that is not a good idea.
The constraints I am working with are 1-1000 users, and Table 2 will have approximately 1-1000 entries per user. Is there a better way to create this set of tables?
I don't see NF2 violated. It states:
a table is in 2NF if it is in 1NF and no non-prime attribute is dependent on any proper subset of any candidate key of the table.
quoted from Wikipedia article "Second normal form", 2016-11-26
Table 2 has only one candidate key, the primary key. The primary key consists of only one column. So, there is no proper subset of a candidate key. So, NF2 can't be violated unless NF1 is not fulfilled.
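For reference, a minimal sketch of the two tables as described (column types are assumptions); the repeated User_ID in Table 2 is an ordinary foreign key, not a normalization problem:

CREATE TABLE users (
    User_ID INT UNSIGNED NOT NULL AUTO_INCREMENT,
    Name VARCHAR(100) NOT NULL,
    Address VARCHAR(255) NOT NULL,
    Email VARCHAR(255) NOT NULL,
    Username VARCHAR(50) NOT NULL,
    Password VARCHAR(255) NOT NULL,
    PRIMARY KEY (User_ID)
) ENGINE=InnoDB;

CREATE TABLE products (
    Product_ID INT UNSIGNED NOT NULL AUTO_INCREMENT,
    User_ID INT UNSIGNED NOT NULL,    -- repeats once per product; that is fine
    PRIMARY KEY (Product_ID),
    FOREIGN KEY (User_ID) REFERENCES users (User_ID)
) ENGINE=InnoDB;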
You say "to make table two in which the user ID is not repeated".
Then why don't you do:
Table 1
| User_ID (primary) | Name | Address | Email | Username | Password | Product_ID (Foreign Key, nullable) |
Table 2
| Product_ID (Primary Key) |
There's nothing wrong with a value appearing many times. Redundancy arises when two queries that aren't syntactically equivalent always both return the same value. Only uncontrolled redundancy is bad. Normalization controls some redundancy by replacing a table by smaller ones that join to it.
Normalization decomposes a table independently of other tables. (We define the normal form of a database as the lowest normal form that all of its tables are in.) Foreign keys have nothing to do with violating normal forms.
Learn what it means for a table to be in a given normal form. You will need to learn a definition. And the definitions of the terms it uses. And the definitions of the terms they use. Etc. A table is in 2NF when every non-prime column has a functional dependency that is full on every candidate key. Also learn the algorithm for decomposing a table into components that are in a given normal form. Assuming that these tables can hold more than one row, so that {} is not a candidate key, both these tables are in 2NF.
A table in 2NF is also in 1NF. So you don't want "to get it out of the first normal form".
2NF is unimportant. When dealing with functional dependencies, what matters is BCNF, which decomposes as much as possible but requires certain higher-cost constraints, and 3NF, which doesn't decompose as much as possible but requires certain lower-cost constraints.

Storing assignments between 2 tables in MySQL

I am wondering what the best solution is for storing relations between 2 tables in MySQL.
I have the following structure:
Table: categories
id | name | etc...
_______________________________
1 | Graphic cards | ...
2 | Processors | ...
3 | Hard Drives | ...
Table: properties_of_categories
id | name
_____________________
1 | Capacity
2 | GPU Speed
3 | Memory size
4 | Clock rate
5 | Cache
Now I need them to have connections, and the question is: what is the better, more efficient and lighter solution? This matters because there may be hundreds of categories and thousands of properties assigned to them.
Should I just create another table with a structure like
categoryId | propertyId
Or perhaps add another column to the categories table and store the properties in a text field like 1,7,19,23?
Or maybe create json files named for example 7.json with content like
{1,7,19,23}
As this question pertains to the relational world, I would suggest adding another table to store the many-to-many relationship between Category and Property.
You can also use a JSON column to store many values in one of the tables.
The JSON datatype was introduced in MySQL 5.7 and comes with various features for JSON data retrieval and updating. However, if you are using an older version, you would need to manage it with a string column and some cumbersome string-manipulation queries.
The required structure depends on the relationship type: one-to-many, many-to-one, or many-to-many (M2M).
For a one-to-many, a foreign key (FK) on the 'many' side relates many items to the 'one' side. The reverse is correct for many-to-one.
For many-to-many (M2M) you need an intermediate relational (or junction) table exactly as you suggest. This allows you to "reuse" both categories and properties in any combinations. However it's slightly more SQL - requiring 2 JOINs.
If you are looking for performance, then using FKs to primary keys (PKs) would be very efficient and the queries are pretty simple. Using JSON would presumably require you to parse in PHP and construct on-the-fly second queries which would multiply your coding work and testing, data transfer, CPU overhead, and limit scalability.
In your case I'm guessing that both "graphics cards" and "hard drives" could have e.g. "memory size" plus other properties, so you would need a M2M relational table as you suggest.
As long as your keys are indexed (which PKs are), your JOIN to this relational table will be very quick and efficient.
If you use CONSTRAINTs with your relations, then you ensure you maintain data integrity: you cannot delete a category to which a property is "attached". This is a good feature in the long run.
Hundreds and thousands of records is a tiny amount for MySQL. You would use this technique even with millions of records. So there's no worry about size.
RDBMS databases are designed specifically to do this, so I would recommend using the native features rather than trying to do it yourself in JSON (unless I'm missing some new JSON MySQL feature! *)
* Since posting this, I indeed stumbled across a new JSON MySQL feature. It seems, from a quick read, you could implement all sorts of new structures and relations using JSON and virtual column keys, possibly removing the need for junction tables. This will probably blur the line between MySQL as an RDBMS and NoSQL.
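To make the "2 JOINs" concrete, here is a sketch of the typical lookup through a junction table; it assumes the categories_properties_match table defined in the next answer:

-- All properties attached to one category, via two JOINs
SELECT c.name AS category, p.name AS property
FROM categories AS c
JOIN categories_properties_match AS cpm ON cpm.categoryId = c.id
JOIN properties_of_categories AS p ON p.id = cpm.propertyId
WHERE c.name = 'Hard Drives';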
The first solution is better when it comes to relational databases. You should create a table that pairs categories with properties (an m:n relationship).
You could structure the table like so:
CREATE TABLE categories_properties_match (
    categoryId INTEGER NOT NULL,
    propertyId INTEGER NOT NULL,
    PRIMARY KEY (categoryId, propertyId),
    FOREIGN KEY (categoryId) REFERENCES categories (id) ON UPDATE CASCADE ON DELETE CASCADE,
    FOREIGN KEY (propertyId) REFERENCES properties_of_categories (id) ON UPDATE CASCADE ON DELETE CASCADE
);
The primary key ensures that there will be no duplicate entries, i.e. entries that match one category to the same property twice.

How to store 60 Booleans in a MySQL Database?

I'm building a mobile app, and I use PHP & MySQL to write the backend REST API.
I have to store around 50-60 Boolean values in a table called "Reports" (users have to check things in a form); in my mobile app I store the values (0/1) in a simple array. In my MySQL table, should I create a separate column for each Boolean value, or is it enough to use a string or an int to store them as a "number" like "110101110110111..."?
I get and put the data with JSON.
UPDATE 1: All I have to do is check whether everything is 1; if one of them is 0, then that's a "problem". In 2 years this table will have around 15,000-20,000 rows, and it has to be very fast and as space-saving as possible.
UPDATE 2: In terms of speed, which solution is faster: separate columns, or storing it in a string/binary type? What if I have to check which ones are the 0s? Is it a good solution to store it as a "number" in one column and, if it's not "111...111", send it to the mobile app as JSON, where I parse the value and analyse it on the user's device? Let's say I have to deal with 50K rows.
Thanks in advance.
A separate column per value is more flexible when it comes to searching.
A separate key/value table is more flexible if different rows have different collections of Boolean values.
And, if
your list of Boolean values is more-or-less static
all your rows have all those Boolean values
your performance-critical search is to find rows in which any of the values are false
then using text strings like '1001010010' etc is a good way to store them. You can search like this
WHERE flags <> '11111111'
to find the rows you need.
You could use a BINARY column with one bit per flag. But your table will be easier to use for casual queries and eyeball inspection if you use text. The space savings from using BINARY instead of CHAR won't be significant until you start storing many millions of rows.
Edit: it has to be said: every time I've built something like this with arrays of Boolean attributes, I've later been disappointed at how inflexible it turned out to be. For example, suppose it was a catalog of light bulbs. At the turn of the millennium, the Boolean flags might have been stuff like
screw base
halogen
mercury vapor
low voltage
Then, things change and I find myself needing more Boolean flags, like,
LED
CFL
dimmable
Energy Star
etc. All of a sudden my data types aren't big enough to hold what I need them to hold. When I wrote "your list of Boolean values is more-or-less static" I meant that you don't reasonably expect to have something like the light-bulb characteristics change during the lifetime of your application.
So, a separate table of attributes might be a better solution. It would have these columns:
item_id fk to item table -- pk
attribute_id attribute identifier -- pk
attribute_value
This is ultimately flexible. You can just add new flags. You can add them to existing items, or to new items, at any time in the lifetime of your application. And, every item doesn't need the same collection of flags. You can write the "what items have any false attributes?" query like this:
SELECT DISTINCT item_id FROM attribute_table WHERE attribute_value = 0
But, you have to be careful because the query "what items have missing attributes" is a lot harder to write.
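As an illustration of why it is harder, here is a sketch of one way to write it, assuming an items table alongside the attribute table described above (an anti-join against every known attribute):

-- (item, attribute) pairs for which no row exists in the attribute table
SELECT i.item_id, a.attribute_id
FROM items AS i
CROSS JOIN (SELECT DISTINCT attribute_id FROM attribute_table) AS a
LEFT JOIN attribute_table AS t
       ON t.item_id = i.item_id AND t.attribute_id = a.attribute_id
WHERE t.item_id IS NULL;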
For your specific purpose, where any zero flag is a problem (an exception) and most entries (like 99%) will be "1111...1111", I don't see any reason to store them all. I would rather create a separate table that only stores the unchecked flags. The table could look like: unchecked_flags (user_id, flag_id). In another table you store your flag definitions: flags (flag_id, flag_name, flag_description).
Then your report is as simple as SELECT * FROM unchecked_flags.
Update - possible table definitions:

CREATE TABLE `flags` (
    `flag_id` TINYINT(3) UNSIGNED NOT NULL AUTO_INCREMENT,
    `flag_name` VARCHAR(63) NOT NULL,
    `flag_description` TEXT NOT NULL,
    PRIMARY KEY (`flag_id`),
    UNIQUE INDEX `flag_name` (`flag_name`)
) ENGINE=InnoDB;

CREATE TABLE `unchecked_flags` (
    `user_id` MEDIUMINT(8) UNSIGNED NOT NULL,
    `flag_id` TINYINT(3) UNSIGNED NOT NULL,
    PRIMARY KEY (`user_id`, `flag_id`),
    INDEX `flag_id` (`flag_id`),
    CONSTRAINT `FK_unchecked_flags_flags` FOREIGN KEY (`flag_id`) REFERENCES `flags` (`flag_id`),
    CONSTRAINT `FK_unchecked_flags_users` FOREIGN KEY (`user_id`) REFERENCES `users` (`user_id`)
) ENGINE=InnoDB;
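A sketch of a slightly richer report than the bare SELECT *, joining in the flag names (using the tables defined above):

SELECT u.user_id, f.flag_name
FROM unchecked_flags AS u
JOIN flags AS f ON f.flag_id = u.flag_id
ORDER BY u.user_id;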
You may get a better search out of using a dedicated column for each Boolean, but the cardinality is poor, and even if you index each column it will involve a fair bit of traversal or scanning.
If you are just looking for HIGH-VALUES (0xFFF...), then a bitmap definitely solves your cardinality problem (per the OP's update). It's not like you are checking parity... The index tree will, however, be heavily skewed towards HIGH-VALUES if that is the norm, which can create a hot spot prone to node splitting on inserts.
Bit mapping with bitwise operator masks will save space but needs to be aligned to a byte, so there may be an unused "tip" (provisioning for future fields, perhaps); the mask must therefore be of a maintained length, or the field padded with 1s.
It will also add complexity to your architecture, possibly requiring bespoke coding and bespoke standards.
You need to perform an analysis of the importance of any searching (you may not ordinarily expect to search all, or even any, of the discrete fields).
This is a very common strategy for denormalising data and also for tuning service requests for specific clients (where some responses are fatter than others for the same transaction).
Case 1: If "problems" are rare.
Have a table Problems with the item's id and a TINYINT identifying which of the 50-60 problems it is. With suitable indexes on that table you can look up whatever you need.
Case 2: Lots of items.
Use a BIGINT UNSIGNED to hold up to 64 0/1 values. Use an expression like 1 << n to build a mask for the nth bit (counting from 0). If you know, for example, that there are exactly 55 bits, then the value of all 1s is (1<<55)-1, and you can find the items with a problem via WHERE bits <> (1<<55)-1.
Bit Operators and functions
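A sketch of those bitmask queries, assuming the column is called bits in the OP's Reports table:

-- Rows where at least one flag is 0 (a "problem"), assuming exactly 55 flags
SELECT * FROM Reports WHERE bits <> (1 << 55) - 1;

-- Rows where flag n (counting from 0) is unset, e.g. n = 3
SELECT * FROM Reports WHERE bits & (1 << 3) = 0;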
Case 3: You have names for the problems.
SET ('broken', 'stolen', 'out of gas', 'wrong color', ...)
That builds a data type with (logically) one bit per problem. See also the FIND_IN_SET() function as a way to check for one problem.
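A sketch of the SET approach with a short member list for illustration (the column name problems is an assumption):

CREATE TABLE Reports (
    id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    problems SET('broken', 'stolen', 'out of gas', 'wrong color') NOT NULL
);

-- Rows with no problems at all
SELECT * FROM Reports WHERE problems = '';

-- Rows flagged with one specific problem
SELECT * FROM Reports WHERE FIND_IN_SET('stolen', problems);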
Cases 2 and 3 will take about 8 bytes for the full set of problems -- very compact. Most SELECTs that you might perform would scan the entire table, but 20K rows won't take terribly long, and it will be a lot faster than having 60 columns or a row per problem.

How to convert a string to a unique number?

I have a table like this:
// viewed
+----+------------------+
| id | username_or_ip |
+----+------------------+
As you can see, the username_or_ip column keeps a username or an IP, and its type is INT(11) UNSIGNED. I store IPs like this:
INSERT table(ip) VALUES (INET_ATON('192.168.0.1'));
// It will be saved like this: ip = 3232235521
Well, I want to know: is there any approach for converting a string like Sajad to a unique number? (Because, as I said, username_or_ip only accepts digit values.)
INT(11) is a 32-bit data type. As such, it's just enough to hold an IPv4 address; your question points that out.
To reversibly convert an arbitrary string to a 32-bit data type is difficult: it simply lacks the information storage capacity.
You could use a lookup table for the purpose. Many languages, including PHP 5.4+, support this using a process called "interning": https://en.wikipedia.org/wiki/String_interning
Or you could build yourself a lookup table in a MySQL table. Its columns would be an id column and a value column. You'd intern each new text string by creating a row for it with a unique id value, then use that value.
Your intuition about the slowness of looking up VARCHAR(255) or similar values in MySQL is reasonable. But, with respect, it is not correct: properly indexed, tables with that kind of data in them are very fast to search.
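A minimal sketch of such a lookup table (the table name viewer_names is an assumption):

CREATE TABLE viewer_names (
    id INT UNSIGNED NOT NULL AUTO_INCREMENT,
    name VARCHAR(255) NOT NULL,
    PRIMARY KEY (id),
    UNIQUE KEY (name)
) ENGINE=InnoDB;

-- Intern a username: insert it if it is new, then fetch its id
INSERT IGNORE INTO viewer_names (name) VALUES ('Sajad');
SELECT id FROM viewer_names WHERE name = 'Sajad';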

Which one is better: composite PK or natural keys in a blacklist design?

I am going to design a blacklist and favorites feature in PHP, in which users can blacklist and favorite other people. I just want to know whether it is better to design it like this:
+---------+------+--------+
| uid(pk) | name | family |
+---------+------+--------+
and another table like this, with a composite primary key:
+---------+---------------------------+
| uid(pk) | blacklisted_person_id(pk) |
+---------+---------------------------+
or to design it with a primary key and a foreign key. I also don't know how, in this case, to ensure that a user can't blacklist himself or blacklist the same person twice. If someone can describe it a little, I'll be grateful.
Thanks in advance.
Your solution is good, but I'd recommend calling the PK in your user table just id, not uid, and then referring to it in your blacklist mapping table as user_id (if user is the name of the user table) and blacklisted_user_id. That way there is no ambiguity about which table these columns are foreign keys to.
The reason it's a good solution is that it will prevent duplicates from being entered without having to create an additional unique composite key. A sketch follows.
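A minimal sketch under those naming conventions; the CHECK guards against self-blacklisting, though note that MySQL only enforces CHECK constraints from version 8.0.16 onwards (older versions would need a trigger or an application-side check):

CREATE TABLE blacklist (
    user_id INT UNSIGNED NOT NULL,
    blacklisted_user_id INT UNSIGNED NOT NULL,
    -- composite PK: the same pair cannot be inserted twice
    PRIMARY KEY (user_id, blacklisted_user_id),
    FOREIGN KEY (user_id) REFERENCES user (id),
    FOREIGN KEY (blacklisted_user_id) REFERENCES user (id),
    -- enforced only in MySQL 8.0.16+: a user cannot blacklist himself
    CHECK (user_id <> blacklisted_user_id)
) ENGINE=InnoDB;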
I think your use of the terminology in the title is confusing; it doesn't correspond to what you describe later in the question.
Always use artificial keys and try to avoid natural keys. It's good practice, as you can almost never know which piece of data will have to change later, which may affect your natural key.
| uid(pk) | name | family |
The above design is a good start.
Place as little restrictions on the data model as possible without sacrificing the data consistency.
In this case the composite key of (UID, blacklisted_person_ID) can be a little too restrictive. The reason is that later on you may decide to keep a log of who blacklisted whom at different times, and then your composite-key design breaks.
Just use a simple one-to-many relationship for now, describing the preferences of one user across multiple records.
For example, the data model for user actions that can include blacklisting, whitelisting, and more can be like so:
UID, AFFECTED_USER_ID, ACTION_ID, ACTION_DT
where ACTION_ID is an FK to an ACTION lookup table describing blacklisting, whitelisting, greylisting, etc.
You can also have a table describing the current state of affairs, but that current state can also be calculated or retrieved from the above data.
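A sketch of that action-log model (all names and types are assumptions):

CREATE TABLE actions (
    action_id TINYINT UNSIGNED NOT NULL AUTO_INCREMENT,
    action_name VARCHAR(50) NOT NULL,   -- 'blacklist', 'whitelist', ...
    PRIMARY KEY (action_id)
) ENGINE=InnoDB;

CREATE TABLE user_actions (
    uid INT UNSIGNED NOT NULL,
    affected_user_id INT UNSIGNED NOT NULL,
    action_id TINYINT UNSIGNED NOT NULL,
    action_dt DATETIME NOT NULL,
    -- the timestamp in the key allows repeated actions over time
    PRIMARY KEY (uid, affected_user_id, action_id, action_dt),
    FOREIGN KEY (action_id) REFERENCES actions (action_id)
) ENGINE=InnoDB;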
