I have a problem while designing my database and I don't know how to solve it:
I have the following table (relevant columns):
CREATE TABLE IF NOT EXISTS `prmgmt_tasks` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`name` varchar(300) NOT NULL,
`project_id` int(11) NOT NULL,
PRIMARY KEY (`id`)
);
What I want: every task has a unique id (auto-increment). The name of the task is not unique globally, but it should be unique within each project. For example, "Design userinterface" can occur in the project with id 1 and in the project with id 2, but not twice in the project with id 1. Something like 'unique per project_id'.
Of course, I could check that in every query, but I am looking for a way to model this in the database, so it will always be consistent, no matter what queries are executed.
Thanks for the help!
Create a unique composite index on the combined fields.
CREATE UNIQUE INDEX tasks_name_project
ON prmgmt_tasks (name, project_id);
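For illustration, with this index in place a duplicate (name, project_id) pair is rejected by the database itself, no matter which query tries to create it:
INSERT INTO prmgmt_tasks (name, project_id) VALUES ('Design userinterface', 1); -- ok
INSERT INTO prmgmt_tasks (name, project_id) VALUES ('Design userinterface', 2); -- ok, different project
INSERT INTO prmgmt_tasks (name, project_id) VALUES ('Design userinterface', 1); -- fails with a duplicate key error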
Well, I have this attendance system that marks attendance every day. What I am looking for is a restriction other than PHP code: a database-level restriction that stops users from entering duplicate records over time.
For example, I have already marked my attendance myself:
DATE 22-05-2018 and trackingid = 1
If I try to mark my attendance one more time, the insert should be rejected.
It can be done via PHP, but that is a lot of code. Is there any way to do it with MySQL, so that when a combination of two column values already exists, the user simply cannot insert it again?
Use a unique index on your columns:
ALTER TABLE `tablename` ADD UNIQUE `unique_index`(`columnOneName`, `columnTwoName`);
You can also use SQL like the following when creating your table:
CREATE TABLE `tableName` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`columnOne` int(11) DEFAULT NULL,
`columnTwo` int(11) DEFAULT NULL,
`columnThree` varchar(128),
PRIMARY KEY (`id`),
UNIQUE KEY `columnOne_columnTwo_unique_index` (`columnOne`,`columnTwo`)
);
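For illustration, once such a unique index exists on the two columns, a second insert of the same combination fails, and INSERT IGNORE can be used if you want the duplicate to be skipped silently instead of raising an error. The table and column names below are only hypothetical:
ALTER TABLE `attendance` ADD UNIQUE `attendance_unique` (`trackingid`, `attendance_date`);
INSERT INTO `attendance` (`trackingid`, `attendance_date`) VALUES (1, '2018-05-22'); -- succeeds
INSERT INTO `attendance` (`trackingid`, `attendance_date`) VALUES (1, '2018-05-22'); -- fails: duplicate entry
INSERT IGNORE INTO `attendance` (`trackingid`, `attendance_date`) VALUES (1, '2018-05-22'); -- silently skipped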
I want to create a table like below:
id| timestamp | neighbour1_id | neighbour1_email | neighbour2_id | neighbour2_email
and so on, up to a maximum of 20 neighbours.
I have two questions:
Should I create the columns statically, or is there a way to create columns dynamically using PHP based on the count of the JSON array?
In either case, how would I refer to the columns dynamically and assign values to them based on the JSON array?
My JSON array would look something like:
{id:123, email_id:abc, neighbours: [{neighbour1_id:234, neighbour1_email: bcd}, {neighbour2_id:345, neighbour2_email:dsf}, {}, {}...]}
Please advise. Thanks.
It looks like you need to rethink your database structure a bit. To me it seems you need a single users (or whatever they are) table:
CREATE TABLE `users` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`email` varchar(255) NOT NULL,
`created_at` timestamp NOT NULL,
PRIMARY KEY (`id`)
);
And another table that defines relations between those users:
CREATE TABLE `neighbors` (
`parent` int(11) unsigned NOT NULL,
`child` int(11) unsigned NOT NULL,
PRIMARY KEY (`parent`,`child`)
);
Now you can add as many neighbors to each user as you want. Fetching them is as easy as:
SELECT * FROM `users`
LEFT JOIN `neighbors` ON `users`.`id` = `neighbors`.`child`
WHERE `neighbors`.`parent` = ?
Where that question mark would become the id of the user from which you are fetching the neighbors, preferably by using a prepared statement.
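Adding a neighbor relation is then just a single insert into the relation table; for example (the ids here are only illustrative):
INSERT INTO `neighbors` (`parent`, `child`) VALUES (123, 234);
INSERT INTO `neighbors` (`parent`, `child`) VALUES (123, 345);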
If it is all JSON you will be working with, and querying isn't much of an issue, you could consider working with a NoSQL database or document store (like Redis or MongoDB), but that is an entirely different story.
Just repeating a bunch of columns x times is definitely not the way to go. Vertical size (# of rows) of tables in relational databases is no big issue; they are designed for that. Horizontal size (# of columns), however, is something to be careful with, as it may make your db unnecessarily large and decrease performance.
Just consider what you would have to do if you wanted to find a user that has a neighbor with an email address [x]. You would have to repeat your WHERE condition 20 times, once for each possible email column. And that is just one example...
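To make that concrete, here is a rough sketch of the same lookup in both designs (the wide table and its column names are hypothetical):
-- Wide table: one condition per neighbour column, repeated for all 20.
SELECT * FROM `users_wide`
WHERE `neighbour1_email` = 'x@example.com'
   OR `neighbour2_email` = 'x@example.com'
   -- ... and so on for every column ...
   OR `neighbour20_email` = 'x@example.com';
-- Normalized tables: a single condition, no matter how many neighbors a user has.
SELECT `u`.*
FROM `users` `u`
JOIN `neighbors` `n` ON `n`.`parent` = `u`.`id`
JOIN `users` `nu` ON `nu`.`id` = `n`.`child`
WHERE `nu`.`email` = 'x@example.com';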
Well, the answer I was working on while pevara posted theirs faster is almost the same...
CREATE TABLE `neighbours` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`neighbour_email` char(64) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=0 DEFAULT CHARSET=utf8;
CREATE TABLE `neighbour_email_collections` (
`id` int(10) unsigned NOT NULL,
`email_id` char(64) NOT NULL,
`neighbour_id` int(10) unsigned NOT NULL,
PRIMARY KEY (`id`,`neighbour_id`)
) ENGINE=InnoDB AUTO_INCREMENT=0 DEFAULT CHARSET=utf8;
insert into neighbours values (234, "bcd");
insert into neighbours values (345, "dsf");
insert into neighbour_email_collections values(123, "abc", 234);
insert into neighbour_email_collections values(123, "abc", 345);
select *
from neighbours
left join neighbour_email_collections
on neighbour_email_collections.neighbour_id=neighbours.id
where neighbour_email_collections.id=123;
I am making a system where users can upload any file they want, without it being usable to execute any kind of code. As a part of that, I rename every file and store its original name in a MySQL table. This table contains the id of the user who uploaded it, and a unique id of the upload. Currently I am doing it like this:
CREATE TABLE `uploads` (
`user_id` INT(11) NOT NULL,
`upload_id` INT(11) NOT NULL AUTO_INCREMENT,
`original_name` VARCHAR(30) NOT NULL,
`mime_type` VARCHAR(30) NOT NULL,
`name` VARCHAR(50) NOT NULL,
PRIMARY KEY (`user_id`, `upload_id`)
) ENGINE=MyISAM;
This means I will always have a unique combination of user_id and upload_id, and every user's first upload has an id of 1. However, I want to use a foreign key for the user_id, so if I delete a user, their uploads would also be deleted. This means I have to do it in InnoDB. How would I go about that, since the above setup only works in MyISAM?
My users table (which I would get user_id from) looks like this:
CREATE TABLE `".DATABASE."`.`users` (
`user_id` INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
`username` VARCHAR(30) NOT NULL,
`email` VARCHAR(50) NOT NULL,
`password` CHAR(128) NOT NULL,
`salt` CHAR(128) NOT NULL
) ENGINE = InnoDB;
What I want is for the uploads table to look like this:
user_id | upload_id
1 | 1
1 | 2
2 | 1
2 | 2
2 | 3
1 | 3
If that makes sense
If I understood correctly:
Replace the primary key with a unique index on the two fields. Then make upload_id the primary key and user_id a foreign key.
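A minimal sketch of that, reusing the columns from the question (the key and constraint names are just examples):
CREATE TABLE `uploads` (
`upload_id` INT(11) NOT NULL AUTO_INCREMENT,
`user_id` INT(11) NOT NULL,
`original_name` VARCHAR(30) NOT NULL,
`mime_type` VARCHAR(30) NOT NULL,
`name` VARCHAR(50) NOT NULL,
PRIMARY KEY (`upload_id`),
UNIQUE KEY `user_upload` (`user_id`, `upload_id`),
CONSTRAINT `fk_uploads_user` FOREIGN KEY (`user_id`) REFERENCES `users` (`user_id`) ON DELETE CASCADE
) ENGINE=InnoDB;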
Unfortunately for this problem, the auto increment column will not start over at 1 for each user. It will just increment for each added row, regardless of any particular column value.
Edit: displaying my ignorance there, MyISAM tables apparently will start over at 1 for each user when using a multi-column index this way. https://dev.mysql.com/doc/refman/5.0/en/example-auto-increment.html
Fortunately, it is probably not necessary for upload_id to start at 1 for each user. Using auto increment on that column means you will always have a unique combination of user_id and upload_id even without using both columns as the primary key, or even creating a unique index, because every record will have a different upload_id. You should still be able to implement the cascade delete with this setup.
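A rough sketch of getting there from the existing table, assuming the current upload_id values are already unique on their own (the constraint name is just an example):
-- Make upload_id the primary key on its own, then convert the engine and add the cascade.
ALTER TABLE `uploads` DROP PRIMARY KEY, ADD PRIMARY KEY (`upload_id`);
ALTER TABLE `uploads` ENGINE = InnoDB;
ALTER TABLE `uploads` ADD CONSTRAINT `fk_uploads_user` FOREIGN KEY (`user_id`) REFERENCES `users` (`user_id`) ON DELETE CASCADE;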
I want to know if it's possible to INSERT records from a SELECT statement from a source table into a destination table, get the INSERT IDs, and UPDATE a field on all the corresponding records of the source table.
Take for example, the destination table 'payments':
CREATE TABLE `payments` (
`id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
`txid` TEXT NULL,
`amount` DECIMAL(16,8) NOT NULL DEFAULT '0.00000000',
`worker` INT(10) UNSIGNED NOT NULL,
PRIMARY KEY (`id`)
)
The source table 'log':
CREATE TABLE `log` (
`id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
`solution` VARCHAR(80) NOT NULL,
`worker` INT(11) NOT NULL,
`amount` DECIMAL(16,8) NOT NULL DEFAULT '0.00000000',
`pstatus` VARCHAR(50) NOT NULL DEFAULT 'pending',
`payment_id` INT(10) UNSIGNED NULL DEFAULT NULL,
PRIMARY KEY (`id`)
)
The "log" table contains multiple "micro-payments" for a completed task. The purpose of the "payments" table is to consolidate the micro-payments into one larger payment:
INSERT INTO payments (amount, worker)
SELECT SUM(l.amount) AS total, l.worker FROM log l
WHERE l.pstatus = "ready"
AND l.payment_id IS NULL
AND l.amount > 0
GROUP BY l.worker
I'm not sure if it's clear from the code above, but I would like the field "payment_id" to be given the value of the insert id so that it's possible to trace back the micro-payment to the larger consolidated payment.
I could do it all client side (PHP), but I was wondering if there was some magical SQL query that would do it for me? Or maybe I am going about it all wrong.
You can use mysql_insert_id() to get the id of the inserted record.
See mysql_insert_id()
But the above function is deprecated.
If you're using PDO, use PDO::lastInsertId.
If you're using Mysqli, use mysqli::$insert_id.
Well, the linking column between the tables is the column worker. After you have inserted your values, just do
UPDATE log l
INNER JOIN payments p ON l.worker = p.worker
SET l.payment_id = p.id;
and that's it. Or did I get the question wrong? Note that the worker columns differ in the signed/unsigned attribute. You might want to change that.
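If you also want to make sure only the rows that went into the consolidated payment get tagged, you could run both statements in one transaction and repeat the same filters in the UPDATE. A rough sketch, assuming each worker has at most one matching row in payments at that moment:
START TRANSACTION;
INSERT INTO payments (amount, worker)
SELECT SUM(l.amount), l.worker FROM log l
WHERE l.pstatus = 'ready' AND l.payment_id IS NULL AND l.amount > 0
GROUP BY l.worker;
-- Tag only the micro-payments that matched the filters above.
UPDATE log l
INNER JOIN payments p ON l.worker = p.worker
SET l.payment_id = p.id
WHERE l.pstatus = 'ready' AND l.payment_id IS NULL AND l.amount > 0;
COMMIT;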
I think you should use an ORM in PHP:
Look into Doctrine.
Doctrine 1.2 implements Active Record. Doctrine 2+ is a DataMapper ORM.
Also, check out Xyster. It's based on the Data Mapper pattern.
Also, take a look at DataMapper vs. Active Record.
I have created a database with about 8 tables, and I also created primary keys and foreign keys in the appropriate tables. But when I insert data in the primary table, my other tables don't show the updated data.
I mean, say I have a table which has data for names like:
N is (name)
N1 = George, N2 = Ross, N3 = Rim, etc. Now that means I have primary keys N1, N2, N3, etc.
Now, when I insert these primary keys into the other tables, it should show me the names like George, Ross and Rim instead of the primary key value itself (N1, N2, N3).
How can I get the names instead of the PK itself?
You are misunderstanding the concept of keys in relational databases. Keys are there not to copy data between tables, but to express the relations between data in different tables. They help to understand how the data in different tables is related - that is where the name "relational database" comes from. They also speed up querying of that data if indexed.
You can read more about the usage of keys here: Keys and normalization
I am still unclear on what exactly you want to do with the database, but let me demonstrate the basics of how you should be using it. Consider a table users where you will be storing the data related to users.
CREATE TABLE `users` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`name` varchar(50) NOT NULL,
`email` varchar(50) DEFAULT NULL,
`phone` varchar(30) DEFAULT NULL,
PRIMARY KEY (`id`)
);
The column id holds the primary key and has an attribute called auto_increment. What this means is that every time you insert a record into this table, the id value gets incremented automatically, and you don't have to worry about inserting any value into this column because your database will take care of that. For example, take a look at the insert queries below.
INSERT INTO users(name,email,phone) VALUES('First Name', 'first@domain.com', '9999999999');
INSERT INTO users(name,email,phone) VALUES('Second Name', 'second@domain.com', '8888888888');
INSERT INTO users(name,email,phone) VALUES('Third Name', 'third@domain.com', '2222222222');
INSERT INTO users(name,email,phone) VALUES('Fourth Name', 'fourth@domain.com', '3333333333');
Did you see that you did not insert any id here? This is because the database will handle that logic. Now the first record will hold the value 1, the second will have 2, the third one 3, the fourth one 4, and so on.
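To connect this back to your question: the other tables store only the user's id, and a JOIN pulls the name out of users when you select, so you never copy the name around. A minimal sketch with a hypothetical orders table:
CREATE TABLE `orders` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`user_id` int(11) NOT NULL,
`item` varchar(50) NOT NULL,
PRIMARY KEY (`id`),
FOREIGN KEY (`user_id`) REFERENCES `users` (`id`)
);
INSERT INTO orders(user_id, item) VALUES(1, 'Keyboard');
-- The JOIN returns 'First Name' instead of the stored id 1.
SELECT o.id, u.name, o.item
FROM orders o
JOIN users u ON u.id = o.user_id;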
Hope this helps you.