To take the backup I follow the procedure below.
First I get the list of tables in my DB using SHOW TABLES LIKE.
Then I take each table's structure using SHOW CREATE TABLE.
Then I save every table's structure and its data into a file.
The backup itself works fine.
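The steps above can be sketched as a small script. This is a minimal sketch in Python using SQLite as a stand-in (SQLite exposes each table's CREATE statement through sqlite_master, where MySQL would use SHOW TABLES and SHOW CREATE TABLE; all table and file names here are made up):

```python
import sqlite3

def dump_database(conn, out_path):
    """Write CREATE TABLE statements and row data for every table to a file."""
    cur = conn.cursor()
    with open(out_path, "w") as f:
        # List all user tables (MySQL equivalent: SHOW TABLES LIKE '%').
        cur.execute("SELECT name, sql FROM sqlite_master WHERE type = 'table'")
        for name, create_sql in cur.fetchall():
            # Table structure (MySQL equivalent: SHOW CREATE TABLE <name>).
            f.write(create_sql + ";\n")
            # Table data, saved as INSERT statements.
            for row in conn.execute(f"SELECT * FROM {name}"):
                values = ", ".join(repr(v) for v in row)
                f.write(f"INSERT INTO {name} VALUES ({values});\n")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO t VALUES (1, 'a')")
dump_database(conn, "backup.sql")
```

Note that a dump written this way, table by table, has exactly the restore-order problem described next.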
While restoring I am facing a problem: if some of the tables contain a foreign key constraint referencing another table, I am not able to create the table. I found the cause: I am taking the backup table by table.
For example, there are 4 tables:
A, B, C, D
A contains a constraint referencing C.
C contains a constraint referencing D.
I back up the tables above and store them in the file in this order: A, B, C, D. While restoring, this causes an error.
My question is: how should I handle the backup if the database's tables contain constraints? I searched a lot but was not able to find a good solution, so please share how to do this, or tell me if I have done anything wrong.
Thank you.
As an update to this question: I temporarily disabled the foreign key checks and now it works. This solution is posted in this question:
http://stackoverflow.com/questions/15501673/how-to-temporarily-disable-a-foreign-key-constraint-in-mysql
SET FOREIGN_KEY_CHECKS=0; -- before restoring
SET FOREIGN_KEY_CHECKS=1; -- after restoring
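An alternative to disabling key checks is to write the tables to the dump in dependency order, so that a referenced table always appears before the tables that reference it. A minimal topological-sort sketch in Python (the table names and the foreign-key map mirror the A/B/C/D example above; how you collect the foreign-key map from your server is up to you):

```python
def dump_order(tables, fk_refs):
    """Return tables ordered so every referenced table precedes its referrers.

    fk_refs maps a table name to the set of tables it references via foreign keys.
    """
    ordered, done = [], set()

    def visit(t, seen=()):
        if t in done:
            return
        if t in seen:
            raise ValueError(f"circular foreign keys involving {t}")
        for ref in fk_refs.get(t, ()):
            visit(ref, seen + (t,))  # dump referenced tables first
        done.add(t)
        ordered.append(t)

    for t in tables:
        visit(t)
    return ordered

# A references C, and C references D (the example from the question):
order = dump_order(["A", "B", "C", "D"], {"A": {"C"}, "C": {"D"}})
print(order)  # D before C, C before A
```

Note that this only works when the foreign keys form no cycle; with circular constraints, disabling the checks during restore remains the practical option.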
I don't know exactly how to describe it, but my MySQL tables are losing their primary key + auto-increment, and they have rows with 0 as the id (PK+AI). Looking at these tables, what they have in common is that all of them contain duplicate copies of the same entries.
As you can see in image 1.png, there are no primary keys (there was a primary key when I first made the table, and it was working the whole time). And as you can see in the second image, 2.png, entries are duplicated (I don't know how) and there are records with 0 in the id.
Update 2:
When this table was first created, the ID column was set as PK+AI and it was working great. How did it lose the PK+AI? I have no idea; that is why I am asking.
Also, my project contains no SQL code to add/remove PKs, duplicate tables, etc. The only statements I use are INSERT/UPDATE/DELETE/SELECT.
I am trying to import a previously working database into my phpMyAdmin, yet I get a duplicate entry error for the primary key even though there should be none. Since the primary key is auto-increment, there is no chance of a duplicate entry for it. I am using MySQL version 5.6.11.
There may be two reasons for that:
1) Suppose there are two tables, and the first table's primary key is used as a foreign key in the other table. When importing the data, the INSERT statements must be ordered so that the table whose primary key is referenced by the other table gets its data inserted first.
2) Truncate the table and try the import again; if the same error occurs, follow the first step.
It may not be the best solution, but if your database and data are not too large, it may work for you: separate your SQL file into two pieces, one that creates the database, tables, and relations, and another that inserts the data, ordered according to the foreign keys. I once solved it this way; it may help you.
You should not simply import the .sql script as-is: import the CREATE TABLE script first and the INSERT script later. As mentioned above, truncate the tables first, then import the script in the correct order. If you still end up with a duplicate entry conflict, try truncating your tables again to fix the problem.
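The insert-order point made in the answers above can be demonstrated with a small script. A sketch using SQLite (with foreign-key enforcement switched on, as MySQL's InnoDB has by default; the table names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when asked
conn.execute("CREATE TABLE parent (id INTEGER PRIMARY KEY)")
conn.execute(
    "CREATE TABLE child (id INTEGER PRIMARY KEY, "
    "parent_id INTEGER REFERENCES parent(id))"
)

# Wrong order: the referenced parent row does not exist yet.
try:
    conn.execute("INSERT INTO child VALUES (1, 10)")
except sqlite3.IntegrityError as e:
    print("failed as expected:", e)

# Right order: parent first, then child.
conn.execute("INSERT INTO parent VALUES (10)")
conn.execute("INSERT INTO child VALUES (1, 10)")
print("import succeeded")
```

The same ordering rule applies to a .sql dump: the INSERTs for the referenced table must come before the INSERTs that point at it.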
Quick question.
In my user database I have 5 separate tables, all containing different information. Four tables are connected by foreign key to the primary key of the first table.
I want to trigger row inserts on the other 4 tables when I do an insert on the first (primary) one. I thought ON UPDATE CASCADE would do this for me, but after trying it I realised it did not... I know, the clue is in the name: ON UPDATE!
I also tried, and failed at, multiple triggers on the same table, but found this was not possible either.
What I am planning to do is put a trigger on the first table to INSERT into the second, then a trigger on the second to insert into the third... etc.
I would just like to know whether this is a wise thing to do, or whether I am missing a better and simpler way of doing it.
Any help/advice much appreciated.
Based on the given information, it "feels" as if there might be a flaw in the database design if each of the child tables requires a row for every single row in the parent table. There is a reason that "ON INSERT CASCADE" does not exist; it is typically not considered meaningful.
The first thought that comes to mind is that the child tables should actually be part of the parent table; it sounds as if there is a one-to-one relationship. It still may make sense to have separate tables from an organizational standpoint (and size of records), but it is something to think about.
If there is not a one-to-one relationship, then the ability to add meaningful data beyond default values to the child tables would imply there might be a bit more normalization of data required. If the only values to be added are NULLs, then one could maybe argue that there is no real point in having the record because a LEFT JOIN could produce the same results without that record.
Having said all that, if it is required, I would think that it would be better to have a single trigger on the parent table add all the records to the child tables rather than chain them in several triggers. That way the logic would be contained in a single location.
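A single-trigger version of that suggestion might look like the sketch below, shown in SQLite (whose trigger syntax is close to MySQL's) via Python so it is runnable; the table and column names are hypothetical, and the child rows get only the parent id plus column defaults:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users    (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE profiles (user_id INTEGER REFERENCES users(id), bio TEXT DEFAULT '');
    CREATE TABLE settings (user_id INTEGER REFERENCES users(id), theme TEXT DEFAULT 'light');

    -- One trigger on the parent adds the default rows to every child table,
    -- so all the logic lives in a single place.
    CREATE TRIGGER users_after_insert AFTER INSERT ON users
    BEGIN
        INSERT INTO profiles (user_id) VALUES (NEW.id);
        INSERT INTO settings (user_id) VALUES (NEW.id);
    END;
""")

conn.execute("INSERT INTO users (name) VALUES ('alice')")
print(conn.execute("SELECT user_id FROM profiles").fetchall())
print(conn.execute("SELECT user_id, theme FROM settings").fetchall())
```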
Without knowing your structure (the information you need in each of these tables is pertinent to answering correctly), I can only guess that a trigger might not be what you want here. If your tables have other fields beyond what is in table 1 and those fields have no default values, how will you get the values for them in the trigger? Personally I would use a stored proc that inserts into table1, gets the id value back from the insert, and then inserts into the other tables with the additional information needed, all wrapped in a transaction so that if one insert fails, all are rolled back.
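The stored-proc idea above can be sketched as a transactional insert; here in Python with SQLite so it is runnable (in MySQL the same statements would sit in a stored procedure, with LAST_INSERT_ID() in place of lastrowid; all names are made up):

```python
import sqlite3

def create_user(conn, name, bio, theme):
    """Insert the parent row, grab its generated id, then fill the child tables.

    Runs inside one transaction: if any insert fails, none are kept.
    """
    with conn:  # commits on success, rolls back on exception
        cur = conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
        user_id = cur.lastrowid  # MySQL: LAST_INSERT_ID()
        conn.execute("INSERT INTO profiles (user_id, bio) VALUES (?, ?)",
                     (user_id, bio))
        conn.execute("INSERT INTO settings (user_id, theme) VALUES (?, ?)",
                     (user_id, theme))
    return user_id

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users    (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE profiles (user_id INTEGER, bio TEXT);
    CREATE TABLE settings (user_id INTEGER, theme TEXT);
""")
uid = create_user(conn, "bob", "hi there", "dark")
print(uid)
```

Unlike a trigger, this lets the caller supply real values for the child-table columns instead of defaults.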
I have created a PHP script and I am stuck on extracting the primary key. I have given the flow below; please help me figure out how to modify it to get the primary key.
I am using a MySQL DB and working with Joomla. My requirement is to track activity (insert/update/delete) on any table and store it in a separate audit table using triggers, i.e. I am doing auditing. DB table structure: a few tables have neither a PK nor an auto-increment key.
The flow of my script is:
I fetch all tables from the DB.
I check whether the table has any trigger or not.
If yes, it moves on to check the next table, and so on.
If it doesn't find any trigger, it creates the triggers for the table, such that:
it first checks whether the table has a primary key or not (to record it in the tracking audit table for every change made);
if it has a primary key, it uses it in the creation of the trigger;
if it doesn't find any PK, it proceeds to create the trigger without inserting any id into the audit table.
Now, my problem is that I need the PK every time so that I can record the id of the row in which the insert/update/delete was performed, so that I can later use this audit table to replicate into the production DB.
As I have mentioned, some tables have no PK/auto-increment column, so what should I do to get the particular id in which the change was done?
Please guide me!
If I understand your question right, you need a unique identifier for rows in tables that have no primary key and no other kind of unique identifier. That's not easy to do, as far as I can see. Other databases have unique row IDs, but MySQL does not. You could use the value of every column to try to identify the row, but that is far from duplicate-safe: there could be two or more rows containing the exact same values. So I'd say that without a unique identifier, this is something that simply cannot be done.
Some ideas in this SO question:
MySQL: is there something like an internal record identifier for every record in a MySQL table?
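For the tables that do have a primary key, the script can look it up from the catalog instead of guessing. A sketch using SQLite's PRAGMA table_info via Python so it is runnable (in MySQL the same information comes from SHOW KEYS FROM <table> WHERE Key_name = 'PRIMARY', or from information_schema; the tables here are hypothetical):

```python
import sqlite3

def primary_key_columns(conn, table):
    """Return the primary-key column names of a table, or [] if it has none."""
    rows = conn.execute(f"PRAGMA table_info({table})").fetchall()
    # The last field of each table_info row is the 1-based position of the
    # column in the primary key (0 means the column is not part of the PK).
    return [name for _, name, _, _, _, pk in rows if pk]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE with_pk (id INTEGER PRIMARY KEY, v TEXT)")
conn.execute("CREATE TABLE without_pk (v TEXT)")
print(primary_key_columns(conn, "with_pk"))     # a PK to record in the audit row
print(primary_key_columns(conn, "without_pk"))  # empty: no usable identifier
```

An empty result is exactly the case the answer above describes: without a PK or other unique identifier, there is no reliable id to write into the audit table.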
Referring to this question, I've decided to duplicate the tables every year, creating tables that hold the data of one year, something like, for example:
orders_2008
orders_2009
orders_2010
etc...
Well, I know the speed problem could probably be solved with just 2 tables for each element, like orders_history and orders_actual, but I thought that once the handler code has been written, there will be no difference... just many tables.
Those tables will even have children with foreign keys; for example, orders_2008 will have the child items_2008:
CREATE TABLE orders_2008 (
    id serial NOT NULL,
    code character(5),
    customer text
);
ALTER TABLE ONLY orders_2008
    ADD CONSTRAINT orders_2008_pkey PRIMARY KEY (id);
CREATE TABLE items_2008 (
    id serial NOT NULL,
    order_id integer,
    item_name text,
    price money
);
ALTER TABLE ONLY items_2008
    ADD CONSTRAINT items_2008_pkey PRIMARY KEY (id);
ALTER TABLE ONLY items_2008
    ADD CONSTRAINT "$1" FOREIGN KEY (order_id) REFERENCES orders_2008(id) ON DELETE CASCADE;
So, my problem is: what do you think is the best way to replicate those tables every 1st of January while, of course, keeping the table dependencies?
A PHP/Python script (called by a cron job) that rebuilds the structure for the new year, query by query?
Can PostgreSQL functions be used that way?
If yes, how? (A little example would be nice.)
Actually I'm going for the first way (a .sql file containing the structure, and a PHP/Python script loaded by a cron job that rebuilds the structure), but I'm wondering whether this is the best way.
edit: I've seen the pgsql CREATE TABLE ... LIKE statement, but the foreign keys must be added afterwards... or it will keep the new tables referencing the old ones.
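The cron-driven approach can be sketched as a small script that fills a DDL template with the new year, so the foreign keys always point at the matching yearly tables. A runnable sketch using SQLite as a stand-in (the real DDL would stay PostgreSQL, as in the orders_2008/items_2008 example above; note the answers below argue against this design altogether):

```python
import sqlite3

DDL_TEMPLATE = """
    CREATE TABLE orders_{year} (
        id       INTEGER PRIMARY KEY,
        code     CHAR(5),
        customer TEXT
    );
    CREATE TABLE items_{year} (
        id        INTEGER PRIMARY KEY,
        order_id  INTEGER REFERENCES orders_{year}(id) ON DELETE CASCADE,
        item_name TEXT,
        price     NUMERIC
    );
"""

def create_year_tables(conn, year):
    """Create the orders/items pair for one year, foreign keys included."""
    conn.executescript(DDL_TEMPLATE.format(year=year))

conn = sqlite3.connect(":memory:")
create_year_tables(conn, 2011)  # what the January 1st cron job would run
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name")]
print(tables)
```

Because the foreign key is part of the template, each new items_{year} table references its own orders_{year} rather than the previous year's table, avoiding the CREATE TABLE ... LIKE problem mentioned in the edit.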
PostgreSQL has a feature that lets you create a table that inherits fields from another table. The documentation can be found in their manual. That might simplify your process a bit.
You should look at Partitioning in Postgresql. It's the standard way of doing what you want to do. It uses inheritance as John Downey suggested.
Very bad idea.
Have a look at partitioning and keep your eyes on your real goal:
You don't want table sets for every year, because that is not your problem. Lots of systems work perfectly without them :)
You want to solve some performance and/or storage-space issues.
I'd recommend orders and orders_history: just periodically roll the old orders into the history table, which is then a read-only dataset, so you can add an index for every single query you require, and it should (if your data structures are half decent) remain performant.
If your history table starts getting "too big", it's probably time to start thinking about data warehousing... which really is marvelous, but it certainly ain't cheap.
As others mentioned in your previous question, this is probably a bad idea. That said, if you are dead set on doing it this way, why not just create all the tables up front (say 2008-2050)?