Scenario:
I created a POS (point of sale) system using a MySQL database. I manage all shops' data in one database. All operations were done on the server before, but now the requirement has changed and I want to make it local too. The challenge I face is "Duplicate entry for key PRIMARY".
For example:
The system is used by two shops. If one shop adds a record with id = 1 to the item table in its local database, and the second shop also adds a record with id = 1 to the same table in its local database, then when I send both sets of data to my server database I get the "Duplicate entry for key PRIMARY" error.
Conclusion:
I am not using MySQL replication because it does not suit my database structure, so what would be the best solution for this issue?
You can solve this problem in many ways:
You should not sync the primary key from the local database to the remote one; instead, use an order ID of the form SHOPID_SOMERANDOM-NUMBER, which will be unique across shops.
Otherwise you can use a composite primary key such as Autoincrement_ID + SHOP_ID, so that synced rows can never collide (sketched below).
The SHOP_ID should be generated by the server at installation time and must never be duplicated.
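A minimal sketch of the composite-key idea, assuming a hypothetical item table (the column names are illustrative only):

-- Hypothetical item table using (item_id, shop_id) as the primary key;
-- shop_id is assigned by the server when the shop is installed.
CREATE TABLE item (
    item_id INT NOT NULL AUTO_INCREMENT,
    shop_id INT NOT NULL,
    name    VARCHAR(100) NOT NULL,
    PRIMARY KEY (item_id, shop_id)
);

-- Each shop inserts with its own shop_id; the auto-increment values may
-- repeat across shops, but the (item_id, shop_id) pair stays unique.
INSERT INTO item (shop_id, name) VALUES (42, 'Sugar 1kg');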
Related
I am fairly new to web development and am currently working on a website, built with an MVC framework, that captures maintenance work conducted. I have managed to build the forms; they correctly display any validation errors, and if there are no errors the data is successfully inserted into the database.
What I would like to achieve is a main table holding the general details of the maintenance (date, time, technician, department, location, recommendations) and a second table recording which tasks were done during the maintenance, such as sweeping, mopping, wiping the windows, cutting grass, etc. A single form collects all the details required by both tables, and both tables have auto-increment primary keys.
I would like to insert the data into both tables at the same time, with each row inserted into the tasks table carrying a foreign key back to the main-table record it belongs to. How can I achieve this without manual input by the user, given that the primary key of the main table is an auto-increment?
This isn't a big problem. It can't be done as a single query, but using transactions you can achieve an all-or-nothing result.
In pseudocode:
Validate data
Start a transaction
Insert data into main record
Get the last inserted ID
Insert one or more records into the child table, using the ID retrieved above
Commit the transaction (or roll back if some error occurred)
The exact mechanics vary between MySQLi and PDO, but the principle is the same.
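As a rough sketch in plain SQL (the same statements can be run from PHP through MySQLi or PDO), with the table and column names being assumptions for illustration:

START TRANSACTION;

-- Parent record: the general details of the maintenance visit.
INSERT INTO maintenance (work_date, work_time, technician, department, location, recommendations)
VALUES ('2024-05-01', '09:30:00', 'J. Doe', 'Facilities', 'Block A', 'None');

-- LAST_INSERT_ID() returns the auto-increment ID generated above
-- (PDO::lastInsertId() and mysqli::$insert_id expose the same value in PHP).
SET @maintenance_id = LAST_INSERT_ID();

-- Child records reference the parent through the foreign key.
INSERT INTO maintenance_task (maintenance_id, task) VALUES
    (@maintenance_id, 'sweeping'),
    (@maintenance_id, 'mopping');

COMMIT;   -- or ROLLBACK if any statement failed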
I have a database with 300 tables that have no primary key, and I want to add a primary key with an auto-increment value to each of them, because without one I am not able to edit or delete any data through phpMyAdmin. I know it is possible, but the tables contain billions of rows.
Is there any problem with adding a new primary key to all the tables that do not currently have one?
Will it affect any existing queries written previously?
What is the best way to do it without affecting any ongoing process on the live server?
Is it possible to edit or delete table rows without adding a primary key, without affecting ongoing processes on the live server? Please help me out.
Is there any problem with adding a new primary key to all the tables that do not currently have one?
Will it affect any existing queries written previously?
You can add the PK. But if your queries are built like
SELECT *
then the result set will gain an extra column, and that may break your other systems.
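For illustration, adding the key itself is a single statement; the table and column names below are assumptions, and on tables with billions of rows the rebuild can take a long time and lock the table:

-- Adds a surrogate key column and fills it automatically while rebuilding the table.
ALTER TABLE my_big_table
    ADD COLUMN id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY FIRST;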
What is the best way to do it without affecting any ongoing process on the live server?
You should do it on a test server first.
Is it possible to edit or delete table rows without adding a primary key on the live server, without affecting ongoing processes?
If there are processes running, they may hold table locks, so the table changes cannot happen. That is why you should schedule a maintenance downtime during which you can make the changes. And as I said before, you have to plan and test everything on the test server first.
Good day. I have developed a system using PHP and MySQL that merges 5 systems into 1. Basically, there is a strong chance that system 1 has a user table with its own primary keys, and the same is true of the other systems.
My problem is that there are identical primary IDs among the data to be migrated into the newly developed system.
In system 1 there are 70,000 rows in the user table, system 2 has 22,000 records, and the other systems have fewer than 3,000 each. Unfortunately, the primary keys are used as foreign keys in other tables.
How can I migrate the data without conflicts in the primary keys, and how can I update the foreign keys?
Please help.
You have to create a mapping between the various user records and the new consolidated user master table. Please note that this may apply not only to the users, but to other types of data as well.
When you create the mapping table, give it a new user ID field, an old user ID field, and a field identifying the source system. Then copy the users over to the mapping table system by system, recording the source system during the copy. This way you can distinguish between users that have the same user ID in different source systems and generate the new user IDs.
When you migrate other data from the source systems that contains user IDs, use the mapping table to replace the old user IDs with the new ones (sketched below).
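A minimal sketch of that mapping, where the schema names (system1, consolidated) and the orders table are placeholders:

-- Mapping from (source system, old user id) to the new consolidated id.
CREATE TABLE user_id_map (
    new_user_id   INT NOT NULL AUTO_INCREMENT,
    old_user_id   INT NOT NULL,
    source_system TINYINT NOT NULL,
    PRIMARY KEY (new_user_id),
    UNIQUE KEY (source_system, old_user_id)
);

-- Copy the users from one source system; repeat per system with its own code.
INSERT INTO user_id_map (old_user_id, source_system)
SELECT user_id, 1 FROM system1.users;

-- Rewrite foreign keys in dependent data as it is migrated.
INSERT INTO consolidated.orders (user_id, order_date, total)
SELECT m.new_user_id, o.order_date, o.total
FROM system1.orders AS o
JOIN user_id_map AS m
  ON m.old_user_id = o.user_id AND m.source_system = 1;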
I am developing inventory software with MySQL and PHP, where a local database syncs to an online database.
Suppose I have a table sell whose primary key is sell_id. I usually use an INT with auto increment for the primary key.
In local database 1 the sell table has 2 entries (sell_id 1, 2), and in local database 2 the sell table also has 2 entries (sell_id 1, 2).
If I sync/insert these local sell rows into the online sell table, they become sell_id 1, 2, 3, 4.
Because the sell_id values change, this breaks the rows in other tables that use sell_id as a foreign key.
How should I plan the primary key in this situation?
I am planning to use an alphanumeric ID that will be unique across both databases. Will it create any problems or slow down my queries once there are millions of sell_ids?
Are there any other ways to solve the problem?
This is too long for a comment.
Often, when you have a replicated system, the goal is to maintain the same data on all servers. That does not seem to be your business requirement.
Instead, you might consider having a composite primary key on all the servers. This would combine the auto-incremented primary key with a server ID. All tables referencing the foreign key would need to include the "server" column as well as the "id" column.
In general, I'm not a fan of composite primary keys. However, you have a distributed database and need to identify the specific database "partition" where the data is located. This seems like a good use-case for composite primary keys.
An alternative approach -- if you are willing to take the risk -- is to set the auto numbering to a different start value on each server. Use a BIGINT and a large value such as 1,000,000,000,000 for one server, 2,000,000,000,000 for the next, and so on. My preference is still to have the "server" explicitly represented as a column, however.
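A sketch of that offset approach applied to the sell table from the question, using the offsets suggested above:

-- Widen the key to BIGINT so the large offsets fit.
ALTER TABLE sell MODIFY sell_id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT;

-- On server 1: ids start at 1,000,000,000,000.
ALTER TABLE sell AUTO_INCREMENT = 1000000000000;

-- On server 2: ids start at 2,000,000,000,000, and so on.
ALTER TABLE sell AUTO_INCREMENT = 2000000000000;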
So the situation is that I am going to have two or more "insert" machines where my web application just inserts the data we want to log (they are all behind a load balancer). Every couple of hours, the machines will be disconnected from the load balancer one by one and will upload their information to the "master" database, so that the "master" machine has a relatively up-to-date version of all the data we are collecting.
Originally I was going to use mysqldump, but found that you cannot tell it to skip the auto_increment ID column I have (which would lead to collisions on the primary key). I saw another post recommending copying the data into a temporary table and then dropping the column, but the "insert" machines have very low specs, and the amount of data can be significant, on the order of 50,000 rows. Other than programmatically taking x rows at a time and inserting them into the remote "master" database, is there an easier way to do this? Currently I have PHP installed on the "insert" machines.
Thank you for your input.
Wouldn't you want the master database record to have the same primary key for each record as the slave database? If not, that could lead to problems where a query will produce different results based on which machine it's on.
If you want an arbitrary primary key that will avoid collisions, consider removing the auto-increment ID and constructing an ID that's guaranteed to be unique for every record on each server. For example, you could concatenate the unix time (with microseconds) with an identifier that's different for each server. A slightly lazier solution would be to concatenate time + a random 10-digit number or something. PHP's uniqid() function does something like this automatically.
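A sketch of such a constructed key, assuming a hypothetical log_entry table with a string id column and a distinct server_id configured on each machine:

-- The id combines the microsecond Unix time with the machine's server_id,
-- e.g. '1714981123.456789-2', so two machines cannot produce the same value.
INSERT INTO log_entry (id, message)
VALUES (CONCAT(UNIX_TIMESTAMP(NOW(6)), '-', @@server_id), 'example event');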
If you don't intend to ever use the ID, then just remove it from your tables. There's no rule saying that every table has to have a primary key. If you don't use it, but you want to encode information about when each record was inserted, add a timestamp column instead (and don't make it a key).