I have a MySQL/PHP application hosted both on an intranet and on the internet. The two MySQL servers are replicated, i.e. synchronized in real time.
Some tables use an auto-increment id as their primary key. When the sync goes down, new transactions on the online server and on the intranet server are assigned the same auto-increment values.
So even when the servers reconnect and the sync restarts, records with the same auto-increment id never get synced; ids with non-overlapping values are synced as soon as the servers reconnect.
To resolve this issue, I am thinking of using manually assigned ids with a different range on the intranet and online servers.
Please suggest what the best solution for this problem would be.
Also, if I have to go with manually assigned ids, what would be the best technique or algorithm to assign ids separately online and on the intranet?
I figured out the solution to this problem.
When configuring replication between the MySQL servers, the auto-increment settings should be adjusted so that the ids on the servers never overlap. For example, if you have 2 replicated servers, one server should generate only even auto-increment ids and the other only odd ids.
Here's a link with detailed information on this:
http://jonathonhill.net/2011-09-30/mysql-replication-that-hurts-less/
Updating the settings on both servers resolved the issue.
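For example, a minimal sketch of that odd/even configuration in each server's my.cnf (exact values depend on your setup):

# intranet server: odd ids (1, 3, 5, ...)
[mysqld]
auto-increment-increment = 2
auto-increment-offset = 1

# online server: even ids (2, 4, 6, ...)
[mysqld]
auto-increment-increment = 2
auto-increment-offset = 2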
There are two things you can do. The first would be to change the starting value on the live server to a very high number (higher than the expected number of rows), e.g.:
ALTER TABLE tbl AUTO_INCREMENT = 10000;
Now the numbers won't overlap. If that is not an option, you can change the interval with
SET @@auto_increment_increment=10;
But that alone still means an overlap at some point, because the server with increment steps of 1 will catch up with the steps of 10 after, you guessed it, 10 rows!
You can bypass this by setting one server to start incrementing at 1 and the other at 2, and then making both use increment steps of 2.
That would give you something like:
intranet 1, 3, 5, 7, 9
live 2, 4, 6, 8, 10
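As a sketch, that odd/even scheme can be applied at runtime with the server variables MySQL provides for exactly this (values assume two servers):

-- on the intranet server (ids 1, 3, 5, ...):
SET GLOBAL auto_increment_increment = 2;
SET GLOBAL auto_increment_offset = 1;
-- on the live server (ids 2, 4, 6, ...):
SET GLOBAL auto_increment_increment = 2;
SET GLOBAL auto_increment_offset = 2;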
You could also use a two-column primary key to prevent duplication: an auto-increment field in combination with a varchar field ('live' and 'intr'), and together they form your unique key.
CREATE TABLE `casetest`.`manualid` (
  `id` INT(10) NOT NULL AUTO_INCREMENT,
  `server` VARCHAR(4) NOT NULL DEFAULT 'live',
  `name` INT NOT NULL,
  PRIMARY KEY (`id`, `server`)
) ENGINE = MyISAM;
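With this, rows merged from the two servers can share the same numeric id without colliding, because the composite primary key still differs. For example:

INSERT INTO `casetest`.`manualid` (`id`, `server`, `name`) VALUES (1, 'live', 100);
INSERT INTO `casetest`.`manualid` (`id`, `server`, `name`) VALUES (1, 'intr', 100);
-- both rows coexist: (1, 'live') and (1, 'intr') are distinct keys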
Related
On Azure I created a new MySQL database instance. In this DB I created a table using this script:
CREATE TABLE ROLES(
ID INTEGER PRIMARY KEY AUTO_INCREMENT,
ROLE_NAME VARCHAR(30) NOT NULL
);
Then I inserted values using this script:
INSERT INTO `beezzy`.`roles` (`ROLE_NAME`) VALUES ('admin');
INSERT INTO `beezzy`.`roles` (`ROLE_NAME`) VALUES ('owner');
INSERT INTO `beezzy`.`roles` (`ROLE_NAME`) VALUES ('consultant');
After execution the table contains rows whose ids jump in steps of 10.
Why does the DB generate ids like '11' and '21'?
I ran the same script on my local machine and everything worked fine: the ids were '1', '2', '3'.
Please run the following query:
SELECT @@auto_increment_increment;
If the value is more than 1, then set it to 1 with the following query:
SET @@auto_increment_increment=1;
Note: This change is visible for the current connection only.
EDIT:
In order to set it globally, so that other connections can also see the change, you need to set it at both the global and the session level:
SET @@GLOBAL.auto_increment_increment = 1;
SET @@SESSION.auto_increment_increment = 1;
Other connections will now see the change.
More:
This value will be reset if you restart your MySQL server. To make the change permanent, you need to put this variable under the [mysqld] section in your my.cnf (Linux) or my.ini (Windows) file:
[mysqld]
auto-increment-increment = 1
Your auto-increment step is probably 10, and this is by design. Azure uses ClearDB, which uses an auto-increment step of 10 for a reason: replication.
When I use auto_increment keys (or sequences) in my database, they increment by 10 with varying offsets. Why?
ClearDB uses circular replication to provide master-master MySQL support. As such, certain things such as auto_increment keys (or sequences) must be configured in order for one master not to use the same key as the other, in all cases. We do this by configuring MySQL to skip certain keys, and by enforcing MySQL to use a specific offset for each key used. The reason why we use a value of 10 instead of 2 is for future development.
You should not change the auto-increment value.
Source: the ClearDB FAQ
I am creating a system to generate unique keys. It works for now, but I haven't tested it with many users. Users click a button and then get their unique number, as simple as that.
But how do I prevent multiple users from getting the same unique key if they press the button at exactly the same time (even on the millisecond scale)? The button is on the client side, so I must do something on the back end.
This is what the unique key looks like:
19/XXXXXX-ABC/XYZ
The XXXXXX is an auto-increment number from 000001 to 999999. I have this code, but I don't know if it's reliable enough to handle my issue:
$autoinc = $this->MPenomoran->get_surat($f_nomor)->jumlah_no+1; // count rows in table and add 1
$no_1 = date('y')+2;
$no_2 = str_pad($autoinc, 6, '0', STR_PAD_LEFT);
$no_3 = "-ABC/XYZ";
$nomor = $no_1."/".$no_2.$no_3;
$returned_nomor = $nomor;
$success = array ('nomor' => $returned_nomor); // send unique key to user's view
It seems like you don't want to come out and tell us what the platform for this is, or what the limitations of that platform are.
The first thing that jumps out is that your format is limited, per year, to 999999 total unique keys. Very odd, but presumably you understand that limit and would need some code to deal with hitting the maximum number.
Approaches
Redis-based
This would be very simple with a Redis server using the INCR command. Since INCR is atomic, you essentially have a solution just by creating a key named for your year + 2, should it not exist, and using INCR on it from there on out.
You would need to use some PHP Redis client; there is a variety of them, each with strengths and weaknesses, that I'm not going to go into.
Redis is also great for caching, so if at all possible that is the first thing I would look into.
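As a minimal sketch using the phpredis extension (the key naming scheme and connection details are my assumptions, not from your code):

$redis = new Redis();
$redis->connect('127.0.0.1', 6379);
$year = date('y') + 2;                   // same "year + 2" component as your code
$autoinc = $redis->incr('seq:' . $year); // INCR is atomic: concurrent requests never get the same value
$nomor = $year . '/' . str_pad($autoinc, 6, '0', STR_PAD_LEFT) . '-ABC/XYZ';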
MySQL-based
There are a few different solutions using MySQL. They are involved, so I'll just outline them; I don't want to spend time writing a novel.
Note: You will need to translate these into the appropriate PHP code (mysqli or PDO), passing parameters, starting transactions etc. as noted.
MySQL - create your own sequence generator
Create a table named "Sequence" with this basic structure:
CREATE TABLE Sequence (
  name VARCHAR(2) NOT NULL PRIMARY KEY,
  nextval INT UNSIGNED NOT NULL DEFAULT 1
) ENGINE = InnoDB;
The underlying query would be something like this:
START TRANSACTION;
SELECT nextval FROM Sequence WHERE name = '$no_1' FOR UPDATE;
UPDATE Sequence SET nextval = nextval + 1 WHERE name = '$no_1';
COMMIT;
This emulates a serialized Oracle-style sequence. It is safe from a concurrency standpoint: FOR UPDATE makes MySQL lock the row until the transaction completes, so concurrent callers are briefly serialized.
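A rough PDO translation of the above (the DSN and credentials are placeholders, and $no_1 is the year prefix from your code):

$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->beginTransaction();
// FOR UPDATE holds a row lock until commit, serializing concurrent callers
$stmt = $pdo->prepare('SELECT nextval FROM Sequence WHERE name = ? FOR UPDATE');
$stmt->execute([$no_1]);
$autoinc = (int) $stmt->fetchColumn();
$pdo->prepare('UPDATE Sequence SET nextval = nextval + 1 WHERE name = ?')->execute([$no_1]);
$pdo->commit();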
MySQL - AUTO_INCREMENT on a multi-column PK
This comes with some caveats:
It is generally incompatible with replication.
The underlying table must be MyISAM.
CREATE TABLE Sequence (
  name VARCHAR(2) NOT NULL,
  lastval INT UNSIGNED NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (name, lastval)
) ENGINE = MyISAM;
Your underlying query would be:
INSERT INTO Sequence (name) VALUES ('$no_1')
This relies on MySQL supporting a multi-column key where the 2nd column is an AUTO_INCREMENT. Its behavior is such that it acts like a separate sequence for each unique name.
You would then use the relevant API's built-in way of getting MySQL's LAST_INSERT_ID(), for example with PDO:
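A sketch of that insert-and-fetch step ($pdo set up as in the previous example):

$pdo->prepare('INSERT INTO Sequence (name) VALUES (?)')->execute([$no_1]);
$autoinc = (int) $pdo->lastInsertId(); // PDO's wrapper around MySQL's LAST_INSERT_ID()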
Other alternatives
You could also use semaphores, files with locking, and all sorts of other ideas to create a sequence generator that would work well in a monolithic (one server for everything) environment. MySQL and Redis would serve a cluster, so those are more robust options from that standpoint.
The important thing is that whatever you do, you test it out using a load tester like siege or Boom to generate multiple requests at your web level.
I am developing inventory software with MySQL and PHP, where a local database syncs to an online database.
Suppose I have a table sell, and sell_id is the primary key of the table. I usually use INT with auto-increment for the primary key.
In local database 1 the sell table has 2 entries (sell_id 1, 2) and in local database 2 the sell table has 2 entries (sell_id 1, 2).
If I sync/insert these local sell table entries into the online sell table, they become sell_id 1, 2, 3, 4.
As the sell_id changes, it affects the entries in other tables that use sell_id as a foreign key.
How should I plan the primary key in this situation?
I am planning to use an alphanumeric id that will be unique across both databases. Will it create any problems or slow my DB queries once there are millions of sell_ids?
Are there any other ways to solve the problem?
This is too long for a comment.
Often, when you have a replicated system, the goal is to maintain the same data on all servers. That does not seem to be your business requirement.
Instead, you might consider having a composite primary key on all the servers, combining the auto-incremented primary key with a server id. All tables referencing the foreign key would need to incorporate the "server" column as well as the "id".
In general, I'm not a fan of composite primary keys. However, you have a distributed database and need to identify the specific database "partition" where the data is located. This seems like a good use-case for composite primary keys.
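A sketch of what that could look like (table and column names are assumptions based on the question):

CREATE TABLE sell (
  server_id TINYINT UNSIGNED NOT NULL,
  sell_id   INT UNSIGNED NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (server_id, sell_id),
  KEY (sell_id)  -- InnoDB requires the AUTO_INCREMENT column to lead an index of its own
) ENGINE = InnoDB;

CREATE TABLE sell_item (
  server_id TINYINT UNSIGNED NOT NULL,
  sell_id   INT UNSIGNED NOT NULL,
  item_id   INT UNSIGNED NOT NULL,
  -- the foreign key carries the server column along with the id
  FOREIGN KEY (server_id, sell_id) REFERENCES sell (server_id, sell_id)
) ENGINE = InnoDB;

Each server then fills server_id with its own constant value, so merged rows never collide even when the numeric ids overlap.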
An alternative approach -- if you are willing to take the risk -- is to set the auto numbering to a different start value on each server. Use a big int and a big value such as 1,000,000,000,000 for one server, 2,000,000,000,000 for the next, and so on. My preference is to have the "server" explicitly represented as a column, however.
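A sketch of that offset approach (the table name is an assumption; the values are from the paragraph above):

-- on server 1:
ALTER TABLE sell MODIFY sell_id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT;
ALTER TABLE sell AUTO_INCREMENT = 1000000000000;
-- on server 2:
ALTER TABLE sell AUTO_INCREMENT = 2000000000000;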
I have several servers, each running its own instance of a particular MySQL database, which unfortunately cannot be set up for replication/clustering. Each server inserts data into several user-related tables which have foreign key constraints between them (e.g. user, user_vote). Here is how the process goes:
all the servers start with the same data
each server grows its own set of data independently from the other servers
periodically, the data from all the servers is merged manually together and applied back to each server (the process therefore repeats itself from step 1).
This is made possible because, in addition to its primary key, the user table contains a unique email field, which makes it possible to identify which users already exist in each database and to merge those who are new, while changing the primary and foreign keys to avoid collisions and maintain the correct foreign key constraints. It works, but it takes quite some effort because the primary and foreign keys have to be changed to avoid collisions, hence my question:
Is there a way to have each server use primary keys that don't collide with other servers to facilitate the merging?
I initially wanted to use a composite primary key (e.g. server_id, id), but I am using Doctrine, which doesn't support primary keys composed of multiple foreign keys, so I would have problems with my foreign key constraints.
I thought about using a VARCHAR as the id and using part of the string as a prefix (SERVER1-1, SERVER1-2, SERVER2-1, SERVER2-2...), but I'm thinking it will make the DB slower, as I will have to do some manipulation of the ids (e.g. on insert, parse the existing ids, extract the highest, increment it, concatenate it with the server id...).
PS: Another option would be to implement replication with reads from slaves and writes to the master, but this option was discarded because of issues such as replication lag and the single point of failure on the master, which can't be solved for now.
You can make sure each server uses a different auto-increment step and a different start offset:
Change the step auto_increment fields increment by
(assuming you are using auto-increments)
I've only ever used this across two servers, so my set-up had one with even ids and one with odd.
When they are merged back together nothing will collide, as long as you make sure all tables follow the above idea.
In order to implement this for 4 servers, you would set up the following offsets:
Server 1 = 1
Server 2 = 2
Server 3 = 3
Server 4 = 4
You would set the increment step as follows (I've used 10 to leave space for extra servers):
Server 1 = 10
Server 2 = 10
Server 3 = 10
Server 4 = 10
And then after you have merged, before copying back to each server, you would just need to update the auto-increment value for each table to have the correct offset again. Imagine each server had created 100 rows; the auto-increment counters would be:
Server 1 = 1001
Server 2 = 1002
Server 3 = 1003
Server 4 = 1004
This is where it gets tricky with four servers, for imagine that certain tables had no rows inserted from a particular server. You could end up with some tables whose last auto-increment id came not from server 4 but from server 2 instead, which would make it very tricky to work out what the next auto-increment value should be for any particular table.
For this reason it is probably best to also include a column in each of your tables that records the server number when any rows are inserted.
id | field1 | field2 | ... | server
That way you can easily find out the last auto-increment value for a particular server by running the following on any of your tables:
SELECT MAX(id) FROM `table` WHERE `server` = 4;
Using this value you can reset the next autoinc value you need for each table on each server, before rolling the merged dataset out to the server in question.
Note that information_schema is read-only, so you cannot UPDATE it directly; apply the counter with ALTER TABLE instead. Since ALTER TABLE does not accept a subquery, build the statement dynamically:
SET @sql = CONCAT('ALTER TABLE `table` AUTO_INCREMENT = ',
    (SELECT MAX(id) FROM `table` WHERE `server` = s) + n);
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
Where s is the server number and n is the increment step, so in my example it would be 10.
Prefixing the id would do the trick. As for the DB being slower: that depends on how much traffic is served there. You could also have the "prefixed id" split into two columns, "prefix" and "id", and these can be of any type. It would require some logic to cope with it in requests, but may be worth evaluating.
Guys, how can I set the auto_increment of my userid something like this:
I want to start it at 200, then increment by 10 with a maximum value of 1000.
How will I code it using PHP?
Please help me.. :-(
You can set a starting point for an auto-increment value, but the rest you ask (increasing by 10 and limiting at 1000) is impossible at the MySQL level.
You would need to do this in your PHP code, as a pre-check before creating a new user account. I would also recommend doing this in a separate, indexed int column.
Update: There is the auto_increment_increment MySQL setting, but it is aimed at replication setups, doesn't fit your normal single-database MyISAM setup, and is applied server-wide - it's not what you want.
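As a sketch of that PHP-side pre-check (the table and column names are assumptions, and it would still need a lock or transaction to be safe under concurrent signups):

$stmt = $pdo->query('SELECT MAX(userid) FROM users');
$max = (int) $stmt->fetchColumn();
$next = ($max === 0) ? 200 : $max + 10; // start at 200, then step by 10
if ($next > 1000) {
    throw new RuntimeException('userid range exhausted (maximum is 1000)');
}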