I need to sync a large (3GB+, 40+ tables) local MySQL database to a server database every week.
The two databases are exactly the same. The local DB is constantly updated, and every week or so the server DB needs to be updated with the local data. You could call it a 'mirrored DB' or 'master/master' setup, but I'm not sure if that's the correct term.
Right now the DB only exists locally. So:
1) First I need to copy the DB from local to server. Export/import with phpMyAdmin is impossible because of the DB size and phpMyAdmin's limits. Exporting the DB to a gzipped file and uploading it through FTP will probably break mid-transfer because of connection problems or the server's file size limit. Exporting each table separately will be a pain, and each table will also be very big. So, what is the best solution for this?
2) After the local DB is fully uploaded to the server, I need to update the server DB weekly. What is the best way to do that?
I have never worked with this kind of scenario, I don't know the different ways of achieving it, and I'm not particularly strong with SQL, so please explain as clearly as possible.
Thank you very much.
This article should get you started.
Basically, get Maatkit and use its sync tools to perform a master-master synchronization:
mk-table-sync --synctomaster h=serverName,D=databaseName,t=tableName
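For the initial full copy (part 1 of the question), one common approach is a single mysqldump piped through gzip, transferred with rsync (which can resume a broken transfer, unlike a plain FTP upload), and loaded on the server. A minimal sketch, assuming shell access on both machines; 'mydb', 'user' and 'server' are placeholders:
# dump and compress in one pass, without locking the whole database for long
mysqldump --single-transaction --quick mydb | gzip > mydb.sql.gz
# rsync can resume a partially transferred file if the connection drops
rsync --partial --progress mydb.sql.gz user@server:/tmp/
# then, on the server (the empty database 'mydb' must already exist there):
gunzip -c /tmp/mydb.sql.gz | mysql mydb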
You can use a data comparison tool for MySQL:
Customize a synchronization template that specifies which tables and which data to synchronize.
Schedule a weekly run of that template.
I have two servers synchronized daily with dbForge Data Compare via the command line.
Related
I have a website where I query one of our external SQL Servers and insert the records into the website's local server.
I'm simply connecting to the external database, querying the table, truncating the local table, and running a foreach loop to insert the data into the local table.
The process works fine; the problem is that it takes a long time.
I just want to see if you guys could give me some hints on how to speed up the process. If there is another way to do this, please let me know.
There are so many factors in determining the best approach. Is the database you are copying to supposed to always be the same as the source, or will it have entries that are not in the source? If you just want them to be identical, and the website database is simply a read-only clone, SQL Server gives you a bunch of ways to do this: replication, log shipping, mirroring, SSIS packages. It all depends on how frequently you want to synchronize the databases, and on a lot of other factors.
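If you keep the periodic-copy approach rather than replication, the biggest win is usually bulk-loading the data instead of inserting row by row in a foreach loop. A rough sketch using SQL Server's bcp utility; the server, database, table and credential names are placeholders, and it assumes you can run the command-line tools against both servers:
# export the table from the external server to a flat file in character mode
bcp SourceDb.dbo.Records out records.dat -c -S external-server -U user -P secret
# empty the local table, then bulk-load the file into it
sqlcmd -S local-server -d SiteDb -U user -P secret -Q "TRUNCATE TABLE dbo.Records"
bcp SiteDb.dbo.Records in records.dat -c -S local-server -U user -P secret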
I have a large database, about 100 GB, whose records need to be synced to a remote database. Each record must be checked against the online DB and, if it is not there, inserted or updated. I have tried some methods to speed this up, but the transfer is still far too slow. The methods I tried are:
A simple script that matches each record and uploads it to the database, which is very, very slow.
Generating a MySQL dump, compressing it, transferring it to the online server, and then checking and updating the records there. The dump was too big to transfer (it took far too long).
Kindly suggest other methods to transfer the DB.
Try MySQLDumper:
http://sourceforge.net/projects/mysqldumper/files/
It can take a MySQL dump and restore it; it's awesome, and its speed is good.
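If your tables have an indexed last-modified timestamp column, another option is to transfer only the rows that changed since the last successful sync instead of the whole 100 GB. A rough sketch, where the column name 'updated_at', the database name and the host are placeholders, and both sides are assumed to have identical table structures:
# last successful sync time, recorded wherever you track it
SINCE="2013-01-01 00:00:00"
# dump only changed rows as REPLACE statements (update existing rows, insert missing ones)
mysqldump --no-create-info --replace --where="updated_at >= '$SINCE'" mydb | gzip > delta.sql.gz
rsync --partial delta.sql.gz user@remote:/tmp/
# on the remote server, applying the delta updates or inserts each row
gunzip -c /tmp/delta.sql.gz | mysql mydb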
I have to develop a project with PHP and MySQL for a sales management system. There are many outlets. I want to keep a database centrally, and every outlet has a database locally. Users enter data into the local database, and after a while the local data can be uploaded to the central database.
Local data will go to the central database, but central data will not go to the local databases.
What would the procedure be for that (e.g. synchronization, replication)?
I wouldn't use synchronization or replication. I would use an import/export mechanism.
Write a little tool that exports the last day/week/month of data and then sends it over a secure line to your main database for import.
Depending on the specs of your project (size of data, longevity of data, frequency of sync, etc.) you might have to implement a one-way synchronization, i.e. your clients upload data incrementally, sending only the new changes to the server (there is no need to re-send all information on each sync).
You can achieve this in various ways. The simple way is to upload your data to the server and remove it from local storage. If your clients need to keep the uploaded data, then introduce an additional field, "Dirty", in your tables on the client side and use it to mark new changes.
I recently blogged about a bi-directional sync algorithm that includes upload-changes functionality using a dirty field; it might be helpful to you.
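As a rough illustration of the dirty-flag idea, the upload step on an outlet could look like the sketch below. The table name 'sales', the database names and the host are hypothetical, and the central table must have the same columns (including the flag):
# one-time: add the flag column on the client side, defaulting to "changed";
# the application should also set dirty = 1 whenever it inserts or updates a row
mysql localdb -e "ALTER TABLE sales ADD COLUMN dirty TINYINT(1) NOT NULL DEFAULT 1"
# each sync: export only flagged rows as REPLACE statements and apply them centrally
mysqldump --no-create-info --replace --where="dirty = 1" localdb sales | gzip > delta.sql.gz
scp delta.sql.gz user@central:/tmp/
ssh user@central "gunzip -c /tmp/delta.sql.gz | mysql centraldb"
# clear the flag locally only after the upload succeeded
mysql localdb -e "UPDATE sales SET dirty = 0 WHERE dirty = 1"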
Maybe SymmetricDS (http://www.symmetricds.org) can solve your problem. We're having a similar problem and we've decided to use it.
Well, I'm looking for a way to transfer selected MySQL data from one server to another every minute, or at least every few minutes. Here's an example:
(Connect to the source SQL server and select the needed data)
SELECT name, email, online, session FROM example_table WHERE session!=0
(Process the data, connect to the external target SQL server and INSERT/REPLACE the data)
I want to transfer ONLY the output of the query to the target server, which of course has a matching table structure.
I have already made a simple PHP script that is executed every minute by a cron job on Linux, but I guess there are better-performing ways, and it doesn't support arrays right now.
Any suggestions or code examples that are Linux-compatible are welcome.
I'm not entirely sure what data it is you're trying to transfer, but luckily MySQL supports replication between different servers. If you save the data on the local source server and set up the target server to fetch all updates from the source server, you'll have two identical databases. This way, you won't need any scripts or cronjobs.
You can find more information at http://dev.mysql.com/doc/refman/5.0/en/replication-howto.html.
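If full replication copies more than you need (it mirrors whole tables, not just the rows matching your WHERE clause), a cron-friendly alternative is to dump only the matching rows on the source and replay them on the target as REPLACE statements. A sketch, assuming example_table exists on both servers and has a primary or unique key (hosts and credentials are placeholders):
# can be run straight from a cron entry, e.g.:  * * * * * /usr/local/bin/sync_sessions.sh
# REPLACE relies on a primary/unique key to overwrite existing rows instead of duplicating them
mysqldump -h source-host -u user -psecret --no-create-info --replace \
  --where="session != 0" mydb example_table \
  | mysql -h target-host -u user -psecret mydb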
Here is a good open source replication engine:
http://code.google.com/p/tungsten-replicator/
I have two databases hosted on different servers. What is the best way to copy all the contents of a table in the master database to a table in the slave database? I am not the owner of the master database, but its owners are willing to give me access. Previously, the data from the master database was output via RSS and my PHP script parsed it and inserted it into the database on the other server, but because of the huge amount of data it takes 24 hours to update and insert everything into the remote database, probably because of the overhead of working across the two databases. So what we plan is to create a script that downloads the data from the master database, saves a local copy, then FTPs it to the second server and dumps the contents into its database. Is that advisable even though the file (CSV or SQL) is around 30 MB and still growing? What is the best solution for this?
NOTE: everything, from downloading to FTP to inserting into the second database, is handled by cron scripts for automatic updates.
You really should consider MySQL master-slave replication. This means every insert/update is also applied on the slave server.
The master server needs to be configured to keep a binary log, which the slave uses to keep track of updates.
Other than ease of use, replication also keeps the load low since it is a continuous process.
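A rough outline of what that setup involves on MySQL 5.x; the host names, the 'repl' user and the password are placeholders:
# on the master: in my.cnf set server-id = 1 and log-bin = mysql-bin, then restart mysqld
# create the account the slave will connect with
mysql -e "GRANT REPLICATION SLAVE ON *.* TO 'repl'@'slave-host' IDENTIFIED BY 'secret'"
# take an initial dump; --master-data embeds the binlog file/position in the dump
mysqldump --master-data --single-transaction --all-databases | gzip > full.sql.gz

# on the slave: set server-id = 2 in my.cnf, restart, load the dump, then point it at the master
mysql -e "CHANGE MASTER TO MASTER_HOST='master-host', MASTER_USER='repl', MASTER_PASSWORD='secret'"
mysql -e "START SLAVE"
mysql -e "SHOW SLAVE STATUS\G"   # Slave_IO_Running and Slave_SQL_Running should both say Yes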
What type of database are we talking about? Have you looked into replication?