I have two databases hosted on different servers. What is the best way to copy all the contents of a table in the master database into a table in the slave database? I am not the owner of the master database, but they are willing to give me access.

Previously, the data from the master database was output via RSS and my PHP script parsed it and inserted it into the other server where the second database is located. But due to the huge amount of data it takes 24 hours to update and insert everything into the remote database, probably because of the overhead of working with two databases. So our plan is to create a script that downloads the data from the master database, saves a local copy, then transfers it via FTP to the second server and dumps the contents into that database. Is that advisable even though the file, whether CSV or SQL, is around 30 MB and still growing? What is the best solution for this?

NOTE: all of the scripts, from downloading to FTP to inserting into the second database, are run by cron for automatic updates.
You really should consider MySQL Master-Slave replication. This means every insert/update on the master is also applied on the slave server.
The master server needs to be configured to keep a (binary) transaction log, which the slave uses to keep track of updates.
Other than ease of use, replication also keeps the load low since it is a continuous process.
What type of database are we talking about? Have you looked into replication?
I have a website where I query one of our external SQL Servers and insert the records into the local server of the website.
I'm simply connecting to the external database, querying the table, truncating the local table, and running a foreach to insert data into the local table.
The process works fine, the problem is that it takes a long time.
I just want to see if you guys could give me some hints on how to speed up the process. If there is another way to do this, please let me know.
There are many factors that determine the best approach. Is the database you are copying to supposed to always be the same as the source one, or will it have entries that are not in the source? If you just want them to be identical, and the website database is simply a read-only clone, you have a bunch of ways to do this in SQL Server: replication, log shipping, mirroring, SSIS packages. It all depends on how frequently you want to synchronize the databases and a lot of other factors.
We are working on a CRM site. Now we have to move the site from one server to another. Moving the files is not a problem, since we work on them and know what has been changed or updated in them. But we have to move the DB from one server to the other without losing a single record. We planned to keep accessing the old server's DB from the new server while we migrate the DB from the old server to the new one. The problem is that while we are doing this, users may still be inserting records into the old DB, so we would lose that data because we have already taken the dump. So kindly suggest the best way to do the DB transfer without losing any data.
We are thinking of capturing the records inserted between the time of the dump and the switch to the new server. Is that possible? If yes, kindly suggest how.
Allow updates on both servers; that is, the old DB server will keep taking updates. When the DNS migration is finished, only then migrate the DB data. There is still likely to be some downtime, but scripts can speed this up.
We are developing a web application using MySQL and PHP.
In our application, we are expected to sync our local MySQL database with a remote MySQL database (running on a different host) based on a trigger from the user interface.
The user trigger is a webpage: when the user clicks a button, a PHP server script is fired which should perform this synchronization in the background.
We planned to do it in a simple way by opening DB connections to the remote and local databases and inserting the rows one at a time. But the remote DB table can be very large (as big as a few million entries), and hence we need a more efficient solution.
Can someone help us with the SQL query / PHP code which can do this DB sync in an efficient manner without burdening the remote DB too much?
Thanks in advance
Shyam
UPDATE
The remote DB is not under my control, so I cannot configure it as a master or change any other settings on it. That is one major limitation I have, and it is why I want to do it programmatically using PHP. Another option I have is to read blocks of 1000 rows from the remote DB and insert them into the local DB. But I wanted to know if there is a better way?
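For illustration, that block-wise copy might look roughly like the PHP sketch below. The connection details, table name and column list are placeholders, and it assumes an auto-increment primary key so that blocks can be paged by key instead of by an increasingly expensive OFFSET:

<?php
// Hypothetical connection details -- replace with real hosts and credentials.
$remote = new PDO('mysql:host=remote.example.com;dbname=remotedb', 'remote_user', 'secret');
$local  = new PDO('mysql:host=localhost;dbname=localdb', 'local_user', 'secret');
$remote->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$local->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$batchSize = 1000;
$lastId = 0; // assumes an auto-increment primary key column `id`

while (true) {
    // Fetch the next block of rows from the remote table, paged by primary key.
    $select = $remote->prepare(
        'SELECT id, name, email FROM example_table WHERE id > ? ORDER BY id LIMIT ?'
    );
    $select->bindValue(1, $lastId, PDO::PARAM_INT);
    $select->bindValue(2, $batchSize, PDO::PARAM_INT);
    $select->execute();
    $rows = $select->fetchAll(PDO::FETCH_ASSOC);

    if (!$rows) {
        break; // nothing left to copy
    }

    // Build one multi-row INSERT for the whole block instead of one query per row.
    $placeholders = implode(',', array_fill(0, count($rows), '(?, ?, ?)'));
    $insert = $local->prepare(
        "INSERT INTO example_table (id, name, email) VALUES $placeholders
         ON DUPLICATE KEY UPDATE name = VALUES(name), email = VALUES(email)"
    );

    $values = array();
    foreach ($rows as $row) {
        $values[] = $row['id'];
        $values[] = $row['name'];
        $values[] = $row['email'];
    }

    $local->beginTransaction();
    $insert->execute($values);
    $local->commit();

    $lastId = (int) $rows[count($rows) - 1]['id'];
}

Batching the rows and wrapping each block in a transaction keeps the number of round trips and commits low, which is usually where the row-at-a-time approach loses most of its time.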
You shouldn't implement MySQL replication at the application layer when the data layer has this functionality built in. Please read up on "Master/Slave replication with MySQL". https://www.digitalocean.com/community/articles/how-to-set-up-master-slave-replication-in-mysql is a starting point.
The cost of replication is minimal as far as the Master is concerned. It basically works like this:
The Master logs all (relevant) activity into a flat log file
The Slave downloads this log every now and then and replays the binary log locally
Therefore, the impact on the Master is only writing linear data to a log file, plus a little bit of bandwidth. If you still worry about the impact on the Master, you could throttle the link between Master and Slave (at the system level), or just open the link during low activity times (by issuing STOP/START SLAVE commands at appropriate times).
I should also mention that built-in replication takes place at a low level inside the MySQL engine. I do not think you can achieve better performance with an external process. If you want to fully synchronise your local database when you hit this "Synchronise" button, then look no further.
If you can live with partial synchronisation, then you could have this button resume replication for a short timeframe (e.g. START SLAVE for 10 seconds and then STOP SLAVE again automatically; the user needs to click again to get more data synchronised).
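In PHP, the button's handler for that partial approach could be as simple as the sketch below. The credentials are placeholders, and the account needs sufficient privileges (SUPER on older MySQL versions) to control replication:

<?php
// Connect to the local MySQL instance, which acts as the replication slave.
$slave = new PDO('mysql:host=localhost;dbname=localdb', 'repl_admin', 'secret');
$slave->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$slave->exec('START SLAVE'); // start catching up with the master
sleep(10);                   // let replication run for about 10 seconds
$slave->exec('STOP SLAVE');  // pause again until the next button click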
Well, I'm looking for a way to transfer selected MySQL data from one server to another every minute, or at least every few minutes. Here's an example:
(Connect to the source SQL server and select the needed data)
SELECT name, email, online, session FROM example_table WHERE session!=0
(Process the data, connect to the external target SQL server and INSERT/REPLACE the data)
I want to transfer ONLY the output of the query to the target server, which of course has a matching table structure.
I have already made a simple PHP script which is executed every minute by a cron job on Linux, but I guess there are better ways performance-wise, and it does not support arrays right now.
Any kind of suggestions / code examples which are Linux compatible are welcome.
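For illustration, one easy improvement to a cron script like that is to send all selected rows in a single multi-row REPLACE statement instead of one query per row. A rough sketch follows; the hosts and credentials are placeholders, and REPLACE only does something useful if the target table has a suitable primary or unique key:

<?php
// Source and target connections -- hosts and credentials are placeholders.
$source = new PDO('mysql:host=source.example.com;dbname=sourcedb', 'user', 'secret');
$target = new PDO('mysql:host=target.example.com;dbname=targetdb', 'user', 'secret');
$source->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$target->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// Select only the rows that should be transferred.
$rows = $source->query(
    'SELECT name, email, online, session FROM example_table WHERE session != 0'
)->fetchAll(PDO::FETCH_NUM);

if ($rows) {
    // One REPLACE statement for all rows keeps the number of round trips low.
    // A primary or unique key on the target table (e.g. on email) ensures existing
    // rows are overwritten rather than duplicated.
    $placeholders = implode(',', array_fill(0, count($rows), '(?, ?, ?, ?)'));
    $stmt = $target->prepare(
        "REPLACE INTO example_table (name, email, online, session) VALUES $placeholders"
    );
    // Flatten the row arrays into a single list of bound values.
    $stmt->execute(array_merge(...$rows));
}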
I'm not entirely sure what data it is you're trying to transfer, but luckily MySQL supports replication between different servers. If you save the data on the local source server and set up the target server to fetch all updates from the source server, you'll have two identical databases. This way, you won't need any scripts or cronjobs.
You can find more information at http://dev.mysql.com/doc/refman/5.0/en/replication-howto.html.
Here is a good open source replication engine:
http://code.google.com/p/tungsten-replicator/
I need to weekly sync a large (3GB+ / 40+ tables) local MySQL database to a server database.
The two databases are exactly the same. The local DB is constantly updated, and every week or so the server DB needs to be updated with the local data. You can call it a 'mirrored DB' or 'master/master', but I'm not sure if this is correct.
Right now the DB only exist locally. So:
1) First I need to copy the DB from local to server. With phpMyAdmin, export/import is impossible because of the DB size and phpMyAdmin's limits. Exporting the DB to a gzipped file and uploading it through FTP will probably break in the middle of the transfer because of connection problems or the server's file size limit. Exporting each table separately would be a pain, and each table would also be very big. So, what is the better solution for this?
2) After the local DB is fully uploaded to the server, I need to update the server DB weekly. What is the better way of doing it?
I have never worked with this kind of scenario, I don't know the different ways of achieving this, and I'm not exactly strong with SQL, so please explain as clearly as possible.
Thank you very much.
This article should get you started.
Basically, get Maatkit and use the sync tools in there to perform a master-master synchronization:
mk-table-sync --synctomaster h=serverName,D=databaseName,t=tableName
You can use a data comparer tool for MySQL.
Customize the synchronization template, specifying which tables and which data to synchronize.
Schedule a weekly update using the template.
I have two servers synchronized daily with dbForge Data Comparer via the command line.