MySQL incremental replication from remote DB - PHP

We are developing a web application using MySQL and PHP.
In our application, we need to sync our local MySQL database with a remote MySQL database (running on a different host), based on a trigger from the user interface.
The trigger is a web page: when the user clicks a button, a PHP server-side script is fired which should perform this synchronization in the background.
We planned to do it in a simple way, by opening connections to the remote and local databases and inserting the rows one at a time. But the remote table can be very large (as big as a few million rows), so we need a more efficient solution.
Can someone help us with an SQL query / PHP code that can do this DB sync efficiently without burdening the remote DB too much?
Thanks in advance
Shyam
UPDATE
The remote DB is not in my control, so I cannot configure it as a master or change any other settings on it. That is one major limitation I have, and it is why I want to do this programmatically using PHP. Another option I have is to read blocks of 1000 rows from the remote DB and insert them into the local DB, but I wanted to know if there is a better way.
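For illustration, here is a rough sketch of the batched approach I have in mind (the table name "items", the auto-increment key "id", the column list and the credentials are placeholders; it also assumes rows are only ever appended on the remote side):

<?php
// Pull rows from the remote table into the local one in batches of 1000,
// keyed on an auto-increment primary key so each run only fetches new rows.
$remote = new PDO('mysql:host=remote.example.com;dbname=remotedb;charset=utf8', 'remote_user', 'remote_pass');
$local  = new PDO('mysql:host=localhost;dbname=localdb;charset=utf8', 'local_user', 'local_pass');
$remote->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$local->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$batchSize = 1000;

// Resume from the highest id already present locally, so repeated runs are incremental.
$lastId = (int) $local->query('SELECT COALESCE(MAX(id), 0) FROM items')->fetchColumn();

$select = $remote->prepare('SELECT id, name, payload FROM items WHERE id > ? ORDER BY id LIMIT ' . $batchSize);

do {
    $select->execute([$lastId]);
    $rows = $select->fetchAll(PDO::FETCH_ASSOC);
    if (!$rows) {
        break;
    }

    // Insert the whole batch in one transaction to avoid per-row commit overhead.
    $local->beginTransaction();
    $insert = $local->prepare('INSERT INTO items (id, name, payload) VALUES (?, ?, ?)');
    foreach ($rows as $row) {
        $insert->execute([$row['id'], $row['name'], $row['payload']]);
        $lastId = (int) $row['id'];
    }
    $local->commit();
} while (count($rows) === $batchSize);

Keying on the id (rather than paging with an ever-growing OFFSET) keeps the remote query cheap even when the table has millions of rows.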

You shouldn't concern yourself with MySQL replication from an application layer when the data layer has this functionality built-in. Please read up on "Master/Slave replication with MySQL". https://www.digitalocean.com/community/articles/how-to-set-up-master-slave-replication-in-mysql is a starting point.

The cost of replication is minimal as far as the Master is concerned. It basically works like this:
The Master logs all (relevant) activity into a binary log file
The Slave downloads this log every now and then and replays it locally
Therefore, the impact on the Master is only writing linear data to a log file, plus a little bit of bandwidth. If you still worry about the impact on the Master, you could throttle the link between Master and Slave (at the system level), or just open the link during low activity times (by issuing STOP/START SLAVE commands at appropriate times).
I should also mention that built-in replication takes place at a low level inside the MySQL engine; I do not think you can achieve better performance with an external process. If you want to fully synchronise your local database when you hit this "Synchronise" button, then look no further.
If you can live with partial synchronisation, then you could have this button resume replication for a short timeframe (e.g. START SLAVE for 10 seconds and then STOP SLAVE again automatically; the user needs to click again to get more data synchronised).
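As a rough sketch, assuming the local database is already configured as a slave of the remote master and the connecting MySQL user is allowed to control replication, the button's PHP handler could simply be (credentials and the 10-second window are placeholders):

<?php
// Resume replication for a short window when the user clicks "Synchronise",
// then pause it again. Assumes the local server is already set up as a slave.
$local = new PDO('mysql:host=localhost;dbname=localdb', 'admin_user', 'admin_pass');
$local->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$local->exec('START SLAVE');   // start replaying the master's binary log
sleep(10);                     // let the slave catch up for a short while
$local->exec('STOP SLAVE');    // pause again until the next click

// Optionally report how far behind the slave still is.
$status = $local->query('SHOW SLAVE STATUS')->fetch(PDO::FETCH_ASSOC) ?: [];
echo 'Seconds behind master: ' . ($status['Seconds_Behind_Master'] ?? 'unknown');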

Related

MySQL database replication methods

I am building a site to be used by clients; it would store their basic information and the projects or services they are paying my company for. The entire login + panel would run under SSL/HTTPS, but my main concern comes down to database replication, to prevent any event where something is lost.
Because some of the projects are hosted by me for the clients, I need a way to ensure their data is safe and sound. At the moment I am using the Media Temple GS service but will move to the DV service once more customers start to pick up.
Based on my personal knowledge, I was thinking of doing something like you would do with hard drives, where there is a Master and then a Slave. In SQL terms there would be a Master (index) database and a few Slave (cache) databases.
But the question is: what would be the best way to replicate or back up the Master onto the Slave(s), and should I have additional GS or DV servers, or is using the same server but with a different DB name good enough?
Edit
I did some looking around MT and came across their MySQL GridContainer, which seems to do the same as owning a second server. Would this be a good alternative to an actual second server?
The idea of replication for backup is to replicate the database to another database that you can stop, and then create a full backup of that stopped database while your production database keeps running.
You can use the same server for creating backup files, but don't forget that backups can hurt server performance (hard disk load). Additionally, when the database is big and you need historical backup files, you may need to compress them, and the compression operation will hurt your server performance badly.
You can't really avoid a second server, because you have to copy the backup to another machine anyway (a backup on the same machine makes no sense).
So in general, it's better to replicate to another machine, which can also be used in crisis situations when the master server is down.
I found a nice article about many solutions for high-availability MySQL: link to mysql.com.

What is the best way to transfer selected MySQL data every minute to another MySQL server?

I'm looking for a way to transfer selected MySQL data from one server to another every minute, or at least every few minutes. Here's an example:
(Connect to the source SQL server and select the needed data)
SELECT name, email, online, session FROM example_table WHERE session!=0
(Process the data, connect to the external target SQL server and INSERT/REPLACE the data)
I want to transfer ONLY the output of the query to the target server, which of course has a matching table structure.
I have already made a simple PHP script which is executed every minute by a cron job on Linux, but I guess there are better ways performance-wise, and it does not support arrays right now.
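For illustration, a minimal sketch of the kind of cron script I mean (host names and credentials are placeholders, and it assumes the target table has a suitable unique key, e.g. on email):

<?php
// Runs every minute from cron: pull the active rows from the source server
// and REPLACE them into the target server.
$source = new PDO('mysql:host=source.example.com;dbname=app', 'src_user', 'src_pass');
$target = new PDO('mysql:host=target.example.com;dbname=app', 'dst_user', 'dst_pass');
$source->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$target->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$rows = $source->query(
    'SELECT name, email, online, session FROM example_table WHERE session != 0'
)->fetchAll(PDO::FETCH_ASSOC);

if ($rows) {
    // REPLACE relies on a unique key so repeated runs update existing rows
    // instead of piling up duplicates.
    $stmt = $target->prepare(
        'REPLACE INTO example_table (name, email, online, session) VALUES (?, ?, ?, ?)'
    );
    $target->beginTransaction();
    foreach ($rows as $r) {
        $stmt->execute([$r['name'], $r['email'], $r['online'], $r['session']]);
    }
    $target->commit();
}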
Any kind of suggestions / code examples which are Linux compatible are welcome.
I'm not entirely sure what data it is you're trying to transfer, but luckily MySQL supports replication between different servers. If you save the data on the local source server and set up the target server to fetch all updates from the source server, you'll have two identical databases. This way, you won't need any scripts or cronjobs.
You can find more information at http://dev.mysql.com/doc/refman/5.0/en/replication-howto.html.
Here is a good open source replication engine:
http://code.google.com/p/tungsten-replicator/

Bring data periodically from Linked Database in SQLServer 2008

We are developing a system in PHP with SQL Server 2008. It is a system that must work with the invoices stored in another SQL Server instance, which I have linked to my database using sp_addlinkedserver.
The problem is that I think I need to have the data loaded locally (because of performance). So I'm thinking of creating my own "invoices" table and, twice a day, somehow bringing the data from the linked table into the locally stored one.
How can I program SQL Server to do this every X amount of time?
What approach should I use to program the import?
At first I thought of making my own script to do this, but I would prefer to have SQL Server handle it; that depends on your opinion :)
Thank you!
Guillermo
NOTE: Replication sounds like overkill to me. I don't need real-time synchronization, and I don't need to update the database, just read it.
One option is to use replication to copy the data. However, it may take more administration than you're planning. Replication is great for managing a consistent and timely copy of the data.
Another option is to set up a SQL Server job that runs a SQL script to insert into your target table using a select from your linked server.
You could also use SQL Server Integration Services (SSIS). You would create a SSIS package where you would build a data flow that transfers your data from the source table to the target table. You wouldn't need a linked server for this approach, because your data sources are defined within the SSIS package. And, you can use a SQL Server job to schedule the package run times.
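If you do end up with the scripted route mentioned in the question rather than an Agent job, the same INSERT ... SELECT over the linked server can be driven from PHP on a schedule (cron or Windows Task Scheduler). A rough sketch, where the linked server name, database, schema, columns and credentials are all placeholders:

<?php
// Refresh a local read-only copy of the invoices from the linked server,
// intended to run twice a day from a scheduler.
$conn = sqlsrv_connect('localhost', [
    'Database' => 'MyAppDb',
    'UID'      => 'app_user',
    'PWD'      => 'secret',
]);
if ($conn === false) {
    die(print_r(sqlsrv_errors(), true));
}

// Replace the local copy wholesale; acceptable for a cache that is only read
// by the application and refreshed a couple of times per day.
$sql = "
    BEGIN TRANSACTION;
    TRUNCATE TABLE dbo.invoices_local;
    INSERT INTO dbo.invoices_local (invoice_id, customer_id, amount, issued_at)
    SELECT invoice_id, customer_id, amount, issued_at
    FROM [RemoteInvoices].[Billing].[dbo].[invoices];
    COMMIT;
";
if (sqlsrv_query($conn, $sql) === false) {
    die(print_r(sqlsrv_errors(), true));
}
sqlsrv_close($conn);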

Sync large local DB with server DB (MySQL)

I need to sync a large (3 GB+ / 40+ tables) local MySQL database to a server database every week.
The two databases are exactly the same. The local DB is constantly updated, and every week or so the server DB needs to be updated with the local data. You can call it a 'mirrored DB' or 'master/master', but I'm not sure if that is correct.
Right now the DB only exists locally. So:
1) First I need to copy the DB from local to server. With phpMyAdmin, export/import is impossible because of the DB size and phpMyAdmin's limits. Exporting the DB to a gzipped file and uploading it through FTP will probably break in the middle of the transfer because of connection problems or the server's file size limit. Exporting each table separately will be a pain, and each table will also be very big. So, what is the best solution for this?
2) After the local DB is fully uploaded to the server, I need to update the server DB weekly. What is the best way of doing that?
I have never worked with this kind of scenario, I don't know the different ways of achieving this, and I'm not particularly strong with SQL, so please explain as clearly as you can.
Thank you very much.
This article should get you started.
Basically, get Maatkit and use the sync tools in there to perform a master-master synchronization:
mk-table-sync --synctomaster h=serverName,D=databaseName,t=tableName
You can use Data Comparer for MySQL.
Customize the synchronization template, specifying which data and which tables to synchronize.
Schedule a weekly run of the template.
I have two servers synchronized daily with dbForge Data Comparer via the command line.

Synchronization with master database

I have two databases hosted on different servers. What is the best way to copy all the contents of a table in the master database to the corresponding table in the slave database? I am not the owner of the master database, but they are willing to give us access.
Previously, the data from the master database was output via RSS and my PHP script parsed it and inserted it into the other server where the second database is located, but due to the huge amount of data it takes 24 hours to update and insert everything into the remote database; that is probably because of the overhead of working with two databases.
So what we plan is to create a script that downloads the data from the master database, saves a local copy, and then transfers it via FTP to the second server and dumps the contents into the database. Is that advisable even though the size of the file (either CSV or SQL) is around 30 MB and still growing? What is the best solution for this?
NOTE: everything from downloading, to the FTP transfer, to inserting into the second database is handled by cron for automatic updates.
You should really consider MySQL Master-Slave replication. It means every insert/update is also applied on the slave server.
The master server needs to be configured to keep a (binary) transaction log, which the slave uses to keep track of updates.
Other than ease of use, replication also keeps the load low since it is a continuous process.
What type of database are we talking about? Have you looked into replication?
