Synchronize local database and live site database - PHP

I have a local website where a user adds items to a MySQL DB every day. Now I want to make a live version of the site.
But instead of adding items to both databases, I want to add only to the local database and sync the remote database.
The local site uses XAMPP. Also, I don't think replication is the way I want to do it.
I'm looking for more of a PHP way of doing this task.
Currently I have no idea how to achieve this.
Any idea on how to do this?

The quickest way to do this would be with MySQL itself, but if you want to use strictly PHP, there are two ways to go about it, provided you want the live data to only reflect the local data (i.e. you're fine with deleting all live data and re-pulling it). Because this solution is in PHP, you will have to loop through each individual table. You can run a PHP script that uses either PDO or MySQLi (see the PDO sketch after the two options below), but you will need one of the two deletion strategies listed here:
1) TRUNCATE and INSERT ... SELECT (fast, but has potential security risks). This is the riskier option because TRUNCATE requires the DROP privilege, which is not a privilege you want a regular database user to have. Here's the SQL to pull it off:
TRUNCATE live_database_name.table_name
INSERT INTO live_database_name.table_name SELECT * FROM local_database_name.table_name
2) DELETE FROM and INSERT ... SELECT (slower the more data you have, but safer). This solution is slower because MySQL removes each row in the table individually rather than dropping and re-creating the table. However, DELETE is a safer privilege to give a database user, as it doesn't let them drop entire tables. Here's what you'll need to pull it off:
DELETE FROM live_database_name.table_name
INSERT INTO live_database_name.table_name SELECT * FROM local_database_name.table_name
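A minimal PDO sketch of strategy 2, assuming both databases sit on the same MySQL server (the credentials and table names below are placeholders, not from the question):

<?php
// Hypothetical sketch: refresh live tables from local ones. Both databases
// must live on the same MySQL server for the cross-database INSERT ... SELECT.
$pdo = new PDO('mysql:host=localhost;charset=utf8mb4', 'sync_user', 'password');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$tables = ['items', 'categories']; // every table you want to sync

foreach ($tables as $table) {
    $pdo->beginTransaction();
    try {
        // DELETE (unlike TRUNCATE) works inside a transaction and only
        // needs the DELETE privilege.
        $pdo->exec("DELETE FROM live_database_name.`$table`");
        $pdo->exec("INSERT INTO live_database_name.`$table` SELECT * FROM local_database_name.`$table`");
        $pdo->commit();
    } catch (Exception $e) {
        $pdo->rollBack(); // leave the live table as it was on failure
        throw $e;
    }
}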

If you don't want to set up a replication scheme, why not just use two or three cron jobs: dump local / dump remote / update remote with the local dump. And no, that isn't the best way to do this, but it works...
With just two jobs (dump local / update remote with the local dump), run daily: a 1:00 AM dump and a 1:15 AM update:
0 1 * * * mysqldump --host="localhost" --user="user" --password="password" database_name > backup_name.sql
15 1 * * * mysql --host="remote_host" --user="user" --password="password" --port="3306" database_name < backup_name.sql

Related

Restoring MySQL User Accounts to new Database

I am currently trying to combine two MySQL database installations into a single installation. I have already used a batch script to export each individual database to SQL files so they can be imported into the MySQL instance that is being kept.
The problem is each individual database has a unique user assigned to it which also needs to be brought over. When doing this in the past, I imported the "mysql" database along with the rest, and this caused corruption.
What is the best way to export ONLY the users from the "mysql" database and import them into a different MySQL instance?
Use the --no-create-info option to mysqldump to keep it from dropping and re-creating the user table on the target server.
If you have any overlap in the usernames on the two installations, use the --insert-ignore option so that the duplicates will be skipped when merging.
So the command is:
mysqldump --no-create-info --insert-ignore mysql user > user.sql
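After importing the dump into the target server's mysql schema (e.g. mysql --user=root --password mysql < user.sql), run FLUSH PRIVILEGES so the server reloads the grant tables; rows written directly into mysql.user are not picked up until then.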
If you are using SQLyog, then:
go to the table which you need to export to another host/database,
right-click on the table,
select "Copy Table To Different Host/Database".
Hope it is helpful.

dump selected data from one db to another in mysql

Here's the situation:
I have a MySQL db on a remote server. I need data from 4 of its tables. On occasion, the schema of these tables is changed (new fields are added, but not removed). At the moment, the tables have > 300,000 records.
This data needs to be imported into the localhost MySQL instance. The same 4 tables exist (with the same names), but the fields needed are a subset of the fields in the remote db tables. The data in these local tables is considered read-only and is never written to. Everything needs to be run in a transaction so there is always some data in the local tables, even if it is a day old. The localhost tables are used by an active website, so this entire process needs to complete as quickly as possible to minimize downtime.
This process runs once per day.
The options as I see them:
Get a mysqldump of the structure/data of the remote tables and save it to a file. Drop the localhost tables, and run the dumped SQL script. Then re-create the needed indexes on the 4 tables.
Truncate the localhost tables. Run SELECT queries on the remote db in PHP and retrieve only the fields needed instead of the entire row. Then loop through the results and create INSERT statements from this data.
My questions:
Performance-wise, which is my best option?
Which one will complete the fastest?
Will either one put a heavier load on the server?
Would indexing the tables take the same amount of time in both options?
If there is no good reason for having the local DB be a subset of the remote one, make the structure the same and enable database replication on the needed tables. Replication works by the master tracking all changes made and managing each slave DB's pointer into the change log. Each slave says: give me all changes since my last request. For a sizeable database, this is far more efficient than either alternative you have proposed, and it comes at only modest cost.
As for schema changes, I think the ALTER information is logged by the master, so the slave(s) can replicate those as well. The mechanism definitely replicates DROP TABLE ... IF EXISTS and CREATE TABLE ... SELECT, so ALTER logically should follow, but I have not tried it.
Here it is: confirmation that ALTER is properly replicated.
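For reference, a minimal sketch of the option files involved, assuming a database named app_db and placeholder table names (the slave also needs a one-time CHANGE MASTER TO ... followed by START SLAVE):

# master my.cnf
[mysqld]
server-id = 1
log-bin = mysql-bin

# slave my.cnf
[mysqld]
server-id = 2
replicate-do-table = app_db.items
replicate-do-table = app_db.categories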

PHP / MySQL Conceptual Database 'Sync' question

I am working on a PHP class implementing PDO to sync a local database's table with a remote one.
The Question
I am looking for some ideas / methods / suggestions on how to implement a 'backup' feature in my 'syncing' process. The idea is: before the actual insert of the data takes place, I do a full wipe of the local table's data. Time is not a factor, so I figure this is the cleanest and simplest solution and I won't have to worry about checking for differences in the data and all that jazz. The problem is, I want to implement some kind of safety measure in case there is a problem during the insert of data, like loss of internet connection or something. The only idea I have so far is: copy said table to be synced -> wipe said table -> insert remote table's data into local table -> if successful, delete backup copy.
Check out mk-table-sync. It compares two tables on different servers, using checksums of chunks of rows. If a given chunk is identical between the two servers, no copying is needed. If the chunk differs, it copies just the chunk it needs. You don't have to wipe the local table.
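mk-table-sync from Maatkit has since been renamed pt-table-sync in Percona Toolkit. A hypothetical invocation syncing one table from the remote server down to the local one (host, database and table names are placeholders) looks like:

pt-table-sync --execute h=remote_host,D=app_db,t=items h=localhost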
Another alternative is to copy the remote data to a distinct table name. If it completes successfully, then DROP the old table and RENAME the new local copy to the original table's name. If the copy fails or is interrupted, then drop the local copy with the distinct name and try again. Meanwhile, your other local table with the previous data is untouched.
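A minimal PDO sketch of that approach, with placeholder names, leaning on the fact that a multi-table RENAME TABLE in MySQL is atomic, so readers never see an empty table:

<?php
// Hypothetical sketch of the copy-then-swap approach; all names are placeholders.
$local = new PDO('mysql:host=localhost;dbname=app_db', 'user', 'password');
$local->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$local->exec("DROP TABLE IF EXISTS items_new");
$local->exec("CREATE TABLE items_new LIKE items");

// ... copy the remote rows into items_new here; if this step fails,
// just DROP items_new and retry -- the live `items` table is untouched ...

// Both renames happen atomically in one statement.
$local->exec("RENAME TABLE items TO items_old, items_new TO items");
$local->exec("DROP TABLE items_old");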
The following is a web tool that syncs a database between you and the server or other developers.
It is Git-based, so you should already be using Git in your project.
But it is only helpful while developing an application; it is not a tool for comparing databases.
To sync databases, you regularly push code to Git.
Git Project : https://github.com/hardeepvicky/DB-Sync

How to convert query from phpMyAdmin SQL Dump to an SQL Server legible query

I have undertaken a small project which already involved an existing database. The application was written in PHP and the database was MySQL.
I am rewriting the application, yet I still need to maintain the database's structure as well as its data. I have received an SQL dump file. When I try running it in SQL Server Management Studio I receive many errors. I wanted to know what workaround there is to convert the SQL script from the phpMyAdmin dump file to T-SQL?
Any ideas?
phpMyAdmin is a front-end for MySQL databases. Dumping databases can be done in various formats, including SQL script code, but I guess your problem is that you are using SQL Server, and T-SQL is different from MySQL's dialect.
EDIT: I see the original poster was aware of that (there was no MySQL tag on the post). My suggestion would be to re-dump the database in CSV format (for example) and to import it via BULK INSERT; for example, for a single table:
CREATE TABLE MySQLData [...]

BULK INSERT MySQLData
FROM 'c:\mysqldata.txt'
WITH
(
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n'
)
GO
This should work fine if the database isn't too large and has only few tables.
You do have more problems than making a script run, by the way: mapping of data types is definitely not easy.
Here is an article about migrating MySQL -> SQL Server via the DTS Import/Export wizard, which may well be a good way to go if your database is large (and you still have access to it, i.e. you don't only have the dump).
The syntax between T-SQL and MySQL is not a million miles off; you could probably rewrite it through trial and error and a series of find-and-replaces.
A better option would probably be to install MySQL and MySQL Connector, and restore the database using the dump file.
You could then create a Linked Server on the SQL Server instance and run a series of queries like the following:
SELECT *
INTO SQLTableName
FROM OPENQUERY
(LinkedServerName, 'SELECT * FROM MySqlTableName')
MySQL's mysqldump utility can produce somewhat compatible dumps for other systems. For instance, use --compatible=mssql. This option does not guarantee compatibility with other servers, but it might prevent most errors, leaving less for you to alter manually.
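For example, with a placeholder database name: mysqldump --compatible=mssql database_name > dump.sql. Note that mysqldump in MySQL 8.0 only accepts --compatible=ansi, so this particular value needs an older client.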

SQL/PHP: How to upload big database to server when I have import file size limit? And then update

I'm creating a big database locally using MySQL and phpMyAdmin, and I'm constantly adding a lot of info to it. I now have more than 10MB of data and I want to export the database to the server, but I have a 10MB file size limit in the Import section of my web host's phpMyAdmin.
So, the first question is: how can I split the data (or something like that) to be able to import it?
BUT, because I'm constantly adding new data locally, I also need to export the new data to the web host database.
So the second question is: how do I update the database if the new data added is in between all the 'old/already uploaded' data?
Don't use phpMyAdmin to import large files. You'll be way better off using the mysql CLI to import a dump of your DB. Importing is very easy: transfer the SQL file to the server and afterwards execute the following on the server (you can launch this command from a PHP script using shell_exec or system if needed):
mysql --user=user --password=password database < database_dump.sql
Of course the database has to exist, and the user you provide should have the necessary privileges to update it.
As for syncing changes: that can be very difficult and depends on a lot of factors. Are you the only party providing new information, or are others adding new records as well? Are you going to modify the table structure over time as well?
If you're the only one adding data and the table structure doesn't vary, then you could use a boolean flag or a timestamp to determine the records that need to be transferred (see the example queries below). Based on that field you could create partial dumps with phpMyAdmin (by writing a SQL command and clicking Export at the bottom, making sure you only export the data) and import these as described above.
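For instance, assuming a hypothetical synced flag column, the partial export query could be as simple as:

SELECT * FROM items WHERE synced = 0

and after a successful import on the host you would mark those rows locally:

UPDATE items SET synced = 1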
BTW, you could also look into setting up a master-slave scenario with MySQL, where your data is transferred automatically to the other server (just another option, which might be better depending on your specific needs). For more information, refer to the Replication chapter in the MySQL manual.
What I would do, in 3 steps:
Step 1:
Export your DB structure, without content. This is easy to manage on the export page of phpMyAdmin. After that, I'd insert that structure into the new DB.
Step 2:
Add a new BOOL column to every table in your local DB. Its function is to store whether a row is new or not. Because of this, set the default to true.
Step 3:
Create a PHP script which connects to both databases. The script needs to get the data from your local database and put it into the new one.
I would do this with the following MySQL statements: SHOW TABLES (http://dev.mysql.com/doc/refman/5.0/en/show-tables.html), DESCRIBE (http://dev.mysql.com/doc/refman/5.0/en/describe.html), plus SELECT, UPDATE and INSERT.
Then you have to run this script every time you want to sync your local PC with the server; a rough sketch follows below.
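A rough, hypothetical sketch of such a script, assuming the flag column from step 2 is named is_new (all table names and credentials are placeholders):

<?php
// Hypothetical sketch of step 3: push rows flagged as new from local to live.
$local = new PDO('mysql:host=localhost;dbname=local_db', 'user', 'password');
$live  = new PDO('mysql:host=remote_host;dbname=live_db', 'user', 'password');
$local->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$live->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// SHOW TABLES lists every table we need to walk through.
$tables = $local->query('SHOW TABLES')->fetchAll(PDO::FETCH_COLUMN);

foreach ($tables as $table) {
    // Fetch only the rows flagged as new (the BOOL column from step 2).
    $rows = $local->query("SELECT * FROM `$table` WHERE is_new = 1")
                  ->fetchAll(PDO::FETCH_ASSOC);
    if (!$rows) {
        continue;
    }

    // Build an INSERT from the row's columns, skipping the flag itself
    // (DESCRIBE would give the same column list).
    $cols  = array_values(array_diff(array_keys($rows[0]), ['is_new']));
    $list  = '`' . implode('`, `', $cols) . '`';
    $marks = implode(', ', array_fill(0, count($cols), '?'));
    $stmt  = $live->prepare("INSERT INTO `$table` ($list) VALUES ($marks)");

    foreach ($rows as $row) {
        unset($row['is_new']);
        $stmt->execute(array_values($row));
    }

    // Mark the transferred rows as no longer new.
    $local->exec("UPDATE `$table` SET is_new = 0 WHERE is_new = 1");
}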
