Copying a MySQL/Debian 8 database to a different platform - php

I found variations on how to make copies of a DB, and fill in tables, within the same platform, but I want to port to a different platform.
I tried archiving the folder with the same name as my DB from /var/lib/mysql, changing the folder name, then extracting it back to /var/lib/mysql.
When I fired up phpMyAdmin, it showed the new database name, but no tables.
I expect the same thing to happen when I extract on the new platform, but there I won't be able to use the Insert from a local copy.
What directories and/or files does MySQL on Debian 8 need tweaked to let the tables show?

Related

How to merge local and live databases?

We've been developing for WordPress for several years and, whilst our workflow has been upgraded at several points, there's one thing that we've never solved... merging a local WordPress database with a live database.
So I'm talking about having a local version of the site where files and data are changed, whilst the data on the live site is also changing at the same time.
All I can find is the perfect-world scenario of pulling the site down, nobody (even customers) touching the live site, then pushing the local site back up. I.e. copying one thing over the other.
How can this be done without running a tonne of MySQL commands? (It feels like they could fall over if they're not properly checked!) Can this be done via Gulp (I've seen it mentioned) or a plugin?
Just to be clear, I'm not talking about pushing/pulling data back and forth via something like WP Migrate DB Pro, BackupBuddy or anything similar - this is a merge, not replacing one database with another.
I would love to know how other developers get around this!
File changes are fairly simple to get around, it's when there's data changes that it causes the nightmare.
WP Stagecoach does do a merge but you can't work locally, it creates a staging site from the live site that you're supposed to work on. The merge works great but it's a killer blow not to be able to work locally.
I've also been told by the developers that datahawk.io will do what I want but there's no release date on that.
It sounds like VersionPress might do what you need:
VersionPress staging
A couple of caveats: I haven't used it, so can't vouch for its effectiveness; and it's currently in early access.
Important: Take a backup of the Live database before merging Local data into it.
Following these steps might help in migrating a large percentage of the data and merging it into live:
Go to the WP back-end of the Local site: Tools -> Export.
Select the "All content" radio button (if not selected by default).
This will produce an XML file containing all the local data, comprising all default post types and custom post types.
Open this XML file in Notepad++ or any editor, then find and replace the Local URL with the Live URL (a minimal sketch of this replacement follows these steps).
Now visit the Live site and Import the XML under Tools->Import.
Upload the files (images) manually.
This will bring a large percentage of the data from Local to Live.
For the rest of the data you will have to write custom scripts.
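A minimal sketch of that find-and-replace done with a few lines of PHP instead of an editor; the file names and URLs below are placeholders, not your real values:

<?php
// Hypothetical file names and URLs -- adjust them to your own export.
$xml = file_get_contents('local-export.xml');
$xml = str_replace('http://local.example.test', 'https://www.example.com', $xml);
file_put_contents('live-import.xml', $xml);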
Risk factors are:
When uploading the images from Local to Live, images with the same name will be overwritten.
WordPress saves image information in post_meta as serialized data, which has to be taken care of when uploading the database (see the sketch after this list).
Serialized data in post_meta for post_type="attachment" stores serialized data for the 3 or 4 generated sizes of each image.
Usernames or email IDs of users can collide when importing the data; WordPress checks for unique usernames and emails, so those users will not be imported (might be possible).
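About that serialized post_meta: a plain find-and-replace breaks the s:NN length prefixes, so a safer route is to unserialize, rewrite and re-serialize. A minimal sketch, with assumed DSN, credentials and URLs:

<?php
// Assumed connection details and URLs -- adjust to your own setup.
$pdo  = new PDO('mysql:host=localhost;dbname=wordpress;charset=utf8mb4', 'dbuser', 'dbpass');
$from = 'http://local.example.test';
$to   = 'https://www.example.com';

$rows = $pdo->query("SELECT meta_id, meta_value FROM wp_postmeta WHERE meta_key = '_wp_attachment_metadata'");
$save = $pdo->prepare("UPDATE wp_postmeta SET meta_value = ? WHERE meta_id = ?");

foreach ($rows as $row) {
    $data = @unserialize($row['meta_value']);
    if (!is_array($data)) {
        continue; // not serialized (or corrupt) -- leave it alone
    }
    // Replace the URL in every string, then re-serialize so the
    // s:NN length prefixes are recalculated correctly.
    array_walk_recursive($data, function (&$value) use ($from, $to) {
        if (is_string($value)) {
            $value = str_replace($from, $to, $value);
        }
    });
    $save->execute([serialize($data), $row['meta_id']]);
}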
If I were you I'd do the following (slow but affords you the greatest chance of success)
First off, set up a third database somewhere. Cloud services would probably be ideal, since you could get a powerful server with an SSD for a couple of hours. You'll need that horsepower.
Second, we're going to mysqldump the first DB and pipe the output into our cloud DB.
mysqldump -u user -ppassword dbname | mysql -u root -ppass -h somecloud.db.internet dbname
Now we have a full copy of DB #1. If your cloud supports snapshotting data, be sure to take one now.
The last step is to write a PHP script that, slowly but surely, selects the data from the second DB and writes it to the third. We want to do this one record at a time. Why? Well, we need to maintain the relationships between records. So let's take comments and posts. When we pull post #1 from DB #2 it won't be able to keep record #1 because DB #1 already had one. So now post #1 becomes post #132. That means that all the comments for post #1 now need to be written as belonging to post #132. You'll also have to pull the records for the users who made those posts, because their user IDs will also change.
There's no easy fix for this but the WP structure isn't terribly complex. Building a simple loop to pull the data and translate it shouldn't be more than a couple of hours of work.
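A minimal sketch of that loop for posts and their comments, assuming stock WP table and column names (wp_posts, wp_comments) and placeholder PDO connection details; it only illustrates the id-remapping idea, not a complete migration:

<?php
// Source = DB #2, target = the merged copy of DB #1 in the cloud. DSNs and credentials are placeholders.
$src = new PDO('mysql:host=localhost;dbname=site_two;charset=utf8mb4', 'user', 'pass');
$dst = new PDO('mysql:host=somecloud.db.internet;dbname=merged;charset=utf8mb4', 'root', 'pass');
$src->setAttribute(PDO::ATTR_DEFAULT_FETCH_MODE, PDO::FETCH_ASSOC);

// Insert a row and let auto_increment hand out the new primary key.
function insert_row(PDO $db, $table, array $row) {
    $cols = array_keys($row);
    $sql  = sprintf('INSERT INTO %s (%s) VALUES (%s)', $table,
        implode(',', $cols), implode(',', array_fill(0, count($cols), '?')));
    $db->prepare($sql)->execute(array_values($row));
    return (int) $db->lastInsertId();
}

$postIdMap = []; // old post ID in DB #2 => new post ID in the merged DB

// Copy posts one record at a time, dropping the old primary key.
foreach ($src->query("SELECT * FROM wp_posts WHERE post_type = 'post'") as $post) {
    $oldId = $post['ID'];
    unset($post['ID']);
    $postIdMap[$oldId] = insert_row($dst, 'wp_posts', $post);
}

// Copy comments, rewriting comment_post_ID through the map.
foreach ($src->query("SELECT * FROM wp_comments") as $comment) {
    if (!isset($postIdMap[$comment['comment_post_ID']])) {
        continue; // belongs to a post we did not copy
    }
    unset($comment['comment_ID']);
    $comment['comment_post_ID'] = $postIdMap[$comment['comment_post_ID']];
    insert_row($dst, 'wp_comments', $comment);
}

Users get the same treatment: record a user id map as they are copied, then translate post_author (and the comment author columns) through it.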
If I understand you, to merge local and live databases I have so far been using other software such as Navicat Premium, which has a Data Sync feature.
This can be achieved live using spring-xd: create a JDBC stream to pull data from one DB and insert it into the other. (This acts as streaming, so you don't have to disturb either environment.)
The first thing you need to do is assess whether it would be easier to do some copy-paste data entry instead of a migration script. Sometimes the best answer is to suck it up and do it manually using the CMS interface. This avoids any potential conflicts with merging primary keys, but you may need to watch for references like the creator of a post or similar data.
If it's just outright too much to manually migrate, you're stuck with writing a script or finding one that is already written for you. Assuming there's nothing out there, here's what you do...
ALWAYS MAKE A BACKUP BEFORE RUNNING MIGRATIONS!
1) Make a list of what you need to transfer. Do you need users, posts, etc.? Find the database tables and add them to the list.
2) Make a note of all possible foreign keys in the database tables being merged into the new database. For example, wp_posts has post_author referencing wp_users. These will need specific attention during the migration. Use this documentation to help find them.
3) Once you know what tables you need and what they reference, you need to write the script. Start by figuring out what content is new for the other database. The safest way is to do this manually with some kind of side-by-side list. However, you can come up with your own rules on how to automatically match table rows, maybe checking for $post1->post_content === $post2->post_content in cases where the text needs to be the same. The only catch here is that the primary/foreign keys are off limits for these rules.
4) How do you merge new content? The general idea is that all primary keys will need to be changed for any new content. You want to use everything except for the id of the post and insert that into the new database. There will be an auto-increment to create the new id, so you won't need the previous id (unless you want it for script output/debug).
5) The tricky part is handling the foreign keys. This process is going to vary wildly depending on what you plan on migrating. What you need to know is which foreign key goes to which (possibly new) primary key. If you're only migrating posts, you may need to hard-code a user id to user id mapping for the post_author column, then use this to replace the values.
But what if I don't know the user ids for the mapping because some users also need to be migrated?
This is where it gets tricky. You will first need to define the merge rules to see if a user already exists. For new users, you need to record the id of each newly inserted user. Then, after all users are migrated, the post_author value will need to be replaced wherever it references a newly merged user (a sketch of this mapping follows these steps).
6) Write and test the script! Test it on dummy databases first. And again, make backups before using it on your databases!
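A minimal sketch of steps 3-5 for users and posts, assuming stock WP tables, placeholder connection details, and an "email match means same user" rule purely as an example of a merge rule:

<?php
// DSNs, credentials and the email-based merge rule are assumptions for illustration only.
$src = new PDO('mysql:host=localhost;dbname=local_wp;charset=utf8mb4', 'user', 'pass',
    [PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_ASSOC]);
$dst = new PDO('mysql:host=live.example.com;dbname=live_wp;charset=utf8mb4', 'user', 'pass',
    [PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_ASSOC]);

// Small helper: insert a row and return the auto-increment id it was given.
function insert_row(PDO $db, $table, array $row) {
    $cols = array_keys($row);
    $sql  = sprintf('INSERT INTO %s (%s) VALUES (%s)', $table,
        implode(',', $cols), implode(',', array_fill(0, count($cols), '?')));
    $db->prepare($sql)->execute(array_values($row));
    return (int) $db->lastInsertId();
}

$userIdMap = []; // local user ID => live user ID

// Merge users: reuse an existing live user when the email matches,
// otherwise insert the user and record the newly generated ID.
$find = $dst->prepare("SELECT ID FROM wp_users WHERE user_email = ?");
foreach ($src->query("SELECT * FROM wp_users") as $user) {
    $find->execute([$user['user_email']]);
    $existingId = $find->fetchColumn();
    if ($existingId !== false) {
        $userIdMap[$user['ID']] = (int) $existingId;
        continue;
    }
    $oldId = $user['ID'];
    unset($user['ID']);
    $userIdMap[$oldId] = insert_row($dst, 'wp_users', $user);
}

// Migrate posts, translating post_author through the map (step 5).
foreach ($src->query("SELECT * FROM wp_posts WHERE post_type = 'post'") as $post) {
    unset($post['ID']);
    $post['post_author'] = isset($userIdMap[$post['post_author']]) ? $userIdMap[$post['post_author']] : 0;
    insert_row($dst, 'wp_posts', $post);
}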
I've done something similar with an ETL (Extract, Transform, Load) process when I was moving data from one CMS to another.
Rather than writing a script, I used the Pentaho Data Integration (Kettle) tool.
The idea of ETL is pretty much straightforward:
Extract the data (for instance from one database)
Transform it to suit your needs
Load it to the final destination (your second database).
The tool is easy to use and it allows you to experiment with various steps and outputs to investigate the data. When you design the right ETL process, you are ready to merge those databases of yours.
How can this be done without running a tonne of mysql commands?
No way. If both the local and live sites are running at the same time, how can you prevent ending up with the same IDs holding different content?
If you want to do this you can use MySQL replication; I think it will help you merge the different MySQL databases.

Wordpress Woocommerce 2nd sql database connection issue

I have just transferred a backup of a WordPress site to a new domain. I loaded all of the site's files using FTP and I imported the main database (called boxedsco_master.sql) into the new site's database using phpMyAdmin. The new prefix for the databases is "boxeipxy_" rather than "boxedsco_". I also updated wp-config.php to connect to the master database, which worked fine.
However, there was also a second database called boxedsco_boxes.sql, which I also uploaded using phpMyAdmin, and which is now called "boxeipxy_boxes.sql".
The site is mainly working; however, there are no notification emails when an order is placed, either to the customer or to my client. I believe this could be because the second database is not correctly connected, as that database contains tables with names like "wp_easycontactforms_customforms".
How do I connect this second database? Would there be a PHP file to update with the new database name? I can't find a PHP file that references "boxedsco_boxes.sql" in order to update it.
Using WordPress 3.8.5 and WooCommerce 2.0.20
No wonder no-one responded! There should not be 2 databases! I thought it was very strange!
The email notifications issue was a bug with WooCommerce not liking the send-from address! Changing it to "wordpress@[yourdomain].com" fixed the issue completely!
Thanks anyway folks!
The fact that you have two files with .sql suffixes doesn't mean there are two databases. Crack open the files and look inside. They are most likely two dumps from the same database; have a look at whether or not they contain the same data and/or tables.
As the philosopher says, "a poem about the moon is not the moon". An SQL dump isn't a database, it's a "representation" of a database. Most likely you have two poems about the same moon.

How to export database to another host which doesn't allow to create a new database with my old database name?

I want to create a copy of my site on another host. I'm under free plan on both the hosts.
The new host to which I'm trying to copy my site only allows creating a new database with a name of the form xyzhost_id_*, but my previous database name is abchostid_*.
There are 40+ tables in my abchostid_* database.
What shall I do?
If you perform a mysqldump there is usually no database information in the file, just the table schema and data. If the statements to create the database are there, just open your SQL file and remove them. They will look something like "CREATE DATABASE your_database" and "USE your_database". Then, using phpMyAdmin, you can simply create the new database and load your SQL file.
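As a rough illustration (file names assumed), a few lines of PHP can strip those statements out before you load the dump into the differently named database:

<?php
// Assumed file names: copy dump.sql to dump-clean.sql, skipping any
// CREATE DATABASE / USE statements tied to the old database name.
$in  = fopen('dump.sql', 'r');
$out = fopen('dump-clean.sql', 'w');
while (($line = fgets($in)) !== false) {
    if (preg_match('/^\s*(CREATE DATABASE|USE )/i', $line)) {
        continue;
    }
    fwrite($out, $line);
}
fclose($in);
fclose($out);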
Basically you can treat "xyzhost_id_" and "abchostid_" as table prefixes. Go ahead and rename your tables and make sure that your scripts will have the new prefix in queries.

PHP / MySQL Conceptual Database 'Sync' question

I am working on a PHP class implementing PDO to sync a local database's table with a remote one.
The Question
I am looking for some ideas / methods / suggestions on how to implement a 'backup' feature in my 'syncing' process. The idea is: before the actual insert of the data takes place, I do a full wipe of the local table's data. Time is not a factor, so I figure this is the cleanest and simplest solution and I won't have to worry about checking for differences in the data and all that jazz. The problem is, I want to implement some kind of security measure in case there is a problem during the insert of data, like loss of internet connection or something. The only idea I have so far is: copy said table to be synced -> wipe said table -> insert remote table's data into local table -> if successful, delete backup copy.
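A minimal sketch of that copy -> wipe -> insert -> cleanup flow, with an assumed table name (items), placeholder credentials, and a hypothetical insert_remote_rows() standing in for the actual sync insert:

<?php
// Table name, credentials and insert_remote_rows() are assumptions for illustration.
$pdo = new PDO('mysql:host=localhost;dbname=local_db;charset=utf8mb4', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$pdo->exec("DROP TABLE IF EXISTS items_backup");
$pdo->exec("CREATE TABLE items_backup LIKE items");
$pdo->exec("INSERT INTO items_backup SELECT * FROM items");

try {
    $pdo->exec("TRUNCATE TABLE items");
    insert_remote_rows($pdo);               // hypothetical: pulls rows from the remote DB and inserts them
    $pdo->exec("DROP TABLE items_backup");  // success: the safety copy is no longer needed
} catch (Exception $e) {
    // Something went wrong mid-sync: restore the previous data from the backup copy.
    $pdo->exec("TRUNCATE TABLE items");
    $pdo->exec("INSERT INTO items SELECT * FROM items_backup");
    throw $e;
}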
Check out mk-table-sync. It compares two tables on different servers, using checksums of chunks of rows. If a given chunk is identical between the two servers, no copying is needed. If the chunk differs, it copies just the chunk it needs. You don't have to wipe the local table.
Another alternative is to copy the remote data to a distinct table name. If it completes successfully, then DROP the old table and RENAME the new local copy to the original table's name. If the copy fails or is interrupted, then drop the local copy with the distinct name and try again. Meanwhile, your other local table with the previous data is untouched.
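A rough sketch of that alternative, again with an assumed table name and credentials and a hypothetical copy_remote_rows_into() doing the actual copy; RENAME TABLE swaps both names in one atomic statement, so readers never see a half-filled table:

<?php
// Table name, credentials and copy_remote_rows_into() are assumptions for illustration.
$pdo = new PDO('mysql:host=localhost;dbname=local_db;charset=utf8mb4', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$pdo->exec("DROP TABLE IF EXISTS items_new");
$pdo->exec("CREATE TABLE items_new LIKE items");

try {
    copy_remote_rows_into($pdo, 'items_new');   // hypothetical: fills items_new from the remote DB
    $pdo->exec("RENAME TABLE items TO items_old, items_new TO items");
    $pdo->exec("DROP TABLE items_old");
} catch (Exception $e) {
    $pdo->exec("DROP TABLE IF EXISTS items_new"); // interrupted: throw the partial copy away
    throw $e;                                     // the original items table was never touched
}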
The following is a web tool that syncs databases between you and a server or another developer.
It is Git-based, so you should be using Git in your project.
But it is only helpful while developing an application; it is not a tool for comparing databases.
To sync databases, you regularly push code to Git.
Git Project : https://github.com/hardeepvicky/DB-Sync

Creating Tables at runtime vs Creating Databases at runtime

I am building a customer sales and invoicing app for a company. The app is in PHP/MySQL, but I guess that shouldn't matter much.
The app structure is as follows:
website files: .php, .htm, images and CSS
database: containing 20+ tables
The app is currently being used by the company and 2 other sister concerns (beta testing mostly). Since the user base is small, I manually copy the website files and the database to set the app up for usage by a new company.
I am looking for a way to make the app more 'scalable' without having to manually do the 'scaling' (meaning I don't want to manage three different filesets and DBs manually).
Since the code is company-neutral and the databases contain the company info, I will only have to recreate the database when a user requests a new company to be set up. There are multiple ways that I can create the database for a new company.
At runtime I can create a new database with the 20+ tables using CREATE DATABASE
At runtime I can create an additional 20+ tables with the company name as a prefix for the tables, using CREATE TABLE
I can add a company column to all of my tables and then continue adding info as before.
The new database method appeals to me because backup and maintenance would be easy, and it would probably be a bit more secure since a hacker will only be able to access the details of one company (probably...). This option won't work on shared hosting with a limit on the number of databases.
The second option would mean I can create everything in one database. But this option is a bit more 'shared'.
I wouldn't go for the third option due to table-level locking issues in MySQL (I am not using InnoDB for all my tables).
So my choices are between options 1 and 2. Developers who've managed financial apps, please advise: once the beta testing phase is done, the user base will increase, and I don't wish to manually change the same thing in 10 databases and filesets. What will be the best thing to do?
From the security point of view, customers should have separate databases, each with access restricted to its own MySQL user.
That user should only have the permissions needed by the application (often SELECT, INSERT, DELETE and UPDATE), and not administrative permissions (DROP, CREATE, GRANT, ...). In this way, you have a clear overview of databases and tables.
When you need to alter a table structure, you just execute the (thoroughly tested) SQL query on your database.
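A minimal sketch of provisioning one such database and restricted user at setup time (database/user names, host and passwords are placeholders, and the company identifier must be a vetted value, never raw user input):

<?php
// Run with an administrative connection; all names and passwords below are placeholders.
$admin   = new PDO('mysql:host=localhost;charset=utf8mb4', 'root', 'rootpass');
$company = 'acme'; // must be a vetted identifier, never raw user input

$admin->exec("CREATE DATABASE app_{$company} CHARACTER SET utf8mb4");
$admin->exec("CREATE USER 'app_{$company}'@'localhost' IDENTIFIED BY 'generated-password'");
// Only the day-to-day permissions; no DROP / CREATE / GRANT for the application user.
$admin->exec("GRANT SELECT, INSERT, UPDATE, DELETE ON app_{$company}.* TO 'app_{$company}'@'localhost'");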
CSS, images and other static content could be put on a subdomain, or behind an Alias (Apache).
Libraries and neutral classes should be put in one directory too, using include_path to include such a file, so you have only one fileset that needs to be changed.
