We are working on a CRM site and now have to move it from one server to another. Moving the files is not a problem, since we work on them and know every change and update in them. But we also have to move the DB from one server to the other without losing a single record. Our plan was to have the new server access the old server's DB while we transfer the DB from the old server to the new one. The problem is that during this process users may still be inserting records into the old DB, and we would lose that data because the dump has already been taken. So kindly suggest the best way to do the DB transfer without losing a single record.
We are thinking of capturing the records inserted between the time of the dump and the switch to the new server. Is that possible? If yes, kindly suggest how.
Allow updates on both servers during the transition, i.e., let the old DB server keep taking updates. Only migrate the DB data once the DNS migration has finished. There's still likely to be some downtime, but scripts can speed this up.
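One way to capture those in-between records, assuming the tables have an auto-increment primary key (a hypothetical `orders` table is used below; an updated-at timestamp column would be needed instead if existing rows can change): after restoring the dump on the new server, copy over every row whose id is higher than the newest one the dump contained. A minimal PHP/PDO sketch:

```php
<?php
// Hypothetical connection details and table/column names; adjust to your schema.
$old = new PDO('mysql:host=old-server;dbname=crm', 'user', 'pass');
$new = new PDO('mysql:host=new-server;dbname=crm', 'user', 'pass');

// Find the highest id that made it into the dump restored on the new server.
$lastId = (int) $new->query('SELECT COALESCE(MAX(id), 0) FROM orders')->fetchColumn();

// Copy everything inserted on the old server after the dump was taken.
$select = $old->prepare('SELECT id, customer, amount FROM orders WHERE id > ?');
$select->execute([$lastId]);

$insert = $new->prepare('INSERT INTO orders (id, customer, amount) VALUES (?, ?, ?)');

$new->beginTransaction();
while ($row = $select->fetch(PDO::FETCH_NUM)) {
    $insert->execute($row);
}
$new->commit();
```

Run it once with the old server briefly switched to read-only and the two databases end up identical, with only a minute or two of write downtime.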
I have a website where I query one of our external SQL servers and insert the records into the local server of the website.
I'm simply connecting to the external database, querying the table, truncating the local table, and running a foreach to insert data into the local table.
The process works fine, the problem is that it takes a long time.
I just want to see if you guys could give me some hints on how to speed up the process. If there is another way to do this, please let me know.
There are so many factors in determining the best approach. Is the database you are copying to supposed to always be identical to the source, or will it have entries that are not in the source? If you just want them to be identical, and the website database is simply a read-only clone, SQL Server gives you a bunch of ways to do this: replication, log shipping, mirroring, SSIS packages. It all depends on how frequently you want to synchronize the databases, and a lot of other factors.
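If you stay with the truncate-and-reload approach, the usual quick win is batching the inserts instead of issuing one statement per row. A sketch using PHP/PDO with hypothetical DSNs, credentials, and an `items` table; the batching pattern is the point, not the schema:

```php
<?php
// Hypothetical DSNs and table; adjust driver and names to your setup.
$src = new PDO('sqlsrv:Server=external-host;Database=corp', 'user', 'pass');
$dst = new PDO('mysql:host=localhost;dbname=site', 'user', 'pass');

// Pull everything once. For millions of rows you would page through instead.
$rows = $src->query('SELECT id, name, price FROM items')->fetchAll(PDO::FETCH_NUM);

$dst->exec('TRUNCATE TABLE items');   // as in the original approach

// 500 rows per INSERT instead of one statement per row: far fewer round trips.
$dst->beginTransaction();
foreach (array_chunk($rows, 500) as $chunk) {
    $placeholders = implode(',', array_fill(0, count($chunk), '(?,?,?)'));
    $stmt = $dst->prepare("INSERT INTO items (id, name, price) VALUES $placeholders");
    $stmt->execute(array_merge(...$chunk));
}
$dst->commit();
```

Round trips are usually where the time goes. If the stack is actually .NET/SQL Server end to end, a bulk-copy facility or one of the built-in options from the answer above will beat any row-at-a-time loop.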
I'm helping a friend migrate her WordPress server to GoDaddy, and I think I may have bitten off more than I can chew... I've never migrated a WordPress site before. This page here is the WordPress wiki for moving WordPress when your domain isn't changing. It doesn't seem too complex, but I'm terrified of accidentally ruining this website, and I don't understand a couple of things on the wiki.
The Wiki says
If database and URL remains the same, you can move by just copying your files and database.
Does this mean that I can just log in to her server from FileZilla and copy all of the files on the server? What does database mean? Is that something separate from the files on the server?
If database name or user changes, edit wp-config.php to have the correct values.
This sort of goes with my first question. What initiates a database name or user change?
Apologies for my ignorance, but after an hour or so of searching around for these answers I'm left just as confused.
Last but not least, is there anything else I should be aware of when migrating a wordpress? I'm a little nervous..
You are going to need to migrate your installation in two parts.
Part 1 you already alluded to: you will need to copy the files from one server to the other. I am guessing you know how to do this, so I will not dive any deeper into it. If you do need more explanation, please let me know and I will edit this answer.
Part 2 is what you mentioned but said you did not understand: copying the database of the WP install. WordPress runs on PHP and MySQL. The "files" in part 1 are the PHP files (along with some HTML and CSS). You need to log into her MySQL server and do an export of her database. You should be able to export the database (How to export mysql database to another computer?) and import it into her new server on GoDaddy (Error importing SQL dump into MySQL: Unknown database / Can't create database).
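If she has shell access, the classic route is a mysqldump on the old host and an import on the new one. A minimal sketch driven from PHP for consistency with the rest of this thread; the credentials are placeholders, taken from the old site's wp-config.php, and many shared hosts expose the same export/import as buttons in phpMyAdmin instead:

```php
<?php
// Export step, assuming shell access and that mysqldump is installed.
// Run by hand, the two steps are simply:
//   mysqldump --user=USER --password=PASS DBNAME > wordpress.sql   (old host)
//   mysql --user=USER --password=PASS DBNAME < wordpress.sql       (new host)

$user = 'wp_user';      // placeholder: DB_USER from wp-config.php
$pass = 'secret';       // placeholder: DB_PASSWORD
$db   = 'wp_database';  // placeholder: DB_NAME

passthru(sprintf(
    'mysqldump --user=%s --password=%s %s > wordpress.sql',
    escapeshellarg($user), escapeshellarg($pass), escapeshellarg($db)
));
```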
Just take things slow, follow the guides that I have linked and do not delete anything from the first server until everything is working on the second. Please let me know if you do not understand anything.
If you don't feel comfortable with database exports and imports, try using plugins like:
http://wordpress.org/plugins/duplicator/
or
http://wordpress.org/plugins/wordpress-move/
Check their docs for info.
Good luck!
• A database is literally a base of data: it's where websites (and other applications) store their data. For WordPress, that would be data such as posts, user information, etc.
If you are using a cPanel setup, you would need to get access to it and navigate to phpMyAdmin, which is the GUI for managing a database.
Now, I'm not sure what type of setup you're using, but that should be a start.
• A database has a connection server address (usually localhost), a database name, a username, and a password. These are set up at the time the database is created.
When migrating servers, you need to update those details in the wp-config.php file (around line 19 or so); see the excerpt after this list.
• The annoying part about migrating WordPress to another server is usually a domain change, since you have to replace the old domain with the new one throughout the database. But since you're not changing domain names, it should be a smooth ride as long as the new server supports PHP and has a database.
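For reference, the relevant wp-config.php lines look like the excerpt below; the values shown are placeholders, and the new host's control panel tells you what each one should be:

```php
<?php
// wp-config.php (excerpt): the four settings to check after moving hosts.
// The values below are placeholders, not real credentials.
define('DB_NAME',     'new_database_name');  // database name on the new host
define('DB_USER',     'new_database_user');  // MySQL username
define('DB_PASSWORD', 'new_password');       // MySQL password
define('DB_HOST',     'localhost');          // usually localhost; GoDaddy may give a hostname
```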
We are developing a web application using MySQL and PHP.
In our application, we need to sync our local MySQL database with a remote MySQL database (running on a different host) based on a trigger from the user interface.
The user trigger is a webpage: when the user clicks a button, a PHP server script fires and should perform this synchronization in the background.
We planned to do it in a simple way, by opening DB connections to the remote and local databases and inserting the rows one at a time. But the remote DB table can be very large (as big as a few million entries), so we need a more efficient solution.
Can someone help us with an SQL query / PHP approach that can do this DB sync efficiently, without burdening the remote DB too much?
Thanks in advance
Shyam
UPDATE
The remote DB is not under my control, so I cannot configure it as a master or change any other settings on it. That is one major limitation, and it is why I want to do this programmatically in PHP. Another option I have is to read blocks of 1000 rows from the remote DB and insert them into the local DB (sketched below). But I wanted to know if there is a better way.
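For the record, the block-copy fallback could look like this. A sketch with hypothetical table and column names (`events`, `id`, `payload`); keyset pagination on the primary key keeps each query cheap on the remote side, unlike a growing OFFSET:

```php
<?php
// Hypothetical connections and schema; the paging pattern is what matters.
$remote = new PDO('mysql:host=remote-host;dbname=app', 'user', 'pass');
$local  = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

$select = $remote->prepare(
    'SELECT id, payload FROM events WHERE id > ? ORDER BY id LIMIT 1000'
);
$insert = $local->prepare('INSERT IGNORE INTO events (id, payload) VALUES (?, ?)');

// Resume from the newest row already copied.
$lastId = (int) $local->query('SELECT COALESCE(MAX(id), 0) FROM events')->fetchColumn();

// Pull 1000 rows at a time, keyed on the last id seen, until none remain.
do {
    $select->execute([$lastId]);
    $rows = $select->fetchAll(PDO::FETCH_NUM);

    $local->beginTransaction();
    foreach ($rows as $row) {
        $insert->execute($row);
        $lastId = (int) $row[0];
    }
    $local->commit();
} while (count($rows) === 1000);
```

This only picks up inserts; catching updates and deletes needs timestamps or tombstone rows, which is why the answer below points at replication instead.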
You shouldn't concern yourself with MySQL replication from an application layer when the data layer has this functionality built-in. Please read up on "Master/Slave replication with MySQL". https://www.digitalocean.com/community/articles/how-to-set-up-master-slave-replication-in-mysql is a starting point.
The cost of replication is minimal as far as the Master is concerned. It basically works like this:
The Master logs all (relevant) activity into a flat log file
The Slave downloads this log every now and then and replays the binary log locally
Therefore, the impact on the Master is only writing linear data to a log file, plus a little bit of bandwidth. If you still worry about the impact on the Master, you could throttle the link between Master and Slave (at the system level), or just open the link during low activity times (by issuing STOP/START SLAVE commands at appropriate times).
I should also mention that built-in replication takes place at a low level inside the MySQL engine. I do not think you can achieve better performance with an external process. If you want to fully synchronise your local database when you hit this "Synchronise" button, then look no further.
If you can live with partial synchronisation, then you could have this button resume replication for a short timeframe (e.g. START SLAVE for 10 seconds, then STOP SLAVE again automatically; the user needs to click again to get more data synchronised).
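A minimal sketch of that button handler, assuming the local server has already been configured as a slave and the connecting MySQL user is allowed to control replication (hypothetical credentials):

```php
<?php
// Fired by the "Synchronise" button: let replication run briefly, then stop.
$local = new PDO('mysql:host=localhost', 'admin', 'pass');

$local->exec('START SLAVE');
sleep(10);                      // let the slave replay ~10 seconds of binlog
$local->exec('STOP SLAVE');

echo "Synchronisation window finished.\n";
```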
So the scenario is this:
I have a MySQL database on a local server running Windows Server 2008. That server is only meant to be accessible to users on our network and contains our company's production schedule information. I have what is essentially the same database running on a hosted Linux server, which is meant to be accessible online so our customers can connect to it and update their orders.
What I want to do is a two-way sync of two tables in the database, so that the orders are current in both databases, and a one-way sync from our server to the hosted one for the data in the other tables. The front end to the database is written in PHP. I will describe what I have so far, and I would appreciate it if people could let me know whether I am on the right track or barking up the wrong tree, and hopefully point me in the right direction.
My first idea is to make (at the end of the PHP scripts that generate changes to the orders tables) an export of the changes that have been made, perhaps using SELECT ... INTO OUTFILE WHERE account = account or something similar. This would keep the file small, rather than exporting the entire orders table. What I am hung up on is how to (A) export this as an SQL file rather than a CSV, (B) include information about what has been deleted as well as what has been inserted, and (C) fetch this file on the other server and execute the SQL statements.
I am currently looking into SSH and PowerShell but can't seem to formulate a solid vision of exactly how this will work. I am looking into cron jobs and Windows scheduled tasks as well. Ideally the updates would simply occur whenever there was a change, rather than on a schedule, to keep the two in sync in near real time, but I can't quite figure that out. I'd want the scheduled task/cron job running at least once every few minutes, though all it would really need to do is check whether there were any dump files that needed to be applied to the opposing server, not necessarily sync anything if nothing had changed.
Has anyone ever done something like this? We are talking about changing/adding/removing anywhere from 1 (min) to 160 (max) rows in the tables at a time. I'd love to hear people's thoughts as I continue researching my options. Thanks.
Also, just to clarify: I'm not sure either one of these is really a master or a slave. Neither one always holds the authoritative data; it's more that the most recent data needs to end up in both.
One more note
Another thing I am thinking about now is to add, at the end of the order-updating script on one side, another config/connect script pointing at the other server's database, and then rerun the exact same queries, since the two have identical structures. Now that just sounds too easy... Thoughts?
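For what it's worth, that dual-write idea is only a few lines (hypothetical DSNs and query below), but note the caveat the answers raise: if one server is unreachable mid-write, the two copies silently diverge unless you queue and retry.

```php
<?php
// Dual-write sketch: apply the same order change to both databases.
// If either server is down, the copies drift apart, which is exactly
// the failure mode the answers below warn about.
$sql    = 'UPDATE orders SET qty = ? WHERE id = ? AND account = ?';
$params = [5, 1042, 'ACME'];

foreach (['mysql:host=localhost;dbname=prod',
          'mysql:host=hosted.example.com;dbname=prod'] as $dsn) {
    $db = new PDO($dsn, 'user', 'pass');
    $db->prepare($sql)->execute($params);
}
```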
You may not be aware that MySQL itself can be configured with databases on separate servers that opportunistically sync to each other. See here for some details; also, search around for MySQL ring replication. The setup is slightly brittle and will require you to learn a bit about MySQL replication. Or you can build a cluster; much higher learning curve but less brittle.
If you really want to roll it yourself, you have quite an adventure in design ahead of you. The biggest problem you have to solve is not how to make it work, it's how to make it work correctly after one of the servers goes down for an hour or your DSL modem melts or a hard drive fills up or...
Running a query against both the local and the remote server at once can be a problem if the connection breaks. It is better to store each query locally in a file, named per hour such as GG-MM-DD-HH.sql, and then send the data once the hour has expired. The update period can be reduced, to 5 minutes for example.
This way, if the connection breaks, the re-established link picks up all the leftover files.
At the end of each file, insert a CRC for checking the content.
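A rough sketch of that file-queue idea in PHP (the path, naming scheme, and line format are placeholders; here the checksum goes in a sidecar file so the data file itself stays directly replayable):

```php
<?php
// Append each replayable statement to the current hour's file; ship sealed
// files to the other server later and replay them there.
function queueStatement(string $sql, array $params): void
{
    $file = sprintf('/var/spool/dbsync/%s.sql', date('Y-m-d-H'));
    // One JSON-encoded [statement, parameters] pair per line.
    file_put_contents($file, json_encode([$sql, $params]) . "\n", FILE_APPEND | LOCK_EX);
}

// Once an hour has closed, write a sidecar checksum so the receiving side
// can verify the transfer before replaying the statements.
function sealFile(string $file): void
{
    file_put_contents($file . '.crc', hash_file('crc32b', $file));
}
```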
I have 2 databases hosted on different servers. What is the best way to copy all the contents of a table in the master database to the corresponding table in the slave database? I am not the owner of the master database, but they are willing to give us access. Previously the data from the master database was output via RSS and my PHP script parsed it and inserted it into the other server where the second database is located, but due to the huge amount of data it takes 24 hours to update and insert, probably because of the overhead between the 2 databases. So what we plan is to create a script that downloads the data from the master database, saves a local copy, then FTPs it to the 2nd server and dumps the contents into the database. Is that advisable, even though the size of the file (either CSV or SQL) is around 30 MB and still growing? What is the best solution for this?
NOTE: all of the scripts, from downloading to FTP to inserting into the 2nd database, are handled by cron for automatic updates.
You really should consider MySQL master-slave replication. This means every insert/update is also applied on the slave server.
The master server needs to be configured to keep a (binary) transaction log, which the slave uses to keep track of updates.
Besides ease of use, replication also keeps the load low, since it is a continuous process.
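For reference, once the master has log-bin enabled, a unique server-id in its my.cnf, and a replication user created, pointing the slave at it is only a few statements. A sketch issued from PHP, with placeholder host, credentials, and binlog coordinates (take the real ones from SHOW MASTER STATUS on the master):

```php
<?php
// One-time slave setup; all values below are placeholders.
$slave = new PDO('mysql:host=localhost', 'admin', 'pass');

$slave->exec("
    CHANGE MASTER TO
        MASTER_HOST     = 'master.example.com',
        MASTER_USER     = 'repl',
        MASTER_PASSWORD = 'repl_password',
        MASTER_LOG_FILE = 'mysql-bin.000001',  -- from SHOW MASTER STATUS
        MASTER_LOG_POS  = 4
");
$slave->exec('START SLAVE');
```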
What type of database are we talking about? Have you looked into replication?