Sync/Transfer DB from Localhost to Remote too slow - php

I have a large database, about 100 GB, whose records need to be synced with a remote database. Each record must be checked against the online DB and updated or inserted if it is missing. I have tried some methods to increase the speed, but the transfer is still far too slow. The methods I tried are:
A simple script that matches each record and uploads it, which is very, very slow.
Generating a MySQL dump, compressing it, and transferring it to the server, where the records are checked and updated. The dump was too big and took far too long to transfer.
Kindly suggest other methods to transfer the DB.

Try MySQLDumper:
http://sourceforge.net/projects/mysqldumper/files/
It can take a MySQL dump and restore it; it's awesome and its speed is good.
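If you still want the record-by-record check, doing it in batches is usually much faster than per-row lookups. A rough PHP sketch using INSERT ... ON DUPLICATE KEY UPDATE; the table, columns, hosts, and credentials are hypothetical, so adjust them to your schema:

<?php
// Hedged sketch: batch-upsert local rows into the remote DB instead of
// checking one record at a time over the network.
$local  = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');
$remote = new PDO('mysql:host=remote.example.com;dbname=mydb', 'user', 'pass');

$lastId = 0;
while (true) {
    // Keyset pagination: much cheaper than OFFSET on a 100 GB table
    $stmt = $local->prepare(
        'SELECT id, name, updated_at FROM records WHERE id > ? ORDER BY id LIMIT 1000');
    $stmt->execute([$lastId]);
    $rows = $stmt->fetchAll(PDO::FETCH_NUM);
    if (!$rows) {
        break;
    }

    // One multi-row statement per batch; the remote side decides
    // insert-vs-update via the primary key.
    $placeholders = implode(',', array_fill(0, count($rows), '(?,?,?)'));
    $upsert = $remote->prepare(
        "INSERT INTO records (id, name, updated_at) VALUES $placeholders
         ON DUPLICATE KEY UPDATE name = VALUES(name), updated_at = VALUES(updated_at)");
    $upsert->execute(array_merge(...$rows));

    $lastId = end($rows)[0];
}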

Related

Android: can't download large database from real website to SQLite database

I am making an app that needs to download data from the web. It converts the data from MySQL -> JSON -> an SQLite database. The download feature worked fine with a dummy database using XAMPP in the emulator. But when I changed the URL, the emulator couldn't get the database from the web. So I tried my phone: the application worked fine when downloading a small database (around 50-100 rows, about 200 KB in total), but it failed when I tried to download a 2 MB database (which has 22,000 rows in the .sql file).
Is there any limit on the size or number of rows an app can download, especially into an SQLite database? Or did I miss something? Also, how can I see the downloaded database on my phone? I already enabled Show Hidden Folders in the MyFiles settings, but I couldn't find my app's package.
Please help me. Thank you.
Android does not have swap/paging memory like a PC. You can only use the physical memory the device has. Given that, the limit is the combined memory occupied by the JSON, your classes, and your database.
You'll have to download N rows and process them in a loop. On the plus side, you could make such an operation restartable and include a nice progress bar for the user to look at while her storage is being eaten up by a large database.
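If you also control the web side, one way to apply this is to page the JSON on the server so the app fetches N rows per request. A rough PHP sketch of such an endpoint; the table, columns, and credentials are hypothetical:

<?php
// Hedged sketch: paginated JSON endpoint so the app can download and
// process N rows per request instead of the whole table at once.
$page    = max(0, (int)($_GET['page'] ?? 0));
$perPage = 500;

$db   = new PDO('mysql:host=localhost;dbname=appdata', 'user', 'pass');
$stmt = $db->prepare('SELECT id, title, body FROM items ORDER BY id LIMIT ? OFFSET ?');
$stmt->bindValue(1, $perPage, PDO::PARAM_INT);
$stmt->bindValue(2, $page * $perPage, PDO::PARAM_INT);
$stmt->execute();

header('Content-Type: application/json');
// An empty array tells the app it has reached the last page.
echo json_encode($stmt->fetchAll(PDO::FETCH_ASSOC));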

Writing a SQLite db loaded using :memory: to disk

I'm generating quite a substantial (~50 MB) SQLite3 database in memory, which I need to write to disk once generation is complete. What is the best way to approach this in PHP?
I have tried creating a structurally identical SQLite3 DB on disk and populating it with INSERTs, but it is far too slow. I have also drawn a blank looking through the online PHP SQLite3 docs.
What I have found is the SQLite3 Backup API, but I'm not sure how best to interface with it from PHP. Any ideas?
The backup API is not available in PHP.
If you wrap all INSERTs into a single transaction, the speed should be OK.
You could avoid the separate temporary database and make the disk database almost as fast by increasing the page cache size to more than 50 MB, disabling journaling, and disabling synchronous writes.
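Sketching both suggestions in PHP, writing straight to the disk database; the schema, row count, and file name are hypothetical:

<?php
// Hedged sketch: skip the :memory: step and build the disk database
// directly, with the pragmas above and all INSERTs in one transaction.
$db = new SQLite3('output.db');
$db->exec('PRAGMA cache_size = -51200');  // ~50 MB page cache (negative = KB)
$db->exec('PRAGMA journal_mode = OFF');   // no rollback journal
$db->exec('PRAGMA synchronous = OFF');    // no fsync per write
$db->exec('CREATE TABLE items (id INTEGER PRIMARY KEY, value TEXT)');

$db->exec('BEGIN');
$stmt = $db->prepare('INSERT INTO items (id, value) VALUES (:id, :value)');
for ($i = 1; $i <= 100000; $i++) {
    $stmt->bindValue(':id', $i, SQLITE3_INTEGER);
    $stmt->bindValue(':value', "row $i", SQLITE3_TEXT);
    $stmt->execute();
    $stmt->reset();
}
$db->exec('COMMIT');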

MySQL replication advice and techniques

I am attempting my first large-scale database project on my own.
I have a MyISAM MySQL DB on server 1 with a PHP app consuming a large amount of various data.
I have MySQL MyISAM on server 2 with a PHP app selecting and displaying data.
I want to replicate this data to server 2.
Questions:
Should I change the server 1 MySQL DB to InnoDB?
Can you replicate from InnoDB on server 1 to MyISAM on server 2?
I'm storing media as BLOBs with the intention of using a cache to offload stress from the live server. Should I use filesystem storage and rsync instead?
Any general advice from more experienced people?
Here's what I suggest based on my experience.
Use the same engine (MyISAM or InnoDB) on both servers.
If you mix both engines, you might get deadlocks, transaction problems, etc., and the time spent fixing them can be painful.
I had problems a little while ago with InnoDB -> MyISAM. Now I use MyISAM on all servers.
For storing media (such as images, video, or documents) you can create an NFS share and mount a folder like /usermedia/ that both servers access.
That way you don't have to rsync every time. In addition, you can save the metadata or media information to the database for reference, including where each file is saved on disk.
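For instance, the mount could look like this in /etc/fstab on each web server; the hostname and export path are placeholders:
mediahost:/export/usermedia  /usermedia  nfs  defaults  0  0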
Note: storing files as BLOBs can be fine depending on the media, but a file of around 1 GB, for example, is probably not a good idea to store in the database.
Use a caching system to retrieve data (such as memcached).
For example, when you request data to display to the user, look in the cache first. If it's not in the cache, query the database, save the result to the cache, and display it.
The next time the same information is requested, it will be served from memory instead of hitting the database server. Avoiding those repeated calls improves performance.
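A rough PHP sketch of that look-aside pattern with the Memcached extension; the table name, key scheme, and credentials are hypothetical:

<?php
// Hedged sketch: check the cache first, fall back to the DB on a miss,
// then populate the cache for subsequent requests.
function getArticle(PDO $db, Memcached $cache, int $id) {
    $key = "article:$id";
    $row = $cache->get($key);
    if ($row !== false) {
        return $row;                  // cache hit: no database call
    }
    $stmt = $db->prepare('SELECT * FROM articles WHERE id = ?');
    $stmt->execute([$id]);
    $row = $stmt->fetch(PDO::FETCH_ASSOC);
    $cache->set($key, $row, 300);     // keep for 5 minutes
    return $row;
}

$db    = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$cache = new Memcached();
$cache->addServer('127.0.0.1', 11211);
print_r(getArticle($db, $cache, 42));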
Let me know if you need additional help.
I would recommend InnoDB (for transactions and row-level locking instead of table locking) and Redis for caching; it is very fast and efficient.

Sync large local DB with server DB (MySQL)

I need to sync a large (3 GB+ / 40+ tables) local MySQL database to a server database weekly.
The two databases are exactly the same. The local DB is constantly updated, and every week or so the server DB needs to be updated with the local data. You could call it a 'mirrored DB' or 'master/master', but I'm not sure if that is correct.
Right now the DB only exists locally. So:
1) First I need to copy the DB from local to the server. Export/import with phpMyAdmin is impossible because of the DB size and phpMyAdmin's limits. Exporting the DB to a gzipped file and uploading it through FTP will probably break in the middle of the transfer because of connection problems or the server's file size limit. Exporting each table separately would be a pain, and each table would still be very big. So, what is the better solution for this?
2) After the local DB is fully uploaded to the server, I need to update the server DB weekly. What is the best way to do it?
I have never worked with this kind of scenario, I don't know the different ways of achieving it, and I'm not particularly strong with SQL, so please explain as clearly as possible.
Thank you very much.
This article should get you started.
Basically, get Maatkit and use its sync tools to perform a master-master synchronization:
mk-table-sync --synctomaster h=serverName,D=databaseName,t=tableName
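For the initial full copy (question 1), you can also stream the dump straight into the server instead of building and uploading one huge file, along these lines; the host, user, and database names are placeholders:
mysqldump --single-transaction databaseName | gzip | ssh user@serverName "gunzip | mysql databaseName"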
You can use a Data Comparer for MySQL.
Customize the synchronization template, specifying which tables' data to synchronize.
Schedule a weekly run of the template.
I have two servers synchronized daily with dbForge Data Comparer via the command line.

Synchronization with master database

I have two databases hosted on different servers. What is the best approach for copying all the contents of a table in the master database to a table in the slave database? I am not the owner of the master database, but they are willing to give me access.
Previously, the data from the master database was output via RSS and my PHP script parsed it and inserted it into another server where the second database is located. But due to the huge amount of data, it took 24 hours to update and insert into the remote database, probably because of the overhead between the two databases. So the plan is to create a script that downloads the data from the master database, saves a local copy, then FTPs it to the second server and dumps the contents into the database. Is that advisable, even though the file (CSV or SQL) is around 30 MB and still growing? What is the best solution for this?
NOTE: all of the scripts, from downloading, to FTP, to inserting into the second database, are run by cron for automatic updates.
You should really consider MySQL master-slave replication. This means every insert/update is also applied on the slave server.
The master server needs to be configured to keep a binary transaction log (the binlog), which the slave uses to keep track of updates.
Besides ease of use, replication also keeps the load low, since it is a continuous process.
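A rough sketch of the usual setup; the server IDs, host, user, password, and log coordinates below are placeholders, not values from this question.
On the master, in my.cnf:
[mysqld]
server-id = 1
log-bin = mysql-bin
On the slave, in my.cnf:
[mysqld]
server-id = 2
Then create a replication user on the master and point the slave at it:
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%' IDENTIFIED BY 'secret';
CHANGE MASTER TO MASTER_HOST='master.example.com', MASTER_USER='repl',
  MASTER_PASSWORD='secret', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=4;
START SLAVE;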
What type of database are we talking about? Have you looked into replication?
