Optimize large MySQL database - PHP

I am developing an application that uses MySQL for its database. Some tables hold a huge amount of data and become overloaded; when that happens I optimize the affected table with MySQL's OPTIMIZE TABLE option.
But I want to know: is there any MySQL query with which I can optimize the full database from my code?

I think you need to read the following article on how to optimize your database and do table maintenance:
http://dev.mysql.com/doc/refman/5.0/en/mysqlcheck.html
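The linked mysqlcheck tool can optimize every table from the shell (mysqlcheck --optimize --all-databases). If you want to do the same thing from your PHP code, a minimal sketch using PDO could look like this; the host, database name and credentials are placeholders:

    <?php
    // Minimal sketch: run OPTIMIZE TABLE on every table in one database via PDO.
    // The host, database name and credentials are placeholders.
    $pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'password');
    $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

    $tables = $pdo->query('SHOW TABLES')->fetchAll(PDO::FETCH_COLUMN);

    foreach ($tables as $table) {
        // OPTIMIZE TABLE does not accept bound parameters, so quote the name by hand.
        $pdo->query('OPTIMIZE TABLE `' . str_replace('`', '``', $table) . '`');
    }

OPTIMIZE TABLE rebuilds the table, which can take a while (and block writes) on very large tables, so it is usually best scheduled during quiet hours.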

Related

MySQL - transfer between two MySQL servers

Normally I use an SQLite DB for caching, but its queries etc. are not as good as MySQL's, so I want to rebuild my cache structure on top of MySQL.
My current cache structure with the SQLite db:
main data srv -> get data and write it into local SQLite db
I iterate over every piece of data that comes back from the main data server query and write it into the local SQLite db.
That is my current approach. I don't want to serve my visitors directly from queries against the main database; I usually have 50K+ visitors daily, and as you can guess that would be very hard on the MySQL database, especially since these tables hold over 15 million rows.
What I am planning: I am going to create a local MySQL database in my current web cluster, transfer the selected data to this server, and serve it to my visitors from the local MySQL db.
My question is:
Is there any specific way to transfer data query to query? Or should I use the same approach, iterating over everything with a "for" loop and writing it to the local MySQL db? What should I do, any ideas?
MySQL replication might be a solution to your problem.
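With replication, the local server stays in sync automatically and your PHP code only reads from it. If setting replication up is not practical in your hosting environment, a batched transfer is still much better than writing row by row; here is a minimal sketch, where all host, table and column names are hypothetical:

    <?php
    // Minimal sketch: copy rows from the main server into the local cache server
    // in batches, with one multi-row statement per batch instead of one INSERT
    // per row. All host/table/column names and credentials are placeholders.
    $source = new PDO('mysql:host=main-data-srv;dbname=main', 'user', 'pass');
    $target = new PDO('mysql:host=localhost;dbname=cache', 'user', 'pass');
    $source->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
    $target->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

    $batchSize = 1000;
    $offset    = 0;

    while (true) {
        $rows = $source->query(sprintf(
            'SELECT id, title, body FROM articles ORDER BY id LIMIT %d OFFSET %d',
            $batchSize, $offset
        ))->fetchAll(PDO::FETCH_NUM);

        if (!$rows) {
            break;
        }

        // REPLACE keeps the copy idempotent if the job is re-run.
        $placeholders = implode(',', array_fill(0, count($rows), '(?,?,?)'));
        $target->prepare("REPLACE INTO articles (id, title, body) VALUES $placeholders")
               ->execute(array_merge(...$rows));

        $offset += $batchSize;
    }

For tables with 15+ million rows, paging on the last seen id (WHERE id > ?) scales better than an ever-growing OFFSET, and once replication is in place no PHP copy loop is needed at all.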

Idea for handling more insertions at a time

I am building a booking site using PHP and MySQL where I receive lots of data to insert at once. That means that if I get 1000 bookings at a time, it will be very slow. So I am thinking of dumping that data into MongoDB and running a task to save it to MySQL. I am also thinking of using Redis to cache the most viewed data.
Right now I am inserting directly into the db.
Please share any ideas/suggestions you have about this.
In pure insert terms, it's REALLY hard to outrun MySQL... It's one of the fastest pure-append engines out there (that flushes consistently to disk).
1000 rows is nothing in MySQL insert performance. If you are falling at all behind, reduce the number of secondary indexes.
Here's a pretty useful benchmark: https://www.percona.com/blog/2012/05/16/benchmarking-single-row-insert-performance-on-amazon-ec2/, showing 10,000-25,000 individual inserts per second.
Here is another comparing MySQL and MongoDB: DB with best inserts/sec performance?
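Before adding MongoDB or Redis to the stack, it is worth checking how far plain MySQL gets you when the inserts are batched. A minimal sketch of what that could look like in PHP (the bookings table, its columns and the credentials are made up):

    <?php
    // Minimal sketch: insert many bookings quickly by chunking them into
    // multi-row INSERTs inside one transaction. The table, columns and
    // credentials are hypothetical.
    $pdo = new PDO('mysql:host=localhost;dbname=booking', 'user', 'pass');
    $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

    // $bookings = [['user_id' => 1, 'room_id' => 2, 'date' => '2024-01-01'], ...];
    function saveBookings(PDO $pdo, array $bookings, int $chunkSize = 500): void
    {
        $pdo->beginTransaction();
        try {
            foreach (array_chunk($bookings, $chunkSize) as $chunk) {
                // One multi-row INSERT per chunk instead of one INSERT per booking.
                $placeholders = implode(',', array_fill(0, count($chunk), '(?,?,?)'));
                $stmt = $pdo->prepare(
                    "INSERT INTO bookings (user_id, room_id, booking_date) VALUES $placeholders"
                );
                $params = [];
                foreach ($chunk as $booking) {
                    $params[] = $booking['user_id'];
                    $params[] = $booking['room_id'];
                    $params[] = $booking['date'];
                }
                $stmt->execute($params);
            }
            $pdo->commit();
        } catch (Throwable $e) {
            $pdo->rollBack();
            throw $e;
        }
    }

    // saveBookings($pdo, $bookings);

A single transaction avoids one disk flush per row and the multi-row statements cut the network round trips; if the data arrives as files, LOAD DATA INFILE is faster still.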

MySQL large db migration

Hi, I'm building an enterprise management system (PHP based) for a medium-size company. I'm trying to migrate their existing customer records (about 9000 records) into my db. Our db schemas are different.
Here are the steps I'm planning to take:
1.) Get the .csv file for each table and clean it up (get rid of unnecessary columns, remove the blank rows that seem to be littered throughout the table)
2.) Import the tables into my database via phpMyAdmin
3.) Write a PHP script that loops through the tables holding the old data, then processes the rows and inserts them into MY db tables (a sketch follows this list)
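For step 3, a minimal sketch of the transform-and-insert loop; the legacy and target table/column names below are hypothetical:

    <?php
    // Minimal sketch for step 3: read rows from an imported legacy table and
    // insert them into the new schema. All table/column names are hypothetical.
    $pdo = new PDO('mysql:host=localhost;dbname=erp', 'user', 'pass');
    $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

    $old = $pdo->query('SELECT cust_name, cust_phone, cust_addr FROM legacy_customers');
    $insert = $pdo->prepare(
        'INSERT INTO customers (name, phone, address) VALUES (:name, :phone, :address)'
    );

    $pdo->beginTransaction();
    while ($row = $old->fetch(PDO::FETCH_ASSOC)) {
        // Any per-row clean-up and mapping between the two schemas goes here.
        $insert->execute([
            ':name'    => trim($row['cust_name']),
            ':phone'   => preg_replace('/\D+/', '', $row['cust_phone']),
            ':address' => $row['cust_addr'],
        ]);
    }
    $pdo->commit();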
I was wondering: does the plan I outlined above make sense, or is there a better way to do it?
Thanks
Data migration is possible in MySQL Workbench 6.0. I have migrated millions of records with it, so this is not a big deal.
Try
http://www.mysql.com/products/workbench/migrate/

Migrating Data from a MySQL database to a PostgreSQL database with a different schema

I am migrating my site from PHP to Rails.
At the same time I want to migrate my database from MySQL to PostgreSQL. However, the schema I have in the MySQL database is poor. Therefore, I want to implement a new schema in the PostgreSQL database.
Basically, I want to take the data from the MySQL database and fit it into the new schema in the PostgreSQL database. The new tables in the PostgreSQL database would essentially consist of joined views over the MySQL database.
I am new to this sort of thing and I don't really know where to start.
I had to do this in the past - the answer is a tool called "taps":
http://adam.heroku.com/past/2009/2/11/taps_for_easy_database_transfers/
It's basically a middle-man between mysql and postgres and will be able to handle all the differences between them.
If your schemas are radically different, you are going to have to write a script to do the necessary transformations. You can use a Database Abstraction Layer to handle the differences between MySQL and PostgreSQL, but for the most part you're on your own.
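If you end up writing that script in PHP, PDO can act as the abstraction layer, since it speaks to both MySQL and PostgreSQL. A minimal sketch under that assumption; every table, column and join below is hypothetical:

    <?php
    // Minimal sketch: read joined data from the old MySQL schema and write it
    // into the new PostgreSQL schema via PDO. Every name below is hypothetical.
    $mysql = new PDO('mysql:host=localhost;dbname=old_site', 'user', 'pass');
    $pgsql = new PDO('pgsql:host=localhost;dbname=new_site', 'user', 'pass');
    $mysql->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
    $pgsql->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

    // The new table corresponds to a join over two of the old tables.
    $select = $mysql->query(
        'SELECT u.id, u.username, p.bio
           FROM users u
           JOIN profiles p ON p.user_id = u.id'
    );

    $insert = $pgsql->prepare(
        'INSERT INTO accounts (legacy_id, username, bio) VALUES (:id, :username, :bio)'
    );

    $pgsql->beginTransaction();
    while ($row = $select->fetch(PDO::FETCH_ASSOC)) {
        $insert->execute([
            ':id'       => $row['id'],
            ':username' => $row['username'],
            ':bio'      => $row['bio'],
        ]);
    }
    $pgsql->commit();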

Synchronize Firebird with MySQL in PHP

I have two databases, one is a Firebird database, other is a MySQL database.
The Firebird database is the main one, where the information changes. I have to synchronize those changes to the other MySQL database.
I have no control over the Firebird one - I can just SELECT from it. I cannot add triggers, events or similar. I have all the control on the MySQL database.
The synchronization has to be done over the internet, as these two servers are not connected in any way and are in different locations.
Synchronization has to be done in PHP on the server that also hosts the MySQL database.
Currently I just go through every record (every 15 minutes), calculate a hash of each row, compare the two hashes, and if they don't match I update the whole row. It works, but it just seems very wrong and not optimized in any way.
Is there any other way to do this? Am I missing something?
Thank you.
I have done the same thing once and I don't think there is a generally better solution.
You can only more or less optimize what you have so far. For example:
If some of the tables have a column with the "latest update" information, you can select only those that were changed since the last sync.
You can change the comparison mechanism - instead of comparing and updating whole rows, you can compare individual columns and on the MySQL side update only the changed ones. I believe that it would speed things up in case of MyISAM tables, but probably not if you use InnoDB.
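As an illustration of the first point, a minimal sketch that pulls only the rows changed since the last run from Firebird and upserts them into MySQL; it assumes the pdo_firebird driver is available and that the source table has an UPDATED_AT column, and all other names and credentials are hypothetical:

    <?php
    // Minimal sketch: incremental sync from Firebird to MySQL via PDO.
    // Assumes the pdo_firebird driver is installed and the source table has an
    // UPDATED_AT column; all names and credentials are hypothetical.
    $firebird = new PDO('firebird:dbname=remote-host:/data/main.fdb', 'sysdba', 'pass');
    $mysql    = new PDO('mysql:host=localhost;dbname=mirror', 'user', 'pass');
    $firebird->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
    $mysql->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

    // The time of the last successful sync is kept on the MySQL side.
    $lastSync = $mysql->query('SELECT MAX(synced_at) FROM sync_log')->fetchColumn()
        ?: '1970-01-01 00:00:00';

    $changed = $firebird->prepare(
        'SELECT ID, NAME, PRICE, UPDATED_AT FROM PRODUCTS WHERE UPDATED_AT > ?'
    );
    $changed->execute([$lastSync]);

    $upsert = $mysql->prepare(
        'INSERT INTO products (id, name, price, updated_at) VALUES (?, ?, ?, ?)
         ON DUPLICATE KEY UPDATE name = VALUES(name), price = VALUES(price),
                                 updated_at = VALUES(updated_at)'
    );

    while ($row = $changed->fetch(PDO::FETCH_NUM)) {
        $upsert->execute($row);
    }

    $mysql->prepare('INSERT INTO sync_log (synced_at) VALUES (NOW())')->execute();

Deletes on the Firebird side are not caught this way, so an occasional full hash comparison like your current one is still useful as a fallback.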
