MySQL - transfer between two MySQL servers - php

Normally I use an SQLite DB for my caching, but its query capabilities are not as good as MySQL's. So I want to rebuild my cache structure around a MySQL-based one.
My current cache structure with the SQLite DB:
main data server -> get data and write it into the local SQLite DB
I iterate over every piece of data that comes from the main data server query and write it into the local SQLite DB.
That is my current approach. I don't want to query the main database directly to serve my visitors: I usually have 50K+ visitors daily, so as you can imagine it would be very hard for the MySQL database to keep up, and there are over 15 million rows in these tables.
What I am planning: I am going to create a local MySQL database in my current web cluster, transfer the selected data to that server, and serve it to my visitors through the local MySQL DB.
My question is:
Is there any specific way to transfer data from one query to another? Or should I keep using the same approach, iterating over everything in a "for" loop and writing it to the local MySQL DB? What should I do, or do you have any ideas?

MySQL replication might be a solution to your problem.
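If replication is not an option, the iterate-and-write approach from the question can be sped up considerably by batching: read from the main server, then write to the local MySQL with multi-row statements inside a transaction. A minimal sketch, assuming mysqli connections and a hypothetical cache_items(id, payload) table on both servers:
<?php
// Minimal sketch (not the replication route): pull the selected rows from the
// main data server and write them to the local cache MySQL in batches.
// Host names, credentials and the cache_items(id, payload) table are assumptions.
$remote = new mysqli('main-data-srv', 'user', 'pass', 'maindb');
$local  = new mysqli('localhost', 'user', 'pass', 'cachedb');

// For very large result sets, pass MYSQLI_USE_RESULT to stream instead of buffering.
$result = $remote->query('SELECT id, payload FROM cache_items');

$local->begin_transaction();
$batch = [];
while ($row = $result->fetch_assoc()) {
    // Collect escaped value tuples and flush them as one multi-row statement.
    $batch[] = sprintf("(%d, '%s')", $row['id'], $local->real_escape_string($row['payload']));
    if (count($batch) === 500) {
        $local->query('REPLACE INTO cache_items (id, payload) VALUES ' . implode(',', $batch));
        $batch = [];
    }
}
if ($batch) {
    $local->query('REPLACE INTO cache_items (id, payload) VALUES ' . implode(',', $batch));
}
$local->commit();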

Related

Using PHP to output MySQL query as SQLite

We have an iOS app which must download a large amount of user data from a remote server (in JSON format) and then insert this data into the local SQLite database. Because there is so much data, the insertion process takes more than 5 mins to complete, which is unacceptable. The process must be less than 30 seconds.
We have identified a potential solution: get the remote server to store the user's data in an SQLite database (on the remote machine). This database is compressed and then downloaded by the app. Therefore, the app will not have to conduct any data insertion, making the process much faster.
Our remote server is running PHP/MySQL.
My question:
What is the fastest and most efficient way to create the SQLite database on the remote server?
Is it possible to output a MySQL query directly into an SQLite table?
Is it possible to create a temporary MySQL database and then convert it to SQLite format?
Or do we have to take the MySQL query output and insert each record into the SQLite database?
Any suggestions would be greatly appreciated.
I think it's better to have a look at why the insert process is taking 5 minutes.
If you don't do it properly in SQLite, every insert statement will be executed in a separate transaction. This is known to be very slow. It's much better to do all the inserts in one single SQLite transaction. That should make the insert process really fast, even if you are talking about a lot of records.
In pseudo code, you will need to do the following:
SQLite.exec('begin transaction');
for (item in dataToInsert) {
    SQLite.exec('insert into table values ( ... )');
}
SQLite.exec('end transaction');
The same applies by the way if you want to create the SQLite database from PHP.
You can read a lot about this here: Improve INSERT-per-second performance of SQLite?
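For the PHP side mentioned above, here is a minimal sketch using PDO's SQLite driver with all inserts wrapped in one transaction; the file path, table and columns are placeholder assumptions:
<?php
// Minimal sketch: build the SQLite file from PHP with every insert inside one
// transaction. The file path, table and columns are placeholder assumptions.
$db = new PDO('sqlite:/tmp/export.sqlite');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$db->exec('CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, name TEXT)');

$db->beginTransaction();                              // one transaction for all rows
$stmt = $db->prepare('INSERT INTO items (id, name) VALUES (:id, :name)');
foreach ($dataToInsert as $item) {                    // $dataToInsert: array of ['id' => ..., 'name' => ...]
    $stmt->execute([':id' => $item['id'], ':name' => $item['name']]);
}
$db->commit();                                        // instead of one implicit transaction per insert
If the table is large, it also helps to create any secondary indexes after the loop rather than before it, so the bulk insert does not have to maintain them row by row.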

Converting the Server MySQL DB to SQLite DB

I have a huge database on the server and I need that DB in SQLite so that I can use it in my Android application as well as in the iOS app.
I found one solution for this: I go to phpMyAdmin, select my DB on the server, export the tables one by one into CSV files, and then import those into my SQLite browser one by one to get all the tables (and then correct the column names and types manually by editing every table's columns).
This way I turned it into a .sqlite DB to be used in the app.
But I want to know more about the points below:
Is there some kind of backend application that most developers use to convert their DB into an SQLite DB? (If yes, what do they use?)
Is there any PHP script that can do this? (If yes, what script is used and how? A rough sketch of this approach appears below.)
Is there any other simple way to deal with this problem of getting an SQLite DB from the server? (If yes, what are the possible ways to do this?)
Can anyone give me some idea about this?
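On the PHP-script point (second item above), one rough approach is to read each MySQL table with PDO and write it into an SQLite file, crudely typing every column as TEXT. A sketch under those assumptions:
<?php
// Rough sketch: copy every MySQL table into an SQLite file with PDO.
// Credentials are assumptions, and every column is crudely typed as TEXT;
// adjust the types by hand afterwards if needed.
$mysql  = new PDO('mysql:host=localhost;dbname=mydb;charset=utf8', 'user', 'pass');
$sqlite = new PDO('sqlite:' . __DIR__ . '/export.sqlite');
$sqlite->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

foreach ($mysql->query('SHOW TABLES')->fetchAll(PDO::FETCH_COLUMN) as $table) {
    // fetchAll keeps the whole table in memory; fine for a sketch, chunk it for huge tables.
    $rows = $mysql->query("SELECT * FROM `$table`")->fetchAll(PDO::FETCH_ASSOC);
    if (!$rows) {
        continue;
    }
    $cols = array_keys($rows[0]);
    $sqlite->exec("CREATE TABLE IF NOT EXISTS \"$table\" (\"" . implode('" TEXT, "', $cols) . '" TEXT)');

    $placeholders = rtrim(str_repeat('?,', count($cols)), ',');
    $insert = $sqlite->prepare("INSERT INTO \"$table\" VALUES ($placeholders)");

    $sqlite->beginTransaction();                      // one transaction per table keeps inserts fast
    foreach ($rows as $row) {
        $insert->execute(array_values($row));
    }
    $sqlite->commit();
}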

is it quicker to do 24 database queries or 1 database query and sort in php?

I have a site that accepts entries from users. I want to create a graph that displays the entries over time. Is it more efficient to make 24 calls to the database and have SQL return the number of entries per hour, or should I just make one call, return all the entries, and organize them in PHP?
Depends on the data, the database schema and the query.
Usually, the fewer queries you can make, the better.
If it's still slow after optimising the query, cache the result in PHP?
I think it depends on your setup: the database, whether the database is on the same machine as the web server, traffic, and so on. Each call adds some overhead on the database server, but you do not need to sort on the web server. I would suggest testing it with a loop.
Ok, let's compare:
1) Query the database for all the values. Return a large chunk of data across your network to the application server. Parse all of that data through the client's DBD interface. Build your own data structures to store the data the way you want it. Write, document and maintain client code to loop across the detailed data, adding/updating your data structure with each row.
2) Query the database for the data you want in the format you want it. Let the highly tuned database create the summary buckets. Return less data across the network. Parse less data on the app server. Use the data exactly as it was returned.
There's no "it depends". Use the database.

dump selected data from one db to another in mysql

Here's the situation:
I have a MySQL DB on a remote server. I need data from 4 of its tables. On occasion, the schema of these tables is changed (new fields are added, but not removed). At the moment, the tables have > 300,000 records.
This data needs to be imported into the localhost MySQL instance. These same 4 tables exist (with the same names), but the fields needed are a subset of the fields in the remote db tables. The data in these local tables is considered read-only and is never written to. Everything needs to be run in a transaction so there is always some data in the local tables, even if it is a day old. The localhost tables are used by an active website, so this entire process needs to complete as quickly as possible to minimize downtime.
This process runs once per day.
The options as I see them:
Get a mysqldump of the structure/data of the remote tables and save to file. Drop the localhost tables, and run the dumped sql script. Then recreate the needed indexes on the 4 tables.
Truncate the localhost tables. Run SELECT queries on the remote db in PHP and retrieve only the fields needed instead of the entire row. Then loop through the results and create INSERT statements from this data.
My questions:
Performance wise, which is my best option?
Which one will complete the fastest?
Will either one put a heavier load on the server?
Would indexing the tables take the same amount of time in both options?
If there is no good reason for having the local DB be a subset of the remote one, make the structure the same and enable database replication on the needed tables. Replication works by the master tracking all changes made and managing each slave DB's pointer into the change stream; each slave effectively asks for all changes since its last request. For a sizeable database, this is far more efficient than either alternative you have listed, and it comes at only modest cost.
As for schema changes, I think the alter information is logged by the master, so the slave(s) can replicate those as well. The mechanism definitely replicates drop table ... if exists and create table ... select, so alter logically should follow, but I have not tried it.
Here it is: confirmation that alter is properly replicated.
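For completeness, if replication really cannot be enabled, the second option in the question can still keep downtime near zero by loading into a staging table and swapping it in with RENAME TABLE, which is atomic in MySQL. A rough sketch, with the table and column names (products, id, name, price) assumed:
<?php
// Rough sketch of the second option with an atomic swap: load only the needed
// columns into a staging table, then RENAME TABLE so the live table is never
// empty. Table and column names are assumptions.
$remote = new PDO('mysql:host=remote.example.com;dbname=src', 'user', 'pass');
$local  = new PDO('mysql:host=localhost;dbname=site', 'user', 'pass');
$local->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$local->exec('DROP TABLE IF EXISTS products_staging');
$local->exec('CREATE TABLE products_staging LIKE products');      // same structure and indexes

$insert = $local->prepare('INSERT INTO products_staging (id, name, price) VALUES (?, ?, ?)');
$rows   = $remote->query('SELECT id, name, price FROM products'); // only the needed fields

$local->beginTransaction();
foreach ($rows as $row) {
    $insert->execute([$row['id'], $row['name'], $row['price']]);
}
$local->commit();

// Atomic swap: readers see either the old data or the new data, never an empty table.
$local->exec('RENAME TABLE products TO products_old, products_staging TO products');
$local->exec('DROP TABLE products_old');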

Inserting several hundred records into mysql database using mysqli & PHP

I'm making a Flex 4 application and using ZendAMF to interact with a MySQL database. I got Flex to generate most of the services code for me (which utilizes mysqli) and roughly edited some of the code (I'm very much a novice when it comes to PHP). All works fine at this point.
My problem is - currently the application inserts ~400 records into the database when a user is created (it's saving their own data for them to load at a later date) but it does this with separate calls to the server - i.e. each record is sent to the server, then saved to the database and then the next one is sent.
This worked fine in my local environment, but since going on a live webserver it only adds these records some of the time. Other times it will totally ignore them. I'm assuming it's doing this because the live database doesn't like getting spammed with hundreds of requests at practically the same time.
I'm thinking a more efficient way would be to package all of these records into an array, send that to the server just the once and then get the PHP service to do multiple inserts on each item in the array. The problem is, I'm not sure how to go about coding this in PHP using mysqli statements.
Any help or advice would be greatly appreciated - thanks!
Read up on LOAD DATA LOCAL INFILE. It seems it's just what you need for inserting multiple records: it inserts many records from a file (though not an array, unfortunately) into your table in one operation.
It's also much, much faster to do multiple-record UPDATEs with LOAD DATA LOCAL INFILE than with one-per-row UPDATEs.
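A rough sketch of that approach from PHP with mysqli: write the array of records to a temporary CSV file and load it in one statement. The table and column names here are assumptions, and local_infile must be enabled on both the server and the mysqli client:
<?php
// Sketch: send the ~400 records in one server call by writing them to a temporary
// CSV and loading it with LOAD DATA LOCAL INFILE. The table and column names are
// assumptions, and local_infile must be enabled on both the server and the client.
$records = [/* array of ['user_id' => ..., 'setting_key' => ..., 'setting_value' => ...] */];

$tmp = tempnam(sys_get_temp_dir(), 'bulk');
$fh  = fopen($tmp, 'w');
foreach ($records as $r) {
    fputcsv($fh, [$r['user_id'], $r['setting_key'], $r['setting_value']]);
}
fclose($fh);

$db = mysqli_init();
mysqli_options($db, MYSQLI_OPT_LOCAL_INFILE, true);
mysqli_real_connect($db, 'localhost', 'user', 'pass', 'appdb');

$db->query(
    "LOAD DATA LOCAL INFILE '" . $db->real_escape_string($tmp) . "'
     INTO TABLE user_settings
     FIELDS TERMINATED BY ',' ENCLOSED BY '\"'
     (user_id, setting_key, setting_value)"
);
unlink($tmp);
An alternative that stays entirely in mysqli statements is a single multi-row INSERT built from a prepared statement inside one transaction; either way, the point is one round trip to the server instead of ~400.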
You should handle the user defaults separately and only store the changes.
And if the records are not saved, you must check the warnings from MySQL or the errors from PHP; data doesn't just disappear.
Try
error_reporting(-1);
just before inserting, and
mysqli::get_warnings()
afterwards.
