We have an iOS app which must download a large amount of user data from a remote server (in JSON format) and then insert this data into the local SQLite database. Because there is so much data, the insertion process takes more than 5 minutes to complete, which is unacceptable; it must take less than 30 seconds.
We have identified a potential solution: get the remote server to store the user's data in an SQLite database (on the remote machine). This database is compressed and then downloaded by the app. Therefore, the app will not have to conduct any data insertion, making the process much faster.
Our remote server is running PHP/MySQL.
My question:
What is the fastest and most efficient way to create the SQLite database on the remote server?
Is it possible to output a MySQL query directly into an SQLite table?
Is it possible to create a temporary MySQL database and then convert it to SQLite format?
Or do we have to take the MySQL query output and insert each record into the SQLite database?
Any suggestions would be greatly appreciated.
I think it's better to have a look at why the insert process is taking 5 minutes.
If you don't do it properly in SQLite, every insert statement will be executed in a separate transaction. This is known to be very slow. It's much better to do all the inserts in one single SQLite transaction. That should make the insert process really fast, even if you are talking about a lot of records.
In pseudo code, you will need to do the following:
SQLite.exec('begin transaction');
for (item in dataToInsert) {
    SQLite.exec('insert into table values ( ... )');
}
SQLite.exec('end transaction');
The same applies by the way if you want to create the SQLite database from PHP.
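From PHP, that could look roughly like the following. This is only a minimal sketch using PDO's SQLite driver; the file path, table, and column names are assumptions, not from the original question.

// Minimal sketch, assuming a PDO SQLite connection; table and column names are made up.
$db = new PDO('sqlite:/tmp/user_data.sqlite');
$db->exec('CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, payload TEXT)');
$db->beginTransaction();
$stmt = $db->prepare('INSERT INTO items (id, payload) VALUES (?, ?)');
foreach ($dataToInsert as $item) {
    $stmt->execute([$item['id'], $item['payload']]);
}
$db->commit();

The key point is the same as in the pseudo code: one transaction around all the inserts, plus a single prepared statement reused for every row.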
You can read a lot about this here: Improve INSERT-per-second performance of SQLite?
Related
Maybe this is an obvious question, but it's just something I'm unsure of. If I have two standalone PHP applications running on one LAMP server, and the two PHP applications share the same MySQL database, do I need to worry about data integrity during concurrent database transactions, or is this something that MySQL just takes care of "natively"?
What happens if the two PHP applications both try to update the same record at the same time? What happens if they try to update the same table at the same time? What happens if they both try to read data from the database at the same time? Or if one application tries to read a record at the same time as the other application is updating that record?
This depends on several factors:
the db engine you are using
the locking policy / transaction isolation level you have set for your environment, or for your query
https://dev.mysql.com/doc/refman/8.0/en/innodb-locking-reads.html
https://dev.mysql.com/doc/refman/8.0/en/innodb-locks-set.html
the code you are using; for example, you could use SELECT ... FOR UPDATE to lock only the rows you want to modify, as shown in the sketch after this list
https://dev.mysql.com/doc/refman/8.0/en/update.html
and how you manage transactions
https://dev.mysql.com/doc/refman/8.0/en/commit.html
This is just a brief suggestion.
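To illustrate that last point, here is a minimal sketch of SELECT ... FOR UPDATE from PHP with PDO. It assumes InnoDB, and the accounts table, column names, and connection details are made up for the example.

// Minimal sketch, assuming InnoDB; the accounts table and column names are assumptions.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$pdo->beginTransaction();
// Lock only the row we are about to modify; other transactions wait on this row.
$stmt = $pdo->prepare('SELECT balance FROM accounts WHERE id = ? FOR UPDATE');
$stmt->execute([42]);
$balance = $stmt->fetchColumn();
$pdo->prepare('UPDATE accounts SET balance = ? WHERE id = ?')->execute([$balance - 10, 42]);
$pdo->commit();   // the row lock is released here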
Normally I use an SQLite DB for caching purposes, but its queries etc. are not as good as MySQL's. So I want to rebuild my cache structure around MySQL.
My current cache structure with the SQLite DB:
main data srv -> get data and write it into the local SQLite DB
I iterate over every piece of data that comes from the main data server query and write it into the local SQLite DB.
That is my current approach. I don't want to serve my visitors directly from queries; I usually have 50K+ visitors daily and, as you know, that would be very hard for the MySQL database to keep up with. In addition, there are over 15 million rows in these tables.
What I am planning: I am going to create a local MySQL database in my current web cluster, transfer the selected data to that server, and show it to my visitors through the local MySQL DB.
My question is:
Is there any specific way to transfer data query-to-query? Or should I use the same approach, iterating through everything with a "for" loop and writing it to the local MySQL DB? What should I do? Any ideas?
MySQL replication might be a solution to your problem.
I am building a booking site using PHP and MySQL where I will get a lot of data to insert in a single insertion. That means if I get 1000 bookings at a time, it will be very slow. So I am thinking of dumping that data into MongoDB and running a task to save it into MySQL. I am also thinking of using Redis for caching the most viewed data.
Right now I am inserting directly into the DB.
Please share any ideas/suggestions you have about this.
In pure insert terms, it's REALLY hard to outrun MySQL... It's one of the fastest pure-append engines out there (that flushes consistently to disk).
1000 rows is nothing in MySQL insert performance. If you are falling at all behind, reduce the number of secondary indexes.
Here's a pretty useful benchmark: https://www.percona.com/blog/2012/05/16/benchmarking-single-row-insert-performance-on-amazon-ec2/, showing 10,000-25,000 individual inserts per second.
Here is another comparing MySQL and MongoDB: DB with best inserts/sec performance?
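As a rough illustration of how cheap 1000 inserts can be when they share one transaction and a prepared statement, here is a minimal PDO sketch; the connection details, table, and column names are assumptions, not from the original post.

// Minimal sketch: insert ~1000 bookings in a single transaction with a prepared statement.
// The connection details, table and column names are assumptions.
$pdo = new PDO('mysql:host=localhost;dbname=booking', 'user', 'pass');
$pdo->beginTransaction();
$stmt = $pdo->prepare('INSERT INTO bookings (user_id, room_id, booked_at) VALUES (?, ?, ?)');
foreach ($bookings as $b) {
    $stmt->execute([$b['user_id'], $b['room_id'], $b['booked_at']]);
}
$pdo->commit();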
I have an app that posts data from Android to some MySQL tables through PHP at a 10-second interval. The same PHP file runs a lot of queries on some other tables in the same database, and the result is downloaded and processed in the app (with DownloadWebPageTask).
I usually have between 20 and 30 clients connected this way. Most of the data each client queries for is the same as for all the other clients. If 30 clients run the same query every 10 seconds, 180 queries will be run every minute. In fact, every client runs several queries, some of them in a loop (looping through the results of another query).
My question is: if I somehow produce a text file containing the same data, update this text file every x seconds, and let all the clients read this file instead of running the queries themselves, is that a better approach? Will it reduce server load?
In my opinion you should consider using memcache.
It will let you store your data in memory, which is even faster than files on disk or MySQL queries.
What it will also do is reduce load on your database so you will be able to serve more users with the same server/database setup.
Memcache is very easy to use and there are lots of tutorials on the internet.
Here is one to get you started:
http://net.tutsplus.com/tutorials/php/faster-php-mysql-websites-in-minutes/
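As a rough sketch of the idea with the PHP Memcached extension; the key name, the 10-second TTL, and the helper that runs the real MySQL queries are assumptions for the example:

// Minimal sketch using the Memcached extension; key, TTL and the query helper are made up.
$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);

$data = $mc->get('shared_client_feed');
if ($data === false) {
    // Cache miss: hit MySQL once, then keep the result in memory for 10 seconds.
    $data = fetch_feed_from_mysql();   // hypothetical helper wrapping the real queries
    $mc->set('shared_client_feed', $data, 10);
}
echo json_encode($data);

With 20-30 clients polling every 10 seconds, only the first request per interval would actually hit MySQL; the rest would be served from memory.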
What you need is caching. You can either cache the data coming from your DB or cache the page itself. Below you can find a few links on how to do the same in PHP:
http://www.theukwebdesigncompany.com/articles/php-caching.php
http://www.addedbytes.com/articles/for-beginners/output-caching-for-beginners/
And yes. This will reduce DB server load drastically.
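For the page-caching option, here is a minimal file-based output-caching sketch; the cache file path and TTL are assumptions, not from the original answer.

// Minimal sketch of file-based output caching; the cache file path and TTL are assumptions.
$cacheFile = '/tmp/feed_cache.json';
$ttl = 10; // seconds

if (is_file($cacheFile) && (time() - filemtime($cacheFile)) < $ttl) {
    readfile($cacheFile);   // serve the cached copy without touching MySQL
    exit;
}

ob_start();
// ... run the queries and render the response as usual ...
file_put_contents($cacheFile, ob_get_contents());
ob_end_flush();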
I have the following scenario:
PHP(Server, Writer) ----> MySQL Database <------ PHP(Client, Reader/ Writer);
PHPS = PHP Server
PHPC = PHP Client
How it works:
PHPS writes data to temporary database tables (queue_*).
PHPC is triggered by a 1 hour cron.
PHPC starts, connects to the database and caches all records locally (how? no idea; a local MySQL DB? SQLite?)
PHPC executes tasks defined in those records one by one
if a task is successful, PHPC removes it from the database
if it is unsuccessful, PHPC adds that record to the database under a reports table.
How do I implement this such that:
No half-written records from PHPS get to PHPC.
PHPC can cache all records locally after one query to process them.
Any other ideas you might have and share would be highly appreciated.
MySQL's default locking will ensure that no "half-written" rows are fetched. As far as "caching locally", it seems like all that means in your scenario is reading them out of the database into a local PHP data structure like an array.
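A minimal sketch of what that could look like in PHPC: read the whole queue in one query into a PHP array, then work through it. The table names, columns, and the run_task() helper are assumptions, not from the original question.

// Minimal sketch of PHPC: read the whole queue in one query, process it, then report.
// Table names, columns, and run_task() are assumptions.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$tasks = $pdo->query('SELECT * FROM queue_tasks')->fetchAll(PDO::FETCH_ASSOC);

foreach ($tasks as $task) {
    if (run_task($task)) {   // hypothetical helper that executes the task
        $pdo->prepare('DELETE FROM queue_tasks WHERE id = ?')->execute([$task['id']]);
    } else {
        $pdo->prepare('INSERT INTO reports (task_id, status) VALUES (?, ?)')
            ->execute([$task['id'], 'failed']);
    }
}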
You can read about MySQL locking here: Locking in MySQL. Remember to unlock the table after you finish writing data.
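If you do go the explicit table-locking route, a minimal sketch of that from PHP, reusing a PDO connection; the queue_tasks table and payload column are assumptions:

// Minimal sketch of an explicit table lock; table and column names are assumptions.
$pdo->exec('LOCK TABLES queue_tasks WRITE');
$pdo->prepare('INSERT INTO queue_tasks (payload) VALUES (?)')->execute([$payload]);
$pdo->exec('UNLOCK TABLES');   // remember to release the lock when you are done writing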