We have one big MySQL database, and requests that read data from it take far too long. What we want is a CouchDB or MongoDB cluster that acts as a slave and sits as a layer between MySQL and the website. When we request data, it should come from the CouchDB replica rather than from MySQL directly, and when we post data, it should go to CouchDB as well. We're thinking of setting up a cron job for data synchronization between CouchDB and the MySQL database. Is there any way of doing this? Any suggestions? The one table I know about has around 625k rows, and it takes almost 5 minutes to get and post data. Our website is built with Laravel, so I just wanted you to know that for a better understanding of the situation.
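As an illustration only (not part of the original question): the read-through/write-through layer described above could look roughly like the sketch below, using MongoDB (mentioned as an alternative to CouchDB) with the official mongodb/mongodb PHP library and PDO for MySQL; the class, database, table, and column names are all hypothetical.

```php
<?php
// Illustrative read-through / write-through layer (hypothetical names):
// reads hit MongoDB first and fall back to MySQL on a miss; writes go to
// both stores so the replica stays warm between cron reconciliations.
require 'vendor/autoload.php'; // mongodb/mongodb via Composer

class ProductRepository
{
    private $mongo;  // MongoDB\Collection
    private $mysql;  // PDO

    public function __construct(MongoDB\Client $mongo, PDO $mysql)
    {
        $this->mongo = $mongo->shop->products; // assumed database/collection
        $this->mysql = $mysql;
    }

    public function find($id)
    {
        // 1) Try the fast replica first.
        $doc = $this->mongo->findOne(['_id' => (int) $id]);
        if ($doc !== null) {
            return $doc->getArrayCopy();
        }

        // 2) Miss: read from MySQL and backfill the replica.
        $stmt = $this->mysql->prepare('SELECT * FROM products WHERE id = ?');
        $stmt->execute([$id]);
        $row = $stmt->fetch(PDO::FETCH_ASSOC);
        if ($row) {
            $this->mongo->replaceOne(['_id' => (int) $id], $row, ['upsert' => true]);
        }
        return $row ?: null;
    }

    public function save(array $row)
    {
        // Write-through: MySQL stays the system of record, MongoDB gets a copy.
        $stmt = $this->mysql->prepare(
            'REPLACE INTO products (id, name, price) VALUES (?, ?, ?)'
        );
        $stmt->execute([$row['id'], $row['name'], $row['price']]);
        $this->mongo->replaceOne(['_id' => (int) $row['id']], $row, ['upsert' => true]);
    }
}
```

The cron job would then only need to reconcile rows changed since its last run (for example, by an `updated_at` column) instead of copying the whole 625k-row table each time.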
Related
I'm trying to develop an Android app that provides information on a chosen topic. All the information is stored in a MySQL database, with one table for each topic. What I want to achieve is that when the user chooses a topic, the corresponding table should be downloaded to SQLite so that it can be used offline. Also, any changes to that particular table in MySQL should be synced to the SQLite db automatically the next time the phone connects to the Internet.
I have understood how to achieve the connection using PHP and HTTP requests. What I want to know is the best logic for syncing any entries in a particular table in the MySQL database to the one in SQLite. I read about using various sync services, but I don't understand how to use them. All my tables have exactly the same schema, so is there an efficient way to achieve the sync?
I have a decent knowledge in SQL but I'm kinda new to Android.
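As an aside (not part of the original question), since the backend is PHP, one common sync logic is a delta endpoint: the app remembers the timestamp of its last successful sync and asks the server only for rows changed since then. A minimal sketch, assuming each table has an `updated_at` column and using hypothetical table names:

```php
<?php
// sync.php - hypothetical delta endpoint: returns only the rows modified
// since the app's last successful sync, so the app can upsert the changes
// into its local SQLite copy instead of re-downloading the whole table.
$pdo = new PDO('mysql:host=localhost;dbname=topics', 'user', 'pass', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

// Whitelist the topic -> table mapping so clients can't request arbitrary tables.
$tables = ['history' => 'topic_history', 'science' => 'topic_science'];
$topic  = isset($_GET['topic']) ? $_GET['topic'] : '';
$since  = isset($_GET['since']) ? $_GET['since'] : '1970-01-01 00:00:00';

if (!isset($tables[$topic])) {
    http_response_code(400);
    exit(json_encode(['error' => 'unknown topic']));
}

$stmt = $pdo->prepare(
    "SELECT * FROM `{$tables[$topic]}` WHERE updated_at > :since ORDER BY updated_at"
);
$stmt->execute([':since' => $since]);

header('Content-Type: application/json');
echo json_encode([
    'server_time' => date('Y-m-d H:i:s'),           // app stores this as its next "since"
    'rows'        => $stmt->fetchAll(PDO::FETCH_ASSOC),
]);
```

On the Android side, the app upserts each returned row into SQLite (for example with `SQLiteDatabase.insertWithOnConflict`) and stores `server_time` as the `since` value for the next request; deletions can be handled with a soft-delete flag column instead of real deletes.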
I am about to rebuild my web application to use Elasticsearch instead of MySQL for searching, but I am unsure exactly how to do so.
I watched a Laracon video on it, since my application is built on Laravel 4.2, and I will be using this wrapper to query: https://github.com/elasticsearch/elasticsearch
However, am I still going to use the MySQL database to house the data, and have ES search it? Or is it better to have ES house and query the data?
If I go the first route, do I have to do CRUD operations on both sides to keep them updated?
Can ES handle the data load that MySQL can? Meaning hundreds of millions of rows?
I'm just very skittish about starting the whole thing. I could use a little direction; it would be greatly appreciated. I have never worked with any search other than MySQL.
I would recommend keeping MySQL as the system of record and doing all CRUD operations from your application against MySQL. Then start an ElasticSearch machine and periodically move data from MySQL to ElasticSearch (only the data you need to search against).
Then if ElasticSearch goes down, you only lose the search feature - your primary data store is still ok.
ElasticSearch can be configured as a cluster and can scale very large, so it'll handle the number of rows.
To get data into Elastic, you can do a number of things:
Do an initial import (very slow, very big) and then just copy diffs with a process. You might consider something like Mule ESB to move data (http://www.mulesoft.org/).
When you write data from your app, you can write once to MySQL and also write the same data to Elastic. This provides real time data in Elastic, but of course if the second write to Elastic fails, then you'll be missing the data.
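To make that second option concrete, here is a minimal sketch of the dual write, assuming the official elasticsearch-php client and hypothetical table/index names; the MySQL write is authoritative and the Elasticsearch write is best-effort:

```php
<?php
// Hypothetical dual-write: MySQL remains the system of record, Elasticsearch
// gets a copy of the searchable fields. If the ES write fails, log it and
// let a periodic re-sync job repair the index later.
require 'vendor/autoload.php'; // elasticsearch/elasticsearch via Composer

function saveArticle(PDO $pdo, Elasticsearch\Client $es, array $article)
{
    // 1) Authoritative write.
    $stmt = $pdo->prepare('INSERT INTO articles (id, title, body) VALUES (?, ?, ?)');
    $stmt->execute([$article['id'], $article['title'], $article['body']]);

    // 2) Best-effort copy into the search index.
    try {
        $es->index([
            'index' => 'articles',      // assumed index name
            'type'  => 'article',       // ES 1.x-style type, as in the Laravel 4.2 era
            'id'    => $article['id'],
            'body'  => [
                'title' => $article['title'],
                'body'  => $article['body'],
            ],
        ]);
    } catch (Exception $e) {
        // Don't fail the user's request just because search indexing failed.
        error_log('ES index failed for article ' . $article['id'] . ': ' . $e->getMessage());
    }
}
```

A periodic job that re-indexes rows whose `updated_at` is newer than the last run can then repair any writes Elasticsearch missed, which also covers the "copy diffs" option above.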
I have a website running on the Drupal 7 CMS with a MySQL database, and I'm facing a problem with the database because I have to store a lot of large texts as BLOBs in 3 tables. At the moment each of those 3 tables is about 10 GB.
I only run 'insert' and 'select' queries on those 3 tables.
Although my server has 16 GB of RAM, I believe the database is what makes my website so slow. What are your suggestions to solve this problem? How do large websites deal with huge amounts of data?
I'm thinking of moving these 3 tables to another database, possibly on another server.
The best solution will depend a lot on the nature of your site and exactly what you're looking for, so it's very difficult to give a concise answer here.
One common approach, for sites which aren't extremely latency-sensitive, is to actually store the textual/binary data in another service (e.g., Amazon's S3), and then only keep a key to that service stored in your database. Your application can then perform a database query, retrieve the key, and either send a request to the service directly (if you want to process the BLOB server-side) or instruct the client application to download the file from the service.
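As an illustration of that approach (not from the original answer), a minimal sketch with the AWS SDK for PHP: the large text goes to S3 and MySQL only keeps the key. The bucket, table, and column names are hypothetical.

```php
<?php
// Hypothetical "store the blob elsewhere" pattern: the large text lives in S3,
// MySQL only keeps a small key pointing at it.
require 'vendor/autoload.php'; // aws/aws-sdk-php via Composer

use Aws\S3\S3Client;

$s3 = new S3Client([
    'version' => 'latest',
    'region'  => 'us-east-1',
]);
$pdo = new PDO('mysql:host=localhost;dbname=drupal', 'user', 'pass');

function storeLargeText(S3Client $s3, PDO $pdo, $nodeId, $text)
{
    $key = 'texts/' . $nodeId . '.txt';

    // 1) Put the heavy payload in S3.
    $s3->putObject([
        'Bucket' => 'my-site-large-texts',   // assumed bucket name
        'Key'    => $key,
        'Body'   => $text,
    ]);

    // 2) Keep only the reference in MySQL.
    $stmt = $pdo->prepare('REPLACE INTO large_text_refs (node_id, s3_key) VALUES (?, ?)');
    $stmt->execute([$nodeId, $key]);
}

function loadLargeText(S3Client $s3, PDO $pdo, $nodeId)
{
    $stmt = $pdo->prepare('SELECT s3_key FROM large_text_refs WHERE node_id = ?');
    $stmt->execute([$nodeId]);
    $key = $stmt->fetchColumn();

    $result = $s3->getObject([
        'Bucket' => 'my-site-large-texts',
        'Key'    => $key,
    ]);
    return (string) $result['Body'];
}
```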
I have two servers. The first server serves as a data center; it only contains the database and a REST API built with Phil Sturgeon's codeigniter-restserver: https://github.com/philsturgeon/codeigniter-restserver.
This first server basically just works with the database, and I have already implemented database caching: http://ellislab.com/codeigniter/user-guide/database/caching.html.
The second server contains the frontend, which makes requests to the first (API) server and displays the results to users on the site. E.g.: http://api.server.com/getuses?key=XXXX
Problem: the second server sends many API requests, and the first server:
- is not a fast server (nothing like Google's),
- contains a huge amount of data (2,000K rows and growing),
- uses multiple MySQL JOINs (almost 5 joins per query).
Time taken by the second server:
<< Page rendered in 6.0492 seconds >>
What I have done and what I am expecting:
I have already properly indexed and cached MySQL on the first (API) server.
No cache is enabled on the second server.
How would I cache the API response on the second server, and how would I detect that nothing has changed since the last request?
Would you suggest some other database idea (Redis, MongoDB, etc.)?
Any help would be great!!!
I think if your queries involve joins of up to 5 tables, you should consider using MongoDB to keep the associated data together, as querying it will be much faster. At the same time, you can use Redis to store session data for fast access.
Hope that helps.
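Beyond that (not from the original answer), the caching part of the question can be sketched as follows: the second server caches the raw API response in Redis for a few minutes, so most page loads never hit the API server at all. This assumes the phpredis extension; the endpoint and key names are hypothetical.

```php
<?php
// Hypothetical response cache on the second (frontend) server: keep the raw
// JSON from the API server in Redis for a few minutes, so most page loads
// never touch the slow API/MySQL path.
function fetchUsers($apiKey)
{
    $redis = new Redis();
    $redis->connect('127.0.0.1', 6379);

    $cacheKey = 'api:getuses:' . md5($apiKey);
    $cached   = $redis->get($cacheKey);
    if ($cached !== false) {
        return json_decode($cached, true);   // served from cache
    }

    // Cache miss: call the API server.
    $ch = curl_init('http://api.server.com/getuses?key=' . urlencode($apiKey));
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $body = curl_exec($ch);
    curl_close($ch);

    if ($body !== false) {
        // Keep it for 5 minutes; tune the TTL to how stale the data may be.
        $redis->setex($cacheKey, 300, $body);
    }

    return json_decode($body, true);
}
```

For real change detection rather than a fixed TTL, the API server could also send an ETag or Last-Modified header (for example derived from MAX(updated_at)), and the second server could issue conditional requests and keep its cached copy whenever it gets a 304 back.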
I have a table in FileMaker that has about 1 million+ rows and growing. It has about 30 columns. I need to display this in DataTables on my PHP page. My research online says FileMaker to PHP is super slow, so I am trying to get the data into a MongoDB collection and then send it to DataTables.
Just wanted to know: is this a good architectural decision?
If yes, is there a good way to get the data from FileMaker to MongoDB?
If you are OK with not having "live access" to the data in FileMaker, then I'd periodically export the entire dataset and import it into MongoDB. Unfortunately, that would only give you read-only access to the data, and it's a rather half-baked solution.
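As a rough sketch of that periodic export/import (not from the original answer): have FileMaker export the table to CSV on a schedule, then load the file into MongoDB with the mongodb/mongodb PHP library. The file path, database, and collection names are assumptions.

```php
<?php
// Hypothetical import of a scheduled FileMaker CSV export into MongoDB.
// Run from cron after the export completes; it rebuilds the collection so
// DataTables always reads a reasonably fresh, read-only copy.
require 'vendor/autoload.php'; // mongodb/mongodb via Composer

$client     = new MongoDB\Client();
$collection = $client->reports->records;      // assumed database/collection
$csvPath    = '/exports/records.csv';         // assumed export path
$headers    = null;
$batch      = [];

$collection->drop(); // start from a clean copy each run

$fh = fopen($csvPath, 'r');
while (($row = fgetcsv($fh)) !== false) {
    if ($headers === null) {
        $headers = $row;                      // first line = column names
        continue;
    }
    $batch[] = array_combine($headers, $row);

    if (count($batch) === 1000) {             // insert in chunks of 1000
        $collection->insertMany($batch);
        $batch = [];
    }
}
if ($batch) {
    $collection->insertMany($batch);
}
fclose($fh);
```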
If you want to keep FileMaker as your primary data storage, I'd work on making it perform better with PHP rather than work around it by introducing another piece into your infrastructure.
Displaying one million rows on a webpage is going to be slow, no matter what the backend is. Do you want to do batch fetching? Infinite scroll? You can fetch batches straight from FileMaker 500 at a time and it should perform all right.
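For reference (not from the original answer), batched fetching straight from FileMaker could look roughly like this with the FileMaker API for PHP; the layout, field names, and page size are assumptions:

```php
<?php
// Hypothetical server-side paging: DataTables asks for one page at a time,
// and we only pull that window of records from FileMaker.
require 'FileMaker.php'; // FileMaker API for PHP

$fm = new FileMaker('MyDatabase', 'fm-host.example.com', 'user', 'pass');

$pageSize = 500;                                        // records per batch
$page     = isset($_GET['page']) ? (int) $_GET['page'] : 0;

$cmd = $fm->newFindAllCommand('RecordsLayout');         // assumed layout name
$cmd->setRange($page * $pageSize, $pageSize);           // skip, max

$result = $cmd->execute();
if (FileMaker::isError($result)) {
    http_response_code(500);
    exit(json_encode(['error' => $result->getMessage()]));
}

$rows = [];
foreach ($result->getRecords() as $record) {
    $rows[] = [
        'id'   => $record->getRecordId(),
        'name' => $record->getField('Name'),            // assumed field name
    ];
}

header('Content-Type: application/json');
echo json_encode(['page' => $page, 'rows' => $rows]);
```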