I have a table in FileMaker that has about 1 million+ rows and is growing. It has about 30 columns. I need to display this in DataTables on my PHP page. My research online says FileMaker to PHP is super slow, so I am trying to get the data into a MongoDB collection and then send it to DataTables.
Just wanted to know if it's a good architectural decision?
If yes, is there a good way to get the data from FileMaker to MongoDB?
If you are OK with not having "live access" to the data in FileMaker, then I'd periodically export the entire dataset and import it into MongoDB. Unfortunately, that would only give you read-only access to the data and is a very poor, half-baked solution.
If you want to keep FileMaker as your primary data storage, I'd work on making it perform better with PHP rather than attempt to work around it by introducing another piece into your infrastructure.
Displaying one million rows on a webpage is going to be slow, no matter what the backend is. Do you want to do batch fetching? Infinite scroll? You can fetch batches straight from FileMaker, 500 records at a time, and it should perform all right.
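A minimal sketch of that kind of batched fetch with the FileMaker API for PHP; the database, host, layout, field, and credential names below are assumptions, so adjust them to your own solution:

```php
<?php
// Fetch one window of records at a time from FileMaker instead of all rows.
// 'SalesDB', 'web_listing', 'Name', the host, and credentials are hypothetical.
require_once 'FileMaker.php';

$fm = new FileMaker('SalesDB', 'fm.example.com', 'webuser', 'secret');

$page     = isset($_GET['page']) ? max(1, (int) $_GET['page']) : 1;
$pageSize = 500;

$find = $fm->newFindAllCommand('web_listing');
// setRange(skip, max) asks FileMaker for only this page of records.
$find->setRange(($page - 1) * $pageSize, $pageSize);

$result = $find->execute();
if (FileMaker::isError($result)) {
    http_response_code(500);
    exit('FileMaker error: ' . $result->getMessage());
}

$rows = [];
foreach ($result->getRecords() as $record) {
    $rows[] = [
        'id'   => $record->getRecordId(),
        'name' => $record->getField('Name'), // example field
    ];
}

header('Content-Type: application/json');
echo json_encode(['data' => $rows]);
```

DataTables' server-side processing mode can then request one such window per draw instead of the whole million-row table.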
Related
We have one big MySQL database, and when making requests and getting data from that database, it takes very long. So what we want is a CouchDB or MongoDB cluster that serves as a slave and as a layer between MySQL and the website. When we make a request, the data should not come from the database itself but from the CouchDB replica, and when we post data, it should go to CouchDB. We are thinking of setting up a cron job for data synchronization between CouchDB and the MySQL database. Is there any way of doing this? Any suggestions? I only know one table of the database, which is about 625k rows long, and it takes almost 5 minutes to get and post the data. Our website is built with Laravel, so I just wanted you to know that for a better understanding of the situation.
I've recently implemented Redis in one of my Laravel projects. It's currently more of a technical exercise than production use, as I want to see what it's capable of.
What I've done is created a list of payment transactions. What I'm pushing to the list is the payload which I receive from a webhook every time a transaction is processed. The payload is essentially an object containing all the information to do with that particular transaction.
I've created a VueJS frontend that then displays all the data in a table and has pagination, so it shows 10 rows at a time.
Initially this was working super quickly, but now that the list contains 30,000 rows, which is about 11 MB worth of data, the request is taking about 11 seconds.
I think the issue here is that I'm using a list and am fetching all the rows from the list using LRANGE.
The reason I used a list was because it has the LPUSH command so that latest transactions go to the start of the list.
I decided to do a test where I got all the data from the list and outputted the value to a blank page and this took about the same time so it's not an issue with Vue, Axios, etc.
Firstly, is this read speed normal? I've always heard that Redis is blazing fast.
Secondly, is there a better way to increase read performance when using Redis?
Thirdly, am I using the wrong data type?
In time I need to be able to store 1m rows of data.
As I understand it, you fetch all 30,000 rows on every transaction update and then paginate them on the frontend. In my opinion, the right strategy is to fetch lighter data packs in each request.
For example, use Laravel pagination in response to your request, fetching only the requested page from the Redis list (see the sketch below).
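A rough sketch of that idea with Laravel's Redis facade, assuming the list key is `transactions` (rename it to whatever you use); only the requested page is read with LRANGE instead of the whole list:

```php
<?php
// routes/web.php - paginate the Redis list server-side.
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Redis;
use Illuminate\Support\Facades\Route;

Route::get('/transactions', function (Request $request) {
    $page    = max(1, (int) $request->query('page', 1));
    $perPage = 10;

    $start = ($page - 1) * $perPage;
    $stop  = $start + $perPage - 1;

    // LPUSH keeps the newest transactions at the head of the list,
    // so indexes 0..9 are always the latest page.
    $items = Redis::lrange('transactions', $start, $stop);
    $total = Redis::llen('transactions');

    return response()->json([
        'data'     => array_map('json_decode', $items),
        'total'    => $total,
        'page'     => $page,
        'per_page' => $perPage,
    ]);
});
```

This keeps each response to roughly ten payloads (a few KB) instead of the full 11 MB list.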
In my opinion:
Firstly: as you know, Redis really is blazing fast, because Redis keeps its data in memory. If reading 11 MB takes about 11 seconds, you should check your bandwidth.
Secondly: I'm sorry, I don't know how to increase read performance in this environment.
Thirdly: I think your choice of data type is OK.
So, check the bandwidth to your Redis server first.
I am about to rebuild my web application to use Elasticsearch instead of MySQL for searching purposes, but I am unsure exactly how to do so.
I watched a Laracon video on it, since my application is built on Laravel 4.2, and I will be using this wrapper to query: https://github.com/elasticsearch/elasticsearch
However, am I still going to use the MySQL database to house the data and have ES search it? Or is it better to have ES house and query the data?
If I go the first route, do I have to do CRUD operations on both sides to keep them updated?
Can ES handle the data load that MySQL can? Meaning hundreds of millions of rows?
I'm just very skittish about starting the whole thing. I could use a little direction; it would be greatly appreciated. I have never worked with any search other than MySQL.
I would recommend keeping MySQL as the system of record and do all CRUD operations from your application against MySQL. Then start an ElasticSearch machine and periodically move data from MySQL to ElasticSearch (only the data you need to search against).
Then if ElasticSearch goes down, you only lose the search feature - your primary data store is still ok.
ElasticSearch can be configured as a cluster and can scale very large, so it'll handle the number of rows.
To get data into Elastic, you can do a number of things:
Do an initial import (very slow, very big) and then just copy diffs with a process. You might consider something like Mule ESB to move data (http://www.mulesoft.org/).
When you write data from your app, you can write once to MySQL and also write the same data to Elastic. This provides real time data in Elastic, but of course if the second write to Elastic fails, then you'll be missing the data.
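A rough sketch of that dual-write approach with the elasticsearch-php client; the index, table, and field names here are made up, and older client versions also expect a 'type' parameter on the index call:

```php
<?php
// MySQL stays the system of record; Elasticsearch gets a searchable copy.
// The 'products' table/index and fields are hypothetical names.
require 'vendor/autoload.php';

use Elasticsearch\ClientBuilder;

$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'secret');
$es  = ClientBuilder::create()->setHosts(['localhost:9200'])->build();

function saveProduct(PDO $pdo, $es, array $product)
{
    // 1. Write to the primary store first; if this fails, nothing is indexed.
    $stmt = $pdo->prepare(
        'INSERT INTO products (id, name, description) VALUES (:id, :name, :description)'
    );
    $stmt->execute($product);

    // 2. Then index the same data for search. If this call fails, log it so a
    //    later re-index job can catch Elasticsearch up with MySQL.
    try {
        $es->index([
            'index' => 'products',
            'id'    => $product['id'],
            'body'  => $product,
        ]);
    } catch (Exception $e) {
        error_log('ES index failed for product ' . $product['id'] . ': ' . $e->getMessage());
    }
}

saveProduct($pdo, $es, [
    'id'          => 42,
    'name'        => 'Example product',
    'description' => 'Searchable text goes here',
]);
```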
We are developing an iOS/Android application which downloads large amounts of data from a server.
We're using JSON to transfer data between the server and client devices.
Recently the size of our data increased a lot (about 30,000 records).
When fetching this data, the server request gets timed out and no data gets fetched.
Can anyone suggest the best method to achieve a fast transfer of data?
Is there any method to prepare data initially and download data later?
Is there any advantage to using multiple databases on the device (SQLite DBs) and performing parallel insertion into them?
Currently we are downloading/uploading only changed data (using UUID and time-stamp).
Is there any best approach to achieve this efficiently?
---- Edit -----
I think it's not only a problem with the MySQL records: at peak times multiple devices connect to the server to access data, so connections also end up waiting. We are using a high-performance server. I am mainly looking for a solution to handle this sync on the device. Is there any good method to simplify the sync or make it faster using multi-threading, multiple SQLite DBs, etc.? Or data compression, using views, or ...?
A good way to achieve this would probably be to download no data at all.
I guess you won't be showing these 30,000 rows on your client, so why download them in the first place?
It would probably be better to create an API on your server which would help the mobile devices to communicate with the database so the clients would only download the data they actually need / want.
Then, with a cache system on the mobile side, you could make sure that clients don't download the same thing every time and that content they have already seen is available offline.
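A minimal sketch of such an API endpoint in plain PHP, returning only records changed since the client's last sync timestamp and only one page per request; the table and column names (records, uuid, payload, updated_at) are assumptions:

```php
<?php
// Delta-sync endpoint: the device sends its last sync timestamp and a page
// number, and receives only the records that changed since then.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'secret');

$since   = $_GET['since'] ?? '1970-01-01 00:00:00';
$page    = max(1, (int) ($_GET['page'] ?? 1));
$perPage = 500;

$stmt = $pdo->prepare(
    'SELECT uuid, payload, updated_at
       FROM records
      WHERE updated_at > :since
   ORDER BY updated_at
      LIMIT :limit OFFSET :offset'
);
$stmt->bindValue(':since', $since);
$stmt->bindValue(':limit', $perPage, PDO::PARAM_INT);
$stmt->bindValue(':offset', ($page - 1) * $perPage, PDO::PARAM_INT);
$stmt->execute();

header('Content-Type: application/json');
echo json_encode([
    'page' => $page,
    'data' => $stmt->fetchAll(PDO::FETCH_ASSOC),
]);
```

The device keeps requesting increasing page numbers until an empty page comes back, then stores the new sync timestamp for the next run.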
When fetching this data, the server request gets timed out and no data gets fetched.
Are you talking only about reads, or about writes too?
If you are talking about write access as well: are the 30,000 records the result of a single insert/update? Are you using a transactional engine like InnoDB? If so, are your queries wrapped in a single transaction? Having autocommit mode enabled can lead to massive performance issues:
Wrap several modifications into a single transaction to reduce the number of flush operations. InnoDB must flush the log to disk at each transaction commit if that transaction made modifications to the database. The rotation speed of a disk is typically at most 167 revolutions/second (for a 10,000RPM disk), which constrains the number of commits to the same 167th of a second if the disk does not “fool” the operating system.
Source
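For example, a minimal PDO sketch that wraps a whole batch of inserts in one transaction (the table and columns are placeholders):

```php
<?php
// One transaction for the whole batch: InnoDB flushes the log once at commit
// instead of once per row. Table/column names are hypothetical.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'secret', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

$incomingRows = json_decode(file_get_contents('php://input'), true); // the uploaded records
$stmt = $pdo->prepare('INSERT INTO records (uuid, payload) VALUES (:uuid, :payload)');

$pdo->beginTransaction();
try {
    foreach ($incomingRows as $row) {
        $stmt->execute([
            'uuid'    => $row['uuid'],
            'payload' => json_encode($row),
        ]);
    }
    $pdo->commit();   // a single flush for all 30,000 rows
} catch (Exception $e) {
    $pdo->rollBack();
    throw $e;
}
```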
Can anyone suggest the best method to achieve a fast transfer of data?
How complex is your query? Inner or outer joins, correlated or non-correlated subqueries, etc.? Use EXPLAIN to inspect its efficiency. Read about EXPLAIN.
Also, take a look at your table design: Have you made use of normalization? Are you indexing properly?
Is there any method to prepare data initially and download data later?
What do you mean by that? Maybe temporary tables could do the trick.
But without knowing any details of your project, downloading 30,000 records onto a mobile device at one time sounds weird to me. Probably your application/DB design needs to be reviewed.
Anyway, for any data that does not need to be updated/inserted directly in the database, use a local SQLite database on the mobile device. This is much faster, as SQLite is a file-based DB and the data doesn't need to be transferred over the network.
Situation
I am working on a project (CodeIgniter - PHP, MySQL) that sources data from an Ad_API and displays those listings on my site. The API talks JSON with loads of data (~1 KB) about each single entity. I show around 20-30 such entities per page, so that's what I request from the server (about ~20 KB of data). The server returns data in a random order, and data cannot be requested again for a single entity by supplying an identifier.
Problem
I now have to show more results (200+) with pagination. If it were a MySQL DB I was querying, things would be a breeze, but here I can't.
My Argument to Solutions
jQuery pagination: yes, that is an option, but again, I will have to load all 200+ data entities into the user's browser at once and then paginate them using jQuery.
So does anyone have any better solution. Please read the situation carefully before answering because this scenario is very different from the ones we come across in daily life.
How about storing the API data in a MySQL table temporarily, just for pagination purposes?
You may also be interested in MongoDB: store the JSON data in a MongoDB collection and paginate from it.
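A rough sketch of the temporary MySQL approach; the api_cache table and the fetchFromAdApi() helper are hypothetical stand-ins for your own schema and API call. The full 200+ entities are fetched once per session, cached, and then paged locally with LIMIT/OFFSET:

```php
<?php
// Cache the Ad_API results once, then paginate locally from MySQL.
// 'api_cache' and fetchFromAdApi() are hypothetical.
session_start();
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'secret');

$sessionId = session_id();
$page      = max(1, (int) ($_GET['page'] ?? 1));
$perPage   = 20;

if (empty($_SESSION['api_cached'])) {
    // One-time fetch of all 200+ entities from the API for this session.
    $entities = fetchFromAdApi(); // returns an array of decoded JSON entities
    $insert = $pdo->prepare(
        'INSERT INTO api_cache (session_id, entity_json) VALUES (:sid, :json)'
    );
    foreach ($entities as $entity) {
        $insert->execute(['sid' => $sessionId, 'json' => json_encode($entity)]);
    }
    $_SESSION['api_cached'] = true;
}

$select = $pdo->prepare(
    'SELECT entity_json FROM api_cache
      WHERE session_id = :sid
      LIMIT :limit OFFSET :offset'
);
$select->bindValue(':sid', $sessionId);
$select->bindValue(':limit', $perPage, PDO::PARAM_INT);
$select->bindValue(':offset', ($page - 1) * $perPage, PDO::PARAM_INT);
$select->execute();

$pageEntities = array_map('json_decode', $select->fetchAll(PDO::FETCH_COLUMN));
```

A cron job or an expiry column can clear out stale sessions; the MongoDB variant is the same idea, storing each entity as a document and paginating with skip() and limit().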