What's the best approach to move data between 3 to 5 remote MySQL sources plus one local one?
We are setting up a "centralized" panel to control these (I know this isn't the best/most secure way to do it), but we don't have any sensitive data anyway and we work over HTTPS, so this is the fastest way to get there.
Currently, when we run inserts/selects against these, we get a huge load. Is it better to build an API for each site to handle the select/update/insert per node (this might reduce/share the load between servers), or to just keep connecting remotely to the MySQL servers and run everything directly against MySQL?
Of course we would have to build this API to fit each site and talk to the other(s); that way each server would use only its local MySQL and receive all data from the other sources through the API.
There might be SELECT queries returning 1,000-2,000 rows, but these are rare, so from 5 sources this could be up to 8,000 rows. There is likely to be a lot of grouping, mostly stacking values, i.e., summing the integer values to get a total across all sources.
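For the "sum across all sources" case specifically, one option (a minimal sketch; the hostnames and the `stats` table are hypothetical) is to let each MySQL server aggregate its own data, so only one tiny row crosses the network per node, whether you query the nodes directly or hide the same query behind a per-site API:

```php
<?php
// Hypothetical hosts and `stats` table; each node aggregates locally,
// so only a single summed value travels over the network per source.
$hosts = ['localhost', 'db1.example.com', 'db2.example.com'];

$total = 0;
foreach ($hosts as $host) {
    $pdo = new PDO("mysql:host=$host;dbname=app", 'user', 'pass');
    // Let each MySQL server do its own grouping/summing.
    $total += (int)$pdo->query('SELECT SUM(value) FROM stats')->fetchColumn();
}
echo "Total across all sources: $total\n";
```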
I have a website running on the Drupal 7 CMS with a MySQL database, and I'm facing a problem with the database: I have to store a lot of large texts as BLOBs in 3 tables, and each of those 3 tables is currently about 10 GB.
I only use 'insert' and 'select' queries on those 3 tables.
Although my server has 16 GB of RAM, I believe the database is what makes my website so slow. What are your suggestions to solve this problem? How do large websites deal with huge-data problems?
I'm thinking of putting these 3 tables in another database, possibly on another server?
The best solution will depend a lot on the nature of your site and exactly what you're looking for, so it's very difficult to give a concise answer here.
One common approach, for sites which aren't extremely latency-sensitive, is to actually store the textual/binary data in another service (e.g., Amazon's S3), and then only keep a key to that service stored in your database. Your application can then perform a database query, retrieve the key, and either send a request to the service directly (if you want to process the BLOB server-side) or instruct the client application to download the file from the service.
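As a rough illustration of the write path (a sketch assuming the AWS SDK for PHP; the bucket name and the `documents` table are hypothetical, and `$pdo` is an open PDO connection):

```php
<?php
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3 = new S3Client(['region' => 'us-east-1', 'version' => 'latest']);

$key = bin2hex(random_bytes(16));       // generated object key
$s3->putObject([
    'Bucket' => 'my-blob-bucket',
    'Key'    => $key,
    'Body'   => $largeText,             // the BLOB content
]);

// Only the small key goes into MySQL, keeping the tables lean.
$pdo->prepare('INSERT INTO documents (title, s3_key) VALUES (?, ?)')
    ->execute([$title, $key]);
```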
We are developing an iOS/Android application which downloads large amounts of data from a server.
We're using JSON to transfer data between the server and client devices.
Recently the size of our data increased a lot (about 30,000 records).
When fetching this data, the server request times out and no data gets fetched.
Can anyone suggest the best method to achieve a fast transfer of data?
Is there any method to prepare data initially and download data later?
Is there any advantage to using multiple databases on the device (SQLite DBs) and performing parallel inserts into them?
Currently we download/upload only the changed data (using a UUID and timestamp).
Is there a best approach to achieve this efficiently?
---- Edit -----
I think it's not only a problem of the MySQL records; at peak times multiple devices connect to the server to access data, so connections also end up waiting. We are using a high-performance server. I am mainly looking for a solution to handle this sync on the device. Is there any good method to simplify the sync or make it faster, using multithreading, multiple SQLite DBs, etc.? Or data compression, using views, or ...?
A good way to achieve this would probably be to download no data at all.
I guess you won't be showing these 30k rows on your client, so why download them in the first place?
It would probably be better to create an API on your server that mediates between the mobile devices and the database, so the clients only download the data they actually need/want.
Then, with a cache system on the mobile side, you can make sure that clients won't download the same thing every time and that content they have already seen stays available offline.
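Since you already track a UUID and timestamp per record, such an endpoint could hand out only the delta, in pages. A minimal sketch (the `items` table and its columns are hypothetical):

```php
<?php
// Minimal sketch of a delta-sync endpoint (hypothetical `items` table
// with `uuid`, `payload` and `updated_at` columns). Clients send the
// timestamp of their last successful sync plus a page number, and get
// back only the rows that changed since then, in small pages.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

$since  = $_GET['since'] ?? '1970-01-01 00:00:00';
$limit  = 500;                                // keep each response small
$offset = (int)($_GET['page'] ?? 0) * $limit; // cast keeps the SQL safe

$stmt = $pdo->prepare(
    "SELECT uuid, payload, updated_at
       FROM items
      WHERE updated_at > ?
      ORDER BY updated_at
      LIMIT $limit OFFSET $offset"
);
$stmt->execute([$since]);

header('Content-Type: application/json');
echo json_encode($stmt->fetchAll(PDO::FETCH_ASSOC));
```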
When fetching this data, the server request times out and no data gets fetched.
Are you talking only about reads or writes, too?
If you are talking about write access as well: are the 30,000 records the result of a single insert/update? Are you using a transactional engine such as InnoDB? If so, are your queries wrapped in a single transaction? Having autocommit mode enabled can lead to massive performance issues:
Wrap several modifications into a single transaction to reduce the number of flush operations. InnoDB must flush the log to disk at each transaction commit if that transaction made modifications to the database. The rotation speed of a disk is typically at most 167 revolutions/second (for a 10,000RPM disk), which constrains the number of commits to the same 167th of a second if the disk does not “fool” the operating system.
Source
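A minimal sketch of that with PDO (the `records` table is hypothetical, and `$rows` stands for your batch of data):

```php
<?php
// Wrap many inserts in one transaction so InnoDB flushes the log once
// per commit instead of once per row.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$stmt = $pdo->prepare('INSERT INTO records (uuid, payload) VALUES (?, ?)');

$pdo->beginTransaction();
try {
    foreach ($rows as $row) {       // e.g. all 30,000 records
        $stmt->execute([$row['uuid'], $row['payload']]);
    }
    $pdo->commit();                 // a single log flush happens here
} catch (Exception $e) {
    $pdo->rollBack();
    throw $e;
}
```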
Can anyone suggest the best method to achieve a fast transfer of data?
How complex is your query design? Inner or outer joins, correlated or non-correlated subqueries, etc.? Use EXPLAIN to inspect the efficiency. Read about EXPLAIN
Also, take a look at your table design: Have you made use of normalization? Are you indexing properly?
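For example, a quick way to inspect a plan from PHP (the query itself is just an illustration):

```php
<?php
// Run EXPLAIN on the suspect query and inspect the `type`, `key` and
// `rows` columns of the plan; the tables here are illustrative only.
$plan = $pdo->query(
    'EXPLAIN
     SELECT i.uuid, i.payload
       FROM items i
       JOIN categories c ON c.id = i.category_id
      WHERE i.updated_at > NOW() - INTERVAL 1 DAY'
)->fetchAll(PDO::FETCH_ASSOC);

print_r($plan);  // type=ALL or key=NULL usually points to a missing index
```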
Is there any method to prepare data initially and download data later?
How do you mean that? Maybe temporary tables could do the trick.
But without knowing any details of your project, downloading 30,000 records to a mobile device at one time sounds weird to me. Probably your application/DB design needs to be reviewed.
Anyway, for any data that does not need to be updated/inserted directly in the remote database, use a local SQLite database on the mobile device. This is much faster, as SQLite is a file-based DB and the data doesn't need to be transferred over the net.
I have two servers. The first server serves as a data center which only contains the database and a REST API by Phil Sturgeon: https://github.com/philsturgeon/codeigniter-restserver.
This first server basically just works with the database; I have already implemented database caching: http://ellislab.com/codeigniter/user-guide/database/caching.html.
The second server contains the frontend, which makes requests to the first (API) server and displays the results to users, e.g. http://api.server.com/getuses?key=XXXX
Problem: The second server sends many API requests, and the first server:
- is not a fast server like Google's;
- contains a huge amount of data (2,000K rows, and we're expecting more);
- uses queries with multiple MySQL JOINs (almost 5 joins per query).
Time taken by the second server:
<< Page rendered in 6.0492 seconds >>
What I have done and what I am expecting:
- I have already indexed and cached MySQL properly on the first (API) server.
- No cache is enabled on the second server.
- How would I cache the API response on the second server, and how can I tell that nothing has changed since the last request?
- Would you suggest some other idea on the database side (Redis, MongoDB, etc.)?
Any help would be great!!!
I think if your queries involve joins of up to 5 tables, you should consider using MongoDB for keeping the associated data together, as querying it will be much faster. At the same time you can use Redis to store session data for fast access.
Hope that helps.
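On the caching question itself, a minimal sketch for the second server (assuming the phpredis extension; the 60-second TTL is arbitrary):

```php
<?php
// Cache rendered API responses in Redis on the second server so repeated
// page loads don't hit the API/MySQL server at all.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$url      = 'http://api.server.com/getuses?key=XXXX';
$cacheKey = 'api:' . md5($url);

$body = $redis->get($cacheKey);
if ($body === false) {                    // cache miss: call the API once
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $body = curl_exec($ch);
    curl_close($ch);
    $redis->setEx($cacheKey, 60, $body);  // keep it for 60 seconds
}
echo $body;
```

To detect "no change since the last request" more precisely, the API server could also emit an ETag or Last-Modified header and answer If-None-Match/If-Modified-Since requests with a 304, so revalidation costs almost nothing.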
I have a large dataset of around 600,000 values that need to be compared, swapped, etc. on the fly for a web app. The entire dataset must be loaded, since some calculations require skipping values, comparing out of order, and so on.
However, each value is only 1 byte.
I considered loading it as a giant JSON array, but this page makes me think that might not work dependably: http://www.ziggytech.net/technology/web-development/how-big-is-too-big-for-json/
At the same time, forcing the server to load it all for every request would be a waste of server resources, since the clients can do the number crunching just as easily.
So I guess my question is this:
1) Is this possible to do reliably in jQuery/JavaScript, and if so, how?
2) If jQuery/JavaScript is not the better option, what would be the best way to do this in PHP (reading in files vs. giant arrays via include)?
Thanks!
I know Apache Cordova can make sql queries.
http://docs.phonegap.com/en/2.7.0/cordova_storage_storage.md.html#Storage
I know it's PhoneGap, but it works in desktop browsers (at least all the ones I've used for phone app development).
So my suggestion:
Mirror your database in each user's local Cordova database, then run all the SQL queries you want!
Some tips:
- Transfer data from your server to the web app via JSON.
- Break the data requests down into a few parts. That way you can easily provide a progress bar instead of making the user wait for the entire database to download.
- Create a table with one entry that keeps the current version of your database, and check this table before you send all that data. Change it each time you want to 'force' an update. This keeps the user's database up to date and lowers bandwidth.
If you need a push in the right direction I have done this before.
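For the version-table tip, the server side could look roughly like this (a sketch; the `db_version` and `items` tables are hypothetical):

```php
<?php
// The client remembers the version it last synced and only re-downloads,
// chunk by chunk, when the server's version differs.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

$serverVersion = (int)$pdo->query('SELECT version FROM db_version')->fetchColumn();
$clientVersion = (int)($_GET['version'] ?? -1);

header('Content-Type: application/json');
if ($clientVersion === $serverVersion) {
    echo json_encode(['upToDate' => true]);       // nothing to download
} else {
    $offset = (int)($_GET['chunk'] ?? 0) * 1000;  // cast keeps SQL safe
    $rows = $pdo->query("SELECT * FROM items LIMIT 1000 OFFSET $offset")
                ->fetchAll(PDO::FETCH_ASSOC);
    echo json_encode([
        'upToDate' => false,
        'version'  => $serverVersion,
        'rows'     => $rows,    // one part; the client shows progress
    ]);
}
```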
I have a site that accepts entries from users. I want to create a graph that displays the entries over time. Is it more efficient to make 24 calls to the database and have SQL return the number of entries per hour, or should I just do one call, return all the entries, and organize them in PHP?
Depends on the data, the database schema and the query.
Usually the less queries you can make, the better.
If it's still slow after optimising the query, cache the result in PHP?
I think it depends on your setup: the database, whether the database is on the same machine as the web server, traffic, and so on. Each call adds some overhead on the database server, but then you don't need to sort on the web server. I would suggest testing it with a loop.
OK, let's compare:
1) Query the database for all the values. Return a large chunk of data across your network to the application server. Parse all of that data through the client's DB interface. Build your own data structures to store the data the way you want it. Write, document, and maintain client code to loop over the detailed data, adding/updating your data structure with each row.
2) Query the database for the data you want in the format you want it. Let the highly tuned database create the summary buckets. Return less data across the network. Parse less data on the app server. Use the data exactly as it was returned.
There's no "it depends". Use the database.
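As a minimal sketch of the second approach (the `entries` table and its `created_at` datetime column are hypothetical):

```php
<?php
// MySQL builds the 24 hourly buckets itself, so one round trip returns
// at most 24 tiny rows, ready to plot.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

$stmt = $pdo->query(
    "SELECT HOUR(created_at) AS hour, COUNT(*) AS entries
       FROM entries
      WHERE created_at >= CURDATE()
      GROUP BY HOUR(created_at)
      ORDER BY hour"
);

foreach ($stmt as $row) {   // feed straight into the graph
    printf("%02d:00 - %d entries\n", $row['hour'], $row['entries']);
}
```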