I have two servers. The first server acts as a data center: it contains only the database and a REST API built with Phil Sturgeon's codeigniter-restserver (https://github.com/philsturgeon/codeigniter-restserver).
This first server basically just works with the database; I have already implemented database caching (http://ellislab.com/codeigniter/user-guide/database/caching.html).
The second server contains the frontend, which makes requests to the first (API) server and displays the results to users, e.g. http://api.server.com/getuses?key=XXXX
Problem: the second server sends many API requests, and the first server
- is not a fast server like Google's;
- contains a huge amount of data (2,000K rows, and growing);
- uses queries with multiple MySQL JOINs (almost 5 joins per query).

The second server's page load time:
<< Page rendered in 6.0492 seconds >>
What I have done and what I am expecting:
I have already indexed and cached MySQL properly on the first (API) server. No cache is enabled on the second server.
How would I cache the API response on the second server, and detect that there was no change made since the last request?
Would you suggest some other datastore (Redis, MongoDB, etc.)?
Any help would be great!
I think that if your tables involve join queries across up to 5 tables, you should consider using MongoDB to keep the associated data together, as querying it will be much faster. At the same time, you can use Redis to store session data for fast access.
Hope that helps.
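For the caching part of the question, here is a minimal sketch on the second (frontend) server, assuming the phpredis extension and a local Redis instance (the 60-second TTL and the cached_api_get() helper name are just placeholders):

```php
<?php
// Cache API responses in Redis on the frontend server.
// Assumes the phpredis extension and Redis running locally.
function cached_api_get($url, $ttl = 60)
{
    $redis = new Redis();
    $redis->connect('127.0.0.1', 6379);

    $cacheKey = 'api:' . md5($url);

    // Serve from cache if this URL was fetched recently.
    $cached = $redis->get($cacheKey);
    if ($cached !== false) {
        return json_decode($cached, true);
    }

    // Cache miss: call the API server and keep the raw response for $ttl seconds.
    $response = file_get_contents($url);
    if ($response === false) {
        return null; // API unreachable and nothing cached
    }
    $redis->setex($cacheKey, $ttl, $response);

    return json_decode($response, true);
}

// Usage on the second server:
$users = cached_api_get('http://api.server.com/getuses?key=XXXX', 60);
```

Detecting that nothing has changed since the last request would additionally require the API server to send an ETag or Last-Modified header that the frontend can revalidate; the sketch above simply relies on a short TTL instead.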
Related
What's the best approach to move data between 3 to 5 remote MySQL sources plus one local one?
We are setting up a "centralized" panel to control these (I know this isn't the best/most secure way to do it); we don't have any kind of sensitive data anyway and we work over HTTPS, so this is the fastest way to achieve this.
Currently, when we do inserts/selects against these, we get a huge load. Is it better to build an API for each site to handle the select/update/insert per node (this might reduce/share the load between servers)? Or should we just keep connecting remotely to the MySQL servers and run everything directly against MySQL?
Of course we would have to build this API to fit each site and talk to the other(s); this way each server would use just its local MySQL and receive all data from the other sources through the API.
There might be SELECT queries returning 1,000-2,000 rows, but these are rare, so across 5 sources this could be up to 8,000 rows. There is possibly a lot of grouping of data, mostly just stacking values, i.e. adding up the integer values to get a total from all sources.
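A rough sketch of the per-node API idea (the endpoint URLs, JSON shape and stats_total field are made up): each node exposes a small summary endpoint backed by its local MySQL, and the central panel only adds up the returned totals instead of pulling thousands of rows over remote MySQL connections:

```php
<?php
// Hypothetical node endpoints; each node's API runs the grouping query
// against its own local MySQL and returns a small JSON summary.
$nodes = array(
    'https://node1.example.com/api/stats?key=XXXX',
    'https://node2.example.com/api/stats?key=XXXX',
    'https://node3.example.com/api/stats?key=XXXX',
);

$grandTotal = 0;
foreach ($nodes as $url) {
    $json = file_get_contents($url);
    if ($json === false) {
        continue; // node unreachable; skip it
    }
    $data = json_decode($json, true);
    // Assumed response shape: {"stats_total": 1234}
    $grandTotal += isset($data['stats_total']) ? (int) $data['stats_total'] : 0;
}

echo "Total from all sources: " . $grandTotal . "\n";
```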
I am working on a website that needs to serve multiple requests from the same table simultaneously. We made a simple index page in CakePHP which draws some data from the database (10 rows, to be precise), and a colleague executed a test simulating 1000 users viewing the same page at the same time, meaning that 1000 identical requests would be issued to the database. The thing is that at around 500 requests, the database stopped being responsive, everything just froze and we had to kill the processes.
What comes to mind is that each and every request is executed on its own connection, and this would explain why the MySQL server was overwhelmed. From a few searches online, and on SO, I can see that PHP does not support connection pooling natively, as can be done in a Java application, for instance. Having based our app on CakePHP 2.5.3, however, I would like to think that there is some underlying mechanism that overcomes these limitations. Perhaps I am not doing something right?
Any suggestion is welcome, I just want to make sure to exhaust every possible solution.
If the results are going to be the same for each query, you can cache the query result, so repeated requests will not hit the database. Try this plugin:
https://github.com/ndejong/CakephpAutocachePlugin
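If you'd rather not pull in a plugin, a minimal sketch of the same idea with CakePHP 2.x's built-in Cache class (the model, method and cache key names are made up) could look like this:

```php
<?php
// app/Model/Article.php -- hypothetical model wrapping a cached find.
App::uses('AppModel', 'Model');

class Article extends AppModel {

    // Returns the 10 rows shown on the index page, hitting MySQL
    // only when the cache entry has expired.
    public function getIndexRows() {
        $rows = Cache::read('index_rows', 'default');
        if ($rows === false) {
            $rows = $this->find('all', array('limit' => 10));
            Cache::write('index_rows', $rows, 'default');
        }
        return $rows;
    }
}
```

With something like this, most of the 1000 simultaneous requests would be served from the cache instead of each one opening its own query against MySQL.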
We are developing an iOS/Android application which downloads large amounts of data from a server.
We're using JSON to transfer data between the server and client devices.
Recently the size of our data increased a lot (about 30000 records).
When fetching this data, the server request gets timed out and no data gets fetched.
Can anyone suggest the best method to achieve a fast transfer of data?
Is there any method to prepare data initially and download data later?
Is there any advantage to using multiple databases on the device (SQLite DBs) and performing parallel insertions into them?
Currently we are downloading/uploading only changed data (using UUID and time-stamp).
Is there any best approach to achieve this efficiently?
---- Edit -----
I think it's not only a problem with the MySQL records; at peak times multiple devices connect to the server to access data, so connections also end up waiting. We are using a high-performance server. I am mainly looking for a solution to handle this sync on the device. Is there any good method to simplify the sync or make it faster, using multi-threading, multiple SQLite DBs, etc.? Or data compression, using views, or something else?
A good way to achieve this would probably be to download no data at all.
I guess you won't be showing all of these 30k lines to your client at once, so why download them in the first place?
It would probably be better to create an API on your server which helps the mobile devices communicate with the database, so the clients only download the data they actually need/want.
Then, with a cache system on the mobile side, you can make sure that clients won't download the same thing every time and that content they have already seen is available offline.
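As an illustration, a minimal sketch of such a paged endpoint (the table and column names are hypothetical), so a device only pulls the slice it is about to display instead of all 30,000 records:

```php
<?php
// Hypothetical paged endpoint, e.g. GET /records.php?page=3&per_page=100
$page    = isset($_GET['page'])     ? max(1, (int) $_GET['page'])               : 1;
$perPage = isset($_GET['per_page']) ? min(500, max(1, (int) $_GET['per_page'])) : 100;
$offset  = ($page - 1) * $perPage;

$pdo = new PDO('mysql:host=localhost;dbname=app;charset=utf8', 'user', 'secret');

// Both values are already cast to int, so they are safe to inline.
$sql = sprintf(
    'SELECT id, title, updated_at FROM records ORDER BY updated_at DESC LIMIT %d OFFSET %d',
    $perPage,
    $offset
);
$rows = $pdo->query($sql)->fetchAll(PDO::FETCH_ASSOC);

header('Content-Type: application/json');
echo json_encode($rows);
```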
When fetching this data, the server request gets timed out and no data gets fetched.
Are you talking only about reads or writes, too?
If you are talking about write access as well: are the 30,000 records the result of a single insert/update? Are you using a transactional engine like InnoDB? If so, are your queries wrapped in a single transaction? Having autocommit mode enabled can lead to massive performance issues:
Wrap several modifications into a single transaction to reduce the number of flush operations. InnoDB must flush the log to disk at each transaction commit if that transaction made modifications to the database. The rotation speed of a disk is typically at most 167 revolutions/second (for a 10,000RPM disk), which constrains the number of commits to the same 167th of a second if the disk does not “fool” the operating system.
Source
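A minimal sketch of that advice with PDO (the table and column names are hypothetical): wrap the whole batch in one transaction so InnoDB flushes the log once at COMMIT instead of once per row:

```php
<?php
// Receives the batch of changed records from the device as JSON and
// writes them in a single transaction (one log flush at COMMIT).
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'secret');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$rows = json_decode(file_get_contents('php://input'), true); // the uploaded records
$stmt = $pdo->prepare('INSERT INTO records (uuid, payload, updated_at) VALUES (?, ?, NOW())');

$pdo->beginTransaction();
try {
    foreach ($rows as $row) {
        $stmt->execute(array($row['uuid'], $row['payload']));
    }
    $pdo->commit();   // InnoDB flushes the log once, here
} catch (Exception $e) {
    $pdo->rollBack(); // nothing is half-written on failure
    throw $e;
}
```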
Can anyone suggest the best method to achieve a fast transfer of data?
How complex is your query design? Inner or outer joins, correlated or non-correlated subqueries, etc.? Use EXPLAIN to inspect the query's efficiency (read about EXPLAIN).
Also, take a look at your table design: Have you made use of normalization? Are you indexing properly?
Is there any method to prepare data initially and download data later?
How do you mean that? Maybe temporary tables could do the trick.
But without knowing any details of your project, downloading 30,000 records to a mobile device at one time sounds weird to me. Probably your application/DB design needs to be reviewed.
Anyway, for any data that does not need to be updated/inserted directly into the remote database, use a local SQLite DB on the mobile device. This is much faster, as SQLite is a file-based DB and the data doesn't need to be transferred over the network.
I have an app that posts data from Android to some MySQL tables through PHP at a 10 second interval. The same PHP file does a lot of queries on some other tables in the same database, and the result is downloaded and processed in the app (with DownloadWebPageTask).
I usually have between 20 and 30 clients connected this way. Most of the data each client queries for is the same as for all the other clients. If 30 clients run the same query every 10 seconds, 180 queries will be run per minute. In fact every client runs several queries, some of them in a loop (looping through the results of another query).
My question is: if I somehow produce a text file containing the same data, update this text file every x seconds, and let all the clients read this file instead of running the queries themselves, would that be a better approach? Will it reduce server load?
In my opinion you should consider using memcache.
It will let you store your data in memory, which is even faster than files on disk or MySQL queries.
What it will also do is reduce load on your database so you will be able to serve more users with the same server/database setup.
Memcache is very easy to use and there are lots of tutorials on the internet.
Here is one to get you started:
http://net.tutsplus.com/tutorials/php/faster-php-mysql-websites-in-minutes/
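As a minimal cache-aside sketch with the php-memcached extension (the query, key name and 10-second lifetime are just examples): check the cache first and only hit MySQL on a miss, so 30 clients polling every 10 seconds trigger roughly one query per interval instead of 30:

```php
<?php
$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);

$key  = 'client_feed_v1';
$data = $mc->get($key);

if ($data === false) {
    // Cache miss: run the expensive query once...
    $pdo  = new PDO('mysql:host=localhost;dbname=app', 'user', 'secret');
    $data = $pdo->query('SELECT id, value, updated_at FROM feed ORDER BY updated_at DESC LIMIT 100')
                ->fetchAll(PDO::FETCH_ASSOC);

    // ...and share the result with every client for the next 10 seconds.
    $mc->set($key, $data, 10);
}

header('Content-Type: application/json');
echo json_encode($data);
```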
What you need is caching. You can either cache the data coming from your DB or cache the page itself. Below you can find a few links on how to do that in PHP:
http://www.theukwebdesigncompany.com/articles/php-caching.php
http://www.addedbytes.com/articles/for-beginners/output-caching-for-beginners/
And yes. This will reduce DB server load drastically.
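Along the same lines as those articles, a minimal file-based output cache could look roughly like this (the cache path and lifetime are arbitrary): serve a saved copy of the page if it is still fresh, otherwise render it and save it for the next visitor:

```php
<?php
$cacheFile = __DIR__ . '/cache/page.html';
$cacheTime = 10; // seconds, matching the 10 second polling interval

// Serve the cached copy if it is still fresh.
if (file_exists($cacheFile) && (time() - filemtime($cacheFile)) < $cacheTime) {
    readfile($cacheFile);
    exit;
}

// Otherwise build the page as usual and capture the output.
ob_start();

// ... run the queries and echo the page/JSON here ...

$html = ob_get_contents();
ob_end_flush();                       // send it to this client
file_put_contents($cacheFile, $html); // and store it for the others
```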
I currently have a read-heavy mobile app (90% reads, 10% writes) that communicates with a single web server through PHP calls and a single MySQL DB. The DB stores user profile information and the messages users send and receive. We get a few messages per second added to the DB.
I'm in the process of scaling horizontally, load balancing, etc. So we'll have a load balancer in front of a cluster of web servers, and then I plan to put a layer of Couchbase nodes on top of a MySQL cluster so we can have fast access to user profile info and message info. We'll memcache all user info in Couchbase, but I want to memcache only the latest 24 hours' worth of messages, since that is the timeframe where most of the read activity will happen.
For the messages data stored in memcache, I want to be able to filter messages based on various data found in a message's fields, like country, city, time, etc. I know Couchbase uses a KV approach, so I can't query using WHERE clauses like I would with MySQL.
Is there a way to do this? Is Couchbase Views the answer? Or am I totally barking up the wrong tree with Couchbase?
The views in Couchbase Server 2.0 and later are what you're looking for. If the data being put in Couchbase is JSON, you can use those views to perform queries across the data you put in the Couchbase cluster.
Note that you can use a view that emits a date time as an array (a common technique) and even use that in restricting your view time period so you could, potentially, just store all of your data in Couchbase without a need to put it in another system too. If you have other reasons though, you can certainly just have the items expire 24 hours after you put them in the cache. Then, if you're using one of the clients that supports it, you'll be able to get-and-touch the document in the cache extending the expiration if needed. The only downside there is that you'll need to come up with a method of invalidating the document on update.
One way to do that is a trigger in MySQL which would delete the given key; another way is to invalidate it from the application layer.
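As a rough illustration of the application-layer approach, assuming the legacy Couchbase PHP extension (which exposes a memcached-style set/get/delete API); the bucket name, key scheme and messages table here are hypothetical:

```php
<?php
// Assumes the legacy `couchbase` PHP extension (Couchbase class) is installed.
$cb  = new Couchbase('127.0.0.1:8091', 'user', 'secret', 'messages');
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'secret');

function updateMessage($pdo, $cb, $messageId, $newBody)
{
    // 1. Update the system of record (MySQL).
    $stmt = $pdo->prepare('UPDATE messages SET body = ? WHERE id = ?');
    $stmt->execute(array($newBody, $messageId));

    // 2. Invalidate the cached JSON document in Couchbase so the
    //    next read repopulates it with the fresh version.
    $cb->delete('message::' . $messageId);
}
```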
p.s.: full disclosure: I'm one of the Couchbase folks