I have an Android app which needs to fetch new data from a server running a MySQL database. I add data to the database via a panel that I access online at domain.com/mypanel/.
What would be the best way to fetch the data on the client to reduce overhead while keeping the programming effort as small as possible? The client does not need the latest database changes immediately after they happen; it would be fine if the client is updated a few hours after the change.
Currently I have the following in mind:
Add a timestamp column to the database tables so that I know which changes have been made
Run some sort of background service in the app which runs every X hours and checks for the latest updates since the last successful server-client synchronization
Send the time of the last successful synchronization to the server via HTTP POST, i.e. the period during which the client has not received any updates
On the server, run a MySQL SELECT statement that takes the sent time span into account (if the client sends no time span, e.g. on the first synchronization (full sync), just SELECT everything) --> JSON-encode the result arrays --> send the JSON response to the client
On the client, take the data, loop over it row by row and insert it into the local database file (a rough sketch of this flow follows below)
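A minimal sketch of the client side of that flow, assuming a hypothetical sync.php endpoint under domain.com/mypanel/ that returns a JSON array of changed rows; the items and sync_state tables and all column names are invented for illustration:

import android.database.sqlite.SQLiteDatabase;
import org.json.JSONArray;
import org.json.JSONObject;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class SyncClient {

    public void sync(SQLiteDatabase db, long lastSyncMillis) throws Exception {
        // 1. Ask the server for everything that changed after the last successful sync.
        HttpURLConnection conn = (HttpURLConnection)
                new URL("https://domain.com/mypanel/sync.php").openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(("since=" + lastSyncMillis).getBytes(StandardCharsets.UTF_8));
        }

        // 2. Read the complete JSON response before touching the local database.
        StringBuilder body = new StringBuilder();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) body.append(line);
        }
        JSONArray rows = new JSONArray(body.toString());

        // 3. Insert all rows atomically; if anything fails, nothing is committed
        //    and the stored sync timestamp stays untouched.
        db.beginTransaction();
        try {
            for (int i = 0; i < rows.length(); i++) {
                JSONObject row = rows.getJSONObject(i);
                db.execSQL("INSERT OR REPLACE INTO items (id, name, updated_at) VALUES (?, ?, ?)",
                        new Object[]{row.getLong("id"), row.getString("name"), row.getLong("updated_at")});
            }
            db.execSQL("UPDATE sync_state SET last_sync = ?",
                    new Object[]{System.currentTimeMillis()});
            db.setTransactionSuccessful();
        } finally {
            db.endTransaction();
        }
    }
}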
My questions would be:
Is there anything you would do differently?
Or would you maybe send the database changes as a whole package/SQL file instead of the raw data as an array?
What would happen if the internet connection drops during the synchronization? To avoid conflicts in this process I thought of the following: only after the complete server response (i.e. the complete JSON array) has been retrieved would I insert the rows into the local database and set the local update timestamp to the current time. If only some of the JSON rows have been retrieved and the connection is interrupted in between (or the app is killed), NONE of the retrieved rows would have been inserted into the local app database, so the next time the background service runs there should hopefully be no conflicts.
Thank you very much
You've mentioned a database on the client, and my guess is that this database is SQLite.
SQLite fully supports transactions, which means that you can wrap your inserts in BEGIN TRANSACTION and END TRANSACTION statements. A successful transaction means that all of your inserts/updates/deletes went through.
Choosing JSON has a lot of ups and a few downs: it's easy for both the client and the server side. A downside I've struggled with in the past is big JSON payloads (a few MB). The client device has to download the whole string and parse it at once, so it may run out of memory while converting the string to a JSONObject. I've been there, so just keep that in mind as a possibility. It can be solved by splitting your update into pieces and marking each piece with its number and the total number of pieces; the client then knows it has to make a few requests to get all the pieces.
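A minimal sketch of that piece-by-piece idea; the piece request parameter and the total_pieces/rows response fields are assumptions, not something the server necessarily provides:

import org.json.JSONArray;
import org.json.JSONObject;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class ChunkedDownload {

    public void fetchAllPieces(String baseUrl) throws Exception {
        int piece = 1;
        int total;
        do {
            JSONObject response = getJson(baseUrl + "?piece=" + piece);
            total = response.getInt("total_pieces");    // server reports how many pieces exist
            handleRows(response.getJSONArray("rows"));  // parse and store just this slice
            piece++;
        } while (piece <= total);
    }

    private JSONObject getJson(String url) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        StringBuilder body = new StringBuilder();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) body.append(line);
        }
        return new JSONObject(body.toString());
    }

    private void handleRows(JSONArray rows) {
        // insert into the local database here, ideally inside a single transaction
    }
}

This keeps each parsed JSONObject small, at the cost of a few extra HTTP round trips.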
Another option you have is the good old CSV. You won't need the JSON parsing code, which will save your app some space. An upside is that you can parse and process the data line by line, so the memory impact is very low. The obvious downside is that you'll have to parse the data yourself, which might be a problem depending on your data.
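A minimal sketch of that line-by-line approach, assuming a plain comma-separated response with no quoted fields:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class CsvStreamImport {

    public void importCsv(String url) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                // naive split; real CSV with quoted commas needs a proper parser
                String[] fields = line.split(",");
                insertRow(fields);
            }
        }
    }

    private void insertRow(String[] fields) {
        // one INSERT per line, with the whole loop wrapped in a transaction as above
    }
}

Only one line is ever held in memory, which is what keeps the memory impact low.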
I should also mention XML as an option. My personal opinion is that I'd use it only if I really had to.
Related
I'm developing an Android app for salesmen, so they can use their device to save their orders. Specifically, every morning the salesman goes to the office and fetches the data for that day.
Currently, I get the data by sending a request to a PHP file and, as is common practice, insert that data into SQLite on the Android device so the app can work offline. However, with the current approach the device needs 6-8 seconds to get the data and insert it into SQLite, and as the data grows I think it will only get slower. What I have found is that inserting the data into SQLite takes quite a large share of that time.
So I've been thinking about dumping all the data the salesman needs into an SQLite file, so I could send only that file, which I guess would be more efficient. Can you please point me in the right direction? Or is there another, more efficient approach to this issue?
Note:
Server DB: MySQL
Server: PHP
There are a few different approaches you can take here to improve loading speed:
If your data can be pre-loaded with the APK, you can store it inside the .apk, so when the user downloads the app the data is already there and you only need to fetch the remaining updated data.
If you need fresh data every time, you can fetch chunks of data from the server in multiple calls, storing each chunk in the database and updating the UI as it arrives.
If there is not too much data (say 200-300 records, which I would not consider much), you can do something simple:
When the network call that fetches the data returns, pass the data objects to the database for storage and at the same time (before storing them in the DB) return the whole list to the Activity/Fragment, so it already has the data and can show it to the user while it is being stored in the database.
Also, you can go NoSQL-style inside SQLite, so you don't need to map objects to columns on every write (which is costly compared to NoSQL): store each record as one whole serialized object, and whenever it is required, just fetch it from the DB and parse it as needed.
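If it helps, one way to read that last suggestion is to persist each object as a single serialized JSON string in one column and only parse it when needed; a rough sketch, with the documents table and its columns invented for illustration:

import android.content.ContentValues;
import android.database.Cursor;
import android.database.sqlite.SQLiteDatabase;
import org.json.JSONObject;

public class DocumentStore {

    public void save(SQLiteDatabase db, String key, JSONObject document) {
        ContentValues values = new ContentValues();
        values.put("doc_key", key);
        values.put("doc_json", document.toString());   // the whole object as one text value
        db.insertWithOnConflict("documents", null, values, SQLiteDatabase.CONFLICT_REPLACE);
    }

    public JSONObject load(SQLiteDatabase db, String key) throws Exception {
        try (Cursor c = db.rawQuery("SELECT doc_json FROM documents WHERE doc_key = ?",
                new String[]{key})) {
            return c.moveToFirst() ? new JSONObject(c.getString(0)) : null;
        }
    }
}

You trade the ability to query individual fields with SQL for much simpler writes.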
Thanks to @Skynet for mentioning transactions, it improves the process a lot, so I'll stay with this approach for now.
You can do something like this:
db.beginTransaction();
try {
    saveCustomer();                  // run all inserts/updates here
    db.setTransactionSuccessful();   // mark the transaction as successful
} catch (Exception e) {
    // error in the middle of the database transaction; nothing will be committed
} finally {
    db.endTransaction();             // commits if marked successful, otherwise rolls back
}
For more explanation, see Android Database Transaction.
I have a large dataset of around 600,000 values that need to be compared, swapped, etc. on the fly for a web app. The entire dataset must be loaded, since some calculations require skipping values, comparing out of order, and so on.
However, each value is only 1 byte.
I considered loading it as a giant JSON array, but this page makes me think that might not work dependably: http://www.ziggytech.net/technology/web-development/how-big-is-too-big-for-json/
At the same time, forcing the server to load it all for every request would be a waste of server resources, since the clients can do the number crunching just as easily.
So I guess my question is this:
1) Is it possible to do this reliably in jQuery/JavaScript, and if so, how?
2) If jQuery/JavaScript is not a good option, what would be the best way to do this in PHP (reading in files vs. giant arrays via include)?
Thanks!
I know Apache Cordova can run SQL queries.
http://docs.phonegap.com/en/2.7.0/cordova_storage_storage.md.html#Storage
I know it's PhoneGap, but it works in desktop browsers (at least all the ones I've used for phone app development).
So my suggestion:
Mirror your database in each user's local Cordova database, then run all the SQL queries you want!
Some tips:
-Transfer data from your server to the web app via JSON
-Break the data requests down into a few parts. That way you can easily show a progress bar instead of waiting for the entire database to download
-Create a table with a single entry that holds the current version of your database, and check this table before you send all that data; change it each time you want to 'force' an update. This keeps the user's database up to date and lowers bandwidth
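The version-check tip could look roughly like this; the answer itself targets Cordova's WebSQL storage, but the idea is the same, sketched here in Java/SQLite to match the earlier Android examples (the db_version table and its column are assumptions):

import android.database.Cursor;
import android.database.sqlite.SQLiteDatabase;

public class VersionCheck {

    public boolean needsFullUpdate(SQLiteDatabase db, int serverVersion) {
        int localVersion = 0;
        try (Cursor c = db.rawQuery("SELECT version FROM db_version LIMIT 1", null)) {
            if (c.moveToFirst()) localVersion = c.getInt(0);
        }
        return serverVersion > localVersion;   // download only when the server version was bumped
    }

    public void markUpdated(SQLiteDatabase db, int serverVersion) {
        db.execSQL("DELETE FROM db_version");  // single-row table: keep exactly one entry
        db.execSQL("INSERT INTO db_version (version) VALUES (?)", new Object[]{serverVersion});
    }
}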
If you need a push in the right direction I have done this before.
I have multiple devices (eleven, to be specific) which send information every second. This information is received by an Apache server, parsed by a PHP script, stored in the database and finally displayed in a GUI.
What I am doing right now is check whether a row for the current day exists; if it doesn't, I create a new one, otherwise I update it.
The reason I do it this way is that I need to poll the information from the database and display it in a C++ application to make it look sort of real-time. If I created a row every time a device sent information, processing and reading the data would take a significant amount of time as well as system resources (memory, CPU, etc.), so the display would no longer be anywhere near real-time.
I also wrote a report generation tool which takes the information for each day (from 00:00:00 to 23:59:59) and puts it into an Excel spreadsheet.
My questions are basically:
Is it possible to do the insertion/updating part directly in the database server, or do I have to put the logic in the PHP script?
Is there a better (more efficient) way to store the information without a decrease in performance in the display device?
Regarding the report generation: if I want to sample intervals, say starting yesterday at 15:50:00 and ending today at 12:45:00, that cannot be done with my current data structure. What do I need to consider in order to design a data structure that would allow me to run such queries?
The components I use:
- Apache 2.4.4
- PostgreSQL 9.2.3-2
- PHP 5.4.13
My recommendation: just store all the information your devices are sending. With proper indexes and queries you can process and retrieve information from the DB really fast.
For your questions:
Yes, it is possible to build any logic you desire inside the Postgres DB using SQL, PL/pgSQL, PL/PHP, PL/Java, PL/Python and many other languages built into Postgres.
As I said before - proper indexing can do magic.
If you cannot get the desired query speed with the full table, you can create a small table with one row for every device and keep the last known values there to show them in sort-of real time.
1) The technique is called upsert. In PG 9.1+ it can be done with a writable CTE (http://www.depesz.com/2011/03/16/waiting-for-9-1-writable-cte/); see the sketch after this list.
2) If you really want it to be real-time you should send the data directly to the application; storing it in memory or a plaintext file will also be faster if you only care about the last few values. But PG does have LISTEN/NOTIFY channels, so your lag will probably be just 100-200 ms, and that shouldn't matter much given that you're only displaying it.
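A rough JDBC sketch of that writable-CTE upsert, applied to the small per-device "last known values" table suggested above; all table and column names are invented, and on PostgreSQL 9.5+ you could use INSERT ... ON CONFLICT ... DO UPDATE instead:

import java.sql.Connection;
import java.sql.PreparedStatement;

public class DeviceUpsert {

    // Writable-CTE upsert: try the UPDATE first, INSERT only if no row was updated.
    // Note that two concurrent inserts for a brand-new device can still race;
    // retry on a unique-violation error if that matters for you.
    private static final String UPSERT_SQL =
            "WITH updated AS ("
          + "  UPDATE device_latest SET value = ?, received_at = now()"
          + "  WHERE device_id = ? RETURNING device_id"
          + ") "
          + "INSERT INTO device_latest (device_id, value, received_at) "
          + "SELECT ?, ?, now() "
          + "WHERE NOT EXISTS (SELECT 1 FROM updated)";

    public void upsert(Connection conn, int deviceId, double value) throws Exception {
        try (PreparedStatement ps = conn.prepareStatement(UPSERT_SQL)) {
            ps.setDouble(1, value);    // UPDATE branch
            ps.setInt(2, deviceId);
            ps.setInt(3, deviceId);    // INSERT branch, only used if the UPDATE matched nothing
            ps.setDouble(4, value);
            ps.executeUpdate();
        }
    }
}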
I think you are overestimating the memory and system requirements of the process you have described. Adding a row of data every second (or 11 per second) is not a resource hog. In fact, an UPDATE is likely more time-consuming than adding a new row. Also, if you add a TIMESTAMP to your table, sort operations are lightning fast. Just add some garbage collection handling as a cron job (deletion of old data) once a day or so and you are golden.
However to answer your questions:
Is it possible to do the insertion/updating part directly in the database server or do I have to do the logic in the PHP script?
Writing logic inside the database engine is usually not very straightforward. To keep it simple, stick with the logic in the PHP script: UPDATE table SET var1='assignment1', var2='assignment2' WHERE id='checkedID' when the row exists, or an INSERT otherwise.
Is there a better (more efficient) way to store the information without a decrease in performance on the display device?
It's hard to answer because you haven't described the display device's connectivity. There are more efficient ways to handle the process, but none with the locking mechanisms required for such frequent updating.
Regarding the report generation, if I want to sample intervals, say starting yesterday at 15:50:00 and ending today at 12:45:00, it cannot be done with my current data structure, so what do I need to consider in order to make a data structure which would allow me to create such queries?
You could use the TIMESTAMP column type. It records the DATE and TIME of the UPDATE operation, so the report becomes a simple WHERE clause using the date functions within the database query.
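For illustration, a range query over such a timestamp column could look like this (JDBC, with the readings table and its columns made up):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Timestamp;
import java.time.LocalDateTime;

public class ReportQuery {

    // e.g. printInterval(conn, yesterday at 15:50:00, today at 12:45:00)
    public void printInterval(Connection conn, LocalDateTime from, LocalDateTime to) throws Exception {
        String sql = "SELECT device_id, value, recorded_at FROM readings "
                   + "WHERE recorded_at BETWEEN ? AND ? ORDER BY recorded_at";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setTimestamp(1, Timestamp.valueOf(from));
            ps.setTimestamp(2, Timestamp.valueOf(to));
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.printf("%d %f %s%n",
                            rs.getInt("device_id"), rs.getDouble("value"),
                            rs.getTimestamp("recorded_at"));
                }
            }
        }
    }
}

An index on recorded_at keeps this fast even as the table grows.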
We have a heart monitor hooked up to a TI MSP430 microcontroller with a Roving Networks Wi-Fi module. I would like to send some kind of data stream to a web server so that someone can monitor the data off-site. We were thinking that every half second we could send a data point to the PHP/MySQL server with the current heart rate. My problem is storing all that data. If I get one data point every second and create a new table entry for each one, I will quickly end up with a lot of entries that each contain very little data. I'm afraid this will slow things down significantly when we try to query the database and display the data, so our 'real-time' data wouldn't be so real-time.
I was then thinking that every hour or so I could have the database batch up all the entries and combine them into one. This seems like a bit of a hack to me, and I feel like there is a better way that I am missing.
Is there any way I could open some kind of connection between the microcontroller and the server to send the live data and continuously write it to a file or something, like a data stream of some sort?
or
Can you keep session variables and whatnot when the microcontroller connects to the server? If so, we could accumulate the data in a session variable until it reaches a certain size, then write that chunk to the database as one entry and reset the session variable.
In no way will one data point per second slow down the performance of your database, even if you are running on a very limited server. Handling large amounts of data is exactly what databases are built for. It will actually serve you better than writing to a file in the long run, since it is easy to select the latest data point by id for use in your 'real-time' application.
We have a PHP application which selects a row from the database, works on it (calls an external API which uses a web service), and then inserts a new register based on the work done. There is an AJAX display which informs the user how many registers have been processed.
The data is mostly text, so it's rather heavy.
The process handles thousands of registers at a time. The user can choose how many registers to start working on. The data comes from one table, where the rows are marked as "done" once processed. There is no WHERE condition, except for the optional "WHERE date BETWEEN date1 AND date2".
We had an argument over which approach is better:
Select one register, work on it, and insert the new data.
Select all of the registers, work on them in memory and insert them into the database after all the work is done.
Which approach do you consider the most efficient one for a web environment with PHP and PostgreSQL? Why?
It really depends on how much you care about your data (seriously):
Does reliability matter in this case? If the process dies, can you just re-process everything, or can't you?
Typically, when calling a remote web service, you don't want to call it twice for the same data item. Perhaps there are side effects (like credit card charges), or maybe it is not a free API...
Anyway, if you don't care about potential duplicate processing, then take the batch approach. It's easy, it's simple, and it's fast.
But if you do care about duplicate processing, then do this:
SELECT one record from the table FOR UPDATE (i.e. lock it in a transaction)
UPDATE that record with a status of "Processing"
Commit that transaction
And then
Process the record
Update the record contents, AND
SET the status to "Complete", or "Error" in case of errors.
You can run this code concurrently without fear of it stepping on itself, and you can be confident that the same record will not be processed twice.
You will also be able to see any records that "didn't make it", because their status will still be "Processing", along with any errors.
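A rough JDBC sketch of that claim-then-process pattern, with a hypothetical registers table and status/payload/result columns; on newer PostgreSQL, FOR UPDATE SKIP LOCKED makes the claim step friendlier to many concurrent workers:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class RegisterWorker {

    public void processOne(Connection conn) throws Exception {
        conn.setAutoCommit(false);

        // 1. Claim one pending record: lock it and mark it as Processing.
        long id = -1;
        String payload = null;
        try (PreparedStatement select = conn.prepareStatement(
                "SELECT id, payload FROM registers WHERE status = 'Pending' LIMIT 1 FOR UPDATE");
             ResultSet rs = select.executeQuery()) {
            if (!rs.next()) { conn.rollback(); return; }   // nothing left to do
            id = rs.getLong("id");
            payload = rs.getString("payload");
        }
        try (PreparedStatement mark = conn.prepareStatement(
                "UPDATE registers SET status = 'Processing' WHERE id = ?")) {
            mark.setLong(1, id);
            mark.executeUpdate();
        }
        conn.commit();   // the claim is now visible to every other worker

        // 2. Do the slow work (the external API call) outside the claiming transaction.
        String status = "Complete";
        String result = null;
        try {
            result = callExternalApi(payload);
        } catch (Exception e) {
            status = "Error";
        }

        // 3. Record the outcome.
        try (PreparedStatement done = conn.prepareStatement(
                "UPDATE registers SET status = ?, result = ? WHERE id = ?")) {
            done.setString(1, status);
            done.setString(2, result);
            done.setLong(3, id);
            done.executeUpdate();
        }
        conn.commit();
    }

    private String callExternalApi(String payload) throws Exception {
        return payload;   // placeholder for the real web-service call
    }
}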
If the data is heavy and so is the load, and considering the application is not real-time dependent, the best approach is most definitely to fetch all of the needed data, work on all of it, and then put it back.
Efficiency-wise, regardless of language, if you open single items and work on them individually, you are probably also opening and closing the database connection each time. That means that with thousands of items you will open and close thousands of connections, and the overhead of that far outweighs the overhead of returning all of the items and working on them together.