I have an application that reads information about songs and singers from a MySQL DB and displays it in a ListView ... my question(s):
is it a good idea to cache the data that the user has already retrieved so it does not need to be retrieved every time?
how to do that caching, if it is a good idea (give me hints only)?
how to match what is already cached against what needs to be retrieved from MySQL?
It's always a good idea to cache values coming from external sources, as it reduces the time the application takes to display them.
This can be implemented in multiple ways. Generally speaking:
Binary data (e.g. Images) should be cached on internal/external storage. See "Caching Bitmaps"
Textual data (e.g. Articles) should be cached in memory (for example in a Map)
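For the in-memory case, here is a minimal sketch using Android's LruCache; the key/value types, the 100-entry cap, and fetchFromServer are illustrative placeholders, not a prescribed design:

    import android.util.LruCache;

    public class ArticleCache {
        // Bounded in-memory cache: least-recently-used entries are evicted first.
        private final LruCache<Long, String> cache = new LruCache<>(100);

        public String getArticle(long id) {
            String cached = cache.get(id);
            if (cached != null) {
                return cached;           // served from memory, no network needed
            }
            String fresh = fetchFromServer(id);
            cache.put(id, fresh);
            return fresh;
        }

        private String fetchFromServer(long id) {
            // placeholder: replace with your real network call
            return "article-" + id;
        }
    }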
That said, the important and tricky part of a caching system is determining when a cached value is no longer "fresh" enough. While the specific criteria depend on your application and personal taste, here are some general hints:
The age of the data (you can give server-side hints with the HTTP Expires response header)
The priority of the data. The more important the data, the more up-to-date it should be.
The likelihood of any changes at all. The comments on an article are more likely to change frequently than the article itself.
A nice implementation would be to delegate all network requests to a caching-aware method/class (implementing the hints above), which would then decide whether the request needs to be made at all and return either fresh or cached data.
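A minimal sketch of such a caching-aware delegate, assuming an in-memory map and a fixed five-minute TTL (the TTL, the key type, and the method names are all illustrative):

    import java.io.IOException;
    import java.util.HashMap;
    import java.util.Map;

    public class CachingFetcher {
        private static final long MAX_AGE_MS = 5 * 60 * 1000; // arbitrary TTL

        private static class Entry {
            final String value;
            final long storedAt = System.currentTimeMillis();
            Entry(String value) { this.value = value; }
        }

        private final Map<String, Entry> cache = new HashMap<>();

        public String fetch(String url) throws IOException {
            Entry entry = cache.get(url);
            if (entry != null && System.currentTimeMillis() - entry.storedAt < MAX_AGE_MS) {
                return entry.value;               // still fresh: skip the network
            }
            String body = doNetworkRequest(url);  // stale or missing: go remote
            cache.put(url, new Entry(body));
            return body;
        }

        private String doNetworkRequest(String url) throws IOException {
            // placeholder: wire up your real HTTP client here
            throw new UnsupportedOperationException("not implemented in this sketch");
        }
    }

Callers never talk to the network directly; they call fetch() and let it decide between cached and fresh data.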
Is there a way to make a Firestore collection available offline on Android?
I know there's setPersistenceEnabled(true), but I think it only caches data after I first access it. There's no upfront caching.
I would like certain collections to always be fully stored locally, so that if the device is offline it can still query data that has never been queried before. (Of course it would require an internet connection at some point so that the data can be synced.)
I thought of doing a get on the whole collection at some point, and then attaching a listener so that the app gets new data. Is it a good practice? Is there anything better?
Yes, you have to query the data before it's available locally. There are no other options.
It's a good practice if it meets the requirements of your app, and you're willing to pay for the cost of that query and for all the changes to the documents that happen over time (each changed document while the listener is attached costs 1 read).
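A minimal sketch of that get-then-listen approach, assuming a collection named "items" (on Android, Firestore's disk persistence is enabled by default, so both the get and the listener results land in the offline cache):

    import android.util.Log;
    import com.google.firebase.firestore.FirebaseFirestore;

    public class OfflinePrimer {
        public static void primeAndListen(FirebaseFirestore db) {
            // 1. Prime the offline cache by reading the whole collection once.
            db.collection("items").get()
                    .addOnSuccessListener(snapshot ->
                            Log.d("Sync", "cached " + snapshot.size() + " documents"));

            // 2. Keep it fresh; each changed document while attached costs 1 read.
            db.collection("items").addSnapshotListener((snapshot, error) -> {
                if (error != null || snapshot == null) {
                    return;
                }
                // Changes are written into the local cache automatically.
            });
        }
    }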
The Setup
I have native iOS and Android apps which sync data to and from my web server. A requirement of the apps is that they work offline, so data is stored on the apps in SQLite databases.
The apps communicate with the server with a series of REST calls which send JSON from the server for the apps to store in their databases.
My Problem
The scale of this data is very large: some tables can have a million records, and the final size of the phone databases can approach 100 MB.
The REST endpoints must limit their data and have to be called many times with different offsets for a whole sync to be achieved.
So I'm looking for ways to improve the efficiency of this process.
My Idea
An idea I had was to create a script which would run on the server and create an SQLite file from the server's database, compress it, and put it somewhere for the apps to download, effectively creating a snapshot of the server's current data.
The apps would download this snapshot but still have to call their REST methods in case something had changed since the snapshot happened.
The Question
This would add another level of complexity to my webapp and I'm wondering if this is the right approach. Are there other techniques that people use when syncing large amounts of data?
This is a complex question, as the answer should depend on your constraints:
How often will the data change? If too often, the snapshot will get out of date really fast, so apps will effectively be updating data a lot. With a big volume of data, an application will also waste CPU time on synchronization (even if the user is not actively using all of that data!) or may quickly fall out of sync with the server. This is especially true for iOS, where applications have very limited background capabilities (only a small, throttled window) compared to Android apps.
Is the DB read-only? Are you sending updates to the server? If so, you need conflict-resolution techniques and must cover cases in which data is modified but not immediately posted to the server.
You need to support cases where the DB schema changes. Effectively, in your approach you need to keep multiple (initial) databases ready for different versions of your application.
Your idea is good in case there are not too many updates done to the database and regular means of download are not efficient (which is what you generally described: sending millions of records through multiple REST calls is quite a pain).
But beware of hitting a wall: if the data changes a lot and you are forced to update tens or hundreds of thousands of records every day, on every device, then you probably need to consider a completely different approach: one that may require your application to support only partial offline mode (for the most recent/important items), or a hybrid data model (live requests for the most recent data when the user wants to edit something).
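If the snapshot route is taken, a rough sketch of the download-then-catch-up flow (the snapshot URL and the changed-since endpoint are placeholders, not an existing API):

    import java.io.File;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.zip.GZIPInputStream;

    public class SnapshotSync {
        // Download the server-generated, compressed SQLite snapshot once,
        // then catch up via the existing REST endpoints.
        public static void initialSync(File dbFile) throws IOException {
            HttpURLConnection conn = (HttpURLConnection)
                    new URL("https://example.com/snapshot.db.gz").openConnection();
            try (InputStream in = new GZIPInputStream(conn.getInputStream());
                 OutputStream out = new FileOutputStream(dbFile)) {
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) != -1) {
                    out.write(buf, 0, n);
                }
            }
            long snapshotTime = conn.getHeaderFieldDate("Last-Modified", 0);
            fetchChangesSince(snapshotTime);
        }

        private static void fetchChangesSince(long epochMillis) {
            // placeholder: call your REST API with a "changed since" parameter
            // and apply the returned rows on top of the snapshot
        }
    }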
100 MB is not so big. My apps have been syncing many GB at this point. If your data can be statically generated and updated, then one thing you can do is write everything to the server (JSON, images, etc.) and then sync it all onto your local filesystem. In my case I use S3. At a set time, or when the user wants to, they sync, and it only pulls/updates what's changed. AWS actually has an API call called sync on a local/remote folder or bucket, a single call. I do mine custom, but essentially it's the same: check the last update date and file size locally, and if they're different, you add that file to the download queue.
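A sketch of that metadata comparison, assuming a hypothetical manifest that the server returns describing its files:

    import java.io.File;
    import java.util.ArrayList;
    import java.util.List;

    public class SyncPlanner {
        // Hypothetical manifest entry describing one remote file.
        static class RemoteFile {
            String path;
            long lastModified;   // epoch millis reported by the server
            long size;           // bytes
        }

        // Compare server metadata against the local copy; queue only what changed.
        static List<RemoteFile> buildDownloadQueue(List<RemoteFile> manifest, File localRoot) {
            List<RemoteFile> queue = new ArrayList<>();
            for (RemoteFile remote : manifest) {
                File local = new File(localRoot, remote.path);
                if (!local.exists()
                        || local.lastModified() < remote.lastModified
                        || local.length() != remote.size) {
                    queue.add(remote);
                }
            }
            return queue;
        }
    }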
I am creating a data repository layer in my application which in turn serves data from two sources, either network API calls or getting data from local storage. When there is a request to the data repository layer, I first need to check if the data is present in the local storage or not. If not, I then make a call to my API to get the data. To check if the data is in local storage or not, I was thinking of two ways to implement this:
On app startup, I load all the data in the database into memory (e.g. lists, maps, etc.), and then whenever I have to check for the existence of data in local storage, I check the memory. This has a possible problem that I have faced earlier as well: when the app is in the background, the Android system might reclaim this memory for other apps. This makes things complicated for me.
Whenever I want to check the existence of data in local storage, I directly query the SQL tables and decide based on that. This is a much leaner and cleaner solution than the case above. The only worry for me here is the performance hit. I wanted to know whether SQLite keeps the database in memory once loaded. If yes, then the memory layer of data that I created above is useless. If not, is it safe to keep querying the SQLite database for each and every small data request?
SQLite caches some data, and the Android OS will keep the database file in its file cache, as long as there is enough memory.
So when using SQLite, your app's performance is automatically optimized, depending on available memory.
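So querying SQLite directly per request (your second approach) is usually fine. A minimal repository sketch under that assumption — ItemDao, ItemApi, and Item are illustrative stand-ins for your real DAO, API client, and entity:

    import java.io.IOException;

    class Item { long id; String name; }

    interface ItemDao {
        Item findById(long id);
        void insert(Item item);
    }

    interface ItemApi {
        Item fetchItem(long id) throws IOException;
    }

    class ItemRepository {
        private final ItemDao dao;
        private final ItemApi api;

        ItemRepository(ItemDao dao, ItemApi api) {
            this.dao = dao;
            this.api = api;
        }

        Item getItem(long id) throws IOException {
            Item cached = dao.findById(id);   // cheap: SQLite plus the OS file cache
            if (cached != null) {
                return cached;
            }
            Item fresh = api.fetchItem(id);   // fall back to the network
            dao.insert(fresh);                // persist for next time
            return fresh;
        }
    }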
My Android app is fetching data from the web (node.js server).
The user creates a list of items (usually 20-30, but it can be 60+). For each item I query the server to get its information. Once this info is fetched (per item), it won't change anymore, but new records will be added as time goes by (another server call, unrelated to the previous one).
My question is about either storing this info locally (SQLite?) or fetching it from the server every time the user asks for it (keeping in mind the number of calls).
What should be my guidelines whether to store it locally or not other than "speed"?
You should read about the "offline first" principles.
To summarize, mobile users won't always have a stable internet connection (or even a connection at all), and the use of your application should not depend on full-time internet access.
You should decide which data is eligible for offline storage.
It will mainly depend on what the user is supposed to access most often.
If your items don't vary, you should persist them locally to act as a cache. Even if the data isn't really big, users will welcome it, as your app will need less Internet usage, which can otherwise mean long waits, timeouts, etc.
You could make use of Retrofit to make the calls to the web service.
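For instance, a minimal Retrofit sketch — the base URL and the GET /items/{id} endpoint are hypothetical:

    import retrofit2.Call;
    import retrofit2.Retrofit;
    import retrofit2.converter.gson.GsonConverterFactory;
    import retrofit2.http.GET;
    import retrofit2.http.Path;

    public class ApiClient {
        static class Item { long id; String name; }

        // Hypothetical endpoint: GET /items/{id} returning one item as JSON.
        interface ItemService {
            @GET("items/{id}")
            Call<Item> getItem(@Path("id") long id);
        }

        static ItemService create() {
            Retrofit retrofit = new Retrofit.Builder()
                    .baseUrl("https://example.com/api/")   // placeholder base URL
                    .addConverterFactory(GsonConverterFactory.create())
                    .build();
            return retrofit.create(ItemService.class);
        }
    }

A call like create().getItem(42).enqueue(...) then runs asynchronously and hands you a parsed Item in the callback.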
When it comes to persisting data locally within an Android application, you can store it in several ways.
The first one, the easiest, is to use Shared Preferences. I wouldn't suggest it this time, as you're dealing with structured objects.
The second one is to use a raw SQLite database.
However, I'd avoid making raw SQL queries and give ORM frameworks a try. In Android you can find several, such as GreenDAO, ORMLite, and so on. This is the choice you should take. And believe me, initially you might find ORMs quite difficult to understand, but once you learn how they work and the benefits they provide, you'll love them.
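To give a feel for it, a small ORMLite sketch — the entity and field names are illustrative:

    import com.j256.ormlite.dao.Dao;
    import com.j256.ormlite.field.DatabaseField;
    import com.j256.ormlite.table.DatabaseTable;
    import java.util.List;

    // Annotate an entity, then read and write it through a Dao
    // instead of hand-written SQL.
    @DatabaseTable(tableName = "items")
    public class Item {
        @DatabaseField(generatedId = true)
        long id;

        @DatabaseField
        String name;

        // On Android, the Dao usually comes from an OrmLiteSqliteOpenHelper
        // subclass: helper.getDao(Item.class).
        static void demo(Dao<Item, Long> dao) throws Exception {
            Item item = new Item();
            item.name = "example";
            dao.create(item);                   // INSERT without writing SQL
            List<Item> all = dao.queryForAll(); // SELECT * FROM items
        }
    }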
I'd like a bit of advice on how to retrieve, process and store data.
I'm building an app which finds the nearest laser tag site to where you are. The address data is stored (due to some bad design) in one field in an external database, with its country coming from another table (told you it was a bad design). I may change where it's getting its data from if I can, as it currently requires a lot more JSON parsing.
Anyway, once I've got the address, I need to get the locations and put them on the map. As there are several hundred sites to add, it takes a while to request the coordinates for all of them, process them, etc.
So my question is this: would it be better to use the Google Geocoder on Android, which I'm having problems with, or to use a web server to send the requests via Google's HTTP geocoding API?
Also, would it be better to then store the data on the phone and check for updates every time it loads, or just not store anything on the phone and get all the data every time it loads?
Ta muchly!
So my question is this: would it be better to use the Google Geocoder on Android, which I'm having problems with, or to use a web server to send the requests via Google's HTTP geocoding API?
It's not like laser tag locations move terribly frequently. Do a one-time geocoding lookup on your server, cache the data in a local database, and serve out of the cache. However, there may be legal implications here, see below...
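A server-side sketch of that one-time lookup against Google's HTTP Geocoding API (the API key handling is a placeholder, and the returned JSON still needs parsing before you store the lat/lng):

    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.net.URLEncoder;
    import java.nio.charset.StandardCharsets;
    import java.util.Scanner;

    public class GeocodeOnce {
        static String geocode(String address, String apiKey) throws Exception {
            String url = "https://maps.googleapis.com/maps/api/geocode/json?address="
                    + URLEncoder.encode(address, StandardCharsets.UTF_8.name())
                    + "&key=" + apiKey;
            HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
            try (InputStream in = conn.getInputStream();
                 Scanner s = new Scanner(in, "UTF-8").useDelimiter("\\A")) {
                return s.hasNext() ? s.next() : "";   // raw JSON; parse lat/lng from it
            }
        }
    }

Run this once per address, write the coordinates into your database, and the apps never need to geocode anything themselves.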
Also, would it be better to then store the data on the phone and check for updates every time it loads, or just not store anything on the phone and get all the data every time it loads?
That's actually more of a question of law than of coding, IMHO. From a coding standpoint, if the database is small, keeping it on-device and updating it periodically would be OK. However, if it is not your data, it will depend on whether or not it is legal for you to hold onto it and copy it around.