Benefits of packaging sqlite db rather than creating? - android

When my app is first run, it creates 5 tables and inserts about 50 initial values. The user can delete any of these initial values if they want and they will add to them.
In this situation, what are the pros/cons between creating the db file and copying it over on first run and just putting a bunch of create/insert statements in onCreate?
It's crucial that user information never gets overwritten, and because of that I'm leaning towards the create/insert statements: if some bug somehow triggers onCreate again (if that's even possible), the damage would be minor, whereas copying the db file over would wipe the database.

There are no benefits of packaging the SQLite db rather than creating it, IMO. Just choose one and code accordingly; worrying that something is wrong with your code and picking one method over the other because of that is just an ice patch over bad programming.
No offense meant, I just don't think the reason you're asking the question is the right one. The real trade-off is between copying a pre-built db, which makes the APK bigger but is perhaps faster, and creating and populating it in code.
I personally go with creating the DB from scratch. You will only do it "once", unless of course an update requires modifications to the DB or the user deletes the data. I would rather have the user wait a moment, just once, and make the APK a considerable amount of KBs lighter.
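For reference, a minimal sketch of the create-and-populate approach; the table name, columns and default values below are invented for illustration:

```java
import android.content.ContentValues;
import android.content.Context;
import android.database.sqlite.SQLiteDatabase;
import android.database.sqlite.SQLiteOpenHelper;

public class AppDbHelper extends SQLiteOpenHelper {
    private static final String DB_NAME = "app.db";
    private static final int DB_VERSION = 1;

    public AppDbHelper(Context context) {
        super(context, DB_NAME, null, DB_VERSION);
    }

    @Override
    public void onCreate(SQLiteDatabase db) {
        // onCreate only runs when the database file does not exist yet,
        // so existing user data is never touched.
        db.execSQL("CREATE TABLE category (_id INTEGER PRIMARY KEY, name TEXT NOT NULL)");
        String[] defaults = {"Groceries", "Bills", "Travel"};
        for (String name : defaults) {
            ContentValues values = new ContentValues();
            values.put("name", name);
            db.insert("category", null, values);
        }
    }

    @Override
    public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
        // Migrate with ALTER TABLE statements here rather than dropping tables,
        // so user edits survive upgrades.
    }
}
```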

I think you've answered your own question as far as the cons of copying the whole db over. (Although I do like copying over databases for unit testing.) If you are looking for a less tedious way to populate those fifty values, you might try using .dump in sqlite3 and putting all of those insert statements into a single resource.
On the other hand, if onCreate gets called when it shouldn't, you probably have bigger problems to worry about.
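If you go the .dump route, one hedged way to execute the statements is to ship them as a raw resource; this sketch assumes a hypothetical res/raw/seed_data file trimmed down to just the INSERT statements, one per line:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import android.content.Context;
import android.database.sqlite.SQLiteDatabase;

public final class SeedData {
    /** Executes each non-empty, non-comment line of the resource as one SQL statement. */
    public static void run(Context context, SQLiteDatabase db) throws IOException {
        BufferedReader reader = new BufferedReader(new InputStreamReader(
                context.getResources().openRawResource(R.raw.seed_data)));
        db.beginTransaction();
        try {
            String line;
            while ((line = reader.readLine()) != null) {
                line = line.trim();
                if (!line.isEmpty() && !line.startsWith("--")) {
                    db.execSQL(line);   // assumes one complete statement per line
                }
            }
            db.setTransactionSuccessful();
        } finally {
            db.endTransaction();
            reader.close();
        }
    }
}
```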

Related

Best way to update a Sqlite DB from a server?

I have an Android app with an SQLite database (about 800 MB). Sometimes I need to insert, modify or delete rows from an external server (over the internet) in order to update the database.
Is there a way to update the database from the server without having to download the entire database (800 MB)?
I was thinking of a homemade solution that consists of adding a new column to the server database indicating whether a given row needs to be inserted, deleted or modified by the Android app, but I don't know if something like this is already implemented.
First question- does the database also change locally on the Android device? If so, you're basically into cache coherency. There's an old joke that the two hardest problems in CS are cache coherency and naming things. It's not totally wrong.
If you do need to keep local changes, especially if you need to sync local changes up, this needs to be a small book, so I'm going to assume not for the rest of the answer.
Honestly, if your db needs to scale at all or you need to make changes frequently, downloading a new db is the way to go. Doing any sort of diff against the db is going to cost you a lot of DB processor time, which translates to bigger or faster db servers, which equals money. Or a big perf hit on any other use of the db.
If you do decide you need to do this, you need two extra columns. One: an isDeleted flag. That way you can easily check for deleted rows (the only other way to detect them is to download all rows and see what's missing, which is a very bad idea). Please note you'll need to change every db query you make anywhere to add "and isDeleted=false" as a condition so you don't return deleted rows.
The second column isn't an "isModified" field, it's a "modifiedTime" timestamp. Why a timestamp? Because you can't be assured that a client downloading the changes was only 1 version behind. It could be 2. Or 10. You need to be able to get all the changes from all the previous versions as well, so an isModified flag isn't good enough. With a modifiedTime field, you can find the max modifiedTime in the local db, then ask the server for all rows with a modifiedTime greater than yours. You'll then either need to change all your inserts and updates to also set modifiedTime, or use a trigger to do so.
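A small client-side sketch of that scheme, assuming a hypothetical table called "item" with the two extra columns described above:

```java
import android.database.Cursor;
import android.database.sqlite.SQLiteDatabase;

public final class SyncHelper {

    /** Newest modifiedTime already present locally; send this to the server. */
    public static long lastSyncedTime(SQLiteDatabase db) {
        Cursor c = db.rawQuery("SELECT MAX(modifiedTime) FROM item", null);
        try {
            return c.moveToFirst() ? c.getLong(0) : 0L;
        } finally {
            c.close();
        }
    }

    /** Every normal read has to filter out soft-deleted rows. */
    public static Cursor visibleItems(SQLiteDatabase db) {
        return db.rawQuery("SELECT * FROM item WHERE isDeleted = 0", null);
    }
}
```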
There are a few other ways to do it: a migration-file approach (a file with the SQL commands to alter the data) can work if your changes are small. Really though, just download the db. It's so much simpler and less likely to break things. And if you're doing large updates, it may even be less bandwidth. Most importantly, if you just download the file you know the data is correct. If you try to do some kind of diff like the above, you have to worry about bugs or inconsistencies in the data for various reasons (did your app get killed while processing the changes? Do you have a bug? Did you run a query mid-change and get broken data, with only half the changes you need?). Downloading a new file and swapping the dbs when done fixes all of those things.
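If you take the download-and-swap route, a rough sketch (with a placeholder downloadTo() standing in for whatever networking code the app already has; close any open SQLiteOpenHelper before swapping):

```java
import java.io.File;
import java.io.IOException;
import android.content.Context;

public final class DbSwapper {
    public static void replaceDatabase(Context context, String dbName) throws IOException {
        File live = context.getDatabasePath(dbName);
        File temp = new File(live.getParentFile(), dbName + ".tmp");
        downloadTo(temp);                       // assumption: writes the full db file
        if (!temp.renameTo(live)) {             // rename is atomic on the same filesystem
            throw new IOException("could not swap in the new database");
        }
    }

    private static void downloadTo(File target) throws IOException {
        // left to the app's existing HTTP layer
    }
}
```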

Large String Object in SQLite Database

I have a SQLite database which has a table (of course) named Object. In my application, I need to access that table and all of its fields. I am able to query the database and get all of the information I want from a cursor with no issues. The problem comes with deciding what to do with the cursor next. Right now I am thinking of creating a class called Object and it will have fields for every column in the table which will be set by the query. This just seems so... inefficient. I'm not sure how to do this without needing to write out every column that is in the table for the object to use, which seems to violate DRY. Are there any better ways to do this?
My end goal is to be able to access every row in the table and get whatever information I want for that row. For example I will be using this to populate a ListView. If this is too ambiguous let me know and I'll try to clarify.
Thanks!
Edit: I've found the library db4o and it seems to do what I want. The library seems kind of big though (40 MB) for a mobile application. Does anybody have experience with it? Everything I've read seems to indicate it is good. I'll post more if I find information.
Are there any better ways to do this?
This is very "wide" question and depends on personal requirements and what is for developer more comfortable. I'm using your idea that is for me the best one you can use.
Generally we can say you want to create ORM (object-relation mapping). It's very clean and efficient approach (my opinion). Of course sometimes is not the best solution to use ORM ( i never met with this case but heard about it). I almost always use my own defined ORM for sure it takes some time but results are significant against done frameworks.
Both have advantages and disadvantages. Own ORM has much more higher performance because it's designated and optimized for concrete solution (mainly queries etc.).
I suggest you to do what you mentioned -> create object that will represent table in database with properties equal to columns. I'm using this in work and we never had problems with performance or too much battery consumption with our applications.
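For illustration only, a minimal version of such a class (the table and column names here are invented, not from the question):

```java
import android.database.Cursor;

public class Product {
    public final long id;
    public final String name;
    public final double price;

    public Product(long id, String name, double price) {
        this.id = id;
        this.name = name;
        this.price = price;
    }

    /** Builds a Product from the cursor's current row. */
    public static Product fromCursor(Cursor c) {
        return new Product(
                c.getLong(c.getColumnIndexOrThrow("_id")),
                c.getString(c.getColumnIndexOrThrow("name")),
                c.getDouble(c.getColumnIndexOrThrow("price")));
    }
}
```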
It's also much safer to show the user "copies" of the data held in objects rather than data straight from the database. Users can do whatever they want with the displayed results (they can type in dangerous symbols and attempted hacks), but now you can easily check all of that before you update the database(s) with their changes.
Your source code will look good too: another developer won't get lost in it, everything will be clear, and it will be easy to make updates in the future.
I've given you "my opinion" on this, so I hope it helps you make a decision.

Which is more memory efficient, SQLite database or XML string[]?

I'm new but learning. I just need to know which is more memory efficient: a string[] in XML or an SQLite db? I can do either, and can pre-populate the db. I'm talking about at most 1000 strings, with more possible in updates.
Thanks for your answers.
PS I have learned so much from Stack Overflow. This is the first place I turn to when I hit a snag. Thank you.
It depends on what the strings are and what you need them for. If they vary each time the app runs, leaving them in memory as a string array is probably best. If they are persistent between app runs, the SQLite DB will probably be better in the long run, since you don't need to "reload" the data between app runs.
Likewise, do you really need all 1000 strings in memory at all times while the app runs? If so, again the array might be a good idea. If not, the database is a better bet.
Ultimately, you need to run it on a variety of android devices and see which implementation is sufficiently responsive for whatever the app is designed to do.
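To make the two options concrete, a small hypothetical sketch (the resource name, database name and table are all invented):

```java
import android.app.Activity;
import android.database.Cursor;
import android.database.sqlite.SQLiteDatabase;
import android.os.Bundle;

public class ItemsActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // Option 1: XML string-array; the whole array lives in memory at once.
        // Assumes a <string-array name="items"> entry in res/values/arrays.xml.
        String[] fromXml = getResources().getStringArray(R.array.items);

        // Option 2: SQLite; rows are pulled through the cursor as needed, so a
        // ListView only keeps the visible rows in memory. Assumes an "items"
        // table has already been created and populated.
        SQLiteDatabase db = openOrCreateDatabase("items.db", MODE_PRIVATE, null);
        Cursor fromDb = db.query("items", new String[]{"_id", "value"},
                null, null, null, null, "value ASC");
    }
}
```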
I would say string[] is much better. Here is a good answer from SO itself.
"Unless you want to store the data persistently I'd say you should probably just use an Array. Databases are more for persistent storage (i.e. stuff you'll need over multiple runs of your app). That said, if you arrays start getting reeeeeeeeeealy* big, then yea you're going to want to move them onto disk (in which case they won't take up any memory). And probably the simplest way to do that is with a database.
*On the order of magnitude of hundreds of thousands of entrys, maybe even more."
Source: #Kurtis Nusbaum
https://stackoverflow.com/a/7906472/847954

Pre-populated Trie

Background:
My CSS360 group is attempting to create an Android application that will include an auto-complete search feature. The data we'll be searching consists of around 7000 entries and will be stored in an SQLite database on the phone itself. The most obvious approach would be to do a linear search of the database after every character the user types, and then return a list of suggestions which are potential alphabetic extensions of the user's query. However, this seems like it would be pretty inefficient, and we've been looking for better alternatives.
In another one of my classes today, my instructor briefly discussed the trie data structure and mentioned that it's often used to store entire dictionaries. Lookups in a trie take time proportional to the length of the key (as opposed to scanning every entry in a regular old array), so this seems like a great tool for us to use!
Unfortunately, we're in waaaay over our heads on this project already, and none of us really have a clue how to make this happen. All any of us have ever coded to date are basic console applications to teach us programming basics. We're all attempting to learn the Android platform in a week's time by watching YouTube videos, and deferring the database stuff to the one guy in our group who has any SQL experience whatsoever. We could seriously use some pointers!
Questions:
When creating a trie, is it possible to have the entire structure pre-populated? IE: generate a line of code for every node used, so that the entire structure will already be in memory when the program starts? My thinking here is that this will save us the overhead of having to regenerate the entire trie from the database every time the program starts. If so, is there an easy way to get these thousands of lines of code into our program? IE: Some sort of script which converts the database files into a giant text file of java commands which can be copied and pasted into Eclipse?
Will there be a considerable amount of overhead if we search the database directly instead of using some sort of internal list or data structure? Should we be copying the names out of the database and searching them inside the program for our auto-complete function?
If this proves too technically difficult for us, and we have to resort to a regular linear search, will the performance be noticeably affected?
Our current plans are to run the auto-complete function each time the user enters a character, and then wait for the function to return before allowing them to continue typing. The only programs any of us have written so far function synchronously like this. What would we need to know to make this function asynchronously? Considering our novice abilities, and the requirements that we're already having to meet, would this be too technically challenging for us?
SQLite should be able to serve this auto-complete functionality reasonably well. I'd recommend using its internal indexes rather than reinventing the wheel. If you do need to build your own structure, then SQLite probably isn't going to help you much after you've done that work.
If you want substring searching, then full text search is probably your best bet.
If you only want to complete the beginning of the word, then just using their vanilla indexes should be more than enough. If performance is a problem, then just wait until they type three characters before doing the query. Set a limit on your results for snappy responses.
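As a rough sketch of that vanilla-index approach (table and column names are invented; run the query off the main thread, not directly inside the TextWatcher), assuming an index on the name column that SQLite can use for prefix matches:

```java
import android.database.Cursor;
import android.database.sqlite.SQLiteDatabase;

public final class Suggest {
    public static Cursor suggestions(SQLiteDatabase db, String typedSoFar) {
        if (typedSoFar == null || typedSoFar.length() < 3) {
            return null;                      // wait for three characters, as suggested
        }
        return db.rawQuery(
                "SELECT _id, name FROM entries WHERE name LIKE ? ORDER BY name LIMIT 10",
                new String[]{typedSoFar + "%"});
    }
}
```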

Best practice for keeping data in memory and database at same time on Android

We're designing an Android app that has a lot of data ("customers", "products", "orders"...), and we don't want to query SQLite every time we need some record. We want to avoid querying the database as much as we can, so we decided to keep certain data always in memory.
Our initial idea is to create two simple classes:
"MemoryRecord": a class that will contain basically an array of objects (string, int, double, datetime, etc...), that are the data from a table record, and all methods to get those data in/out from this array.
"MemoryTable": a class that will contain basically a Map of [Key,MemoryRecord] and all methods to manipulate this Map and insert/update/delete record into/from database.
Those classes will be subclassed for every kind of table we have in the database. Of course there are other useful methods not listed above, but they are not important at this point.
So, when starting the app, we will load those tables from an SQLite database to memory using those classes, and every time we need to change some data, we will change in memory and post it into the database right after.
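A bare-bones version of that proposed design, purely for illustration (the table and column names are placeholders, not from the question):

```java
import java.util.HashMap;
import java.util.Map;
import android.content.ContentValues;
import android.database.Cursor;
import android.database.sqlite.SQLiteDatabase;

public class MemoryTable {
    private final SQLiteDatabase db;
    private final Map<Long, ContentValues> rows = new HashMap<Long, ContentValues>();

    /** Loads the whole table into memory once, at startup. */
    public MemoryTable(SQLiteDatabase db) {
        this.db = db;
        Cursor c = db.rawQuery("SELECT _id, name FROM customer", null);
        try {
            while (c.moveToNext()) {
                ContentValues row = new ContentValues();
                row.put("name", c.getString(1));
                rows.put(c.getLong(0), row);
            }
        } finally {
            c.close();
        }
    }

    /** Update memory first, then write through to the database. */
    public void update(long id, ContentValues newValues) {
        rows.put(id, newValues);
        db.update("customer", newValues, "_id = ?", new String[]{String.valueOf(id)});
    }
}
```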
But, we want some help/advice from you. Can you suggest something more simple or efficient to implement such a thing? Or maybe some existing classes that already do it for us?
I understand what you guys are trying to show me, and I thank you for that.
But, let's say we have a table with 2000 records, and I need to list those records. For each one, I have to query 30 other tables (some of them with 1000 records, others with 10) to add extra information to the list, and all of this on the fly (and as you know, we must be very fast at that moment).
Now you'll be going to say: "just build your main query with all those 'joins', and bring all you need in one step. SQLite can be very fast, if your database is well designed, etc...".
OK, but this query will become very complicated, and even though SQLite is very fast, it will be "too" slow (2 to 4 seconds, as I confirmed, and this isn't an acceptable time for us).
Another complicator is that, depending on user interaction, we need to "re-query" all records, because the tables involved are not the same, and we have to "re-join" with another set of tables.
So, an alternative is to bring in only the main records (these never change, no matter what the user does or wants) with no joins (this is very fast!) and query the other tables every time we want some data. Note that for the table with only 10 records, we will fetch the same records many, many times. In this case it is a waste of time, because no matter how fast SQLite is, it will always be more expensive to query, open a cursor, fetch, etc... than just grabbing the record from a kind of "memory cache". I want to make clear that we don't plan to keep all data in memory at all times, just some tables we query very often.
And we come back to the original question: what is the best way to "cache" those records? I'd really like to focus the discussion on that, and not on "why do you need to cache data?"
The vast majority of the apps on the platform (contacts, Email, Gmail, calendar, etc.) do not do this. Some of these have extremely complicated database schemas with potentially a large amount of data and do not need to do this. What you are proposing to do is going to cause huge pain for you, with no clear gain.
You should first focus on designing your database and schema to be able to do efficient queries. There are two main reasons I can think of for database access to be slow:
You have really complicated data schemas.
You have a very large amount of data.
If you are going to have a lot of data, you can't afford to keep it all in memory anyway, so this is a dead end. If you have complicated structures, you would benefit in either case with optimizing them to improve performance. In both cases, your database schema is going to be key to good performance.
Actually optimizing the schema can be a bit of a black art (and I am no expert on it), but some things to look out for are correctly creating indices on the columns you will query, designing joins so they take efficient paths, etc. I am sure there are lots of people who can help you with this area.
You could also try looking at the source of some of the platform's databases to get some ideas of how to design for good performance. For example the Contacts database (especially starting with 2.0) is extremely complicated and has a lot of optimizations to provide good performance on relatively large data and extensible data sets with lots of different kinds of queries.
Update:
Here's a good illustration of how important database optimization is. In Android's media provider database, a newer version of the platform changed the schema significantly to add some new features. The upgrade code to modify an existing media database to the new schema could take 8 minutes or more to execute.
An engineer made an optimization that reduced the upgrade time of a real test database from 8 minutes to 8 seconds. A 60x performance improvement.
What was this optimization?
It was to create a temporary index, at the point of upgrade, on an important column used in the upgrade operations. (And then delete it when done.) So this 60x performance improvement comes even though it also includes the time needed to build an index on one of the columns used during upgrading.
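To illustrate the same trick in an application's own upgrade path (everything below is invented for the example; it is not the media provider's actual code):

```java
import android.content.Context;
import android.database.sqlite.SQLiteDatabase;
import android.database.sqlite.SQLiteOpenHelper;

public class MediaDbHelper extends SQLiteOpenHelper {
    public MediaDbHelper(Context context) {
        super(context, "media.db", null, 2);
    }

    @Override
    public void onCreate(SQLiteDatabase db) {
        db.execSQL("CREATE TABLE track (_id INTEGER PRIMARY KEY, path TEXT, bucket TEXT)");
    }

    @Override
    public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
        db.beginTransaction();
        try {
            // Throwaway index on the column the migration filters on.
            db.execSQL("CREATE INDEX tmp_path_idx ON track(path)");
            // Placeholder for the heavy migration work that benefits from the index.
            db.execSQL("UPDATE track SET bucket = lower(path) WHERE path LIKE '/storage/%'");
            db.execSQL("DROP INDEX tmp_path_idx");   // gone once the upgrade is done
            db.setTransactionSuccessful();
        } finally {
            db.endTransaction();
        }
    }
}
```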
SQLite is one of those things where if you know what you are doing it can be remarkably efficient. And if you don't take care in how you use it, you can end up with wretched performance. It is a safe bet, though, if you are having performance issues with it that you can fix them by improving how you are using SQLite.
The problem with a memory cache is of course that you need to keep it in sync with the database. I've found that querying the database is actually quite fast, and you may be pre-optimizing here. I've done a lot of tests on queries with different data sets and they never take more than 10-20 ms.
It all depends on how you're using the data, of course. ListViews are quite well optimized to handle large numbers of rows (I've tested into the 5000 range with no real issues).
If you are going to stay with the memory cache, you may want to have the database notify the cache when its contents change, so you can update the cache. That way anyone can update the database without knowing about the caching. Also, if you build a ContentProvider over your database, you can use the ContentResolver to notify you of changes if you register using registerContentObserver.
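A hedged sketch of that ContentObserver registration (the content URI and the reload callback are whatever your app defines; nothing here comes from the question):

```java
import android.content.Context;
import android.database.ContentObserver;
import android.net.Uri;
import android.os.Handler;
import android.os.Looper;

public final class CacheInvalidator {
    public static void watch(Context context, Uri contentUri, final Runnable reload) {
        context.getContentResolver().registerContentObserver(
                contentUri,
                true, // also notify for descendant URIs
                new ContentObserver(new Handler(Looper.getMainLooper())) {
                    @Override
                    public void onChange(boolean selfChange) {
                        reload.run(); // e.g. re-query the rows kept in memory
                    }
                });
    }
}
```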
