Main Question
How does a Cursor retrieve data from SQLite? Does it refer to addresses within the database file dynamically, or does it load the data fully into memory? I know the Dalvik virtual machine is address based, and the first assumption seems more likely to be true, since RAM and phone storage behave in much the same way.
So my main question is: how is the data loaded? Is it loaded into memory, or does the Cursor just point at the database file's content?
To clarify (the example below is just for illustration; you can skip it):
The question arose from the following situation:
I have created an app which loads data from SQLite and displays it in a ListView. The database grows over time as the user adds data. Now, when the database gets larger, is it necessary to load data into the ListView with something like "load more" or pagination, or is it fine to load it all in one go?
Although pagination would be better for responsiveness, when trying to export data to XLS or PDF format, is it possible to get a cursor over the whole database and save the data to XLS or PDF?
The messaging app on Android loads all messages in one place and has caused no problems, even when I have 3000 messages in one thread.
It seems that the data for a Cursor is stored in memory or in some kind of cache file (as I mentioned, this is an implementation detail).
There are two ways to prove / show that it is not the original DB file being read directly. I'm sure there is also some kind of more theoretical explanation.
Take a look at the SQLiteCursor source (it's available in your SDK platform installation): it's based on CursorWindow, which is
A buffer containing multiple cursor rows.
A CursorWindow is read-write when initially created and used locally.
When sent to a remote process (by writing it to a Parcel), the remote
process receives a read-only view of the cursor window. Typically the
cursor window will be allocated by the producer, filled with data, and
then sent to the consumer for reading.
Also, from the source it looks like the window holds all of its data in that buffer.
Create a test application with a test DB containing a lot of records. Request all records and show them in a list. While the list is showing, keep changing the DB content and observe that the list content does not change (assuming you eliminate usage of requery() and related deprecated calls).
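As a rough illustration of the first point, here is a minimal sketch of reading rows through a Cursor; the table and column names are made up, and dbHelper is assumed to be an existing SQLiteOpenHelper:

    // Hypothetical table "items" with columns "_id" and "title".
    SQLiteDatabase db = dbHelper.getReadableDatabase();
    Cursor cursor = db.rawQuery("SELECT _id, title FROM items", null);
    try {
        // Rows are not handed to you straight from the database file;
        // they are copied into a CursorWindow buffer as the cursor is read.
        while (cursor.moveToNext()) {
            long id = cursor.getLong(0);
            String title = cursor.getString(1);
            // bind id/title to the list adapter here
        }
    } finally {
        cursor.close();
    }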
Related
About the single source of truth, the Google document says:
Using this model, the database serves as the single source of truth, and other parts of the app access it using our UserRepository. Regardless of whether you use a disk cache, we recommend that your repository designate a data source as the single source of truth for the rest of your app
https://developer.android.com/jetpack/guide#connect-viewmodel-repository
Following the document, I save all the data when I fetch it from the remote server, and I only read data from Room when I need to use it in an activity (in fact I collect a Flow that is defined in the ViewModel).
That seems great! It keeps the different data sources from getting mixed up. But I have gradually run into some strange issues:
In my app, I have a list that the server may change (we have a data-management website where an admin can update or delete data). So in order to get the newest list data from the server, I must clear all the data stored in Room and fetch it again from the remote server. This operation seems redundant: why couldn't I get the data directly from the remote server? Using only the remote source would also be a single source of truth. And it causes a problem: my app flashes for a moment, because clearing the data makes the list empty and fetching from the server fills it again.
The most important thing is that the local data seems unnecessary, because I always need the newest list from the remote server.
Some people may say that saving data into Room makes the app usable offline. I agree with that, but in this case each list item represents an image URL, and after clicking an item the app jumps to a new activity and displays an ImageView based on the URL taken from the list. If the app is offline, the ImageView can't load the URL anyway.
I'm also confused because I can't load all of the image URLs at once (I use base64 URLs to avoid loading invalid ones), since there is so much data. And if I add a search function to this list, I would need to load an unbelievable amount of data into Room, which seems unrealistic.
In brief:
Is Room necessary? Couldn't I just fetch data from the remote server?
If Room is necessary, how do I solve the problems I ran into? Is my incorrect usage causing them?
Hi #psycongroo, as I understand your problem, I want to share my experience:
You can handle any error loading a URL with a placeholder: if you get an error because there is no internet connection, the user will see the placeholder. In general, libraries like Picasso or Glide cache images once they have been loaded, so the user will still see the image.
On the question of why we use Room instead of fetching data from the remote server directly: from your question I don't understand why you need to drop your local data even when it is completely new. A user on a slow connection would then see an empty list instead of the previous data with, for example, a progress indicator. And if the user has no internet at all, you can show a dialog explaining the problem while the old data is still present. If you are using, for example, RecyclerView, you can update the data with Paging 3 from Google, which updates only the items in your list that actually changed.
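To make the idea concrete, here is a minimal sketch of a repository that keeps Room as the single source of truth but never clears the cached list before a refresh succeeds; UserDao, UserApi and the method names are hypothetical, not your actual code:

    // Room stays the single source of truth; the cached rows are only
    // replaced after the network call succeeds, so the list never flashes empty.
    public class UserRepository {
        private final UserDao userDao;     // hypothetical Room DAO
        private final UserApi userApi;     // hypothetical remote API client
        private final Executor ioExecutor; // background thread for the refresh

        public UserRepository(UserDao userDao, UserApi userApi, Executor ioExecutor) {
            this.userDao = userDao;
            this.userApi = userApi;
            this.ioExecutor = ioExecutor;
        }

        // The UI observes this; Room re-emits whenever the table changes.
        public LiveData<List<User>> getUsers() {
            return userDao.observeUsers();
        }

        // Call this when a refresh is wanted (screen opened, pull-to-refresh, ...).
        public void refreshUsers() {
            ioExecutor.execute(() -> {
                try {
                    List<User> fresh = userApi.fetchUsers();  // blocking network call
                    userDao.replaceAll(fresh);                // @Transaction: delete old + insert new
                } catch (IOException e) {
                    // keep showing the cached data; report the error to the UI if needed
                }
            });
        }
    }

The ViewModel only ever observes getUsers(), so the old data stays on screen while refreshUsers() runs in the background.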
P.S. Let me know if that helps, or if you have another question.
I have connected my device and accessed my application's data by going to /data/data/my.package.name/databases.
Here I can see files:
data
data-shm
data-wal
As far as I understand, these files are specific to the Android system itself, but they represent an SQLite database, and yet they cannot be opened by an SQLite reader?
I have this issue while data is being downloaded and stored in the database: after some time, data-wal becomes extremely large (from maybe 12 MB to 7 GB), and after the sync finishes it becomes almost empty again. Am I correct in saying that this is probably a transaction issue (somewhere a transaction is not being closed and stays open, and for that reason data-wal keeps filling with backup data in case of a rollback)?
The database consists of these three files. When opening data, the other two are automatically used when needed.
The -wal file contains all changes made to the database since the last checkpoint. To prevent the -wal file from becoming too large, use smaller transactions so that SQLite is able to do checkpoints, or just disable WAL mode altogether during bulk writes.
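A minimal sketch of both suggestions, assuming a plain SQLiteDatabase handle and a placeholder table name:

    // Option 1: commit in smaller batches so SQLite can checkpoint the -wal file.
    static void bulkInsert(SQLiteDatabase db, List<ContentValues> rows) {
        final int batchSize = 1000; // arbitrary; tune for your data
        for (int start = 0; start < rows.size(); start += batchSize) {
            db.beginTransaction();
            try {
                int end = Math.min(start + batchSize, rows.size());
                for (int i = start; i < end; i++) {
                    db.insert("my_table", null, rows.get(i)); // "my_table" is a placeholder
                }
                db.setTransactionSuccessful();
            } finally {
                db.endTransaction(); // committing here lets a checkpoint run between batches
            }
        }
    }

    // Option 2: disable WAL entirely around the bulk write (API 16+):
    //     db.disableWriteAheadLogging();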
We've got an Android app and an iPhone app (same functionality) that use SQLite for local data storage. The apps initially come with no data; on the first run they receive data from a remote server and store it in an SQLite database. The SQLite database is created by the server and the apps download it as one file, which is then used by the apps. The database file is not very large by today's standards, but not a tiny one either: about 5-6 MB.
Now, once in a while, the apps need to refresh the data from the server. There are a few approaches I can think of:
Download a new full database from the server and replace the existing one. This sounds like the simplest way to deal with the problem, were it not for the repeated 5-6 MB downloads. The apps do prompt the user about whether they want to download the updates, so this may not be too much of a problem.
Download a delta database from the server, containing only the new/modified records and, in some form, information about which records to delete. This would lead to a much smaller download size, but the work on the client side is more complicated: I would need to read one database and, based on what is read, update another one. To the best of my knowledge, there's no way with SQLite to do something like insert into db1.table1 (select * from db2.table1), where db1 and db2 are two SQLite databases containing a table1 of the same structure. (The full SQLite database contains about 10 tables, with the largest one probably holding about 500 records or so.)
Download a delta of the data in some other format (JSON, XML, etc.) and use this information to update the database in the app. Same as before: not too much of a problem on the server side, a smaller download size than the full database, but quite a painful process to do the updates.
Which of the three approaches would you recommend? Or maybe there's yet another way that I missed?
Many thanks in advance.
After much consideration and trial and error, I went for a combination of options (2) and (3).
If no data is present at all, then the app downloads a full database file from the server.
If data is present and an update is required, the app downloads a database from the server and checks a particular value in a particular table. That value states whether the new database is to replace the original or whether it contains deletions/updates/inserts.
This turns out to be the fastest way (performance-wise) and leaves all the heavy lifting (determining whether to put everything into one database or just an update) to the server. Further, with this approach, if I need to modify the algorithm to, say, always download the full database, it would only be a change on the server without the need to re-compile and re-distribute the app.
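As a side note on the copy-between-databases part of the question: SQLite's ATTACH DATABASE does make the insert into db1.table1 ... select from db2.table1 style of statement possible (with the caveat that, on Android, ATTACH cannot be used while write-ahead logging is enabled). A rough sketch, assuming a downloaded delta file containing a table1 with the same structure; paths and table names are placeholders:

    // Apply a downloaded delta database to the main database via ATTACH.
    SQLiteDatabase db = SQLiteDatabase.openDatabase(mainDbPath, null, SQLiteDatabase.OPEN_READWRITE);
    db.execSQL("ATTACH DATABASE '" + deltaDbPath + "' AS delta"); // must happen outside a transaction
    db.beginTransaction();
    try {
        // Copy new/updated rows; deletions would need their own statement.
        db.execSQL("INSERT OR REPLACE INTO main.table1 SELECT * FROM delta.table1");
        db.setTransactionSuccessful();
    } finally {
        db.endTransaction();
    }
    db.execSQL("DETACH DATABASE delta");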
Is there a way you could have a JSON field in each of the tables? For instance, if you have a table named users, add a column named "json" that stores the JSON for each user. In essence, it would contain the same information as the rest of the fields.
So when you download the delta in JSON, all you have to do is insert the JSON objects into the tables.
Of course, with this method you will need to do additional work parsing the JSON and creating the model/object from it, but it's just an extra 3-4 small steps.
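A rough sketch of what that could look like, assuming a hypothetical users table with _id and json columns and a downloaded JSON array string deltaPayload:

    // Hypothetical schema: CREATE TABLE users (_id INTEGER PRIMARY KEY, json TEXT)
    JSONArray delta = new JSONArray(deltaPayload);
    db.beginTransaction();
    try {
        for (int i = 0; i < delta.length(); i++) {
            JSONObject user = delta.getJSONObject(i);
            ContentValues values = new ContentValues();
            values.put("_id", user.getLong("id"));
            values.put("json", user.toString()); // keep the raw JSON alongside the key
            db.insertWithOnConflict("users", null, values, SQLiteDatabase.CONFLICT_REPLACE);
        }
        db.setTransactionSuccessful();
    } finally {
        db.endTransaction();
    }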
I would recommend approach 3, because the app will download the JSON file faster and the local DB can be updated more easily, avoiding the overhead of extra internet usage.
Just create an empty DB initially, matching the server DB's structure, and then update it regularly by fetching JSON.
I want users to be able to get additional content from my website, which means I will insert the downloaded data into the device's SQLite database. I am wondering if I am approaching this the right way.
My current approach is to create a REST web service which returns data in JSON format, then parse the JSON and insert it row by row into the SQLite db on the Android device.
Is this the right approach? Will it be too slow if there are many table rows to be inserted at one time? Or is there a way to download another SQLite db and merge it with the local one?
I welcome any suggestion, thank you in advance for your answer.
It works, but you absolutely need to paginate: set a limit on the number of elements sent by your REST service.
Another approach would be to download the complete SQLite database file at once, but that requires some tweaks. See http://www.reigndesign.com/blog/using-your-own-sqlite-database-in-android-applications/ (it is about embedding the database from the assets, but the preparation of the database is the same).
A last point: large numbers of inserts, as well as downloading data from a server, must be done on a separate thread or AsyncTask, from a Service (not an Activity, which can be interrupted), or even better from a SyncAdapter, which is invoked by the system itself.
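On the speed concern: row-by-row inserts are usually fine as long as each downloaded page is written inside a single transaction, so there is one commit per page instead of one per row. A minimal sketch, with a hypothetical items table and columns:

    // Bulk-insert one downloaded page of JSON rows with a compiled statement.
    static void insertPage(SQLiteDatabase db, JSONArray page) throws JSONException {
        SQLiteStatement insert =
                db.compileStatement("INSERT OR REPLACE INTO items (_id, title) VALUES (?, ?)");
        db.beginTransaction();
        try {
            for (int i = 0; i < page.length(); i++) {
                JSONObject row = page.getJSONObject(i);
                insert.clearBindings();
                insert.bindLong(1, row.getLong("id"));
                insert.bindString(2, row.getString("title"));
                insert.executeInsert();
            }
            db.setTransactionSuccessful(); // one commit for the whole page
        } finally {
            db.endTransaction();
            insert.close();
        }
    }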
I have an SQLite db, and it has audio files in it stored as blobs.
Is it possible in Android (or anywhere) to stream media from a db?
I would recommend not storing the audio data in the database. The memory issues mentioned earlier can lead to huge amounts of GC thrashing, which can make the system non-responsive for seconds or more at a time.
The typical approach involves a handful of steps.
Store the audio in a file somewhere in your application's directory.
Create two columns in your database. One column (called anything you like) contains a "content://" URL that references the data. Seeing a "content://" URL is a trigger to the system to then look up the contents of the "_data" column in the same row. The contents of that column should be the full path to the file.
The system then transparently reads that file, and presents it to whichever code actually requested the content.
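For illustration, inserting such a row might look roughly like this; the authority, table and the audio_url column name are placeholders, and _data is the conventional column that openFileHelper() (described below) reads:

    // Point a database row at a file on disk via a content:// URL plus the _data column.
    File audioFile = new File(getFilesDir(), "track_42.mp3");            // file saved in step 1
    Uri itemUri = Uri.parse("content://com.example.provider/tracks/42"); // hypothetical authority

    ContentValues values = new ContentValues();
    values.put("audio_url", itemUri.toString());      // the column can be called anything
    values.put("_data", audioFile.getAbsolutePath()); // full path the system will look up

    getContentResolver().insert(Uri.parse("content://com.example.provider/tracks"), values);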
I've got some example code for doing this with images -- obviously, it's not quite the same, but I can walk through it here and you should get the gist.
The specific problem I was trying to solve was storing album artwork for a track that's stored off the device. I wanted to be able to show the album artwork in a list, and cache it locally on the device so that repeatedly scrolling through it is fast and does not involve repeated network fetches for the same data.
I have an albums database, with various columns that get lazily populated from a remote server. I implement this database using the ContentProvider framework. There's a lot of great information about ContentProviders at http://developer.android.com/guide/topics/providers/content-providers.html, and you should read that first so that the rest of this makes sense.
The files involved are (note: I've linked to specific points in the tree because this is a work in progress and I want the line number references I give you to be stable):
https://github.com/nikclayton/android-squeezer/blob/02c08ace43f775412cc9715bf55aeb83e7b5f2dc/src/com/danga/squeezer/service/AlbumCache.java
This class defines various constants that are used elsewhere, and is pretty idiomatic for anything that's implemented as a ContentProvider.
In this class, COL_ARTWORK_PATH is the column that's going to contain the content:// URL.
https://github.com/nikclayton/android-squeezer/blob/02c08ace43f775412cc9715bf55aeb83e7b5f2dc/src/com/danga/squeezer/service/AlbumCacheProvider.java
This is the implementation of the ContentProvider. Again, this is pretty idiomatic for ContentProviders that are wrapping SQLite databases. Some points of interest:
429: albumListCallback()
This code is called whenever the app receives data about an album from the remote server (that's specific to my app, and not relevant to your problem). By this point the data has been wrapped up as a list of SqueezerAlbums, so this code has to unpack that data and turn it into rows in the database.
456: Here we call updateAlbumArt with enough data that it can do a remote fetch of the album artwork (and I've just realised looking at this code that I can make this more efficient because it's updating the database more often than it should. But I digress).
475: updateAlbumArt()
This has to fetch the remote image, resize it, store both the original and resized versions in the filesystem (why both? Because I haven't finished this, and there will be code to select the correct cached size later).
This creates a cache directory as necessary, downloads the remote image, resizes it, and saves it to the files.
535: This is the bit you're probably particularly interested in. This creates a content:// URL (using the constants in AlbumCache.java) that references the data, and puts that in COL_ARTWORK_PATH. Then it puts the absolute path to the file in the _data column.
571: openFile()
You must implement this. Users of the ContentProvider will call openFile() when they want to open the file in the database. This implementation uses openFileHelper(), which is the code that looks up the value in the _data column, opens that file, and returns a ParcelFileDescriptor to the caller.
As you may just have realised, your own implementation of openFile() doesn't have to do this -- you could use another column name, or perhaps you have a way of going straight from the URL to the file in the filesystem. This does seem to be a very common idiom though.
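For reference, the openFileHelper()-based implementation of openFile() is only a couple of lines; a minimal sketch, assuming the file path lives in the conventional _data column:

    // In your ContentProvider subclass: hand callers a descriptor for the row's file.
    @Override
    public ParcelFileDescriptor openFile(Uri uri, String mode) throws FileNotFoundException {
        // openFileHelper() queries this provider for the _data column of the row
        // identified by `uri`, opens that file, and returns its descriptor.
        return openFileHelper(uri, mode);
    }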
Assuming you've done something like that, and now have a ContentProvider for your database, to actually access the image your application will need to have generated a URI that references a given piece of content by its ID. The code in the app to open the file looks like this:
InputStream f = this.getContentResolver().openInputStream(theUri);
which ends up calling your implementation of openFile(), which ends up calling openFileHelper(), which finally gets to the right file in the filesystem. One other advantage of this approach is that openFile() is called in your application's security domain, so it can access the file, and, if implemented correctly, your ContentProvider can be called by completely different applications if you make the URLs that it responds to publicly known.
If you are storing the actual audio data in the database in a format that the Android audio system can natively interpret (e.g. MP3, 3GPP, Ogg), then one way you could do it is to implement a web server in a Service, and have that Service open the SQLite database, fetch the blob using Cursor.getBlob, and then feed that blob out to a MediaPlayer instance through the web server by wrapping the byte array in a ByteArrayInputStream. I've seen implementations like this done (in those cases it was from a file, not a db, but the same principles apply).
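The blob-reading half of that idea is short; a minimal sketch, assuming a hypothetical tracks table with an audio BLOB column (the local web server that would actually feed the stream to MediaPlayer is omitted):

    // Read the audio blob for one track and expose it as a stream.
    Cursor cursor = db.rawQuery("SELECT audio FROM tracks WHERE _id = ?",
            new String[] { String.valueOf(trackId) });
    try {
        if (cursor.moveToFirst()) {
            byte[] audioBytes = cursor.getBlob(0);               // the whole blob ends up in memory
            InputStream audioStream = new ByteArrayInputStream(audioBytes);
            // audioStream is what the local web server would write back to MediaPlayer
        }
    } finally {
        cursor.close();
    }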
Alternatively you could use AudioTrack, translating the audio if it's not in PCM format, then play it and handle the audio management yourself: probably a lot more work, but more efficient.
Note that this will be EXTREMELY memory-intensive and will probably perform poorly: for a 5 MB MP3 you'd basically have to hold the entire thing in memory, since it doesn't appear that Android's SQLite interface gives you a streaming interface to blobs. If you are loading multiple media files, then... bad things happen.