Android SDCard I/O Managing Mount and Unmount - android

I know there are various versions of this question asked, but I was having trouble getting a clear answer on what the best approach is for my problem.
What I am doing is I am creating a SQLite database on the SD card. I want to be able to query from it and write to it.
The question I have is: what is the best way to handle the SD card being unmounted? I am totally fine with my application closing, like the stock MP3 player does. However, I want to make sure no write to the db is left partially done.
One thought I had is to use beginTransaction, mark the transaction as successful, and then call endTransaction. But what happens if endTransaction never gets called? Can that potentially lead to data corruption? Also, I need a little help understanding what to listen to or hook into to get notified when the SD card is unmounted.
Thanks

That is the great thing about transactions in databases - you almost never need to worry:
"All changes within a single transaction in SQLite either occur
completely or not at all, even if the act of writing the change out to
the disk is interrupted by
a program crash, an operating system crash, or a power failure."
Taken from http://www.sqlite.org/transactional.html
The removal of the disk on which the database resides should (in the worst case) behave like a power failure while writing the data to the disk. The database will discard the incomplete data on next startup.
Thus, as soon as your transaction is committed using commit or end transaction, and the method call executing your statement returns, all data has been stored. Otherwise NO data from your transaction will have been stored; both cases leave you in a consistent state.
Beware of the one catch: all statements that need to execute together (i.e. atomically) must be within a single transaction!
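On Android this maps to the begin/mark/end idiom the asker mentions. A minimal sketch, assuming an open SQLiteDatabase db; the table and column names are made up for illustration:

```java
db.beginTransaction();
try {
    db.execSQL("UPDATE scores SET value = value + 1 WHERE id = ?", new Object[] { 42 });
    db.execSQL("INSERT INTO history (score_id) VALUES (?)", new Object[] { 42 });
    // Without this call, endTransaction() rolls everything back.
    db.setTransactionSuccessful();
} finally {
    // Commits if marked successful, rolls back otherwise, even if an
    // exception interrupted the statements above.
    db.endTransaction();
}
```

If endTransaction() is never reached at all (process killed, card removed), SQLite treats the transaction as uncommitted and discards it the next time the database is opened, exactly as the quote above describes.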

Related

Preventing data corruption when app exits unexpectedly

I am currently developing an RPG game for Android devices and have just implemented a custom method of serialisation I use for saving the player's progress. The saving process can take up to half a second which is enough time for a crash to occur (either caused by the device (i.e. low battery power off), the user (killing the app) or a poorly written firmware/ROM (kernel panics) etc).
While the player's data is being saved, the old player data is overwritten. This means that if a crash were to occur, and the saving process were cancelled/interrupted as a result, the player's data would be lost. This is obviously not ideal, and in the future the game will be saving a lot more data, so the save time will be much longer. This will increase the chance of a crash occurring during the save process.
I cannot reduce the save time as I am writing the minimal data the game requires to be able to resume after the app has been restarted.
What foolproof measures, if any, can I take to prevent such data corruption/loss from happening?
You can save your data to a temporary set of files, move/rename them when the process is complete, and then delete the previous save files.
If you're not confident in the renaming process, you can add these constraints:
- ensure that the data is consistent with a checksum
- always try to resume from the last consistent saved state, based on a rule of your own (name of the file, ...)
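A sketch of the temp-file-plus-rename approach; the file layout and the use of CRC32 for the checksum are my own illustration, not part of the original answer:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.zip.CRC32;

public class SafeSave {
    // Write the new payload and its checksum to side files, then atomically
    // rename the payload into place.
    public static void save(Path target, byte[] data) throws IOException {
        Path tmp = target.resolveSibling(target.getFileName() + ".tmp");
        Path crcFile = target.resolveSibling(target.getFileName() + ".crc");
        CRC32 crc = new CRC32();
        crc.update(data);
        Files.write(tmp, data);
        Files.write(crcFile, Long.toString(crc.getValue()).getBytes(StandardCharsets.UTF_8));
        // The rename either fully replaces the old save or leaves it untouched.
        Files.move(tmp, target, StandardCopyOption.ATOMIC_MOVE,
                StandardCopyOption.REPLACE_EXISTING);
    }

    // Read the save back, verifying the checksum; null signals corruption.
    public static byte[] load(Path target) throws IOException {
        byte[] data = Files.readAllBytes(target);
        long stored = Long.parseLong(new String(
                Files.readAllBytes(target.resolveSibling(target.getFileName() + ".crc")),
                StandardCharsets.UTF_8).trim());
        CRC32 crc = new CRC32();
        crc.update(data);
        return crc.getValue() == stored ? data : null;
    }
}
```

If a crash happens before the move, the old save file is untouched; if it happens after, the new save is already complete, so there is no window where the only copy is half-written.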
Another idea would be to cut your data into pieces in order to isolate state that does not change.
If the save time is really long, you can try to use spare CPU time during the game to pre-save parts of the current state that probably won't change (using a lower-priority Thread, for instance).
You can save your data to a SQLiteDatabase. If changes to the save data fail or are interrupted, the database will automatically roll back to the previous known-good state.
For additional security if you need to perform multiple updates atomically, put all your changes into a transaction. If any of the changes fail, the entire transaction will be rolled back to the pre-transaction state.
For more information about using SQLite, see documentation here. For easier manipulation of your save data in the event you want to share it with other apps or sync it to a backup server, consider interacting with your data via a ContentProvider.

when to commit the cached data into the database in android

I have a requirement in my application wherein I have to every now and then perform insert and update queries on the database.
Now, my question is: shall I maintain a cache within the application and, after specific intervals or maybe after, say, 20 entries, commit the cache to the database?
If this is the scenario, where shall I maintain the cache? Should it be at the application level or in the service?
There is a possibility that the application crashes abruptly, and hence the data in the cache no longer persists.
What are the possible scenarios for this?
I have read somewhere that opening and closing the database every time is an overhead.
Note: my database resides on the SD card.
Thanks.
I have a requirement in my application wherein I have to every now and then perform insert and update queries on the database. Now, my question is: shall I maintain a cache within the application and, after specific intervals or maybe after, say, 20 entries, commit the cache to the database?
Insert operations are fairly fast in SQLite. Make sure you close the database in a finally block so it isn't left open, which can block further inserts. Where possible, use batch inserts; they are much faster. See here
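The batch-insert advice can be sketched like this (assuming an open SQLiteDatabase db; the events table and payloads collection are made up). Wrapping the loop in one transaction avoids a disk sync per row:

```java
SQLiteStatement stmt = db.compileStatement("INSERT INTO events (payload) VALUES (?)");
db.beginTransaction();
try {
    for (String payload : payloads) {
        stmt.bindString(1, payload);
        stmt.executeInsert();   // reuses the compiled statement for each row
    }
    db.setTransactionSuccessful();
} finally {
    db.endTransaction();        // one commit for the whole batch
}
```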
If this is the scenario, where shall I maintain the cache? Should it be at the application level or in the service?
A Service also runs at the application level :) . One possibility is maintaining the cache in a Service and having the Service live in a separate process. See here.
There is a possibility that the application crashes abruptly, and hence the data in the cache no longer persists.
The only options left are to either maintain the cache in another process (which can again crash at any time) or keep it on disk, maybe in internal storage. Maintaining a cache in internal storage is as good or bad as writing to SQLite.
What are the possible scenarios for this?
Either you implement a cache or you don't.
I have read somewhere that opening and closing the database every time is an overhead. Note: my database resides on the SD card.
AFAIK, opening an SQLite database is comparable to opening a file (a little slower, though), not to opening an Oracle database. So most of the time it is affordable to open the database, but batching inserts to improve performance never hurts ;)
Remember that in Android, your application may be killed at any time. So if you want to make sure that all your data is saved, you should store it into the database every time some data is entered and not cache it.
There is overhead in accessing the database, so make sure you don't update the database on the UI thread. If you do it correctly, performance should not deteriorate for the user.
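One way to keep writes off the UI thread is a single-threaded background worker; this is a plain-Java sketch in which the written list merely stands in for the real SQLite calls:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class WriteQueue {
    private final ExecutorService worker = Executors.newSingleThreadExecutor();
    private final List<String> written = new ArrayList<>();  // stand-in for the database

    // Called from the UI thread; returns immediately, so the UI never blocks.
    public void submitWrite(final String row) {
        worker.execute(new Runnable() {
            public void run() {
                synchronized (written) {
                    written.add(row);  // real code would call db.insert(...) here
                }
            }
        });
    }

    // Flush pending writes and stop the worker, e.g. when the app is pausing.
    public List<String> drain() throws InterruptedException {
        worker.shutdown();
        worker.awaitTermination(5, TimeUnit.SECONDS);
        synchronized (written) {
            return new ArrayList<>(written);
        }
    }
}
```

Using exactly one worker thread also serializes access, so the database sees writes in the order they were submitted.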

ANR keyDispatchingTimedOut in onPause

I have an app that allows users to input a lot of data while on one particular Activity screen. If the user navigates away from this screen suddenly for any reason, I save their data to the app's SQLite database. This app has been out for nearly 2 years and has never encountered the ANR timeout, but within the last month I've received 2 reports in the market.
In one case the problem occurred when trying to close the database; in the other, when trying to open it. I can't reproduce the problem at all; I can see the exact sequence of steps that executed, but I don't know what I could possibly change. Both cases also revolved around securing a mutex for the database. I don't create the mutex myself; this is a system thing.
Basically, onPause gets called, the database gets opened, I write the data to the file, and I close the database. As far as I understand it, this is the proper way to do things if your data isn't able to just be stored in SharedPreferences (which my data can not be).
I don't understand why an open or close of a database would take so long, so what are my options here? My only assumption is that these problems are arising from users who have had the app for a very long time, and thus their database files have grown extremely large, and perhaps it makes the system take longer to lock down the database file (I'm just guessing here).
I thought of maybe starting an AsyncTask from onPause, but that seems wrong. Any suggestions?

How to safely read from a database that another process may be using?

I would like to read some data from the webview.db database that is created when your app uses a WebView. However, that database is managed by classes that are mostly package level or do not provide much in the way of access through apis.
Since I can't share their database objects or locks, how can I safely read content from that database without getting "database is locked" errors or similar exceptions?
I only need to read, I do not need to write.
If there is no way to safely acquire SQL queries/cursors on it, is there a way to safely copy the actual webview.db file in a thread-safe manner? Is there any danger of getting a corrupt copy?
Thanks much
I think all the locks are at the SQLite level, not in the Java code, so you should be able to get away with just opening the db and reading from it without confusing the WebView.
However, for the same reason, you should be prepared for lock errors and retry your queries until the lock is gone. The db will be locked while the WebView executes data-altering statements.
Hopefully the WebView doesn't hold an exclusive lock on the whole db for the entire time it's running, but I don't see a reason why it would.
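A retry loop for those lock errors might look like the following sketch. Detecting a locked database by message substring is an assumption for illustration; on Android you would typically catch SQLiteDatabaseLockedException instead:

```java
import java.util.concurrent.Callable;

public class LockRetry {
    // Run `query`, retrying with linear backoff while it fails with a
    // "database is locked" error; rethrow anything else immediately.
    public static <T> T withRetry(Callable<T> query, int maxAttempts, long backoffMillis)
            throws Exception {
        Exception last = null;
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            try {
                return query.call();
            } catch (Exception e) {
                String msg = e.getMessage();
                if (msg == null || !msg.contains("database is locked")) throw e;
                last = e;
                Thread.sleep(backoffMillis * (attempt + 1));
            }
        }
        throw last;  // still locked after all attempts
    }
}
```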

Implementing a robust persistent undo/redo feature

I'm writing a bitmap editor where I use the Command Pattern to represent actions that will transform the document. I keep all the commands executed so far in a list and, to implement undo, I restore the document to its initial state and then replay all but the last command.
I would like my undo/redo system to have the following feature: When the user closes the editor and returns, the document, including the available undo and redo commands, should be restored to the state it was in when the user left.
I'm implementing this for Android, where your application can be given very little notice before it is cleared from memory if e.g. the user gets a phone call. Also, some of my commands are e.g. a list of all the x,y coords the user painted, so these might take a few moments to save to disk.
My current idea is as follows:
When a new action is performed, the command object is added to a list S for commands that need to be saved to disk.
A background thread is used that will continually take commands from list S and save them to disk. The postfix of the filenames used will be numbered in sequence. For example, if the user filled the screen then drew 2 circles, the command files might be called FillCommand1.cmd, DrawCircleCommand2.cmd, DrawCircleCommand3.cmd.
Periodically, we save a "checkpoint" command whose purpose is to store the full document state so that, even if one of the .cmd files is corrupted, we can restore a recent version of the document.
When the user exits the app, the background thread attempts to finish up saving all the commands it can (but it might get killed).
On startup, we look for the most recent .cmd file that represents a checkpoint that we can load successfully. All the .cmd files we can load after this (i.e. some files might be corrupt) go in the redo command list, all the .cmd files we can load between the first checkpoint loaded and the oldest checkpoint we can load go in the undo list.
I want the undo limit to be about 20 or 30 commands back so I need extra logic for discarding commands, deleting .cmd files and I've got to worry about multi-threading behaviour. This system seems pretty complex and will need a lot of testing to make sure it doesn't go wrong.
Is there anything in Java or Android than can help make this easier? Am I reinventing the wheel anywhere? Maybe a database would be better?
Rather than reverting to the original then performing all actions, consider making Commands reversible. That way, if you ever decide to increase the size of your undo history, you won't be introducing the potential for lag while undoing. Alternatively, as Jared Updike notes, your application may benefit from caching render results in the near past and future.
I think you're overcomplicating things with your filesystem-based solution. If you want to maintain a backup of the entire history of the current working document, you should just keep an unbuffered log open in append mode and log actions to it. The log should be associated with the particular instance of the application and file being edited, so you don't have to worry about another thread stepping on your toes. Loading from that log should be very similar to loading from an ordinary save file; just discard the last-read action whenever you encounter an undo action.
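The replay rule described here (discard the most recent surviving action whenever an undo entry appears) can be sketched over an in-memory list of log lines; reading the lines back from the actual append-mode log file is left out:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class LogReplay {
    // Rebuild the effective action list from a sequence of logged lines,
    // where the literal entry "UNDO" cancels the most recent surviving action.
    public static List<String> replay(List<String> logLines) {
        Deque<String> actions = new ArrayDeque<>();
        for (String line : logLines) {
            if (line.equals("UNDO")) {
                if (!actions.isEmpty()) actions.removeLast();
            } else {
                actions.addLast(line);
            }
        }
        return new ArrayList<>(actions);
    }
}
```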
Well, your code is probably imperative in nature, where the state of the application is modified in place by the user's actions. This is probably fast and straightforward. Undo is basically time-travel and if you clobber old states by modifying state in place you will have to store either recipes to recompute it in reverse, or a history that can recompute it forwards.
Like you said, you can store the actions and the initial state and play them forward (stopping at the point in history the user selects), but that means undoing one action can cause n actions to replay. One approach is to store saved state copies in the history list so you can immediately jump to a given state. To avoid using too much RAM/storage, a clever system can detect the nearest (non-null) saved state in the history and recompute the few steps forward from it (assuming you have all the actions you need; this assumes actions are small and state is large(r)) until the correct state is reached. In this manner you can start eliminating old saved states (delete or set to null), dropping a state based on a cost function inversely proportional to how far back in time it is, making undo fast for the recent past and memory/storage-efficient for ancient history. I've had success with this method.
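A toy sketch of that checkpoint idea, with the document state reduced to a single integer and actions that simply add to it (purely illustrative; a real editor would snapshot bitmaps and replay Command objects):

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class Checkpoints {
    private final TreeMap<Integer, Integer> snapshots = new TreeMap<>();  // step -> state

    public Checkpoints() {
        snapshots.put(0, 0);  // the initial (empty) document state
    }

    // Record a snapshot of the state after `step` actions have been applied.
    public void snapshot(int step, int state) {
        snapshots.put(step, state);
    }

    // Restore the state after `step` actions by finding the nearest snapshot
    // at or before `step` and re-applying only the actions in between.
    public int restore(int step, List<Integer> actions) {
        Map.Entry<Integer, Integer> nearest = snapshots.floorEntry(step);
        int state = nearest.getValue();
        for (int i = nearest.getKey(); i < step; i++) {
            state += actions.get(i);  // re-apply action i
        }
        return state;
    }

    // Drop an old snapshot to save memory; restore() then replays further back.
    public void evict(int step) {
        if (step != 0) snapshots.remove(step);
    }
}
```

Evicting a snapshot never loses information as long as the action list is kept; it only makes restoring that region of history slower.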

Categories

Resources