I'm developing a quiz-type application for Android, and I aim to be able to expand the question bank over time. To do this, I'm pre-packaging the SQLite database, checking for updates, and overwriting it on upgrade:
@Override
public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
    if (newVersion > oldVersion) {
        InputStream inputStream = null;
        OutputStream outStream = null;
        String dbFilePath = DATABASE_PATH + DATABASE_NAME;
        try {
            inputStream = myContext.getAssets().open("questions.db");
            outStream = new FileOutputStream(dbFilePath);
            byte[] buffer = new byte[1024];
            int length;
            while ((length = inputStream.read(buffer)) > 0) {
                outStream.write(buffer, 0, length);
            }
            outStream.flush();
            outStream.close();
            inputStream.close();
        } catch (IOException e) {
            throw new Error("problem copying database from resource file");
        }
    }
}
What I'm unsure of is how to save whether a question has been attempted (and prevent that from being overwritten on upgrade). It seems my options are:
1) Save a boolean shared preference for every question id (potentially a LOT of sharedprefs)
2) Create a separate database to hold the data on which questions have been answered (seems like overkill)
3) Keep the results data in a separate table of the same database, and then ensure that only the questions table gets overwritten on upgrade
4) Keep the user data in its own column within the questions table, but prevent that column from being overwritten.
Of the above, 4 seems the most elegant (though I'm not sure it's possible). What is the best way of going about it (and if it's option 3 or 4, how do I go about modifying the code?!)
Much appreciated!
If you want to preserve data in the old database, use SQL to read the records from both databases. Only insert the new questions into your old database, so that you end up with a database containing all the data, via an "INSERT OR IGNORE" statement (SQLite has no "INSERT OR UPDATE"; OR IGNORE is the variant that leaves existing rows, and their user data, untouched).
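As a rough sketch of that merge (assuming the questions table has a stable primary key; dbFilePath is the existing database from the question, and freshCopyPath is a hypothetical temporary file holding the new questions.db copied from assets):

SQLiteDatabase db = SQLiteDatabase.openDatabase(dbFilePath, null, SQLiteDatabase.OPEN_READWRITE);
// Attach the freshly copied database and pull in only the new rows;
// existing rows (and any per-question user data in them) stay untouched.
db.execSQL("ATTACH DATABASE '" + freshCopyPath + "' AS fresh");
db.execSQL("INSERT OR IGNORE INTO questions SELECT * FROM fresh.questions");
db.execSQL("DETACH DATABASE fresh");
db.close();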
No way! Because of the structure of an SQLite database file, you can't edit the file yourself.
But you have two alternatives:
read the whole database, change the table you want, then write it back out; or
store your frequently rewritten table separately (a crude solution, I know) — a sketch of this second idea follows.
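A minimal sketch of keeping the user's results in a database file of their own, which the asset-copy upgrade never overwrites (all names here are hypothetical):

// Separate file for user data; overwriting questions.db never touches it.
File resultsFile = context.getDatabasePath("results.db");
SQLiteDatabase results = SQLiteDatabase.openOrCreateDatabase(resultsFile, null);
results.execSQL("CREATE TABLE IF NOT EXISTS attempts ("
        + "question_id INTEGER PRIMARY KEY, attempted INTEGER NOT NULL DEFAULT 0)");
// Mark a question as attempted (questionId is the id just answered):
results.execSQL("INSERT OR REPLACE INTO attempts VALUES (?, 1)",
        new Object[] { questionId });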
Whenever I update my database I get this error. But when I rerun the app as it is, the database gets updated.
android.database.sqlite.SQLiteReadOnlyDatabaseException: attempt to write a readonly database (code 1032)
Code:
public DBAdapter(Context context) {
    super(context, DATABASE_NAME, null, DATABASE_VERSION);
    ctx = context;
    db = getWritableDatabase();
}

@Override
public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
    if (oldVersion != newVersion) {
        ctx.deleteDatabase(DATABASE_NAME);
        new DBAdapter(ctx);
    } else {
        super.onUpgrade(db, oldVersion, newVersion);
    }
}
As one of the SO answers suggested, I have added this too:
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
BTW: I am using SQLiteAssetHelper to create prebuilt database
This is not a solution that prevents the issue, but a workaround.
public DBAdapter(Context context) {
    super(context, DATABASE_NAME, null, DATABASE_VERSION);
    ctx = context;
    try {
        db = getWritableDatabase();
    } catch (SQLiteReadOnlyDatabaseException e) {
        ctx.startActivity(new Intent(ctx, MainActivity.class));
    }
}

@Override
public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
    if (oldVersion != newVersion) {
        ctx.deleteDatabase(DATABASE_NAME);
        new DBAdapter(ctx);
    } else {
        super.onUpgrade(db, oldVersion, newVersion);
    }
}
The first time the adapter is initialized, a writable db is created and onUpgrade() gets called. When the database is deleted there, the adapter gets reinitialized, but the old connection to the db persists; hence, the second time db = getWritableDatabase(); is executed, the SQLiteReadOnlyDatabaseException occurs. With the workaround, the original activity that initialized the DBAdapter is restarted; the adapter is then reinitialized without onUpgrade() being called, and the SQLiteReadOnlyDatabaseException does not occur.
All of this happens very quickly, and in my case the user experience does not suffer.
Note: new DBAdapter(ctx); does not seem to be necessary, as deleteDatabase seems to trigger the re-creation on its own, but I have kept that line of code to be cautious.
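For what it's worth, one way to sidestep the stale handle entirely is to upgrade on the SQLiteDatabase handle that onUpgrade() hands you, instead of deleting the file behind it. This is only a sketch, and it does not fit the SQLiteAssetHelper prebuilt-database flow directly, since it assumes the schema can be recreated in code (the table name is hypothetical):

@Override
public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
    // Work on the connection Android passed in, rather than deleting the
    // database file it still holds open.
    db.execSQL("DROP TABLE IF EXISTS my_table"); // hypothetical table name
    onCreate(db); // recreate the schema on the same connection
}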
I would love to get some information on the cause and solution for this error.
I had some similar issues with Android SQLite databases, and long ago I submitted a bug report at https://code.google.com/p/android/issues/detail?id=174566, which discusses my findings on the reasons in more detail. I am not sure whether it is related to your issue, but it seems to share some characteristics.
To summarize here, my debugging indicated that Android opens the database file, calls onUpgrade(), and if you replace the database file during the onUpgrade() call, the Android side file handle points to the old file and thus causes the app to crash when you return from onUpgrade() and Android tries to access the old file.
Here is some code I used to get around the issue:
When the app starts, I did this in onCreate():
Thread t = new Thread(new Runnable() {
    @Override
    public void run() {
        Context context = getApplicationContext();
        DBReader.copyDB(MainActivity.this);
        DBReader.initialize(context);
    }
});
t.start();
This updates the database file in the background while the app is starting and the user is occupied with awe of the awesome application; my file was rather big, and it took a while to copy. Notice that I completely avoid doing anything in onUpgrade() here.
DBReader is my own class, for which the main code of interest is this:
SharedPreferences prefs = context.getSharedPreferences(Const.KEY_PREFERENCES, Context.MODE_PRIVATE);
// here we have stored the latest version of DB copied
String dbVersion = prefs.getString(Const.KEY_DB_VERSION, "0");
int dbv = Integer.parseInt(dbVersion);
if (checkIfInitialized(context) && dbv == DBHelper.DB_VERSION) {
    return;
}
File target = context.getDatabasePath(DBHelper.DB_NAME);
String path = target.getAbsolutePath();
//Log.d("Awesome APP", "Copying database to " + path);
path = path.substring(0, path.lastIndexOf("/"));
File targetDir = new File(path);
targetDir.mkdirs();
// Copy the database from assets
InputStream mInput = context.getAssets().open(DBHelper.DB_NAME);
OutputStream mOutput = new FileOutputStream(target.getAbsolutePath());
byte[] mBuffer = new byte[1024];
int mLength;
while ((mLength = mInput.read(mBuffer)) > 0) {
    mOutput.write(mBuffer, 0, mLength);
}
mOutput.flush();
mOutput.close();
mInput.close();
SharedPreferences.Editor edit = prefs.edit();
edit.putString(Const.KEY_DB_VERSION, "" + DBHelper.DB_VERSION);
edit.apply();
and the code for checkIfInitialized():
public static synchronized boolean checkIfInitialized(Context context) {
    File dbFile = context.getDatabasePath(DBHelper.DB_NAME);
    return dbFile.exists();
}
So, to make a long story short, I just avoided onUpgrade() altogether and implemented my own custom upgrade functionality. This avoids the problem of the Android OS crashing on old, invalid file handles caused by replacing the database in onUpgrade().
Kind of odd, I thought, for onUpgrade() to cause the OS to crash your app when you actually upgrade your database file inside the very function intended to let you upgrade your database. Google's comments on the bug report came a few years later, so I no longer had the original crashing code around for an easy proof of concept.
Your problem might be slightly different in that you are not copying the database file, but you still seem to be modifying it, so the root cause might be similar.
When the user downloads my app for the first time, the app downloads a CSV file which contains about 100,000 rows of data.
Then, I would like to populate my SQLite database with it.
Here is my code:
InputStream inputStream = activity.openFileInput(file);
InputStreamReader inputStreamReader = new InputStreamReader(inputStream);
BufferedReader bufferedReader = new BufferedReader(inputStreamReader);
String line = null;
dbHelper.beginTransaction();
while ((line = bufferedReader.readLine()) != null) {
    String[] values = line.replaceAll("'", "''").split(",");
    dbHelper.insert(values);
}
dbHelper.setTransactionSuccessful();
dbHelper.endTransaction();
dbHelper.close();
In my DBHelper class, here is the method insert:
public void insert(String[] data) {
    ContentValues values = new ContentValues();
    values.put(DBHelper.SHAPE_ID, data[0]);
    values.put(DBHelper.SHAPE_PT_LAT, data[1]);
    values.put(DBHelper.SHAPE_PT_LON, data[2]);
    values.put(DBHelper.SHAPE_PT_SEQUENCE, data[3]);
    myDB.insert(DBHelper.TABLE_SHAPES, null, values);
}
I tried this code and it worked, BUT it took 7 minutes to run...
Am I doing something wrong? Is there a better (read: faster) way to populate an SQLite database?
You should download the SQLite database file itself.
If there is other data in the database that you want to keep, you could either
copy that other data into the downloaded database (since it is not as much data, this should be fast); or
keep the downloaded database and the other database separated (and ATTACH one to the other if you need to do joins between them).
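A sketch of the ATTACH variant, with placeholder paths and table names:

// Keep the downloaded database separate and attach it for cross-database joins.
SQLiteDatabase db = SQLiteDatabase.openDatabase(mainDbPath, null, SQLiteDatabase.OPEN_READWRITE);
db.execSQL("ATTACH DATABASE '" + downloadedDbPath + "' AS dl");
Cursor c = db.rawQuery("SELECT * FROM trips JOIN dl.shapes ON trips.shape_id = dl.shapes.shape_id", null);
try {
    while (c.moveToNext()) {
        // ... read the joined columns ...
    }
} finally {
    c.close();
    db.execSQL("DETACH DATABASE dl");
}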
Use a prepared statement, that should give a performance boost.
Something like:
SQLiteStatement statement = db.compileStatement("INSERT INTO " + TABLE_SHAPES + " VALUES (?, ?, ?, ?)");
and change your insert() method to do something like:
statement.clearBindings();
statement.bindString(1, data[0]);
statement.bindString(2, data[1]);
statement.bindString(3, data[2]);
statement.bindString(4, data[3]);
statement.executeInsert();
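Put together with a single transaction, it would look roughly like this (a sketch reusing the reader loop from the question; note that with bound parameters the manual quote-escaping via replaceAll is no longer needed, since bound values are never spliced into the SQL text):

SQLiteStatement statement = db.compileStatement("INSERT INTO " + TABLE_SHAPES + " VALUES (?, ?, ?, ?)");
db.beginTransaction();
try {
    String line;
    while ((line = bufferedReader.readLine()) != null) {
        String[] data = line.split(",");
        statement.clearBindings();
        // ... the bindString() calls shown above ...
        statement.executeInsert();
    }
    db.setTransactionSuccessful();
} finally {
    db.endTransaction();
}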
According to this, the sqlite3 command-line shell has a built-in facility for importing a CSV file:
.separator ','
.import '/path/to/csv/data' [table]
I'd imagine this can be done on Android using DatabaseUtils.InsertHelper (note that its constructor takes an open SQLiteDatabase and a table name, not file paths):
DatabaseUtils.InsertHelper helper = new DatabaseUtils.InsertHelper(db, dbTablename);
// in a loop over your CSV rows:
helper.prepareForInsert();
// use the bind* methods to bind your columns of CSV data to the helper
helper.execute();
I also see that the class is now marked as deprecated (API level 17) in favor of SQLiteStatement, which may or may not be relevant to your application. If you have further questions, please leave a comment.
Can anyone explain why my inserts are taking so long in ORMLite? Doing 1,700 inserts in one SQLite transaction on the desktop takes less than a second; with ORMLite on Android, however, it takes about 70 seconds, and I can see each individual insert in the debugging messages.
When I try to wrap the inserts into one transaction, it runs at exactly the same speed. I understand that there is overhead both for Android and for ORMLite, but I wouldn't expect it to be that great. My code is below:
this.db = new DatabaseHelper(getApplicationContext());
dao = db.getAddressDao();
final BufferedReader reader = new BufferedReader(new InputStreamReader(getResources().openRawResource(R.raw.poi)));
try {
    dao.callBatchTasks(new Callable<Void>() {
        public Void call() throws Exception {
            String line;
            while ((line = reader.readLine()) != null) {
                String[] columns = line.split(",");
                Address address = new Address();
                // set up Address
                dao.create(address);
            }
            return null;
        }
    });
} catch (SQLException e) {
    e.printStackTrace();
} catch (Exception e) {
    e.printStackTrace();
}
I've had the same problem, and found a reasonable workaround. It took insert time from 2 seconds down to 150 ms:
final OrmLiteSqliteOpenHelper myDbHelper = ...;
final SQLiteDatabase db = myDbHelper.getWritableDatabase();
db.beginTransaction();
try {
    // do ormlite stuff as usual, no callBatchTasks() needed
    db.setTransactionSuccessful();
} finally {
    db.endTransaction();
}
Update:
Just tested this on an Xperia M2 Aqua (Android 4.4/ARM), and there callBatchTasks() is actually faster: 90 ms vs. 120 ms. So I think more details are in order.
We have 3 tables/classes/DAOs: Parent, ChildWrapper, Child.
Relations: Parent to ChildWrapper is one-to-many; ChildWrapper to Child is many-to-one.
Code goes like this:
void saveData(xml) {
    for (parents in xml) {
        parentDao.createOrUpdate(parent);
        for (children in parentXml) {
            childDao.createOrUpdate(child);
            childWrapperDao.createOrUpdate(generateWrapper(parent, child));
        }
    }
}
I originally got the speed-up on a specific Android 4.2/MIPS set-top box (STB). callBatchTasks() was the first option because that's what we use throughout the codebase, and it works well.
parentDao.callBatchTasks(new Callable<Void>() {
    public Void call() throws Exception {
        // ...
        saveData();
        // ...
        return null;
    }
});
But inserts were slow, so we tried nesting callBatchTasks() for every DAO used, setting autocommit off, startThreadConnection(), and probably something else I don't remember at the moment. To no avail.
From my own experience and other similar posts, it seems the problem occurs when several tables/DAOs are involved, and it has something to do with implementation specifics of Android (or SQLite) on concrete devices.
Unfortunately, this may be "expected". I get similar performance when I do that number of inserts under my emulator as well. The batch-tasks and turning off auto-commit don't seem to help.
If you are looking to load a large amount of data into a database, you might consider replaying a database dump instead. See here:
Android OrmLite pre-populate database
My guess would be that you are slowed down somewhat because you are doing two I/O tasks at the same time (at least in the code shown above): you are reading from a file and writing to a database (which is also a file). Also, from what I understand, transactions should be a reasonable size; 1,600 seems like a very high number. I would start with 100 but play around with the size.
So essentially I suggest you "chunk" your reads and inserts.
Read 100 lines into a temp array, then insert those 100. Then read the next 100, then insert, and so on, roughly as in the sketch below.
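A sketch of the chunked version, reusing the dbHelper from the question and assuming these methods live in the same class as the code above (CHUNK_SIZE is an arbitrary starting point to tune):

static final int CHUNK_SIZE = 100;

void importCsv(BufferedReader bufferedReader, DBHelper dbHelper) throws IOException {
    List<String[]> chunk = new ArrayList<>(CHUNK_SIZE);
    String line;
    while ((line = bufferedReader.readLine()) != null) {
        chunk.add(line.replaceAll("'", "''").split(","));
        if (chunk.size() == CHUNK_SIZE) {
            insertChunk(dbHelper, chunk);
            chunk.clear();
        }
    }
    if (!chunk.isEmpty()) {
        insertChunk(dbHelper, chunk); // remainder
    }
}

// One transaction per chunk keeps each commit a reasonable size.
void insertChunk(DBHelper dbHelper, List<String[]> chunk) {
    dbHelper.beginTransaction();
    try {
        for (String[] values : chunk) {
            dbHelper.insert(values);
        }
        dbHelper.setTransactionSuccessful();
    } finally {
        dbHelper.endTransaction();
    }
}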
I recently inherited a project where an SQLite db is stored on the user's sdcard (tables and columns only, no content). For the initial install (and subsequent data updates), an XML file is parsed via a SAX parser, storing its contents to the db columns like so:
saxParser:
@Override
public void endElement(String uri, String localName, String qName) throws SAXException {
    currentElement = false;
    if (localName.equals("StoreID")) {
        storeDetails.setStoreId(buffer.toString().trim());
    } else if (localName.equals("StoreName")) {
        storeDetails.setStoreName(buffer.toString().trim());
    ...
    } else if (localName.equals("StoreDescription")) {
        storeDetails.setStoreDescription(buffer.toString().trim());
        // when the final column is checked, call the custom db helper method
        dBHelper.addtoStoreDetail(storeDetails);
    }
    buffer = new StringBuffer();
}

@Override
public void characters(char[] ch, int start, int length) throws SAXException {
    if (currentElement) {
        buffer.append(ch, start, length);
    }
}
DatabaseHelper:
// add to StoreDetails table
public void addtoStoreDetail(StoreDetails storeDetails) {
    SQLiteDatabase database = null;
    InsertHelper ih = null;
    try {
        database = getWritableDatabase();
        ih = new InsertHelper(database, "StoreDetails");
        // Get the numeric indexes for each of the columns that we're updating
        final int idColumn = ih.getColumnIndex("_id");
        final int nameColumn = ih.getColumnIndex("StoreName");
        ...
        final int descColumn = ih.getColumnIndex("StoreDescription");
        // Add the data for each column
        ih.bind(idColumn, storeDetails.getStoreId());
        ih.bind(nameColumn, storeDetails.getStoreName());
        ...
        ih.bind(descColumn, storeDetails.getStoreDescription());
        // Insert the row into the database.
        ih.execute();
    } finally {
        ih.close();
        safeCloseDataBase(database);
    }
}
The loaded XML document is 6,000+ lines long. When testing on the device, it stops inserting about halfway through (with no errors), which takes about 4-5 minutes. On the emulator, however, it runs rather quickly, writing all lines to the database in about 20 seconds. I have log statements that run when the db is opened, data added, then closed, and the LogCat output is significantly slower when running on the device. Is there something I'm missing here? Why is my data taking so long to write? I thought the improved InsertHelper would help, but unfortunately it isn't even a little faster. Can someone point out my flaw(s) here?
I also counted on InsertHelper improving the speed significantly, but the difference wasn't that drastic when I tested it.
Still, the strength of InsertHelper is in multiple inserts, because it compiles the query just once. The way you do it, you declare a new InsertHelper for every insert, which bypasses the one-time-compilation improvement. Try using the same instance for multiple inserts.
However, I do not think that 6,000+ inserts will go in less than a minute on a slow device.
EDIT: Also make sure you fetch the column indices only once; this will speed things up a bit more. Place these outside the loop for the batch insert:
// Get the numeric indexes for each of the columns that we're updating
final int idColumn = ih.getColumnIndex("_id");
final int nameColumn = ih.getColumnIndex("StoreName");
final int descColumn = ih.getColumnIndex("StoreDescription");
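A rough sketch of reusing one InsertHelper (and the fetched indices) across the whole batch; the storeDetailsList collection is assumed to have been parsed up front:

SQLiteDatabase database = getWritableDatabase();
InsertHelper ih = new InsertHelper(database, "StoreDetails");
final int idColumn = ih.getColumnIndex("_id");
final int nameColumn = ih.getColumnIndex("StoreName");
final int descColumn = ih.getColumnIndex("StoreDescription");
try {
    for (StoreDetails sd : storeDetailsList) {
        ih.prepareForInsert(); // resets the helper for the next row
        ih.bind(idColumn, sd.getStoreId());
        ih.bind(nameColumn, sd.getStoreName());
        ih.bind(descColumn, sd.getStoreDescription());
        ih.execute();
    }
} finally {
    ih.close();
    safeCloseDataBase(database);
}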
When you're doing a batch insert like this, you might do better to set up a special action in your DB helper for it. Right now you are opening and closing the connection to the SQLite DB every time you insert a row, which is going to slow you down significantly. For the batch process, set it up so that you maintain the connection for the whole import job. I think the reason it is faster in the emulator is that, while running, the emulator exists entirely in memory, so although it intentionally slows down your CPU speed, file I/O comes a lot faster.
In addition to connecting to the database just once, could the connection be set not to commit the changes after each insert, but only once at the end of the batch? I tried to browse the Android dev docs but couldn't find exact instructions on how to do this (or whether it is already set up that way). On other platforms, setting the SQLite driver's AutoCommit to false and committing only at the end of the batch can improve insert speeds significantly.
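On Android there is no AutoCommit flag to flip, but wrapping the whole batch in a single transaction has the same effect; a minimal sketch:

SQLiteDatabase db = getWritableDatabase();
db.beginTransaction(); // suspends the commit-per-statement behaviour
try {
    // ... run all the inserts for the batch ...
    db.setTransactionSuccessful();
} finally {
    db.endTransaction(); // one commit (or rollback) for the whole batch
}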
I initially created an SQLite database on Windows and then had problems accessing it within Android.
Subsequently I created a database on Android and then copied it out. At this point it only had the android_metadata table in it.
I then imported some data via CSV and added it back into my project. The DbHelper class in my project copies the database into /data/data/my.project/databases/.
Now, when I run a raw query against this database and try to access the table imported via CSV, I get an error saying that the table doesn't exist. If I try to access the android_metadata table, which I created on Android, there is no error.
The database in my assets definitely has the table in that I wish to copy over to the /data/data/example.project/databases folder and the copy routine is definitely called - I've checked with the log output.
Now, if I comment out the copy code, a database is automatically created which contains the android_metadata table, and it is ~3 KB.
When the copy code is live, the database is created at ~8 KB, which is the size of the database in the assets, so it appears to have been copied successfully. However, when I pull that database back to my desktop via DDMS, it is ~8 KB but doesn't contain the table that is in the assets version. If I manually copy directly from the desktop into /data/data..., then the database works (but this will not be possible with a Market app).
Here is my code for copying the database:
public void createDatabase() throws IOException {
    Log.i(TAG, "createDatabase called");
    InputStream assetsDB = mContext.getAssets().open(DATABASE_NAME);
    OutputStream dbOut = new FileOutputStream(DATABASE_PATH);
    Log.i(TAG, DATABASE_PATH);
    Log.i(TAG, assetsDB.toString());
    byte[] buffer = new byte[1024];
    int length;
    while ((length = assetsDB.read(buffer)) > 0) {
        Log.i(TAG, "WritingDB block " + length);
        dbOut.write(buffer, 0, length);
    }
    dbOut.flush();
    dbOut.close();
    assetsDB.close();
}
How can I fix this problem?
I've rectified this by using another example, which doesn't override onCreate() with the database copy code and instead handles the copying of the database on its own. I don't really understand why it doesn't work when it is called from the onCreate() method.
Have you seen Using your own SQLite database in Android applications?
This page is a good source for the topic, but there is one little problem. Actually, it is not a problem, and how to fix it is explained on the page; look at the comments.
If your database is sort of largish or smallish (> 1 MB or < 100 KB; I am not sure about these values), it seems that it is compressed in the package, and that causes confusion in the Android read of the InputStream. The trick is to rename your asset to a file extension that the packager will not try to compress: renaming the database file from xxx to xxx.mp3 or xxx.txt or something like that does the trick.
If I understand you correctly, I had the same problem with loading an SQLite database from other sources (I used the Firefox SQLite Manager too).
I wanted to read a temporary database from the assets folder at startup and fill my application database with test data, and I usually got this error.
I needed to put this code before loading my test database (presumably it creates an empty database, so that the databases directory exists, and then releases the handle before the file is overwritten):
final SQLiteDatabase db = getReadableDatabase();
db.close();
My database helper class:
public DataBaseHelper(Context context) {
    super(context,
          DataBaseHelper.DATABASE_NAME,
          null,
          DataBaseHelper.DATABASE_VERSION);
    this.context = context;
    // Temporarily copy the test database
    loadMockDataBase();
    dataBase = getWritableDatabase();
}

@Override
public final synchronized void close() {
    if (dataBase != null) {
        dataBase.close();
    }
    super.close();
}

private void loadMockDataBase() {
    final SQLiteDatabase db = getReadableDatabase();
    db.close();
    try {
        copyDataBase();
    } catch (final IOException e) {
        Log.d(SystemConfiguration.LOG_TAG, e.getMessage());
    }
}