SQLite Android Database Cursor window allocation of 2048 kb failed - android

I have a routine that runs different queries against an SQLite database many times per second. After a while I would get the error
"android.database.CursorWindowAllocationException: - Cursor window allocation of 2048 kb failed. # Open Cursors = " appear in LogCat.
I had the app log memory usage, and indeed when usage reaches a certain limit I get this error, implying it has run out. My intuition tells me that the database engine is creating a NEW buffer (CursorWindow) every time I run a query, and even though I .close() the cursors, neither the garbage collector nor SQLiteDatabase.releaseMemory() is quick enough at freeing memory. I think the solution may lie in "forcing" the database to always write into the same buffer and not create new ones, but I have been unable to find a way to do this. I have tried instantiating my own CursorWindow, and tried setting SQLiteCursor to it, but to no avail.
Any ideas?
EDIT: re example code request from @GrahamBorland:
public static CursorWindow cursorWindow = new CursorWindow("cursorWindow");
public static SQLiteCursor sqlCursor;

public static void getItemsVisibleArea(GeoPoint mapCenter, int latSpan, int lonSpan) {
    String query = "SELECT * FROM Items"; // would be more complex in real code
    sqlCursor = (SQLiteCursor) db.rawQuery(query, null);
    sqlCursor.setWindow(cursorWindow);
}
Ideally I would like to be able to .setWindow() before issuing a new query, and have the data put into the same CursorWindow every time I get new data.

Most often the cause of this error is unclosed cursors. Make sure you close all cursors after using them (even in case of an error):
Cursor cursor = null;
try {
    cursor = db.query(...
    // do some work with the cursor here.
} finally {
    // this gets called even if there is an exception somewhere above
    if (cursor != null)
        cursor.close();
}
To make your app crash when you are not closing a cursor, you can enable StrictMode with detectLeakedSqlLiteObjects in your Application's onCreate():
StrictMode.VmPolicy policy = new StrictMode.VmPolicy.Builder()
        .detectLeakedClosableObjects()
        .detectLeakedSqlLiteObjects()
        .penaltyDeath()
        .penaltyLog()
        .build();
StrictMode.setVmPolicy(policy);
Obviously you would only enable this for debug builds.

If you're having to dig through a significant amount of SQL code, you may be able to speed up your debugging by putting the following snippet in your MainActivity to enable StrictMode. If leaked database objects are detected, your app will crash with log info highlighting exactly where your leak is. This helped me locate a rogue cursor in a matter of minutes.
@Override
protected void onCreate(Bundle savedInstanceState) {
    if (BuildConfig.DEBUG) {
        StrictMode.setVmPolicy(new StrictMode.VmPolicy.Builder()
                .detectLeakedSqlLiteObjects()
                .detectLeakedClosableObjects()
                .penaltyLog()
                .penaltyDeath()
                .build());
    }
    super.onCreate(savedInstanceState);
    ...

I have just experienced this issue, and the suggested answer of not closing the cursor, while valid, was not how I fixed it. My issue was closing the database while SQLite was trying to repopulate its cursor. I would open the database, query it to get a cursor to a data set, close the database, and then iterate over the cursor. Whenever I hit a certain record in that cursor, my app would crash with the same error as in the OP.
I assume that for the cursor to access certain records it needs to re-query the database, and if the database is closed it will throw this error. I fixed it by not closing the database until I had completed all the work I needed.
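In code, the fix is just a matter of ordering the two close() calls; here is a minimal sketch with illustrative names (not the original poster's code):
SQLiteDatabase db = dbHelper.getWritableDatabase();
Cursor cursor = db.query("Items", null, null, null, null, null, null);
try {
    while (cursor.moveToNext()) {
        // work with the current row; the cursor may refill its
        // window here, which requires the database to still be open
    }
} finally {
    cursor.close(); // close the cursor first...
    db.close();     // ...and only then the database
}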

There is indeed a maximum size an Android SQLite cursor window can take, and that is 2 MB; anything larger than this results in the error above. Most often the error is caused by a large image byte array stored as a blob in the SQL database, or by very long strings. Here is how I fixed it.
Create a Java class, e.g. FixCursorWindow, and put the code below in it:
public class FixCursorWindow {
    public static void fix() {
        try {
            // sCursorWindowSize is a hidden static field on CursorWindow;
            // raising it via reflection enlarges every cursor window (pre-Android P).
            Field field = CursorWindow.class.getDeclaredField("sCursorWindowSize");
            field.setAccessible(true);
            field.set(null, 102400 * 1024); // 100 MB instead of the default 2 MB
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
Now go to your application class (create one if you don't already have one) and call FixCursorWindow.fix() like this:
public class App extends Application {
    @Override
    public void onCreate() {
        super.onCreate();
        FixCursorWindow.fix();
    }
}
Finally, ensure you register your application class in the manifest's application tag:
android:name=".App">
That's all, it should work perfectly now.

If you're running Android P, you can create your own cursor window like this:
if (cursor instanceof SQLiteCursor && Build.VERSION.SDK_INT >= Build.VERSION_CODES.P) {
    // 10 MB window instead of the default 2 MB
    ((SQLiteCursor) cursor).setWindow(new CursorWindow(null, 1024 * 1024 * 10));
}
This allows you to modify the cursor window size for a specific cursor without resorting to reflection.

Here is @whlk's answer with Java 7 automatic resource management (try-with-resources):
try (Cursor cursor = db.query(...)) {
    // do some work with the cursor here.
}

This is a common exception, especially when working with an external SQLite database. You can resolve it by closing the Cursor object when you are done with it:
if (myCursor != null)
    myCursor.close();
In other words: if the cursor is open and holding memory, close it, so the cursor window is freed and database-related functionality stays responsive.


Related

Ormlite Android bulk inserts

Can anyone explain why my inserts are taking so long in ORMLite? Doing 1,700 inserts in one SQLite transaction on the desktop takes less than a second. However, when using ORMLite for Android it's taking about 70 seconds, and I can see each insert in the debugging messages.
When I try to wrap the inserts into one transaction, it goes at exactly the same speed. I understand that there is overhead both for Android and for ORMLite; however, I wouldn't expect it to be that great. My code is below:
this.db = new DatabaseHelper(getApplicationContext());
dao = db.getAddressDao();
final BufferedReader reader = new BufferedReader(
        new InputStreamReader(getResources().openRawResource(R.raw.poi)));
try {
    dao.callBatchTasks(new Callable<Void>() {
        public Void call() throws Exception {
            String line;
            while ((line = reader.readLine()) != null) {
                String[] columns = line.split(",");
                Address address = new Address();
                // setup Address
                dao.create(address);
            }
            return null;
        }
    });
} catch (SQLException e) {
    e.printStackTrace();
} catch (Exception e) {
    e.printStackTrace();
}
I've had the same problem and found a reasonable workaround. It took insert time from 2 seconds down to 150 ms:
final OrmLiteSqliteOpenHelper myDbHelper = ...;
final SQLiteDatabase db = myDbHelper.getWritableDatabase();
db.beginTransaction();
try {
    // do ORMLite stuff as usual, no callBatchTasks() needed
    db.setTransactionSuccessful();
} finally {
    db.endTransaction();
}
Update:
Just tested this on an Xperia M2 Aqua (Android 4.4/ARM) and callBatchTasks() is actually faster: 90 ms vs 120 ms. So I think more details are in order.
We have 3 tables/classes/DAOs: Parent, ChildWrapper, Child.
Relations: Parent to ChildWrapper - 1 to n, ChildWrapper to Child - n to 1.
Code goes like this:
void saveData(xml) {
    for (parents in xml) {
        parentDao.createOrUpdate(parent);
        for (children in parentXml) {
            childDao.createOrUpdate(child);
            childWrapperDao.createOrUpdate(generateWrapper(parent, child));
        }
    }
}
I originally got the speed-up on a specific Android 4.2/MIPS set-top box (STB).
callBatchTasks() was the first option because that's what we use throughout all the code, and it works well:
parentDao.callBatchTasks(new Callable<Void>() {
    public Void call() throws Exception {
        // ...
        saveData();
        // ...
        return null;
    }
});
But inserts were slow, so we tried nesting callBatchTasks() for every DAO used, setting autocommit off, startThreadConnection(), and probably something else (I don't remember at the moment). To no avail.
From my own experience and other similar posts, it seems the problem occurs when several tables/DAOs are involved, and it has something to do with implementation specifics of Android (or SQLite) on particular devices.
Unfortunately, this may be "expected". I get similar performance when I do that number of inserts under my emulator as well. The batch-tasks and turning off auto-commit don't seem to help.
If you are looking to load a large amount of data into a database, you might consider replaying a database dump instead. See here:
Android OrmLite pre-populate database
My guess would be that you are slowed somewhat because you are doing two IO tasks at one time (at least in the code shown above): you are reading from a file and writing to a database (which is also a file). Also, from what I understand, transactions should be a reasonable size; 1,600 seems like a very high number. I would start with 100 but play around with the size.
So essentially I suggest you "chunk" your reads and inserts:
Read 100 lines into a temp array, then insert those 100. Then read the next 100, then insert, and so on.
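A minimal sketch of that chunking, reusing the dao and reader from the question's code; parseAddress() and insertChunk() are hypothetical helpers standing in for the elided "// setup Address" step:
final int CHUNK_SIZE = 100;
final List<Address> chunk = new ArrayList<Address>(CHUNK_SIZE);
String line;
while ((line = reader.readLine()) != null) {
    chunk.add(parseAddress(line)); // hypothetical: build an Address from the CSV line
    if (chunk.size() == CHUNK_SIZE) {
        insertChunk(dao, chunk);
    }
}
if (!chunk.isEmpty()) {
    insertChunk(dao, chunk); // flush the remainder
}

// One callBatchTasks() call per chunk keeps each transaction small.
void insertChunk(final Dao<Address, Integer> dao, final List<Address> chunk) throws Exception {
    dao.callBatchTasks(new Callable<Void>() {
        public Void call() throws Exception {
            for (Address address : chunk) {
                dao.create(address);
            }
            return null;
        }
    });
    chunk.clear();
}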

Robolectric: Testing with ormlite

I'm trying to test ORMLite DAOs with Robolectric, but database behaviour is not the same as when used from my Android app. My DAOs work perfectly well in the Android application.
Reading about Robolectric shadows and debugging code, I encountered ShadowSQLiteOpenHelper (code here).
Does anyone know if this shadow is enough to test ORMLite DAOs, or do I have to create my own shadow to achieve that? Any clue/tip/suggestion/example is welcome.
Thanks in advance.
Extra info:
Test method:
@Test
public void basicTest() throws SQLException {
    assertNotNull(randomStringResource); // Injection of an Android resource: OK
    assertThat(randomStringResource, equalTo("Event")); // With correct value: OK
    assertNotNull(eventDao); // DAO injection: OK
    assertThat(eventDao.countOf(), equalTo(0L)); // Table empty: OK
    Event e1 = new Event("e1", new Date());
    eventDao.create(e1);
    assertNotNull(e1.getId()); // ID generated by ORMLite: OK
    assertThat(eventDao.countOf(), equalTo(1L)); // Table not empty: OK
    assertThat("e1", equalTo(eventDao.queryForId(e1.getId()).getName())); // Query for inserted event: throws exception
}
Some of the problems encountered running this test:
Errors querying entities with "camelCased" property names: an error is thrown at the last line of the test (related problem). I never had a problem like this running the Android app.
When I renamed one of these properties (e.g., isEnabled to enabled) to avoid the camelCase problem, the previous error persisted... it seems the in-memory database didn't pick up the changes I made to the entity.
Versions used:
Robolectric 1.1
OrmLite 4.41
Sorry for resurrecting your topic, but I ran into the same problem.
I'm using OrmLite 4.45 and Robolectric 2.1.
In ShadowSQLiteCursor.java, the cacheColumnNames method calls toLowerCase on each column name, so I decided to extend ShadowSQLiteCursor with my own shadow (which doesn't call toLowerCase):
/**
 * Simulates an Android Cursor object, by wrapping a JDBC ResultSet.
 */
@Implements(value = SQLiteCursor.class, inheritImplementationMethods = true)
public class ShadowCaseSensitiveSQLiteCursor extends ShadowSQLiteCursor {
    private ResultSet resultSet;

    public void __constructor__(SQLiteCursorDriver driver, String editTable, SQLiteQuery query) {}

    /**
     * Stores the column names so they are retrievable after the resultSet has closed.
     */
    private void cacheColumnNames(ResultSet rs) {
        try {
            ResultSetMetaData metaData = rs.getMetaData();
            int columnCount = metaData.getColumnCount();
            columnNameArray = new String[columnCount];
            for (int columnIndex = 1; columnIndex <= columnCount; columnIndex++) {
                String cName = metaData.getColumnName(columnIndex);
                this.columnNames.put(cName, columnIndex - 1);
                this.columnNameArray[columnIndex - 1] = cName;
            }
        } catch (SQLException e) {
            throw new RuntimeException("SQL exception in cacheColumnNames", e);
        }
    }
}
My answer obviously comes too late, but it may help others!

Android + sqlite insert speed improvements?

I recently inherited a project where a SQLite db is stored on the user's SD card (tables and columns only, no content). For the initial install (and subsequent data updates), an XML file is parsed via SAXParser, storing its contents in the db columns like so:
saxParser:
@Override
public void endElement(String uri, String localName, String qName) throws SAXException {
    currentElement = false;
    if (localName.equals("StoreID")) {
        buffer.toString().trim();
        storeDetails.setStoreId(buffer.toString());
    } else if (localName.equals("StoreName")) {
        buffer.toString().trim();
        storeDetails.setStoreName(buffer.toString());
    ...
    } else if (localName.equals("StoreDescription")) {
        buffer.toString().trim();
        storeDetails.setStoreDescription(buffer.toString());
        // when the final column is checked, call custom db helper method
        dBHelper.addtoStoreDetail(storeDetails);
    }
    buffer = new StringBuffer();
}

@Override
public void characters(char[] ch, int start, int length) throws SAXException {
    if (currentElement) {
        buffer.append(ch, start, length);
    }
}
DatabaseHelper:
// add to StoreDetails table
public void addtoStoreDetail(StoreDetails storeDetails) {
    SQLiteDatabase database = null;
    InsertHelper ih = null;
    try {
        database = getWritableDatabase();
        ih = new InsertHelper(database, "StoreDetails");
        // Get the numeric indexes for each of the columns that we're updating
        final int idColumn = ih.getColumnIndex("_id");
        final int nameColumn = ih.getColumnIndex("StoreName");
        ...
        final int descColumn = ih.getColumnIndex("StoreDescription");
        // Add the data for each column
        ih.bind(idColumn, storeDetails.getStoreId());
        ih.bind(nameColumn, storeDetails.getStoreName());
        ...
        ih.bind(descColumn, storeDetails.getStoreDescription());
        // Insert the row into the database.
        ih.execute();
    } finally {
        ih.close();
        safeCloseDataBase(database);
    }
}
The loaded XML document is 6,000+ lines long. When testing on the device, it stops inserting about halfway through (with no errors), which takes about 4-5 minutes. On the emulator, however, it runs rather quickly, writing all lines to the database in about 20 seconds. I have log statements that run when the db is opened, data is added, and it is closed; the LogCat output is significantly slower when running on the device. Is there something I'm missing here? Why is my data taking so long to write? I thought the improved InsertHelper would help, but unfortunately it isn't even a little faster. Can someone point out my flaw(s) here?
I also counted on InsertHelper improving the speed significantly, but the difference wasn't that drastic when I tested it.
Still, the strength of InsertHelper is in multiple inserts, because it compiles the query just once. The way you do it, you declare a new InsertHelper for every insert, which bypasses the one-time-compilation improvement. Try using the same instance for multiple inserts, as in the sketch after the snippet below.
However, I do not think that 6,000+ inserts will go in less than a minute on a slow device.
EDIT: Also make sure you fetch the column indices only once; this will speed things up a bit more. Place these outside the loop for the batch insert:
// Get the numeric indexes for each of the columns that we're updating
final int idColumn = ih.getColumnIndex("_id");
final int nameColumn = ih.getColumnIndex("StoreName");
final int descColumn = ih.getColumnIndex("StoreDescription");
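For example, here is a minimal sketch of a batch variant of addtoStoreDetail() that reuses one InsertHelper for all rows; the List parameter and the loop are assumptions, not the asker's code:
// Hypothetical batch version: one InsertHelper, one open/close, many rows.
public void addStoreDetails(List<StoreDetails> allDetails) {
    SQLiteDatabase database = null;
    InsertHelper ih = null;
    try {
        database = getWritableDatabase();
        ih = new InsertHelper(database, "StoreDetails");
        // Column indices fetched once, outside the loop.
        final int idColumn = ih.getColumnIndex("_id");
        final int nameColumn = ih.getColumnIndex("StoreName");
        final int descColumn = ih.getColumnIndex("StoreDescription");
        for (StoreDetails storeDetails : allDetails) {
            ih.prepareForInsert(); // reuses the compiled INSERT statement
            ih.bind(idColumn, storeDetails.getStoreId());
            ih.bind(nameColumn, storeDetails.getStoreName());
            ih.bind(descColumn, storeDetails.getStoreDescription());
            ih.execute();
        }
    } finally {
        if (ih != null) ih.close();
        safeCloseDataBase(database);
    }
}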
When you're doing a batch insert like this, you might do better to set up a special action in your DB helper for it. Right now you are opening and closing the connection to the SQLite DB every time you insert a row, which slows you down significantly. For the batch process, set it up so that you maintain the connection for the whole import job. I think the reason it is faster in the emulator is that, while running, the emulator exists entirely in memory: although it intentionally slows down your CPU speed, file IO comes a lot faster.
In addition to connecting to the database just once, could the database connection be set not to commit the changes after each insert, but only once at the end of the batch? I tried to browse the Android dev docs but couldn't find exact instructions on how to do this (or whether it is already set up that way). On other platforms, setting the SQLite driver's AutoCommit to false and committing only at the end of the batch can improve insert speeds significantly.
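For what it's worth, the usual Android equivalent of turning off auto-commit is an explicit transaction that commits once at the end; here is a minimal sketch, where the allDetails collection and the column names are assumptions borrowed from the snippets above:
SQLiteDatabase database = dbHelper.getWritableDatabase();
database.beginTransaction(); // no per-row commit from here on
try {
    for (StoreDetails sd : allDetails) {
        ContentValues values = new ContentValues();
        values.put("_id", sd.getStoreId());
        values.put("StoreName", sd.getStoreName());
        values.put("StoreDescription", sd.getStoreDescription());
        database.insert("StoreDetails", null, values);
    }
    database.setTransactionSuccessful(); // everything commits in one shot...
} finally {
    database.endTransaction(); // ...here, or rolls back on failure
}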

Android SQLite query crashing when it takes too long?

I have an SQLite query in my Android app that seems to crash when it takes too long to execute. It crashes with a NullPointerException and tells me the line number...
When I put breakpoints around that line, I can see that the variable always gets filled, the app does not crash, and it does what it is supposed to.
So aside from having a phantom null pointer, it appears the problem is that the breakpoints slow things down, giving the query time to complete. Without breakpoints it always crashes, without fail.
Others here seem to have a similar problem, and I've read some things about SQLite taking an erratic amount of time to complete tasks, but this table should only ever have a few entries in it (the one I'm testing should only have three entries, 4 columns).
Suggestions on how to make it not crash? Perhaps put a thread wait inside the method that makes the query?
public void fetchItemsToRemove() throws SQLException {
    Cursor mCursor = mapDb.query(myMain_TABLE,
            new String[] { myOtherId, myCustomID, myDATE },
            null, null, null, null, null);
    if (mCursor.moveToFirst()) {
        do {
            /* taking "dates" that were stored as plain-text strings, and converting them to
             * Date objects in a particular format for comparison */
            String DateCompareOld = mCursor.getString(mCursor.getColumnIndex(myDATE));
            String DateCompareCurrent = "";
            Date newDate = new Date();
            DateCompareCurrent = newDate.toString();
            try {
                DateCompareOld = (String) DateCompareOld.subSequence(0, 10);
                DateCompareCurrent = (String) DateCompareCurrent.subSequence(0, 10);
                SimpleDateFormat dateType = new SimpleDateFormat("EEE MMM dd");
                Date convertDate = dateType.parse(DateCompareOld);
                newDate = dateType.parse(DateCompareCurrent);
                if (convertDate.compareTo(newDate) < 0) {
                    // remove unlim id
                    mapDb.delete(myMain_TABLE, myDATE + "=" + mCursor.getString(mCursor.getColumnIndex(myDATE)), null);
                }
            } catch (ParseException e) {
                // TODO Auto-generated catch block
                e.printStackTrace();
            }
        } while (mCursor.moveToNext());
        mCursor.close();
    } else {
        mCursor.close();
    }
}
Now "line 342" where it crashes with NullPointerException is DateCompareOld = (String)DateCompareOld.subSequence(0, 10); where it gets a subsequence of the string. If it gets here and is null, this means the string was never filled at String DateCompareOld = mCursor.getString(mCursor.getColumnIndex(myDATE));
as if the query just got skipped because it took too long. Do note this is in a while loop, and I have done tests to make sure that the mCursor never goes out of bounds.
You're deleting rows from a DB table while iterating over the results of a query on that same table. That sounds a bit dangerous.
Try building a list, inside the loop, of things to be deleted, and then delete them in a single pass after the loop finishes, as sketched below.
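A minimal sketch of that restructuring, based on the asker's fetchItemsToRemove(); isExpired() is a hypothetical helper wrapping the date comparison from the original code:
List<String> datesToDelete = new ArrayList<String>();
Cursor mCursor = mapDb.query(myMain_TABLE,
        new String[] { myOtherId, myCustomID, myDATE },
        null, null, null, null, null);
try {
    while (mCursor.moveToNext()) {
        String storedDate = mCursor.getString(mCursor.getColumnIndex(myDATE));
        if (isExpired(storedDate)) { // hypothetical: the SimpleDateFormat comparison
            datesToDelete.add(storedDate);
        }
    }
} finally {
    mCursor.close();
}
// Delete only after the cursor is closed, using bound arguments.
for (String date : datesToDelete) {
    mapDb.delete(myMain_TABLE, myDATE + " = ?", new String[] { date });
}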
Also, wrap the entire thing in a DB transaction. When you're modifying the DB in a loop, that can make a huge difference to performance.
EDIT: a quick explanation of transactions:
A transaction allows you to combine a bunch of DB queries/modifications into a single atomic operation which either succeeds or fails. It's primarily a safety mechanism so your DB isn't stuck in an inconsistent state if something goes wrong half way through, but it also means that any modifications are committed to the DB's file storage in a single shot rather than one at a time, which is much faster.
You start the transaction at the start of your function:
public void fetchItemsToRemove() throws SQLException{
db.beginTransaction();
Cursor mCursor = ....
You set it as successful if the whole function completes without errors. This probably means you want to remove the inner try/catch and have an outer try/catch enclosing the loop. Then at the end of the try{ }, you can assume nothing's gone wrong, so you call:
db.setTransactionSuccessful();
Then, in a finally clause, to make sure you always close the transaction whether it's successful or otherwise:
db.endTransaction();

Sqlite Out of Memory when preparing update statement

I have one problem with my application.
I create an AsyncTask that downloads a list of files from the server. When all the files are downloaded, I update the database, but when I run the update query it gives me the error below:
Failure 21 (out of memory) on 0x0 when preparing update
Can anyone tell me why this error occurs?
Sample Code
public void setStatus(int index) {
    try {
        db.OpenDatabase();
        db.updateStatus(id.get(index), 1);
        db.closeDatabase();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
The above function is called from the AsyncTask:
public void updateStatus(int id, int status) {
    try {
        db.execSQL("update sample set status =" + status + " where id = " + id);
    } catch (Exception e) {
        e.printStackTrace();
    }
}
This may not be related to the database per se, but rather to the fact that the memory (heap) is almost full and opening the database completely fills it up.
Remember that most handsets have 48 MB of heap or even less.
At some point while working I also got the same error. I used this link:
"Failure 21 (out of memory)" when performing some SQLite operations
It says this error occurs when you try to work on a closed DB. I looked back into my code and found that I was doing the same thing, and got it working afterwards.
I think you are also trying to work on a closed DB.
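If that is the case, a minimal guard before the update might look like this (assuming db is a plain SQLiteDatabase and dbHelper an SQLiteOpenHelper, which simplifies the asker's wrapper class):
// Reopen the database if an earlier step closed it.
if (db == null || !db.isOpen()) {
    db = dbHelper.getWritableDatabase();
}
db.execSQL("update sample set status = ? where id = ?",
        new Object[] { status, id });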
Have you tried to use the update() method instead of execSQL()?
public void updateStatus(int id, int status) {
    try {
        ContentValues values = new ContentValues();
        values.put("status", status);
        db.update("sample", values, "id = ?", new String[] { Integer.toString(id) });
    } catch (Exception e) {
        e.printStackTrace();
    }
}
I got an "out of memory" error (21) when I tried to call sqlite3_prepare() with a NULL database handle.
Check that your handle is valid and the database is open.
