SQLiteException: unknown error (code 0): Native could not create new byte[] - android
I'm getting this error when trying to query up to 30 objects. Each object has a byte[] field holding 100x100 ARGB_8888 bitmap data (100 × 100 pixels × 4 bytes ≈ 39 KB).
I'm using OrmLite version 4.45 on a Samsung GT-N8000 tablet (max heap size 64 MB).
Here's the stack trace:
android.database.sqlite.SQLiteException: unknown error (code 0): Native could not create new byte[]
at android.database.CursorWindow.nativeGetBlob(Native Method)
at android.database.CursorWindow.getBlob(CursorWindow.java:403)
at android.database.AbstractWindowedCursor.getBlob(AbstractWindowedCursor.java:45)
at com.j256.ormlite.android.AndroidDatabaseResults.getBytes(AndroidDatabaseResults.java:161)
at com.j256.ormlite.field.types.ByteArrayType.resultToSqlArg(ByteArrayType.java:41)
at com.j256.ormlite.field.BaseFieldConverter.resultToJava(BaseFieldConverter.java:24)
at com.j256.ormlite.field.FieldType.resultToJava(FieldType.java:798)
at com.j256.ormlite.stmt.mapped.BaseMappedQuery.mapRow(BaseMappedQuery.java:60)
at com.j256.ormlite.stmt.SelectIterator.getCurrent(SelectIterator.java:270)
at com.j256.ormlite.stmt.SelectIterator.nextThrow(SelectIterator.java:161)
at com.j256.ormlite.stmt.StatementExecutor.query(StatementExecutor.java:187)
at com.j256.ormlite.dao.BaseDaoImpl.query(BaseDaoImpl.java:263)
at com.j256.ormlite.dao.EagerForeignCollection.&lt;init&gt;(EagerForeignCollection.java:37)
at com.j256.ormlite.field.FieldType.buildForeignCollection(FieldType.java:781)
at com.j256.ormlite.stmt.mapped.BaseMappedQuery.mapRow(BaseMappedQuery.java:82)
at com.j256.ormlite.android.AndroidDatabaseConnection.queryForOne(AndroidDatabaseConnection.java:186)
at com.j256.ormlite.stmt.mapped.MappedQueryForId.execute(MappedQueryForId.java:38)
at com.j256.ormlite.field.FieldType.assignField(FieldType.java:540)
at com.j256.ormlite.stmt.mapped.BaseMappedQuery.mapRow(BaseMappedQuery.java:71)
at com.j256.ormlite.stmt.SelectIterator.getCurrent(SelectIterator.java:270)
at com.j256.ormlite.stmt.SelectIterator.nextThrow(SelectIterator.java:161)
at com.j256.ormlite.stmt.StatementExecutor.query(StatementExecutor.java:187)
at com.j256.ormlite.dao.BaseDaoImpl.query(BaseDaoImpl.java:263)
at com.j256.ormlite.stmt.QueryBuilder.query(QueryBuilder.java:319)
at com.j256.ormlite.stmt.Where.query(Where.java:485)
at com.j256.ormlite.dao.BaseDaoImpl.queryForEq(BaseDaoImpl.java:243)
Here's the logcat:
05-16 14:05:24.561: D/dalvikvm(4163): GC_CONCURRENT freed 1247K, 10% free 18046K/19911K, paused 11ms+3ms, total 30ms
05-16 14:05:24.561: D/dalvikvm(4163): WAIT_FOR_CONCURRENT_GC blocked 10ms
05-16 14:05:24.686: D/dalvikvm(4163): GC_CONCURRENT freed 119K, 4% free 19922K/20743K, paused 11ms+2ms, total 28ms
05-16 14:05:24.686: D/dalvikvm(4163): WAIT_FOR_CONCURRENT_GC blocked 15ms
... a whole ton of these ...
05-16 14:05:27.261: D/dalvikvm(4163): GC_CONCURRENT freed 109K, 2% free 62754K/63495K, paused 12ms+5ms, total 36ms
05-16 14:05:27.261: D/dalvikvm(4163): WAIT_FOR_CONCURRENT_GC blocked 20ms
05-16 14:05:27.366: I/dalvikvm-heap(4163): Clamp target GC heap from 65.738MB to 64.000MB
Is such fast growth of memory usage normal?
What do you think about splitting the query into chunks and explicitly calling System.gc() between those separate queries?
Thanks!
Is such fast growth of memory usage normal?
No, it isn't.
What do you think about splitting the query into chunks and explicitly calling System.gc() between those separate queries?
No, this most likely would not fix the issue. You need to resolve the underlying memory issue directly.
After looking at your code and entities (which you did not provide in your post), this is not an ORMLite issue but an entity problem.
You have a Gallery of Photos. Each Photo has a possibly large byte[] of image data, maybe 50+ KB. The problem is that the Gallery has an eager foreign collection of Photos:
@ForeignCollectionField(eager = true)
private ForeignCollection<Photo> photos;
And then each Photo has an auto-refreshed reference to its parent Gallery:
@DatabaseField(foreign = true, foreignAutoRefresh = true, columnName = GALLERY)
private Gallery gallery;
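For reference, here is a minimal sketch of the two entities as described. The overall shape matches your description, but the id fields, the byte[] field name, and the imports are my assumptions, since the full code was not posted:

import com.j256.ormlite.dao.ForeignCollection;
import com.j256.ormlite.field.DataType;
import com.j256.ormlite.field.DatabaseField;
import com.j256.ormlite.field.ForeignCollectionField;
import com.j256.ormlite.table.DatabaseTable;

// Gallery.java
@DatabaseTable
public class Gallery {
    @DatabaseField(generatedId = true)
    private int id;
    // eager: loading a Gallery immediately queries for all of its Photos
    @ForeignCollectionField(eager = true)
    private ForeignCollection<Photo> photos;
}

// Photo.java
@DatabaseTable
public class Photo {
    public static final String GALLERY = "gallery";
    @DatabaseField(generatedId = true)
    private int id;
    // ~39 KB of ARGB_8888 pixel data per row
    @DatabaseField(dataType = DataType.BYTE_ARRAY)
    private byte[] imageBytes;
    // auto-refresh: loading a Photo immediately re-queries for its Gallery
    @DatabaseField(foreign = true, foreignAutoRefresh = true, columnName = GALLERY)
    private Gallery gallery;
}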
This sets up an eager-fetch loop, which causes ORMLite to do something like the following:
Whenever ORMLite tries to load the Gallery into memory...
...it is asked to run another query and load all of the Photos associated with that Gallery into memory, because of the eager collection.
For each of those Photo instances, it is asked to run another query to get the associated Gallery into memory, because of the auto-refreshed parent.
And for that Gallery it is again asked to load all of its Photos into memory.
ORMLite actually has an escape hatch for this, but it still recurses 3 levels down.
ORMLite has no magic view into the Gallery and Photo instances, so it does not know that it could just attach the parent Gallery to the foreign field on each Photo. If you want that behavior, see the ObjectCache solution below.
There are a number of ways you can fix this:
I'd recommend not using foreignAutoRefresh = true. If you need the Gallery for a Photo, you can get it by calling galleryDao.refresh(photo.getGallery()). This breaks the chain of eager fetches (see the combined sketch after the ObjectCache code below).
You could also make the photos collection lazy instead of eager. A lazily loaded collection goes to the database more often but likewise breaks the cycle.
If you really must have both the eager collection and the auto-refreshing, then the best solution would be to introduce an ObjectCache. Each of the DAOs then looks in the cache and returns the same entity instance even with the eager-fetch loop going on, though you may have to clear the cache often:
// enable an object cache on both DAOs so repeated queries for the same row
// return the same in-memory instance instead of a fresh copy
galleryDao = getHelper().getRuntimeExceptionDao(Gallery.class);
galleryDao.setObjectCache(true);
photoDao = getHelper().getRuntimeExceptionDao(Photo.class);
photoDao.setObjectCache(true);
...
// if you load a lot of objects into memory, you must clear the cache often
galleryDao.clearObjectCache();
photoDao.clearObjectCache();
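For completeness, a minimal sketch of the first two fixes combined (entity and DAO names as above; this is an illustration, not the only way to wire it up):

// Photo.java: plain foreign field, without foreignAutoRefresh
@DatabaseField(foreign = true, columnName = GALLERY)
private Gallery gallery;

// Gallery.java: lazy collection, only queried when it is iterated
@ForeignCollectionField
private ForeignCollection<Photo> photos;

// caller: refresh the parent Gallery only when it is actually needed
Photo photo = photoDao.queryForId(photoId);
galleryDao.refresh(photo.getGallery());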
Related
'sx' acronym meaning in heap dump
The 'sx' entry in my heap dump appears to be holding the largest amount of retained memory; what does it stand for? I've been unable to find anything online about heap-dump acronyms like sx, td, sy, ux, ss, etc.
Window is full! Retrieving bitmap from SQLite database
I am trying to store the image captured by the device camera in an SQLite database as a BLOB, and then retrieve it. Storing works, but when I retrieve images from the DB I get an error because of the cursor:
W/CursorWindow: Window is full: requested allocation 707903 bytes, free space 680839 bytes, window size 2097152 bytes
After a long time looking for a solution I couldn't find one; I found only a lot of questions about the same issue with no solid answer. I could store the URI of the image instead, as discussed in "Android: Cursor Window is full". But then what happens if the user deletes the image from phone memory or the SD card? It's also not a good idea to just ignore the warning, as answered here: https://stackoverflow.com/a/37035510/8258166. So, what should I do? Many thanks in advance!
In Android Memory Monitor, what is the difference between `total count` and `heap count`?
In the Google docs it says that heap count means "Number of instances in the selected heap", while total count means "Total number of instances outstanding". What is the selected heap? The total count is always larger than the heap count, so where are the other objects, besides those in the heap?
There are 3 heaps in Android: App, Image, and Zygote. Total Count is the total across all 3 heaps; Heap Count is the number of objects in the currently selected heap. See https://developer.android.com/studio/profile/am-hprof.html
Total count also includes instances on the running stack; heap count covers only those in the heap.
Cursor window: window is full
I've created a ListView populated by the data returned from a query. It works, but in LogCat I get the message:
Cursor Window: Window is full: requested allocation 444 bytes, free space 363 bytes, window size 2097152 bytes
and it takes a couple of minutes to load and display the ListView. My query returns about 3700 rows of String/Int/Double values, each with 30 columns; no images or unusual datatypes. What exactly does this message mean, and how can I avoid it? Can performance be improved by changing this cursor window?
From my experience this means that the query results are too large for the cursor's window, so it requests more memory. Most of the time this request is honored, but on low-end devices it can throw exceptions. I don't know the specifics of the app in question, but you referred to a ListView. A ListView cannot show 3700 rows at once, so an endless (on-demand) list could help load the data as needed. My advice is to break the query up into multiple queries that return smaller results, close each one before running the next, and combine the results after each successive query; a sketch follows.
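A minimal sketch of that chunking approach with plain Android SQLite; the table and column names here are made up for illustration:

import android.database.Cursor;
import android.database.sqlite.SQLiteDatabase;
import java.util.ArrayList;
import java.util.List;

class ChunkedQuery {
    // page through a large result set in fixed-size chunks, closing each
    // cursor before the next query so only one window is alive at a time
    static List<String> loadNamesInChunks(SQLiteDatabase db) {
        final int chunk = 500; // rows per query; tune to the row width
        List<String> names = new ArrayList<>();
        for (int offset = 0; ; offset += chunk) {
            Cursor c = db.rawQuery(
                    "SELECT name FROM items ORDER BY id LIMIT " + chunk + " OFFSET " + offset,
                    null);
            try {
                if (!c.moveToFirst()) {
                    return names; // no more rows
                }
                do {
                    names.add(c.getString(0));
                } while (c.moveToNext());
            } finally {
                c.close();
            }
        }
    }
}

One design note: LIMIT/OFFSET paging re-scans the skipped rows on every query, so for very large tables keying on the last seen id (WHERE id > ?) scales better.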
Short version: after some investigation, it appears that this message is part of normal operation and not a cause for concern. It is logged at the "Warning" level, but I think that is simply overeager.

Longer version: this is (as clearly labelled) a "windowed" cursor, which means that old records are discarded as new records are obtained. In the simplest form, such a "window" implementation might contain up to N rows total, possibly with some read-ahead. In this implementation, however, the window is defined by its total size in bytes instead; the number of rows kept in memory is based on how many fit in the overall window, and varies at runtime (this could perhaps be considered more of a "buffered" cursor than a "windowed" cursor).

As a buffered implementation with a (soft-?)capped size, the earliest rows are discarded only when the buffer is too full to accommodate the next row, at which point one or more older rows are dropped. This "keep allocating rows as needed until there is no more room, then free the oldest record(s) in the buffer and try again" process appears to be completely normal and expected, as part of keeping the memory footprint confined. I base this conclusion on reading the source, combined with some inference: https://android.googlesource.com/platform/frameworks/base/+/master/libs/androidfw/CursorWindow.cpp

So why are people talking about images and other massive LOBs? If the size of a single row is larger than the entire "window" (buffer), then this strategy breaks down and you have an actual problem.

This was the message @op was getting:
Cursor Window: Window is full: requested allocation 444 bytes, free space 363 bytes, window size 2097152 bytes
This was the message @vovahost was getting:
CursorWindow: Window is full: requested allocation 2202504 bytes, free space 2076560 bytes, window size 2097152 bytes

In the first case the requested allocation is much smaller than the window size. I expect that similar messages are issued repeatedly, with the same window size and varying requested allocation sizes; each time one is printed, memory is freed from the larger window and new allocations are made. This is normal, healthy operation. In the second case the requested allocation exceeds the overall window size. This is an actual problem, requiring the data to be stored and read in a more streamable way. The difference is "length" (total number of rows) versus "width" (memory cost of the largest single row); the former (@tirrel's issue) is not a problem, but the latter (@vovahost's issue) is.
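As a side note, since API 28 the window size can actually be chosen per cursor, which also gives a way to experiment with the behavior described above. A minimal fragment, assuming db is an open SQLiteDatabase (android.database.CursorWindow, android.database.sqlite.SQLiteCursor, android.os.Build); the 8 MB figure and the query are arbitrary examples:

// API 28+: replace the default 2 MB window with a larger one so wider rows fit
Cursor cursor = db.rawQuery("SELECT data FROM items", null);
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.P && cursor instanceof SQLiteCursor) {
    ((SQLiteCursor) cursor).setWindow(new CursorWindow("large-window", 8 * 1024 * 1024));
}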
I also ran into this problem. In my case I had saved a 2.2 MB image in the database. When loading the data with Cursor.getBlob() I would see this message in the log:
CursorWindow: Window is full: requested allocation 2202504 bytes, free space 2076560 bytes, window size 2097152 bytes
After getting this message, any data (String, number, etc.) I tried to retrieve for successive rows was returned as null, without any errors. The solution was to remove the 2.2 MB blob. I don't know if it's possible to load bigger blobs from the database on Android.
Also note that changing the window incurs IPC overhead. So if the cursor holds a large number of items and backs a ListView, fast scrolling forces frequent window changes and hence frequent IPCs. On a loaded system this can even result in an ANR.
Android Lucene OutOfMemoryException
I have a Lucene index containing 50571 documents from 1740 books, and two processes that create it. The first creates the index on the device, document by document; this is very slow. The other creates a per-book index on the server (in exactly the same way I create it on the device), then downloads it and merges it into the master index; this builds the master index much more quickly. Creating the index works fine either way. The problem is that when I search the downloaded-and-merged index I get an OutOfMemoryException, but when I search the index created on the device I don't. I built the index book by book (download and merge) and searched after each book was added; based on that, the OutOfMemoryException starts at around book 450. What is causing me to run out of memory?
Lucene is a memory hog. When merging indices together it stores the entire set of indices twice. As the Lucene documentation puts it: "Note that this requires temporary free space in the Directory up to 2X the sum of all input indexes (including the starting index). If readers/searchers are open against the starting index, then temporary free space required will be higher by the size of the starting index." That is a lot of space. To mitigate this we have to shrink the index by calling forceMerge(int) on the index writer. It is a slow process, but it does shrink the index. I call it with an argument of 1 every time there are 50 or more files in the index directory; a sketch follows.
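A minimal sketch of that workaround, assuming a Lucene 3.5+ org.apache.lucene.index.IndexWriter (where forceMerge(int) replaced optimize()); directory and writerConfig stand in for whatever setup the app already has:

// 'directory' and 'writerConfig' are assumed to exist elsewhere in the app
IndexWriter writer = new IndexWriter(directory, writerConfig);
if (directory.listAll().length >= 50) {
    // merge everything down to one segment: slow, but shrinks the index
    writer.forceMerge(1);
}
writer.close();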