'sx' acronym meaning in heap dump - android

The 'sx' entry in my heap dump appears to be holding the largest amount of retained memory; what does it stand for? I've been unable to find anything online for heap acronyms like sx, td, sy, ux, ss, etc.

Related

In the Android memory monitor, what is the difference between `total count` and `heap count`?

The Google docs say that heap count means "Number of instances in the selected heap", while total count means "Total number of instances outstanding". What is the selected heap? The total count is always larger than the heap count, so where are the other objects besides those in the heap?
There are 3 heaps in Android:
App
Image
Zygote
Total Count is the total across all 3 heaps. Heap Count is the number of objects in the current selected heap.
See https://developer.android.com/studio/profile/am-hprof.html
Total count also includes the instances referenced from the running stack; heap count covers only the instances in the selected heap.

Cursor window: window is full

I've created a ListView populated by the data returned from a query.
It works, but in the LogCat I've got the message:
Cursor Window: Window is full: requested allocation 444 bytes, free space 363 bytes, window size 2097152 bytes
and it takes a couple of minutes to load / visualize the ListView.
My query returns about 3700 rows of String/Int/Double values, each with 30 columns; no images or unusual datatypes.
What does this message mean exactly, and how can I avoid it?
Can I improve performance by changing this CursorWindow?
From my experience this means that the query results are too large for the cursor's window, so it requests more memory. Most of the time this request is honored, but on low-end devices it could throw exceptions.
I don't know the specifics of the app in question, but you referred to a ListView. A ListView cannot show 3700 rows at once, so an endless (lazily loading) list could help to load the data on demand.
My advice is to break the query up into multiple queries that return smaller results, close each cursor before running the next query, and combine the results as you go; a sketch of this approach follows.
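For illustration, the paging could look roughly like this (a sketch only; the table name, ordering column and page size are assumptions, and the row mapping is simplified to strings):

import java.util.ArrayList;
import java.util.List;
import android.database.Cursor;
import android.database.sqlite.SQLiteDatabase;

// Page through a large result set with LIMIT/OFFSET so each cursor stays
// well below the ~2 MB CursorWindow; every page's cursor is closed before
// the next query runs.
List<String[]> loadAllRows(SQLiteDatabase db) {
    final int pageSize = 500;
    List<String[]> results = new ArrayList<String[]>();
    for (int offset = 0; ; offset += pageSize) {
        Cursor c = db.rawQuery(
                "SELECT * FROM my_table ORDER BY _id LIMIT ? OFFSET ?",
                new String[] { String.valueOf(pageSize), String.valueOf(offset) });
        try {
            if (!c.moveToFirst()) {
                return results;              // no more rows
            }
            do {
                String[] row = new String[c.getColumnCount()];
                for (int i = 0; i < row.length; i++) {
                    row[i] = c.getString(i); // simplistic: read every column as text
                }
                results.add(row);
            } while (c.moveToNext());
        } finally {
            c.close();                       // release this window before the next page
        }
    }
}

Collecting all 3700 rows into memory this way still costs RAM; with a ListView, an adapter that loads pages on demand would be preferable.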
Short version:
After some investigation, it appears that this message is part of normal operation, and not a cause for concern. It is logged at the "Warning" level, but I think this is simply overeager.
Longer version:
This is (clearly labelled as) a "Windowed" cursor, which means that old records will be discarded as new records are obtained. In the simplest form, such a "window" implementation may contain up to N rows total, possibly with some read-ahead. In this implementation, however, the window size is defined instead by the total size. The number of rows kept in memory is instead based on how many would fit in the overall window, and will vary at runtime (This could perhaps be considered more of a "buffered" Cursor than "windowed" Cursor).
As a buffered implementation with a (soft-?)capped size, the earliest rows will be discarded only when the buffer is too full to accommodate the next row. In this case, 1 or more older rows are dropped. This "keep allocating rows as-needed until we can no longer have room for more, at which point we free up the oldest record(s) in our buffer and try again" process appears to be completely normal and expected, as a normal part of the process to keep the memory space confined.
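The behavior described above can be modeled with the following toy sketch (an illustration only, not the real CursorWindow code; class and method names are made up):

import java.util.ArrayDeque;
import java.util.Deque;

// Toy model of a "buffered window": rows are kept until the byte budget is
// exceeded, then the oldest rows are evicted to make room for the new one.
class ToyWindow {
    private final int capacityBytes;
    private final Deque<byte[]> rows = new ArrayDeque<byte[]>();
    private int usedBytes = 0;

    ToyWindow(int capacityBytes) {
        this.capacityBytes = capacityBytes;
    }

    boolean putRow(byte[] row) {
        if (row.length > capacityBytes) {
            return false;                                // a single row larger than the window: real trouble
        }
        while (usedBytes + row.length > capacityBytes) {
            usedBytes -= rows.removeFirst().length;      // evict oldest row (the "Window is full" case)
        }
        rows.addLast(row);
        usedBytes += row.length;
        return true;
    }
}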
I based this conclusion on reading the source here, combined with some inference:
https://android.googlesource.com/platform/frameworks/base/+/master/libs/androidfw/CursorWindow.cpp
Why are people talking about images and other massive LOBs?
If the size of a single row is larger than the entire "window" (buffer), then this strategy breaks down and you have an actual problem.
This was the message the OP was getting:
Cursor Window: Window is full: requested allocation 444 bytes, free space 363 bytes, window size 2097152 bytes
This was the message @vovahost was getting:
CursorWindow: Window is full: requested allocation 2202504 bytes, free space 2076560 bytes, window size 2097152 bytes
In the first case, the requested allocation is much smaller than the window size. I expect that similar messages are issued repeatedly, with the same window size and varying requested allocation sizes. Each time this is printed, memory is freed from the larger window and new allocations are made. This is normal, healthy operation.
In the second case, the requested allocation exceeds the overall window size. This is an actual problem, and it requires storing and reading the data in a more streamable way.
The difference is "length" (total number of rows) vs. "width" (memory cost of the largest single row). The former (@tirrel's issue) is not a problem, but the latter (@vovahost's issue) is.
I also got this problem. In my case I saved a 2.2 MB image in the database. When loading the data from the database using Cursor.getBlob() I would see this message in the log:
CursorWindow: Window is full: requested allocation 2202504 bytes, free space 2076560 bytes, window size 2097152 bytes
After this message appears, any data (String, number, etc.) I try to retrieve for successive rows is returned as null, without any errors.
The solution was to remove the 2.2 MB blob. I don't know if it's possible to load bigger blobs from the database in Android.
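One workaround worth noting (my own suggestion, not something from the answers above) is to read a large blob in slices with SQLite's substr(), so that no single row handed to the CursorWindow exceeds the window size. A rough sketch, with placeholder table and column names:

import java.io.ByteArrayOutputStream;
import android.database.Cursor;
import android.database.sqlite.SQLiteDatabase;

// Read a blob column in fixed-size slices using SQLite's substr(), so no
// single row returned through the CursorWindow exceeds the window size.
// "photos", "data" and "_id" are placeholder names.
byte[] readBlobInChunks(SQLiteDatabase db, long rowId) {
    final int chunkSize = 256 * 1024;                  // 256 KB per slice
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    for (int offset = 1; ; offset += chunkSize) {      // substr() is 1-based
        Cursor c = db.rawQuery(
                "SELECT substr(data, ?, ?) FROM photos WHERE _id = ?",
                new String[] { String.valueOf(offset),
                               String.valueOf(chunkSize),
                               String.valueOf(rowId) });
        try {
            if (!c.moveToFirst()) {
                break;
            }
            byte[] chunk = c.getBlob(0);
            if (chunk == null || chunk.length == 0) {
                break;                                 // past the end of the blob
            }
            out.write(chunk, 0, chunk.length);
            if (chunk.length < chunkSize) {
                break;                                 // last (partial) slice
            }
        } finally {
            c.close();
        }
    }
    return out.toByteArray();
}

The usual recommendation, though, is to store large images on the filesystem and keep only the file path in the database.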
Also, note that changing the window has the overhead of an IPC call.
So, if the cursor has a large number of items and is used with a ListView, fast navigation results in window changes and hence frequent IPC, which might cause an ANR if the system is loaded.

Smallest object size in Android/Dalvik

DDMS shows the smallest size of an object (i.e. an empty object) is 16 bytes in VM Heap tab. But struct Object is only 8 bytes in dalvik source code vm/oo/Object.h. Why is there a difference? How is that related to alignment issues?
Short answer: 8 bytes of overhead for any Object (class pointer + lock word), plus 4 or 8 bytes of overhead for the underlying dlmalloc-based heap allocation mechanism. All objects are aligned on 8-byte boundaries, so a 12-byte object will have 4 bytes of padding.
Longer answer.
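As a rough illustration of that arithmetic (the field layouts and per-class numbers below are my own assumptions following the short answer, not measured values):

// Size accounting on Dalvik per the short answer: 8-byte object header
// (class pointer + lock word), plus 4-8 bytes of dlmalloc bookkeeping,
// rounded up to an 8-byte allocation boundary.

class Empty {
    // 8 (header) + ~4 (allocator) = 12 -> rounded up to 16 bytes, as DDMS reports
}

class OneInt {
    int value;
    // 8 (header) + 4 (field) + ~4 (allocator) = 16 -> 16 bytes
}

class ThreeInts {
    int a, b, c;
    // 8 (header) + 12 (fields) + ~4 (allocator) = 24 -> 24 bytes
}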

SQLiteException: unknown error (code 0): Native could not create new byte[]

I'm getting this error when trying to query up to 30 objects; each object has a byte[] field holding 100x100 ARGB_8888 bitmap data, roughly 39 KB.
I'm using ORMLite version 4.45 on a Samsung GT-N8000 tablet (max heap size 64 MB).
Here's stacktrace:
android.database.sqlite.SQLiteException: unknown error (code 0): Native could not create new byte[]
at android.database.CursorWindow.nativeGetBlob(Native Method)
at android.database.CursorWindow.getBlob(CursorWindow.java:403)
at android.database.AbstractWindowedCursor.getBlob(AbstractWindowedCursor.java:45)
at com.j256.ormlite.android.AndroidDatabaseResults.getBytes(AndroidDatabaseResults.java:161)
at com.j256.ormlite.field.types.ByteArrayType.resultToSqlArg(ByteArrayType.java:41)
at com.j256.ormlite.field.BaseFieldConverter.resultToJava(BaseFieldConverter.java:24)
at com.j256.ormlite.field.FieldType.resultToJava(FieldType.java:798)
at com.j256.ormlite.stmt.mapped.BaseMappedQuery.mapRow(BaseMappedQuery.java:60)
at com.j256.ormlite.stmt.SelectIterator.getCurrent(SelectIterator.java:270)
at com.j256.ormlite.stmt.SelectIterator.nextThrow(SelectIterator.java:161)
at com.j256.ormlite.stmt.StatementExecutor.query(StatementExecutor.java:187)
at com.j256.ormlite.dao.BaseDaoImpl.query(BaseDaoImpl.java:263)
at com.j256.ormlite.dao.EagerForeignCollection.<init>(EagerForeignCollection.java:37)
at com.j256.ormlite.field.FieldType.buildForeignCollection(FieldType.java:781)
at com.j256.ormlite.stmt.mapped.BaseMappedQuery.mapRow(BaseMappedQuery.java:82)
at com.j256.ormlite.android.AndroidDatabaseConnection.queryForOne(AndroidDatabaseConnection.java:186)
at com.j256.ormlite.stmt.mapped.MappedQueryForId.execute(MappedQueryForId.java:38)
at com.j256.ormlite.field.FieldType.assignField(FieldType.java:540)
at com.j256.ormlite.stmt.mapped.BaseMappedQuery.mapRow(BaseMappedQuery.java:71)
at com.j256.ormlite.stmt.SelectIterator.getCurrent(SelectIterator.java:270)
at com.j256.ormlite.stmt.SelectIterator.nextThrow(SelectIterator.java:161)
at com.j256.ormlite.stmt.StatementExecutor.query(StatementExecutor.java:187)
at com.j256.ormlite.dao.BaseDaoImpl.query(BaseDaoImpl.java:263)
at com.j256.ormlite.stmt.QueryBuilder.query(QueryBuilder.java:319)
at com.j256.ormlite.stmt.Where.query(Where.java:485)
at com.j256.ormlite.dao.BaseDaoImpl.queryForEq(BaseDaoImpl.java:243)
here's logcat:
05-16 14:05:24.561: D/dalvikvm(4163): GC_CONCURRENT freed 1247K, 10% free 18046K/19911K, paused 11ms+3ms, total 30ms
05-16 14:05:24.561: D/dalvikvm(4163): WAIT_FOR_CONCURRENT_GC blocked 10ms
05-16 14:05:24.686: D/dalvikvm(4163): GC_CONCURRENT freed 119K, 4% free 19922K/20743K, paused 11ms+2ms, total 28ms
05-16 14:05:24.686: D/dalvikvm(4163): WAIT_FOR_CONCURRENT_GC blocked 15ms
... whole ton of these
05-16 14:05:27.261: D/dalvikvm(4163): GC_CONCURRENT freed 109K, 2% free 62754K/63495K, paused 12ms+5ms, total 36ms
05-16 14:05:27.261: D/dalvikvm(4163): WAIT_FOR_CONCURRENT_GC blocked 20ms
05-16 14:05:27.366: I/dalvikvm-heap(4163): Clamp target GC heap from 65.738MB to 64.000MB
Is such fast growth of memory usage normal?
What do you think about splitting query into chunks and explicitly calling System.gc() between those separate queries?
Thanks!
Is such fast growth of memory usage normal?
No it isn't.
What do you think about splitting query into chunks and explicitly calling System.gc() between those separate queries?
No, this most likely would not fix the issue. You need to resolve the underlying memory issue directly.
After looking at your code and the entities (which you did not provide in your post), this is not an ORMLite issue but an entity problem.
You have a Gallery of Photos. Each photo has a possibly large array of byte image data -- maybe 50+k. The problem is that the Gallery has an eager foreign collection of Photos:
@ForeignCollectionField(eager = true)
private ForeignCollection<Photo> photos;
And then each Photo has an auto-refreshed version of its parent Gallery.
@DatabaseField(foreign = true, foreignAutoRefresh = true, columnName = GALLERY)
private Gallery gallery;
This sets up an eager fetch loop which causes ORMLite to do something like the following:
Whenever ORMLite tries to load the Gallery into memory...
it is being asked to do another query and load all of the photos associated with the Gallery into memory because of the eager collection.
For each of those Photo instances, it is being asked to do another query to get the associated Gallery into memory because of the auto-refreshed parent.
And for that Gallery it is asked to load all of the Photos into memory.
... ORMLite actually has an escape hatch, but it still does this 3 levels down.
ORMLite has no magic view of the Gallery and Photo instances, so it attaches the parent Gallery to the foreign field in the Photos. If you want this behavior, then see the ObjectCache solution below.
There are a number of ways you can fix this:
I'd recommend not using foreignAutoRefresh = true. If you need the Gallery of a photo, you can get it by calling galleryDao.refresh(photo.getGallery()). This breaks the chain of eager fetches (see the sketch after the ObjectCache code below).
You could also make the photos collection not be eager. A lazily loaded collection would go to the database more often but would also break the cycle.
If you really must have both the eager collection and the refreshing, then the best solution would be to introduce an ObjectCache. You may have to clear the cache often, but each of the DAOs will then look in the cache and return the same entity instance even with the eager fetch loop going on.
galleryDao = getHelper().getRuntimeExceptionDao(Gallery.class);
galleryDao.setObjectCache(true);
photoDao = getHelper().getRuntimeExceptionDao(Photo.class);
photoDao.setObjectCache(true);
...
// if you load a lot of objects into memory, you must clear the cache often
galleryDao.clearObjectCache();
photoDao.clearObjectCache();
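For completeness, a sketch of the first two options (the field and getter names are assumed to match the snippets above; this is illustrative rather than code from the question):

// Option: make the photos collection lazy instead of eager.
@ForeignCollectionField(eager = false)
private ForeignCollection<Photo> photos;

// Option: drop foreignAutoRefresh so loading a Photo no longer pulls in
// the whole parent Gallery automatically.
@DatabaseField(foreign = true, columnName = GALLERY)
private Gallery gallery;

// ...later, only when the full Gallery of a Photo is actually needed:
Gallery gallery = photo.getGallery();   // at this point only its id field is set
galleryDao.refresh(gallery);            // loads the rest of the Gallery's fields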

Android Bitmap Pixels - write directly to file?

Background:
The goal is to write a rather large (at least 2048 x 2048 pixels) image file with OpenGL rendered data.
Today I first use glReadPixels in order to get the 32-bit (argb8888) pixel data into an int array.
Then I copy the data into a new short array, converting the 32-bit ARGB values into 16-bit (RGB565) values. At this point I also turn the image upside down and reorder the color channels to make the OpenGL image data compatible with Android bitmap data (different row order and color channel order); the conversion is sketched below.
Finally I create a Bitmap instance and call .copyPixelsFromBuffer(Buffer b) so that I can save it to disk as a PNG file.
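That conversion step looks roughly like this (a sketch; the method and variable names are mine, and the channel extraction assumes the ints hold ARGB after any RGBA byte-order fix-up from glReadPixels):

// Convert ARGB_8888 ints to RGB565 shorts while flipping the image vertically.
short[] toRgb565Flipped(int[] argb, int width, int height) {
    short[] out = new short[width * height];
    for (int y = 0; y < height; y++) {
        int srcRow = (height - 1 - y) * width;   // OpenGL rows are bottom-up
        int dstRow = y * width;
        for (int x = 0; x < width; x++) {
            int p = argb[srcRow + x];
            int r = (p >> 16) & 0xFF;
            int g = (p >> 8) & 0xFF;
            int b = p & 0xFF;
            out[dstRow + x] = (short) (((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
        }
    }
    return out;
}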
However, I want to use memory more efficiently in order to avoid out-of-memory crashes on some phones.
Question:
Can I skip the first transformation from int[] to short[] in some way (and avoid allocating a new array for the pixel data)? Maybe just use byte arrays / buffers and write the converted pixels back into the same array I read from?
More importantly: can I skip the Bitmap creation (this is where the program crashes) and somehow write the data directly to disk as a working image file (avoiding allocating the pixel data again in the Bitmap object)?
EDIT: If I could write the data directly to a file, maybe I wouldn't need to convert to 16-bit pixel data, depending on the file size and how fast the file can be read back into memory at a later point.
I'm not sure whether this helps, but the PNGJ library allows you to write a PNG sequentially, line by line. If memory usage is your primary concern (and if you can access the pixel values in the order of the final PNG file from the rendered data), it could be useful.
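A line-by-line write with PNGJ might look roughly like the sketch below. This assumes PNGJ's PngWriter / ImageInfo / ImageLineInt classes; for brevity the whole ARGB int[] is still in memory here, whereas in practice you would fetch the pixels one row at a time (e.g. with row-wise glReadPixels calls):

import java.io.FileOutputStream;
import ar.com.hjg.pngj.ImageInfo;
import ar.com.hjg.pngj.ImageLineInt;
import ar.com.hjg.pngj.PngWriter;

// Writes a width x height RGB8 PNG one scanline at a time, so only a single
// row of samples is buffered for the encoder at any moment.
void writePng(String path, int[] argb, int width, int height) throws Exception {
    ImageInfo info = new ImageInfo(width, height, 8, false);   // 8 bits per channel, no alpha
    PngWriter writer = new PngWriter(new FileOutputStream(path), info);
    ImageLineInt line = new ImageLineInt(info);
    int[] scanline = line.getScanline();                       // width * 3 samples (R, G, B)
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int p = argb[y * width + x];
            scanline[x * 3] = (p >> 16) & 0xFF;                // R
            scanline[x * 3 + 1] = (p >> 8) & 0xFF;             // G
            scanline[x * 3 + 2] = p & 0xFF;                    // B
        }
        writer.writeRow(line, y);
    }
    writer.end();                                              // finishes the PNG and closes the stream
}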
