Android RenderScript Allocation.USAGE_SHARED crash

I am getting a crash while running my app, which uses RenderScript. Unfortunately, logcat does not give any specific details.
b = Bitmap.createBitmap(ib.getWidth(), ib.getHeight(), ib.getConfig());
Allocation mInAllocation = Allocation.createFromBitmap(mRS, inBitmap,
        Allocation.MipmapControl.MIPMAP_NONE, Allocation.USAGE_SHARED);
Allocation mOutAllocation2 = Allocation.createFromBitmap(mRS, outBitmap,
        Allocation.MipmapControl.MIPMAP_NONE, Allocation.USAGE_SHARED);
...execute an algorithm from the .rs file and later do the below:
mOutAllocation2.copyTo(outBitmap);
The same code sequence runs perfectly fine when I use the USAGE_SCRIPT flag instead of USAGE_SHARED for mOutAllocation2.
Any help on why this could happen?
I read in the Android docs that if the allocation is of type USAGE_SHARED, the copy operation from the allocation to the bitmap (see above) is faster.
Currently, copies from allocation to bitmap take seconds for decently large images (8 MP and above).
I am currently using a Nexus 10 (Android 4.3).

First, you need to be using Allocation.USAGE_SCRIPT | Allocation.USAGE_SHARED. createFromBitmap(RenderScript, Bitmap) will set that for you when possible.
Second, if your copy times are that long, you're probably measuring script execution as well. Script execution is asynchronous, so the wall-clock time of copyTo(Bitmap) may include significantly more than just the copy.
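The asynchronous-execution point can be illustrated in plain Java, with a single-threaded executor standing in for the RenderScript command queue (a generic sketch, not RenderScript itself; `timeTrivialTaskBehind` is a made-up helper): a blocking call issued after queued asynchronous work also waits for that work, so timing it measures far more than the call's own cost.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class AsyncTimingSketch {
    // Returns the wall-clock ms spent waiting on a trivial task that was
    // queued behind `pendingMs` of earlier asynchronous work.
    static long timeTrivialTaskBehind(long pendingMs) {
        ExecutorService exec = Executors.newSingleThreadExecutor();
        // Stand-in for the asynchronous script execution:
        exec.submit(() -> {
            try { Thread.sleep(pendingMs); } catch (InterruptedException e) { }
        });
        long start = System.nanoTime();
        Future<?> trivial = exec.submit(() -> { }); // stand-in for the copy
        try { trivial.get(); } catch (Exception e) { throw new RuntimeException(e); }
        exec.shutdown();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        // The "copy" itself is essentially free, yet timing it sees the
        // pending ~200 ms of queued work too.
        System.out.println(timeTrivialTaskBehind(200) + " ms");
    }
}
```

In RenderScript terms: if you want the copy cost alone, let the pending work finish first (e.g. with RenderScript.finish()) before starting the clock on copyTo(Bitmap).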

I was facing the same problem and I resolved it. The issue was happening because my bitmap configuration was not Bitmap.Config.ARGB_8888; we should convert the bitmap to ARGB_8888 before applying the blur.
Bitmap U8_4Bitmap;
if (yourBitmap.getConfig() == Bitmap.Config.ARGB_8888) {
    U8_4Bitmap = yourBitmap;
} else {
    U8_4Bitmap = yourBitmap.copy(Bitmap.Config.ARGB_8888, true);
}

Related

Flutter C++ memory allocation causes jank on raster thread (Android NDK / Dart FFI)

I have a Flutter app which uses Dart FFI to connect to my custom C++ audio backend. There I allocate around 10 MB of memory in total for my audio buffers. Each buffer gets 10 MB / 84 of that memory, as I use 84 audio players. Here is the FFI flow:
C++ bridge:
extern "C" __attribute__((visibility("default"))) __attribute__((used))
void *loadMedia(char *filePath, int8_t *mediaLoadPointer, int64_t *currentPositionPtr, int8_t *mediaID) {
    LOGD("loadMedia %s", filePath);
    if (soundEngine == nullptr) {
        soundEngine = new SoundEngine();
    }
    return soundEngine->loadMedia(filePath, mediaLoadPointer, currentPositionPtr, mediaID);
}
In my sound engine I launch a C++ thread:
void loadMedia() {
    std::thread{startDecoderWorker,
                buffer,
    }.detach();
}

void startDecoderWorker(float *buffer) {
    buffer = new float[30000]; // 30000 might be wrong here; I entered a huge value just to showcase the problem (the 10MB / 84 calculation is not relevant here)
}
So here is the problem: I don't know why, but when I allocate memory with the new keyword, even inside a C++ thread, Flutter's raster thread janks and my Flutter UI drops lots of frames. This also shows in the performance overlay, which goes all red for 3 to 5 frames, each taking around 30-40 ms. Tested in profile mode.
Here is how I came to this conclusion:
If I return instantly from startDecoderWorker without running the memory-allocation code, there is zero jank. Everything is a smooth 60 fps, and the performance overlay doesn't show red bars.
The actual cause, after the discussion in the comments of the question, is not that the memory allocation is too slow; it lies somewhere else: in the calculations, which will be heavy if the allocation is big.
For details, please refer to the comments and discussion on the question ;)

Camera2 and Renderscript Allocations

I'm trying to use an Allocation created with the USAGE_IO_INPUT flag to get images from the camera. I'm setting it up as follows:
Type.Builder yuvType = new Type.Builder(mRS, Element.YUV(mRS));
yuvType.setYuvFormat(imageReaderFormat);
yuvType.setX(mCameraWidth).setY(mCameraHeight);
mAllocation = Allocation.createTyped(mRS, yuvType.create(),
        Allocation.MipmapControl.MIPMAP_NONE,
        Allocation.USAGE_IO_INPUT | Allocation.USAGE_SCRIPT);
mAllocation.setOnBufferAvailableListener(mOnBufferAllocationAvailable);
I'm adding the Allocation's surface to a preview session, and getting the callbacks to my very simple function:
public void onBufferAvailable(Allocation allocation) {
    allocation.ioReceive();
    // mPixels is properly initialised
    allocation.copyTo(mPixels);
}
This setup works on a Nexus 5X, but fails on a Nexus 4 running 5.1.1. When I call allocation.ioReceive() in the callback, I get a few warnings printed from the driver, and copying from the Allocation to a byte array results in garbage being copied.
W/Adreno-RS: <rsdVendorAllocationUnMapQCOM:394>: NOT Found allocation map for alloc 0xa1761000
W/Adreno-GSL: <gsl_ldd_control:427>: ioctl fd 25 code 0xc01c0915 (IOCTL_KGSL_MAP_USER_MEM) failed: errno 22 Invalid argument
W/Adreno-RS: <rsdVendorAllocationMapQCOM:288>: gsl_memory_map_fd failed -5 hostptr: 0xa0112000 sz: 0 offset: 0 flags: 0x10c0b00 alloc: 0xa1761000
I am running the camera in a background thread, although onBufferAvailable gets called in the "RSMessageThread".
Is this problem related to the way I am setting the Allocations and the Camera Preview, or is it a bug in the driver?
I see the same error message on a Samsung Galaxy S4 (smartphone), Android 5.0 (API 21), but not with the identical application using camera2 and RenderScript on a Samsung Galaxy Tab 5 (tablet), Android 5.1.1 (API 22). I'm assuming it is an early implementation problem on the device vendor's part.
Have you tried the official HDR Viewfinder example? If that works on the Nexus 4, you can study that example.
If not, you can try my implementation of YUV_420_888 to Bitmap, which uses a different approach: not via a YUV allocation, but via a byte allocation using the information from the three image planes.

Possible bug in Dalvik on Android 2.x during Bitmap allocation?

The phenomenon: first allocate some big memory blocks on the Java side until we catch an OutOfMemoryError, then free them all. Now weird things happen: loading even a small picture (e.g. width 200, height 200) via BitmapFactory.decodeXXX (decodeResource, decodeFile, ...) throws an OutOfMemoryError! But it's fine to allocate any big pure-Java object (e.g. new byte[2*1024*1024]) now!
Verifying: I wrote some simple code to verify the problem, which can be downloaded here. Press the "Alloc" button many times and you will get an OOM error, then press "Free All" - now the environment is set up. Press "LoadBitmap" and you will see it does not work on most Android 2.x phones. (In the emulator, though, it's fine, oddly.)
Digging deeper: I tried to dig into the Dalvik code to find out why, and found a possible bug in the function externalAllocPossible in HeapSource.c, which is called by dvmTrackExternalAllocation, which prints the "xxx-byte external allocation too large for this process" messages in logcat.
In externalAllocPossible it simply says:
if (currentHeapSize + hs->externalBytesAllocated + n <=
        heap->absoluteMaxSize)
{
    return true;
}
return false;
This means that once the native bitmap allocation size plus currentHeapSize exceeds the limit, native bitmap allocation will always fail. But currentHeapSize is NOT the actually allocated size: in this case it keeps the maximum size the heap was bumped up to before everything was freed, and it does not seem to decrease even when 91.3% of the Java objects' memory has been freed (set to null, with a GC triggered)!
Has anybody else met this problem too?
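To make that arithmetic concrete, here is a plain-Java sketch of the check with hypothetical numbers (all constants are invented for illustration): once the heap's high-water mark is counted, even a modest bitmap is refused although almost all of the Java heap is free.

```java
class ExternalAllocCheckSketch {
    // Mirrors the condition in externalAllocPossible:
    // currentHeapSize + externalBytesAllocated + n <= absoluteMaxSize
    static boolean externalAllocPossible(long currentHeapSize, long externalBytes,
                                         long n, long absoluteMax) {
        return currentHeapSize + externalBytes + n <= absoluteMax;
    }

    public static void main(String[] args) {
        long mb = 1024 * 1024;
        // Hypothetical: the Java heap was bumped up to 20 MB by the big
        // allocations, and its footprint never shrank after freeing them.
        boolean ok = externalAllocPossible(20 * mb, 2 * mb, 3 * mb, 24 * mb);
        System.out.println(ok); // false: the 3 MB bitmap is refused
    }
}
```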
I think this is correct. It's forcing the entire app (Java + native) to take no more than a certain amount of memory from the OS. To do this it has to use the current heap size, because that amount of memory is still allocated to the app (it is not returned to the OS when freed by GC, only returned to the application's memory pool).
At any rate, 2.x is long dead, so they're not going to fix it there. They did change how bitmaps store their memory in 3.x and 4.x. Your best bet is to allocate all the bitmaps you use first, and then allocate the large structures. Better yet, put those large structures into a fixed-size LRU cache, and instead of the grow-until-out-of-memory idea, load new data only when needed.
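A minimal sketch of the fixed-size LRU cache idea in plain Java, built on LinkedHashMap's access-order mode (on Android you could equally use android.util.LruCache; `SimpleLru` and `maxEntries` are names invented here):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Fixed-size LRU cache: the least recently *accessed* entry is evicted
// once the cache grows past maxEntries.
class SimpleLru<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    SimpleLru(int maxEntries) {
        super(16, 0.75f, true); // true = iterate in access order, not insertion order
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries;
    }
}
```

For example, with a capacity of 2, putting "a" and "b", touching "a", then putting "c" evicts "b" rather than "a".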
The class Bitmap has the recycle() method, described as:
Free the native object associated with this bitmap...
The reason behind this method is that there are two heaps: the Java heap and the heap used by native code. The GC only sees the Java heap; to the GC, a bitmap may look like a small object, because its size on the Java heap is small, despite the fact that it references a large memory block in the native heap.

How can Android leak memory while the App's memory remains stable

Android runs out of memory and restarts applications even though the amount of memory that Runtime reports them using remains nearly constant.
How can an Android phone run out of memory when the amount of memory its applications use remains nearly constant?
The following line of code returns a nearly constant value between 4 and 5 MB, but Android's App Manager shows that the running app is leaking about 1 megabyte per iteration, and after a while Android starts shutting down processes because memory is low.
(Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory())
I see similar results when I use Eclipse's Memory Analyzer Tool to view HPROF dumps, or Android's Heap tool to view the heap. I didn't see a huge block of memory being allocated in Android's Allocation Tracker either.
So, the big questions for me are:
1) How can the memory as reported in an app and the memory as reported by Android be out of sync?
2) I'll give full credit for pointers that get me past this memory leak in the test code. (I'm happy to provide the full test code.)
// This is an excerpt from the case.
import es.vocali.util.AESCrypt;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
...
byte[] data = getData(ONE_MEGABYTE);
AESCrypt aesCrypt = new AESCrypt(PASSWORD);
ByteArrayOutputStream baos = new ByteArrayOutputStream(ONE_MEGABYTE + ONE_KILOBYTE);

// Each iteration leaks approximately ONE_MEGABYTE
for (int i = 0; i < NUMBER_OF_ENCRYPTIONS; i++) {
    ByteArrayInputStream bais = new ByteArrayInputStream(data);
    aesCrypt.encrypt(2, bais, baos);
    bais.close();
    bais = null;
    baos.reset();
}
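One detail worth ruling out in a loop like this (it may or may not be the leak you are seeing): ByteArrayOutputStream.reset() only resets the logical count to zero; it never shrinks the internal buffer, which keeps its high-water-mark capacity. A small subclass exposing the protected `buf` field shows this:

```java
import java.io.ByteArrayOutputStream;

// Exposes the capacity of the protected backing array `buf`.
class InspectableBaos extends ByteArrayOutputStream {
    InspectableBaos(int size) { super(size); }

    int capacity() { return buf.length; }

    public static void main(String[] args) {
        InspectableBaos baos = new InspectableBaos(16);
        baos.write(new byte[1 << 20], 0, 1 << 20); // write 1 MB; buffer grows
        int grown = baos.capacity();
        baos.reset();                              // count -> 0, buffer kept
        System.out.println(grown == baos.capacity()); // true: capacity unchanged
    }
}
```

So a long-lived, repeatedly reset stream holds on to its largest-ever buffer; that alone is a constant overhead rather than a per-iteration leak, but it does make heap numbers harder to read.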

Running generated ARM machine code on Android gives UnsupportedOperationException with Java Bitmap objects

We ( http://www.mosync.com ) have compiled our ARM recompiler with the Android NDK; it takes our internal byte code and generates ARM machine code. When executing recompiled code we see an enormous increase in performance, with one small exception: we can't use any Java Bitmap operations.
The native system uses a function which takes care of all the calls to the Java side that the recompiled code makes. On the Java (Dalvik) side we then have bindings to Android features. There are no problems while recompiling the code or executing the machine code. The exact same source code works on Symbian and Windows Mobile 6.x, so the recompiler seems to generate correct ARM machine code.
Like I said, the problem is that we can't use Java Bitmap objects. We have verified that the parameters sent from the Java code are correct, and we have tried following the execution down into Android's own JNI systems. The problem is that we get an UnsupportedOperationException with "size must fit in 32 bits.". The problem seems consistent from Android 1.5 to 2.3. We haven't tried the recompiler on any Android 3 devices.
Is this a bug which other people have encountered? I guess other developers have done similar things.
I found the message in dalvik_system_VMRuntime.c:
/*
 * public native boolean trackExternalAllocation(long size)
 *
 * Asks the VM if <size> bytes can be allocated in an external heap.
 * This information may be used to limit the amount of memory available
 * to Dalvik threads. Returns false if the VM would rather that the caller
 * did not allocate that much memory. If the call returns false, the VM
 * will not update its internal counts.
 */
static void Dalvik_dalvik_system_VMRuntime_trackExternalAllocation(
    const u4* args, JValue* pResult)
{
    s8 longSize = GET_ARG_LONG(args, 1);

    /* Fit in 32 bits. */
    if (longSize < 0) {
        dvmThrowException("Ljava/lang/IllegalArgumentException;",
            "size must be positive");
        RETURN_VOID();
    } else if (longSize > INT_MAX) {
        dvmThrowException("Ljava/lang/UnsupportedOperationException;",
            "size must fit in 32 bits");
        RETURN_VOID();
    }
    RETURN_BOOLEAN(dvmTrackExternalAllocation((size_t)longSize));
}
This method is called, for example, from GraphicsJNI::setJavaPixelRef:
size_t size = size64.get32();
jlong jsize = size;  // the VM wants longs for the size
if (reportSizeToVM) {
    // SkDebugf("-------------- inform VM we've allocated %d bytes\n", size);
    bool r = env->CallBooleanMethod(gVMRuntime_singleton,
                                    gVMRuntime_trackExternalAllocationMethodID,
                                    jsize);
I would say it seems that the code you're calling is trying to allocate too big a size. If you show the actual Java call which fails, and the values of all the arguments you pass to it, it might be easier to find the reason.
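The bounds check from trackExternalAllocation above can be rendered in plain Java to see which sizes trigger which exception (`checkSize` is a made-up name; the messages are copied from the Dalvik source quoted above):

```java
class SizeCheckSketch {
    // Plain-Java rendering of the checks in Dalvik's trackExternalAllocation.
    static void checkSize(long size) {
        if (size < 0) {
            throw new IllegalArgumentException("size must be positive");
        } else if (size > Integer.MAX_VALUE) {
            throw new UnsupportedOperationException("size must fit in 32 bits");
        }
    }

    public static void main(String[] args) {
        checkSize(200L * 200L * 4L); // a 200x200 ARGB_8888 bitmap: fine
        try {
            checkSize(1L << 32);     // a garbage/corrupted size value
        } catch (UnsupportedOperationException e) {
            System.out.println(e.getMessage()); // size must fit in 32 bits
        }
    }
}
```

So hitting this exception means the jlong handed to trackExternalAllocation exceeded INT_MAX, which, for any sane bitmap, suggests a corrupted size value crossing the native/Java boundary rather than a genuinely huge allocation.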
I managed to find a workaround: when I wrap all the Bitmap.createBitmap calls inside Activity.runOnUiThread(), it works.
