RenderScript copy 3D allocation in rs file - android

RenderScript allocations can be 1D, 2D or 3D. Almost all functions support all three, but the copy functions are missing a 3D variant. According to the Android developer documentation, these are all the data access functions:
rsAllocationCopy1DRange: Copy consecutive cells between allocations
rsAllocationCopy2DRange: Copy a rectangular region of cells between allocations
rsAllocationVLoadX: Get a vector from an allocation of scalars
rsAllocationVStoreX: Store a vector into an allocation of scalars
rsGetElementAt: Return a cell from an allocation
rsGetElementAtYuv_uchar_U: Get the U component of an allocation of YUVs
rsGetElementAtYuv_uchar_V: Get the V component of an allocation of YUVs
rsGetElementAtYuv_uchar_Y: Get the Y component of an allocation of YUVs
rsSample: Sample a value from a texture allocation
rsSetElementAt: Set a cell of an allocation
As you can see, there is no rsAllocationCopy3DRange. I tried it anyway with this code:
rsAllocationCopy3DRange(output, 0, 0, 0, 0, 0, w, h, d, data, 0, 0, 0, 0, 0);
The compilation failed.
How can I copy 3D allocations? I could write a kernel function that just returns its input and rsForEach over it, but that seems very inefficient for a simple copy.
Why is there no 3D copy? It doesn't make sense that the Java/Kotlin side has a 3D copy but the rs side doesn't.
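For comparison, here is a minimal sketch of the Java-side 3D copy referred to above, assuming API 23+ where Allocation.copy3DRangeFrom() is available; rs is the RenderScript context and the allocation names are illustrative:
// Two 3D allocations with the same element type and w x h x d dimensions.
Type type3d = new Type.Builder(rs, Element.U8(rs)).setX(w).setY(h).setZ(d).create();
Allocation src = Allocation.createTyped(rs, type3d);
Allocation dst = Allocation.createTyped(rs, type3d);
// Java-side 3D copy (API 23+): copy a w x h x d box starting at (0, 0, 0)
// in src into the region starting at (0, 0, 0) in dst.
dst.copy3DRangeFrom(0, 0, 0, w, h, d, src, 0, 0, 0);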

Related

Renderscript allocation from bitmap

I'm trying to get into RenderScript and was confused about how Allocations are used. Almost all examples follow this algorithm:
1) Create In and Out Bitmaps
2) Create In and Out Allocations from the In and Out Bitmaps, respectively
3) Configure the script and run its forEach method
4) Copy the result from the Out Allocation into the Out Bitmap using the copyTo method
Something like this:
Bitmap srcBitmap = BitmapFactory.decodeResource(getResources(), R.drawable.lena);
Bitmap dstBitmap = Bitmap.createBitmap(srcBitmap.getWidth(), srcBitmap.getHeight(), srcBitmap.getConfig());
Allocation allocationIn = Allocation.createFromBitmap(renderScript, srcBitmap);
Allocation allocationOut = Allocation.createFromBitmap(renderScript, dstBitmap);
scriptColorMatrix.setGreyscale();
scriptColorMatrix.forEach(allocationIn, allocationOut);
//no difference after removing this line
allocationOut.copyTo(dstBitmap);
imagePreview.setImageBitmap(dstBitmap);
This works, but it also works even if I omit step 4 by removing:
allocationOut.copyTo(dstBitmap);
Let's go further and lower the brightness after the grayscale pass:
Bitmap srcBitmap = BitmapFactory.decodeResource(getResources(), R.drawable.lena);
Bitmap dstBitmap = Bitmap.createBitmap(srcBitmap.getWidth(), srcBitmap.getHeight(), srcBitmap.getConfig());
Allocation allocationIn = Allocation.createFromBitmap(renderScript, srcBitmap);
Allocation allocationOut = Allocation.createFromBitmap(renderScript, dstBitmap);
scriptColorMatrix.setGreyscale();
scriptColorMatrix.forEach(allocationIn, allocationOut);
//reset color matrix
scriptColorMatrix.setColorMatrix(new Matrix4f());
//adjust brightness
scriptColorMatrix.setAdd(-0.5f, -0.5f, -0.5f, 0f);
//Perform forEach vice versa (from out to in)
scriptColorMatrix.forEach(allocationOut, allocationIn);
imagePreview.setImageBitmap(srcBitmap);
Briefly describing the code above: we applied the grayscale color matrix from the In Allocation into the Out one, and then a brightness adjustment in the opposite direction. I never called the copyTo method, yet at the end I got the result in srcBitmap and it was correct.
That's not the end. Let's go deeper. I'll keep only one Bitmap and one Allocation:
Bitmap srcBitmap = BitmapFactory.decodeResource(getResources(), R.drawable.lena);
Allocation allocationIn = Allocation.createFromBitmap(renderScript, srcBitmap);
scriptColorMatrix.setGreyscale();
scriptColorMatrix.forEach(allocationIn, allocationIn);
//reset color matrix
scriptColorMatrix.setColorMatrix(new Matrix4f());
//adjust brightness
scriptColorMatrix.setAdd(-0.5f, -0.5f, -0.5f, 0f);
scriptColorMatrix.forEach(allocationIn, allocationIn);
imagePreview.setImageBitmap(srcBitmap);
The result is the same...
Can anybody explain why this happens, and when copyTo is required versus when the target Bitmap can be used without it?
The Allocation objects are needed in order to provide the proper mapping of a Bitmap to something RenderScript understands. If you are targeting API 18 or higher, the Allocation.createFromBitmap() methods you are using automatically pass the USAGE_SHARED flag, which tries to have the Allocation use the same backing memory as the Bitmap object. So the two are linked, but technically the copyTo() method is still needed, as the RS implementation may need to synchronize the data back. On some platforms this may have already happened, while on others the call may pause while DMA or other mechanisms update the Bitmap's backing memory with whatever changes were made by the RS code.
As for why you can reverse the in/out Allocation order when calling scripts - it's valid and up to you to get the arguments and order correct. To RS they are just objects which point to some type of backing data to be manipulated. Since both were created with the Allocation.createFromBitmap() call, they can be used as either input or output as long as the Bitmap objects are mutable.
Similarly, using the same Allocation for the input and output is not normal, but also not invalid. It just means your input is changing on the fly. As long as your script is not accessing other Elements in the data when the root function is called for a specific Element, then it should work (as you are seeing.)
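As a rough illustration of that point, the two-argument createFromBitmap() call can be written in its explicit form so the usage flags are visible, with copyTo() kept as the guaranteed synchronization step (this reuses renderScript, srcBitmap, dstBitmap and scriptColorMatrix from the question; it is a sketch, not the only valid setup):
// Roughly what Allocation.createFromBitmap(renderScript, bitmap) does when
// targeting API 18+: the Allocation may share the Bitmap's backing memory.
Allocation allocationIn = Allocation.createFromBitmap(renderScript, srcBitmap,
        Allocation.MipmapControl.MIPMAP_NONE,
        Allocation.USAGE_SHARED | Allocation.USAGE_SCRIPT);
Allocation allocationOut = Allocation.createFromBitmap(renderScript, dstBitmap,
        Allocation.MipmapControl.MIPMAP_NONE,
        Allocation.USAGE_SHARED | Allocation.USAGE_SCRIPT);
scriptColorMatrix.setGreyscale();
scriptColorMatrix.forEach(allocationIn, allocationOut);
// Even with shared backing memory, this is the portable way to make sure the
// script's output is actually visible in dstBitmap before it is drawn.
allocationOut.copyTo(dstBitmap);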

Android: Create Allocation from pixels array for renderscript

I am trying to use Android RenderScript to blur an image. My input is an array of integers containing the pixels' colors. Here's what I did, and it did not work; the application shuts down without any error message on a Galaxy S device:
bmp.getPixels(pixels, 0, bmp.getWidth(), 0, 0, bmp.getWidth(), bmp.getHeight());
Allocation input = Allocation.createSized(rs, Element.I32(rs), pixels.length);
input.copy1DRangeFrom(0, pixels.length, pixels);
Allocation output = Allocation.createTyped(rs, input.getType());
ScriptIntrinsicBlur script = ScriptIntrinsicBlur.create(rs, Element.U8_4(rs));
script.setRadius(6f);
script.setInput(input);
script.forEach(output);
output.copyTo(pixels);
You'll need to look at the logcat output (make sure no filters are on in Android Studio / Eclipse); it will show you the crash.
The problem you're seeing is most likely because your input Allocation element type doesn't match the output. They need to be the same. Rather than call Allocation.createSized() and specify an element, just call Allocation.createFromBitmap() and provide it with your input Bitmap object. Then copy the input Bitmap into the Allocation.
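A hedged sketch of what that looks like, reusing rs and bmp from the question; both allocations now share the U8_4 element type that ScriptIntrinsicBlur expects:
// Input created directly from the (ARGB_8888) Bitmap, so its element type is U8_4.
Allocation input = Allocation.createFromBitmap(rs, bmp);
Allocation output = Allocation.createTyped(rs, input.getType());
ScriptIntrinsicBlur script = ScriptIntrinsicBlur.create(rs, Element.U8_4(rs));
script.setRadius(6f);
script.setInput(input);
script.forEach(output);
// Copy the blurred result back into a Bitmap of matching size and config.
Bitmap blurred = Bitmap.createBitmap(bmp.getWidth(), bmp.getHeight(), bmp.getConfig());
output.copyTo(blurred);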

Why do we allocate blocks in bytes rather than floats in OpenGL (ES) on Android, though we work with floats most of the time

This is how I made an array of triangles:
float[] tableVerticesWithTriangle = {
// triangle 1
0f, 0f, 9f, 14f, 0f, 14f,
// triangle 2
0f, 0f, 9f, 0f, 9f, 14f
};
and this is how I allocated the block in native memory:
vertexData = ByteBuffer
        .allocateDirect(tableVerticesWithTriangle.length * BYTES_PER_FLOAT)
        .order(ByteOrder.nativeOrder())
        .asFloatBuffer();
vertexData.put(tableVerticesWithTriangle);
The reason people use ByteBuffer.allocateDirect() is that other buffer classes, like FloatBuffer, do not have an allocateDirect() method. Only ByteBuffer can be allocated as a direct buffer. So allocating a ByteBuffer, and then using the memory as a FloatBuffer, is the only way to get a directly allocated FloatBuffer.
What is a direct buffer?
The documentation of isDirect() of the FloatBuffer class explains it like this:
Indicates whether this buffer is direct. A direct buffer will try its best to take advantage of native memory APIs and it may not stay in the Java heap, so it is not affected by garbage collection.
A float buffer is direct if it is based on a byte buffer and the byte buffer is direct.
In other (less formal) words, a native buffer is a native memory allocation that Java is not messing with.
When are direct buffers required?
Strangely enough, I have never been able to find clear documentation for this. So the following is a hypothesis that I confirmed with experiments, without finding any counter-examples so far.
Direct buffers have to be used when a buffer is passed to an OpenGL API where the memory is used by the OpenGL implementation after the call returns.
There is only one example of this I could find: client side vertex arrays (which BTW are marked as a legacy feature in ES 3.0, but still supported). This is the glVertexAttribPointer() call with the following signature, which supports vertex arrays without the use of VBOs:
glVertexAttribPointer(int indx, int size, int type, boolean normalized,
int stride, Buffer ptr)
In this case, OpenGL will pull vertex data from the buffer in later draw calls, so the buffer content has to remain accessible to OpenGL after the call returns, and will potentially be read directly by the GPU.
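To make that concrete, here is a sketch of the client-side vertex array case; the attribute location aPositionLocation and the two-component layout are just assumptions for the example:
float[] vertices = {
        0f, 0f, 9f, 14f, 0f, 14f,
        0f, 0f, 9f, 0f, 9f, 14f
};
// Direct buffer: OpenGL may read from this memory during later draw calls,
// so it must live outside the garbage-collected Java heap.
FloatBuffer vertexBuffer = ByteBuffer
        .allocateDirect(vertices.length * 4)
        .order(ByteOrder.nativeOrder())
        .asFloatBuffer();
vertexBuffer.put(vertices).position(0);
// Client-side vertex array: no VBO is bound, the Buffer itself is the data source.
GLES20.glVertexAttribPointer(aPositionLocation, 2, GLES20.GL_FLOAT, false, 0, vertexBuffer);
GLES20.glEnableVertexAttribArray(aPositionLocation);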
In all other cases (again according to my hypothesis), it is not necessary to use direct buffers. You can for example do the following:
float[] vertexData = {...};
GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, vertexData.length * 4,
        FloatBuffer.wrap(vertexData), GLES20.GL_STATIC_DRAW);
The glBufferData() call consumes the data during the call, and the original buffer can not be accessed by OpenGL after the call returns. Therefore, it is not necessary to use a direct buffer.

OutOfMemoryError while create a bitmap

I want to send a fax from my app.
A fax document has a resolution of 1728 x 2444 pixels.
So I create a bitmap, add text and/or pictures and encode it to CCITT (Huffman):
Bitmap image = Bitmap.createBitmap(1728, 2444, Config.ALPHA_8);
Canvas canvas = new Canvas(image);
canvas.drawText("This is a fax", 100, 100, new Paint());
ByteBuffer buffer = ByteBuffer.allocateDirect(image.getWidth() * image.getHeight());
image.copyPixelsToBuffer(buffer);
image.recycle();
encodeCCITT(buffer, width, height);
This works perfectly on my Galaxy S II (64 MB heap size), but not on the emulator (24 MB). After creating the second fax page I get "4223232-byte external allocation too large for this process...java.lang.OutOfMemoryError" while allocating the buffer.
I already reduced the color depth from ARGB_8888 (4 bytes per pixel) to ALPHA_8 (1 byte), because fax pages are monochrome anyway.
I need this resolution and I need to have access to the pixels for encoding.
What is the best way?
Android doesn't support 1-Bpp bitmaps, and the Java heap size limit of 24/32/48MB is part of Android. Real devices can't allocate more than the Java heap limit no matter how much RAM they have. There appear to be only two possible solutions:
1) Work within the limitations of the Java heap.
2) Use native code (NDK).
In native code you can allocate the entire available RAM of the device. The only down side is that you will need to write your own code to edit and encode your bitmap.
In addition to BitBank's already good answer, you have to null the reference if you want the garbage collector to actually clean up your Bitmap. The documentation for recycle() states:
This is an advanced call, and normally need not be called, since the normal GC process will free up this memory when there are no more references to this bitmap.
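A minimal sketch of what that looks like after the encode step in the question, using the same image variable:
image.recycle();  // release the pixel memory promptly
image = null;     // drop the reference so the Bitmap object itself can be collected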
Instead of copying all the pixels to a ByteBuffer at once, you can copy them step by step, here with an int[] array, so you need less memory:
int countLines = 100;
int[] pixels = new int[width * countLines];
// assumes height is a multiple of countLines
for (int y = 0; y < height; y += countLines) {
    image.getPixels(pixels, 0, width, 0, y, width, countLines);
    // do something with pixels...
    image.setPixels(pixels, 0, width, 0, y, width, countLines);
}

Handling large bitmaps on Android - int[] larger than max heap size

I'm using very large bitmaps and I store data in a big int[]. The images can be really large and I can't downsample them (I'm getting the bitmaps over the wire and rendering them).
The problem I'm hitting is with very large bitmaps (bitmap size = 64MB), where I try to allocate an int array of size 16384000. I'm testing this on a Samsung Galaxy S II, which should have enough memory to handle this, but it seems there is a "cap" on heap size. The method Runtime.getRuntime().maxMemory() returns 64MB, so this is the max heap size for this particular device.
The API level is set to 10, so I can't use android:largeHeap attribute suggested elsewhere (and I don't even know if that would help).
Is there any way to allocate more than 64MB? I tried allocating the array in native code (using the JNI NewIntArray function), but that fails as well. It seems to be bound by the same limit as the JVM.
I could, however, allocate memory on the native side using NewDirectByteBuffer, but since this byte buffer is not backed by an array, I cannot get access to an int[] (using asIntBuffer().array() in Java), which I need in order to display the image using setPixels or createBitmap. I guess OpenGL would be a way to go, but I have (so far) zero experience with OpenGL.
Is there a way to somehow access allocated memory as int[] that I am missing?
So the only way I've found so far is to allocate the image using the NDK. A Bitmap cannot use an existing Buffer as its pixel "storage" either: copyPixelsFromBuffer is bound by the same memory limits, and judging by the method name, it copies the data anyway.
The solution (I've only prototyped it roughly) is to malloc whatever the size of the image is, fill it using C/C++, and then use the resulting ByteBuffer in Java with OpenGL ES.
The current prototype creates a simple plane and applies the image as a texture to it via glTexImage2D (luckily, the OpenGL ES methods take a Buffer as input, which seems to work as expected). Here is a sample, where mImageData is the ByteBuffer allocated (and filled) on the native side.
int[] textures = new int[1];
gl.glGenTextures(1, textures, 0);
mTextureId = textures[0];
gl.glBindTexture(GL10.GL_TEXTURE_2D, mTextureId);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_REPEAT);
gl.glTexImage2D(GL10.GL_TEXTURE_2D, 0, GL10.GL_RGBA, 4000, 4096, 0, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, mImageData);
I assume the OP already solved this, but if you have a Stream, you can stream the bitmap into a file and read that file back using inSampleSize.
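A small sketch of that approach, assuming the stream has already been written to a file at filePath; the factor of 4 is just an example:
// Decode a downsampled version: inSampleSize = 4 loads the image at 1/4 of its
// width and height, i.e. roughly 1/16 of the memory.
BitmapFactory.Options options = new BitmapFactory.Options();
options.inSampleSize = 4;
Bitmap sampled = BitmapFactory.decodeFile(filePath, options);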
