Camera2 and RenderScript Allocations - Android

I'm trying to use an Allocation created with the USAGE_IO_INPUT flag to get images from the camera. I'm setting it up as follows:
Type.Builder yuvType = new Type.Builder(mRS, Element.YUV(mRS));
yuvType.setYuvFormat(imageReaderFormat);
yuvType.setX(mCameraWidth).setY(mCameraHeight);
mAllocation = Allocation.createTyped(mRS, yuvType.create(),
        Allocation.MipmapControl.MIPMAP_NONE,
        Allocation.USAGE_IO_INPUT | Allocation.USAGE_SCRIPT);
mAllocation.setOnBufferAvailableListener(mOnBufferAllocationAvailable);
I'm adding the Allocation's Surface to a preview session, and getting callbacks to my very simple function:
public void onBufferAvailable(Allocation allocation) {
    allocation.ioReceive();
    // mPixels is properly initialised
    allocation.copyTo(mPixels);
}
This setup works on a Nexus 5X, but fails on a Nexus 4 running 5.1.1. When I call allocation.ioReceive() in the callback, I get a few warnings printed from the driver, and copying from the Allocation to a byte array results in garbage being copied.
W/Adreno-RS: <rsdVendorAllocationUnMapQCOM:394>: NOT Found allocation map for alloc 0xa1761000
W/Adreno-GSL: <gsl_ldd_control:427>: ioctl fd 25 code 0xc01c0915 (IOCTL_KGSL_MAP_USER_MEM) failed: errno 22 Invalid argument
W/Adreno-RS: <rsdVendorAllocationMapQCOM:288>: gsl_memory_map_fd failed -5 hostptr: 0xa0112000 sz: 0 offset: 0 flags: 0x10c0b00 alloc: 0xa1761000
I am running the camera in a background thread, although onBufferAvailable gets called on the "RSMessageThread".
Is this problem related to the way I am setting the Allocations and the Camera Preview, or is it a bug in the driver?

I see the same error message on a Samsung Galaxy S4 (smartphone), Android 5.0 (API 21), but not with the identical camera2/RenderScript application on a Samsung Galaxy Tab 5 (tablet), Android 5.1.1 (API 22). I'm assuming it is an early implementation problem on the device vendor's part.

Have you tried the official HDR-viewfinder example? If it works on the Nexus 4, you can study that example.
If not, you can try my YUV_420_888-to-Bitmap implementation, which takes a different approach: not via a YUV Allocation, but via byte Allocations filled using the information from the three image planes, as sketched below.
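For illustration, here is a minimal sketch of that plane-based idea (my own simplified version, not the linked implementation): it walks the three planes of a YUV_420_888 android.media.Image, honouring each plane's row and pixel stride, and packs them into one contiguous byte array that could then be copied into a plain U8 Allocation.
import java.nio.ByteBuffer;
import android.media.Image;

// Packs the Y, U and V planes of a YUV_420_888 Image into one contiguous
// byte array (Y first, then U, then V), honouring row/pixel strides.
static byte[] yuv420ToBytes(Image image) {
    int w = image.getWidth(), h = image.getHeight();
    byte[] out = new byte[w * h * 3 / 2];
    int pos = 0;
    Image.Plane[] planes = image.getPlanes();
    for (int p = 0; p < 3; p++) {
        ByteBuffer buf = planes[p].getBuffer();
        int rowStride = planes[p].getRowStride();
        int pixelStride = planes[p].getPixelStride();
        int planeW = (p == 0) ? w : w / 2;   // chroma planes are subsampled 2x
        int planeH = (p == 0) ? h : h / 2;
        byte[] row = new byte[rowStride];
        for (int y = 0; y < planeH; y++) {
            buf.position(y * rowStride);
            buf.get(row, 0, Math.min(rowStride, buf.remaining()));
            for (int x = 0; x < planeW; x++) {
                out[pos++] = row[x * pixelStride];
            }
        }
    }
    return out;
}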

Related

ARCore Image Extraction from GPU

I'm trying to extract the image directly from the GPU (i.e. without using AcquireCameraImageBytes()) for performance reasons (a Samsung S9 can't reach 10 fps) and to support the Xiaomi Pocophone I bought. I use the TextureReader class included in the ComputerVision example, but OnImageAvailableCallback is never called and the log shows errors during initialization:
camera_utility: Failed to create OpenGL frame buffer.
camera_utility: Failed to create OpenGL frame buffer.
I inserted a couple of breakpoints inside libarcore_camera_utility.so and saw that glCheckFramebufferStatus inside TextureReader::create returns 0 (so an error occurred), but glGetError() doesn't return any error. How can I resolve this problem?
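One thing worth checking: glCheckFramebufferStatus returning 0 with no pending GL error usually means the query itself could not run, most often because no GL context is current on the calling thread (an assumption about your setup, not something the logs confirm). A minimal Java-side sketch of the same completeness check, assuming a current GL ES 2.0 context:
import android.opengl.GLES20;
import android.util.Log;

// Returns true if the currently bound framebuffer is complete; logs the
// status and any pending GL error otherwise.
static boolean isFramebufferComplete() {
    int status = GLES20.glCheckFramebufferStatus(GLES20.GL_FRAMEBUFFER);
    if (status == GLES20.GL_FRAMEBUFFER_COMPLETE) {
        return true;
    }
    Log.e("FboCheck", "status=0x" + Integer.toHexString(status)
            + " glError=0x" + Integer.toHexString(GLES20.glGetError()));
    return false; // status 0 here typically means no current GL context
}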

TensorFlow Lite Android app crashing after detection

I have trained my model using ssd_mobilenet_v2_quantized_coco, which was a long, painstaking process of digging. Once training succeeded, the model was correctly detecting images on my laptop, but on my phone the app crashes as soon as an object is detected. I used the TF Lite Android demo app available on GitHub. I did some debugging in Android Studio and get the following error log when an object is detected and the app crashes:
I/tensorflow: MultiBoxTracker: Processing 0 results from 314
I/tensorflow: DetectorActivity: Preparing image 506 for detection in bg thread.
I/tensorflow: DetectorActivity: Running detection on image 506
I/tensorflow: MultiBoxTracker: Processing 0 results from 506
I/tensorflow: DetectorActivity: Preparing image 676 for detection in bg thread.
I/tensorflow: DetectorActivity: Running detection on image 676
E/AndroidRuntime: FATAL EXCEPTION: inference
Process: org.tensorflow.lite.demo, PID: 3122
java.lang.ArrayIndexOutOfBoundsException: length=80; index=-2147483648
at java.util.Vector.elementData(Vector.java:734)
at java.util.Vector.get(Vector.java:750)
at org.tensorflow.demo.TFLiteObjectDetectionAPIModel.recognizeImage(TFLiteObjectDetectionAPIModel.java:213)
at org.tensorflow.demo.DetectorActivity$3.run(DetectorActivity.java:247)
at android.os.Handler.handleCallback(Handler.java:873)
at android.os.Handler.dispatchMessage(Handler.java:99)
at android.os.Looper.loop(Looper.java:193)
at android.os.HandlerThread.run(HandlerThread.java:65)
My guess is that the labels in the .txt file are somehow being misread. This is because of the line:
at org.tensorflow.demo.TFLiteObjectDetectionAPIModel.recognizeImage(TFLiteObjectDetectionAPIModel.java:213)
and that line corresponds to the following code:
labels.get((int) outputClasses[0][i] + labelOffset)
However, I don't know what to change in labels.txt. Possibly I need to edit that txt as suggested here. Any other suggestions and explanations of possible causes are appreciated.
Update: I added ??? to labels.txt and compiled/ran, but I am still getting the same error as above.
P.S. I trained ssd_mobilenet_v2_coco (the model without quantization) as well, and it works without crashing in the app. I am guessing that quantization converts the label indices differently, perhaps resulting in an out-of-bounds error for the labels.
Yes, it is because the label output sometimes contains garbage values. For a quick fix you can add a condition (note that in your stack trace the index is negative, so guard both ends of the range):
if ((int) outputClasses[0][i] > 10 || (int) outputClasses[0][i] < 0) {
    outputClasses[0][i] = -1; // mark as an invalid class
}
Here 10 is the number of classes the model was trained for; change it accordingly.

Qt5, OpenGL ES 2 extensions required

I am developing an Android application based on Qt 5.4/Qt Quick 2 and OpenGL ES 2.
I installed the textureinsgnode example to check how well it runs on my target device (as my app makes massive use of FBOs) and I am getting 18 FPS. I checked on a Samsung SM-T535 and get around 47-48 FPS, while on the target my application appears frozen whenever I attempt any user action.
I checked the extensions available on both devices (my target and the Samsung tablet) with:
QOpenGLFramebufferObject *createFramebufferObject(const QSize &size) {
    QSet<QByteArray> extensions = QOpenGLContext::currentContext()->extensions();
    foreach (QByteArray extension, extensions) {
        qDebug() << extension;
    }
    QOpenGLFramebufferObjectFormat format;
    format.setAttachment(QOpenGLFramebufferObject::CombinedDepthStencil);
    format.setSamples(4);
    return new QOpenGLFramebufferObject(size, format);
}
And I am getting a very short list of extensions on the target device compared with the Samsung tablet's list at the same point:
"GL_OES_rgb8_rgba8"
"GL_EXT_multisampled_render_to_texture" "GL_OES_packed_depth_stencil"
"GL_ARM_rgba8" "GL_OES_vertex_half_float" "GL_OES_EGL_image"
"GL_EXT_discard_framebuffer" "GL_OES_compressed_ETC1_RGB8_texture"
"GL_OES_depth_texture" "GL_KHR_debug" "GL_ARM_mali_shader_binary"
"GL_OES_depth24" "GL_EXT_texture_format_BGRA8888"
"GL_EXT_blend_minmax" "GL_EXT_shader_texture_lod"
"GL_OES_EGL_image_external" "GL_EXT_robustness" "GL_OES_texture_npot"
"GL_OES_depth_texture_cube_map" "GL_ARM_mali_program_binary"
"GL_EXT_debug_marker" "GL_OES_get_program_binary"
"GL_OES_standard_derivatives" "GL_OES_EGL_sync"
I installed an NME (3.4.4, so based on OpenGL ES 1.1) application (BunnyMark) and I get around 45-48 FPS.
Based on these tests I suspect the target device has some problem with OpenGL ES 2, but I have not been able to find anywhere (I was googling) which OpenGL ES 2 extensions Qt 5.4 requires to work properly.
The question: what are the OpenGL ES 2 extensions required by Qt 5.4, and is there a way to check for them?
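I can't give the official Qt requirements list, but for the second half of the question (checking at runtime): the raw extension string can also be read directly from the current context with plain GLES, independently of Qt. A minimal sketch (the extension name passed in is just an example, not a known Qt requirement):
import android.opengl.GLES20;

// Queries the current GL ES context's extension string and tests whether a
// given extension is present. Requires a current context on this thread.
static boolean hasGlExtension(String name) {
    String all = GLES20.glGetString(GLES20.GL_EXTENSIONS);
    return all != null && (" " + all + " ").contains(" " + name + " ");
}

// e.g. hasGlExtension("GL_OES_packed_depth_stencil")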

Android RenderScript Allocation.USAGE_SHARED crash

I am getting a crash while running my app, which uses RenderScript. Unfortunately, logcat does not give any specific details.
b = Bitmap.createBitmap(ib.getWidth(), ib.getHeight(), ib.getConfig());
Allocation mInAllocation = Allocation.createFromBitmap(mRS, inBitmap,
        Allocation.MipmapControl.MIPMAP_NONE, Allocation.USAGE_SHARED);
Allocation mOutAllocation2 = Allocation.createFromBitmap(mRS, outBitmap,
        Allocation.MipmapControl.MIPMAP_NONE, Allocation.USAGE_SHARED);
// ...execute an algorithm from the .rs file, and later:
mOutAllocation2.copyTo(outBitmap);
The same code sequence runs perfectly fine when I use the USAGE_SCRIPT flag instead of USAGE_SHARED for mOutAllocation2.
Any help on why this could happen?
I read in the Android docs that if the allocation is of type USAGE_SHARED, then the copy operation from the allocation to the bitmap (see above) is faster.
Currently I am seeing copies from allocations to bitmaps take seconds for decently large images (8 MP and above).
I am currently using a Nexus 10 (Android 4.3).
First, you need to be using Allocation.USAGE_SCRIPT | Allocation.USAGE_SHARED; createFromBitmap(RenderScript, Bitmap) will set that for you when possible.
Second, if your copy times are that long, you're probably measuring script execution as well. Script execution is asynchronous, so the wall-clock time of copyTo(Bitmap) may include significantly more than just the copy. See the sketch below.
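A minimal sketch of what that combination looks like in the code from the question (same variable names; the mRS.finish() call is added here to separate script time from copy time when measuring):
// Back the Allocation by the Bitmap (USAGE_SHARED) while still letting the
// script read/write it (USAGE_SCRIPT).
Allocation mOutAllocation2 = Allocation.createFromBitmap(mRS, outBitmap,
        Allocation.MipmapControl.MIPMAP_NONE,
        Allocation.USAGE_SCRIPT | Allocation.USAGE_SHARED);

// ...launch the kernel from the .rs file...

mRS.finish();                      // block until the async script completes
mOutAllocation2.copyTo(outBitmap); // now this times only the copy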
I was facing the same problem and I resolved it. The issue was happening because my bitmap configuration was not Bitmap.Config.ARGB_8888; we should convert it to ARGB_8888 before applying the blur:
Bitmap U8_4Bitmap;
if (yourBitmap.getConfig() == Bitmap.Config.ARGB_8888) {
    U8_4Bitmap = yourBitmap;
} else {
    U8_4Bitmap = yourBitmap.copy(Bitmap.Config.ARGB_8888, true);
}

How to measure VRAM consumption on Android?

I want to acquire the Android device's VRAM size.
Is there a method for getting it from a program?
Let's do some calculation using the Nexus One:
Screen resolution is 480x800. So the minimum required video memory size would be:
480 * 800 * 4 bytes = 1536000 bytes
Assuming that the driver may (and normally will) use several buffers, we should also expect values like:
1536000 * 2 = 3072000 bytes
1536000 * 3 = 4608000 bytes
etc...
It would be weird to see values that are not a multiple of 1536000 (or W x H x 4 in general).
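The same arithmetic as a throwaway helper (hypothetical name, just to make the multiples explicit):
// Bytes needed for `buffers` copies of a 32-bit RGBA framebuffer.
static long framebufferBytes(int width, int height, int buffers) {
    return (long) width * height * 4L * buffers;
}
// framebufferBytes(480, 800, 1) == 1536000
// framebufferBytes(480, 800, 3) == 4608000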
After some searching on Android internals I found this documentation:
...Android makes two requirements of the driver: a linear address space of mappable memory that it can write to directly...accessing the driver by calling open on /dev/fb0...
So I tried to take the size of the /dev/graphics/fb0 file (on my device there is no /dev/fb0).
But a direct approach doesn't work:
File file = new File("/dev/graphics/fb0");
file.length(); // ==0, doesn't work, no read access
Using the following trick you can get the actual size of fb0:
>adb pull /dev/graphics/fb0
1659 KB/s (4608000 bytes in 2.712s)
Video memory is ~4 MB (Nexus One). Let's check whether this is a multiple of the Nexus screen buffer size:
4608000 / 1536000 = 3
It looks like the right value, and we can also say that the driver uses three screen buffers.
So, in conclusion, you can detect the video memory size using adb, but you can't use this approach from your Android application at runtime due to file access restrictions.
You typically do not have dedicated "VRAM" on mobile devices. At least you don't have it with PowerVR architectures (which totally dominate the market with their MBX and SGX cores).
That is, the OpenGL driver allocates normal RAM until you run out of it, and the more you allocate the less is left for your application.
The Android/OpenGL APIs don't offer explicit methods to read the VRAM size of a given device.
Poor man's solution:
You could try to infer the available VRAM empirically by allocating 1 MB textures until you get an out-of-memory error from gl.glGetError().
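A minimal sketch of that poor man's probe, assuming a current GL ES 2.0 context (many drivers defer the actual allocation and the memory is shared with the CPU, so treat the result as a rough estimate; the probe is destructive, so only run it in a throwaway context):
import java.nio.ByteBuffer;
import android.opengl.GLES20;

// Allocates 1 MB RGBA textures (512*512*4 bytes) until the driver reports
// GL_OUT_OF_MEMORY, and returns the number of megabytes that succeeded.
static int probeTextureMemoryMb() {
    ByteBuffer pixels = ByteBuffer.allocateDirect(512 * 512 * 4);
    int mb = 0;
    while (mb < 4096) { // safety cap to avoid looping forever
        int[] tex = new int[1];
        GLES20.glGenTextures(1, tex, 0);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
        GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA,
                512, 512, 0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixels);
        if (GLES20.glGetError() == GLES20.GL_OUT_OF_MEMORY) {
            break;
        }
        mb++;
    }
    return mb;
}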
From your "dmesg" output u can read off the VRAM, so for my Tablet:
> [ 0.000000] Machine: TDM3730 [ 0.000000] Reserving 12582912
> bytes SDRAM for VRAM
>
> 7>[ 3.929962] VRAM: checking region 9f400000 3072
> <4>[ 3.929992] Failed. Allocating 4194304 bytes for fb 0
> <7>[ 3.935333] VRAM: alloc mem type 0 size 4194304 paddr dec2bd4c
> <7>[ 3.935485] VRAM: checking region 9f400000 3072
> <7>[ 3.935485] VRAM: found 9f400000, end a0000000
> <6>[ 3.936584] android_usb gadget: high speed config #1: android
> <4>[ 3.960113] allocating 4194304 bytes for fb 1
or details at:
http://pastebin.com/jQSXQqHh
It's simple: just count how many MB are missing from the usable RAM compared with the real capacity of the module. For example, my Lenovo A369i has a 512 MB RAM module, but the Settings app only shows 471 MB usable, so the 41 MB left over is reserved for the GPU. The conclusion is that my A369i has 41 MB of VRAM.
This method relies on the graphics memory being shared system memory (wiki).
I suspect that android.os.StatFs is what you're looking for:
http://developer.android.com/reference/android/os/StatFs.html
