Get Bitmap from TextureView efficiently - android

I am trying to get each frame from a TextureView. Unfortunately, trying:
textureView.getBitmap();
results in slow performance. Is there a faster way to obtain a bitmap? Is it better to use the NDK instead?
I'm looking for actual examples.

A TextureView receives frames on a SurfaceTexture, which takes frames sent to its Surface and converts them to a GLES texture. To get the pixel data out, the texture must be rendered to a framebuffer, then read out with glReadPixels(). The pixel data can then be wrapped with a Bitmap object (which may or may not involve copying the pixel data).
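For reference, the getBitmap() path the question describes usually looks something like the following rough sketch; processFrame() is a placeholder for whatever you do with each frame, and reusing one Bitmap avoids an allocation per frame but not the readback that getBitmap() performs:
textureView.setSurfaceTextureListener(new TextureView.SurfaceTextureListener() {
    private Bitmap frame;

    @Override
    public void onSurfaceTextureAvailable(SurfaceTexture st, int width, int height) {
        frame = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    }

    @Override
    public void onSurfaceTextureUpdated(SurfaceTexture st) {
        // Called once for every new frame the TextureView displays.
        textureView.getBitmap(frame);
        processFrame(frame);  // hypothetical callback into your own code
    }

    @Override
    public void onSurfaceTextureSizeChanged(SurfaceTexture st, int width, int height) { }

    @Override
    public boolean onSurfaceTextureDestroyed(SurfaceTexture st) { return true; }
});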
Using the NDK isn't going to do you much good, as all of the code that needs to run quickly is already implemented natively.
You may see some improvement by sending the data directly to a SurfaceTexture and doing the GLES work yourself, but presumably you want to display the incoming frames in the TextureView, so all you'd potentially save is the Bitmap overhead (which may or may not be significant).
It might help if you explained in your question where the frames are coming from and what it is you want to do with them.

Related

Camera2 - most efficient way to obtain a still capture Bitmap

To start with the question: what is the most efficient way to initialize and use ImageReader with the camera2 api, knowing that I am always going to convert the capture into a Bitmap?
I'm playing around with the Android camera2 samples, and everything is working quite nicely. However, for my purposes I always need to perform some post-processing on captured still images, for which I require a Bitmap object. Presently I am using BitmapFactory.decodeByteArray(...) with the bytes coming from ImageReader.acquireNextImage().getPlanes()[0].getBuffer() (I'm paraphrasing). While this works acceptably, I still feel like there should be a way to improve performance. The captures are encoded in ImageFormat.JPEG and need to be decoded again to get the Bitmap, which seems redundant. Ideally I'd obtain them in PixelFormat.RGB_888 and just copy that to a Bitmap using Bitmap.copyPixelsFromBuffer(...), but it doesn't seem like initializing an ImageReader with that format has reliable device support. YUV_420_888 could be another option, but looking around SO it seems that it requires jumping through some hoops to decode into a Bitmap. Is there a recommended way to do this?
The question is what you are optimizing for.
JPEG is without doubt the easiest format, supported by all devices. Decoding it to a bitmap is not as redundant as it seems, because encoding the picture into JPEG is usually performed by dedicated hardware. This means minimal bandwidth is used to transmit the image from the sensor to your application, and on some devices this is the only way to get maximum resolution. BitmapFactory.decodeByteArray(...) is often performed by a special hardware decoder too. The major problem with this call is that it may cause an out-of-memory exception, because the output bitmap is too big. So you will find many examples that do subsampled decoding, tuned for the use case where the bitmap must be displayed on the phone screen.
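For illustration, a minimal sketch of subsampled decoding from a JPEG ImageReader capture; the fixed inSampleSize of 4 is just a placeholder for whatever your target display size requires:
// Sketch (assumes a JPEG ImageReader is already configured and delivering images).
Image image = reader.acquireNextImage();
ByteBuffer buffer = image.getPlanes()[0].getBuffer();
byte[] jpegBytes = new byte[buffer.remaining()];
buffer.get(jpegBytes);
image.close();

BitmapFactory.Options options = new BitmapFactory.Options();
options.inSampleSize = 4;  // decode at 1/4 of the width and height
Bitmap bitmap = BitmapFactory.decodeByteArray(jpegBytes, 0, jpegBytes.length, options);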
If your device supports the required resolution with RGB_8888, go for it: this needs minimal post-processing. But scaling such an image down may be more CPU-intensive than dealing with JPEG, and memory consumption may be huge. Anyway, only a few devices support this format for camera capture.
As for YUV_420_888 and other YUV formats, the advantages over JPEG are even smaller than for RGB.
If you need the best quality image and don't have memory limitations, you should go for RAW images, which are supported on most high-end devices these days. You will need your own conversion algorithm, and probably different adaptations for different devices, but at least you will have full command of the picture acquisition.
After a while I now sort of have an answer to my own question, albeit not a very satisfying one. After much consideration I attempted the following:
Set up a ScriptIntrinsicYuvToRGB RenderScript of the desired output size
Take the Surface of the used input allocation, and set this as the target surface for the still capture
Run this RenderScript when a new allocation is available and convert the resulting bytes to a Bitmap
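A rough sketch of those three steps might look like this (width/height and the capture-session wiring are assumptions, and the buffer-available listener registration is only hinted at in the comments):
RenderScript rs = RenderScript.create(context);
ScriptIntrinsicYuvToRGB yuvToRgb = ScriptIntrinsicYuvToRGB.create(rs, Element.U8_4(rs));

// Input allocation backed by a Surface the camera writes YUV frames into.
Type.Builder yuvType = new Type.Builder(rs, Element.U8(rs))
        .setX(width).setY(height)
        .setYuvFormat(ImageFormat.YUV_420_888);
Allocation input = Allocation.createTyped(rs, yuvType.create(),
        Allocation.USAGE_IO_INPUT | Allocation.USAGE_SCRIPT);
Surface captureSurface = input.getSurface();  // add this as a target of the CaptureRequest

Type.Builder rgbType = new Type.Builder(rs, Element.RGBA_8888(rs))
        .setX(width).setY(height);
Allocation output = Allocation.createTyped(rs, rgbType.create(), Allocation.USAGE_SCRIPT);

// Inside the input allocation's OnBufferAvailableListener:
input.ioReceive();            // latch the newest frame
yuvToRgb.setInput(input);
yuvToRgb.forEach(output);
Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
output.copyTo(bitmap);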
This actually worked like a charm, and was super fast. Then I started noticing weird behavior from the camera, which happened on other devices as well. As it would turn out, the camera HAL doesn't really recognize this as a still capture. This means that (a) the flash / exposure routines don't fire in this case when they need to and (b) if you have initiated a precapture sequence before your capture auto-exposure will remain locked unless you manage to unlock it using AE_PRECAPTURE_TRIGGER_CANCEL (API >= 23) or some other lock / unlock magic which I couldn't get to work on either device. Unless you're fine with this only working in optimal lighting conditions where no exposure adjustment is necessary, this approach is entirely useless.
I have one more idea, which is to set up an ImageReader with YUV_420_888 output and incorporate the conversion routine from this answer to get RGB pixels from it. However, I'm actually working with Xamarin.Android, and RenderScript user scripts are not supported there. I may be able to hack around that, but it's far from trivial.
For my particular use case I have managed to speed up JPEG decoding to acceptable levels by carefully arranging background tasks with subsampled decodes of the versions I need at the various stages of my processing, so implementing this likely won't be worth my time any time soon. If anyone is looking for ideas on how to approach something similar, though, that's what you could do.
Change the ImageReader instance to use a different ImageFormat, like this:
ImageReader.newInstance(width, height, ImageFormat.JPEG, 1)

How to load output of openGL glReadpixels() into bitmap memory

I've written code to convert YUV to RGB using OpenGL ES 3.0. Instead of showing the converted image on the glViewport surface, I want to store it in bitmap memory.
I used an OpenGL framebuffer and renderbuffer (render to texture), then tried to get the output using the glReadPixels() function. I am getting output, but I don't know how to load the output into bitmap memory. Please help.
If you have image data in client (system) memory, and want to send it to texture memory, you would use glTexImage2D (I'm assuming you have a 2D image).
However, if I'm understanding your use case, it's kind of strange to render to a renderbuffer, use glReadPixels to read the results into system memory, and then use that data to create a new texture. Generally, you would just attach a texture as the color output for the framebuffer, and then use that directly. This would bypass the roundtrip to system memory, which is expensive.
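For what it's worth, if you do want the glReadPixels() route, a minimal sketch of getting its output into a Bitmap could look like this (assuming an RGBA framebuffer of width x height is bound and rendering has finished):
ByteBuffer pixels = ByteBuffer.allocateDirect(width * height * 4);
pixels.order(ByteOrder.nativeOrder());
GLES30.glReadPixels(0, 0, width, height,
        GLES30.GL_RGBA, GLES30.GL_UNSIGNED_BYTE, pixels);
pixels.rewind();

Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
bitmap.copyPixelsFromBuffer(pixels);
// Note: GL's origin is bottom-left, so the result is vertically flipped
// relative to Android's top-left convention; flip with a Matrix if needed.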

Android - Capturing video frame from GLSurfaceView and SurfaceView

To play media using Android MediaPlayer or MediaCodec, most of the time you use SurfaceView or GLSurfaceView (there is another way to achieve this using TextureView, but let's not talk about it here, since it's a somewhat different type of view).
And as far as I know, capturing the video frame from a SurfaceView is not possible - you don't have access to the hardware overlay.
How about GLSurfaceView? Since we have access to the YUV pixels (we do, right?), is it possible?
Can anyone point me to where I can find sample code to do it?
I don't think the explanation below can work, because it assumes the color format is RGBA, and in the case above I think it's YUV.
When using GLES20.glReadPixels on android, the data returned by it is not exactly the same with the living preview
Thank you and have a great day.
You are correct in that you cannot read back from a Surface. It's the producer side of a producer-consumer pair. GLSurfaceView is just a bunch of code wrapped around a SurfaceView that (in theory) makes working with GLES easier.
So you have to send the preview somewhere else. One approach is to send it to a SurfaceTexture, which converts every frame sent to its Surface into a GLES texture. The texture can then be rendered twice, once for display and once to an offscreen pbuffer that can be saved as a bitmap (just like this question).
I'm not sure why you don't want to talk about TextureView. It's a View that uses SurfaceTexture under the hood, and it provides a getBitmap() call that does exactly what you want.

Android MediaCodec/NdkMediaCodec GLES2 interop

We are trying to decode AVC/H.264 bitstreams using the new NdkMediaCodec API. While decoding works fine now, we are struggling to get the contents of the decoded video frame mapped to GLES2 for rendering.
The API allows passing an ANativeWindow at configuration time, but we want to control the scheduling of the video rendering and ultimately just provide N textures which are filled with the decoded frame data.
All attempts to map the memory returned by getOutputBuffer() to GLES via eglCreateImageKHR/external image failed. The NdkMediaCodec seems to use libstagefright/OMX internally, so the output buffers are very likely allocated using gralloc - aren't they? Is there a way to get the gralloc handle/GraphicBuffer to bind the frame to EGL/GLES2?
Since there are lots of pixel formats for the media frame without any further documentation on their memory layout, it's hard to use NdkMediaCodec robustly.
Thanks a lot for any hints!
For general MediaCodec in java, create a SurfaceTexture for the GL ES texture you want to have the data in, then create a Surface out of this SurfaceTexture, and use this as target for the MediaCodec decoder. See http://bigflake.com/mediacodec/ (e.g. EncodeDecodeTest) for an example on doing this.
The SurfaceTexture and Surface classes aren't available directly in the NDK right now (as far as I know), though, so you'll need to call these via JNI. Then you can create an ANativeWindow from the Surface using ANativeWindow_fromSurface.
You're right that the output buffers are gralloc buffers, but since there's public APIs for doing this it's safer to rely on those than trying to take shortcuts.
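A rough sketch of the Java-side wiring described above might look like this (the MIME type and the format variable from MediaExtractor are assumptions; error handling omitted):
// On your GL thread: create a GL ES texture and wrap it in a SurfaceTexture.
int[] tex = new int[1];
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);

SurfaceTexture surfaceTexture = new SurfaceTexture(tex[0]);
Surface surface = new Surface(surfaceTexture);

// Configure the decoder to render into that Surface.
MediaCodec decoder = MediaCodec.createDecoderByType("video/avc");
decoder.configure(format, surface, null, 0);  // format obtained from MediaExtractor
decoder.start();

// For each decoded frame, releaseOutputBuffer(index, true) sends it to the
// Surface; then, back on the GL thread, latch it into the texture:
surfaceTexture.updateTexImage();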

How to save SurfaceTexture as bitmap

When I decode a video to a surface, I want to save the frames I want as bitmap/JPEG files. I don't want to draw on the screen; I just want to save the content of the SurfaceTexture as an image file.
You have to render the texture.
If it were a normal texture, and you were using GLES 2 or later, you could attach it to an FBO and read directly from that. A SurfaceTexture is backed by an "external texture", and might be in a format that the GL driver doesn't support a full set of operations on, so you can't do that. You need to render it, and read the result.
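For context, rendering an external texture mostly differs from a normal texture in the sampler type and the bind target; a sketch (the varying name vTexCoord is an assumption from your own vertex shader):
// Fragment shader for sampling a SurfaceTexture-backed external texture.
private static final String EXT_FRAGMENT_SHADER =
        "#extension GL_OES_EGL_image_external : require\n" +
        "precision mediump float;\n" +
        "varying vec2 vTexCoord;\n" +
        "uniform samplerExternalOES sTexture;\n" +
        "void main() {\n" +
        "    gl_FragColor = texture2D(sTexture, vTexCoord);\n" +
        "}\n";

// Bind with GL_TEXTURE_EXTERNAL_OES instead of GL_TEXTURE_2D:
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, textureId);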
FWIW, the way you go about saving the frame can have a significant performance impact. A full example demonstrating the use of MediaExtractor, MediaCodec, glReadPixels(), and PNG file creation is now up on bigflake (ExtractMpegFramesTest).
I've been looking at this lately, on the Android platform. Summing up the various options and why they are/aren't applicable.
glReadPixels()
The only option Android Java coders currently really have. Said to be slow. Reads from a framebuffer, not a texture (so one must render the texture to an internal frame buffer first, unless one wants to record the screen itself). Okay. Got things to work.
EGL_KHR_image_base()
An extension that seems to be available at the native (NDK) level, but not in Java.
glGetTexImage()
Looked promising, but not available in the OpenGL ES 2.0 variant.
Pixel Buffer Objects
Probably the 'right' way to do things, but requires OpenGL ES 3.0 (i.e. selected Android 4.3+ devices).
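A rough sketch of what the PBO route could look like (assuming an RGBA framebuffer of width x height is bound; note that the offset overload of glReadPixels only appeared in the Java bindings on later API levels, so older devices would need a JNI call for that step):
int[] pbo = new int[1];
GLES30.glGenBuffers(1, pbo, 0);
GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, pbo[0]);
GLES30.glBufferData(GLES30.GL_PIXEL_PACK_BUFFER, width * height * 4, null,
        GLES30.GL_STREAM_READ);

// This returns quickly; the copy into the PBO happens on the GPU.
GLES30.glReadPixels(0, 0, width, height, GLES30.GL_RGBA,
        GLES30.GL_UNSIGNED_BYTE, 0);

// Later (ideally a frame or two later, to avoid stalling), map and copy out.
ByteBuffer pixels = (ByteBuffer) GLES30.glMapBufferRange(
        GLES30.GL_PIXEL_PACK_BUFFER, 0, width * height * 4,
        GLES30.GL_MAP_READ_BIT);
Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
bitmap.copyPixelsFromBuffer(pixels);
GLES30.glUnmapBuffer(GLES30.GL_PIXEL_PACK_BUFFER);
GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, 0);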
I'm not saying this is adding any info that wouldn't be available elsewhere. But having so many seemingly similar options (that still wouldn't work) was confusing. I'm not an OpenGL expert so any mistakes above are gladly corrected.
