Efficient path for displaying customized video in Android

In Android, I need an efficient way of modifying the camera stream before displaying it on the screen. This post discusses a couple of ways of doing so, and I was able to implement the first one:
1. Get the frame buffer from onPreviewFrame
2. Convert the frame to YUV
3. Modify the frame
4. Convert the modified frame to JPEG
5. Display the frame in an ImageView placed over the SurfaceView used for the preview
That worked, but it brought the 30 fps I was getting with a regular camera preview down to 5 fps or so. Converting frames back and forth between different image spaces is also power-hungry, which I don't want.
Are there examples on how to get access to the raw frames directly and not have to go through so many conversions? Is using OpenGL the right way of doing this? It must be a very common thing to do but I can't find good examples.
Note: I'd rather avoid using the Camera2 APIs for backward compatibility's sake.

The most efficient form of your CPU-based pipeline would look something like this:
1. Receive frames from the Camera on a Surface, rather than as byte[]. With Camera2 you can send the frames directly to an ImageReader; that will get you CPU access to the raw YUV data from the Camera without copying or converting it (see the sketch after this list). (I'm not sure how to rig this up with the old Camera API, as it wants either a SurfaceTexture or SurfaceHolder, and ImageReader doesn't provide those. You can run the frames through a SurfaceTexture and get RGB values from glReadPixels(), but I don't know if that'll buy you anything.)
2. Perform your modifications on the YUV data.
3. Convert the YUV data to RGB.
4. Either convert the RGB data into a Bitmap or a GLES texture. glTexImage2D will be more efficient, but OpenGL ES comes with a steep learning curve. Most of the pieces you need are in Grafika (e.g. the texture upload benchmark) if you decide to go that route.
5. Render the image. Depending on what you did in step #4, you'll either render the Bitmap through a Canvas on a custom View, or render the texture with GLES on a SurfaceView or TextureView.
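Roughly, the ImageReader wiring from step 1 looks like this. This is only a minimal sketch, assuming Camera2 on API 21+; the class name, resolution, and handler are placeholders:

    import android.graphics.ImageFormat;
    import android.media.Image;
    import android.media.ImageReader;
    import android.os.Handler;
    import android.view.Surface;
    import java.nio.ByteBuffer;

    /** Sketch of step 1: receive camera frames as raw YUV through an ImageReader (Camera2). */
    public final class YuvFrameSource {

        private final ImageReader reader;

        public YuvFrameSource(int width, int height, Handler handler) {
            reader = ImageReader.newInstance(width, height, ImageFormat.YUV_420_888, 3);
            reader.setOnImageAvailableListener(r -> {
                Image image = r.acquireLatestImage();
                if (image == null) return;
                try {
                    // Planes 0/1/2 are Y/U/V; respect rowStride and pixelStride when walking them.
                    Image.Plane yPlane = image.getPlanes()[0];
                    ByteBuffer yData = yPlane.getBuffer();
                    // ... modify the YUV data here (step 2) ...
                } finally {
                    image.close(); // always release the buffer back to the reader
                }
            }, handler);
        }

        /** Add this Surface as an output target of the Camera2 capture session. */
        public Surface getSurface() {
            return reader.getSurface();
        }
    }

The Surface it hands back is what you register as an output of the capture session; everything inside the listener is already raw YUV, with no copy and no JPEG involved.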
I think the most significant speedup will be from eliminating the JPEG compression and uncompression, so you should probably start there. Convert the output of your frame editor to a Bitmap and just draw it on the Canvas of a TextureView or custom View, rather than converting to JPEG and using ImageView. If that doesn't get you the speedup you want, figure out what's slowing you down and work on that piece of the pipeline.
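For the "skip the JPEG" part, here is a minimal sketch of drawing the edited Bitmap straight onto a TextureView's Canvas. It assumes the TextureView is laid out and available; the class and method names are placeholders:

    import android.graphics.Bitmap;
    import android.graphics.Canvas;
    import android.view.TextureView;

    final class FramePresenter {
        /** Draw one edited frame directly onto the TextureView, no JPEG round trip. */
        static void drawFrame(TextureView textureView, Bitmap editedFrame) {
            Canvas canvas = textureView.lockCanvas();
            if (canvas == null) return; // surface not ready yet
            try {
                canvas.drawBitmap(editedFrame, 0, 0, null);
            } finally {
                textureView.unlockCanvasAndPost(canvas);
            }
        }
    }

Note that lockCanvas() can't be used while something else (like the camera) is producing into the same TextureView; in this pipeline the frames come from your editing code, so that's fine.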

If you're restricted to the old camera API, then using a SurfaceTexture and doing your processing in a GPU shader may be most efficient.
This assumes whatever modifications you want to do can be expressed reasonably as a GL fragment shader, and that you're familiar enough with OpenGL to set up all the boilerplate necessary to render a single quadrilateral into a frame buffer, using the texture from a SurfaceTexture.
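For reference, the kind of shader meant here looks roughly like this. A minimal sketch only, with grayscale standing in for "your modification"; the attribute and uniform names are arbitrary placeholders:

    /** GLSL ES 1.0 shader pair for sampling a SurfaceTexture-backed external texture. */
    final class CameraShaders {
        static final String VERTEX_SHADER =
                "attribute vec4 aPosition;\n" +
                "attribute vec2 aTexCoord;\n" +
                "varying vec2 vTexCoord;\n" +
                "void main() {\n" +
                "    gl_Position = aPosition;\n" +
                "    vTexCoord = aTexCoord;\n" +
                "}\n";

        static final String FRAGMENT_SHADER =
                "#extension GL_OES_EGL_image_external : require\n" +
                "precision mediump float;\n" +
                "varying vec2 vTexCoord;\n" +
                "uniform samplerExternalOES uCameraTex;\n" +
                "void main() {\n" +
                "    vec4 color = texture2D(uCameraTex, vTexCoord);\n" +
                "    // Example modification: simple grayscale; replace with your own effect.\n" +
                "    float luma = dot(color.rgb, vec3(0.299, 0.587, 0.114));\n" +
                "    gl_FragColor = vec4(vec3(luma), 1.0);\n" +
                "}\n";
    }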
You can then read back the results with glReadPixels from the final rendering output, and save that as a JPEG.
Note that the shader will provide you with RGB data, not YUV, so if you really need YUV, you'll have to convert back to a YUV colorspace before processing.
If you can use camera2, as fadden says, ImageReader or Allocation (for Java/JNI-based processing or Renderscript, respectively) become options as well.
And if you're only using the JPEG to get to a Bitmap to place on an ImageView, and not because you want to save it, then again as fadden says you can skip the encode/decode step and draw to a view directly. For example, if using the Camera->SurfaceTexture->GL path, you can just use a GLSurfaceView as the output destination and render directly into it, if that's all you need to do with the data.
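A skeleton of that GLSurfaceView path, just to show where the pieces go; the renderer name is a placeholder and the actual texture/shader setup is elided:

    import android.graphics.SurfaceTexture;
    import android.opengl.GLSurfaceView;
    import javax.microedition.khronos.egl.EGLConfig;
    import javax.microedition.khronos.opengles.GL10;

    /** Skeleton renderer for the Camera -> SurfaceTexture -> GLSurfaceView path. */
    final class CameraRenderer implements GLSurfaceView.Renderer {
        private SurfaceTexture cameraTexture; // wraps an external GLES texture

        @Override public void onSurfaceCreated(GL10 unused, EGLConfig config) {
            // Create the external texture and the SurfaceTexture around it, compile the
            // shaders, then call camera.setPreviewTexture(cameraTexture) and startPreview().
        }

        @Override public void onSurfaceChanged(GL10 unused, int width, int height) {
            // GLES20.glViewport(0, 0, width, height);
        }

        @Override public void onDrawFrame(GL10 unused) {
            if (cameraTexture != null) {
                cameraTexture.updateTexImage(); // latch the newest camera frame
            }
            // Draw a full-screen quad sampling the external texture with your shader.
        }
    }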

Related

Buffer to OpenGL rendering on Android?

In my application, I am using the Android Camera API to access the device's camera. I'm receiving camera frames as a byte array in the onPreviewFrame() callback. I have to process the image/byte array and hand it to an OpenGL display.
I'm not configuring the camera with setPreviewDisplay(holder) or setPreviewTexture(surface), so the frames are not rendered to a view directly.
I have been googling, but have found no useful reference so far.
Can anyone suggest useful information or a source for rendering an image buffer with OpenGL?
What you are looking for is how to render a texture, which in this case is your image byte array data. You will want to create a vertex buffer object with attributes - position and texture coordinates. Basically you will be creating a quad and mapping the corners of the texture to it. You can find a good example here
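A minimal sketch of that idea, assuming the processed bytes are RGBA and a GLES 2.0 context; class and method names are placeholders:

    import android.opengl.GLES20;
    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;
    import java.nio.FloatBuffer;

    final class ImageQuad {
        // Full-screen quad: x, y followed by texture u, v for each corner.
        private static final float[] QUAD = {
                -1f, -1f,  0f, 1f,
                 1f, -1f,  1f, 1f,
                -1f,  1f,  0f, 0f,
                 1f,  1f,  1f, 0f,
        };

        private final FloatBuffer vertices;
        private int textureId;

        ImageQuad() {
            vertices = ByteBuffer.allocateDirect(QUAD.length * 4)
                    .order(ByteOrder.nativeOrder())
                    .asFloatBuffer();
            vertices.put(QUAD).position(0);
        }

        /** Create the texture once, on the GL thread. */
        void createTexture() {
            int[] ids = new int[1];
            GLES20.glGenTextures(1, ids, 0);
            textureId = ids[0];
            GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
            GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
            GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
        }

        /** Upload one processed RGBA frame; call before drawing the quad. */
        void uploadFrame(byte[] rgba, int width, int height) {
            GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
            GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height, 0,
                    GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, ByteBuffer.wrap(rgba));
        }
    }

Drawing then binds the vertex data with glVertexAttribPointer (position and texture coordinates) and issues glDrawArrays(GL_TRIANGLE_STRIP, 0, 4) with your shader program bound.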

Android: Attach SurfaceTexture to FrameBuffer

I am performing a video effect that requires dual pass rendering (the texture needs to be passed through multiple shader programs). Attaching a SurfaceTexture to a GL_TEXTURE_EXTERNAL_OES that is passed in the constructor does not seem to be a solution, since the displayed result is only rendered once.
One solution I am aware of is that the first rendering can be done to a FrameBuffer, and then the resulting texture can be rendered to where it actually gets displayed.
However, it seems that a SurfaceTexture must be attached to a GL_TEXTURE_EXTERNAL_OES texture, and not a FrameBuffer. I'm not sure if there is a workaround for this, or if there is a different approach I should take.
Thank you.
SurfaceTexture receives a buffer of graphics data and essentially wraps it up as an "external" texture. If it helps to see the source code, start in updateTexImage(). Note that the name of the underlying class ("GLConsumer") is a more accurate description of its function than "SurfaceTexture": it consumes frames of graphic data and makes them available to GLES.
SurfaceTexture is expected to work with formats that OpenGL ES doesn't "naturally" work with, notably YUV, so it always uses external textures.
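For illustration, the usual wiring looks roughly like this; a sketch only, assuming a GLES context is current on the calling thread (the class name is a placeholder):

    import android.graphics.SurfaceTexture;
    import android.opengl.GLES11Ext;
    import android.opengl.GLES20;

    final class ExternalTextureSetup {
        /** Call on the GL thread: returns a SurfaceTexture backed by an external GLES texture. */
        static SurfaceTexture create() {
            int[] ids = new int[1];
            GLES20.glGenTextures(1, ids, 0);
            GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, ids[0]);
            GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
                    GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
            GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
                    GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

            // A producer (Camera, MediaCodec, another GL context) sends frames to this
            // SurfaceTexture; latch each one on the GL thread with updateTexImage(), then
            // sample it in a shader through samplerExternalOES.
            return new SurfaceTexture(ids[0]);
        }
    }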

CPU processing on camera image

Currently I'm showing a camera preview on the screen by providing the preview texture via camera.setPreviewTexture(...) (drawing it with OpenGL, of course).
I have a native library which takes a byte[] image and returns a byte[] result image derived from the input. I want to call it and then draw both the input image and the result on the screen, one on top of the other.
I know that in OpenGL, to get texture data back to the CPU it must be read with glReadPixels, and after processing I would have to load the result into a texture again; doing this every frame will have a big impact on performance.
I thought about using camera.setPreviewCallback(...): there I would get each frame (call the processing method and transfer the result to my SurfaceView), while in parallel continuing to use the preview-texture technique for drawing on the screen. But then I'm worried about synchronization between the frames I get in the preview callback and those I get in the texture.
Am I missing anything? Or is there no easy way to solve this issue?
One approach that may be useful is to direct the output of the Camera to an ImageReader, which provides a Surface. Each frame sent to the Surface is made available as YUV data without a copy, which makes it faster than some of the alternatives. The variations in color formats (stride, alignment, interleave) are handled by ImageReader.
Since you want the camera image to be presented simultaneously with the processing output, you can't send frames down two independent paths.
When the frame is ready, you will need to do a color-space conversion and upload the pixels with glTexImage2D(). This will likely be the performance-limiting factor.
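For the color-space conversion step, an unoptimized CPU sketch, assuming YUV_420_888 frames from the ImageReader and BT.601 coefficients (the class name is a placeholder):

    import android.media.Image;
    import java.nio.ByteBuffer;

    final class YuvConverter {
        /** Convert one YUV_420_888 Image to packed ARGB_8888 pixels. */
        static int[] toArgb(Image image) {
            int w = image.getWidth(), h = image.getHeight();
            Image.Plane yP = image.getPlanes()[0];
            Image.Plane uP = image.getPlanes()[1];
            Image.Plane vP = image.getPlanes()[2];
            ByteBuffer yBuf = yP.getBuffer(), uBuf = uP.getBuffer(), vBuf = vP.getBuffer();
            int[] out = new int[w * h];
            for (int row = 0; row < h; row++) {
                for (int col = 0; col < w; col++) {
                    int y = yBuf.get(row * yP.getRowStride() + col * yP.getPixelStride()) & 0xFF;
                    int uvRow = row / 2, uvCol = col / 2;
                    int u = (uBuf.get(uvRow * uP.getRowStride() + uvCol * uP.getPixelStride()) & 0xFF) - 128;
                    int v = (vBuf.get(uvRow * vP.getRowStride() + uvCol * vP.getPixelStride()) & 0xFF) - 128;
                    int r = clamp((int) (y + 1.402f * v));
                    int g = clamp((int) (y - 0.344f * u - 0.714f * v));
                    int b = clamp((int) (y + 1.772f * u));
                    out[row * w + col] = 0xFF000000 | (r << 16) | (g << 8) | b;
                }
            }
            return out;
        }

        private static int clamp(int v) { return v < 0 ? 0 : Math.min(v, 255); }
    }

The resulting pixels can go into Bitmap.createBitmap(pixels, width, height, Bitmap.Config.ARGB_8888), or be repacked as RGBA bytes for glTexImage2D(); either way, this loop is usually the part worth optimizing.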
From the comments it sounds like you're familiar with image filtering using a fragment shader; for anyone else who finds this, you can see an example here.

Android: Dual Pass Render To SurfaceTexture Using OpenGL

In order to perform a Gaussian blur on a SurfaceTexture, I am performing a dual pass render, meaning that I am passing the texture through one shader (horizontal blur) and then through another shader (vertical blur).
I understand the theory behind this: render the first texture to an FBO and the second one onto the SurfaceTexture itself.
There are some examples of this, but none of them seem applicable since a SurfaceTexture uses GL_TEXTURE_EXTERNAL_OES as its target in glBindTexture rather than GL_TEXTURE_2D. Therefore, in the call to glFramebufferTexture2D, GL_TEXTURE_2D cannot be used as the textarget, and I don't think GL_TEXTURE_EXTERNAL_OES can be used in this call.
Can anyone suggest a way to render a texture twice, with the final rendering going to a SurfaceTexture?
Important update: I am using a SurfaceTexture since this is a dynamic blur of a video that plays onto a surface.
Edit: This question was asked with some misunderstanding on my part. A SurfaceTexture is not a display element. It instead receives data from a surface, and is attached to a GL_TEXTURE_EXTERNAL_OES.
Thank you.
Rendering to a SurfaceTexture seems like an odd thing to do here. The point of SurfaceTexture is to take whatever is sent to the Surface and convert it into a GLES "external" texture. Since you're rendering with GLES, you can just use an FBO to render into a GL_TEXTURE_2D for your second pass.
SurfaceTexture is used when receiving frames from Camera or a video decoder because the source is usually YUV. The "external" texture format allows for a wider range of pixel formats, but constrains the uses of the texture. There's no value in rendering to a SurfaceTexture with GLES if your goal is to create a GLES texture.
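For reference, the FBO piece looks roughly like this; a minimal GLES 2.0 sketch with placeholder names:

    import android.opengl.GLES20;

    final class OffscreenTarget {
        int framebufferId;
        int textureId;

        /** Create a GL_TEXTURE_2D-backed FBO to hold the output of the first (horizontal) pass. */
        void create(int width, int height) {
            int[] ids = new int[1];

            GLES20.glGenTextures(1, ids, 0);
            textureId = ids[0];
            GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
            GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height, 0,
                    GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
            GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
            GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

            GLES20.glGenFramebuffers(1, ids, 0);
            framebufferId = ids[0];
            GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, framebufferId);
            GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
                    GLES20.GL_TEXTURE_2D, textureId, 0);
            GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
        }
    }

Pass 1: bind framebufferId and draw the quad sampling the external (SurfaceTexture) texture with the horizontal-blur shader. Pass 2: bind the default framebuffer again and draw, this time sampling textureId with the vertical-blur shader.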

Android: Creating blurred texture using blending in OpenGL 1.1

Has anyone had much success on Android with using blending to create blurred textures?
I'm thinking of the technique described here but the crux is to take a loaded texture and then apply a blur to it so that the bound texture itself is blurred.
"Inplace blurring" is something a CPU can do, but using a GPU, which generally does things in parallel, you must have another image buffer as render target.
Even with new shaders, reads and writes from/to the same buffer can lead to corruption because they can be done reordered. A similar issue is, that a gaussian blurring kernel, which can handle blurring in one pass, depends on neighbor fragments which could have been modified by the kernel applied at their fragment coordinate.
If you don't have the 'framebuffer_object' extension available for rendering into renderbuffers or even textures (additionally requires 'render_texture' extension),
you have to render into the back buffer as in the example and then do glReadPixels() to get the image back for uploading it to the source texture, or do a fast and direct glCopyTexImage2D() (OpenGL* 1.1 have this).
If the render target is too small, you can render multiple tiles.
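The glCopyTexImage2D() fallback would look roughly like this; a sketch assuming OpenGL ES 1.1 and that the blurred result was just drawn into the back buffer at the window origin (the helper name and sizes are placeholders):

    import android.opengl.GLES10;

    final class BackBufferCopy {
        /** Copy the rectangle (0,0)-(width,height) of the current framebuffer into level 0 of the texture. */
        static void copyIntoTexture(int textureId, int width, int height) {
            GLES10.glBindTexture(GLES10.GL_TEXTURE_2D, textureId);
            GLES10.glCopyTexImage2D(GLES10.GL_TEXTURE_2D, 0, GLES10.GL_RGBA, 0, 0, width, height, 0);
        }
    }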
