Android: Dual Pass Render To SurfaceTexture Using OpenGL

In order to perform a Gaussian blur on a SurfaceTexture, I am performing a dual pass render, meaning that I am passing the texture through one shader (horizontal blur) and then through another shader (vertical blur).
I understand the theory behind this: render the first texture to an FBO and the second one onto the SurfaceTexture itself.
There are some examples of this, but none of them seem applicable since a SurfaceTexture uses GL_TEXTURE_EXTERNAL_OES as its target in glBindTexture rather than GL_TEXTURE_2D. Therefore, in the call to glFramebufferTexture2D, GL_TEXTURE_2D cannot be used as the textarget, and I don't think GL_TEXTURE_EXTERNAL_OES can be used in this call.
Can anyone suggest a way to render a texture twice, with the final rendering going to a SurfaceTexture?
Important update: I am using a SurfaceTexture since this is a dynamic blur of a video that plays onto a surface.
Edit: This question was asked with some misunderstanding on my part. A SurfaceTexture is not a display element. Instead, it receives data from a Surface and is attached to a GL_TEXTURE_EXTERNAL_OES texture.
Thank you.

Rendering to a SurfaceTexture seems like an odd thing to do here. The point of SurfaceTexture is to take whatever is sent to the Surface and convert it into a GLES "external" texture. Since you're rendering with GLES, you can just use an FBO to render into a GL_TEXTURE_2D for your second pass.
SurfaceTexture is used when receiving frames from Camera or a video decoder because the source is usually YUV. The "external" texture format allows for a wider range of pixel formats, but constrains the uses of the texture. There's no value in rendering to a SurfaceTexture with GLES if your goal is to create a GLES texture.
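A minimal sketch of that approach with GLES 2.0 (android.opengl.GLES20), assuming a GL context is already current; width, height, externalTexId and the two draw*Blur() calls are placeholders for your own sizes and shader passes:

// Ordinary GL_TEXTURE_2D that will receive the first (horizontal) blur pass.
int[] ids = new int[2];
GLES20.glGenTextures(1, ids, 0);
int passTex = ids[0];
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, passTex);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height, 0,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);

// FBO with that texture as its color attachment. Note the attachment is a
// plain GL_TEXTURE_2D; the external texture is only ever sampled, never attached.
GLES20.glGenFramebuffers(1, ids, 1);
int fbo = ids[1];
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo);
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
        GLES20.GL_TEXTURE_2D, passTex, 0);
if (GLES20.glCheckFramebufferStatus(GLES20.GL_FRAMEBUFFER) != GLES20.GL_FRAMEBUFFER_COMPLETE) {
    throw new RuntimeException("FBO not complete");
}

// Pass 1: sample the SurfaceTexture's external texture, render into the FBO.
drawHorizontalBlur(externalTexId);   // placeholder for your first shader pass

// Pass 2: back to the window surface, sample the GL_TEXTURE_2D from pass 1.
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
drawVerticalBlur(passTex);           // placeholder for your second shader pass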

Related

Applying Fragment Shaders on MediaCodec-generated Surface intended for lockCanvas()

Most of the code for generating videos using MediaCodec I've seen so far either uses pure OpenGL or locks the Canvas from the MediaCodec-generated Surface and edits it. Can I do it with a mix of both?
For example, if I generate my frames in the latter way, is it possible to apply a Fragment Shader on the MediaCodec-generated Surface before or after editing the Surface's Canvas?
A Surface is the producer end of a producer-consumer pair. Only one producer can be connected at a time, so you can't use GLES and Canvas on the same Surface without disconnecting one and attaching the other.
Last I checked (Lollipop) there was no way to disconnect a Canvas. So switching back and forth is not possible.
What you would need to do is:
Create a Canvas backed by a Bitmap.
Render onto that Canvas.
Upload the rendered Bitmap to GLES with glTexImage2D().
Blit the bitmap with GLES, using your desired fragment shader.
The overhead associated with the upload is unavoidable, but remember that you can draw the Bitmap at a smaller resolution and let GLES scale it up. Because you're drawing on a Bitmap rather than a Surface, it's not necessary to redraw the entire screen for every update, so there is some opportunity to reduce Canvas rendering overhead.
All of the above holds regardless of what the Surface is connected to -- could be MediaCodec, SurfaceView, SurfaceTexture, etc.
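A rough sketch of steps 1 through 4, assuming a GL context is current; drawScene() and drawTexturedQuad() are placeholders for your own Canvas drawing and GLES blit (uses android.graphics.Bitmap, android.graphics.Canvas, android.opengl.GLES20 and android.opengl.GLUtils):

// Steps 1-2: Canvas backed by a Bitmap, rendered with ordinary Canvas calls.
Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
Canvas canvas = new Canvas(bitmap);
drawScene(canvas);                   // placeholder: your Canvas rendering

// Step 3: upload the Bitmap to a GLES texture. GLUtils.texImage2D() is the
// Bitmap-aware wrapper around glTexImage2D().
int[] tex = new int[1];
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
bitmap.recycle();

// Step 4: blit a quad with your fragment shader, scaling up if the Bitmap
// was drawn at a reduced resolution.
drawTexturedQuad(tex[0]);            // placeholder: your GLES blit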

Android: Attach SurfaceTexture to FrameBuffer

I am performing a video effect that requires dual pass rendering (the texture needs to be passed through multiple shader programs). Attaching a SurfaceTexture to the GL_TEXTURE_EXTERNAL_OES texture that is passed to its constructor does not seem to be a solution, since the displayed result is only rendered once.
One solution I am aware of is that the first rendering can be done to a FrameBuffer, and then the resulting texture can be rendered to where it actually gets displayed.
However, it seems that a SurfaceTexture must be attached to a GL_TEXTURE_EXTERNAL_OES texture, and not a FrameBuffer. I'm not sure if there is a workaround around this, or if there is a different approach I should take.
Thank you.
SurfaceTexture receives a buffer of graphics data and essentially wraps it up as an "external" texture. If it helps to see source code, start in updateTexImage(). Note the name of the class ("GLConsumer") is a more accurate description of the function than "SurfaceTexture": it consumes frames of graphic data and makes them available to GLES.
SurfaceTexture is expected to work with formats that OpenGL ES doesn't "naturally" work with, notably YUV, so it always uses external textures.
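For reference, a sketch of that consumer side, assuming a GL context is current on the rendering thread; the shader string only shows the samplerExternalOES declaration the first pass needs, with the blur kernel omitted (uses android.graphics.SurfaceTexture, android.opengl.GLES20 and android.opengl.GLES11Ext):

// The texture a SurfaceTexture attaches to must use the external target.
int[] tex = new int[1];
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);
GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
        GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
        GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

SurfaceTexture surfaceTexture = new SurfaceTexture(tex[0]);

// First-pass fragment shader: samples the external texture. The second pass
// reads the GL_TEXTURE_2D written by the first pass with a normal sampler2D.
String firstPassFragmentShader =
        "#extension GL_OES_EGL_image_external : require\n" +
        "precision mediump float;\n" +
        "varying vec2 vTexCoord;\n" +
        "uniform samplerExternalOES sTexture;\n" +
        "void main() {\n" +
        "    gl_FragColor = texture2D(sTexture, vTexCoord);\n" +   // blur kernel goes here
        "}\n";

// For each incoming frame, latch it before running the passes:
// surfaceTexture.updateTexImage();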

Efficient path for displaying customized video in Android

In Android, I need an efficient way of modifying the camera stream before displaying it on the screen. This post discusses a couple of ways of doing so and I was able to implement the first one:
Get frame buffer from onPreviewFrame
Convert frame to YUV
Modify frame
Convert modified frame to jpeg
Display the frame in an ImageView placed on top of the SurfaceView used for the preview
That worked but brought the 30 fps I was getting with a regular camera preview down to about 5 fps. Converting frames back and forth between different image formats is also power hungry, which I don't want.
Are there examples on how to get access to the raw frames directly and not have to go through so many conversions? Is using OpenGL the right way of doing this? It must be a very common thing to do but I can't find good examples.
Note: I'd rather avoid using the Camera2 APIs for backward compatibility sake.
The most efficient form of your CPU-based pipeline would look something like this:
Receive frames from the Camera on a Surface, rather than as byte[]. With Camera2 you can send the frames directly to an ImageReader; that will get you CPU access to the raw YUV data from the Camera without copying it or converting it. (I'm not sure how to rig this up with the old Camera API, as it wants either a SurfaceTexture or SurfaceHolder, and ImageReader doesn't provide those. You can run the frames through a SurfaceTexture and get RGB values from glReadPixels(), but I don't know if that'll buy you anything.)
Perform your modifications on the YUV data.
Convert the YUV data to RGB.
Either convert the RGB data into a Bitmap or a GLES texture. glTexImage2D will be more efficient, but OpenGL ES comes with a steep learning curve. Most of the pieces you need are in Grafika (e.g. the texture upload benchmark) if you decide to go that route.
Render the image. Depending on what you did in step #4, you'll either render the Bitmap through a Canvas on a custom View, or render the texture with GLES on a SurfaceView or TextureView.
I think the most significant speedup will be from eliminating the JPEG compression and uncompression, so you should probably start there. Convert the output of your frame editor to a Bitmap and just draw it on the Canvas of a TextureView or custom View, rather than converting to JPEG and using ImageView. If that doesn't get you the speedup you want, figure out what's slowing you down and work on that piece of the pipeline.
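A sketch of that last suggestion, assuming your frame editor already produces a Bitmap (frameBitmap is hypothetical) and the output view is a TextureView (uses android.graphics.Canvas, android.graphics.Rect and android.view.TextureView):

// Draw an already-converted RGB frame straight onto a TextureView, skipping
// the JPEG encode/decode and the ImageView entirely.
void drawFrame(TextureView textureView, Bitmap frameBitmap) {
    Canvas canvas = textureView.lockCanvas();
    if (canvas == null) {
        return;   // surface not ready yet
    }
    try {
        // Scale the (possibly smaller) frame up to the view size while drawing.
        Rect dst = new Rect(0, 0, canvas.getWidth(), canvas.getHeight());
        canvas.drawBitmap(frameBitmap, null, dst, null);
    } finally {
        textureView.unlockCanvasAndPost(canvas);
    }
}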
If you're restricted to the old camera API, then using a SurfaceTexture and doing your processing in a GPU shader may be most efficient.
This assumes whatever modifications you want to do can be expressed reasonably as a GL fragment shader, and that you're familiar enough with OpenGL to set up all the boilerplate necessary to render a single quadrilateral into a frame buffer, using the texture from a SurfaceTexture.
You can then read back the results with glReadPixels from the final rendering output, and save that as a JPEG.
Note that the shader will provide you with RGB data, not YUV, so if you really need YUV, you'll have to convert back to a YUV colorspace before processing.
If you can use camera2, as fadden says, ImageReader or Allocation (for Java/JNI-based processing or Renderscript, respectively) become options as well.
And if you're only using the JPEG to get to a Bitmap to place on an ImageView, and not because you want to save it, then again as fadden says you can skip the encode/decode step and draw to a view directly. For example, if you're using the Camera -> SurfaceTexture -> GL path, you can just use a GLSurfaceView as the output destination and render directly into it, if that's all you need to do with the data.
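If you do keep the GL path and still need a JPEG at the end (for saving), the glReadPixels readback mentioned above might look roughly like this; outputPath, width and height are placeholders, and width/height must match the framebuffer you rendered into (uses android.opengl.GLES20, android.graphics.Bitmap and java.nio.ByteBuffer):

// Copy the rendered RGBA frame off the GPU and compress it to JPEG.
ByteBuffer buf = ByteBuffer.allocateDirect(width * height * 4);
buf.order(ByteOrder.LITTLE_ENDIAN);
GLES20.glReadPixels(0, 0, width, height,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, buf);
buf.rewind();

Bitmap bmp = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
bmp.copyPixelsFromBuffer(buf);       // note: rows come back bottom-up vs. the screen
try (FileOutputStream out = new FileOutputStream(outputPath)) {
    bmp.compress(Bitmap.CompressFormat.JPEG, 90, out);
} catch (IOException e) {
    // handle the write failure as appropriate
}
bmp.recycle();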

Render Android MediaCodec output on two views for VR Headset compatibility

What I know so far is that I need to use a SurfaceTexture that can be rendered on two TextureViews simultaneously.
So it will be:
MediaCodec -> SurfaceTexture -> 2x TextureViews
But how do I get a SurfaceTexture programmatically to be used in the MediaCodec? As far as I know a new SurfaceTexture is created for every TextureView, so if I have two TextureViews in my activity, I will get two SurfaceTextures!? That's one too many... ;)
Or is there any other way to render the MediaCodec Output to a screen twice?
Do you actually require two TextureViews, or is that just for convenience?
You could, for example, have a single SurfaceView or TextureView that covers the entire screen, and then just render on the left and right sides with GLES. With the video output in a SurfaceTexture, you can render it however you like. The "texture from camera" activity in Grafika demonstrates various ways to manipulate the image from a video source.
If you really want two TextureViews, you can have them. Use a single EGL context for the SurfaceTexture and both TextureViews, and just switch between EGL surfaces with eglMakeCurrent() when it's time to render.
In any event, you should be creating your own SurfaceTexture to receive the video, not using one that comes from a TextureView -- see e.g. this bit of code.
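A rough sketch of the single-context approach with the EGL14 API; eglDisplay, eglConfig, eglContext, the two TextureViews and videoSurfaceTexture (the SurfaceTexture created for the MediaCodec output) are assumed to be set up already, as in Grafika's EglCore, and drawFrame() stands in for your own rendering of the external texture:

// One EGL context, two window surfaces created from the TextureViews.
EGLSurface leftSurface = EGL14.eglCreateWindowSurface(
        eglDisplay, eglConfig, leftTextureView.getSurfaceTexture(),
        new int[]{EGL14.EGL_NONE}, 0);
EGLSurface rightSurface = EGL14.eglCreateWindowSurface(
        eglDisplay, eglConfig, rightTextureView.getSurfaceTexture(),
        new int[]{EGL14.EGL_NONE}, 0);

// Per decoded frame: make a surface current, latch the new frame into the
// external texture, then draw it to each view in turn.
EGL14.eglMakeCurrent(eglDisplay, leftSurface, leftSurface, eglContext);
videoSurfaceTexture.updateTexImage();
drawFrame();                                   // placeholder: render the external texture
EGL14.eglSwapBuffers(eglDisplay, leftSurface);

EGL14.eglMakeCurrent(eglDisplay, rightSurface, rightSurface, eglContext);
drawFrame();
EGL14.eglSwapBuffers(eglDisplay, rightSurface);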

How to record webview activity screen using Android MediaCodec?

I have the task of recording user activity in a WebView; in other words, I need to create an mp4 video file while the user navigates in a WebView. Pretty challenging :)
I found that in Android 4.3 MediaCodec was expanded to include a way to provide input through a Surface (via the createInputSurface method). This allows input to come from camera preview or OpenGL ES rendering.
I even found an example of recording a game rendered with OpenGL: http://bigflake.com/mediacodec/
My question is: how could I record WebView activity? I assume that if I could draw the WebView content to an OpenGL texture, then everything would be fine. But I don't know how to do this.
Can anybody help me on this?
Why not try WebView.onDraw first, instead of using OpenGL? The latter approach may be more complicated, and not supported by all devices.
Once you are able to obtain the screenshots, you can create the video from the image sequence, a separate task where MediaCodec should help.
"I assume that If I could draw the webview content to opengl texture".
It is possible.
The SurfaceTexture is basically your entry point into the OpenGL layer. It is initialized with an OpenGL texture id, and performs all of its rendering onto that texture.
The steps to render your view to OpenGL:
1. Initialize an OpenGL texture.
2. Within an OpenGL context, construct a SurfaceTexture with the texture id. Use SurfaceTexture.setDefaultBufferSize(int width, int height) to make sure you have enough space on the texture for the view to render.
3. Create a Surface constructed with the above SurfaceTexture.
4. Within the View's onDraw, use the Canvas returned by Surface.lockCanvas to do the view drawing. You can obviously do this with any View, and not just WebView. Plus Canvas has a whole bunch of drawing methods, allowing you to do funky, funky things.
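A rough sketch of those four steps, assuming a GL context is current when the texture and SurfaceTexture are created; viewWidth, viewHeight and webView are placeholders, and the draw call is shown inline here rather than inside onDraw (uses android.graphics.SurfaceTexture, android.view.Surface, android.opengl.GLES20 and android.opengl.GLES11Ext):

int[] tex = new int[1];
GLES20.glGenTextures(1, tex, 0);                                  // step 1
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);

SurfaceTexture surfaceTexture = new SurfaceTexture(tex[0]);       // step 2
surfaceTexture.setDefaultBufferSize(viewWidth, viewHeight);

Surface surface = new Surface(surfaceTexture);                    // step 3

Canvas canvas = surface.lockCanvas(null);                         // step 4
webView.draw(canvas);   // draw the WebView (or any View) into the Surface
surface.unlockCanvasAndPost(canvas);

// Later, on the GL thread, latch the frame into the texture:
surfaceTexture.updateTexImage();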
The source code can be found here: https://github.com/ArtemBogush/AndroidViewToGLRendering and you can find some explanations here: http://www.felixjones.co.uk/neo%20website/Android_View/
