I am performing a video effect that requires dual-pass rendering (the texture needs to be passed through multiple shader programs). Attaching a SurfaceTexture to the GL_TEXTURE_EXTERNAL_OES texture whose id is passed to its constructor does not seem to be a solution, since the displayed result is only rendered once.
One solution I am aware of is that the first pass can be rendered to a framebuffer object, and the resulting texture can then be rendered to where it actually gets displayed.
However, it seems that a SurfaceTexture must be attached to a GL_TEXTURE_EXTERNAL_OES texture, and not a framebuffer. I'm not sure if there is a workaround for this, or if there is a different approach I should take.
Thank you.
SurfaceTexture receives a buffer of graphics data and essentially wraps it up as an "external" texture. If it helps to see source code, start in updateTexImage(). Note the name of the class ("GLConsumer") is a more accurate description of the function than "SurfaceTexture": it consumes frames of graphic data and makes them available to GLES.
SurfaceTexture is expected to work with formats that OpenGL ES doesn't "naturally" work with, notably YUV, so it always uses external textures.
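As a rough sketch of the consumer side (assuming an EGL context is current on the calling thread; the listener body and shader snippet are illustrative placeholders):

    import android.graphics.SurfaceTexture;
    import android.opengl.GLES11Ext;
    import android.opengl.GLES20;

    // Called on the thread that owns the EGL/GLES context.
    SurfaceTexture createConsumer() {
        // Generate a texture and bind it to the *external* target.
        int[] tex = new int[1];
        GLES20.glGenTextures(1, tex, 0);
        GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);

        // The SurfaceTexture ("GLConsumer") latches incoming buffers onto that texture.
        SurfaceTexture st = new SurfaceTexture(tex[0]);
        st.setOnFrameAvailableListener(ignored -> {
            // A producer (camera, video decoder, ...) queued a new frame.
            // Schedule a call to updateTexImage() on the GL thread.
        });
        return st;
    }

    // Later, on the GL thread, before drawing:
    //   st.updateTexImage();   // latch the newest frame onto the texture
    // and sample it in the fragment shader with:
    //   #extension GL_OES_EGL_image_external : require
    //   uniform samplerExternalOES sTexture;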
Related
I came across AHardwareBuffer in Android. I wanted to make use of AHardwareBuffer to store textures so that I can use them on different threads where I don't have an OpenGL context. Currently, I'm doing the following:
Generate a texture and bind it to GL_TEXTURE_2D.
Create an EGLClientBuffer and an EGLImageKHR from it. Attach the EGLImage as the texture target.
Generate an FBO and bind it to the texture using glFramebufferTexture2D.
To draw a texture (say tex), I'm rendering it onto the AHardwareBuffer using shaders.
However, I want a way to avoid re-rendering the texture onto the hardware buffer, and instead store the texture's data directly into the hardware buffer.
I was thinking of using glCopyTexImage2D for this. Is that fine, and would it work?
Also (a dumb question, but I can't get past it): if I attach my EGLImage, which comes from the hardware buffer, to GL_TEXTURE_2D and define the texture using glTexImage2D, wouldn't that store the texture data passed to glTexImage2D into the hardware buffer?
I solved this issue using glTexSubImage2D.
First, create an OpenGL texture and bind it to GL_TEXTURE_2D.
Then use glEGLImageTargetTexture2DOES to bind the texture to the EGLImageKHR created from the EGLClientBuffer. This plays the role of glTexImage2D in defining the texture's storage; any subsequent call to glTexImage2D will break the relationship between the texture and the EGLClientBuffer.
refer: https://www.khronos.org/registry/OpenGL/extensions/OES/OES_EGL_image_external.txt
glTexSubImage2D, however, preserves the relationship, so we can load data with this API and have it stored in the AHardwareBuffer.
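A rough sketch of the call ordering (hedged: creating the EGLImageKHR from the AHardwareBuffer happens in native code via eglGetNativeClientBufferANDROID and eglCreateImageKHR, so the eglImage Buffer below is assumed to be handed up from there):

    import android.opengl.GLES11Ext;
    import android.opengl.GLES20;
    import java.nio.Buffer;
    import java.nio.ByteBuffer;

    // tex: texture id, already generated.
    // eglImage: EGLImageKHR created from the AHardwareBuffer in native
    // code, passed up here wrapped as a Buffer (hypothetical plumbing).
    void bindAndUpload(int tex, Buffer eglImage,
                       int width, int height, ByteBuffer pixels) {
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex);

        // Ties the texture's storage to the EGLImage / AHardwareBuffer.
        GLES11Ext.glEGLImageTargetTexture2DOES(GLES20.GL_TEXTURE_2D, eglImage);

        // glTexImage2D here would allocate fresh storage and orphan the
        // EGLImage; glTexSubImage2D writes into the existing storage, so
        // the pixels end up in the AHardwareBuffer.
        GLES20.glTexSubImage2D(GLES20.GL_TEXTURE_2D, 0, 0, 0, width, height,
                GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixels);
    }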
PS: This is just one way; if there are other ways, I'll accept those answers.
In order to perform a Gaussian blur on a SurfaceTexture, I am performing a dual pass render, meaning that I am passing the texture through one shader (horizontal blur) and then through another shader (vertical blur).
I understand the theory behind this: render the first pass to an FBO and the second pass onto the SurfaceTexture itself.
There are some examples of this, but none of them seem applicable since a SurfaceTexture uses GL_TEXTURE_EXTERNAL_OES as its target in glBindTexture rather than GL_TEXTURE_2D. Therefore, in the call to glFramebufferTexture2D, GL_TEXTURE_2D cannot be used as the textarget, and I don't think GL_TEXTURE_EXTERNAL_OES can be used in this call.
Can anyone suggest a way to render a texture twice, with the final rendering going to a SurfaceTexture?
Important update: I am using a SurfaceTexture since this is a dynamic blur of a video that plays onto a surface.
Edit: This question was asked with some misunderstanding on my part. A SurfaceTexture is not a display element; it instead receives data from a Surface and is attached to a GL_TEXTURE_EXTERNAL_OES texture.
Thank you.
Rendering to a SurfaceTexture seems like an odd thing to do here. The point of SurfaceTexture is to take whatever is sent to the Surface and convert it into a GLES "external" texture. Since you're rendering with GLES, you can just use an FBO to render into a GL_TEXTURE_2D for your second pass.
SurfaceTexture is used when receiving frames from Camera or a video decoder because the source is usually YUV. The "external" texture format allows for a wider range of pixel formats, but constrains the uses of the texture. There's no value in rendering to a SurfaceTexture with GLES if your goal is to create a GLES texture.
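A minimal sketch of that FBO plumbing, assuming you already have the two blur programs compiled; drawQuad is a placeholder for your own quad-drawing code:

    import android.opengl.GLES11Ext;
    import android.opengl.GLES20;

    // Pass 1: external OES texture -> horizontal blur -> FBO-attached TEXTURE_2D.
    // Pass 2: that TEXTURE_2D -> vertical blur -> window surface.
    void blurTwoPass(int oesTex, int width, int height,
                     int horizontalProgram, int verticalProgram) {
        int[] ids = new int[2];

        // Ordinary GL_TEXTURE_2D that will receive the first pass.
        GLES20.glGenTextures(1, ids, 0);
        int fboTex = ids[0];
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, fboTex);
        GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width,
                height, 0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

        // FBO with that texture as its color attachment; this is where
        // glFramebufferTexture2D legitimately takes GL_TEXTURE_2D.
        GLES20.glGenFramebuffers(1, ids, 1);
        int fbo = ids[1];
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo);
        GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER,
                GLES20.GL_COLOR_ATTACHMENT0, GLES20.GL_TEXTURE_2D, fboTex, 0);

        // Pass 1: sample the external texture (samplerExternalOES in the shader).
        drawQuad(horizontalProgram, GLES11Ext.GL_TEXTURE_EXTERNAL_OES, oesTex);

        // Pass 2: back to the window surface; sample fboTex as a plain sampler2D.
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
        drawQuad(verticalProgram, GLES20.GL_TEXTURE_2D, fboTex);
    }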
In Android, I need an efficient way of modifying the camera stream before displaying it on the screen. This post discusses a couple of ways of doing so and I was able to implement the first one:
Get frame buffer from onPreviewFrame
Convert frame to YUV
Modify frame
Convert modified frame to jpeg
Display the frame in an ImageView placed over the SurfaceView used for the preview
That worked but brought down the 30 fps I was getting with a regular camera preview to 5 fps or so. Converting frames back and forth from different image spaces is also power hungry, which I don't want.
Are there examples on how to get access to the raw frames directly and not have to go through so many conversions? Is using OpenGL the right way of doing this? It must be a very common thing to do but I can't find good examples.
Note: I'd rather avoid using the Camera2 APIs for backward compatibility sake.
The most efficient form of your CPU-based pipeline would look something like this:
Receive frames from the Camera on a Surface, rather than as byte[]. With Camera2 you can send the frames directly to an ImageReader; that will get you CPU access to the raw YUV data from the Camera without copying it or converting it. (I'm not sure how to rig this up with the old Camera API, as it wants either a SurfaceTexture or SurfaceHolder, and ImageReader doesn't provide those. You can run the frames through a SurfaceTexture and get RGB values from glReadPixels(), but I don't know if that'll buy you anything.)
Perform your modifications on the YUV data.
Convert the YUV data to RGB.
Either convert the RGB data into a Bitmap or a GLES texture. glTexImage2D will be more efficient, but OpenGL ES comes with a steep learning curve. Most of the pieces you need are in Grafika (e.g. the texture upload benchmark) if you decide to go that route.
Render the image. Depending on what you did in step #4, you'll either render the Bitmap through a Canvas on a custom View, or render the texture with GLES on a SurfaceView or TextureView.
I think the most significant speedup will be from eliminating the JPEG compression and uncompression, so you should probably start there. Convert the output of your frame editor to a Bitmap and just draw it on the Canvas of a TextureView or custom View, rather than converting to JPEG and using ImageView. If that doesn't get you the speedup you want, figure out what's slowing you down and work on that piece of the pipeline.
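For instance, the TextureView variant could look roughly like this, with editedBitmap standing in for your frame editor's output:

    import android.graphics.Bitmap;
    import android.graphics.Canvas;
    import android.view.TextureView;

    // Draw the edited frame straight onto a TextureView, skipping the
    // JPEG encode/decode and the ImageView entirely.
    void showFrame(TextureView textureView, Bitmap editedBitmap) {
        Canvas canvas = textureView.lockCanvas();
        if (canvas == null) {
            return;  // the TextureView's surface isn't available yet
        }
        try {
            canvas.drawBitmap(editedBitmap, 0f, 0f, null);
        } finally {
            textureView.unlockCanvasAndPost(canvas);
        }
    }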
If you're restricted to the old camera API, then using a SurfaceTexture and doing your processing in a GPU shader may be most efficient.
This assumes whatever modifications you want to do can be expressed reasonably as a GL fragment shader, and that you're familiar enough with OpenGL to set up all the boilerplate necessary to render a single quadrilateral into a frame buffer, using the texture from a SurfaceTexture.
You can then read back the results with glReadPixels from the final rendering output, and save that as a JPEG.
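That readback might look roughly like this (the output path is just an example):

    import android.graphics.Bitmap;
    import android.opengl.GLES20;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;

    // Read RGBA pixels back from the current framebuffer and compress to
    // JPEG. Note GL's origin is bottom-left, so the saved image may need
    // a vertical flip.
    void saveFrameAsJpeg(int width, int height, String path) throws IOException {
        ByteBuffer pixels = ByteBuffer.allocateDirect(width * height * 4)
                .order(ByteOrder.nativeOrder());
        GLES20.glReadPixels(0, 0, width, height,
                GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixels);
        pixels.rewind();

        Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
        bitmap.copyPixelsFromBuffer(pixels);
        try (FileOutputStream out = new FileOutputStream(path)) {
            bitmap.compress(Bitmap.CompressFormat.JPEG, 90, out);
        }
        bitmap.recycle();
    }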
Note that the shader will provide you with RGB data, not YUV, so if you really need YUV, you'll have to convert back to a YUV colorspace before processing.
If you can use camera2, as fadden says, ImageReader or Allocation (for Java/JNI-based processing or Renderscript, respectively) become options as well.
And if you're only using the JPEG to get a Bitmap to place in an ImageView, and not because you want to save it, then again, as fadden says, you can skip the encode/decode step and draw to a view directly. For example, if using the Camera->SurfaceTexture->GL path, you can just use a GLSurfaceView as the output destination and render directly into it, if that's all you need to do with the data.
I have the task of recording user activity in a WebView; in other words, I need to create an mp4 video file while the user navigates in a WebView. Pretty challenging :)
I found that Android 4.3 introduced a way to provide input to MediaCodec through a Surface (via the createInputSurface method). This allows input to come from camera preview or OpenGL ES rendering.
I even found an example of recording a game written in OpenGL: http://bigflake.com/mediacodec/
My question is: how could I record WebView activity? I assume that if I could draw the WebView content to an OpenGL texture, then everything would be fine, but I don't know how to do this.
Can anybody help me on this?
Why not try WebView.onDraw first, instead of using OpenGL? The latter approach may be more complicated, and not supported by all devices.
Once you are able to obtain the screenshots, you can create the video from the image sequence, a separate task where MediaCodec should help.
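A minimal sketch of such an encoder setup (the format values are placeholders; the output-draining loop is covered by the bigflake examples):

    import android.media.MediaCodec;
    import android.media.MediaCodecInfo;
    import android.media.MediaFormat;
    import android.view.Surface;
    import java.io.IOException;

    // Configure an H.264 encoder whose input comes from a Surface (API 18+).
    Surface createEncoderInputSurface(int width, int height) throws IOException {
        MediaFormat format = MediaFormat.createVideoFormat("video/avc", width, height);
        format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
        format.setInteger(MediaFormat.KEY_BIT_RATE, 2_000_000);
        format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
        format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 5);

        MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
        encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        // Everything rendered into this Surface (Canvas or GLES) becomes
        // encoder input; drain the encoded output on another thread.
        Surface inputSurface = encoder.createInputSurface();
        encoder.start();
        return inputSurface;
    }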
"I assume that If I could draw the webview content to opengl texture".
It is possible.
The SurfaceTexture is basically your entry point into the OpenGL layer. It is initialized with an OpenGL texture id, and performs all of its rendering onto that texture.
The steps to render your view to OpenGL:
1. Initialize an OpenGL texture.
2. Within an OpenGL context, construct a SurfaceTexture with the texture id. Use SurfaceTexture.setDefaultBufferSize(int width, int height) to make sure you have enough space on the texture for the view to render.
3. Create a Surface constructed with the above SurfaceTexture.
4. Within the View's onDraw, use the Canvas returned by Surface.lockCanvas to do the view drawing. You can obviously do this with any View, and not just WebView. Plus Canvas has a whole bunch of drawing methods, allowing you to do funky, funky things.
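Sketched in code, with texId being the texture from step 1, created on the GL thread:

    import android.graphics.Canvas;
    import android.graphics.SurfaceTexture;
    import android.view.Surface;
    import android.view.View;

    // Steps 2 and 3: wrap the GL texture in a SurfaceTexture and a Surface.
    Surface createViewSurface(int texId, int width, int height) {
        SurfaceTexture surfaceTexture = new SurfaceTexture(texId);
        surfaceTexture.setDefaultBufferSize(width, height);
        return new Surface(surfaceTexture);
    }

    // Step 4: route the view's drawing into the surface.
    void drawViewToSurface(View view, Surface surface) {
        Canvas canvas = surface.lockCanvas(null);
        view.draw(canvas);                 // works for WebView or any other View
        surface.unlockCanvasAndPost(canvas);
        // Afterwards, on the GL thread, call surfaceTexture.updateTexImage()
        // to latch the new frame onto texId.
    }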
The source code can be found here: https://github.com/ArtemBogush/AndroidViewToGLRendering and some explanations here: http://www.felixjones.co.uk/neo%20website/Android_View/
Has anyone had much success on Android with creating blurred textures using blending?
I'm thinking of the technique described here, but the crux is to take a loaded texture and then apply a blur to it so that the bound texture itself is blurred.
"Inplace blurring" is something a CPU can do, but using a GPU, which generally does things in parallel, you must have another image buffer as render target.
Even with newer shaders, reads and writes from/to the same buffer can lead to corruption because they may be reordered. A related issue is that a Gaussian blur kernel which handles blurring in a single pass depends on neighboring fragments, and those could already have been modified by the kernel applied at their own fragment coordinates.
If you don't have the 'framebuffer_object' extension available for rendering into renderbuffers or even textures (the latter additionally requires the 'render_texture' extension), you have to render into the back buffer as in the example, then either do glReadPixels() to get the image back for uploading to the source texture, or do a fast and direct glCopyTexImage2D() (OpenGL ES 1.1 already has this).
If the render target is too small, you can render multiple tiles.
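A rough sketch of that back-buffer round trip (drawFullScreenQuad is a placeholder for your own drawing code):

    import android.opengl.GLES20;

    // Blur without FBOs: draw the blurred quad into the (not yet swapped)
    // back buffer, then copy the result straight back into the source texture.
    void blurViaBackBuffer(int blurProgram, int srcTex,
                           int texWidth, int texHeight) {
        GLES20.glViewport(0, 0, texWidth, texHeight);
        drawFullScreenQuad(blurProgram, srcTex);   // blur shader samples srcTex

        // Replace the texture's contents with the back-buffer pixels.
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, srcTex);
        GLES20.glCopyTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA,
                0, 0, texWidth, texHeight, 0);
        // Swap buffers afterwards only if you also want this pass on screen.
    }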