Applying Fragment Shaders on MediaCodec-generated Surface intended for lockCanvas() - android

Most of the code I've seen so far for generating videos with MediaCodec either uses pure OpenGL or locks the Canvas of the MediaCodec-generated Surface and edits it. Can I mix the two?
For example, if I generate my frames the latter way, is it possible to apply a fragment shader to the MediaCodec-generated Surface before or after editing the Surface's Canvas?

A Surface is the producer end of a producer-consumer pair. Only one producer can be connected at a time, so you can't use GLES and Canvas on the same Surface without disconnecting one and attaching the other.
Last I checked (Lollipop) there was no way to disconnect a Canvas. So switching back and forth is not possible.
What you would need to do is:
Create a Canvas backed by a Bitmap.
Render onto that Canvas.
Upload the rendered Bitmap to GLES with glTexImage2D().
Render that texture with GLES, using your desired fragment shader.
The overhead associated with the upload is unavoidable, but remember that you can draw the Bitmap at a smaller resolution and let GLES scale it up. Because you're drawing on a Bitmap rather than a Surface, it's not necessary to redraw the entire screen for every update, so there is some opportunity to reduce Canvas rendering overhead.
All of the above holds regardless of what the Surface is connected to -- could be MediaCodec, SurfaceView, SurfaceTexture, etc.
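For illustration, here is a minimal sketch of the Canvas-then-upload step, assuming a GLES 2.0 context is already current on the thread and textureId was created earlier with glGenTextures(); the frame size and the circle are just placeholder drawing:

    import android.graphics.Bitmap;
    import android.graphics.Canvas;
    import android.graphics.Color;
    import android.graphics.Paint;
    import android.opengl.GLES20;
    import android.opengl.GLUtils;

    // Draw one frame with Canvas, then upload it as a GL_TEXTURE_2D.
    static void uploadCanvasFrame(int textureId) {
        Bitmap bitmap = Bitmap.createBitmap(640, 360, Bitmap.Config.ARGB_8888);
        Canvas canvas = new Canvas(bitmap);
        canvas.drawColor(Color.BLACK);
        Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
        paint.setColor(Color.WHITE);
        canvas.drawCircle(320, 180, 80, paint);   // placeholder Canvas drawing

        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
        GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
        bitmap.recycle();
        // ...then draw a full-screen quad with your fragment shader and call eglSwapBuffers().
    }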

Related

How to crop Camera2 preview without overlay object?

I want to crop the camera preview in Android using the camera2 API. I am using android-Camera2Basic, the official example.
This is the result I am getting, and this is the result I actually want to achieve.
I don't want to overlay the object on the TextureView; I want the preview to actually be that size, without stretching.
You'll need to edit the image yourself before drawing it, since the default behavior of a TextureView is to just draw the whole image sent to its Surface.
And adjusting the TextureView's transform matrix will only scale or move the whole image, not crop it.
Doing this requires quite a bit of boilerplate, since you need to re-implement most of what a TextureView does. For best efficiency you likely want to implement the cropping in OpenGL ES, so you'll need a GLSurfaceView; then you use that GLSurfaceView's OpenGL context to create a SurfaceTexture object, and then, using that texture, draw a quadrilateral whose fragment shader implements the cropping behavior you want.
That's fairly basic EGL, but it's quite a bit if you've never done any OpenGL programming before. There's a small test program within the Android OS tree that uses this kind of path: https://android.googlesource.com/platform/frameworks/native/+/master/opengl/tests/gl2_cameraeye/#
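As a rough sketch of what the shaders for that quadrilateral could look like (the crop rectangle values here are hypothetical placeholders; the incoming [0,1] texture coordinates are remapped so only the desired region of the camera's external texture is sampled):

    // Vertex shader: pass-through for a full-screen quad.
    static final String VERTEX_SHADER =
            "attribute vec4 aPosition;\n" +
            "attribute vec2 aTexCoord;\n" +
            "varying vec2 vTexCoord;\n" +
            "void main() {\n" +
            "    gl_Position = aPosition;\n" +
            "    vTexCoord = aTexCoord;\n" +
            "}\n";

    // Fragment shader: sample the camera's external texture, remapping the
    // coordinates into a crop rectangle (origin/size values are placeholders).
    static final String CROP_FRAGMENT_SHADER =
            "#extension GL_OES_EGL_image_external : require\n" +
            "precision mediump float;\n" +
            "uniform samplerExternalOES uCameraTex;\n" +
            "varying vec2 vTexCoord;\n" +
            "void main() {\n" +
            "    vec2 cropOrigin = vec2(0.0, 0.25);\n" +
            "    vec2 cropSize   = vec2(1.0, 0.5);\n" +
            "    gl_FragColor = texture2D(uCameraTex, cropOrigin + vTexCoord * cropSize);\n" +
            "}\n";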

Android : Single SurfaceView vs multiple SurfaceView

I am trying to draw a 3D object onto camera preview frames (Android). Should I use two surface views: one for the camera preview and a GLSurfaceView for drawing? The views would need to stay synchronized, and the display frame rate should be good enough for a good user experience. Most of the tutorials talk about using multiple views. The alternative idea is to get a texture from the camera preview and merge it with the 3D object so as to produce the final 2D raster image.
Which method would be better for performance?
P.S.: I will be using the Java APIs for OpenGL ES 2.0.
Since two surface views increase the number of API calls per frame and require transparency, they will be slower.
You don't need two surface views for your purpose.
Disable depth writes.
Render the camera preview on a 2D quad filling the screen.
Enable depth writes.
Render 3D object.
This will make sure your 3D objects are rendered over the camera preview.
You can also achieve this with two surface views and transparency, but it will be slower.
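A sketch of that per-frame ordering with the GLES 2.0 Java bindings, inside a GLSurfaceView.Renderer; drawCameraQuad() and drawModel() are placeholders for your own drawing code:

    import android.opengl.GLES20;
    import javax.microedition.khronos.opengles.GL10;

    // One frame: camera preview behind, 3D object in front, single GLSurfaceView.
    @Override
    public void onDrawFrame(GL10 unused) {
        GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);

        GLES20.glDepthMask(false);    // disable depth writes for the background
        drawCameraQuad();             // full-screen quad textured with the camera frame

        GLES20.glDepthMask(true);     // re-enable depth writes
        GLES20.glEnable(GLES20.GL_DEPTH_TEST);
        drawModel();                  // the 3D object renders on top of the preview
    }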

Android, Buffers in textureView

I want to use a TextureView to draw math curves (a lot of them), whose data source is an external device.
Every region I draw must add lines to what was drawn previously.
Since TextureView renders using three buffers, I would like the buffer I am drawing into at each moment to take as its starting contents the buffer I have just released.
That is, I want the contents of the buffer I release to fill the next buffer before I draw on it.
Another possibility would be to force the use of only one buffer.
I see it is possible to use getBitmap() and setBitmap(), but I would like to do it without loading all of that into memory.
Does anyone know if this is possible?
I would recommend two things:
Assuming you're rendering with Canvas, don't use TextureView. Use a custom View instead. TextureView doesn't really give you an advantage, and you lose hardware-accelerated rendering.
Render to an off-screen Bitmap, then blit the Bitmap to the View. Offscreen rendering is not hardware-accelerated, but you avoid re-drawing the entire scene. You will have to experiment to determine which is most efficient.
If you're drawing with OpenGL ES, just draw everything to the TextureView on every frame (unless you want to play with FBOs).
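A sketch of the custom-View approach, keeping previously drawn curves in a persistent off-screen Bitmap and blitting it in onDraw(); the addSegment() signature is hypothetical and stands in for however your external device delivers data:

    import android.content.Context;
    import android.graphics.Bitmap;
    import android.graphics.Canvas;
    import android.graphics.Paint;
    import android.view.View;

    // Accumulates curve segments in an off-screen Bitmap so each update only
    // draws the new lines instead of re-rendering the whole scene.
    public class CurveView extends View {
        private Bitmap mBuffer;
        private Canvas mBufferCanvas;
        private final Paint mPaint = new Paint(Paint.ANTI_ALIAS_FLAG);

        public CurveView(Context context) {
            super(context);
        }

        @Override
        protected void onSizeChanged(int w, int h, int oldw, int oldh) {
            mBuffer = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
            mBufferCanvas = new Canvas(mBuffer);
        }

        // Called when new data arrives from the external device (placeholder).
        public void addSegment(float x0, float y0, float x1, float y1) {
            if (mBufferCanvas != null) {
                mBufferCanvas.drawLine(x0, y0, x1, y1, mPaint);
                invalidate();
            }
        }

        @Override
        protected void onDraw(Canvas canvas) {
            if (mBuffer != null) {
                canvas.drawBitmap(mBuffer, 0, 0, null);   // hardware-accelerated blit
            }
        }
    }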
You can try Surface.lockCanvas(null):
use TextureView.getSurfaceTexture() to get a SurfaceTexture
use new Surface(surfaceTexture) to create a Surface
use Surface.lockCanvas() or Surface.lockHardwareCanvas() to get a Canvas
Then you can do a lot of drawing on the TextureView with this Canvas.
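A minimal sketch of that sequence, assuming the TextureView's SurfaceTexture is already available (i.e. onSurfaceTextureAvailable() has fired):

    import android.graphics.Canvas;
    import android.graphics.Color;
    import android.graphics.SurfaceTexture;
    import android.view.Surface;
    import android.view.TextureView;

    // Draw one frame onto a TextureView via Surface.lockHardwareCanvas().
    void drawFrame(TextureView textureView) {
        SurfaceTexture surfaceTexture = textureView.getSurfaceTexture();
        if (surfaceTexture == null) return;            // view not ready yet

        Surface surface = new Surface(surfaceTexture);
        Canvas canvas = surface.lockHardwareCanvas();  // or surface.lockCanvas(null) pre-API 23
        try {
            canvas.drawColor(Color.BLACK);
            // ...draw the new curve segments here...
        } finally {
            surface.unlockCanvasAndPost(canvas);
            surface.release();
        }
    }

In practice you would create the Surface once, when the SurfaceTexture becomes available, and reuse it rather than recreating and releasing it every frame.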

Android: Dual Pass Render To SurfaceTexture Using OpenGL

In order to perform a Gaussian blur on a SurfaceTexture, I am performing a dual pass render, meaning that I am passing the texture through one shader (horizontal blur) and then through another shader (vertical blur).
I understand the theory behind this: render the first texture to an FBO and the second one onto the SurfaceTexture itself.
There are some examples of this, but none of them seem applicable since a SurfaceTexture uses GL_TEXTURE_EXTERNAL_OES as its target in glBindTexture rather than GL_TEXTURE_2D. Therefore, in the call to glFramebufferTexture2D, GL_TEXTURE_2D cannot be used as the textarget, and I don't think GL_TEXTURE_EXTERNAL_OES can be used in this call.
Can anyone suggest a way to render a texture twice, with the final rendering going to a SurfaceTexture?
Important update: I am using a SurfaceTexture since this is a dynamic blur of a video that plays onto a surface.
Edit: This question was asked with some misunderstanding on my part. A SurfaceTexture is not a display element; it receives data sent to a Surface and makes it available as a GL_TEXTURE_EXTERNAL_OES texture.
Thank you.
Rendering to a SurfaceTexture seems like an odd thing to do here. The point of SurfaceTexture is to take whatever is sent to the Surface and convert it into a GLES "external" texture. Since you're rendering with GLES, you can just use an FBO to render into a GL_TEXTURE_2D for your second pass.
SurfaceTexture is used when receiving frames from Camera or a video decoder because the source is usually YUV. The "external" texture format allows for a wider range of pixel formats, but constrains the uses of the texture. There's no value in rendering to a SurfaceTexture with GLES if your goal is to create a GLES texture.
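For illustration, a sketch of the FBO setup for the intermediate pass: pass one samples the external video texture and writes into this GL_TEXTURE_2D, pass two samples it as an ordinary 2D texture. It assumes a GLES 2.0 context is current.

    import android.opengl.GLES20;

    // Create a GL_TEXTURE_2D and attach it to a framebuffer for offscreen rendering.
    // Returns {textureId, framebufferId}.
    static int[] createOffscreenTarget(int width, int height) {
        int[] tex = new int[1];
        GLES20.glGenTextures(1, tex, 0);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
        GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height, 0,
                GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

        int[] fbo = new int[1];
        GLES20.glGenFramebuffers(1, fbo, 0);
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);
        GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
                GLES20.GL_TEXTURE_2D, tex[0], 0);
        if (GLES20.glCheckFramebufferStatus(GLES20.GL_FRAMEBUFFER)
                != GLES20.GL_FRAMEBUFFER_COMPLETE) {
            throw new RuntimeException("Framebuffer not complete");
        }
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);   // back to the default target
        return new int[] { tex[0], fbo[0] };
    }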

Android MediaMuxer with openGL

I am trying to generate a movie using MediaMuxer. The Grafika example is an excellent effort, but when I try to extend it, I run into some problems.
I am trying to draw some basic shapes like squares, triangles, and lines into the movie. My OpenGL code works well when I draw the shapes on screen, but I couldn't draw the same shapes into the video.
I also have questions about setting up the OpenGL matrix, program, shader, and viewport. Normally there are methods like onSurfaceCreated and onSurfaceChanged where I can set these up. What is the best way to do it in GeneratedMovie?
Any examples of writing more complicated shapes into a video would be welcome.
The complexity of what you're drawing shouldn't matter. You draw whatever you're going to draw, then call eglSwapBuffers() to submit the buffer. Whether you draw one flat-shaded triangle or 100K super-duper-shaded triangles, you're still just submitting a buffer of data to the video encoder or the surface compositor.
There is no equivalent to SurfaceView's surfaceCreated() and surfaceChanged(), because the Surface is created by MediaCodec#createInputSurface() (so you know when it's created), and the Surface does not change.
The code that uses GeneratedMovie does some fairly trivial rendering (set scissor rect, call clear). The code in RecordFBOActivity is what you should probably be looking at -- it has a bouncing rect and a spinning triangle, and demonstrates three different ways to deal with the fact that you have to render twice.
(The code in HardwareScalerActivity uses the same GLES routines and demonstrates texturing, but it doesn't do recording.)
The key thing is to manage your EGLContext and EGLSurfaces carefully. The various bits of GLES state are held in the EGLContext, which can be current on only one thread at a time. It's easiest to use a single context and set up a separate EGLSurface for each Surface, but you can also create separate contexts (with or without sharing) and switch between them.
Some additional background material is available here.
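A sketch of the single-context, two-EGLSurface approach with EGL14, rendering each frame once to the display and once to the encoder; the EGL objects are assumed to have been created during setup, and drawScene() is a placeholder for your own GLES drawing:

    import android.opengl.EGL14;
    import android.opengl.EGLContext;
    import android.opengl.EGLDisplay;
    import android.opengl.EGLExt;
    import android.opengl.EGLSurface;

    // Render the same frame twice: once to the on-screen surface, once to the
    // EGLSurface wrapping MediaCodec's input surface. One context, so programs
    // and textures are shared between the two passes.
    void renderFrame(EGLDisplay eglDisplay, EGLContext eglContext,
                     EGLSurface displaySurface, EGLSurface encoderSurface,
                     long presentationTimeNs) {
        // Pass 1: on-screen.
        EGL14.eglMakeCurrent(eglDisplay, displaySurface, displaySurface, eglContext);
        drawScene();                                   // your GLES drawing code
        EGL14.eglSwapBuffers(eglDisplay, displaySurface);

        // Pass 2: into the encoder.
        EGL14.eglMakeCurrent(eglDisplay, encoderSurface, encoderSurface, eglContext);
        drawScene();                                   // render the frame again
        EGLExt.eglPresentationTimeANDROID(eglDisplay, encoderSurface, presentationTimeNs);
        EGL14.eglSwapBuffers(eglDisplay, encoderSurface);
    }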
