I want to use a TextureView to draw math curves (a lot of them) whose data comes from an external device.
Each region I draw must add lines to what was drawn previously.
Since TextureView renders using three buffers, I would like the buffer I draw into at any given moment to start with the contents of the buffer I have just released.
That is, I want the contents of the buffer I release to fill the next buffer before I draw on it.
Another possibility would be to force the use of only one buffer.
I see it is possible to use getBitmap() and setBitmap(), but I would like to do this without loading a bitmap into memory.
Does anyone know whether this is possible?
I would recommend two things:
Assuming you're rendering with Canvas, don't use TextureView. Use a custom View instead. TextureView doesn't really give you an advantage, and you lose hardware-accelerated rendering.
Render to an off-screen Bitmap, then blit the Bitmap to the View. Offscreen rendering is not hardware-accelerated, but you avoid re-drawing the entire scene. You will have to experiment to determine which is most efficient.
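For example, a minimal sketch of the second approach, accumulating curves on an off-screen Bitmap inside a custom View (the class and method names here are illustrative):

    import android.content.Context;
    import android.graphics.Bitmap;
    import android.graphics.Canvas;
    import android.graphics.Paint;
    import android.util.AttributeSet;
    import android.view.View;

    public class CurveView extends View {
        private Bitmap mBuffer;           // accumulates everything drawn so far
        private Canvas mBufferCanvas;
        private final Paint mPaint = new Paint(Paint.ANTI_ALIAS_FLAG);

        public CurveView(Context context, AttributeSet attrs) {
            super(context, attrs);
        }

        @Override
        protected void onSizeChanged(int w, int h, int oldw, int oldh) {
            if (w <= 0 || h <= 0) return;
            // Re-create the off-screen buffer when the View is (re)sized.
            mBuffer = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
            mBufferCanvas = new Canvas(mBuffer);
        }

        // Append one segment: only the new line is rendered off-screen;
        // the previously drawn curves are never re-drawn.
        public void addSegment(float x0, float y0, float x1, float y1) {
            mBufferCanvas.drawLine(x0, y0, x1, y1, mPaint);
            invalidate();
        }

        @Override
        protected void onDraw(Canvas canvas) {
            if (mBuffer != null) {
                canvas.drawBitmap(mBuffer, 0, 0, null);  // hardware-accelerated blit
            }
        }
    }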
If you're drawing with OpenGL ES, just draw everything to the TextureView on every frame (unless you want to play with FBOs).
You can try Surface's lockCanvas(null):
Use TextureView.getSurfaceTexture() to get a SurfaceTexture.
Use new Surface(surfaceTexture) to create a Surface.
Use Surface.lockCanvas() or Surface.lockHardwareCanvas() to get a Canvas.
Then you can do a lot of drawing on the TextureView with this Canvas.
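A minimal sketch of those steps, assuming the SurfaceTexture is already available (e.g. delivered through onSurfaceTextureAvailable()); lockHardwareCanvas() requires API 23+:

    void drawFrame(TextureView textureView) {
        SurfaceTexture surfaceTexture = textureView.getSurfaceTexture();
        if (surfaceTexture == null) return;        // TextureView not ready yet
        Surface surface = new Surface(surfaceTexture);
        Canvas canvas = surface.lockCanvas(null);  // or surface.lockHardwareCanvas()
        try {
            Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
            paint.setColor(Color.GREEN);
            canvas.drawLine(0, 0, canvas.getWidth(), canvas.getHeight(), paint);
        } finally {
            surface.unlockCanvasAndPost(canvas);   // submit the frame
        }
        surface.release();
    }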
I want to crop the camera preview in Android using the camera2 API. I am using android-Camera2Basic, the official example.
This is the result I am getting: [screenshot omitted]
And the result I want to achieve is this: [screenshot omitted]
I don't want to overlay an object on the TextureView; I want the preview itself to actually be of this size, without stretching.
You'll need to edit the image yourself before drawing it, since the default behavior of a TextureView is to just draw the whole image sent to its Surface.
And adjusting the TextureView's transform matrix will only scale or move the whole image, not crop it.
Doing this requires quite a bit of boilerplate, since you need to re-implement most of a TextureView. For best efficiency, you likely want to implement the cropping in OpenGL ES; so you'll need a GLSurfaceView, and then you need to use the OpenGL context of that GLSurfaceView to create a SurfaceTexture object, and then using that texture, draw a quadrilateral with the cropping behavior you want in the fragment shader.
That's fairly basic EGL, but it's quite a bit if you've never done any OpenGL programming before. There's a small test program within the Android OS tree that uses this kind of path: https://android.googlesource.com/platform/frameworks/native/+/master/opengl/tests/gl2_cameraeye/#
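As a rough illustration of just the shader piece (the quad geometry, EGL setup, and SurfaceTexture plumbing are omitted, and the varying name and crop factor are made up for this sketch), a fragment shader that samples only a centered sub-rectangle of the external camera texture:

    // Hypothetical fragment shader, held as a Java string constant the way the
    // Grafika sources do it. Only the middle portion of the camera image is
    // sampled, so just that region fills the quad -- a crop done on the GPU.
    private static final String CROP_FRAGMENT_SHADER =
            "#extension GL_OES_EGL_image_external : require\n" +
            "precision mediump float;\n" +
            "varying vec2 vTextureCoord;\n" +
            "uniform samplerExternalOES sTexture;\n" +
            "const float CROP = 0.5;\n" +   // keep the centered 50% in each axis
            "void main() {\n" +
            "    vec2 uv = vTextureCoord * CROP + vec2((1.0 - CROP) * 0.5);\n" +
            "    gl_FragColor = texture2D(sTexture, uv);\n" +
            "}\n";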
Most of the code for generating videos with MediaCodec that I've seen so far either uses pure OpenGL or locks the Canvas of the MediaCodec-generated Surface and edits it. Can I do it with a mix of both?
For example, if I generate my frames in the latter way, is it possible to apply a Fragment Shader on the MediaCodec-generated Surface before or after editing the Surface's Canvas?
A Surface is the producer end of a producer-consumer pair. Only one producer can be connected at a time, so you can't use GLES and Canvas on the same Surface without disconnecting one and attaching the other.
Last I checked (Lollipop) there was no way to disconnect a Canvas. So switching back and forth is not possible.
What you would need to do is:
Create a Canvas backed by a Bitmap.
Render onto that Canvas.
Upload the rendered Bitmap to GLES with glTexImage2D().
Blit the bitmap with GLES, using your desired fragment shader.
The overhead associated with the upload is unavoidable, but remember that you can draw the Bitmap at a smaller resolution and let GLES scale it up. Because you're drawing on a Bitmap rather than a Surface, it's not necessary to redraw the entire screen for every update, so there is some opportunity to reduce Canvas rendering overhead.
All of the above holds regardless of what the Surface is connected to -- could be MediaCodec, SurfaceView, SurfaceTexture, etc.
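A minimal sketch of steps 1-3 above, assuming a GLES texture (textureId) has already been created and configured:

    // 1. Create a Canvas backed by a Bitmap.
    Bitmap bitmap = Bitmap.createBitmap(640, 360, Bitmap.Config.ARGB_8888);
    Canvas canvas = new Canvas(bitmap);

    // 2. Render onto that Canvas with ordinary 2D calls.
    canvas.drawColor(Color.BLACK);
    Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
    paint.setColor(Color.WHITE);
    canvas.drawCircle(320, 180, 50, paint);

    // 3. Upload the rendered Bitmap into the bound GLES texture.
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
    GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);

    // 4. Blit: draw a textured quad with the desired fragment shader (not shown).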
Working with a SurfaceView to make a 2D game, I do not know how to put a background image in it efficiently.
I want to avoid drawing it each frame because it is a static image. Any help?
The SurfaceView's Surface is a single layer. You can specify a dirty rect to reduce your draw area when you lock the Canvas, but you have to draw everything in the rect every frame -- background and all.
You could use a pair of SurfaceViews, one at the default Z-ordering, one at "media" depth. Use SurfaceView.setZOrderMediaOverlay() to do that. The layers will be composited by the system before being rendered.
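A minimal sketch of that arrangement (the view ids are hypothetical):

    SurfaceView backgroundView = (SurfaceView) findViewById(R.id.background_surface);
    SurfaceView gameView = (SurfaceView) findViewById(R.id.game_surface);

    // Lift the game surface above the background surface (still behind the window).
    gameView.setZOrderMediaOverlay(true);
    // The game layer needs an alpha channel so the background shows through it.
    gameView.getHolder().setFormat(PixelFormat.TRANSLUCENT);

The background is then drawn once, when its surface is created, and only the game layer is redrawn every frame.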
If you were using OpenGL ES rather than Canvas, it'd generally be more efficient to just redraw the background from a texture each time. (If HWC runs out of overlays, that's essentially what SurfaceFlinger is doing anyway.)
See the "multi-surface test" in Grafika for an example of using multiple overlapping surfaces.
I am trying to generate a movie using MediaMuxer. The Grafika example is an excellent effort, but when I try to extend it, I have some problems.
I am trying to draw some basic shapes like squares, triangles, and lines into the movie. My OpenGL code works well when I draw the shapes onto the screen, but I couldn't draw the same shapes into the video.
I also have questions about setting up the OpenGL matrix, program, shaders, and viewport. Normally there are methods like onSurfaceCreated and onSurfaceChanged where I can set these things up. What is the best way to do it in GeneratedMovie?
Examples from anybody who has written more complicated shapes into a video would be welcome.
The complexity of what you're drawing shouldn't matter. You draw whatever you're going to draw, then call eglSwapBuffers() to submit the buffer. Whether you draw one flat-shaded triangle or 100K super-duper-shaded triangles, you're still just submitting a buffer of data to the video encoder or the surface compositor.
There is no equivalent to SurfaceView's surfaceCreated() and surfaceChanged(), because the Surface is created by MediaCodec#createInputSurface() (so you know when it's created), and the Surface does not change.
The code that uses GeneratedMovie does some fairly trivial rendering (set scissor rect, call clear). The code in RecordFBOActivity is what you should probably be looking at -- it has a bouncing rect and a spinning triangle, and demonstrates three different ways to deal with the fact that you have to render twice.
(The code in HardwareScalerActivity uses the same GLES routines and demonstrates texturing, but it doesn't do recording.)
The key thing is to manage your EGLContext and EGLSurfaces carefully. The various bits of GLES state are held in the EGLContext, which can be current on only one thread at a time. It's easiest to use a single context and set up a separate EGLSurface for each Surface, but you can also create separate contexts (with or without sharing) and switch between them.
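A minimal sketch of the single-context, multiple-EGLSurface approach using EGL14 (the display/config/context setup, the SurfaceView's Surface, and the drawFrame() routine are assumed to exist already):

    Surface encoderSurface = mediaCodec.createInputSurface();  // after configure(), before start()

    int[] surfaceAttribs = { EGL14.EGL_NONE };
    EGLSurface displayEglSurface = EGL14.eglCreateWindowSurface(
            eglDisplay, eglConfig, surfaceViewSurface, surfaceAttribs, 0);
    EGLSurface encoderEglSurface = EGL14.eglCreateWindowSurface(
            eglDisplay, eglConfig, encoderSurface, surfaceAttribs, 0);

    // Render the frame to the display surface...
    EGL14.eglMakeCurrent(eglDisplay, displayEglSurface, displayEglSurface, eglContext);
    drawFrame();
    EGL14.eglSwapBuffers(eglDisplay, displayEglSurface);

    // ...then render it again into the video encoder. The same context is
    // current, so all GLES state (programs, textures, buffers) is shared.
    EGL14.eglMakeCurrent(eglDisplay, encoderEglSurface, encoderEglSurface, eglContext);
    drawFrame();
    EGL14.eglSwapBuffers(eglDisplay, encoderEglSurface);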
Some additional background material is available here.
Usually when clearing the frame for a new draw, one sets the color with glClearColor() and clears with glClear(). But that completely removes the previous frame.
I'd like to make the frames disappear gradually, i.e. with each new frame put a semi-transparent overlay on what's already on the canvas. I tried to use the glClearColor()'s alpha parameter, but it doesn't seem to have any effect.
What should I do to achieve this gradual disappearing effect?
If you just want to draw the clear color over the last frame without getting rid of it entirely, draw a screen-size quad over the viewport with the same color as what you'd pass to glClearColor, and skip calling glClear(GL_COLOR_BUFFER_BIT) (you should probably still clear the depth/stencil buffers if you're using either of them). So, if you're using a depth buffer, first clear the depth buffer if need be, disable depth testing (this is important mainly to make sure that your quad does not update the depth buffer), draw your screen-size quad, and then re-enable depth testing. Draw anything else afterward if you need to.
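In GLES 2.0 terms, a minimal sketch of that pass; flatColorProgram, colorUniformLoc, and drawFullScreenQuad() are assumed helpers, not real API, and the quad is drawn with a small alpha and blending enabled so old content fades over several frames:

    GLES20.glDisable(GLES20.GL_DEPTH_TEST);  // the fade quad must not touch depth
    GLES20.glEnable(GLES20.GL_BLEND);
    GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);

    GLES20.glUseProgram(flatColorProgram);
    // Alpha 0.1 blends in 10% of the clear color per frame, so previous frames
    // fade out gradually instead of vanishing at once.
    GLES20.glUniform4f(colorUniformLoc, 0.0f, 0.0f, 0.0f, 0.1f);
    drawFullScreenQuad();

    GLES20.glDisable(GLES20.GL_BLEND);
    GLES20.glEnable(GLES20.GL_DEPTH_TEST);   // restore state for normal drawing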
What follows assumes you're using OpenGL ES 2.0
If you need to blend two different frames together and you realistically never actually see the clear color, you should probably render the last frame to a texture and draw that over the new frame. For that, you can either read the current framebuffer and copy it to a texture, or create a new framebuffer and attach a texture to it (see glFramebufferTexture2D). Once the framebuffer is set up, you can happily draw into that. After you've drawn the frame into the texture, you can go ahead and bind the texture and draw a quad over the screen (remembering of course to switch the framebuffer back to your regular framebuffer).
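A minimal sketch of the framebuffer-plus-texture setup (width and height are assumed to match the viewport):

    int[] ids = new int[1];

    // Create the texture that will receive the frame.
    GLES20.glGenTextures(1, ids, 0);
    int frameTexture = ids[0];
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, frameTexture);
    GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height,
            0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
            GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
            GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
    // Non-power-of-two textures in ES 2.0 require clamped wrap modes.
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
            GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
            GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);

    // Attach the texture to a new framebuffer object.
    GLES20.glGenFramebuffers(1, ids, 0);
    int fbo = ids[0];
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo);
    GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
            GLES20.GL_TEXTURE_2D, frameTexture, 0);

    // ... draw the frame into the texture here ...

    // Switch back to the default framebuffer and draw a full-screen quad
    // that samples frameTexture.
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);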
If you're using OpenGL ES 1.1, you will instead want to use glCopyTexImage2D (or glCopyTexSubImage2D) to copy the color buffer to a texture.