I am trying to draw a 3D object onto camera preview frames (Android). Should I use two surface views, one for the camera preview and another GLSurfaceView for drawing? The views would have to stay synchronized, and the display frame rate should be high enough to give a good user experience. Most of the tutorials talk about using multiple views. The alternative idea is to grab the camera preview as a texture and composite the 3D object over it, producing the final 2D raster image in a single view.
Which method would be better for performance gains?
P.S.: I will be using the Java APIs for OpenGL ES 2.0.
Since two surface views increase the number of API calls per frame and require transparency, they will be slower.
You don't need two surface views for your purpose.
Disable depth writes.
Render the camera preview on a 2D quad filling the screen.
Enable depth writes.
Render the 3D object.
This will make sure your 3D objects are rendered over the camera preview.
You can also achieve this with two surface views and transparency, but it will be slower.
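Roughly, in GLES 2.0 Java terms, the single-view render order could look like this. This is a minimal sketch: it assumes the camera preview is already feeding a SurfaceTexture bound to an OES texture (mCameraTexId), and drawFullScreenQuad() and drawMyObject() are hypothetical helpers for your own geometry.

    // Inside onDrawFrame() of your GLSurfaceView.Renderer.
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);

    // 1. Background: camera frame on a screen-filling quad, no depth writes.
    GLES20.glDepthMask(false);
    mCameraSurfaceTexture.updateTexImage();   // pull the latest camera frame
    GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, mCameraTexId);
    drawFullScreenQuad();                     // uses a samplerExternalOES shader

    // 2. Foreground: the 3D object with depth writes and depth test enabled.
    GLES20.glDepthMask(true);
    GLES20.glEnable(GLES20.GL_DEPTH_TEST);
    drawMyObject();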
I want to crop the camera preview in Android using the camera2 API. I am using android-Camera2Basic, the official example.
This is the result I am getting:
And the result I actually want to achieve is this:
I don't want to overlay the object on the TextureView. I want the preview to actually be that size, without stretching.
You'll need to edit the image yourself before drawing it, since the default behavior of a TextureView is to just draw the whole image sent to its Surface.
And adjusting the TextureView's transform matrix will only scale or move the whole image, not crop it.
Doing this requires quite a bit of boilerplate, since you need to re-implement most of a TextureView. For best efficiency, you likely want to implement the cropping in OpenGL ES: you'll need a GLSurfaceView, then use that GLSurfaceView's OpenGL context to create a SurfaceTexture object (which receives the camera frames), and then use that texture to draw a quadrilateral, with the cropping behavior you want implemented in the fragment shader.
That's fairly basic EGL, but it's quite a bit if you've never done any OpenGL programming before. There's a small test program within the Android OS tree that uses this kind of path: https://android.googlesource.com/platform/frameworks/native/+/master/opengl/tests/gl2_cameraeye/#
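For illustration, the crop itself can be done by remapping the texture coordinates into a sub-rectangle of the camera image in the fragment shader. A rough sketch, where uCropRect is a hypothetical uniform holding (left, top, width, height) in normalized 0..1 coordinates:

    // Hypothetical fragment shader: sample only a sub-rectangle of the camera image.
    private static final String CROP_FRAGMENT_SHADER =
            "#extension GL_OES_EGL_image_external : require\n" +
            "precision mediump float;\n" +
            "uniform samplerExternalOES uCameraTex;\n" +
            "uniform vec4 uCropRect;\n" +       // (left, top, width, height), all 0..1
            "varying vec2 vTexCoord;\n" +
            "void main() {\n" +
            "    vec2 croppedCoord = uCropRect.xy + vTexCoord * uCropRect.zw;\n" +
            "    gl_FragColor = texture2D(uCameraTex, croppedCoord);\n" +
            "}\n";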
I am working with a SurfaceView to make a 2D game, and I do not know how to put a background image in it efficiently.
I want to avoid drawing it each frame because it is a static image. Any help?
The SurfaceView's Surface is a single layer. You can specify a dirty rect to reduce your draw area when you lock the Canvas, but you have to draw everything in the rect every frame -- background and all.
You could use a pair of SurfaceViews, one at the default Z-ordering, one at "media" depth. Use SurfaceView.setZOrderMediaOverlay() to do that. The layers will be composited by the system before being rendered.
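A rough sketch of that setup, with hypothetical view IDs: the frequently redrawn game layer is raised to media-overlay depth and made translucent, so the static background layer below shows through wherever the game layer doesn't draw.

    // Background SurfaceView stays at the default SurfaceView depth.
    // Draw the static background into it once.
    SurfaceView backgroundView = findViewById(R.id.background_surface);

    // Game SurfaceView is raised to "media overlay" depth so it composites
    // above the background surface (but still below the window/UI layer).
    SurfaceView gameView = findViewById(R.id.game_surface);
    gameView.setZOrderMediaOverlay(true);
    // Translucent format so the background is visible through undrawn areas.
    gameView.getHolder().setFormat(PixelFormat.TRANSLUCENT);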
If you were using OpenGL ES rather than Canvas, it'd generally be more efficient to just redraw the background from a texture each time. (If HWC runs out of overlays, that's essentially what SurfaceFlinger is doing anyway.)
See the "multi-surface test" in Grafika for an example of using multiple overlapping surfaces.
I'm a newbie in Android programming. My goal is to draw multiple identical drawables (and animate them by changing their x/y coordinates) on a Canvas obtained by calling something like mySurfaceHolder.lockCanvas(). The number of drawables is dynamic and changes over time.
How can I keep the frame rate as high as possible?
(Sorry for my bad English.)
As far as I know, there are two common ways to draw animated images yourself on the screen: SurfaceView OR GLSurfaceView (Android official site).
For better performance when drawing many identical images, my suggestion is to draw them on a GLSurfaceView instead of a SurfaceView.
If the number of identical images is large, the frame rate when drawing onto a SurfaceView will be limited by the device's computing power (CPU speed). Drawing on a GLSurfaceView depends mostly on GPU speed, which is why I suggest the GLSurfaceView here.
But if you are not familiar with OpenGL ES, the most practical approach is to separate your update logic from the drawing code and run the drawing on another thread. Drawing with drawable.draw(canvas) or canvas.drawBitmap() will be fine.
P.S. Using a GLSurfaceView requires converting your Drawables into textures before they can be drawn.
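For reference, converting a Drawable to a texture usually amounts to rendering it into a Bitmap and uploading that with GLUtils. A minimal sketch, where loadTexture() is a hypothetical helper you would call once per distinct image (not every frame), on the GL thread:

    // Render the Drawable into a Bitmap, then upload it as a GL texture.
    private int loadTexture(Drawable drawable, int width, int height) {
        Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
        Canvas canvas = new Canvas(bitmap);
        drawable.setBounds(0, 0, width, height);
        drawable.draw(canvas);

        int[] texId = new int[1];
        GLES20.glGenTextures(1, texId, 0);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texId[0]);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
        GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);

        bitmap.recycle();
        return texId[0];
    }

Once uploaded, drawing the same texture many times at different positions each frame is cheap on the GPU.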
I am trying to generate a movie using MediaMuxer. The Grafika example is an excellent effort, but when I try to extend it, I have some problems.
I am trying to draw some basic shapes like squares, triangles, and lines into the movie. My OpenGL code works well when I draw the shapes to the screen, but I can't get the same shapes into the video.
I also have questions about setting up the OpenGL matrix, program, shader, and viewport. Normally there are methods like onSurfaceCreated and onSurfaceChanged where I can set these things up. What is the best way to do this in GeneratedMovie?
Any examples of writing more complicated shapes into a video would be welcome.
The complexity of what you're drawing shouldn't matter. You draw whatever you're going to draw, then call eglSwapBuffers() to submit the buffer. Whether you draw one flat-shaded triangle or 100K super-duper-shaded triangles, you're still just submitting a buffer of data to the video encoder or the surface compositor.
There is no equivalent to SurfaceView's surfaceCreated() and surfaceChanged(), because the Surface is created by MediaCodec#createInputSurface() (so you know when it's created), and the Surface does not change.
The code that uses GeneratedMovie does some fairly trivial rendering (set scissor rect, call clear). The code in RecordFBOActivity is what you should probably be looking at -- it has a bouncing rect and a spinning triangle, and demonstrates three different ways to deal with the fact that you have to render twice.
(The code in HardwareScalerActivity uses the same GLES routines and demonstrates texturing, but it doesn't do recording.)
The key thing is to manage your EGLContext and EGLSurfaces carefully. The various bits of GLES state are held in the EGLContext, which can be current on only one thread at a time. It's easiest to use a single context and set up a separate EGLSurface for each Surface, but you can also create separate contexts (with or without sharing) and switch between them.
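In outline, the single-context arrangement for the "render twice" case might look like the following. This is only a sketch using EGL14: eglDisplay, eglContext, windowSurface, and encoderSurface are assumed to have been created already (encoderSurface wrapping the Surface from MediaCodec#createInputSurface()), and drawFrame() stands in for your GLES rendering.

    // Draw the frame to the display surface.
    EGL14.eglMakeCurrent(eglDisplay, windowSurface, windowSurface, eglContext);
    drawFrame();
    EGL14.eglSwapBuffers(eglDisplay, windowSurface);

    // Draw the same frame again to the video encoder's input surface.
    EGL14.eglMakeCurrent(eglDisplay, encoderSurface, encoderSurface, eglContext);
    drawFrame();
    EGLExt.eglPresentationTimeANDROID(eglDisplay, encoderSurface, timestampNs);
    EGL14.eglSwapBuffers(eglDisplay, encoderSurface);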
Some additional background material is available here.
I am making a 2D graphical app that will display planets. I say 2D because the majority of the app will be 2D. However, I want to render some 3D objects offscreen into dynamic sprites (that is, to a texture), with transparent (possibly translucent) areas, and then render those textures to the screen as 2D textured quads. Rendering the objects directly to the screen in 3D is not optimal in this case, because it would require me to implement some sort of 3D picking, and I am not that advanced in math yet. Note also that the main screen render will be orthographic, while the offscreen render will be perspective.
How can I accomplish this (a general idea is enough, no need for specifics), and what would be the most efficient way to do it? Would this reduce support across a wide variety of devices? Also, if the 3D sprite renderings were refreshed every frame (for example, rotated by small amounts), would the continuous unloading/reloading of textures to memory kill the frame rate? I expect some scenes could have as many as 10 of these 3D offscreen sprites.
Thanks for the help
If you really must use offscreen rendering, look up FBOs (framebuffer objects): attach a texture to an FBO, render into it, then use that texture in your main view as a 2D sprite. It is quite a straightforward procedure, but it may cost some speed. You will probably not be able to do any multithreading on it, so you should create just one FBO at a time. Its dimensions will probably have to be a power of 2, so the resolution might differ from what you want. This procedure does not continually load/unload anything; the memory is allocated when the texture is created, and GL draws to and reads from it directly. The largest drawback here will be memory: you would create as many as 10 of these textures just to draw into them and present them once.
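Setting up an FBO with a texture attachment looks roughly like this (a minimal sketch in GLES 2.0 Java; texWidth and texHeight are whatever size you choose):

    // Create the texture that will receive the offscreen rendering.
    int[] tex = new int[1];
    GLES20.glGenTextures(1, tex, 0);
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
    GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA,
            texWidth, texHeight, 0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
            GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
            GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

    // Create the framebuffer and attach the texture as its color buffer.
    int[] fbo = new int[1];
    GLES20.glGenFramebuffers(1, fbo, 0);
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);
    GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
            GLES20.GL_TEXTURE_2D, tex[0], 0);

    // ... render the 3D planet here with a perspective projection ...

    // Switch back to the default framebuffer; the texture can now be drawn
    // as an ordinary 2D quad in your orthographic scene.
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);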
It can be very easy to place these objects at a specific spot in your main buffer, though: set up all the logic as if you were drawing a full-screen planet, but use the viewport to place it in a specific part of the screen.
If the planet images will be updated only on user request (i.e., you don't want to redraw them every frame), then I suggest a combination of both: create an FBO with a texture the same size as (or larger than) the main view and draw all the planets to this single texture using the viewport method. Then you can update any planet you want; just don't clear the whole buffer, but instead clear only the specific rectangle of the buffer/texture for that planet. And keep drawing the whole texture to the main buffer each frame.
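The viewport trick is just a matter of restricting where GL draws before rendering each planet. A sketch, where x, y, and size are the hypothetical placement of one planet in pixels and drawPlanet() stands in for your own rendering code:

    // Draw each planet "full screen", but confine it to its own rectangle.
    // Scissoring lets you clear only that planet's region of the texture.
    GLES20.glViewport(x, y, size, size);
    GLES20.glEnable(GLES20.GL_SCISSOR_TEST);
    GLES20.glScissor(x, y, size, size);
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
    drawPlanet();
    GLES20.glDisable(GLES20.GL_SCISSOR_TEST);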