I am trying to generate a movie using MediaMuxer. The Grafika example is an excellent effort, but when I try to extend it, I have some problems.
I am trying to draw some basic shapes like squares, triangles and lines into the movie. My OpenGL code works well when I draw the shapes to the screen, but I couldn't draw the same shapes into the video.
I also have questions about setting up the OpenGL matrices, program, shaders and viewport. Normally there are methods like onSurfaceCreated and onSurfaceChanged so that I can set these things up. What is the best way to do it in GeneratedMovie?
Any example of writing more complicated shapes into a video would be welcome.
The complexity of what you're drawing shouldn't matter. You draw whatever you're going to draw, then call eglSwapBuffers() to submit the buffer. Whether you draw one flat-shaded triangle or 100K super-duper-shaded triangles, you're still just submitting a buffer of data to the video encoder or the surface compositor.
There is no equivalent to SurfaceView's surfaceCreated() and surfaceChanged(), because the Surface is created by MediaCodec#createInputSurface() (so you know when it's created), and the Surface does not change.
The code that uses GeneratedMovie does some fairly trivial rendering (set scissor rect, call clear). The code in RecordFBOActivity is what you should probably be looking at -- it has a bouncing rect and a spinning triangle, and demonstrates three different ways to deal with the fact that you have to render twice.
(The code in HardwareScalerActivity uses the same GLES routines and demonstrates texturing, but it doesn't do recording.)
The key thing is to manage your EGLContext and EGLSurfaces carefully. The various bits of GLES state are held in the EGLContext, which can be current on only one thread at a time. It's easiest to use a single context and set up a separate EGLSurface for each Surface, but you can also create separate contexts (with or without sharing) and switch between them.
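As a rough illustration of the single-context approach (a minimal sketch with all error checking omitted; encoder, surfaceViewSurface, drawFrame() and ptsNanos are placeholder names, and the config attributes are assumptions loosely based on what Grafika's EglCore does; EGL14 needs API 17, eglPresentationTimeANDROID needs API 18):

// One EGLContext shared by every output; one EGLSurface per output Surface.
EGLDisplay display = EGL14.eglGetDisplay(EGL14.EGL_DEFAULT_DISPLAY);
int[] version = new int[2];
EGL14.eglInitialize(display, version, 0, version, 1);

final int EGL_RECORDABLE_ANDROID = 0x3142;  // extension constant, defined locally like Grafika does
int[] cfgAttribs = {
        EGL14.EGL_RED_SIZE, 8, EGL14.EGL_GREEN_SIZE, 8, EGL14.EGL_BLUE_SIZE, 8,
        EGL14.EGL_RENDERABLE_TYPE, EGL14.EGL_OPENGL_ES2_BIT,
        EGL_RECORDABLE_ANDROID, 1,
        EGL14.EGL_NONE };
EGLConfig[] configs = new EGLConfig[1];
int[] num = new int[1];
EGL14.eglChooseConfig(display, cfgAttribs, 0, configs, 0, 1, num, 0);

int[] ctxAttribs = { EGL14.EGL_CONTEXT_CLIENT_VERSION, 2, EGL14.EGL_NONE };
EGLContext context = EGL14.eglCreateContext(display, configs[0], EGL14.EGL_NO_CONTEXT, ctxAttribs, 0);

// The encoder's input Surface comes from MediaCodec#createInputSurface();
// the display Surface comes from a SurfaceView or TextureView.
int[] surfAttribs = { EGL14.EGL_NONE };
EGLSurface encoderSurface = EGL14.eglCreateWindowSurface(display, configs[0],
        encoder.createInputSurface(), surfAttribs, 0);
EGLSurface displaySurface = EGL14.eglCreateWindowSurface(display, configs[0],
        surfaceViewSurface, surfAttribs, 0);

// Render the frame twice, switching surfaces with eglMakeCurrent().
EGL14.eglMakeCurrent(display, displaySurface, displaySurface, context);
drawFrame();
EGL14.eglSwapBuffers(display, displaySurface);

EGL14.eglMakeCurrent(display, encoderSurface, encoderSurface, context);
drawFrame();
EGLExt.eglPresentationTimeANDROID(display, encoderSurface, ptsNanos);  // frame timestamp for the encoder
EGL14.eglSwapBuffers(display, encoderSurface);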
Some additional background material is available here.
Related
I need to record a video that just contains one single frame: an image specified by the user (it can be of any length, but it will only have the same static image). So, I figured I could use the new MediaRecorder.VideoSource.SURFACE and just draw to the Surface being used by the recorder. I initialize the recorder properly, and I can even call MediaRecorder.getSurface() without an exception (something that is apparently tricky).
My problem is somewhat embarrassing: I don't know what to do with the surface returned. I need to draw to it somehow, but all examples I can find involve drawing to a SurfaceView. Is this surface the same surface used by MediaRecorder.setPreviewDisplay()? How do I draw something to it?
In theory you can use Surface#lockCanvas() to get a Canvas to draw on if you want to render in software. There used to be problems with this on some platforms; not sure if that has been fixed.
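A rough sketch of that software path, assuming the recorder is already configured with MediaRecorder.VideoSource.SURFACE and prepared (recorder and userImage are placeholder names):

// Draw the user's static image onto the recorder's input Surface with a Canvas.
Surface surface = recorder.getSurface();       // valid after prepare()
Canvas canvas = surface.lockCanvas(null);      // null locks the whole surface
try {
    canvas.drawColor(Color.BLACK);
    canvas.drawBitmap(userImage, 0, 0, null);  // the static image to record
} finally {
    surface.unlockCanvasAndPost(canvas);       // submits this frame to the recorder
}
// Repeat the lock/draw/post cycle for as long as the clip should run.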
The other option is to create an EGLSurface from the Surface and render onto it with OpenGL ES. You can find examples of this, with some code to manage all of the EGL setup, in Grafika.
The GLES recording examples use MediaCodec rather than MediaRecorder, but the idea is the same, and it should be much simpler with MediaRecorder.
I want to use a TextureView to draw math curves (a lot of them), whose data source is an external device.
Each section I draw must add lines to the previous ones.
Since TextureView renders using 3 buffers, I would like the buffer I am drawing into at any given moment to take as its source the buffer I have just released.
That is, I want the contents of the buffer I release to fill the next buffer before I draw on it.
Another possibility would be to force the use of only one buffer.
I see that it is possible to use getBitmap and setBitmap, but I would like to do this without holding a Bitmap in memory.
Does anyone know if this is possible?
I would recommend two things:
Assuming you're rendering with Canvas, don't use TextureView. Use a custom View instead. TextureView doesn't really give you an advantage, and you lose hardware-accelerated rendering.
Render to an off-screen Bitmap, then blit the Bitmap to the View (a minimal sketch of this follows after these recommendations). Off-screen rendering is not hardware-accelerated, but you avoid re-drawing the entire scene. You will have to experiment to determine which is most efficient.
If you're drawing with OpenGL ES, just draw everything to the TextureView on every frame (unless you want to play with FBOs).
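To make the off-screen Bitmap suggestion concrete, here is a bare-bones sketch of a custom View that accumulates segments in a Bitmap and only blits it in onDraw() (the class and method names are made up for illustration):

public class CurveView extends View {
    private Bitmap buffer;
    private Canvas bufferCanvas;
    private final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);

    public CurveView(Context context, AttributeSet attrs) {
        super(context, attrs);
    }

    @Override
    protected void onSizeChanged(int w, int h, int oldw, int oldh) {
        // Off-screen Bitmap that keeps everything drawn so far.
        buffer = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
        bufferCanvas = new Canvas(buffer);
    }

    // Called when a new segment arrives from the external device.
    public void addSegment(float x0, float y0, float x1, float y1) {
        bufferCanvas.drawLine(x0, y0, x1, y1, paint);
        invalidate();  // onDraw() then only blits the Bitmap; nothing is re-computed
    }

    @Override
    protected void onDraw(Canvas canvas) {
        canvas.drawBitmap(buffer, 0, 0, null);
    }
}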
You can try Surface's lockCanvas(null):
Use TextureView.getSurfaceTexture() to get a SurfaceTexture.
Use new Surface(surfaceTexture) to create a Surface.
Use Surface.lockCanvas() or Surface.lockHardwareCanvas() to get a Canvas.
Then you can do a lot of drawing on the TextureView with this Canvas.
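Put together, the steps look roughly like this (textureView and paint are placeholders; lockHardwareCanvas() needs API 23, otherwise fall back to lockCanvas(null)):

// Draw on a TextureView through a Surface-backed Canvas.
SurfaceTexture surfaceTexture = textureView.getSurfaceTexture();  // non-null once the TextureView is available
Surface surface = new Surface(surfaceTexture);
Canvas canvas = surface.lockHardwareCanvas();    // or surface.lockCanvas(null)
try {
    canvas.drawLine(0, 0, 200, 200, paint);      // any Canvas drawing you like
} finally {
    surface.unlockCanvasAndPost(canvas);
}
surface.release();                               // release the Surface when done with it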
I'm developing a drawing application where the user can select a range of brushes and paint on the screen. I'm using textures as brushes and I'm drawing vertexes as points with PointSpriteOES enabled as displayed below.
gl.glEnable(GL10.GL_TEXTURE_2D);
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glEnable(GL11.GL_POINT_SPRITE_OES);
gl.glTexEnvf(GL11.GL_POINT_SPRITE_OES, GL11.GL_COORD_REPLACE_OES, GL10.GL_TRUE);
The application worked just as desired, but I needed to optimize it for runtime as its framerate dropped below 30 when dealing with a lot of vertices. Since the application's domain allows it, it seemed a good idea to leave out the glClear call and the redrawing of already existing lines, as that is really unnecessary. However, this resulted in a very strange bug I haven't been able to fix since. When OpenGL is not rendering (I have set the render mode to RENDERMODE_WHEN_DIRTY), only about 1/3 of all the vertices are visible on the screen. Requesting a redraw by calling requestRender() makes these vertices disappear and others are shown. There are three states I can tell apart, each state showing approximately 1/3 of all vertices.
I have uploaded three screenshots (http://postimg.org/image/d63tje56l/, http://postimg.org/image/npeds634f/) to make it a bit easier for you to understand. The screenshots show the state where I have drawn three lines with different colors (SO didn't allow me to link all 3 images, but I hope you can imagine it - the third has the segments missing from the 1st and the 2nd). It can clearly be seen that if I could merge the screens into a single one, I would get the desired result.
I'm only guessing what the issue is caused by since I'm not an OpenGL expert. My best guess is that OpenGL uses triple buffering and only a single buffer is shown at a given time, while the other vertices are placed in the back buffers. I have tried forcing all buffers to be rendered as well as trying to force all vertices to appear in all buffers, but I couldn't manage either.
Could you help me solve this?
I believe your guess is exactly right. The way OpenGL is commonly used, you're expected to draw a complete frame, including an initial clear, every time you're asked to redraw. If you don't do that, behavior is generally undefined. In your case, it certainly looks like triple buffering is used, and your drawing is distributed over 3 separate surfaces.
This model does not work very well for incremental drawing, where drawing a full frame is very expensive. There are a few options you can consider.
Optimize your drawing
This is not directly a solution, but always something worth thinking about. If you can find a way to make your rendering much more efficient, there might be no need to render incrementally. You're not showing your rendering code, so it's possible that you simply have too many points to get a good framerate.
But in any case, make sure that you use OpenGL efficiently. For example, store your points in VBOs, and update only the parts that change with glBufferSubData().
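For instance, sticking with the ES 1.1 API the question uses, the point data could live in a VBO that is updated incrementally (a sketch only; MAX_POINTS, existingCount, newPoints and totalCount are placeholders):

// One-time setup (gl is the GL11 instance from the renderer callbacks):
int[] vbo = new int[1];
gl.glGenBuffers(1, vbo, 0);
gl.glBindBuffer(GL11.GL_ARRAY_BUFFER, vbo[0]);
// Reserve room for MAX_POINTS x/y pairs (4 bytes per float).
FloatBuffer initial = ByteBuffer.allocateDirect(MAX_POINTS * 2 * 4)
        .order(ByteOrder.nativeOrder()).asFloatBuffer();
gl.glBufferData(GL11.GL_ARRAY_BUFFER, MAX_POINTS * 2 * 4, initial, GL11.GL_DYNAMIC_DRAW);

// When new points arrive, upload only the new range instead of everything:
gl.glBindBuffer(GL11.GL_ARRAY_BUFFER, vbo[0]);
gl.glBufferSubData(GL11.GL_ARRAY_BUFFER, existingCount * 2 * 4,
        newPoints.capacity() * 4, newPoints);   // newPoints: FloatBuffer of the added x/y pairs

// At draw time, draw all points from the VBO in a single call:
gl.glBindBuffer(GL11.GL_ARRAY_BUFFER, vbo[0]);
gl.glVertexPointer(2, GL10.GL_FLOAT, 0, 0);
gl.glDrawArrays(GL10.GL_POINTS, 0, totalCount);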
Draw to FBO, then blit
This is the most generic and practical solution. Instead of drawing directly to the primary framebuffer, use a Frame Buffer Object (FBO) to render to a texture. You do all of your drawing to this FBO, and copy it to the primary framebuffer when it's time to redraw.
For copying from FBO to the primary framebuffer, you will need a simple pair of vertex/fragment shaders in ES 2.0. In ES 3.0 and later, you can use glBlitFramebuffer().
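The ES 2.0 setup for that looks roughly like the following (a sketch; drawNewPoints() and drawFullScreenTexturedQuad() stand in for your own drawing code and shader program):

// One-time setup: a texture-backed FBO the size of the view.
int[] tex = new int[1];
int[] fbo = new int[1];
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height, 0,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST);
GLES20.glGenFramebuffers(1, fbo, 0);
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
        GLES20.GL_TEXTURE_2D, tex[0], 0);

// Incremental drawing goes into the FBO; it is never cleared.
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);
drawNewPoints();

// On every redraw, copy the accumulated texture to the default framebuffer.
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
drawFullScreenTexturedQuad();   // simple pass-through vertex/fragment shader pair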
Pros:
Works on any device, using only standard ES 2.0 features.
Easy to implement.
Cons:
Requires a copy of the framebuffer on every redraw.
Single Buffering
EGL, which is the underlying API to connect OpenGL to the window system in Android, does have attributes to create single buffered surfaces. While single buffered rendering is rarely advisable, your use case is one of the few where it could still be considered.
While the API definition exists, the documentation specifies support as optional:
Client APIs may not be able to respect the requested rendering buffer. To determine the actual buffer being rendered to by a context, call eglQueryContext.
I have never tried this myself, so I have no idea how widespread support is, or if it's supported on Android at all. The following sketches how it could be implemented if you want to try it out:
If you derive from GLSurfaceView for your OpenGL rendering, you need to provide your own EGLWindowSurfaceFactory, which would look something like this:
class SingleBufferFactory implements GLSurfaceView.EGLWindowSurfaceFactory {
    @Override
    public EGLSurface createWindowSurface(EGL10 egl, EGLDisplay display,
                                          EGLConfig config, Object nativeWindow) {
        // Request a single-buffered surface; the implementation may ignore this.
        int[] attribs = {EGL10.EGL_RENDER_BUFFER, EGL10.EGL_SINGLE_BUFFER,
                         EGL10.EGL_NONE};
        return egl.eglCreateWindowSurface(display, config, nativeWindow, attribs);
    }

    @Override
    public void destroySurface(EGL10 egl, EGLDisplay display, EGLSurface surface) {
        egl.eglDestroySurface(display, surface);
    }
}
Then in your GLSurfaceView subclass constructor, before calling setRenderer():
setEGLWindowSurfaceFactory(new SingleBufferFactory());
Pros:
Can draw directly to primary framebuffer, no need for copies.
Cons:
May not be supported on some or all devices.
Single buffered rendering may be inefficient.
Use EGL_BUFFER_PRESERVE
The EGL API allows you to specify a surface attribute that requests the buffer content to be preserved on eglSwapBuffers(). This is not available in the EGL10 interface, though. You'll have to use the EGL14 interface, which requires at least API level 17.
To set this, use:
EGL14.eglSurfaceAttrib(EGL14.eglGetCurrentDisplay(), EGL14.eglGetCurrentSurface(EGL14.EGL_DRAW),
EGL14.EGL_SWAP_BEHAVIOR, EGL14.EGL_BUFFER_PRESERVED);
You should be able to place this in the onSurfaceCreated() method of your GLSurfaceView.Renderer implementation.
This is supported on some devices, but not on others. You can query whether it's supported by reading the EGL_SURFACE_TYPE attribute of the config and checking it against the EGL_SWAP_BEHAVIOR_PRESERVED_BIT bit. Or you can make this part of your config selection.
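With EGL14 the check could look like this (a sketch; eglDisplay and eglConfig are assumed to be the display and config you created the surface with):

// Does the chosen config allow the color buffer to be preserved on swap?
int[] surfaceType = new int[1];
EGL14.eglGetConfigAttrib(eglDisplay, eglConfig, EGL14.EGL_SURFACE_TYPE, surfaceType, 0);
boolean canPreserve = (surfaceType[0] & EGL14.EGL_SWAP_BEHAVIOR_PRESERVED_BIT) != 0;
if (canPreserve) {
    EGL14.eglSurfaceAttrib(EGL14.eglGetCurrentDisplay(), EGL14.eglGetCurrentSurface(EGL14.EGL_DRAW),
            EGL14.EGL_SWAP_BEHAVIOR, EGL14.EGL_BUFFER_PRESERVED);
}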
Pros:
Can draw directly to primary framebuffer, no need for copies.
Can still use double/triple buffered rendering.
Cons:
Only supported on subset of devices.
Conclusion
I would probably check for EGL_BUFFER_PRESERVE support on the specific device, and use it if it is supported. Otherwise, go for the FBO and blit approach.
OpenGL ES 2.0 is used in a project I have been working on, with a couple of shader components that define what a texture should look like after modifications from a Bitmap. The SurfaceView will only ever have a single image in it for my project.
After trying several different approaches and looking through code over the past 24 hours, I'm just hoping for a quick response or two from the community. I'm not looking for solutions; I'll do that research.
It sounds as though, since we are using shaders, in order to do scaling and movement of the texture based on touch events, I will have to use the Matrix utilities and OpenGL translations or camera movements to get the same effect as what is currently done within an ImageView. Would this be the appropriate approach? Perhaps even modify the shader code so that I have some additional input variables?
I don't believe I can use anything on the Android side that would achieve the same effect, such as modifying the canvas of the SurfaceView or altering the dimensions of the UI in some other fashion?
Thanks. Again, solutions for zooming and moving around aren't necessary, just trying to get a grasp on intermixing OpenGL and Android appropriately for the task.
Why does it seem that several things are easier in 1.0 than in 2.0? Ease of use should improve between releases.
Yes. You will need to use an ortho projection and adjust the extents to zoom. See this link here. To pan, you can simply apply a translation (glTranslatef in ES 1.x, or the equivalent translation in the matrix you hand to your shader in ES 2.0).
If you would rather keep the geometry fixed and do this on the texture side, you can use the texture matrix with glScalef and glTranslatef in ES 1.x, or apply an equivalent transform to the texture coordinates in your ES 2.0 shader.
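In ES 2.0 terms this amounts to building the matrix on the CPU with android.opengl.Matrix and passing it to your vertex shader as a uniform (a sketch; zoom, panX, panY, viewWidth, viewHeight, program and the uniform name uMvpMatrix are placeholders):

// Ortho projection whose extents shrink as the user zooms, shifted by the pan offset.
float[] projection = new float[16];
float halfW = (viewWidth / 2f) / zoom;
float halfH = (viewHeight / 2f) / zoom;
Matrix.orthoM(projection, 0,
        panX - halfW, panX + halfW,   // left, right
        panY - halfH, panY + halfH,   // bottom, top
        -1f, 1f);                     // near, far
GLES20.glUseProgram(program);
int uMvpLoc = GLES20.glGetUniformLocation(program, "uMvpMatrix");
GLES20.glUniformMatrix4fv(uMvpLoc, 1, false, projection, 0);
// In the vertex shader: gl_Position = uMvpMatrix * aPosition;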
I have a drawing app where the user can draw lines with their finger, adjust the color, thickness, etc. As the user is drawing, I am converting the amassed X/Y points from MotionEvent into SVG Paths, as well as creating Android Paths and drawing them to the screen via a Canvas, and committing the SVG Paths to the app's database.
I am following the model used in FingerPaint, in that the 'in progress' lines are drawn on the fly by repeated calls to invalidate() (and thus onDraw()), and once the line is complete and a new line is started, the previous line(s) are drawn in onDraw() from the underlying Canvas Bitmap, with in-progress lines again generating repeated re-draws.
This works fine in this application - until you start rotating the underlying Bitmap to compensate for device rotation, supporting the ability to 'zoom in' on the drawing surface and thus having to scale the underlying Bitmap, etc. So for example, with the device rotated and the drawing scaled in, when the user is drawing, we need to scale AND rotate our Bitmap in onDraw(), and this is absolutely crawling.
I've looked at a SurfaceView, but as this still uses the same Canvas mechanism, I'm not sure I'll see noticeable improvement... so my thoughts turn to OpenGL. I have read somewhere that OpenGL can do rotations and scaling essentially 'for free', and even seen rumors (third comment) that Canvas may be disappearing in future versions.
Essentially, I am a little stuck between the Canvas and OpenGL solutions... I have a 2D drawing app that seems to fit the Canvas model perfectly when in one state, as there are not constant re-draws going on like in a game (for instance, when the user is not drawing I don't need any re-drawing), but when the user IS drawing, I need the maximum performance necessary to do some increasingly complex things with the surface...
Would welcome any thoughts, pointers and suggestions.
OpenGL would be able to handle the rotations and scaling easily.
Honestly, you would probably need to learn a lot of OpenGL to do this, specifically related to the topics of:
Geometry
Lighting (or just disabling it)
Picking (selecting geometry to draw on it)
Pixel Maps
Texture Mapping
Mipmapping
Also, learning OpenGL for this might be overkill, and you would have to be pretty good at it to make it efficient.
Instead, I would recommend using the graphics components of a game library built on top of OpenGL, such as:
Cocos2d
libgdx
any of the engines listed here
Well, this question was asked 6 years ago; maybe Android 4.0 had not come out yet?
Actually, since Android 4.0 the Canvas at android.view.View is a hardware-accelerated canvas, which means it is implemented with OpenGL, so you do not need to use another approach for performance.
You can see the https://github.com/ChillingVan/android-openGL-canvas/blob/master/canvasglsample/src/main/java/com/chillingvan/canvasglsample/comparePerformance/ComparePerformanceActivity.java to compare the performance of normal canvas in view with GLSurfaceView.
You are right that SurfaceView uses Canvas under the hood. The main difference is that SurfaceView lets another thread do the actual drawing, which generally improves performance. It sounds like it would not help you a great deal, though.
You are correct that OpenGL can do rotations very quickly, so if you need more performance that is the way to go. You should probably use GLSurfaceView. The main drawback with using OpenGL is that it is a real pain to do text. Basically you have to (okay, don't have to, but seems to be the best option) render bitmaps of text.
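One common workaround for the text problem is to render the string into a Bitmap with Canvas and upload that Bitmap as a texture, which you then draw as a quad (a sketch; text is a placeholder):

// Render a string into a Bitmap, then upload it as a GL texture.
Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
paint.setTextSize(32f);
paint.setColor(Color.WHITE);
int w = (int) Math.ceil(paint.measureText(text));
int h = (int) Math.ceil(paint.descent() - paint.ascent());
Bitmap bitmap = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
new Canvas(bitmap).drawText(text, 0, -paint.ascent(), paint);

int[] tex = new int[1];
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
bitmap.recycle();
// Draw tex[0] on a quad wherever the text should appear.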