When Android 4.0 (Ice Cream Sandwich) was released, a new view was introduced into the SDK: the TextureView. The documentation says that a TextureView can be used to display content such as an OpenGL scene.
When you look up how to do this, you'll find a link to one example:
https://groups.google.com/forum/?fromgroups=#!topic/android-developers/U5RXFGpAHPE
However, I wanted to simply replace GLSurfaceView with TextureView, keep the rest of my code the same, and still get the advantages of the TextureView.
Answer:
1) Start with the source code of GLSurfaceView and name the file GLTextureView.java.
2) Change the class declaration to:
public class GLTextureView extends TextureView implements TextureView.SurfaceTextureListener
3) Rename the constructors to GLTextureView and remove the code from the init() method.
4) Organize imports, always choosing the non-GLSurfaceView option.
5) Find every instance of SurfaceHolder and change it to SurfaceTexture.
6) Add the unimplemented methods of the SurfaceTextureListener; each should delegate as follows (see the sketch after these steps):
onSurfaceTextureAvailable - surfaceCreated(surface)
onSurfaceTextureDestroyed - surfaceDestroyed(surface), then return true
onSurfaceTextureSizeChanged - surfaceChanged(surface, 0, width, height)
onSurfaceTextureUpdated - requestRender()
7) There should be one line with a call to getHolder(); change that to getSurfaceTexture().
8) In the init() method, add the line setSurfaceTextureListener(this).
Then add an OnLayoutChangeListener and have it call surfaceChanged(getSurfaceTexture(), 0, right - left, bottom - top).
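Here is a minimal sketch of what steps 6 and 8 end up looking like. It assumes the renamed GLSurfaceView internals (surfaceCreated, surfaceChanged, surfaceDestroyed, requestRender) now take a SurfaceTexture per step 5; it illustrates the steps above, not a drop-in file.

    @Override
    public void onSurfaceTextureAvailable(SurfaceTexture surface, int width, int height) {
        surfaceCreated(surface);
    }

    @Override
    public void onSurfaceTextureSizeChanged(SurfaceTexture surface, int width, int height) {
        surfaceChanged(surface, 0, width, height);
    }

    @Override
    public boolean onSurfaceTextureDestroyed(SurfaceTexture surface) {
        surfaceDestroyed(surface);
        return true;
    }

    @Override
    public void onSurfaceTextureUpdated(SurfaceTexture surface) {
        requestRender();
    }

    private void init() {
        setSurfaceTextureListener(this);
        // Keep the GL surface size in sync with layout changes (step 8).
        addOnLayoutChangeListener(new View.OnLayoutChangeListener() {
            @Override
            public void onLayoutChange(View v, int left, int top, int right, int bottom,
                    int oldLeft, int oldTop, int oldRight, int oldBottom) {
                surfaceChanged(getSurfaceTexture(), 0, right - left, bottom - top);
            }
        });
    }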
With that you should be able to replace your GLSurfaceView code with GLTextureView and get the benefits of TextureView. Also make sure your app has hardware acceleration enabled and that your renderer implements GLTextureView.Renderer.
Brilliant!
A minor addition to Mr. Goodale's brilliant answer:
The 4.1.1 version of GLSurfaceView seems to have been modified to avoid rendering on a zero-width/height surface, and there no longer seems to be a gratuitous onSurfaceTextureSizeChanged notification immediately following onSurfaceTextureAvailable.
If you start with the 4.1.1 sources, onSurfaceTextureAvailable needs to read as follows:
    @Override
    public void onSurfaceTextureAvailable(SurfaceTexture surface, int width, int height) {
        surfaceCreated(surface);
        surfaceChanged(surface, 0, width, height);
    }
Other than that, I was up and running in about five minutes flat! Thanks.
Thanks to Mr. Goodale and Mr. Davies for their answers!
I have a few extra notes about converting GLSurfaceView to GLTextureView.
The first is about the render mode. As described above, just remove the requestRender() call in onSurfaceTextureUpdated.
The second is about this line:

    mGLESVersion = SystemProperties.getInt("ro.opengles.version", ConfigurationInfo.GL_ES_VERSION_UNDEFINED);

SystemProperties is a hidden API, so you have to call getInt through reflection, and you need a Context to get context.getClassLoader().
You can call the reflection version of getInt from init() and save the result in a static field:

    sGLESVersion = getInt(getContext(), "ro.opengles.version", ConfigurationInfo.GL_ES_VERSION_UNDEFINED);
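For reference, a reflection-based getInt might look like the sketch below. The helper name and fallback behaviour are my own; the only platform detail assumed is the hidden android.os.SystemProperties class.

    // Hypothetical helper: read a system property via reflection,
    // falling back to the supplied default on any failure.
    private static int getInt(Context context, String key, int def) {
        try {
            ClassLoader cl = context.getClassLoader();
            Class<?> systemProperties = cl.loadClass("android.os.SystemProperties");
            java.lang.reflect.Method getInt =
                    systemProperties.getMethod("getInt", String.class, int.class);
            return (Integer) getInt.invoke(null, key, def);
        } catch (Exception e) {
            return def;
        }
    }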
And the last, easiest change concerns EGLLogWrapper.getErrorString(error); just copy getErrorString from the EGLLogWrapper sources.
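If you don't have the sources handy, a local equivalent can be sketched from the public EGL10 error constants (modeled on, not copied from, the AOSP version; extend the switch with the remaining constants as needed):

    private static String getErrorString(int error) {
        switch (error) {
            case EGL10.EGL_SUCCESS:         return "EGL_SUCCESS";
            case EGL10.EGL_NOT_INITIALIZED: return "EGL_NOT_INITIALIZED";
            case EGL10.EGL_BAD_ACCESS:      return "EGL_BAD_ACCESS";
            case EGL10.EGL_BAD_ALLOC:       return "EGL_BAD_ALLOC";
            case EGL10.EGL_BAD_ATTRIBUTE:   return "EGL_BAD_ATTRIBUTE";
            case EGL10.EGL_BAD_CONTEXT:     return "EGL_BAD_CONTEXT";
            case EGL10.EGL_BAD_MATCH:       return "EGL_BAD_MATCH";
            case EGL10.EGL_BAD_SURFACE:     return "EGL_BAD_SURFACE";
            default: return "0x" + Integer.toHexString(error);
        }
    }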
See the final version of my GLSurfaceView-to-GLTextureView conversion on GitHub Gist.
If you want to copy/paste a ready-made class, I wrote one here:
GLTextureView
You can call setRenderer(GLSurfaceView.Renderer) on it, just like with a GLSurfaceView.
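Hypothetical usage, assuming the linked class; MyRenderer stands in for any GLSurfaceView.Renderer implementation of yours:

    GLTextureView glTextureView = new GLTextureView(context);
    glTextureView.setRenderer(new MyRenderer()); // MyRenderer implements GLSurfaceView.Renderer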
Related
I have an application that is using the Android port of GPUImage as the OpenGL Renderer and manager of several filters.
It currently does not have a video implementation, so I am trying to adapt the RecordFBOActivity from the Google grafika repository to work with the GPUImage architecture.
The base GPUImage class manages the GL context and GLSurfaceView, and the GPUImageRenderer class implements the Renderer interface.
This is the class where I am trying to adapt the RenderThread from the RecordFBOActivity of grafika. There are a few problems.
First, in the prepareGl() method, I am passing a SurfaceTexture instead of a Surface, as GPUImage doesn't use the SurfaceHolder at all. (I think I could implement it, but I am trying not to change the base code too much, as I would like to push my implementation back to the aforementioned repo.) I know that WindowSurface.java has an overloaded constructor that accepts a SurfaceTexture as well as a Surface, but if I do this the mSurface member variable is always null, as I never have a Surface to pass to it, which causes an NPE in the makeCurrent() call when recording.
Second, GPUImage attaches itself to a GLSurfaceView, not a SurfaceView like the grafika example uses, so I'm a little uncertain whether there are any low-level inconsistencies that may be causing conflicts for me...
Third, and I think this is the main issue, at least at the moment: I can't seem to reconcile the camera preview of GPUImage with the WindowSurface of grafika. If I comment out the prepareGl() method, the setUpSurfaceTexture() of GPUImage sets the preview texture of the camera from the SurfaceTexture that is created by glGenTextures(), and the preview works fine, as well as being attached to the filter render chain. However, if I try to call the prepareGl() method and pass the exact same SurfaceTexture to the constructor of mWindowSurface, the camera service dies and I get an EGL_BAD_SURFACE error.
Long question, with a few moving parts, I know... I will attempt to edit/update as I clarify issues and approaches to myself. But I would love it if anyone has any thoughts/questions... particularly @fadden :D
I was also trying to achieve the same thing, and I tried what fadden suggested: integrating the CameraSurfaceRenderer functionality into GPUImageRenderer. The preview is fine, but the recording is just a video with black frames. EGL14.eglGetCurrentContext() returns null for the following call, and my guess is that if a new context is created it will not be the same as the one GPUImage might have:
    mVideoEncoder.startRecording(new TextureMovieEncoder.EncoderConfig(
            mOutputFile, 640, 480, 1000000, EGL14.eglGetCurrentContext()));
@Jesses.co.tt, were you able to achieve it?
(Since I can't add a comment, this is posted as an answer.)
I have the following situation:
In a cross-platform rendering library for iOS and Android (written in C(++)), I have two threads that each need their own EGLContext:
Thread A is the main thread; it renders to the Window.
Thread B is a generator thread, that does various calculations and renders the results into textures that are later used by thread A.
Since I can't use EGL on iOS, the library uses function pointers to static Objective-C functions to create a new context and make it current.
This already works; I create the context for thread A using:
EAGLContext *contextA = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
The context for thread B is created using:
EAGLContext *contextB = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2 sharegroup:[contextA sharegroup]];
I can then set either of the two current:
[EAGLContext setCurrentContext:context];
To use the same logic (function pointers passed to the library) on Android, I want to do this on the C side of the JNI bindings, this time using real EGL instead of Apple's EAGL.
I can easily create contextA using a window surface and the native window, and I can create contextB by passing contextA as the shareContext parameter of the eglCreateContext call.
But when I want to make contextB current, I have to pass a surface to the eglMakeCurrent call, and I'm trying to figure out what kind of surface to pass there.
I cannot use the WindowSurface i use for contextA as the spec says in section 3.7 that "At most one context for each supported client API may be current to a particular thread at a given time, and at most one context may be bound to a particular surface at a given time."
I cannot specify EGL_NO_SURFACE, because that would result in an EGL_BAD_MATCH error in the eglMakeCurrent call.
It seems I could use a pbuffer surface, but I hesitate because I'd have to specify the width and height when creating such a surface, and thread B might want to create textures of different sizes. In addition, the "OpenGL ES 2.0 Programming Guide" by Munshi, Ginsburg, and Shreiner states in section 3.8 that "Pbuffers are most often used for generating texture maps. If all you want to do is render to a texture, we recommend using framebuffer objects [...] instead of pbuffers because they are more efficient", which is exactly what I want to do in thread B.
I don't understand what Munshi, Ginsburg, and Shreiner mean by that sentence: how would a framebuffer object replace a pbuffer surface? What if I create a very small (say 1x1 px) pbuffer surface to make the context current; can I then still render into arbitrarily large FBOs? Are there any other possibilities I'm not yet aware of?
Thanks a lot for your help!
The surface you pass to eglMakeCurrent() must be an EGL surface, for example one from eglCreateWindowSurface():
    EGLSurface eglSurface = mEgl.eglCreateWindowSurface(mEglDisplay, maEGLconfigs[0], surfaceTexture, null);
    mEgl.eglMakeCurrent(mEglDisplay, eglSurface, eglSurface, mEglContext);
But eglCreateWindowSurface() requires a SurfaceTexture, which is provided to the onSurfaceTextureAvailable() callback when a TextureView is created; you can also create off-screen SurfaceTextures without any View.
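For instance, a minimal sketch of an off-screen SurfaceTexture (this needs a current GL context so the texture name is valid; the variable names are placeholders):

    int[] tex = new int[1];
    GLES20.glGenTextures(1, tex, 0);                             // generate a GL texture name
    SurfaceTexture surfaceTexture = new SurfaceTexture(tex[0]);  // no View involved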
There is an example app that uses TextureView in the Android SDK here, although it uses the SurfaceTexture for camera video rather than OpenGL ES rendering:
sources\android-17\com\android\test\hwui\GLTextureViewActivity.java
By default, the EGL surface for FBOs will have the same size as the SurfaceTexture it was created from. You can change the size of a SurfaceTexture with:
surfaceTexture.setDefaultBufferSize(width, height);
Don't use pbuffers on Android because some platforms (Nvidia Tegra) do not support them.
This article explains the advantages of FBOs over pbuffers in detail:
http://processors.wiki.ti.com/index.php/Render_to_Texture_with_OpenGL_ES
I ended up using a pbuffer surface (sized 1x1); I then create an FBO and render into textures just fine. For displaying them (on a different thread, with a different shared OpenGL context), I use a window surface with an ANativeWindow (there's a sample of that somewhere in the SDK).
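For illustration, here is roughly what that looks like using the Java EGL14 bindings (the native EGL calls are analogous; mDisplay, mConfig, and contextA are placeholders for your existing setup):

    // A 1x1 pbuffer only serves to satisfy eglMakeCurrent(); the real
    // rendering goes into FBO-attached textures of arbitrary size.
    int[] pbufferAttribs = { EGL14.EGL_WIDTH, 1, EGL14.EGL_HEIGHT, 1, EGL14.EGL_NONE };
    EGLSurface tinySurface =
            EGL14.eglCreatePbufferSurface(mDisplay, mConfig, pbufferAttribs, 0);
    int[] contextAttribs = { EGL14.EGL_CONTEXT_CLIENT_VERSION, 2, EGL14.EGL_NONE };
    EGLContext contextB =
            EGL14.eglCreateContext(mDisplay, mConfig, contextA, contextAttribs, 0);
    EGL14.eglMakeCurrent(mDisplay, tinySurface, tinySurface, contextB);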
If drawing to an FBO is the only thing you want to do, you can grab any EGLContext that was already created by you or by someone else (e.g. a GLSurfaceView), make it current, and then generate your FBO and draw with it.
The problem is how to share the context (say, one created by a GLSurfaceView) with your cross-platform C++ library. I did this by calling a static function inside the C++ code to grab the EGL context and surface immediately after the context has been made current by the Java layer, like the code below:
    // This is a GLSurfaceView.Renderer method.
    @Override
    public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        // Up to this point, we know the EGLContext
        // has already been made current on this thread.
        // Call a JNI function to capture the context.
        graphics_library_setup_context();
    }
and the C++ counterpart:
    void setup_context() {
        context = eglGetCurrentContext();
        display = eglGetCurrentDisplay();
        surface = eglGetCurrentSurface(EGL_DRAW);
    }
So I have done a lot of looking around, and the answer seems to be to use:

    int[] maxSize = new int[1];
    gl.glGetIntegerv(GL10.GL_MAX_TEXTURE_SIZE, maxSize, 0);

to detect the maximum texture size. Now my issue is: how do I create or get access to the gl variable that holds the function I need? Is it already there somewhere? I would like to support Android 2.2 and above, so the new 4.0+ trick won't work. If this is a repeat question, just point me in the right direction in the comments and I will take it down. I couldn't find a good explanation of how to set this up properly anywhere, just those two lines of code.
If you take a look at how OpenGL apps are made, you will notice there is the main app thread (the main activity) and a renderer class (http://developer.android.com/guide/topics/graphics/opengl.html). The heart of the renderer class is the method public void onDrawFrame(GL10 gl), which is called by the Android infrastructure whenever the frame needs to be redrawn.
So basically, a context object (the GL10 gl variable) is passed to your renderer when required, and there you can check your maximum texture size.
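A minimal sketch of where that check can live (the class and log tag names are placeholders; the GL10 instance is supplied by the framework):

    public class MyRenderer implements GLSurfaceView.Renderer {
        @Override
        public void onSurfaceCreated(GL10 gl, EGLConfig config) {
            int[] maxSize = new int[1];
            gl.glGetIntegerv(GL10.GL_MAX_TEXTURE_SIZE, maxSize, 0); // query once the context exists
            Log.i("MyRenderer", "GL_MAX_TEXTURE_SIZE = " + maxSize[0]);
        }

        @Override public void onSurfaceChanged(GL10 gl, int width, int height) { }

        @Override public void onDrawFrame(GL10 gl) { }
    }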
I'm building an application on Android that takes frames from the camera, processes them, and then displays them on a SurfaceView, as well as drawing on the SurfaceView via the Canvas, drawBitmap, and so on.
Just to check: are SurfaceView, Bitmaps, and Canvases the best way to do this? I'm after speed.
Assuming the answer to the above is yes, the question is: where should I place the following call?
    camera_object.setPreviewCallback(new PreviewCallback() {
        public void onPreviewFrame(byte[] data, Camera camera) {
Should I place it in onCreate(), or in surfaceCreated() or surfaceChanged()?
I declared my MainActivity class as follows:
    public class MainActivity extends Activity
            implements SurfaceHolder.Callback, Camera.PreviewCallback {
and in that class Eclipse forces me to create an override for onPreviewFrame in the MainActivity class, as follows:
    public void onPreviewFrame(byte[] data, Camera camera) {
    }
But it never gets called. Should I try to use this function? Is it better to use it? Or is it just an Eclipse thing?
Please advise.
Are you calling setPreviewDisplay(), startPreview(), and setPreviewCallback(this) from the app? Without those you will not get any calls to onPreviewFrame(). In fact, if you are using a SurfaceView, the callback preview buffers are a copy of the actual buffers being displayed on the screen, so if you want to display these copied buffers you need to create a new view and overwrite it, which is inefficient. I would suggest you use SurfaceTexture instead and use the onFrameAvailable callback to get the frames, then draw and display manually. An example of this can be found in the PanoramaActivity code of the default Android camera app.
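As a sketch, the usual SurfaceView wiring looks something like this (mCamera is a placeholder; the host class is assumed to implement SurfaceHolder.Callback and Camera.PreviewCallback):

    @Override
    public void surfaceCreated(SurfaceHolder holder) {
        try {
            mCamera.setPreviewDisplay(holder); // must happen before startPreview()
            mCamera.setPreviewCallback(this);  // onPreviewFrame() fires per frame
            mCamera.startPreview();
        } catch (IOException e) {
            Log.e("CameraDemo", "setPreviewDisplay failed", e);
        }
    }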
Without camera_object.setPreviewDisplay(surface_holder); you cannot receive camera callbacks. Don't forget also:
    surface_view.setVisibility(View.VISIBLE);
    surface_holder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
You can hide the camera preview under another view; on 3.0 and higher you can even push the surface off screen (display it below the bottom of the screen). I am not sure whether the latter trick works on 2.3.6.
I have a little experimentation app (essentially a very cut-down version of the LunarLander demo in the Android SDK), with a single SurfaceView. I have a Drawable "sprite" which I periodically draw into the SurfaceView's Canvas object in different locations, without attempting to erase the previous image. Thus:
    private class MyThread extends Thread {
        SurfaceHolder holder; // Initialised in ctor (acquired via getHolder())
        Drawable sprite;      // Initialised in ctor
        Rect bounds;          // Initialised in ctor
        ...

        @Override
        public void run() {
            while (true) {
                Canvas c = holder.lockCanvas();
                synchronized (bounds) {
                    sprite.setBounds(bounds);
                }
                sprite.draw(c);
                holder.unlockCanvasAndPost(c);
            }
        }

        /**
         * Periodically called from activity thread
         */
        public void updatePos(int dx, int dy) {
            synchronized (bounds) {
                bounds.offset(dx, dy);
            }
        }
    }
Running in the emulator, what I'm seeing is that after a few updates have occurred, several old "copies" of the image begin to flicker, i.e. appearing and disappearing. I initially assumed that perhaps I was misunderstanding the semantics of a Canvas, and that it somehow maintains "layers", and that I was thrashing it to death. However, I then discovered that I only get this effect if I try to update faster than roughly every 200 ms. So my next best theory is that this is perhaps an artifact of the emulator not being able to keep up, and tearing the display. (I don't have a physical device to test on, yet.)
Is either of these theories correct?
Note: I don't actually want to do this in practice (i.e. draw hundreds of overlaid copies of the same thing). However, I would like to understand why this is happening.
Environment:
Eclipse 3.6.1 (Helios) on Windows 7
JDK 6
Android SDK Tools r9
App is targeting Android 2.3.1
Tangential question:
My run() method is essentially a stripped-down version of how the LunarLander example works (with all the excess logic removed). I don't quite understand why this isn't going to saturate the CPU, as there seems to be nothing to prevent it running at full pelt. Can anyone clarify this?
Ok, I've butchered Lunar Lander in a similar way to you, and having seen the flickering I can tell you that what you are seeing is a simple artefact of the double-buffering mechanism that every Surface has.
When you draw anything on a Canvas attached to a Surface, you are drawing to the 'back' buffer (the invisible one). And when you unlockCanvasAndPost() you are swapping the buffers over... what you drew suddenly becomes visible as the "back" buffer becomes the "front", and vice versa. And so your next frame of drawing is done to the old "front" buffer...
The point is that you always draw to separate buffers on alternate frames. I guess there's an implicit assumption in the graphics architecture that you're always going to write every pixel.
Having understood this, I think the real question is why it doesn't flicker on hardware. Having worked on graphics drivers in years gone by, I can guess at the reasons, but hesitate to speculate too far. Hopefully the above will be sufficient to satisfy your curiosity about this rendering artefact. :-)
You need to clear the previous position of the sprite, as well as the new position. This is what the View system does automatically. However, if you use a Surface directly and do not redraw every pixel (either with an opaque color or using a SRC blending mode) you must clear the content of the buffer yourself. Note that you can pass a dirty rectangle to lockCanvas() and it will do the union for you of the previous dirty rectangle and the one you are passing (this is the mechanism used by the UI toolkit.) It will also set the clip rect of the Canvas to be the union of these two rectangles.
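For example, a minimal way to satisfy the redraw-every-pixel requirement in the loop above is to clear before drawing (a sketch; it assumes the same holder/sprite/bounds fields as the thread shown earlier):

    Canvas c = holder.lockCanvas();
    try {
        c.drawColor(Color.BLACK); // clear the whole back buffer first
        synchronized (bounds) {
            sprite.setBounds(bounds);
        }
        sprite.draw(c);
    } finally {
        holder.unlockCanvasAndPost(c);
    }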
As for your second question, unlockCanvasAndPost() waits for vsync, so you will never draw at more than ~60 fps (most devices that I've seen have a display refresh rate set around 55 Hz).