Adapting Grafika RecordFBOActivity to work with Android GPUImage

I have an application that is using the Android port of GPUImage as the OpenGL Renderer and manager of several filters.
It currently does not have a video implementation, so I am trying to adapt the RecordFBOActivity from the Google grafika repository to work with the GPUImage architecture.
The base GPUImage class manages the GL context and the GLSurfaceView, and the GPUImageRenderer class implements the Renderer interface.
This is the class into which I am trying to adapt the RenderThread from grafika's RecordFBOActivity. There are a few problems.
First, in the prepareGl() method I am passing a SurfaceTexture instead of a Surface, because GPUImage doesn't use the SurfaceHolder at all (I think I could implement it, but I'm trying not to change the base code too much, since I would like to push my implementation back to the aforementioned repo). I know that WindowSurface.java has an overloaded constructor that accepts a SurfaceTexture as well as a Surface, but if I use it the mSurface instance variable is always null, since I never have a Surface to pass in, which causes an NPE in the makeCurrent() method when recording.
Second, GPUImage attaches itself to a GLSurfaceView, not a SurfaceView as the grafika example uses, so I'm a little uncertain whether there are any low-level inconsistencies that may be causing conflicts for me...
Third, and I think this is the main issue at the moment: I can't seem to reconcile GPUImage's camera preview with grafika's WindowSurface. If I comment out the prepareGl() method, GPUImage's setUpSurfaceTexture() sets the camera's preview texture from the SurfaceTexture created by glGenTextures(), and the preview works fine and is attached to the filter render chain. However, if I call prepareGl() and pass that exact same SurfaceTexture to the mWindowSurface constructor, the camera service dies and I get an EGL_BAD_SURFACE error.
Long question with a few moving parts, I know... I will try to edit/update as I clarify the issues and approaches to myself, but I would love to hear anyone's thoughts/suggestions... particularly #fadden :D

I was also trying to achieve the same thing and tried what fadden suggested: integrating the CameraSurfaceRenderer functionality into GPUImageRenderer. The preview is fine, but the recording is just a video of black frames. EGL14.eglGetCurrentContext() returns null for the following call, and my guess is that if a new context is created it will not be the same one GPUImage is using:
mVideoEncoder.startRecording(new TextureMovieEncoder.EncoderConfig(
        mOutputFile, 640, 480, 1000000, EGL14.eglGetCurrentContext()));
@Jesses.co.tt were you able to achieve it?
(Since I can't add a comment, this is posted as an answer.)
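One possible explanation (a guess, not something confirmed in this thread): EGL14.eglGetCurrentContext() only returns a real context when it is called on the thread where that context is current. GPUImage's context is current on the GLSurfaceView render thread, so starting the encoder there, e.g. from onDrawFrame(), should hand the encoder a valid shared context. A minimal sketch, where mRecordingRequested, mRecordingStarted, mVideoEncoder and mOutputFile are placeholder names:

@Override
public void onDrawFrame(GL10 gl) {
    if (mRecordingRequested && !mRecordingStarted) {
        // On this thread the GPUImage context is current, so this no longer
        // returns EGL_NO_CONTEXT and the encoder can share it.
        mVideoEncoder.startRecording(new TextureMovieEncoder.EncoderConfig(
                mOutputFile, 640, 480, 1000000, EGL14.eglGetCurrentContext()));
        mRecordingStarted = true;
    }
    // ... existing GPUImage drawing code ...
}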

Related

Android Camera2 Basics API

I am reading the code about Android Camera2 APIs from here:
https://github.com/googlesamples/android-Camera2Basic
And these lines are confusing:
https://github.com/googlesamples/android-Camera2Basic/blob/master/Application/src/main/java/com/example/android/camera2basic/Camera2BasicFragment.java#L570-L574
The previewRequest builder adds only one surface as a target, the TextureView's surface used to show the preview. But the following line actually adds both surfaces as outputs. As I understand it, this should not fire the OnImageAvailable listener during preview, no? So why add the ImageReader's surface here?
I tried removing the ImageReader's surface there, but then I got an error when I actually wanted to capture an image...
SOOO CONFUSING!!!
You need to declare all output Surfaces that image data might be sent to at the time you create a CameraCaptureSession. This is just the way the framework is designed.
Whenever you create a CaptureRequest, you add a (list of) target output Surface(s). This is where the image data from the captured frame will go- it may be a Surface associated with a TextureView for displaying, or with an ImageReader for saving, or with an Allocation for processing, etc. (A Surface is really just a buffer which can take the data output by the camera. The type of object that buffer is associated with determines how you can access/work with the data.)
You don't have to send the data from each frame to all registered Surfaces; each frame only goes to the subset you add as targets. But you can't add a Surface as a target to a CaptureRequest if it wasn't registered with the CameraCaptureSession when the session was created. Well, you can, but submitting that request to the session will cause a crash, so don't.
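As a rough sketch of that pattern (not code from the linked sample; mCameraDevice, mPreviewSurface and mImageReader are placeholder names), the session is created with both Surfaces, while the repeating preview request only targets the preview Surface:

// Register every output Surface the camera might ever write to.
mCameraDevice.createCaptureSession(
        Arrays.asList(mPreviewSurface, mImageReader.getSurface()),
        new CameraCaptureSession.StateCallback() {
            @Override
            public void onConfigured(CameraCaptureSession session) {
                try {
                    CaptureRequest.Builder preview =
                            mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
                    preview.addTarget(mPreviewSurface);   // preview frames go only here
                    session.setRepeatingRequest(preview.build(), null, null);
                } catch (CameraAccessException e) {
                    e.printStackTrace();
                }
            }

            @Override
            public void onConfigureFailed(CameraCaptureSession session) { }
        }, null);

// A later still-capture request can addTarget(mImageReader.getSurface()) precisely
// because that Surface was registered with the session above.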

What Surface to use for eglMakeCurrent for context that only renders into FBOs

I'm having the following situation:
In a cross-platform rendering library for iOS and Android (written in C/C++), I have two threads that each need their own EGLContext:
Thread A is the main thread; it renders to the Window.
Thread B is a generator thread, that does various calculations and renders the results into textures that are later used by thread A.
Since I can't use EGL on iOS, the library uses function pointers to static Objective-C functions to create a new context and set it current.
This already works, i create the context for thread A using
EAGLContext *contextA = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
The context for thread B is created using
EAGLContext *contextB = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2 sharegroup:[contextA sharegroup]];
I can then set either of the two current:
[EAGLContext setCurrentContext:context];
To use the same logic (function pointers passed to the library) on Android, I want to do this on the C side of the JNI bindings, this time using real EGL instead of Apple's EAGL.
I can easily create contextA using a window surface and the native window, and I can create contextB by passing contextA to the shareContext parameter of the eglCreateContext call.
But when I want to make contextB current, I have to pass a surface to the eglMakeCurrent call, and I'm trying to figure out what kind of surface to pass there.
I cannot use the WindowSurface i use for contextA as the spec says in section 3.7 that "At most one context for each supported client API may be current to a particular thread at a given time, and at most one context may be bound to a particular surface at a given time."
I cannot specify EGL_NO_SURFACE, because that would result in an EGL_BAD_MATCH error in the eglMakeCurrent call.
It seems I could use a pbuffer surface, but I hesitate because I'd have to specify the width and height when I create such a surface, and thread B might want to create textures of different sizes. In addition, the "OpenGL ES 2.0 Programming Guide" by Munshi, Ginsburg, and Shreiner states in section 3.8 that "Pbuffers are most often used for generating texture maps. If all you want to do is render to a texture, we recommend using framebuffer objects [...] instead of pbuffers because they are more efficient", which is exactly what I want to do in thread B.
I don't understand what Munshi, Ginsburg and Shreiner mean by that sentence: how would a framebuffer object be a replacement for a pbuffer surface? What if I create a very small (say 1x1 px) pbuffer surface to make the context current - can I then still render into arbitrarily large FBOs? Are there any other possibilities I'm not yet aware of?
Thanks a lot for your help!
The surface you pass to eglMakeCurrent() must be an EGL surface from eglCreateWindowSurface(). For example:
EGLSurface EglSurface = mEgl.eglCreateWindowSurface(mEglDisplay, maEGLconfigs[0], surfaceTexture, null);
mEgl.eglMakeCurrent(mEglDisplay, EglSurface, EglSurface, mEglContext);
But eglCreateWindowSurface() requires a native window such as a SurfaceTexture, which is provided to the onSurfaceTextureAvailable() callback when a TextureView is created; you can also create off-screen SurfaceTextures without any View.
There is an example app that uses TextureView in the Android SDK here, although it uses the SurfaceTexture for camera video rather than OpenGL ES rendering:
sources\android-17\com\android\test\hwui\GLTextureViewActivity.java
By default, the EGL window surface will have the same size as the SurfaceTexture it was created from. You can change the size of a SurfaceTexture with:
surfaceTexture.setDefaultBufferSize(width, height);
Don't use pbuffers on Android because some platforms (Nvidia Tegra) do not support them.
This article explains the advantages of FBOs over pbuffers in detail:
http://processors.wiki.ti.com/index.php/Render_to_Texture_with_OpenGL_ES
I ended up using a pbuffer surface (sized 1x1); I then create an FBO and render into textures just fine. For displaying them (on a different thread and in a different, shared OpenGL context), I use a window surface with an ANativeWindow (there's a sample of that somewhere in the SDK).
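A minimal sketch of that approach, shown here with the Java EGL14 bindings (the C EGL calls are the same); mEglDisplay, mEglConfig and mEglContext are assumed to already exist, and the chosen EGLConfig needs EGL_PBUFFER_BIT in its EGL_SURFACE_TYPE:

int[] pbufferAttribs = { EGL14.EGL_WIDTH, 1, EGL14.EGL_HEIGHT, 1, EGL14.EGL_NONE };
EGLSurface tinySurface =
        EGL14.eglCreatePbufferSurface(mEglDisplay, mEglConfig, pbufferAttribs, 0);
EGL14.eglMakeCurrent(mEglDisplay, tinySurface, tinySurface, mEglContext);

// The 1x1 pbuffer only satisfies eglMakeCurrent(); all real rendering goes to an
// FBO, whose texture attachment can be any size.
int[] fbo = new int[1];
GLES20.glGenFramebuffers(1, fbo, 0);
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);
// ... attach a texture with glFramebufferTexture2D() and draw into it ...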
If drawing to an FBO is the only thing you want to do, you can grab any EGLContext that has already been created by you or someone else (e.g. a GLSurfaceView), make it current, and then just generate your FBO and draw with it.
The problem is how to share a context created by, say, GLSurfaceView with your cross-platform C++ library. I did this by calling a static function inside the C++ code to grab the EGL context and surface immediately after the context has been made current by the Java layer, like the code below:
// This is a GLSurfaceView.Renderer method
@Override
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
    // Up to this point, we know the EGLContext
    // has already been made current on this thread.
    // Call the JNI function to set up the context.
    graphics_library_setup_context();
}
The C++ counterpart:
void setup_context() {
    context = eglGetCurrentContext();
    display = eglGetCurrentDisplay();
    surface = eglGetCurrentSurface(EGL_DRAW);
}

Converting from GLSurfaceView to TextureView (via GLTextureView)

When Android 4.0 (Ice Cream Sandwich) was released, a new view was introduced into the SDK: the TextureView. The documentation says that a TextureView can be used to display content for an OpenGL scene.
When you look up how to do this, you'll find this link to one example.
https://groups.google.com/forum/?fromgroups=#!topic/android-developers/U5RXFGpAHPE
However I wanted to just replace GLSurfaceView with TextureView, and keep the rest of my code the same, and just receive the advantages of the TextureView.
Answer:
1) Start with the source code of the GLSurfaceView, name the file GLTextureView.java
2) Change the header to:
public class GLTextureView extends TextureView implements SurfaceTextureListener
3) Rename constructors to GLTextureView. Remove code from init() method.
4) Organize imports. Always choose the non-GLSurfaceView option.
5) Find every instance of SurfaceHolder and change it to a SurfaceTexture
6) Add the unimplemented methods for the SurfaceTextureListener; each should delegate as follows (a code sketch of these methods appears after this list):
onSurfaceTextureAvailable - surfaceCreated(surface)
onSurfaceTextureDestroyed - surfaceDestroyed(surface), (return true)
onSurfaceTextureSizeChanged - surfaceChanged(surface, 0, width, height)
onSurfaceTextureUpdated - requestRender()
7) There should be one line where there is a call being made to getHolder(), change that to getSurfaceTexture()
8) In the init() method, put the following line setSurfaceTextureListener(this)
Then add an OnLayoutChangeListener and have it call surfaceChanged(getSurfaceTexture(), 0, right - left, bottom - top).
With that you should be able to replace your GLSurfaceView code with GLTextureView and get the benefits of TextureView. Also make sure your app supports hardware acceleration and that your Renderer implements GLTextureView.Renderer.
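For reference, a sketch of what step 6 ends up looking like, assuming the surfaceCreated/surfaceChanged/surfaceDestroyed methods copied from the GLSurfaceView source were changed to take a SurfaceTexture as described in step 5:

@Override
public void onSurfaceTextureAvailable(SurfaceTexture surface, int width, int height) {
    surfaceCreated(surface);
}

@Override
public boolean onSurfaceTextureDestroyed(SurfaceTexture surface) {
    surfaceDestroyed(surface);
    return true;
}

@Override
public void onSurfaceTextureSizeChanged(SurfaceTexture surface, int width, int height) {
    surfaceChanged(surface, 0, width, height);
}

@Override
public void onSurfaceTextureUpdated(SurfaceTexture surface) {
    requestRender();
}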
A minor addition to Mr. Goodale's brilliant answer:
The 4.1.1 version of GLSurfaceView seems to have been modified to avoid rendering on a zero-width/height surface, and there no longer seems to be a gratuitous onSurfaceTextureSizeChanged notification immediately following onSurfaceTextureAvailable.
If you start with the 4.1.1 sources, onSurfaceTextureAvailable needs to read as follows:
public void onSurfaceTextureAvailable(SurfaceTexture surface, int width, int height) {
    this.surfaceCreated(surface);
    this.surfaceChanged(surface, 0, width, height);
}
Other than that, I was up and running in about five minutes flat! Thanks.
Thanks to Mr. Goodale and Mr. Davies for the answers!
I have some extra notes about converting GLSurfaceView to GLTextureView.
The first is about the render mode.
As described there, just remove the requestRender() call in onSurfaceTextureUpdated.
The second is about this line:
mGLESVersion = SystemProperties.getInt("ro.opengles.version", ConfigurationInfo.GL_ES_VERSION_UNDEFINED);
Just use the linked approach, but note that you need a Context to call context.getClassLoader().
You can call the reflection version of getInt() from init() and save the result in a static field:
sGLESVersion = getInt(getContext(), "ro.opengles.version",ConfigurationInfo.GL_ES_VERSION_UNDEFINED);
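A hedged sketch of the reflection helper hinted at above (SystemProperties is a hidden framework class, so it is loaded through the Context's class loader; the helper name getInt simply mirrors the call above):

private static int getInt(Context context, String key, int def) {
    try {
        Class<?> systemProperties =
                context.getClassLoader().loadClass("android.os.SystemProperties");
        Method getInt = systemProperties.getMethod("getInt", String.class, int.class);
        return (Integer) getInt.invoke(null, key, def);
    } catch (Exception e) {
        return def;
    }
}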
And the last, easiest change concerns EGLLogWrapper.getErrorString(error):
just copy getErrorString() from the EGLLogWrapper sources.
See the final version of my conversion GLSurfaceView to GLTextureView on GitHub Gist
If you want to copy/paste a ready-made class, I wrote one here:
GLTextureView
You can call setRenderer(GLSurfaceView.Renderer), like with a GLSurfaceView.
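A short usage example, assuming the linked class mirrors the GLSurfaceView API (the layout id and renderer class here are placeholders, and setEGLContextClientVersion() is assumed to have been carried over from GLSurfaceView):

GLTextureView textureView = (GLTextureView) findViewById(R.id.gl_texture_view);
textureView.setEGLContextClientVersion(2);
textureView.setRenderer(new MyRenderer());   // MyRenderer implements GLSurfaceView.Renderer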

really confused with setPreviewCallback in Android, need advice

I'm building an application on Android to take frames from the camera, process them, and then display each frame on a SurfaceView, as well as draw on the SurfaceView via the Canvas, drawBitmap(), and so on.
Just to check, are SurfaceView, Bitmaps, and Canvases the best way to do this? I'm after speed.
Assuming the answer to the above is yes, the question would be: where should I place the following call?
camera_object.setPreviewCallback(new PreviewCallback() {
    public void onPreviewFrame(byte[] data, Camera camera) { /* process frame */ }
});
Should I place it in onCreate(), or in surfaceCreated() or surfaceChanged()?
I declared my MainActivity class as follows:
public class MainActivity extends Activity
        implements SurfaceHolder.Callback, Camera.PreviewCallback {
and in that class Eclipse forces me to create an override for onPreviewFrame in the MainActivity class, as follows:
public void onPreviewFrame(byte[] data, Camera camera){
}
but it never gets called. Should I try to use this function? Is it better to use it? Or is it just an Eclipse thing?
Please advise.
Are you calling setPreviewDisplay(), startPreview() and setPreviewCallback(this) from the app? Without those you will not get any calls to onPreviewFrame(). In fact, if you are using SurfaceView, the callback preview buffers are a copy of the actual buffers being displayed on the screen, so if you want to display these copied buffers you need to create a new view and draw over it, which is inefficient. I would suggest you use a SurfaceTexture instead and use the onFrameAvailable callback to get the frames, then draw and display them manually. An example of this can be found in the PanoramaActivity code of the default Android Camera app.
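If you do stay with SurfaceView, a minimal sketch of the setup order described above, typically placed in surfaceCreated() (mCamera is a placeholder field, and the Activity is assumed to implement Camera.PreviewCallback as in the question):

@Override
public void surfaceCreated(SurfaceHolder holder) {
    try {
        mCamera = Camera.open();
        mCamera.setPreviewDisplay(holder);   // preview goes to the SurfaceView
        mCamera.setPreviewCallback(this);    // onPreviewFrame() receives each frame
        mCamera.startPreview();              // callbacks start firing after this
    } catch (IOException e) {
        e.printStackTrace();
    }
}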
Without camera_object.setPreviewDisplay(surface_holder); you cannot receive camera callbacks; don't forget also
surface_view.setVisibility(View.VISIBLE);
surface_holder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
You can hide the camera preview under another view; on 3.0 and higher you can even push the surface off-screen (display it below the bottom of the screen). I am not sure whether the latter trick works on 2.3.6.

How to use android.media.effect.Effect

Can anyone point me in the right direction on how to use these GPU-accelerated effects, available since Android 4.0, on a Bitmap?
The documentation states, for example, that they "must be bound to a GL_TEXTURE_2D texture image". But what would be the best way to do this?
The first step to create an Effect should be to "Call EffectContext.createWithCurrentGlContext() from your OpenGL ES 2.0 context." But when I do this in my activity, it fails with the exception "Attempting to initialize EffectContext with no active GL context". So how do I get an active GL context?
That happens because you create the Effect in your Activity, where no GL context is current on the calling thread, so it throws an exception. You should create it in the onSurfaceCreated() method of the Renderer interface instead.
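A minimal sketch of that, assuming the bitmap has already been uploaded into the texture mTextures[0] and mTextures[1] receives the output (all field names here are placeholders):

@Override
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
    // A GL context is current on this thread, so this call no longer throws.
    mEffectContext = EffectContext.createWithCurrentGlContext();
}

@Override
public void onDrawFrame(GL10 gl) {
    EffectFactory factory = mEffectContext.getFactory();
    Effect sepia = factory.createEffect(EffectFactory.EFFECT_SEPIA);
    // Input and output are GL_TEXTURE_2D texture names; the input should already
    // contain the bitmap (e.g. uploaded with GLUtils.texImage2D()).
    sepia.apply(mTextures[0], mWidth, mHeight, mTextures[1]);
    // ... then draw mTextures[1] to the screen with your own quad/shader ...
}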
