OpenGL ES VBO glGetError 4242512 - what is it? - android

Hello, I get "glGetError 4242512"; my code is:
if (bUseVBO) {
    // gl11
    GL11 gl11 = (GL11) gl;
    int[] buffer = new int[1];
    gl11.glGenBuffers(1, buffer, 0);
    textureBufferIndex = buffer[0];
    Log.e("error", buffer[0] + " " + (gl11 == null) + " ERR " + gl.glGetError());
    gl11.glBindBuffer(GL11.GL_ARRAY_BUFFER, textureBufferIndex);
    gl11.glBufferData(GL11.GL_ARRAY_BUFFER, texCoords.length * 4, mTexBuffer, GL11.GL_STATIC_DRAW);
    gl11.glBindBuffer(GL11.GL_ARRAY_BUFFER, 0);
}
Edit: if I call this in onSurfaceCreated then everything goes fine; I get glGetError 0, which is perfect.
If I start this call from a separate Thread, then I get this number "4242512", and textureBufferIndex ends up invalid as well. Why?

This happens because you are calling OpenGL functions without an OpenGL context made current in the thread. Your "main" thread has an OpenGL context, and thus GL calls work without a problem, but your "other" thread doesn't have a GL context, so the GL calls fail.
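If the VBO setup really has to be triggered from a worker thread, one option is to hand the GL calls back to the GL thread so they run with the renderer's context current. A minimal sketch, assuming a GLSurfaceView field named mGLSurfaceView and that the renderer stored the GL11 instance it received in onSurfaceCreated in a field named mGL11 (both names are assumptions, not from the original code):

mGLSurfaceView.queueEvent(new Runnable() {
    @Override
    public void run() {
        // This Runnable executes on the GL thread, where the EGL context is current,
        // so glGenBuffers/glBufferData behave normally here.
        int[] buffer = new int[1];
        mGL11.glGenBuffers(1, buffer, 0);
        textureBufferIndex = buffer[0];
        mGL11.glBindBuffer(GL11.GL_ARRAY_BUFFER, textureBufferIndex);
        mGL11.glBufferData(GL11.GL_ARRAY_BUFFER, texCoords.length * 4, mTexBuffer, GL11.GL_STATIC_DRAW);
        mGL11.glBindBuffer(GL11.GL_ARRAY_BUFFER, 0);
    }
});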


eglCreateWindowSurface() can only be called with an instance of Surface, SurfaceView, SurfaceTexture or SurfaceHolder

I am using RecordableSurfaceView:
https://github.com/spaceLenny/recordablesurfaceview/blob/master/recordablesurfaceview/src/main/java/com/uncorkedstudios/android/view/recordablesurfaceview/RecordableSurfaceView.java
On Android 6 (API 23) I get this error. Is there a way to fix this?
eglCreateWindowSurface() can only be called with an instance of Surface, SurfaceView, SurfaceTexture or SurfaceHolder at the moment, this will be fixed later.
The exception is thrown from RecordableSurfaceView. The code segment that is probably involved:
mEGLSurface = EGL14
        .eglCreateWindowSurface(mEGLDisplay, eglConfig, RecordableSurfaceView.this,
                surfaceAttribs, 0);
EGL14.eglMakeCurrent(mEGLDisplay, mEGLSurface, mEGLSurface, mEGLContext);

// guarantee to only report surface as created once GL context
// associated with the surface has been created, and call on the GL thread
// NOT the main thread but BEFORE the codec surface is attached to the GL context
if (mRendererCallbacksWeakReference != null
        && mRendererCallbacksWeakReference.get() != null) {
    mRendererCallbacksWeakReference.get().onSurfaceCreated();
}

mEGLSurfaceMedia = EGL14
        .eglCreateWindowSurface(mEGLDisplay, eglConfig, mSurface,
                surfaceAttribs, 0);
GLES20.glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
Write a null check for this mEGLSurface and you are done:
(mEGLSurface != null)
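A minimal sketch of what that guard could look like around the snippet above (a defensive check only, not a fix for the underlying cause; the field names are the ones from the snippet):

mEGLSurface = EGL14.eglCreateWindowSurface(mEGLDisplay, eglConfig,
        RecordableSurfaceView.this, surfaceAttribs, 0);
if (mEGLSurface != null && mEGLSurface != EGL14.EGL_NO_SURFACE) {
    EGL14.eglMakeCurrent(mEGLDisplay, mEGLSurface, mEGLSurface, mEGLContext);
    // ... continue with the rest of the setup only when the surface was created
}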
In the code snippet of RecordableSurfaceView, the second call to eglCreateWindowSurface passes in an mSurface variable, which is initialized in doSetup via:
mSurface = MediaCodec.createPersistentInputSurface();
I would guess that your codec doesn't support this function and is somehow returning null, which is causing the exception you see. Or perhaps the surface is being used by more than one codec or recorder instance?
The only other somewhat related question on SO I could find is: MediaCodec's Persistent Input Surface unsupported by Camera2 Session?
Can you at least clarify from the stack trace where in the library it's crashing? In other words, from which call of eglCreateWindowSurface?

eglSwapBuffers fails with EGL_BAD_SURFACE when using a Surface from MediaCodec

I'm trying to encode a movie using MediaCodec and Surfaces (pixel-buffer mode works, but performance is not good enough). However, every time I try to call eglSwapBuffers(), it fails with EGL_BAD_SURFACE, and as such dequeueOutputBuffer() always returns -1 (INFO_TRY_AGAIN_LATER).
I have seen the examples on Bigflake and Grafika and I have another working project where everything is ok, but I need to get this working in another setup which is slightly different.
I currently have a GLSurfaceView which does screen rendering and is supplied with a custom EGLContextFactory/EGLConfigChooser. This allows me to create shared contexts to be used for separate OpenGL rendering in a native library. These are created using EGL10, but this should not be an issue as the underlying contexts only care about the client version, from what I know.
I've made sure the context is recordable, using the following config:
private android.opengl.EGLConfig chooseConfig14(android.opengl.EGLDisplay display) {
    // Configure EGL for recording and OpenGL ES 3.x
    int[] attribList = {
            EGL14.EGL_RED_SIZE, 8,
            EGL14.EGL_GREEN_SIZE, 8,
            EGL14.EGL_BLUE_SIZE, 8,
            EGL14.EGL_ALPHA_SIZE, 8,
            EGL14.EGL_RENDERABLE_TYPE, EGLExt.EGL_OPENGL_ES3_BIT_KHR,
            EGLExt.EGL_RECORDABLE_ANDROID, 1,
            EGL14.EGL_NONE
    };
    android.opengl.EGLConfig[] configs = new android.opengl.EGLConfig[1];
    int[] numConfigs = new int[1];
    if (!EGL14.eglChooseConfig(display, attribList, 0, configs, 0,
            configs.length, numConfigs, 0)) {
        return null;
    }
    return configs[0];
}
Now, I tried to simplify the scenario, so when recording is started I initialise an instance of MediaCodec as an encoder and call createInputSurface() on the GLSurfaceView's thread. After I have a surface, I turn it into an EGLSurface (EGL14), as follows:
EGLSurface createEGLSurface(Surface surface) {
    if (surface == null) return null;
    int[] surfaceAttribs = { EGL14.EGL_NONE };
    android.opengl.EGLDisplay display = EGL14.eglGetDisplay(EGL14.EGL_DEFAULT_DISPLAY);
    android.opengl.EGLConfig config = chooseConfig14(display);
    EGLSurface eglSurface = EGL14.eglCreateWindowSurface(display, config, surface, surfaceAttribs, 0);
    return eglSurface;
}
When a new frame arrives from the camera, I send it to the screen and to another class that handles recording. That class just renders it to the EGLSurface built from MediaCodec's input surface, as follows:
public void drawToSurface(EGLSurface targetSurface, int width, int height, long timestampNano, boolean ignoreOrientation) {
    if (mTextures[0] == null) {
        Log.w(TAG, "Attempting to draw without a source texture");
        return;
    }

    EGLContext currentContext = EGL14.eglGetCurrentContext();
    EGLDisplay currentDisplay = EGL14.eglGetCurrentDisplay();

    EGL14.eglMakeCurrent(currentDisplay, targetSurface, targetSurface, currentContext);
    int error = EGL14.eglGetError();

    ShaderProgram program = getProgramForTextureType(mTextures[0].getTextureType());
    program.draw(width, height, TextureRendererView.LayoutType.LINEAR_HORIZONTAL, 0, 1, mTextures[0]);
    error = EGL14.eglGetError();

    EGLExt.eglPresentationTimeANDROID(currentDisplay, targetSurface, timestampNano);
    error = EGL14.eglGetError();

    EGL14.eglSwapBuffers(currentDisplay, targetSurface);
    error = EGL14.eglGetError();

    Log.d(TAG, "drawToSurface");
}
For some reason, eglSwapBuffers() fails and reports EGL_BAD_SURFACE and I haven't found a way to debug this further.
Update
I've tried querying the current surface after the call that makes it current, and it always returns a malformed surface (looking inside, I can see the handle is 0 and it always fails when queried). It looks like eglMakeCurrent() silently fails to bind the surface to the context.
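For reference, one way to catch that silent failure earlier (a minimal sketch, reusing the local names from drawToSurface above) is to test the boolean results of the EGL calls instead of only polling eglGetError afterwards:

if (!EGL14.eglMakeCurrent(currentDisplay, targetSurface, targetSurface, currentContext)) {
    Log.e(TAG, "eglMakeCurrent failed: 0x" + Integer.toHexString(EGL14.eglGetError()));
    return;
}
// ... draw ...
if (!EGL14.eglSwapBuffers(currentDisplay, targetSurface)) {
    Log.e(TAG, "eglSwapBuffers failed: 0x" + Integer.toHexString(EGL14.eglGetError()));
}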
Moreover, I've determined this issue appears on Qualcomm chips (Adreno), not Kirin, so it's definitely related to the native OpenGL implementation (it's somewhat funny, because I've always found Adreno to be more permissive when it comes to "bad" OpenGL configurations).
Fixed! It turns out EGL10 and EGL14 appear to play nice together, but in some cases fail in very subtle ways, such as the one I encountered.
In my case, the EGLContextFactory I wrote was creating a base OpenGL ES context using EGL10 and then created more shared contexts on demand, again using EGL10. While I could retrieve them using EGL14 (either in Java or C) and the context handles were always correct (sharing textures between contexts worked like a charm), it failed mysteriously when trying to use an EGLSurface created from a context of EGL10 origin... on Adreno chips.
The solution was to switch the EGLContextFactory to start from a context created with EGL14 and to keep creating shared contexts with EGL14. For the GLSurfaceView, which still required EGL10, I had to use a hack:
@Override
public javax.microedition.khronos.egl.EGLContext createContext(EGL10 egl10, javax.microedition.khronos.egl.EGLDisplay eglDisplay, javax.microedition.khronos.egl.EGLConfig eglConfig) {
    EGLContext context = createContext();
    boolean success = EGL14.eglMakeCurrent(mBaseEGLDisplay, EGL14.EGL_NO_SURFACE, EGL14.EGL_NO_SURFACE, context);
    if (!success) {
        int error = EGL14.eglGetError();
        Log.w(TAG, "Failed to create a context. Error: " + error);
    }
    javax.microedition.khronos.egl.EGLContext egl14Context = egl10.eglGetCurrentContext(); // get an EGL10 representation of our EGL14 context
    javax.microedition.khronos.egl.EGLContext trueEGL10Context = egl10.eglCreateContext(eglDisplay, eglConfig, egl14Context, glAttributeList);
    destroyContext(context);
    return trueEGL10Context;
}
What this does is create a new shared context with EGL14, make it current, and then retrieve an EGL10 version of it. That version is not usable directly (for a reason I cannot exactly pin down), but another context shared with it works well. The starting EGL14 context can then be destroyed (in my case it's put into a stack and reused later on).
I would really love to understand why this hack is needed, but I'm glad to have a working solution still.

OpenGL drawing on Android combining with Unity to transfer texture through frame buffer cannot work

I'm currently making an Android player plugin for Unity. The basic idea is that I play the video with MediaPlayer on Android, which provides a setSurface API receiving a Surface built from a SurfaceTexture (which in turn is bound to an OpenGL ES texture). In most other cases, like showing an image, we can just send this texture in the form of a pointer/id to Unity, call Texture2D.CreateExternalTexture there to generate a Texture2D object, and set that on a UI GameObject to render the picture. However, when it comes to displaying video frames it's a little bit different, since video playing on Android requires a texture of type GL_TEXTURE_EXTERNAL_OES while Unity only supports the universal type GL_TEXTURE_2D.
To solve the problem, I've googled for a while and learned that I should adopt a technique called "render to texture". More precisely, I should generate two textures: one for the MediaPlayer and SurfaceTexture in Android to receive video frames, and another for Unity that should also have the picture data inside. The first one should be of type GL_TEXTURE_EXTERNAL_OES (let's call it the OES texture for short) and the second one of type GL_TEXTURE_2D (let's call it the 2D texture). Both of these generated textures are empty in the beginning. When bound to the MediaPlayer, the OES texture will be updated during video playing; then we can use a FrameBuffer to draw the content of the OES texture onto the 2D texture.
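For reference, a minimal sketch of what that per-frame OES-to-2D copy can look like (helper names such as mSurfaceTexture, mFboId, mOesTextureId, mOesProgram and drawFullScreenQuad() are assumptions, not part of the code shown below):

// Latch the newest video frame into the OES texture bound to mSurfaceTexture.
mSurfaceTexture.updateTexImage();
// The FBO has the GL_TEXTURE_2D attached as its color attachment 0.
GLES30.glBindFramebuffer(GLES30.GL_FRAMEBUFFER, mFboId);
GLES30.glViewport(0, 0, mTextureWidth, mTextureHeight);
// The fragment shader of mOesProgram samples a samplerExternalOES.
GLES30.glUseProgram(mOesProgram);
GLES30.glActiveTexture(GLES30.GL_TEXTURE0);
GLES30.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, mOesTextureId);
drawFullScreenQuad(); // hypothetical helper that issues the quad draw call
GLES30.glBindFramebuffer(GLES30.GL_FRAMEBUFFER, 0); // the 2D texture now holds the frame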
I've written a pure-Android version of this process, and it works pretty well when I finally draw the 2D texture on the screen. However, when I publish it as a Unity Android plugin and run the same code in Unity, no picture shows up. Instead, it only displays a preset color from glClearColor, which means two things:
1. The transfer process of OES texture -> FrameBuffer -> 2D texture is complete, and Unity does receive the final 2D texture, because the glClearColor is called only when we draw the content of the OES texture to the FrameBuffer.
2. Something goes wrong in the drawing that happens after glClearColor, because we don't see the video frame pictures. In fact, I also call glReadPixels after drawing and before unbinding the FrameBuffer, which reads data back from the FrameBuffer we bound, and it returns the single color value that is the same as the color we set in glClearColor.
In order to simplify the code I should provide here, I'm going to draw a triangle to a 2D texture through a FrameBuffer. If we can figure out which part is wrong, we can then easily solve the similar problem of drawing video frames.
The function that will be called from Unity:
public int displayTriangle() {
    Texture2D texture = new Texture2D(UnityPlayer.currentActivity);
    texture.init();

    Triangle triangle = new Triangle(UnityPlayer.currentActivity);
    triangle.init();

    TextureTransfer textureTransfer = new TextureTransfer();
    textureTransfer.tryToCreateFBO();

    mTextureWidth = 960;
    mTextureHeight = 960;
    textureTransfer.tryToInitTempTexture2D(texture.getTextureID(), mTextureWidth, mTextureHeight);

    textureTransfer.fboStart();
    triangle.draw();
    textureTransfer.fboEnd();

    // Unity needs a native texture id to create its own Texture2D object
    return texture.getTextureID();
}
Initialization of 2D texture:
protected void initTexture() {
    int[] idContainer = new int[1];
    GLES30.glGenTextures(1, idContainer, 0);
    textureId = idContainer[0];
    Log.i(TAG, "texture2D generated: " + textureId);
    // texture.getTextureID() will return this textureId

    bindTexture();
    GLES30.glTexParameterf(GLES30.GL_TEXTURE_2D,
            GLES30.GL_TEXTURE_MIN_FILTER, GLES30.GL_NEAREST);
    GLES30.glTexParameterf(GLES30.GL_TEXTURE_2D,
            GLES30.GL_TEXTURE_MAG_FILTER, GLES30.GL_LINEAR);
    GLES30.glTexParameteri(GLES30.GL_TEXTURE_2D,
            GLES30.GL_TEXTURE_WRAP_S, GLES30.GL_CLAMP_TO_EDGE);
    GLES30.glTexParameteri(GLES30.GL_TEXTURE_2D,
            GLES30.GL_TEXTURE_WRAP_T, GLES30.GL_CLAMP_TO_EDGE);
    unbindTexture();
}

public void bindTexture() {
    GLES30.glBindTexture(GLES30.GL_TEXTURE_2D, textureId);
}

public void unbindTexture() {
    GLES30.glBindTexture(GLES30.GL_TEXTURE_2D, 0);
}
draw() of Triangle:
public void draw() {
    float[] vertexData = new float[] {
            0.0f, 0.0f, 0.0f,
            1.0f, -1.0f, 0.0f,
            1.0f, 1.0f, 0.0f
    };
    vertexBuffer = ByteBuffer.allocateDirect(vertexData.length * 4)
            .order(ByteOrder.nativeOrder())
            .asFloatBuffer()
            .put(vertexData);
    vertexBuffer.position(0);

    GLES30.glClearColor(0.0f, 0.0f, 0.9f, 1.0f);
    GLES30.glClear(GLES30.GL_DEPTH_BUFFER_BIT | GLES30.GL_COLOR_BUFFER_BIT);
    GLES30.glUseProgram(mProgramId);

    vertexBuffer.position(0);
    GLES30.glEnableVertexAttribArray(aPosHandle);
    GLES30.glVertexAttribPointer(
            aPosHandle, 3, GLES30.GL_FLOAT, false, 12, vertexBuffer);
    GLES30.glDrawArrays(GLES30.GL_TRIANGLE_STRIP, 0, 3);
}
vertex shader of Triangle:
attribute vec4 aPosition;
void main() {
    gl_Position = aPosition;
}
fragment shader of Triangle:
precision mediump float;
void main() {
    gl_FragColor = vec4(0.9, 0.0, 0.0, 1.0);
}
Key code of TextureTransfer:
public void tryToInitTempTexture2D(int texture2DId, int textureWidth, int textureHeight) {
    if (mTexture2DId != -1) {
        return;
    }
    mTexture2DId = texture2DId;
    GLES30.glBindTexture(GLES30.GL_TEXTURE_2D, mTexture2DId);
    Log.i(TAG, "glBindTexture " + mTexture2DId + " to init for FBO");

    // make the 2D texture empty
    GLES30.glTexImage2D(GLES30.GL_TEXTURE_2D, 0, GLES30.GL_RGBA, textureWidth, textureHeight, 0,
            GLES30.GL_RGBA, GLES30.GL_UNSIGNED_BYTE, null);
    Log.i(TAG, "glTexImage2D, textureWidth: " + textureWidth + ", textureHeight: " + textureHeight);
    GLES30.glBindTexture(GLES30.GL_TEXTURE_2D, 0);

    fboStart();
    GLES30.glFramebufferTexture2D(GLES30.GL_FRAMEBUFFER, GLES30.GL_COLOR_ATTACHMENT0,
            GLES30.GL_TEXTURE_2D, mTexture2DId, 0);
    Log.i(TAG, "glFramebufferTexture2D");

    int fboStatus = GLES30.glCheckFramebufferStatus(GLES30.GL_FRAMEBUFFER);
    Log.i(TAG, "fbo status: " + fboStatus);
    if (fboStatus != GLES30.GL_FRAMEBUFFER_COMPLETE) {
        throw new RuntimeException("framebuffer " + mFBOId + " incomplete!");
    }
    fboEnd();
}

public void fboStart() {
    GLES30.glBindFramebuffer(GLES30.GL_FRAMEBUFFER, mFBOId);
}

public void fboEnd() {
    GLES30.glBindFramebuffer(GLES30.GL_FRAMEBUFFER, 0);
}
And finally some code on Unity-side:
int textureId = plugin.Call<int>("displayTriangle");
Debug.Log("native textureId: " + textureId);
Texture2D triangleTexture = Texture2D.CreateExternalTexture(
960, 960, TextureFormat.RGBA32, false, true, (IntPtr) textureId);
triangleTexture.UpdateExternalTexture(triangleTexture.GetNativeTexturePtr());
rawImage.texture = triangleTexture;
rawImage.color = Color.white;
Well, the code above does not display the expected triangle, only a blue background. I added glGetError after nearly every OpenGL function call, but no errors are thrown.
My Unity version is 2017.2.1. For the Android build, I turned off experimental multithreaded rendering and the other settings are all default (no texture compression, not a development build, and so on). My app's minimum API level is 5.0 Lollipop and the target API level is 9.0 Pie.
I really need some help, thanks in advance!
Now I found the answer: if you want to do any drawing in your plugin, you should do it at the native layer. So if you want to make an Android plugin, you should call OpenGL ES APIs from JNI instead of from the Java side. The reason is that Unity only allows drawing graphics on its rendering thread. If you simply call OpenGL ES APIs from the Java side as I did in the question description, they will actually run on the Unity main thread instead of the rendering thread. Unity provides a method, GL.IssuePluginEvent, to call your own functions on the rendering thread, but it requires native coding since this function takes a function pointer as its callback. Here is a simple example of how to use it:
On the JNI side:

// you can copy these headers from https://github.com/googlevr/gvr-unity-sdk/tree/master/native_libs/video_plugin/src/main/jni/Unity
#include "IUnityInterface.h"
#include "UnityGraphics.h"

static void on_render_event(int event_type) {
    // do all of your jobs related to rendering, including initializing the context,
    // linking shaders, creating the program, finding handles, drawing and so on
}

// UnityRenderingEvent is an alias of void(*)(int) defined in UnityGraphics.h
UnityRenderingEvent get_render_event_function() {
    UnityRenderingEvent ptr = on_render_event;
    return ptr;
}

// notice you should return a long value to the Java side
extern "C" JNIEXPORT jlong JNICALL
Java_com_abc_xyz_YourPluginClass_getNativeRenderFunctionPointer(JNIEnv *env, jobject instance) {
    UnityRenderingEvent ptr = get_render_event_function();
    return (jlong) ptr;
}
On the Android Java side:

class YourPluginClass {
    ...
    public native long getNativeRenderFunctionPointer();
    ...
}
On the Unity side:

private void IssuePluginEvent(int pluginEventType) {
    long nativeRenderFuncPtr = Call_getNativeRenderFunctionPointer(); // call through the plugin class
    IntPtr ptr = (IntPtr) nativeRenderFuncPtr;
    GL.IssuePluginEvent(ptr, pluginEventType); // pluginEventType maps to the native function's event_type parameter
}

void Start() {
    IssuePluginEvent(1); // let's assume 1 stands for initializing everything
    // get your texture2D id from the plugin, create a Texture2D object from it,
    // attach it to a GameObject, and start playing for the first time
}

void Update() {
    // call SurfaceTexture.updateTexImage in the plugin
    IssuePluginEvent(2); // let's assume 2 stands for transferring TEXTURE_EXTERNAL_OES to TEXTURE_2D through the FrameBuffer
    // call Texture2D.UpdateExternalTexture to update the GameObject's appearance
}
You still need to do the texture transfer, and everything about it should happen at the JNI layer. But don't worry: it is nearly the same as what I did in the question description, just in a language other than Java, and there is plenty of material about this process, so you can surely manage it.
Finally, let me state the key to solving this problem again: do your native stuff at the native layer and don't cling to pure Java... I'm genuinely surprised that there is no blog/answer/wiki telling us to simply write our code in C++. Although there are open-source implementations like Google's gvr-unity-sdk that give a complete reference, you may still doubt whether you could finish the task without writing any C++ code. Now we know that we can't. However, to be honest, I think Unity could make this process even easier.

android opengl es sharing texture between two contexts running on two threads

Main goal:
Create a texture in one thread; use that texture in another thread.
What I have done so far:
I created two contexts and two surfaces, and made context1 and surface1 current in the main thread.
surface1 = eglCreateWindowSurface(display, config, engine->app->window, NULL);
context1 = eglCreateContext(display, config, NULL, attribList);
context2 = eglCreateContext(display, config, context1, attribList);
eglMakeCurrent(display, surface1, surface1, context1);

eglQuerySurface(display, surface1, EGL_WIDTH, &w);
eglQuerySurface(display, surface1, EGL_HEIGHT, &h);

EGLint attribpbf[] =
{
    EGL_HEIGHT, h,
    EGL_WIDTH, w,
    EGL_NONE
};
surface2 = eglCreatePbufferSurface(display, config, attribpbf);
Now I created a new thread, and in that thread I made context2 and surface2 current:
eglMakeCurrent(display, surface2, surface2, context2);
Then I created a texture, did some rendering into the texture, and called glFlush();
I checked it here and the texture was successfully created.
After that I tried to use this texture as a texture attachment in the main thread, but the result was a blank black screen. There was no GL error. I think the texture was not shared successfully.
Can you please tell me what I am doing wrong? Are there cases in which a texture cannot be shared?
I think you have run into a synchronization problem: you render to your texture in context2 and then use the texture content immediately in context1, but the commands you submitted to context2 have not finished yet.
You could put a fence on context2's command stream and wait for it to finish in context1.
reference:
https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/glFenceSync.xhtml
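For illustration, here is a minimal GLES 3.0 sketch of that idea written against the Android Java bindings rather than the native EGL code from the question (it assumes both threads already have their shared contexts current and that the fence handle is passed between them):

// Producer thread (context2 current), right after rendering into the texture:
long fence = GLES30.glFenceSync(GLES30.GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
GLES30.glFlush(); // make sure the fence (and the rendering before it) is submitted

// Consumer thread (main thread, context1 current), before sampling the texture:
GLES30.glWaitSync(fence, 0, GLES30.GL_TIMEOUT_IGNORED);   // GPU-side wait
// or block the CPU instead:
// GLES30.glClientWaitSync(fence, GLES30.GL_SYNC_FLUSH_COMMANDS_BIT, 16000000L);
GLES30.glDeleteSync(fence);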

VBOs Extremely slow on real hardware (LG Optimus V 670) - Android 2.2.1

I'm working on a 2D OpenGL graphics engine for my Android game; so far I have implemented basic non-textured quad rendering via VBOs.
To do this, my graphics engine creates a VBO of a quad when initialized and, upon rendering, draws it using the location/dimensions specified by a Polygon2D object.
When rendering 30-50 quads on actual hardware (LG Optimus V 670) the frame rate is around 5-10, while on the emulator it is 30-40.
Here's the code to give a better understanding:
public void CreateBuffers(GL10 gl)
{
    GL11 gl11 = (GL11) gl;

    mQuadBuffer = ByteBuffer.allocateDirect(QUAD2D.length * 4).order(ByteOrder.nativeOrder()).asFloatBuffer();
    mQuadBuffer.put(QUAD2D, 0, QUAD2D.length);
    mQuadBuffer.flip();

    int[] buffer = new int[1];
    gl11.glGenBuffers(1, buffer, 0);
    mQuadVBOId = buffer[0];

    gl11.glBindBuffer(GL11.GL_ARRAY_BUFFER, mQuadVBOId);
    gl11.glBufferData(GL11.GL_ARRAY_BUFFER, mQuadBuffer.capacity() * 4, mQuadBuffer, GL11.GL_DYNAMIC_DRAW);
}
public void draw(GL10 gl) {
    GL11 gl11 = (GL11) gl;
    Polygon2D poly;
    int length = mPolygons.size();

    gl.glEnableClientState(GL11.GL_VERTEX_ARRAY);
    gl11.glBindBuffer(GL11.GL_ARRAY_BUFFER, mQuadVBOId);
    gl11.glVertexPointer(2, GL11.GL_FLOAT, 0, 0);

    for (int i = 0; i < length; i++)
    {
        poly = mPolygons.get(i);
        gl11.glPushMatrix();
        gl11.glTranslatef(poly.x, poly.y, 0);
        gl11.glScalef(poly.width, poly.height, 0);
        gl11.glDrawArrays(GL11.GL_LINE_LOOP, 0, 4);
        gl11.glPopMatrix();
    }

    gl11.glBindBuffer(GL11.GL_ARRAY_BUFFER, 0);
    gl.glDisableClientState(GL11.GL_VERTEX_ARRAY);
}
Am I doing something wrong? Other OpenGL applications, such as Replica Island, seem to run fine.
I doubt this is useful, but here are the specs: http://pdadb.net/index.php?m=specs&id=2746&c=lg_vm670_optimus_v
The VBOs are not slow, but the many separate draw calls are. You should combine all of the quads into one VBO (i.e. calculate the vertex positions for each quad yourself) and draw them with a single call. To be able to do that, you will also have to switch from GL_LINE_LOOP to GL_LINES to keep the quads separated.
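A minimal sketch of that batching idea, assuming a unit quad is what the original code scales/translates, and assuming hypothetical fields mBatchVBOId and mBatchBuffer (a direct FloatBuffer large enough for all quads); GL_LINES needs 8 vertices per quad outline, and the whole buffer is refilled each frame because the quads move:

// Build one big vertex array: 8 line vertices (4 edges) per quad, 2 floats each.
float[] batched = new float[mPolygons.size() * 8 * 2];
int o = 0;
for (Polygon2D poly : mPolygons) {
    float l = poly.x, t = poly.y, r = poly.x + poly.width, b = poly.y + poly.height;
    float[] edges = { l, t, r, t,   r, t, r, b,   r, b, l, b,   l, b, l, t };
    System.arraycopy(edges, 0, batched, o, edges.length);
    o += edges.length;
}
mBatchBuffer.clear();
mBatchBuffer.put(batched).flip();

gl.glEnableClientState(GL11.GL_VERTEX_ARRAY);
gl11.glBindBuffer(GL11.GL_ARRAY_BUFFER, mBatchVBOId);
// Re-upload the whole batch once per frame, then issue a single draw call.
gl11.glBufferData(GL11.GL_ARRAY_BUFFER, batched.length * 4, mBatchBuffer, GL11.GL_DYNAMIC_DRAW);
gl11.glVertexPointer(2, GL11.GL_FLOAT, 0, 0);
gl11.glDrawArrays(GL11.GL_LINES, 0, mPolygons.size() * 8);
gl11.glBindBuffer(GL11.GL_ARRAY_BUFFER, 0);
gl.glDisableClientState(GL11.GL_VERTEX_ARRAY);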
