I'm trying to encode a movie using MediaCodec and Surfaces (pixel-buffer mode works, but its performance is not good enough). However, every time I call eglSwapBuffers(), it fails with EGL_BAD_SURFACE, and as a result dequeueOutputBuffer() always returns -1 (INFO_TRY_AGAIN_LATER).
I have seen the examples on Bigflake and Grafika and I have another working project where everything is ok, but I need to get this working in another setup which is slightly different.
I currently have a GLSurfaceView which does screen rendering and is supplied with a custom EGLContextFactory/EGLConfigChooser. This allows me to create shared contexts to be used for separate OpenGL rendering in a native library. These are created using EGL10, but this should not be an issue as the underlying contexts only care about the client version, from what I know.
I've made sure the context is recordable, using the following config:
private android.opengl.EGLConfig chooseConfig14(android.opengl.EGLDisplay display) {
// Configure EGL for recording and OpenGL ES 3.x
int[] attribList = {
EGL14.EGL_RED_SIZE, 8,
EGL14.EGL_GREEN_SIZE, 8,
EGL14.EGL_BLUE_SIZE, 8,
EGL14.EGL_ALPHA_SIZE, 8,
EGL14.EGL_RENDERABLE_TYPE, EGLExt.EGL_OPENGL_ES3_BIT_KHR,
EGLExt.EGL_RECORDABLE_ANDROID, 1,
EGL14.EGL_NONE
};
android.opengl.EGLConfig[] configs = new android.opengl.EGLConfig[1];
int[] numConfigs = new int[1];
if (!EGL14.eglChooseConfig(display, attribList, 0, configs, 0,
configs.length, numConfigs, 0)) {
return null;
}
return configs[0];
}
Now, I tried to simplify the scenario, so when recording is started I initialise an instance of MediaCodec as an encoder and call createInputSurface() on the GLSurfaceView's thread. After I have a surface, I turn it into an EGLSurface (EGL14), as follows:
EGLSurface createEGLSurface(Surface surface) {
if (surface == null) return null;
int[] surfaceAttribs = { EGL14.EGL_NONE };
android.opengl.EGLDisplay display = EGL14.eglGetDisplay(EGL14.EGL_DEFAULT_DISPLAY);
android.opengl.EGLConfig config = chooseConfig14(display);
EGLSurface eglSurface = EGL14.eglCreateWindowSurface(display, config, surface, surfaceAttribs, 0);
return eglSurface;
}
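For reference, the encoder setup that produces the input Surface looks roughly like the following (a minimal sketch; the MIME type, resolution and bitrate are illustrative values, not the exact ones I use):
// Rough sketch of the encoder setup; createEncoderByType() may throw IOException.
MediaFormat format = MediaFormat.createVideoFormat("video/avc", 1280, 720);
format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
        MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
format.setInteger(MediaFormat.KEY_BIT_RATE, 6000000);
format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
Surface inputSurface = encoder.createInputSurface(); // then passed to createEGLSurface() above
encoder.start();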
When a new frame arrives from the camera, I send it to the screen and to another class that handles recording. That class just renders it to the EGLSurface built from MediaCodec's input surface, as follows:
public void drawToSurface(EGLSurface targetSurface, int width, int height, long timestampNano, boolean ignoreOrientation) {
if (mTextures[0] == null) {
Log.w(TAG, "Attempting to draw without a source texture");
return;
}
EGLContext currentContext = EGL14.eglGetCurrentContext();
EGLDisplay currentDisplay = EGL14.eglGetCurrentDisplay();
EGL14.eglMakeCurrent(currentDisplay, targetSurface, targetSurface, currentContext);
int error = EGL14.eglGetError();
ShaderProgram program = getProgramForTextureType(mTextures[0].getTextureType());
program.draw(width, height, TextureRendererView.LayoutType.LINEAR_HORIZONTAL, 0, 1, mTextures[0]);
error = EGL14.eglGetError();
EGLExt.eglPresentationTimeANDROID(currentDisplay, targetSurface, timestampNano);
error = EGL14.eglGetError();
EGL14.eglSwapBuffers(currentDisplay, targetSurface);
error = EGL14.eglGetError();
Log.d(TAG, "drawToSurface");
}
For some reason, eglSwapBuffers() fails and reports EGL_BAD_SURFACE and I haven't found a way to debug this further.
Update
I've tried querying the current surface after the call that makes it current, and it always returns a malformed surface (looking inside, I can see the handle is 0 and it always fails when queried). It looks like eglMakeCurrent() silently fails to bind the surface to the context.
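The check looked roughly like this (an illustrative sketch, not the exact code):
// Query the draw surface that should have just been made current.
EGLSurface current = EGL14.eglGetCurrentSurface(EGL14.EGL_DRAW);
int[] value = new int[1];
boolean ok = EGL14.eglQuerySurface(currentDisplay, current, EGL14.EGL_WIDTH, value, 0);
Log.d(TAG, "current draw surface: " + current + ", query ok: " + ok + ", width: " + value[0]);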
Moreover, I've determined that this issue appears on Qualcomm (Adreno) chips, not Kirin, so it's definitely related to the native OpenGL implementation (which is somewhat funny, because I've always found Adreno to be the more permissive one when it comes to "bad" OpenGL configurations).
Fixed! It turns out EGL10 and EGL14 appear to play nice together, but in some cases fail in very subtle ways, such as the one I encountered.
In my case, the EGLContextFactory I wrote was creating a base OpenGL ES context using EGL10 and then created more shared contexts on demand, again using EGL10. While I could retrieve them using EGL14 (either in Java or C) and the context handles were always correct (sharing textures between contexts worked like a charm), it failed mysteriously when trying to use an EGLSurface created from a context of EGL10 origin... on Adreno chips.
The solution was to switch the EGLContextFactory to start from a context created with EGL14 and continue to create shared contexts using EGL14. For the GLSurfaceView, which still required EGL10, I had to use a hack:
@Override
public javax.microedition.khronos.egl.EGLContext createContext(EGL10 egl10, javax.microedition.khronos.egl.EGLDisplay eglDisplay, javax.microedition.khronos.egl.EGLConfig eglConfig) {
EGLContext context = createContext();
boolean success = EGL14.eglMakeCurrent(mBaseEGLDisplay, EGL14.EGL_NO_SURFACE, EGL14.EGL_NO_SURFACE, context);
if (!success) {
int error = EGL14.eglGetError();
Log.w(TAG, "Failed to create a context. Error: " + error);
}
javax.microedition.khronos.egl.EGLContext egl14Context = egl10.eglGetCurrentContext(); //get an EGL10 context representation of our EGL14 context
javax.microedition.khronos.egl.EGLContext trueEGL10Context = egl10.eglCreateContext(eglDisplay, eglConfig, egl14Context, glAttributeList);
destroyContext(context);
return trueEGL10Context;
}
What this does is create a new shared context with EGL14, make it current and then retrieve an EGL10 version of it. That version is not usable directly (for a reason I cannot exactly understand), but another shared context created from it works well. The starting EGL14 context can then be destroyed (in my case it's put into a stack and reused later on).
I would really love to understand why this hack is needed, but I'm glad to have a working solution still.
I am using OpenGL ES 2.0 to build my variant of GLSurfaceView. I wanted to draw straight lines, so I used the code below (everything else is already set up):
GLES20.glDrawArrays(GLES20.GL_LINES, offset, no_of_coordinates);
The problem I am facing with the above line of code is that the lines are not smooth; instead they look like they have many breaks, as if small zigzag segments had been placed together.
Then I read this article (https://www.codeproject.com/Articles/199525/Drawing-nearly-perfect-D-line-segments-in-OpenGL) and added the code below:
glHint(GL_LINES, GL_NICEST);
But still nothing changed. Can you guide me on how to get smooth straight lines?
You need to provide an EGLConfigChooser instance to the GLSurfaceView and have it request 4x MSAA (4 samples per pixel for anti-aliasing).
Here's how you do it:
First, define a class with the definitions you need:
class MyConfigChooser implements GLSurfaceView.EGLConfigChooser {
@Override
public EGLConfig chooseConfig(EGL10 egl, EGLDisplay display) {
int attribs[] = {
EGL10.EGL_LEVEL, 0,
EGL10.EGL_RENDERABLE_TYPE, 4,
EGL10.EGL_COLOR_BUFFER_TYPE, EGL10.EGL_RGB_BUFFER,
EGL10.EGL_RED_SIZE, 8,
EGL10.EGL_GREEN_SIZE, 8,
EGL10.EGL_BLUE_SIZE, 8,
EGL10.EGL_ALPHA_SIZE, 8,
EGL10.EGL_DEPTH_SIZE, 16,
EGL10.EGL_SAMPLE_BUFFERS, 1,
EGL10.EGL_SAMPLES, 4, // This is for 4x MSAA.
EGL10.EGL_NONE
};
EGLConfig[] configs = new EGLConfig[1];
int[] configCounts = new int[1];
egl.eglChooseConfig(display, attribs, configs, 1, configCounts);
if (configCounts[0] == 0) {
Log.i("OGLES20", "Config with 4MSAA failed");
// Failed! Error handling.
return null;
} else {
Log.i("OGLES20", "Config with 4MSAA succeeded");
return configs[0];
}
}
}
Next, in your surface view, add the following call:
mySurfaceView.setEGLConfigChooser(new MyConfigChooser());
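Note that the chooser has to be installed before setRenderer() is called. In context it could look roughly like this (a sketch; the class and renderer names are just placeholders):
import android.content.Context;
import android.opengl.GLSurfaceView;

public class MySurfaceView extends GLSurfaceView {
    public MySurfaceView(Context context) {
        super(context);
        setEGLContextClientVersion(2);              // request an ES 2.0 context
        setEGLConfigChooser(new MyConfigChooser()); // must come before setRenderer()
        setRenderer(new MyRenderer());              // MyRenderer implements GLSurfaceView.Renderer
    }
}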
This should give you very good anti-aliasing, but the trade-off is a hit to performance, although most devices today should be able to manage without a significant drop.
Hope this helps.
This piece of code used to work on my Nexus 7 (2012) running KitKat:
int[] maxSize = new int[1];
GLES10.glGetIntegerv(GL10.GL_MAX_TEXTURE_SIZE, maxSize, 0);
On KitKat I could obtain the maximum texture size correctly, but after upgrading to the Lollipop factory image this snippet causes a problem, as it only returns 0. Logcat showed this output when the code reached this method:
E/libEGL﹕ call to OpenGL ES API with no current context (logged once per thread)
I already have android:hardwareAccelerated="true" in my Manifest.xml. Are there any API changes that I am not aware of that make the above code unusable? Please advise.
The error log points out the basic problem very clearly:
call to OpenGL ES API with no current context (logged once per thread)
You need a current OpenGL context in your thread before you can make any OpenGL calls, which includes your glGetIntegerv() call. This was always true. But it seems like in pre-Lollipop, there was an OpenGL context that was created in the frameworks, and that was sometimes (always?) current when app code was called.
I don't believe this was ever documented or intended behavior. Apps were always supposed to explicitly create a context, and make it current, if they wanted to make OpenGL calls. And it appears like this is more strictly enforced in Lollipop.
There are two main approaches to create an OpenGL context:
Create a GLSurfaceView (documentation). This is the easiest and most convenient approach, but only really makes sense if you plan to do OpenGL rendering to the display.
Use EGL14 (documentation). This provides a lower level interface that allows you to complete the necessary setup for OpenGL calls without creating a view or rendering to the display.
The GLSurfaceView approach is extensively documented with examples and tutorials all over the place. So I will focus on the EGL approach.
Using EGL14 (API level 17)
The following code assumes that you care about ES 2.0; some attribute values would have to be adjusted for other ES versions.
At the start of the file, import the EGL14 class, and a few related classes:
import android.opengl.EGL14;
import android.opengl.EGLConfig;
import android.opengl.EGLContext;
import android.opengl.EGLDisplay;
import android.opengl.EGLSurface;
import android.opengl.GLES20;
Then get a hold of the default display, and initialize. This could get more complex if you have to deal with devices that could have multiple displays, but will be sufficient for a typical phone/tablet:
EGLDisplay dpy = EGL14.eglGetDisplay(EGL14.EGL_DEFAULT_DISPLAY);
int[] vers = new int[2];
EGL14.eglInitialize(dpy, vers, 0, vers, 1);
Next, we need to find a config. Since we won't use this context for rendering, the exact attributes aren't very critical:
int[] configAttr = {
EGL14.EGL_COLOR_BUFFER_TYPE, EGL14.EGL_RGB_BUFFER,
EGL14.EGL_LEVEL, 0,
EGL14.EGL_RENDERABLE_TYPE, EGL14.EGL_OPENGL_ES2_BIT,
EGL14.EGL_SURFACE_TYPE, EGL14.EGL_PBUFFER_BIT,
EGL14.EGL_NONE
};
EGLConfig[] configs = new EGLConfig[1];
int[] numConfig = new int[1];
EGL14.eglChooseConfig(dpy, configAttr, 0,
configs, 0, 1, numConfig, 0);
if (numConfig[0] == 0) {
// TROUBLE! No config found.
}
EGLConfig config = configs[0];
To make a context current, which we will need later, you need a rendering surface, even if you don't actually plan to render. To satisfy this requirement, create a small offscreen (Pbuffer) surface:
int[] surfAttr = {
EGL14.EGL_WIDTH, 64,
EGL14.EGL_HEIGHT, 64,
EGL14.EGL_NONE
};
EGLSurface surf = EGL14.eglCreatePbufferSurface(dpy, config, surfAttr, 0);
Next, create the context:
int[] ctxAttrib = {
EGL14.EGL_CONTEXT_CLIENT_VERSION, 2,
EGL14.EGL_NONE
};
EGLContext ctx = EGL14.eglCreateContext(dpy, config, EGL14.EGL_NO_CONTEXT, ctxAttrib, 0);
Ready to make the context current now:
EGL14.eglMakeCurrent(dpy, surf, surf, ctx);
If all of the above succeeded (error checking was omitted), you can make your OpenGL calls now:
int[] maxSize = new int[1];
GLES20.glGetIntegerv(GLES20.GL_MAX_TEXTURE_SIZE, maxSize, 0);
Once you're all done, you can tear down everything:
EGL14.eglMakeCurrent(dpy, EGL14.EGL_NO_SURFACE, EGL14.EGL_NO_SURFACE,
EGL14.EGL_NO_CONTEXT);
EGL14.eglDestroySurface(dpy, surf);
EGL14.eglDestroyContext(dpy, ctx);
EGL14.eglTerminate(dpy);
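Since error checking was omitted above, a small helper like the following can be called after each EGL step (a sketch; the method name is arbitrary):
private static void checkEglError(String msg) {
    int error = EGL14.eglGetError();
    if (error != EGL14.EGL_SUCCESS) {
        throw new RuntimeException(msg + ": EGL error 0x" + Integer.toHexString(error));
    }
}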
Using EGL10 (API level 1)
If you need something that works for earlier levels, you can use EGL10 (documentation) instead of EGL14, which has been available since API level 1. The code above, adapted for EGL 1.0, looks like this:
import android.opengl.GLES10;
import javax.microedition.khronos.egl.EGL10;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.egl.EGLContext;
import javax.microedition.khronos.egl.EGLDisplay;
import javax.microedition.khronos.egl.EGLSurface;
EGL10 egl = (EGL10)EGLContext.getEGL();
EGLDisplay dpy = egl.eglGetDisplay(EGL10.EGL_DEFAULT_DISPLAY);
int[] vers = new int[2];
egl.eglInitialize(dpy, vers);
int[] configAttr = {
EGL10.EGL_COLOR_BUFFER_TYPE, EGL10.EGL_RGB_BUFFER,
EGL10.EGL_LEVEL, 0,
EGL10.EGL_SURFACE_TYPE, EGL10.EGL_PBUFFER_BIT,
EGL10.EGL_NONE
};
EGLConfig[] configs = new EGLConfig[1];
int[] numConfig = new int[1];
egl.eglChooseConfig(dpy, configAttr, configs, 1, numConfig);
if (numConfig[0] == 0) {
// TROUBLE! No config found.
}
EGLConfig config = configs[0];
int[] surfAttr = {
EGL10.EGL_WIDTH, 64,
EGL10.EGL_HEIGHT, 64,
EGL10.EGL_NONE
};
EGLSurface surf = egl.eglCreatePbufferSurface(dpy, config, surfAttr);
final int EGL_CONTEXT_CLIENT_VERSION = 0x3098; // missing in EGL10
int[] ctxAttrib = {
EGL_CONTEXT_CLIENT_VERSION, 1,
EGL10.EGL_NONE
};
EGLContext ctx = egl.eglCreateContext(dpy, config, EGL10.EGL_NO_CONTEXT, ctxAttrib);
egl.eglMakeCurrent(dpy, surf, surf, ctx);
int[] maxSize = new int[1];
GLES10.glGetIntegerv(GLES10.GL_MAX_TEXTURE_SIZE, maxSize, 0);
egl.eglMakeCurrent(dpy, EGL10.EGL_NO_SURFACE, EGL10.EGL_NO_SURFACE,
EGL10.EGL_NO_CONTEXT);
egl.eglDestroySurface(dpy, surf);
egl.eglDestroyContext(dpy, ctx);
egl.eglTerminate(dpy);
Note that this version of the code uses an ES 1.x context. The reported maximum texture size can be different for ES 1.x and ES 2.0.
The error message is saying that you are calling the GLES function before the OpenGL ES context exists. I have found that Lollipop is stricter about correctness in several areas, so that may be the reason for the problem appearing now, or there may be some difference in the order in which your app is starting up that is causing it. If you post more of your initialisation code, the reason may become clearer.
Typically you have a class that implements GLSurfaceView.Renderer that has a function:
public void onSurfaceCreated(GL10 gl, EGLConfig config)
In this function, you should be able to call gl.glGetIntegerv safely as at this point you know that the OpenGL ES context has been created. If you are calling it earlier than this, then that would explain the error you are seeing.
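For example, a minimal renderer that performs the query at a safe point could look like this (a sketch; the class name and log tag are placeholders):
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;
import android.opengl.GLSurfaceView;
import android.util.Log;

public class MyRenderer implements GLSurfaceView.Renderer {
    @Override
    public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        // The OpenGL ES context is current on the GL thread at this point, so the query is safe.
        int[] maxSize = new int[1];
        gl.glGetIntegerv(GL10.GL_MAX_TEXTURE_SIZE, maxSize, 0);
        Log.d("MyRenderer", "GL_MAX_TEXTURE_SIZE = " + maxSize[0]);
    }

    @Override
    public void onSurfaceChanged(GL10 gl, int width, int height) { }

    @Override
    public void onDrawFrame(GL10 gl) { }
}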
Is there a way to implement an antialiasing technique in OpenGL ES 2.0? I have googled and found a few methods, but there was no change in the output.
In the worst case, I plan to implement multi-pass rendering and smooth the edges in the fragment shader by outputting the average colour of the pixels around every pixel, but that costs more GPU performance.
Any suggestions?
A lot of devices support MSAA (Multi-Sample Anti-Aliasing). To take advantage of this feature, you have to choose an EGLConfig that has multisampling.
On Android, if you use GLSurfaceView, you will have to implement your own EGLConfigChooser. You can then use EGL functions, particularly eglChooseConfig() to find a config you like.
The following code is untested, but it should at least sketch how this can be implemented. In the constructor of your GLSurfaceView derived class, before calling setRenderer(), add:
setEGLConfigChooser(new MyConfigChooser());
Then implement MyConfigChooser. You can make this a nested class inside your GLSurfaceView:
class MyConfigChooser implements GLSurfaceView.EGLConfigChooser {
@Override
public EGLConfig chooseConfig(EGL10 egl, EGLDisplay display) {
int attribs[] = {
EGL10.EGL_LEVEL, 0,
EGL10.EGL_RENDERABLE_TYPE, 4, // EGL_OPENGL_ES2_BIT
EGL10.EGL_COLOR_BUFFER_TYPE, EGL10.EGL_RGB_BUFFER,
EGL10.EGL_RED_SIZE, 8,
EGL10.EGL_GREEN_SIZE, 8,
EGL10.EGL_BLUE_SIZE, 8,
EGL10.EGL_DEPTH_SIZE, 16,
EGL10.EGL_SAMPLE_BUFFERS, 1,
EGL10.EGL_SAMPLES, 4, // This is for 4x MSAA.
EGL10.EGL_NONE
};
EGLConfig[] configs = new EGLConfig[1];
int[] configCounts = new int[1];
egl.eglChooseConfig(display, attribs, configs, 1, configCounts);
if (configCounts[0] == 0) {
// Failed! Error handling.
return null;
} else {
return configs[0];
}
}
}
You will obviously want to substitute the specific values you need for your configuration. In reality, it's much more robust to call eglChooseConfig() with a small set of strictly necessary attributes, let it enumerate all configs that match those attributes, and then implement your own logic to choose the best among them. The defined behavior of eglChooseConfig() is kind of odd already (see documentation), and there's no telling how GPU vendors implement it.
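A rough sketch of that more robust pattern, inside a chooseConfig() implementation (the selection policy here is only an example, not a recommendation):
// Ask only for the attributes we strictly need, then fetch every matching
// config and apply our own selection policy.
int[] minimalAttribs = {
    EGL10.EGL_RENDERABLE_TYPE, 4, // EGL_OPENGL_ES2_BIT
    EGL10.EGL_SURFACE_TYPE, EGL10.EGL_WINDOW_BIT,
    EGL10.EGL_NONE
};
int[] num = new int[1];
egl.eglChooseConfig(display, minimalAttribs, null, 0, num);
EGLConfig[] all = new EGLConfig[num[0]];
egl.eglChooseConfig(display, minimalAttribs, all, all.length, num);

EGLConfig best = null;
int bestSamples = -1;
int[] value = new int[1];
for (EGLConfig candidate : all) {
    egl.eglGetConfigAttrib(display, candidate, EGL10.EGL_RED_SIZE, value);
    if (value[0] != 8) continue; // example policy: require an 8-bit red channel
    egl.eglGetConfigAttrib(display, candidate, EGL10.EGL_SAMPLES, value);
    if (value[0] > bestSamples) {
        bestSamples = value[0];
        best = candidate;
    }
}
return best; // may be null if nothing acceptable was found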
On iOS, you can set this property on your GLKView to enable 4x MSAA:
[view setDrawableMultisample: GLKViewDrawableMultisample4X];
There are other antialiasing approaches you can consider:
Supersampling: Render to a texture that is a multiple of (typically twice) the size of your final render surface in each direction, and then downsample it. This uses a lot of memory, and the overhead is substantial. But if it meets your performance requirements, the quality will be excellent. (A rough setup sketch follows after these approaches.)
Old school: Render the frame multiple times with slight offsets, and average the frames. This was commonly done with the accumulation buffer in the early days of OpenGL. The accumulation buffer is obsolete, but you can do the same thing with FBOs. See the section "Scene Antialiasing" under "The Framebuffer" in the original Red Book for a description of the method.
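A rough sketch of the supersampling setup mentioned above, using GLES20 (width and height are the final surface size; all names and the 2x factor are illustrative):
int ssWidth = width * 2;
int ssHeight = height * 2;

int[] fbo = new int[1];
int[] tex = new int[1];
int[] depth = new int[1];
GLES20.glGenFramebuffers(1, fbo, 0);
GLES20.glGenTextures(1, tex, 0);
GLES20.glGenRenderbuffers(1, depth, 0);

// Color attachment: a texture at twice the resolution in each direction.
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, ssWidth, ssHeight,
        0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

// Depth attachment.
GLES20.glBindRenderbuffer(GLES20.GL_RENDERBUFFER, depth[0]);
GLES20.glRenderbufferStorage(GLES20.GL_RENDERBUFFER, GLES20.GL_DEPTH_COMPONENT16, ssWidth, ssHeight);

GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
        GLES20.GL_TEXTURE_2D, tex[0], 0);
GLES20.glFramebufferRenderbuffer(GLES20.GL_FRAMEBUFFER, GLES20.GL_DEPTH_ATTACHMENT,
        GLES20.GL_RENDERBUFFER, depth[0]);

if (GLES20.glCheckFramebufferStatus(GLES20.GL_FRAMEBUFFER) != GLES20.GL_FRAMEBUFFER_COMPLETE) {
    // Handle an incomplete framebuffer here.
}

// Render the scene into this FBO at ssWidth x ssHeight, then bind the default
// framebuffer (0), set the viewport back to width x height, and draw a
// full-screen quad textured with tex[0]; GL_LINEAR filtering does the downsample.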
On the Android platform, you can download this OpenGL demo app's source code from GDC 2011: it contains lots of best practices and shows you how to do multisampling, including coverage antialiasing.
What you need to do is implement a custom GLSurfaceView.EGLConfigChooser and set it on your view:
// Set this chooser before calling setRenderer()
setEGLConfigChooser(new MultisampleConfigChooser());
setRenderer(mRenderer);
MultisampleConfigChooser.java sample code below:
package com.example.gdc11;
import javax.microedition.khronos.egl.EGL10;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.egl.EGLDisplay;
import android.opengl.GLSurfaceView;
import android.util.Log;
// This class shows how to use multisampling. To use this, call
// myGLSurfaceView.setEGLConfigChooser(new MultisampleConfigChooser());
// before calling setRenderer(). Multisampling will probably slow down
// your app -- measure performance carefully and decide if the vastly
// improved visual quality is worth the cost.
public class MultisampleConfigChooser implements GLSurfaceView.EGLConfigChooser {
static private final String kTag = "GDC11";
@Override
public EGLConfig chooseConfig(EGL10 egl, EGLDisplay display) {
mValue = new int[1];
// Try to find a normal multisample configuration first.
int[] configSpec = {
EGL10.EGL_RED_SIZE, 5,
EGL10.EGL_GREEN_SIZE, 6,
EGL10.EGL_BLUE_SIZE, 5,
EGL10.EGL_DEPTH_SIZE, 16,
// Requires that setEGLContextClientVersion(2) is called on the view.
EGL10.EGL_RENDERABLE_TYPE, 4 /* EGL_OPENGL_ES2_BIT */,
EGL10.EGL_SAMPLE_BUFFERS, 1 /* true */,
EGL10.EGL_SAMPLES, 2,
EGL10.EGL_NONE
};
if (!egl.eglChooseConfig(display, configSpec, null, 0,
mValue)) {
throw new IllegalArgumentException("eglChooseConfig failed");
}
int numConfigs = mValue[0];
if (numConfigs <= 0) {
// No normal multisampling config was found. Try to create a
// coverage multisampling configuration, for the nVidia Tegra2.
// See the EGL_NV_coverage_sample documentation.
final int EGL_COVERAGE_BUFFERS_NV = 0x30E0;
final int EGL_COVERAGE_SAMPLES_NV = 0x30E1;
configSpec = new int[]{
EGL10.EGL_RED_SIZE, 5,
EGL10.EGL_GREEN_SIZE, 6,
EGL10.EGL_BLUE_SIZE, 5,
EGL10.EGL_DEPTH_SIZE, 16,
EGL10.EGL_RENDERABLE_TYPE, 4 /* EGL_OPENGL_ES2_BIT */,
EGL_COVERAGE_BUFFERS_NV, 1 /* true */,
EGL_COVERAGE_SAMPLES_NV, 2, // always 5 in practice on tegra 2
EGL10.EGL_NONE
};
if (!egl.eglChooseConfig(display, configSpec, null, 0,
mValue)) {
throw new IllegalArgumentException("2nd eglChooseConfig failed");
}
numConfigs = mValue[0];
if (numConfigs <= 0) {
// Give up, try without multisampling.
configSpec = new int[]{
EGL10.EGL_RED_SIZE, 5,
EGL10.EGL_GREEN_SIZE, 6,
EGL10.EGL_BLUE_SIZE, 5,
EGL10.EGL_DEPTH_SIZE, 16,
EGL10.EGL_RENDERABLE_TYPE, 4 /* EGL_OPENGL_ES2_BIT */,
EGL10.EGL_NONE
};
if (!egl.eglChooseConfig(display, configSpec, null, 0,
mValue)) {
throw new IllegalArgumentException("3rd eglChooseConfig failed");
}
numConfigs = mValue[0];
if (numConfigs <= 0) {
throw new IllegalArgumentException("No configs match configSpec");
}
} else {
mUsesCoverageAa = true;
}
}
// Get all matching configurations.
EGLConfig[] configs = new EGLConfig[numConfigs];
if (!egl.eglChooseConfig(display, configSpec, configs, numConfigs,
mValue)) {
throw new IllegalArgumentException("data eglChooseConfig failed");
}
// CAUTION! eglChooseConfig returns configs with higher bit depth
// first: Even though we asked for rgb565 configurations, rgb888
// configurations are considered to be "better" and returned first.
// You need to explicitly filter the data returned by eglChooseConfig!
int index = -1;
for (int i = 0; i < configs.length; ++i) {
if (findConfigAttrib(egl, display, configs[i], EGL10.EGL_RED_SIZE, 0) == 5) {
index = i;
break;
}
}
if (index == -1) {
Log.w(kTag, "Did not find sane config, using first");
index = 0; // fall back to the first returned config instead of indexing with -1
}
EGLConfig config = configs.length > 0 ? configs[index] : null;
if (config == null) {
throw new IllegalArgumentException("No config chosen");
}
return config;
}
private int findConfigAttrib(EGL10 egl, EGLDisplay display,
EGLConfig config, int attribute, int defaultValue) {
if (egl.eglGetConfigAttrib(display, config, attribute, mValue)) {
return mValue[0];
}
return defaultValue;
}
public boolean usesCoverageAa() {
return mUsesCoverageAa;
}
private int[] mValue;
private boolean mUsesCoverageAa;
}
Before using this feature, you should know that it will affect rendering efficiency, and you may need to do a full performance test.
I'm trying to make an NDK based OpenGL application. At some point in my code, I want to check the OpenGL version available on the device.
I'm using the following code :
const char *version = (const char *) glGetString(GL_VERSION);
if (strstr(version, "OpenGL ES 2.")) {
// do something
} else {
__android_log_print(ANDROID_LOG_ERROR, "NativeGL", "Open GL 2 not available (%s)", version);
}
The problem is that the version string always equals "OpenGL ES-CM 1.1".
I'm testing on both a Moto G (Android 4.4.4) and a Samsung Galaxy Nexus (Android 4.3), both of which are OpenGL ES 2.0 compliant (the Moto G is also OpenGL ES 3.0 compliant).
I tried to force EGL_CONTEXT_CLIENT_VERSION when I initialise my display, but then eglChooseConfig returns 0 configurations. And when I query the context client version value on the returned configurations, it's always 0:
const EGLint attrib_list[] = {
EGL_BLUE_SIZE, 8,
EGL_GREEN_SIZE, 8,
EGL_RED_SIZE, 8,
EGL_NONE
};
// get the number of configs matching the attrib_list
EGLint num_configs;
eglChooseConfig(display, attrib_list, NULL, 0, &num_configs);
LOG_D(TAG, " • %d EGL configurations found", num_configs);
// find matching configurations
EGLConfig configs[num_configs];
EGLint client_version = 0, depth_size = 0, stencil_size = 0, surface_type = 0;
eglChooseConfig(display, attrib_list, configs, num_configs, &num_configs);
for(int i = 0; i < num_configs; ++i){
eglGetConfigAttrib(display, configs[i], EGL_CONTEXT_CLIENT_VERSION, &client_version);
LOG_D(TAG, " client version %d = 0x%08x", i, client_version);
}
// Update the window format from the configuration
EGLint format;
eglGetConfigAttrib(display, config, EGL_NATIVE_VISUAL_ID, &format);
ANativeWindow_setBuffersGeometry(window, 0, 0, format);
// create the surface and context
EGLSurface surface = eglCreateWindowSurface(display, config, window, NULL);
EGLContext context = eglCreateContext(display, config, NULL, NULL);
I'm linking against the Open GL ES 2.0 library : here's the excerpt from my Android.mk
LOCAL_LDLIBS := -landroid -llog -lEGL -lGLESv2
Thanks to the hints given by mstorsjo, I managed to get the correct initialisation code, shown here in case other people struggle with this:
const EGLint attrib_list[] = {
// this specifically requests an Open GL ES 2 renderer
EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
// (omitting other config attributes regarding the color channels, etc.)
EGL_NONE
};
EGLConfig config;
EGLint num_configs;
eglChooseConfig(display, attrib_list, &config, 1, &num_configs);
// omitting other code
const EGLint context_attrib_list[] = {
// request a context using Open GL ES 2.0
EGL_CONTEXT_CLIENT_VERSION, 2,
EGL_NONE
};
EGLContext context = eglCreateContext(display, config, NULL, context_attrib_list);
What version you get from glGetString(GL_VERSION) depends on which library you've linked the code against, either libGLESv1_CM.so or libGLESv2.so. Similarly for all the other common GL functions. This means that in practice, you need to build two separate .so files for your GL ES 1 and 2 versions of your rendering, and only load the right one once you know which one of them you can use (or load the function pointers dynamically). (This apparently is different when having compatibility between GL ES 2 and 3, where you can check using glGetString(GL_VERSION).)
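One way to decide which of the two .so files to load, before making any GL calls, is to check the supported GLES version from the Java side (a sketch; the library names are made up):
import android.app.ActivityManager;
import android.content.Context;
import android.content.pm.ConfigurationInfo;

static void loadRendererLibrary(Context context) {
    ActivityManager am = (ActivityManager) context.getSystemService(Context.ACTIVITY_SERVICE);
    ConfigurationInfo info = am.getDeviceConfigurationInfo();
    boolean supportsEs2 = info.reqGlEsVersion >= 0x20000;
    // Load whichever native library was linked against the matching GL ES library.
    System.loadLibrary(supportsEs2 ? "myrenderer_gles2" : "myrenderer_gles1");
}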
You didn't say where you tried using EGL_CONTEXT_CLIENT_VERSION - it should be used in the parameter array to eglCreateContext (which you only call once you actually have chosen a config). The attribute array given to eglChooseConfig should have the pair EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT to get a suitable config.