I am developing for Android using OpenGL ES and EGL. My app requires a second context for loading textures from a second thread.
My code works fine on Android 2.3, but when I run it on a 4.0.3 device or emulator, eglMakeCurrent() fails with EGL_BAD_MATCH.
The initialization of the second context and its pixel buffer works fine, so I am not sure where to begin looking for this error.
This is the initialization code:
ANativeWindow *window = (ANativeWindow*)displaySurface;
EGLint dummy, format;
display = eglGetDisplay(EGL_DEFAULT_DISPLAY);
eglInitialize(display, 0, 0);
EGLint contextAttribs[] =
{
EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE
};
const EGLint configAttribs[] =
{
EGL_SURFACE_TYPE, EGL_WINDOW_BIT,
EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
EGL_BLUE_SIZE, 8,
EGL_GREEN_SIZE, 8,
EGL_RED_SIZE, 8,
EGL_ALPHA_SIZE, 8,
EGL_BUFFER_SIZE, 32,
EGL_DEPTH_SIZE, 24,
EGL_NONE
};
EGLint numConfigs;
EGLConfig config;
eglChooseConfig(display, configAttribs, &config, 1, &numConfigs);
eglGetConfigAttrib(display, config, EGL_NATIVE_VISUAL_ID, &format);
ANativeWindow_setBuffersGeometry(window, 0, 0, format);
surface = eglCreateWindowSurface(display, config, window, NULL);
if(surface == EGL_NO_SURFACE)
Trace("error creating window surface: " + GetEglError());
context = eglCreateContext(display, config, EGL_NO_CONTEXT, contextAttribs);
if(context == EGL_NO_CONTEXT)
Trace("error creating main context: " + GetEglError());
const EGLint auxConfigAttribs[] =
{
EGL_SURFACE_TYPE, EGL_PBUFFER_BIT,
EGL_BLUE_SIZE, 8,
EGL_GREEN_SIZE, 8,
EGL_RED_SIZE, 8,
EGL_ALPHA_SIZE, 8,
EGL_DEPTH_SIZE, 0,
EGL_STENCIL_SIZE, 0,
EGL_NONE
};
EGLint pbufferAttribs[] =
{
EGL_WIDTH, 1,
EGL_HEIGHT, 1,
EGL_TEXTURE_TARGET, EGL_NO_TEXTURE,
EGL_TEXTURE_FORMAT, EGL_NO_TEXTURE,
EGL_NONE
};
EGLint auxNumConfigs;
EGLConfig auxConfig;
eglChooseConfig(display, auxConfigAttribs, &auxConfig, 1, &auxNumConfigs);
auxSurface = eglCreatePbufferSurface(display, auxConfig, pbufferAttribs);
if(auxSurface == EGL_NO_SURFACE)
Trace("error creating pbuffer surface: " + GetEglError());
auxContext = eglCreateContext(display, auxConfig, context, contextAttribs);
if(auxContext == EGL_NO_CONTEXT)
Trace("error creating auxiliary context: " + GetEglError());
if(!eglMakeCurrent(display, surface, surface, context))
Trace("could not make main context current: " + GetEglError());
On my Android 2.3 device (HTC Desire), the above initialization code works perfectly: I can make auxContext current and load textures just fine.
But on my Android 4.0.3 device (Samsung Nexus S) and my Android 4.1 device (Galaxy Note 2), eglMakeCurrent() fails with EGL_BAD_MATCH after a successful initialization.
Does anyone know why I may be getting this error?
Ah, something I actually know something about. ;) [Having spent the best part of 5 years working on various EGL implementations.]
I'm pretty certain your pbuffer surface has a different format from the actual display surface. I'm not sure exactly WHAT the difference would be, or what you need to change. EGL_DEPTH_SIZE perhaps? You could try enumerating the configs that are available and see if any look "likely". I know, it's a bit of a pain, but I've been there and done that a few times in the past, with the difference that I could usually look through the EGL source code and figure out what I'd done wrong... ;)
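To get you started, here is a minimal, untested sketch of that enumeration: it dumps every config on the display so you can compare the config backing the pbuffer against the one backing the window surface. It assumes "display" is the initialized EGLDisplay from the question, and LOGI is just a stand-in for whatever logging macro you use.
#include <EGL/egl.h>
#include <stdlib.h>
#include <android/log.h>
#define LOGI(...) __android_log_print(ANDROID_LOG_INFO, "EGLDump", __VA_ARGS__)
static void DumpEglConfigs(EGLDisplay display)
{
    EGLint count = 0;
    eglGetConfigs(display, NULL, 0, &count);
    EGLConfig *configs = malloc(count * sizeof(EGLConfig));
    eglGetConfigs(display, configs, count, &count);
    for (EGLint i = 0; i < count; ++i)
    {
        EGLint id, r, g, b, a, depth, surfaces;
        eglGetConfigAttrib(display, configs[i], EGL_CONFIG_ID, &id);
        eglGetConfigAttrib(display, configs[i], EGL_RED_SIZE, &r);
        eglGetConfigAttrib(display, configs[i], EGL_GREEN_SIZE, &g);
        eglGetConfigAttrib(display, configs[i], EGL_BLUE_SIZE, &b);
        eglGetConfigAttrib(display, configs[i], EGL_ALPHA_SIZE, &a);
        eglGetConfigAttrib(display, configs[i], EGL_DEPTH_SIZE, &depth);
        eglGetConfigAttrib(display, configs[i], EGL_SURFACE_TYPE, &surfaces);
        // Print RGBA/depth plus whether the config can back a window and/or a pbuffer.
        LOGI("config %d: rgba %d%d%d%d depth %d window:%d pbuffer:%d",
             id, r, g, b, a, depth,
             (surfaces & EGL_WINDOW_BIT) != 0,
             (surfaces & EGL_PBUFFER_BIT) != 0);
    }
    free(configs);
}
One cheap experiment once you can see the formats: request a single config whose EGL_SURFACE_TYPE includes (EGL_WINDOW_BIT | EGL_PBUFFER_BIT) and create both surfaces from it, which rules out a format mismatch between the two by construction.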
If you're getting this error but aren't dealing with this surface or texture stuff, go to Run and type .android.
Go to AVD, find your current emulator, and delete the user-data file (usually an .img file), then restart your emulator and test. This works for me. If it happens while testing on your device, clear the app's data and restart the app. Cheers to those who find this helpful.
Ensure you have set EGL_PBUFFER_BIT for the EGL_SURFACE_TYPE in the attributes passed into the eglChooseConfig() call. It worked for me.
I am using OpenGL ES 2.0 to build my variant of GLSurfaceView. I wanted to draw straight lines, so I used the code below (everything else is already set up):
GLES20.glDrawArrays(GLES20.GL_LINES, offset, no_of_coordinates);
The problem with the above line of code is that the lines are not smooth; instead they have many visible breaks, as if small zigzag segments had been placed together.
Then I read this article (https://www.codeproject.com/Articles/199525/Drawing-nearly-perfect-D-line-segments-in-OpenGL) and added the code below:
glHint(GL_LINES, GL_NICEST);
But still nothing changed. Can you guide me on how to get smooth straight lines?
You need to provide an EGLConfigChooser instance to your surface view and have it request a config with 4x MSAA (four samples per pixel for multi-sample anti-aliasing).
Here's how you do it:
First, define a class with the definitions you need:
class MyConfigChooser implements GLSurfaceView.EGLConfigChooser {
@Override
public EGLConfig chooseConfig(EGL10 egl, EGLDisplay display) {
int attribs[] = {
EGL10.EGL_LEVEL, 0,
EGL10.EGL_RENDERABLE_TYPE, 4, // EGL_OPENGL_ES2_BIT
EGL10.EGL_COLOR_BUFFER_TYPE, EGL10.EGL_RGB_BUFFER,
EGL10.EGL_RED_SIZE, 8,
EGL10.EGL_GREEN_SIZE, 8,
EGL10.EGL_BLUE_SIZE, 8,
EGL10.EGL_ALPHA_SIZE, 8,
EGL10.EGL_DEPTH_SIZE, 16,
EGL10.EGL_SAMPLE_BUFFERS, 1,
EGL10.EGL_SAMPLES, 4, // This is for 4x MSAA.
EGL10.EGL_NONE
};
EGLConfig[] configs = new EGLConfig[1];
int[] configCounts = new int[1];
egl.eglChooseConfig(display, attribs, configs, 1, configCounts);
if (configCounts[0] == 0) {
Log.i("OGLES20", "Config with 4MSAA failed");
// Failed! Error handling.
return null;
} else {
Log.i("OGLES20", "Config with 4MSAA succeeded");
return configs[0];
}
}
}
Next, in your surface view, add the following call:
mySurfaceView.setEGLConfigChooser(new MyConfigChooser());
This should give you very good anti-aliasing, but the trade-off is a hit to performance, although most devices today should be able to manage without a significant drop.
Hope this helps.
Is there a way to implement an antialiasing technique in OpenGL ES 2.0? I have googled and found a few methods, but there was no change in the output.
In the worst case, I plan to implement multiple-pass rendering to smooth the edges in the fragment shader, by outputting the average colour of the pixels around every pixel, but that costs more GPU performance.
Any suggestions?
A lot of devices support MSAA (Multi-Sample Anti-Aliasing). To take advantage of this feature, you have to choose an EGLConfig that has multisampling.
On Android, if you use GLSurfaceView, you will have to implement your own EGLConfigChooser. You can then use EGL functions, particularly eglChooseConfig() to find a config you like.
The following code is untested, but it should at least sketch how this can be implemented. In the constructor of your GLSurfaceView derived class, before calling setRenderer(), add:
setEGLConfigChooser(new MyConfigChooser());
Then implement MyConfigChooser. You can make this a nested class inside your GLSurfaceView:
class MyConfigChooser implements GLSurfaceView.EGLConfigChooser {
@Override
public EGLConfig chooseConfig(EGL10 egl, EGLDisplay display) {
int attribs[] = {
EGL10.EGL_LEVEL, 0,
EGL10.EGL_RENDERABLE_TYPE, 4, // EGL_OPENGL_ES2_BIT
EGL10.EGL_COLOR_BUFFER_TYPE, EGL10.EGL_RGB_BUFFER,
EGL10.EGL_RED_SIZE, 8,
EGL10.EGL_GREEN_SIZE, 8,
EGL10.EGL_BLUE_SIZE, 8,
EGL10.EGL_DEPTH_SIZE, 16,
EGL10.EGL_SAMPLE_BUFFERS, 1,
EGL10.EGL_SAMPLES, 4, // This is for 4x MSAA.
EGL10.EGL_NONE
};
EGLConfig[] configs = new EGLConfig[1];
int[] configCounts = new int[1];
egl.eglChooseConfig(display, attribs, configs, 1, configCounts);
if (configCounts[0] == 0) {
// Failed! Error handling.
return null;
} else {
return configs[0];
}
}
}
You will obviously want to substitute the specific values you need for your configuration. In reality, it's much more robust to call eglChooseConfig() with a small set of strictly necessary attributes, let it enumerate all configs that match those attributes, and then implement your own logic to choose the best among them. The defined behavior of eglChooseConfig() is kind of odd already (see documentation), and there's no telling how GPU vendors implement it.
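For illustration, here is a hedged sketch of that enumerate-then-pick pattern, written against the C API for brevity (the EGL10 Java calls mirror it one for one). The required attributes and the scoring policy are purely illustrative:
/* Ask only for what is strictly required... */
const EGLint required[] = {
    EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
    EGL_SURFACE_TYPE, EGL_WINDOW_BIT,
    EGL_NONE
};
EGLint count = 0;
eglChooseConfig(display, required, NULL, 0, &count);
EGLConfig *matches = malloc(count * sizeof(EGLConfig));
eglChooseConfig(display, required, matches, count, &count);
/* ...then apply your own policy: here, insist on RGB888 and prefer
   the config with the most samples. */
EGLConfig best = NULL;
EGLint bestSamples = -1;
for (EGLint i = 0; i < count; ++i) {
    EGLint red = 0, samples = 0;
    eglGetConfigAttrib(display, matches[i], EGL_RED_SIZE, &red);
    eglGetConfigAttrib(display, matches[i], EGL_SAMPLES, &samples);
    if (red == 8 && samples > bestSamples) {
        best = matches[i];
        bestSamples = samples;
    }
}
free(matches);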
On iOS, you can set this property on your GLKView to enable 4x MSAA:
[view setDrawableMultisample: GLKViewDrawableMultisample4X];
There are other antialiasing approaches you can consider:
Supersampling: Render to a texture that is a multiple (typically twice) the size of your final render surface in each direction, and then downsample it. This uses a lot of memory, and the overhead is substantial, but if it meets your performance requirements the quality will be excellent (a sketch follows this list).
Old school: Render the frame multiple times with slight offsets, and average the frames. This was commonly done with the accumulation buffer in the early days of OpenGL. The accumulation buffer is obsolete, but you can do the same thing with FBOs. See the section "Scene Antialiasing" under "The Framebuffer" in the original Red Book for a description of the method.
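For the supersampling option above, here is a rough, untested GL ES 2.0 sketch; the window size and the scene/fullscreen-quad draws are placeholders you would fill in with your own values and code:
#include <GLES2/gl2.h>
/* One-time setup: a color texture and FBO at twice the window resolution. */
GLuint ssTex, ssFbo;
GLint winW = 1280, winH = 720; /* placeholder surface size */
GLint ssW = winW * 2, ssH = winH * 2;
glGenTextures(1, &ssTex);
glBindTexture(GL_TEXTURE_2D, ssTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, ssW, ssH, 0,
    GL_RGBA, GL_UNSIGNED_BYTE, NULL);
/* Core ES 2.0 allows NPOT textures only with CLAMP_TO_EDGE and no mipmaps. */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glGenFramebuffers(1, &ssFbo);
glBindFramebuffer(GL_FRAMEBUFFER, ssFbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
    GL_TEXTURE_2D, ssTex, 0);
/* Per frame: render the scene at 2x resolution into the FBO... */
glViewport(0, 0, ssW, ssH);
/* ... draw scene here ... */
/* ...then draw a fullscreen quad sampling ssTex into the real framebuffer;
   GL_LINEAR minification performs the (approximate box-filter) downsample. */
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glViewport(0, 0, winW, winH);
/* ... draw fullscreen textured quad here ... */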
On the Android platform, you can download this OpenGL demo app's source code from GDC 2011: it contains lots of best practices and shows you how to do multisampling, including coverage antialiasing.
All you need to do is customize GLSurfaceView.EGLConfigChooser and set this chooser:
// Set this chooser before calling setRenderer()
setEGLConfigChooser(new MultisampleConfigChooser());
setRenderer(mRenderer);
MultisampleConfigChooser.java sample code below:
package com.example.gdc11;
import javax.microedition.khronos.egl.EGL10;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.egl.EGLDisplay;
import android.opengl.GLSurfaceView;
import android.util.Log;
// This class shows how to use multisampling. To use this, call
// myGLSurfaceView.setEGLConfigChooser(new MultisampleConfigChooser());
// before calling setRenderer(). Multisampling will probably slow down
// your app -- measure performance carefully and decide if the vastly
// improved visual quality is worth the cost.
public class MultisampleConfigChooser implements GLSurfaceView.EGLConfigChooser {
static private final String kTag = "GDC11";
@Override
public EGLConfig chooseConfig(EGL10 egl, EGLDisplay display) {
mValue = new int[1];
// Try to find a normal multisample configuration first.
int[] configSpec = {
EGL10.EGL_RED_SIZE, 5,
EGL10.EGL_GREEN_SIZE, 6,
EGL10.EGL_BLUE_SIZE, 5,
EGL10.EGL_DEPTH_SIZE, 16,
// Requires that setEGLContextClientVersion(2) is called on the view.
EGL10.EGL_RENDERABLE_TYPE, 4 /* EGL_OPENGL_ES2_BIT */,
EGL10.EGL_SAMPLE_BUFFERS, 1 /* true */,
EGL10.EGL_SAMPLES, 2,
EGL10.EGL_NONE
};
if (!egl.eglChooseConfig(display, configSpec, null, 0,
mValue)) {
throw new IllegalArgumentException("eglChooseConfig failed");
}
int numConfigs = mValue[0];
if (numConfigs <= 0) {
// No normal multisampling config was found. Try to create a
// coverage multisampling configuration, for the nVidia Tegra2.
// See the EGL_NV_coverage_sample documentation.
final int EGL_COVERAGE_BUFFERS_NV = 0x30E0;
final int EGL_COVERAGE_SAMPLES_NV = 0x30E1;
configSpec = new int[]{
EGL10.EGL_RED_SIZE, 5,
EGL10.EGL_GREEN_SIZE, 6,
EGL10.EGL_BLUE_SIZE, 5,
EGL10.EGL_DEPTH_SIZE, 16,
EGL10.EGL_RENDERABLE_TYPE, 4 /* EGL_OPENGL_ES2_BIT */,
EGL_COVERAGE_BUFFERS_NV, 1 /* true */,
EGL_COVERAGE_SAMPLES_NV, 2, // always 5 in practice on tegra 2
EGL10.EGL_NONE
};
if (!egl.eglChooseConfig(display, configSpec, null, 0,
mValue)) {
throw new IllegalArgumentException("2nd eglChooseConfig failed");
}
numConfigs = mValue[0];
if (numConfigs <= 0) {
// Give up, try without multisampling.
configSpec = new int[]{
EGL10.EGL_RED_SIZE, 5,
EGL10.EGL_GREEN_SIZE, 6,
EGL10.EGL_BLUE_SIZE, 5,
EGL10.EGL_DEPTH_SIZE, 16,
EGL10.EGL_RENDERABLE_TYPE, 4 /* EGL_OPENGL_ES2_BIT */,
EGL10.EGL_NONE
};
if (!egl.eglChooseConfig(display, configSpec, null, 0,
mValue)) {
throw new IllegalArgumentException("3rd eglChooseConfig failed");
}
numConfigs = mValue[0];
if (numConfigs <= 0) {
throw new IllegalArgumentException("No configs match configSpec");
}
} else {
mUsesCoverageAa = true;
}
}
// Get all matching configurations.
EGLConfig[] configs = new EGLConfig[numConfigs];
if (!egl.eglChooseConfig(display, configSpec, configs, numConfigs,
mValue)) {
throw new IllegalArgumentException("data eglChooseConfig failed");
}
// CAUTION! eglChooseConfig returns configs with higher bit depth
// first: Even though we asked for rgb565 configurations, rgb888
// configurations are considered to be "better" and returned first.
// You need to explicitly filter the data returned by eglChooseConfig!
int index = -1;
for (int i = 0; i < configs.length; ++i) {
if (findConfigAttrib(egl, display, configs[i], EGL10.EGL_RED_SIZE, 0) == 5) {
index = i;
break;
}
}
if (index == -1) {
Log.w(kTag, "Did not find sane config, using first");
index = 0;
}
EGLConfig config = configs.length > 0 ? configs[index] : null;
if (config == null) {
throw new IllegalArgumentException("No config chosen");
}
return config;
}
private int findConfigAttrib(EGL10 egl, EGLDisplay display,
EGLConfig config, int attribute, int defaultValue) {
if (egl.eglGetConfigAttrib(display, config, attribute, mValue)) {
return mValue[0];
}
return defaultValue;
}
public boolean usesCoverageAa() {
return mUsesCoverageAa;
}
private int[] mValue;
private boolean mUsesCoverageAa;
}
Before using this feature, you should know that it will affect rendering efficiency, and you may need to do a full performance test.
I'm trying to make an NDK-based OpenGL application. At some point in my code, I want to check the OpenGL ES version available on the device.
I'm using the following code :
const char *version = (const char *) glGetString(GL_VERSION);
if (strstr(version, "OpenGL ES 2.")) {
// do something
} else {
__android_log_print(ANDROID_LOG_ERROR, "NativeGL", "Open GL 2 not available (%s)", version);
}
The problem is that the version string always equals "OpenGL ES-CM 1.1".
I'm testing on both a Moto G (Android 4.4.4) and a Samsung Galaxy Nexus (Android 4.3), both of which are OpenGL ES 2.0 compliant (the Moto G is also OpenGL ES 3.0 compliant).
I tried to force the EGL_CONTEXT_CLIENT_VERSION when I initialise my display, but then eglChooseConfig returns 0 configurations. And when I test the context client version value in the default configuration, it's always 0 :
const EGLint attrib_list[] = {
EGL_BLUE_SIZE, 8,
EGL_GREEN_SIZE, 8,
EGL_RED_SIZE, 8,
EGL_NONE
};
// get the number of configs matching the attrib_list
EGLint num_configs;
eglChooseConfig(display, attrib_list, NULL, 0, &num_configs);
LOG_D(TAG, " • %d EGL configurations found", num_configs);
// find matching configurations
EGLConfig configs[num_configs];
EGLint client_version = 0, depth_size = 0, stencil_size = 0, surface_type = 0;
eglChooseConfig(display, attrib_list, configs, num_configs, &num_configs);
for(int i = 0; i < num_configs; ++i){
eglGetConfigAttrib(display, configs[i], EGL_CONTEXT_CLIENT_VERSION, &client_version);
LOG_D(TAG, " client version %d = 0x%08x", i, client_version);
}
// Update the window format from the configuration
EGLint format;
eglGetConfigAttrib(display, config, EGL_NATIVE_VISUAL_ID, &format);
ANativeWindow_setBuffersGeometry(window, 0, 0, format);
// create the surface and context
EGLSurface surface = eglCreateWindowSurface(display, config, window, NULL);
EGLContext context = eglCreateContext(display, config, NULL, NULL);
I'm linking against the OpenGL ES 2.0 library; here's the relevant excerpt from my Android.mk:
LOCAL_LDLIBS := -landroid -llog -lEGL -lGLESv2
Thanks to the hints given by mstorsjo, I managed to get the correct initialisation code, shown here in case other people struggle with this.
const EGLint attrib_list[] = {
// this specifically requests an Open GL ES 2 renderer
EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
// (omitting other attributes regarding the color channels, etc.)
EGL_NONE
};
EGLConfig config;
EGLint num_configs;
eglChooseConfig(display, attrib_list, &config, 1, &num_configs);
// omitting other code
const EGLint context_attrib_list[] = {
// request a context using Open GL ES 2.0
EGL_CONTEXT_CLIENT_VERSION, 2,
EGL_NONE
};
EGLContext context = eglCreateContext(display, config, NULL, context_attrib_list);
What version you get from glGetString(GL_VERSION) depends on which library you've linked the code against, either libGLESv1_CM.so or libGLESv2.so. The same goes for all the other common GL functions. This means that, in practice, you need to build two separate .so files for the GL ES 1 and GL ES 2 versions of your rendering, and only load the right one once you know which of them you can use (or load the function pointers dynamically). (This apparently is different with GL ES 2 and 3, which are compatible enough that you can check the version via glGetString(GL_VERSION).)
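For the "load the function pointers dynamically" option, a minimal, untested sketch might look like this; the log tag and the hard-coded 0x1F02 (the value of GL_VERSION) are just illustrative:
#include <dlfcn.h>
#include <android/log.h>
static void log_gles2_version(void)
{
    /* Pull glGetString out of libGLESv2.so at runtime instead of linking
       against it at build time. Minimal error handling. */
    typedef const unsigned char *(*PFNGLGETSTRING)(unsigned int name);
    void *libgles2 = dlopen("libGLESv2.so", RTLD_NOW);
    if (libgles2 == NULL)
        return;
    PFNGLGETSTRING pGetString = (PFNGLGETSTRING) dlsym(libgles2, "glGetString");
    if (pGetString != NULL) {
        /* Only meaningful once an ES 2.0 context is current. */
        const char *version = (const char *) pGetString(0x1F02 /* GL_VERSION */);
        __android_log_print(ANDROID_LOG_INFO, "NativeGL", "version: %s", version);
    }
    /* dlclose(libgles2) when you no longer need the library. */
}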
You didn't say where you tried using EGL_CONTEXT_CLIENT_VERSION; it belongs in the attribute array passed to eglCreateContext (which you only call once you have actually chosen a config). The attribute array given to eglChooseConfig should contain the pair EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT to get a suitable config.
I'm trying to create the EGL context to draw everything with OpenGL ES within a native function call. The problem is that I need access to a NativeWindowType instance, but I could only find a function that creates one (which I can't figure out how to link against, anyway). However, even if I created one, I suspect that would be wrong, since what I really need is the window created by the SurfaceView instance from which I'm calling this native function.
Here is the code:
static int egl_init() {
const EGLint attribs[] = {
EGL_SURFACE_TYPE, EGL_WINDOW_BIT,
EGL_BLUE_SIZE, 8,
EGL_GREEN_SIZE, 8,
EGL_RED_SIZE, 8,
EGL_NONE
};
EGLint w, h, dummy, format;
EGLint egl_major_version, egl_minor_version;
EGLint numConfigs;
EGLConfig egl_config;
EGLSurface egl_surface;
EGLContext egl_context;
EGLDisplay egl_display = eglGetDisplay(EGL_DEFAULT_DISPLAY);
// v--------- This is where I should get the display window
NativeWindowType display_window;
display_window = android_createDisplaySurface();
eglInitialize(egl_display, &egl_major_version, &egl_minor_version);
printf("GL Version: %d.%d\n", egl_major_version, egl_minor_version);
if (!eglChooseConfig(egl_display, attribs, &egl_config, 1, &numConfigs))
{
printf("eglChooseConfig failed\n");
printf("Error code: %x\n", eglGetError());
}
eglGetConfigAttrib(egl_display, egl_config, EGL_NATIVE_VISUAL_ID, &format);
// v---------- This requires that I link libandroid, it is found in android/native_window.h
ANativeWindow_setBuffersGeometry(display_window, 0, 0, format);
egl_context = eglCreateContext(egl_display, egl_config, EGL_NO_CONTEXT, NULL);
if (egl_context == 0) LOGE("Error code: %x\n", eglGetError());
egl_surface = eglCreateWindowSurface(egl_display, egl_config, display_window, NULL);
if (egl_surface == 0) LOGE("Error code: %x\n", eglGetError());
if (eglMakeCurrent(egl_display, egl_surface, egl_surface, egl_context) == EGL_FALSE) {
LOGE("Unable to eglMakeCurrent");
return -1;
}
return 0;
}
Thanks for your help
The surface cannot support the requested EGL config (red, green, and blue each being at least 8 bits).
const EGLint attribs[] = {
EGL_SURFACE_TYPE, EGL_WINDOW_BIT,
EGL_BLUE_SIZE, 8,
EGL_GREEN_SIZE, 8,
EGL_RED_SIZE, 8,
EGL_NONE
};
Some phones make all surface buffers RGB_565 by default.
In Java, to get more color depth or an alpha channel, you can call getWindow() and setFormat(), like so:
getWindow().setFormat(PixelFormat.TRANSLUCENT);
To do something equivalent in a native activity, you must do something like the following:
ANativeWindow_setBuffersGeometry(display_window, 0, 0, 1); // 1 == WINDOW_FORMAT_RGBA_8888
as defined in android/native_window.h
/*
* Pixel formats that a window can use.
*/
enum {
WINDOW_FORMAT_RGBA_8888 = 1,
WINDOW_FORMAT_RGBX_8888 = 2,
WINDOW_FORMAT_RGB_565 = 4,
};
Hope this helps. I've seen this question around and I just figured it out.
I have a funny issue with an application in development using OpenGL ES with the native NDK (C++) for Android. The program compiles and runs with no problem. However, if I try to unit test and debug the code, it complains with the following message:
Invalid arguments ' Candidates are:
void * eglCreateWindowSurface(void *, void *, unsigned long int,
const int *) '
This relates to the last line of the following code snippet:
EGLint lFormat, lNumConfigs, lErrorResult;
EGLConfig lConfig;
// Defines display requirements. 16-bit mode here.
const EGLint lAttributes[] = {
EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
EGL_BLUE_SIZE, 5, EGL_GREEN_SIZE, 6, EGL_RED_SIZE, 5,
EGL_SURFACE_TYPE, EGL_WINDOW_BIT, EGL_RENDER_BUFFER, EGL_BACK_BUFFER,
EGL_NONE
};
// Retrieves a display connection and initializes it.
packt_Log_debug("Connecting to the display.");
mDisplay = eglGetDisplay(EGL_DEFAULT_DISPLAY);
if (mDisplay == EGL_NO_DISPLAY) goto ERROR;
if (!eglInitialize(mDisplay, NULL, NULL)) goto ERROR;
// Selects the first OpenGL configuration found.
packt_Log_debug("Selecting a display config.");
if(!eglChooseConfig(mDisplay, lAttributes, &lConfig, 1,
&lNumConfigs) || (lNumConfigs <= 0)) goto ERROR;
// Reconfigures the Android window with the EGL format.
packt_Log_debug("Configuring window format.");
if (!eglGetConfigAttrib(mDisplay, lConfig,
EGL_NATIVE_VISUAL_ID, &lFormat)) goto ERROR;
ANativeWindow_setBuffersGeometry(mApplication->window, 0, 0, lFormat);
// Creates the display surface.
packt_Log_debug("Initializing the display.");
mSurface = eglCreateWindowSurface(mDisplay, lConfig, mApplication->window, NULL);
I have already looked at the OpenGL ES references, but none of the things I tried so far worked.
Casting 'mApplication->window' to 'EGLNativeWindowType' solved the problem.
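For reference, and matching the names used in the snippet above, the call with the cast applied would look like this (untested):
mSurface = eglCreateWindowSurface(mDisplay, lConfig,
    (EGLNativeWindowType) mApplication->window, NULL);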