android: how to get display resolution in native code

Is it possible in Android to get the display dimensions in native code?
I see that there are EGL-related functions:
EGLDisplay display = eglGetDisplay(EGL_DEFAULT_DISPLAY);
surface = eglCreateWindowSurface(display, config, engine->app->window, NULL);
eglQuerySurface(display, surface, EGL_WIDTH, &w);
eglQuerySurface(display, surface, EGL_HEIGHT, &h);
But I need the dimensions without creating any surface.

You can get the window size while processing the APP_CMD_INIT_WINDOW "app command" like this in your native activity:
static void engine_handle_cmd(struct android_app* app, int32_t cmd) {
    struct engine* engine = (struct engine*)app->userData;
    switch (cmd) {
        case APP_CMD_INIT_WINDOW:
            if (engine->app->window != NULL) {
                int width = ANativeWindow_getWidth(engine->app->window);
                int height = ANativeWindow_getHeight(engine->app->window);
                // use width/height here
            }
            break;
    }
}
Check out main.c in the native-activity sample NDK project if you want to know how to set up the engine struct & the engine_handle_cmd callback.

Below is the relevant code taken from the Google screen-capture source: http://code.google.com/p/androidscreenshot/source/browse/trunk/Binary/screenshot.c?r=3
The vinfo structure contains the fields xres and yres.
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/fb.h>

int framebufferHandle = -1;
struct fb_var_screeninfo vinfo;

framebufferHandle = open("/dev/graphics/fb0", O_RDONLY);
if (framebufferHandle < 0)
{
    printf("failed to open /dev/graphics/fb0\n");
    // handle error
}
if (ioctl(framebufferHandle, FBIOGET_VSCREENINFO, &vinfo) < 0)
{
    printf("FBIOGET_VSCREENINFO ioctl failed\n");
    // handle error
}
fcntl(framebufferHandle, F_SETFD, FD_CLOEXEC);
// vinfo.xres and vinfo.yres now hold the display resolution

If you're looking for the pixel values, DisplayMetrics.heightPixels and DisplayMetrics.widthPixels are probably what you're looking for. See the developer docs for more info:
http://developer.android.com/reference/android/util/DisplayMetrics.html
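If you need those values down in native code, here is a minimal sketch (my own, not from the answer above) that reads DisplayMetrics.widthPixels and heightPixels over JNI; it assumes you already have a JNIEnv* and the activity jobject available, e.g. from a native method call:
#include <jni.h>

// Hypothetical helper: fetches DisplayMetrics.widthPixels / heightPixels via JNI.
// Error checks omitted for brevity.
static void getDisplayResolution(JNIEnv *env, jobject activity, int *outW, int *outH) {
    // activity.getResources()
    jclass activityCls = env->GetObjectClass(activity);
    jmethodID getResources = env->GetMethodID(
            activityCls, "getResources", "()Landroid/content/res/Resources;");
    jobject resources = env->CallObjectMethod(activity, getResources);

    // resources.getDisplayMetrics()
    jclass resourcesCls = env->GetObjectClass(resources);
    jmethodID getDisplayMetrics = env->GetMethodID(
            resourcesCls, "getDisplayMetrics", "()Landroid/util/DisplayMetrics;");
    jobject metrics = env->CallObjectMethod(resources, getDisplayMetrics);

    // metrics.widthPixels / metrics.heightPixels
    jclass metricsCls = env->GetObjectClass(metrics);
    *outW = env->GetIntField(metrics, env->GetFieldID(metricsCls, "widthPixels", "I"));
    *outH = env->GetIntField(metrics, env->GetFieldID(metricsCls, "heightPixels", "I"));
}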

Related

libav sws_scale() fails colorspace conversion on real device, works on emulator

I'm making a movie player with libav. I have decoding video packets working, playing in reverse working, and seeking working. All of this works on an x86 Android emulator, but fails on a real Android phone (arm64-v8a).
The failure is in sws_scale() - it returns 0. The video frames continue to be decoded properly with no errors.
There are no errors, warnings, or alerts from libav. I have connected an av_log callback:
void log_callback(void *ptr, int level, const char *fmt, va_list vargs) {
    if (level <= AV_LOG_WARNING)
        // use the v-variant since we have a va_list, and map to an Android priority
        __android_log_vprint(ANDROID_LOG_WARN, LOG_TAG, fmt, vargs);
}
uint64_t openMovie( char* path, int rotate, float javaDuration )
{
av_log_set_level(AV_LOG_WARNING);
av_log_set_callback(log_callback);
The code to do the sws_scale() is:
int JVM_getBitmapBuffer( JNIEnv* env, jobject thiz, jlong av, jobject bufferAsInt, jbyte transparent ) {
avblock *block = (avblock *) av;
if (!block) {
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG, " avblock is null");
return AVERROR(EINVAL);
}
if (!block->pCodecCtx) {
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG, " codecctx is null");
return AVERROR(EINVAL);
}
int width = block->pCodecCtx->width;
int height = block->pCodecCtx->height;
if (NULL == block->sws) {
__android_log_print( ANDROID_LOG_ERROR, LOG_TAG, "getBitmapBuffer:\n *** invalid sws context ***" );
}
int scaleRet = sws_scale( block->sws,
block->pFrame->data,
block->pFrame->linesize,
0,
height,
block->pFrameRGB->data,
block->pFrameRGB->linesize
);
if (scaleRet == 0 ) {
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG, " scale failed");
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG, " pframe linesize %d", block->pFrame->linesize[0]);
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG, " pframergb linesize %d", block->pFrameRGB->linesize[0]);
__android_log_print(ANDROID_LOG_ERROR, LOG_TAG, " height %d",
height);
return AVERROR(EINVAL);
}
Setting up the codec and the AVFrames:
//i have tried every combination of 1, 8, 16, and 32 for these values
int alignRGB = 32;
int align = 16;
int width = block->pCodecCtx->width;
int height = block->pCodecCtx->height;
block->pFrame = av_frame_alloc();
block->pFrameRGB = av_frame_alloc();
block->pFrameRGBBuffer = av_malloc(
(size_t)av_image_get_buffer_size(AV_PIX_FMT_RGB32, width, height, alignRGB)
);
av_image_fill_arrays(
block->pFrameRGB->data,
block->pFrameRGB->linesize,
block->pFrameRGBBuffer,
AV_PIX_FMT_RGB32,
width,
height,
alignRGB
);
block->pFrameBuffer = av_malloc(
(size_t) av_image_get_buffer_size(block->pCodecCtx->pix_fmt,
width, height, align
)
);
av_image_fill_arrays(
block->pFrame->data,
block->pFrame->linesize,
block->pFrameBuffer,
block->pCodecCtx->pix_fmt,
width, height,
align
);
block->sws = sws_getContext(
width, height,
AV_PIX_FMT_YUV420P,
width, height,
AV_PIX_FMT_RGB32,
SWS_BILINEAR, NULL, NULL, 0
);
Wildcards are that:
I'm using React-Native
My emulator is x86 android api 28
My real device is arm64-v8a AOSP (around API 28, I don't remember exactly)
Other notes:
libav .so files are compiled from mobile-ffmpeg project.
sws_scale also works for me on x86_64 Linux using SDL to present YV12
Test video is here: https://github.com/markkimsal/video-thumbnailer/tree/master/fixtures
block is a simple C struct with pointers to relevant AV memory structures.
Using FFMPEG 4.3.2
I'm pretty certain it has something to do with the pixel alignment. But documentation is practically non-existent on this topic. It could also be the difference between pixel formats RGBA and RGB32, or possibly little-endian vs big-endian.
This is a known bug in FFMPEG on ARM architecture.
A workaround was posted by mythtv that involves subtracting 1 from the destination width in order to bypass broken optimization code.
https://code.mythtv.org/trac/ticket/12888
https://code.mythtv.org/trac/changeset/7de03a90c1b144fc0067261af1c9cfdd8d358972/mythtv
Reported against FFMPEG 3.2.1
http://trac.ffmpeg.org/ticket/6192
Still exists in FFMPEG 4.3.2
int new_width = width;
#if ARCH_ARM
// The ARM build of FFMPEG has a bug that if sws_scale is
// called with source and dest sizes the same, and
// formats as shown below, it causes a bus error and the
// application core dumps. To avoid this I make a -1
// difference in the new width, causing it to bypass
// the code optimization which is failing.
if (pix_fmt == AV_PIX_FMT_YUV420P
&& dst_pix_fmt == AV_PIX_FMT_BGRA)
new_width = width - 1;
#endif
d->swsctx = sws_getCachedContext(d->swsctx, width, height, pix_fmt,
new_width, height, dst_pix_fmt,
SWS_FAST_BILINEAR, NULL, NULL, NULL);
UPDATE:
Building mobile-ffmpeg 4.3.2 with --debug option produces a working binary.
UPDATE 2:
The --debug flag does not seem to be working anymore; it must have been some unclean builds. This does seem to be related to the alpha channel.
The only non-alpha RGB format supported by both Android's Bitmap.Config and ffmpeg is RGB_565. Using 565 works with even-numbered destination widths.
in libswscale/yuv2rgb.c:
SwsFunc ff_yuv2rgb_get_ptr(SwsContext *c) {
...
switch(c->dstFormat) {
case AV_PIX_FMT_RGBA:
return (CONFIG_SWSCALE_ALPHA & isALPHA(c->srcFormat)) ? yuva2rgb_c: yuv2rgb_c_32;
....
case AV_PIX_FMT_RGB565:
return yuv2rgb_c_16_ordered_dither;
...
My source is YUV420P, which does not carry an alpha channel (unlike yuva420p). When my destination format is RGB565, this method seems to have no problems.
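A minimal sketch (mine, under the assumptions above) of that RGB_565 fallback, reusing the variables from the setup code earlier; on the Java side you would pair it with a Bitmap.Config.RGB_565 bitmap:
// Sketch: convert to RGB565 instead of RGB32, which stays off the alpha codepath.
// Per the note above, the destination width should be even.
block->sws = sws_getContext(
    width, height, AV_PIX_FMT_YUV420P,
    width, height, AV_PIX_FMT_RGB565,
    SWS_BILINEAR, NULL, NULL, NULL
);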
It's a shame that I can't produce any alpha channel for Android. I guess the real answer is to just use OpenGL and use a surface that supports blitting YUV directly.
UPDATE 3
I have found the codepaths in libswscale/swscale_unscaled.c and libswscale/yuv2rgb.c
If you scale the output, then you can convert to an RGBA destination format.
If you add the flag SWS_ACCURATE_RND to your SwsContext, you also avoid the bad codepath.
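For example, a minimal sketch of the earlier context setup with that flag added (same sizes and formats as before):
// Sketch: per the update above, SWS_ACCURATE_RND keeps sws_scale off the bad
// codepath, so an RGB32/RGBA destination works on ARM as well.
block->sws = sws_getContext(
    width, height, AV_PIX_FMT_YUV420P,
    width, height, AV_PIX_FMT_RGB32,
    SWS_BILINEAR | SWS_ACCURATE_RND,
    NULL, NULL, NULL
);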

Real-time image process and display using Android Camera2 api and ANativeWindow

I need to do some real-time image processing on the camera preview data, such as face detection (a C++ library), and then display the processed preview with the faces labeled on screen.
I have read http://nezarobot.blogspot.com/2016/03/android-surfacetexture-camera2-opencv.html and Eddy Talvala's answer to Android camera2 API - Display processed frame in real time. Following the two webpages, I managed to build the app (not calling the face detection lib, only trying to display the preview using ANativeWindow), but every time I run this app on Google Pixel - 7.1.0 - API 25 running on Genymotion, it always crashes with the following log:
08-28 14:23:09.598 2099-2127/tau.camera2demo A/libc: Fatal signal 11 (SIGSEGV), code 2, fault addr 0xd3a96000 in tid 2127 (CAMERA2)
[ 08-28 14:23:09.599 117: 117 W/ ]
debuggerd: handling request: pid=2099 uid=10067 gid=10067 tid=2127
I googled this but found no answer.
The whole project is on GitHub: https://github.com/Fung-yuantao/android-camera2demo
Here is the key code (I think).
Code in Camera2Demo.java:
private void startPreview(CameraDevice camera) throws CameraAccessException {
SurfaceTexture texture = mPreviewView.getSurfaceTexture();
// to set PREVIEW size
texture.setDefaultBufferSize(mPreviewSize.getWidth(),mPreviewSize.getHeight());
surface = new Surface(texture);
try {
// to set request for PREVIEW
mPreviewBuilder = camera.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
} catch (CameraAccessException e) {
e.printStackTrace();
}
mImageReader = ImageReader.newInstance(mImageWidth, mImageHeight, ImageFormat.YUV_420_888, 2);
mImageReader.setOnImageAvailableListener(mOnImageAvailableListener,mHandler);
mPreviewBuilder.addTarget(mImageReader.getSurface());
//output Surface
List<Surface> outputSurfaces = new ArrayList<>();
outputSurfaces.add(mImageReader.getSurface());
/*camera.createCaptureSession(
Arrays.asList(surface, mImageReader.getSurface()),
mSessionStateCallback, mHandler);
*/
camera.createCaptureSession(outputSurfaces, mSessionStateCallback, mHandler);
}
private CameraCaptureSession.StateCallback mSessionStateCallback = new CameraCaptureSession.StateCallback() {
@Override
public void onConfigured(CameraCaptureSession session) {
try {
updatePreview(session);
} catch (CameraAccessException e) {
e.printStackTrace();
}
}
@Override
public void onConfigureFailed(CameraCaptureSession session) {
}
};
private void updatePreview(CameraCaptureSession session)
throws CameraAccessException {
mPreviewBuilder.set(CaptureRequest.CONTROL_AF_MODE, CaptureRequest.CONTROL_AF_MODE_AUTO);
session.setRepeatingRequest(mPreviewBuilder.build(), null, mHandler);
}
private ImageReader.OnImageAvailableListener mOnImageAvailableListener = new ImageReader.OnImageAvailableListener() {
@Override
public void onImageAvailable(ImageReader reader) {
// get the newest frame
Image image = reader.acquireNextImage();
if (image == null) {
return;
}
// print image format
int format = reader.getImageFormat();
Log.d(TAG, "the format of captured frame: " + format);
// HERE to call jni methods
JNIUtils.display(image.getWidth(), image.getHeight(), image.getPlanes()[0].getBuffer(), surface);
//ByteBuffer buffer = image.getPlanes()[0].getBuffer();
//byte[] bytes = new byte[buffer.remaining()];
image.close();
}
};
Code in JNIUtils.java:
import android.media.Image;
import android.view.Surface;
import java.nio.ByteBuffer;
public class JNIUtils {
// TAG for JNIUtils class
private static final String TAG = "JNIUtils";
// Load native library.
static {
System.loadLibrary("native-lib");
}
public static native void display(int srcWidth, int srcHeight, ByteBuffer srcBuffer, Surface surface);
}
Code in native-lib.cpp:
#include <jni.h>
#include <string>
#include <android/log.h>
//#include <android/bitmap.h>
#include <android/native_window_jni.h>
#define LOGE(...) __android_log_print(ANDROID_LOG_ERROR, "Camera2Demo", __VA_ARGS__)
extern "C" {
JNIEXPORT jstring JNICALL Java_tau_camera2demo_JNIUtils_display(
JNIEnv *env,
jobject obj,
jint srcWidth,
jint srcHeight,
jobject srcBuffer,
jobject surface) {
/*
uint8_t *srcLumaPtr = reinterpret_cast<uint8_t *>(env->GetDirectBufferAddress(srcBuffer));
if (srcLumaPtr == nullptr) {
LOGE("srcLumaPtr null ERROR!");
return NULL;
}
*/
ANativeWindow * window = ANativeWindow_fromSurface(env, surface);
ANativeWindow_acquire(window);
ANativeWindow_Buffer buffer;
ANativeWindow_setBuffersGeometry(window, srcWidth, srcHeight, 0/* format unchanged */);
if (int32_t err = ANativeWindow_lock(window, &buffer, NULL)) {
LOGE("ANativeWindow_lock failed with error code: %d\n", err);
ANativeWindow_release(window);
return NULL;
}
memcpy(buffer.bits, srcBuffer, srcWidth * srcHeight * 4);
ANativeWindow_unlockAndPost(window);
ANativeWindow_release(window);
return NULL;
}
}
After I commented the memcpy out, the app no longer crashes but displays nothing. So I guess the problem now becomes how to correctly use memcpy to copy the captured/processed buffer to buffer.bits.
Update:
I changed
memcpy(buffer.bits, srcBuffer, srcWidth * srcHeight * 4);
to
memcpy(buffer.bits, srcLumaPtr, srcWidth * srcHeight * 4);
The app no longer crashes and starts to display, but it's displaying something strange.
As mentioned by yakobom, you're trying to copy a YUV_420_888 image directly into an RGBA_8888 destination (that's the default, if you haven't changed it). That won't work with just a memcpy.
You need to actually convert the data, and you need to ensure you don't copy too much - the sample code you have copies width*height*4 bytes, while a YUV_420_888 image takes up only stride*height*1.5 bytes (roughly). So when you copied, you were running way off the end of the buffer.
You also have to account for the stride provided at the Java level to correctly index into the buffer. This link from Microsoft has a useful diagram.
If you just care about the luminance (so grayscale output is enough), just duplicate the luminance channel into the R, G, and B channels. The pseudocode would be roughly:
uint8_t *outPtr = buffer.bits;
for (size_t y = 0; y < height; y++) {
uint8_t *rowPtr = srcLumaPtr + y * srcLumaStride;
for (size_t x = 0; x < width; x++) {
*(outPtr++) = *rowPtr;
*(outPtr++) = *rowPtr;
*(outPtr++) = *rowPtr;
*(outPtr++) = 255; // alpha for RGBA_8888
++rowPtr;
}
}
You'll need to read the srcLumaStride from the Image object (row stride of the first Plane) and pass it down via JNI as well.
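A minimal sketch of how the native signature could be extended for that (the srcLumaStride parameter name is mine, not from the project; on the Java side you would pass image.getPlanes()[0].getRowStride() and update the native declaration in JNIUtils.java to match):
#include <jni.h>
#include <cstdint>

// Sketch: display() with an extra stride parameter (hypothetical signature).
// The grayscale loop above then uses srcLumaStride to step from row to row.
extern "C" JNIEXPORT void JNICALL Java_tau_camera2demo_JNIUtils_display(
        JNIEnv *env, jclass clazz,
        jint srcWidth, jint srcHeight, jint srcLumaStride,
        jobject srcBuffer, jobject surface) {
    uint8_t *srcLumaPtr =
            reinterpret_cast<uint8_t *>(env->GetDirectBufferAddress(srcBuffer));
    if (srcLumaPtr == nullptr) {
        return;
    }
    // ... lock the ANativeWindow as in the original code, then run the
    // luminance-copy loop shown above with srcLumaStride as the row stride.
}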
Just to put it as an answer, to avoid a long chain of comments - such a crash issue may be due to an improper number of bytes being copied by the memcpy (UPDATE following other comments: in this case it was due to a forbidden direct copy).
If you are now getting a weird image, it is probably another issue - I would suspect the image format; try to modify that.

ZBar Android scan local QR or bar code image

I am trying to scan a local image through ZBar, but since ZBar doesn't provide any documentation for Android (only detailed documentation for iPhone), I have been customizing the CameraTest activity a lot, without any success.
In the ZBar CameraTest activity:
PreviewCallback previewCb = new PreviewCallback() {
public void onPreviewFrame(byte[] data, Camera camera) {
Camera.Parameters parameters = camera.getParameters();
Size size = parameters.getPreviewSize();
Image barcode = new Image(size.width, size.height, "Y800");
barcode.setData(data);
int result = scanner.scanImage(barcode);
if (result != 0) {
previewing = false;
mCamera.setPreviewCallback(null);
mCamera.stopPreview();
SymbolSet syms = scanner.getResults();
for (Symbol sym : syms) {
scanText.setText("barcode result " + sym.getData());
barcodeScanned = true;
}
}
}
};
I want to customize this code so that it takes a local image from the gallery and gives me the result. How do I change it to feed it a local image from the gallery and scan that image?
Try this out:
Bitmap barcodeBmp = BitmapFactory.decodeResource(getResources(),
R.drawable.barcode);
int width = barcodeBmp.getWidth();
int height = barcodeBmp.getHeight();
int[] pixels = new int[width * height];
barcodeBmp.getPixels(pixels, 0, width, 0, 0, width, height);
Image barcode = new Image(width, height, "RGB4");
barcode.setData(pixels);
int result = scanner.scanImage(barcode.convert("Y800"));
Or using the API, refer to HOWTO: Scan images using the API.
The Java port of ZBar's scanner accepts only the Y800 and GRAY pixel formats (https://github.com/ZBar/ZBar/blob/master/java/net/sourceforge/zbar/ImageScanner.java), which is fine for raw bytes captured from the camera preview. But images from Android's Gallery are usually JPEG-compressed and their pixels are not in Y800, so you can make the scanner work by converting the image's pixels to Y800 format. See this official support forum's thread for sample code. To calculate the pixel array length, just use the imageWidth*imageHeight formula.
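As a rough sketch of that conversion (mine, not the forum thread's code): given packed ARGB pixels as returned by Bitmap.getPixels(), the Y800 buffer of imageWidth*imageHeight bytes can be built like this:
#include <cstdint>
#include <vector>

// Sketch: convert packed 32-bit ARGB pixels to an 8-bit Y800 (grayscale) buffer
// of width * height bytes, which is what ZBar's scanner accepts.
std::vector<uint8_t> argbToY800(const int32_t *pixels, int width, int height) {
    std::vector<uint8_t> y800(static_cast<size_t>(width) * height);
    for (size_t i = 0; i < y800.size(); ++i) {
        uint32_t p = static_cast<uint32_t>(pixels[i]);
        uint32_t r = (p >> 16) & 0xFF;
        uint32_t g = (p >> 8) & 0xFF;
        uint32_t b = p & 0xFF;
        // integer approximation of BT.601 luma
        y800[i] = static_cast<uint8_t>(((66 * r + 129 * g + 25 * b + 128) >> 8) + 16);
    }
    return y800;
}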
@shujatAli your example image's palette format is grayscale; converting it to RGB made your code snippet work for me. You can change the palette format using an image manipulation program. I used GIMP.
I don't know exactly how on Android, but on iOS do it as follows:
//Action when user tap on button to call ZBarReaderController
- (IBAction)brownQRImageFromAlbum:(id)sender {
ZBarReaderController *reader = [ZBarReaderController new];
reader.readerDelegate = self;
reader.sourceType = UIImagePickerControllerSourceTypePhotoLibrary; // Set ZbarReaderController point to the local album
ZBarImageScanner *scanner = reader.scanner;
[scanner setSymbology: ZBAR_QRCODE
config: ZBAR_CFG_ENABLE
to: 1];
[self presentModalViewController: reader animated: YES];
}
- (void) imagePickerController: (UIImagePickerController *) picker
didFinishPickingMediaWithInfo: (NSDictionary *) info {
UIImage *imageCurrent = (UIImage*)[info objectForKey:UIImagePickerControllerOriginalImage];
self.imageViewQR.image = imageCurrent;
imageCurrent = nil;
// ADD: get the decode results
id<NSFastEnumeration> results =
[info objectForKey: ZBarReaderControllerResults];
ZBarSymbol *symbol = nil;
for (symbol in results)
break;
NSLog(@"Content: %@", symbol.data);
[picker dismissModalViewControllerAnimated: NO];
}
Reference for more details: http://zbar.sourceforge.net/iphone/sdkdoc/optimizing.html
Idea from HOWTO: Scan images using the API:
#include <iostream>
#include <Magick++.h>
#include <zbar.h>
using namespace std;
using namespace zbar;
int main (int argc, char **argv)
{
if(argc < 2)
return(1);
// Create a reader
ImageScanner scanner;
// Configure the reader
scanner.set_config(ZBAR_NONE, ZBAR_CFG_ENABLE, 1);
// Obtain image data
Magick::Image magick(argv[1]); // Read an image file
int width = magick.columns(); // Extract dimensions
int height = magick.rows();
Magick::Blob blob; // Extract the raw data
magick.modifyImage();
magick.write(&blob, "GRAY", 8);
const void *raw = blob.data();
// Wrap image data
Image image(width, height, "Y800", raw, width * height);
// Scan the image for barcodes
int n = scanner.scan(image);
// Extract results
for (Image::SymbolIterator symbol = image.symbol_begin();
symbol != image.symbol_end();
++symbol) {
// Do something useful with results
cout << "decoded " << symbol->get_type_name()
<< " symbol \"" << symbol->get_data() << '"' << endl;
}
// Clean up
image.set_data(NULL, 0);
return(0);
}
Follow the above code and adapt it to your programming language.

android ANativeWindow_lock api doesn't work for GLSurfaceView

I'm looking for a way to get the drawing buffers from an Android GLSurfaceView very quickly.
Even though I know glReadPixels can do this job, it is too slow for reading the drawing buffer.
I want to read the buffers while maintaining 30 fps.
The ANativeWindow API seems like what I am looking for..
ANativeWindow api performance
I couldn't find any example of the ANativeWindow API used with a GLSurfaceView.
My procedure:
1. Send the GLSurfaceView surface to JNI code (using GLSurfaceView.getHolder().getSurface()).
2. Get the window handle using the ANativeWindow_fromSurface method.
3. Set the window buffers geometry.
4. Lock the surface and get the window buffer.
5. Do something with this buffer.
6. Unlock and post the window.
I tried the JNI code below using the Android "BasicGLSurfaceView" example.
JNIEXPORT void JNICALL Java_com_example_android_basicglsurfaceview_BasicGLSurfaceViewActivity_readSurface(JNIEnv* jenv, jobject obj, jobject surface)
{
LOG_INFO("Java_com_example_android_basicglsurfaceview_BasicGLSurfaceViewActivity_readSurface");
if (surface != 0) {
ANativeWindow *window = ANativeWindow_fromSurface(jenv, surface);
LOG_INFO("Got window %p", window);
if(window > 0)
{
int width = ANativeWindow_getWidth(window);
int height = ANativeWindow_getHeight(window);
LOG_INFO("Got window %d %d", width,height);
ANativeWindow_setBuffersGeometry(window,0,0,WINDOW_FORMAT_RGBA_8888);
ANativeWindow_Buffer buffer;
memset((void*)&buffer,0,sizeof(buffer));
int lockResult = -22;
lockResult = ANativeWindow_lock(window, &buffer, NULL);
if (lockResult == 0) {
LOG_INFO("ANativeWindow_locked");
ANativeWindow_unlockAndPost(window);
}
else
{
LOG_INFO("ANativeWindow_lock failed error %d",lockResult);
}
LOG_INFO("Releasing window");
ANativeWindow_release(window);
}
} else {
LOG_INFO("surface is null");
}
return;
}
The ANativeWindow_fromSurface, getWidth/getHeight, and setBuffersGeometry APIs work well.
But the ANativeWindow_lock API always fails, returning -22.
Error Message
[Surfaceview] connect: already connected(cur=1, req=2)
I tried this code in the Renderer's onDrawFrame, on the main thread, and in onSurfaceChanged,
but it always returns -22.
Am I calling this API in the wrong place?
Is it possible to use ANativeWindow_lock for a GLSurfaceView?
Here is my example code
Any help will be really appreciated~
Try doing this instead:
http://forums.arm.com/index.php?/topic/15782-glreadpixels/
Hopefully you can use an ANativeWindow for the buffer so you don't have to call gralloc directly...
I'm not sure that the memset operation is needed.
This worked for me:
ANativeWindow *window = ANativeWindow_fromSurface(env, surface);
if(window > 0){
unsigned char *data = ... // an RGBA(8888) image
int32_t w = ANativeWindow_getWidth(window);
int32_t h = ANativeWindow_getHeight(window);
ANativeWindow_setBuffersGeometry(window, w, h, WINDOW_FORMAT_RGBA_8888);
ANativeWindow_Buffer buffer;
int lockResult = -1;
lockResult = ANativeWindow_lock(window, &buffer, NULL);
if(lockResult == 0){
//write data
memcpy(buffer.bits, data, w * h * 4);
ANativeWindow_unlockAndPost(window);
}
//free data...
ANativeWindow_release(window);
}
When you encounter the same error message,
[Surfaceview] connect: already connected
you can resolve it by calling GLSurfaceView.onPause() before calling the native part.
You can then render with ANativeWindow_lock, write the frame buffer, and ANativeWindow_unlockAndPost in native code.
// Java
SurfaceHolder holder = mGLSurfaceView.getHolder();
mPreviewSurface = holder.getSurface();
mGLSurfaceView.onPause();
camera.nativeSetPreviewDisplay(mPreviewSurface);
I call GLSurfaceView.onPause() only to release the EGL context. The GLSurfaceView will then be rendered by the native code.
GLSurfaceView
At least I think this can be the answer to the question "is it possible to use ANativeWindow_lock for GLSurfaceView?".
Yes, it is.
But this way you can render the view only through the window buffer. You cannot use the OpenGL API, because GLSurfaceView.Renderer's onDrawFrame() is not called after the GLSurfaceView is paused. It still shows better performance than using glReadPixels, but it is useless if you need the OpenGL API.
Shader calls should be made on the GL thread, that is, in onSurfaceChanged(), onSurfaceCreated(), or onDrawFrame().

createWindowSurface failed: EGL_BAD_MATCH?

The Android version is 2.2.1, the device is a Samsung Galaxy II, and the full crash log is:
java.lang.RuntimeException: createWindowSurface failed: EGL_BAD_MATCH
at android.opengl.GLSurfaceView$EglHelper.throwEglException(GLSurfaceView.java:1077)
at android.opengl.GLSurfaceView$EglHelper.createSurface(GLSurfaceView.java:981)
at android.opengl.GLSurfaceView$GLThread.guardedRun(GLSurfaceView.java:1304)
at android.opengl.GLSurfaceView$GLThread.run(GLSurfaceView.java:1116)
This is the code relevant to the crash:
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
requestWindowFeature(Window.FEATURE_NO_TITLE);
getWindow().setFlags(WindowManager.LayoutParams.FLAG_FULLSCREEN,
WindowManager.LayoutParams.FLAG_FULLSCREEN);
glView = new GLSurfaceView(this);
glView.setEGLConfigChooser(8 , 8, 8, 8, 16, 0);
glView.setRenderer(this);
setContentView(glView);
    // etc...
}
I used setEGLConfigChooser() because the app would crash on API 17 if it wasn't there, and for this specific device it is crashing on, from looking around it has something to do with the device's PixelFormat.
What I'm wondering is how I can change the code so it will not crash on the Samsung Galaxy II running Android 2.2.1. I can't test this in an emulator and I don't have the device to test on; I just need code that is sure to work, and I'm not sure how to change it.
Update: I found a way to work around this issue and actually it is fairly straightforward.
First of all: Android's default EGLConfigChooser implementation makes bad decisions on some
devices. Especially the older Android devices seem to suffer this EGL_BAD_MATCH issue. During my debugging sessions I also discovered that those older troublemaker devices had quite a limited set of available OpenGL ES configurations.
The cause of this "bad match" problem is more than just a mismatch between the GLSurfaceView's pixel format and the color bit depth settings of OpenGL ES. Overall we have to deal with the following issues:
A mismatch of the OpenGL ES API version
A mismatch of the requested target surface type
The requested color bit depth cannot be rendered on the surface view
The Android developer documentation is severely lacking when it comes to explaining the OpenGL ES API. It is therefore important to read the original documentation over at Khronos.org. Especially the doc page about eglChooseConfig is helpful here.
In order to remedy above listed problems you have to make sure to specify the following minimum configuration:
EGL_RENDERABLE_TYPE must match the OpenGL ES API version you are using. In the likely case of OpenGL ES 2.x you must set that attribute to 4 (see egl.h)
EGL_SURFACE_TYPE should have the EGL_WINDOW_BIT set
And of course you also want to set up an OpenGL ES context that provides you with the correct color, depth and stencil buffer settings.
Unfortunately it is not possible to cherry-pick these configuration options in a straightforward way. We have to choose from whatever is available on any given device. That's why it is necessary to implement a custom EGLConfigChooser, that goes through the list of available configuration sets and picks the most suitable one that matches best the given criteria.
Anyway, I whipped up a sample implementation for such a config chooser:
public class MyConfigChooser implements EGLConfigChooser {
final private static String TAG = "MyConfigChooser";
// This constant is not defined in the Android API, so we need to do that here:
final private static int EGL_OPENGL_ES2_BIT = 4;
// Our minimum requirements for the graphics context
private static int[] mMinimumSpec = {
// We want OpenGL ES 2 (or set it to any other version you wish)
EGL10.EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
// We want to render to a window
EGL10.EGL_SURFACE_TYPE, EGL10.EGL_WINDOW_BIT,
// We do not want a translucent window, otherwise the
// home screen or activity in the background may shine through
EGL10.EGL_TRANSPARENT_TYPE, EGL10.EGL_NONE,
// indicate that this list ends:
EGL10.EGL_NONE
};
private int[] mValue = new int[1];
protected int mAlphaSize;
protected int mBlueSize;
protected int mDepthSize;
protected int mGreenSize;
protected int mRedSize;
protected int mStencilSize;
/**
* The constructor lets you specify your minimum pixel format,
* depth and stencil buffer requirements.
*/
public MyConfigChooser(int r, int g, int b, int a, int depth, int
stencil) {
mRedSize = r;
mGreenSize = g;
mBlueSize = b;
mAlphaSize = a;
mDepthSize = depth;
mStencilSize = stencil;
}
@Override
public EGLConfig chooseConfig(EGL10 egl, EGLDisplay display) {
int[] arg = new int[1];
egl.eglChooseConfig(display, mMinimumSpec, null, 0, arg);
int numConfigs = arg[0];
Log.i(TAG, numConfigs + " configurations available");
if(numConfigs <= 0) {
// Ooops... even the minimum spec is not available here
return null;
}
EGLConfig[] configs = new EGLConfig[numConfigs];
egl.eglChooseConfig(display, mMinimumSpec, configs,
numConfigs, arg);
// Let's do the hard work now (see next method below)
EGLConfig chosen = chooseConfig(egl, display, configs);
if(chosen == null) {
throw new RuntimeException(
"Could not find a matching configuration out of "
+ configs.length + " available.");
}
// Success
return chosen;
}
/**
* This method iterates through the list of configurations that
* fulfill our minimum requirements and tries to pick one that matches best
* our requested color, depth and stencil buffer requirements that were set using
* the constructor of this class.
*/
public EGLConfig chooseConfig(EGL10 egl, EGLDisplay display,
EGLConfig[] configs) {
EGLConfig bestMatch = null;
int bestR = Integer.MAX_VALUE, bestG = Integer.MAX_VALUE,
bestB = Integer.MAX_VALUE, bestA = Integer.MAX_VALUE,
bestD = Integer.MAX_VALUE, bestS = Integer.MAX_VALUE;
for(EGLConfig config : configs) {
int r = findConfigAttrib(egl, display, config,
EGL10.EGL_RED_SIZE, 0);
int g = findConfigAttrib(egl, display, config,
EGL10.EGL_GREEN_SIZE, 0);
int b = findConfigAttrib(egl, display, config,
EGL10.EGL_BLUE_SIZE, 0);
int a = findConfigAttrib(egl, display, config,
EGL10.EGL_ALPHA_SIZE, 0);
int d = findConfigAttrib(egl, display, config,
EGL10.EGL_DEPTH_SIZE, 0);
int s = findConfigAttrib(egl, display, config,
EGL10.EGL_STENCIL_SIZE, 0);
if(r <= bestR && g <= bestG && b <= bestB && a <= bestA
&& d <= bestD && s <= bestS && r >= mRedSize
&& g >= mGreenSize && b >= mBlueSize
&& a >= mAlphaSize && d >= mDepthSize
&& s >= mStencilSize) {
bestR = r;
bestG = g;
bestB = b;
bestA = a;
bestD = d;
bestS = s;
bestMatch = config;
}
}
return bestMatch;
}
private int findConfigAttrib(EGL10 egl, EGLDisplay display,
EGLConfig config, int attribute, int defaultValue) {
if(egl.eglGetConfigAttrib(display, config, attribute,
mValue)) {
return mValue[0];
}
return defaultValue;
}
}
I don't have the reputation score to add a comment yet, or else I would have put a brief comment on Nobu Games' answer. I encountered this same EGL_BAD_MATCH error and their answer helped put me on the right path. Instead, I have to create a separate answer.
As Nobu Games mentions, there appears to be a mismatch between the GLSurfaceView's PixelFormat and the pixel format parameters passed to setEGLConfigChooser(). In my case, I was asking for RGBA8888 but my GLSurfaceView was RGB565. This caused the EGL_BAD_MATCH error later on within my initialization.
The enhancement to their answer is that you can get the desired PixelFormat for the window and use it to dynamically choose an EGL context.
To make my code as generic as possible, I changed the GLSurfaceView to take in an additional parameter -- the pixel format of the display. I get this from my activity by calling:
getWindowManager().getDefaultDisplay().getPixelFormat();
I pass this value down to the GLSurfaceView and then extract the optimal bit depths for each of RGBA like this:
if (pixelFormatVal > 0) {
PixelFormat info = new PixelFormat();
PixelFormat.getPixelFormatInfo(pixelFormatVal, info);
if (PixelFormat.formatHasAlpha(pixelFormatVal)) {
if (info.bitsPerPixel >= 24) {
m_desiredABits = 8;
} else {
m_desiredABits = 6; // total guess
}
} else {
m_desiredABits = 0;
}
if (info.bitsPerPixel >= 24) {
m_desiredRBits = 8;
m_desiredGBits = 8;
m_desiredBBits = 8;
} else if (info.bitsPerPixel >= 16) {
m_desiredRBits = 5;
m_desiredGBits = 6;
m_desiredBBits = 5;
} else {
m_desiredRBits = 4;
m_desiredGBits = 4;
m_desiredBBits = 4;
}
} else {
m_desiredRBits = 8;
m_desiredGBits = 8;
m_desiredBBits = 8;
}
I then pass these values down to my config chooser. This code works for me on an RGB565 device as well as an RGBA8888 device.
My assumption is that the vendor has chosen the default for a reason and that it will give the most performant results. Of course I have nothing to back that statement up, but it is the strategy I'm going with.
