I am new to Android, but have done some iOS work. I get an error when I load a bitmap from a camera image on a Samsung S4 (4.4.4):
bmp = BitmapFactory.decodeFile(photoFilePath, options);
imgView.setImageBitmap(bmp);
Here imgView is a TouchImageView. (I tried using ImageViewZoom but couldn't get it to import into Android Studio.)
I get this error:
W/OpenGLRenderer: Bitmap too large to be uploaded into a texture (4128x3096, max=4096x4096)
Coming from iOS I am amazed that Android can't load an image from the phone's own camera! I hope there is a way that is not too hard. It must be possible since the gallery seems to be able to do it.
The examples I see on the web (suggesting that you subsample the image) seem to miss the point. With pan and zoom (which should be built into an ImageView), an image view is just a smaller window onto a much larger image. You should not have to fit the bitmap to the pixel size of the view. I fear that this silly advice (subsampling) reflects a serious deficiency in Android.
I believe the error message comes from OpenGL, not Android itself. The GPU puts a restriction on the maximum texture resolution; in your case that is 4096x4096 pixels. Developers are advised to load a scaled-down version into the UI because it is not reasonable to view a huge image on a limited display such as a phone screen. More info about this can be found here.
If you need to view a large image at actual size, you can draw only the visible part of the image.
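The "draw only the visible part" approach boils down to mapping the view's pan offset and zoom factor back to a rectangle in source-image coordinates. Here is a minimal pure-Java sketch of that arithmetic (the class and parameter names are mine); on Android the resulting rectangle could then be passed to BitmapRegionDecoder.decodeRegion() so only that region is ever decoded:

```java
// Sketch: compute which part of the full-size image is visible in the view.
// scrollX/scrollY are pan offsets in screen pixels; scale is the zoom factor
// (screen pixels per image pixel). All names are illustrative.
public class VisibleRegion {

    // Returns {left, top, right, bottom} in source-image coordinates,
    // clamped to the image bounds. On Android this rect would be handed
    // to BitmapRegionDecoder.decodeRegion().
    public static int[] compute(int imageW, int imageH,
                                int viewW, int viewH,
                                float scrollX, float scrollY, float scale) {
        int left = Math.max(0, (int) Math.floor(scrollX / scale));
        int top = Math.max(0, (int) Math.floor(scrollY / scale));
        int right = Math.min(imageW, (int) Math.ceil((scrollX + viewW) / scale));
        int bottom = Math.min(imageH, (int) Math.ceil((scrollY + viewH) / scale));
        return new int[] { left, top, right, bottom };
    }

    public static void main(String[] args) {
        // 4128x3096 camera image, 1080x1920 view, 1:1 zoom, panned to origin
        int[] r = compute(4128, 3096, 1080, 1920, 0f, 0f, 1f);
        System.out.println(r[0] + "," + r[1] + "," + r[2] + "," + r[3]);
    }
}
```

At 1:1 zoom the visible region is never larger than the view, so the decoded bitmap always fits comfortably under the texture limit.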
I think the answer is "no easy way". I suspected this, but had to check to make sure. I did find that setting android:hardwareAccelerated="false" in the manifest allowed large bitmaps to be loaded, but then the app would soon crash with an out-of-memory error. Interestingly, the lack of hardware acceleration did not seem to make much of a difference in performance.
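As a middle ground, the attribute can also be set per activity rather than for the whole application, so only the screen that shows the full-resolution bitmap loses the GPU path. A manifest sketch (the activity name is illustrative):

```xml
<application android:hardwareAccelerated="true">
    <!-- Software rendering only for the screen showing the huge bitmap;
         this avoids the texture limit but costs CPU time and memory. -->
    <activity
        android:name=".FullImageActivity"
        android:hardwareAccelerated="false" />
</application>
```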
My library is designed to solve this problem: it subsamples the image initially, then loads higher-resolution tiles as you zoom in. If you look closely, you can see that most Android gallery apps do the same thing.
https://github.com/davemorrissey/subsampling-scale-image-view
What you can do is get the maximum texture size and scale the image down if either dimension is larger.
Unfortunately I do not remember where I got this snippet (it is not mine), but it has worked fine for this purpose in production for quite a while now.
public int provideMaxTextureSize() {
    EGL10 egl = (EGL10) EGLContext.getEGL();
    EGLDisplay dpy = egl.eglGetDisplay(EGL10.EGL_DEFAULT_DISPLAY);
    int[] vers = new int[2];
    egl.eglInitialize(dpy, vers);

    int[] configAttr = {
            EGL10.EGL_COLOR_BUFFER_TYPE, EGL10.EGL_RGB_BUFFER,
            EGL10.EGL_LEVEL, 0,
            EGL10.EGL_SURFACE_TYPE, EGL10.EGL_PBUFFER_BIT,
            EGL10.EGL_NONE
    };
    EGLConfig[] configs = new EGLConfig[1];
    int[] numConfig = new int[1];
    egl.eglChooseConfig(dpy, configAttr, configs, 1, numConfig);
    if (numConfig[0] == 0) {
        // TROUBLE! No config found; fall back to the hardware-accelerated minimum.
        return 2048;
    }
    EGLConfig config = configs[0];

    int[] surfAttr = {
            EGL10.EGL_WIDTH, 64,
            EGL10.EGL_HEIGHT, 64,
            EGL10.EGL_NONE
    };
    EGLSurface surf = egl.eglCreatePbufferSurface(dpy, config, surfAttr);
    final int EGL_CONTEXT_CLIENT_VERSION = 0x3098; // missing in EGL10
    int[] ctxAttrib = {
            EGL_CONTEXT_CLIENT_VERSION, 1,
            EGL10.EGL_NONE
    };
    EGLContext ctx = egl.eglCreateContext(dpy, config, EGL10.EGL_NO_CONTEXT, ctxAttrib);
    egl.eglMakeCurrent(dpy, surf, surf, ctx);

    int[] maxSize = new int[1];
    GLES10.glGetIntegerv(GLES10.GL_MAX_TEXTURE_SIZE, maxSize, 0);

    egl.eglMakeCurrent(dpy, EGL10.EGL_NO_SURFACE, EGL10.EGL_NO_SURFACE,
            EGL10.EGL_NO_CONTEXT);
    egl.eglDestroySurface(dpy, surf);
    egl.eglDestroyContext(dpy, ctx);
    egl.eglTerminate(dpy);
    return maxSize[0];
}
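Once you have the limit, you can derive a power-of-two inSampleSize for BitmapFactory.Options from it before decoding, since decoders round inSampleSize to a power of two anyway. A minimal pure-Java sketch (the helper name is mine):

```java
public class SampleSize {

    // Smallest power-of-two sample size such that both scaled dimensions
    // fit within maxTextureSize.
    public static int compute(int width, int height, int maxTextureSize) {
        int inSampleSize = 1;
        while (width / inSampleSize > maxTextureSize
                || height / inSampleSize > maxTextureSize) {
            inSampleSize *= 2;
        }
        return inSampleSize;
    }

    public static void main(String[] args) {
        // 4128x3096 photo against a 4096px texture limit
        System.out.println(compute(4128, 3096, 4096)); // prints 2
    }
}
```

On Android you would then set options.inSampleSize = SampleSize.compute(w, h, provideMaxTextureSize()) before the real BitmapFactory.decodeFile() call.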
Previous versions of the Skobbler SDK allowed the developer to request a screenshot of the current mapView state by calling something like
mapView.requestScreenshot();
Followed by overriding a callback to listen for the bitmap result similar to:
@Override
public void onScreenshotReady(Bitmap bitmap) {
    // do something with the passed bitmap
}
This code worked as expected in SDK version 2.5.1, but somewhere along the line the SDK appears to have changed such that it no longer works. Now, when we receive the bitmap object, its dimensions match those of the map view, but the content is just transparent pixels.
It's almost as if the bitmap was properly initialized with a transparent background and the correct width and height, but the internal request for the surface view to render into the bitmap failed.
Some additional details: the call to mapView.requestScreenshot() is made on the main UI thread, as is the handling of the onScreenshotReady() callback.
Looking at the logs, the only output I see in between making the requestScreenshot() call and the calling of the callback is this possibly related error:
D/SKMapActivity: requesting screenshot
E/libEGL: call to OpenGL ES API with no current context (logged once per thread)
D/SKMapActivity: screenshot ready
Since I'm not sure of the internals of how the SKMapSurfaceView class requests the render, I'm not sure if there is an additional step I need to take to ensure that a current OpenGL ES context is in place when the screenshot request is made.
Does anyone have any thoughts on the matter? Thanks!
Keith
You could force it to use OpenGL ES version 2.0, with something like this:
EGL10 mEgl = (EGL10) EGLContext.getEGL();
int[] version = new int[2];
EGLDisplay display = mEgl.eglGetDisplay(EGL10.EGL_DEFAULT_DISPLAY);
boolean success = mEgl.eglInitialize(display, version);

int EGL_OPENGL_ES2_BIT = 4;
int[] configAttribs = {
        EGL10.EGL_RED_SIZE, 4, EGL10.EGL_GREEN_SIZE, 4, EGL10.EGL_BLUE_SIZE, 4,
        EGL10.EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT, EGL10.EGL_NONE
};
EGLConfig[] configs = new EGLConfig[10];
int[] num_config = new int[1];
mEgl.eglChooseConfig(display, configAttribs, configs, 10, num_config);
Log.d("OpenGL", "eglInitialize succeeded: " + String.valueOf(success));
EGLConfig eglConfig = this.mapView.chooseConfig(mEgl, display);
I am trying to get the maximum texture size limit in Android for OpenGL 2.0.
But I've found that the following call only works from within an OpenGL context; in other words, I must have a GL surface, a GL renderer, and so on, which I don't want.
int[] maxTextureSize = new int[1];
GLES20.glGetIntegerv(GLES20.GL_MAX_TEXTURE_SIZE, maxTextureSize, 0);
So I came up with the following algorithm, which gives me the maximum texture size without having to create any surface or renderer.
It works correctly, so my question is whether this will work on all Android devices, and whether anyone can spot a bug, just in case.
public int getMaximumTextureSize()
{
    EGL10 egl = (EGL10) EGLContext.getEGL();
    EGLDisplay display = egl.eglGetDisplay(EGL10.EGL_DEFAULT_DISPLAY);

    // Initialise
    int[] version = new int[2];
    egl.eglInitialize(display, version);

    // Query total number of configurations
    int[] totalConfigurations = new int[1];
    egl.eglGetConfigs(display, null, 0, totalConfigurations);

    // Query actual list of configurations
    EGLConfig[] configurationsList = new EGLConfig[totalConfigurations[0]];
    egl.eglGetConfigs(display, configurationsList, totalConfigurations[0], totalConfigurations);

    int[] textureSize = new int[1];
    int maximumTextureSize = 0;

    // Iterate through all the configurations to locate the maximum texture size
    for (int i = 0; i < totalConfigurations[0]; i++)
    {
        // Only need to check the width since OpenGL textures are always square
        egl.eglGetConfigAttrib(display, configurationsList[i], EGL10.EGL_MAX_PBUFFER_WIDTH, textureSize);

        // Keep track of the maximum texture size
        if (maximumTextureSize < textureSize[0])
        {
            maximumTextureSize = textureSize[0];
        }
        Log.i("GLHelper", Integer.toString(textureSize[0]));
    }

    // Release
    egl.eglTerminate(display);
    Log.i("GLHelper", "Maximum GL texture size: " + Integer.toString(maximumTextureSize));
    return maximumTextureSize;
}
The PBUFFER max size is unfortunately not related to the max texture size (though the two may happen to be equal).
I believe the best way to obtain the max texture size is to create a GL context (the same context on which you will actually use these textures) and ask for GL_MAX_TEXTURE_SIZE.
There is a strong reason behind this: the OpenGL driver is not initialized for the current process before surface (and context) creation. Some drivers perform underlying HW/SKU detection at initialization and calculate maximum surface sizes depending on hardware capabilities.
Furthermore, the max texture size is permitted to vary depending on the context (and the EGLConfig the context was created on).
And one more thing: eglGetConfigs will get you all EGLConfigs, even those from the default software Android renderer, or those from the OpenGL ES 1.1 CM hardware driver (if there are separate drivers for 1.1 and 2.0 on the target platform). Drivers are fairly independent in the graphics stack and can return different maxima.
I'm trying to enable hardware acceleration in Honeycomb and display some bitmaps on a Canvas.
All works fine, but for large bitmaps (>2048 in either dimension), I get this error in the log:
OpenGLRenderer: Bitmap too large to be uploaded into a texture
I know this is because of a hardware limitation, and I can work around it by reducing the maximum bitmap size to be displayed when hardware acceleration is enabled (checked via View.isHardwareAccelerated()).
My question is: how do I easily determine the maximum texture size available for bitmap drawing by the hardware?
2048 seems to be the limit on my device, but it may be different on other devices.
Edit: I'm not writing an OpenGL app, just a normal app that can utilize hardware acceleration. Thus I'm not familiar with OpenGL at all; I just see an OpenGL-related error in the log and am looking to solve it.
Currently the minimum limit is 2048px (i.e. the hardware must support textures of at least 2048x2048). In ICS we will introduce a new API on the Canvas class that will give you this information:
Canvas.getMaximumBitmapWidth() and Canvas.getMaximumBitmapHeight().
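On ICS (API 14) and later, those methods can be queried directly from the Canvas passed to onDraw(). A minimal sketch of a custom view using them (the class name and fallback behavior are illustrative; this needs a device to run):

```java
import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.view.View;

// Illustrative view: checks the Canvas limit before drawing a large bitmap.
public class LargeBitmapView extends View {
    private Bitmap bitmap;

    public LargeBitmapView(Context context) {
        super(context);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        if (bitmap == null) return;
        // Available since API 14; reflects the limit of this canvas,
        // whether it is hardware accelerated or not.
        int maxW = canvas.getMaximumBitmapWidth();
        int maxH = canvas.getMaximumBitmapHeight();
        if (bitmap.getWidth() <= maxW && bitmap.getHeight() <= maxH) {
            canvas.drawBitmap(bitmap, 0, 0, null);
        }
        // else: draw a scaled-down copy, or tile the image in pieces
    }
}
```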
Another way of getting the maximum allowed size would be to loop through all EGL10 configurations and keep track of the largest size.
public static int getMaxTextureSize() {
    // Safe minimum default size
    final int IMAGE_MAX_BITMAP_DIMENSION = 2048;

    // Get EGL display
    EGL10 egl = (EGL10) EGLContext.getEGL();
    EGLDisplay display = egl.eglGetDisplay(EGL10.EGL_DEFAULT_DISPLAY);

    // Initialise
    int[] version = new int[2];
    egl.eglInitialize(display, version);

    // Query total number of configurations
    int[] totalConfigurations = new int[1];
    egl.eglGetConfigs(display, null, 0, totalConfigurations);

    // Query actual list of configurations
    EGLConfig[] configurationsList = new EGLConfig[totalConfigurations[0]];
    egl.eglGetConfigs(display, configurationsList, totalConfigurations[0], totalConfigurations);

    int[] textureSize = new int[1];
    int maximumTextureSize = 0;

    // Iterate through all the configurations to locate the maximum texture size
    for (int i = 0; i < totalConfigurations[0]; i++) {
        // Only need to check the width since OpenGL textures are always square
        egl.eglGetConfigAttrib(display, configurationsList[i], EGL10.EGL_MAX_PBUFFER_WIDTH, textureSize);

        // Keep track of the maximum texture size
        if (maximumTextureSize < textureSize[0])
            maximumTextureSize = textureSize[0];
    }

    // Release
    egl.eglTerminate(display);

    // Return the largest texture size found, or the default
    return Math.max(maximumTextureSize, IMAGE_MAX_BITMAP_DIMENSION);
}
From my testing, this is pretty reliable and doesn't require you to create an OpenGL context.
Performance-wise, it took 18 milliseconds to execute on my Note 2 and only 4 milliseconds on my G3.
If you want to know the texture size limit of your device at runtime (because it changes from device to device), you have to call this method:
int[] maxTextureSize = new int[1];
gl.glGetIntegerv(GL10.GL_MAX_TEXTURE_SIZE, maxTextureSize, 0);
And don't forget that on some devices (the Nexus One, for example), the texture size must be a power of 2!
I know my answer comes a long time after the last update of this topic... sorry.
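For the power-of-two caveat, checking and rounding a dimension can be done with small bit tricks; a pure-Java sketch (the helper names are mine):

```java
public class Pow2 {

    // True if v is a positive power of two: exactly one bit is set.
    public static boolean isPowerOfTwo(int v) {
        return v > 0 && (v & (v - 1)) == 0;
    }

    // Smallest power of two >= v, e.g. for padding a texture dimension.
    public static int nextPowerOfTwo(int v) {
        int p = 1;
        while (p < v) {
            p <<= 1;
        }
        return p;
    }

    public static void main(String[] args) {
        System.out.println(nextPowerOfTwo(480)); // prints 512
    }
}
```

On such devices you would pad the bitmap up to nextPowerOfTwo() of each dimension before uploading, then draw only the original sub-rectangle.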
According to the specification, call glGetIntegerv with GL_MAX_TEXTURE_SIZE:
GL_MAX_TEXTURE_SIZE params returns one value. The value gives a rough estimate of the largest texture that the GL can handle. The value must be at least 64.
http://www.khronos.org/opengles/sdk/docs/man/xhtml/glGet.xml
I'm creating my own mipmaps when creating textures, and as soon as I enable mipmaps via GL_LINEAR_MIPMAP_NEAREST (or anything MIPMAP), the textures are just a very dark, blurry mess. If I switch to GL_LINEAR, they're fine.
Here's how I'm creating the texture:
glGenTextures(1, &m_TextureId);
glBindTexture(GL_TEXTURE_2D, id);

int level = 0;
jobject mippedBitmap = srcBitmap;
while (width >= 2 && height >= 2) {
    jniEnv->CallStaticVoidMethod(s_GLUtilsClass, s_texImage2DMethodID,
                                 GL_TEXTURE_2D, level, mippedBitmap, 0);
    width >>= 1;
    height >>= 1;
    level++;
    mippedBitmap = createScaledBitmap(jniEnv, srcBitmap, width, height, true);
}
I've omitted all Bitmap.recycle()/NewGlobalRef() calls for brevity. createScaledBitmap is obviously a JNI call to Bitmap.createScaledBitmap().
I also tried the other version of texImage2D that takes the format and type of the bitmap. I've verified that it's always RGBA.
EDIT: To elaborate on the textures - they're really almost black. I tried eraseColor() on the mips with bright colors, and they're still extremely dark.
glGenerateMipmap will do this task faster and much more conveniently for you.
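To expand on that, here is a minimal Java sketch of letting the driver build the whole mipmap chain (the class and method names are mine; it must run on a thread with a current GL context, e.g. inside GLSurfaceView.Renderer callbacks):

```java
import android.graphics.Bitmap;
import android.opengl.GLES20;
import android.opengl.GLUtils;

public class MipmapUpload {

    // Uploads level 0 and asks the driver to derive all lower levels.
    // Note: on unextended OpenGL ES 2.0, mipmapped textures must have
    // power-of-two dimensions.
    public static int upload(Bitmap bitmap) {
        int[] id = new int[1];
        GLES20.glGenTextures(1, id, 0);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, id[0]);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR_MIPMAP_NEAREST);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
        GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
        GLES20.glGenerateMipmap(GLES20.GL_TEXTURE_2D);
        return id[0];
    }
}
```

This sidesteps the manual createScaledBitmap loop entirely, along with its incomplete-chain and gamma pitfalls.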
The code you're showing will not generate the lower mip levels (the 1x1 level will be missing, for example), so your texture will be incomplete. That should make the rendering behave as if texturing were not present at all. It's unclear from your description whether this is what you're observing.
In all cases, you should provide mipmaps all the way down to 1x1 (and for non-square textures, this requires some changes in the computation of the new texture size to keep passing 1, as in 4x1 -> 2x1 -> 1x1).
It may be the case that Bitmap.createScaledBitmap does not correct for gamma. Have a look at this thread for code to create gamma-corrected mipmaps.
I'm trying to find out the maximum texture size for the original Motorola Droid. I believe the G1 has a maximum texture size of 512, but it would be nice if there was a more official way I could find out so I can build a proper tile system.
You can request the max texture size using glGetIntegerv:
int[] maxTextureSize = new int[1];
gl.glGetIntegerv(GL10.GL_MAX_TEXTURE_SIZE, maxTextureSize, 0);
Log.i("glinfo", "Max texture size = " + maxTextureSize[0]);
Also check out http://www.glbenchmark.com/ - it has an extensive database of OpenGL environment and performance details for mobile devices