Can someone explain why this line:
GLES30.glTexImage2D(GLES30.GL_TEXTURE_2D, 0, GLES30.GL_R16F, width, height, 0, GLES30.GL_RED, GLES30.GL_HALF_FLOAT, myBuffer);
works on Tegra 4 but not on an ARM Mali-T628 MP6?
I am not attaching this to a framebuffer, by the way; I am using it as a read-only texture. The error code returned on Mali is 1280, whereas Tegra doesn't complain at all.
Also, I know that Tegra 4 has an extension for half-float textures and that this particular Mali doesn't, but since both are OpenGL ES 3.0 devices, shouldn't such textures be supported anyway?
That call looks completely valid to me. Error 1280 is GL_INVALID_ENUM, which suggests that one of the three enum arguments is invalid. But each of them by itself, as well as the combination, is spec-compliant.
The most likely explanation is a driver bug. I have found that several ES 3.0 drivers have numerous issues, so it's not a big surprise to run into problems like this.
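If it is a driver bug, it helps to pin down exactly which call raises the error and on which driver. A minimal sketch (width, height and myBuffer as in the question; Log is android.util.Log):
// Drain any errors left over from earlier calls so they are not blamed on this one.
while (GLES30.glGetError() != GLES30.GL_NO_ERROR) { }

GLES30.glTexImage2D(GLES30.GL_TEXTURE_2D, 0, GLES30.GL_R16F, width, height, 0,
        GLES30.GL_RED, GLES30.GL_HALF_FLOAT, myBuffer);

int err = GLES30.glGetError();
if (err != GLES30.GL_NO_ERROR) {
    // 1280 == 0x0500 == GL_INVALID_ENUM; include the driver strings in a bug report.
    Log.e("R16F", "glTexImage2D failed: 0x" + Integer.toHexString(err)
            + ", renderer: " + GLES30.glGetString(GLES30.GL_RENDERER)
            + ", version: " + GLES30.glGetString(GLES30.GL_VERSION));
}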
The section below was written under the assumption that the texture would be used as a render target (FBO attachment). Please ignore it if you are just looking for a direct answer to the question.
GL_R16F is not color-renderable in standard ES 3.0.
If you pull up the spec document on www.khronos.org, table 3.13 on pages 130-132 lists all texture formats and their properties. R16F does not have a checkmark in the "Color-renderable" column, which means it cannot be used as a render target.
Correspondingly, R16F is also listed under "Texture-only color formats" in section "Required Texture Formats" on pages 129-130.
This means that the device needs the EXT_color_buffer_half_float extension to support rendering to R16F. This is still the case in ES 3.1.
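If you ever do want to render to R16F, a rough sketch of the runtime check would be to look for the extension and then verify framebuffer completeness before relying on it (r16fTexture here is a hypothetical, already-created R16F texture):
String extensions = GLES30.glGetString(GLES30.GL_EXTENSIONS);
boolean halfFloatRenderable =
        extensions != null && extensions.contains("GL_EXT_color_buffer_half_float");

if (halfFloatRenderable) {
    int[] fbo = new int[1];
    GLES30.glGenFramebuffers(1, fbo, 0);
    GLES30.glBindFramebuffer(GLES30.GL_FRAMEBUFFER, fbo[0]);
    GLES30.glFramebufferTexture2D(GLES30.GL_FRAMEBUFFER, GLES30.GL_COLOR_ATTACHMENT0,
            GLES30.GL_TEXTURE_2D, r16fTexture, 0);
    if (GLES30.glCheckFramebufferStatus(GLES30.GL_FRAMEBUFFER)
            != GLES30.GL_FRAMEBUFFER_COMPLETE) {
        // The extension is advertised but the attachment still isn't usable; fall back.
    }
}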
Related
While working with the NDK Camera interface I stumbled upon the problem that a SurfaceTexture can request a default size, which can then be overridden.
I wanted to get the actual size, but SurfaceTexture offers no way to query it.
So instead I tried this:
// this runs directly after SurfaceTexture.updateTexImage
GLfloat width, height;
glGetTexLevelParameterfv(GL_TEXTURE_EXTERNAL_OES, 0, GL_TEXTURE_WIDTH, &width);
glGetTexLevelParameterfv(GL_TEXTURE_EXTERNAL_OES, 0, GL_TEXTURE_HEIGHT, &height);
This works flawlessly on a Mali system but returns GL_INVALID_ENUM on an Adreno system.
The extension's description does not introduce GL_TEXTURE_EXTERNAL_OES as a valid parameter for GetTexLevelParameter, so I assume I was just lucky on the Mali system.
Is there any way to query the size of a GL_TEXTURE_EXTERNAL_OES texture in OpenGL ES (3.1)?
I have an Android app that decodes video into yuv420p format then renders video frames using OpenGLES.
I use glTexSubImage2D() to upload the Y/U/V buffers to the GPU and then do a YUV-to-RGB conversion in a shader. All EGL/OpenGL setup and rendering code is native.
Now, I am not saying there is no problem with my code, but considering that the same code runs perfectly fine on iOS (iPad/iPhone), Nexus 7, Kindle HD 8.9, Samsung Note 1 and a few other cheap Chinese tablets (A31/RockChip 3188) running Android 4.0/4.1/4.2, I would say it's unlikely that my code is wrong. On those devices, glTexSubImage2D() takes less than 16 ms to upload an SD or 720p HD texture.
However, on the Nexus 10, glTexSubImage2D() takes about 50-90 ms for an SD or 720p HD texture, which is far too slow for 30 fps or 60 fps video.
I would like to know
1) whether I should pick a different texture format (RGBA or BGRA). Is there a way to detect which texture format works best for a given GPU?
2) whether there is a feature that is 'OFF' on all other SoCs but set to 'ON' on the Exynos 5, for example automatic mipmap generation (I have it off, by the way)
3) whether this is a known issue with the Samsung Exynos SoC - I can't find a support forum for the Exynos
4) whether there is any option I need to set when configuring the EGL surface, like transparency, surface format, etc.? (I have no idea what I am talking about)
5) whether the GPU is doing an implicit format conversion - but I checked, and GL_LUMINANCE is always used. Again, it works on all the other platforms
6) anything else?
My EGL config:
const EGLint attribs[] = {
    EGL_SURFACE_TYPE, EGL_WINDOW_BIT,
    EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
    EGL_BLUE_SIZE, 8,
    EGL_GREEN_SIZE, 8,
    EGL_RED_SIZE, 8,
    EGL_NONE
};
Initial setup:
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, ctx->frameW, ctx->frameH, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, NULL); /* also for U/V */
subsequent partial replacement:
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, ctx->frameW, ctx->frameH, GL_LUMINANCE, GL_UNSIGNED_BYTE, yBuffer); /*also for U/V */
I am trying to render video at ~30 fps or ~60 fps at SD or 720p HD resolution.
This is a known driver issue that we have reported to ARM. A future update should fix it.
EDIT: Status update
We've now managed to reproduce slow upload conditions for one path on the public firmware, which you are possibly hitting, and this will be fixed in the next driver release.
If you double-buffer texture IDs (e.g. frame N = ID X, N+1 = ID Y, N+2 = ID X, N+3 = ID Y, etc.) for the textures you are uploading to, it should help avoid this on the current firmware.
Thanks,
Iso
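A rough sketch of that double-buffering pattern, shown with the Java GLES20 bindings (the original code is native, but the idea is identical; texW, texH and yBuffer are placeholders for the question's frame data, and the U and V planes get the same treatment):
// Two Y-plane textures, alternated every frame so the driver never has to
// stall on a texture that the previous frame is still consuming.
private final int[] yTex = new int[2];
private int frame = 0;

void createTextures(int texW, int texH) {
    GLES20.glGenTextures(2, yTex, 0);
    for (int i = 0; i < 2; i++) {
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, yTex[i]);
        GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_LUMINANCE,
                texW, texH, 0, GLES20.GL_LUMINANCE, GLES20.GL_UNSIGNED_BYTE, null);
    }
}

void uploadFrame(int texW, int texH, ByteBuffer yBuffer) {
    // Frame N uses yTex[0], N+1 uses yTex[1], N+2 uses yTex[0], and so on.
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, yTex[frame & 1]);
    GLES20.glTexSubImage2D(GLES20.GL_TEXTURE_2D, 0, 0, 0, texW, texH,
            GLES20.GL_LUMINANCE, GLES20.GL_UNSIGNED_BYTE, yBuffer);
    frame++;
}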
I can confirm this has been fixed in Android 4.3 - I'm seeing a performance increase by a factor of 2-3 with the RGBA format and by a factor of 10-50 with other texture formats compared to Android 4.2.2. These results apply to both glTexImage2D and glTexSubImage2D. (Can't add comments yet, so I had to put this here.)
EDIT: If you're stuck with 4.2.2, you could try using an RGBA texture instead; it should have better performance (3-10x or so with larger power-of-two texture sizes).
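The comment above doesn't spell out a scheme, but one common way to route luma data through an RGBA texture is to keep the byte layout identical and let GL reinterpret every four Y bytes as one RGBA texel, then pick the right channel in the fragment shader. A sketch of just the upload side (texW must be divisible by 4; texW, texH and yBuffer are placeholders):
// Allocation: same number of bytes as a GL_LUMINANCE texture of width texW.
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA,
        texW / 4, texH, 0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);

// Per frame: yBuffer holds texW * texH tightly packed luma bytes, which GL
// now treats as texW/4 RGBA texels per row.
GLES20.glTexSubImage2D(GLES20.GL_TEXTURE_2D, 0, 0, 0, texW / 4, texH,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, yBuffer);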
I'm working on an Android app that allows the user to tap the screen to draw colors. I've got all of the drawing code working nicely under OpenGL (testing on Android 4.0.4, Galaxy Nexus, though I'm trying to make this as backward compatible as possible; my SDK targets API 14 but has a minSDK of 8).
The issue I've run into is with antialiasing; I want all my polygons and lines to be antialiased, but they're coming out jagged. I'm positive the Galaxy Nexus supports antialiasing (I've seen it in other apps), so I'm sure I'm doing something wrong.
I've been up and down Google for over an hour now, and through several StackOverflow Q/As, and I've found a few answers:
gl.glEnable(GL10.GL_BLEND);
gl.glEnable(GL10.GL_ALPHA_BITS);
gl.glEnable(GL10.GL_MULTISAMPLE);
gl.glEnable(GL10.GL_SMOOTH);
gl.glShadeModel(GL10.GL_SMOOTH);
gl.glHint(GL10.GL_POLYGON_SMOOTH_HINT, GL10.GL_NICEST);
gl.glHint(GL10.GL_POINT_SMOOTH_HINT, GL10.GL_NICEST);
I've added some or all of these lines in various orders, to no effect. (These were added in onSurfaceCreated.)
gl.glEnable(GL10.GL_DITHER);
I think this one helped slightly... but that might be my mind playing tricks on me. Even when using it, though, there are still jagged lines to be found. (Also added in onSurfaceCreated.)
gl.glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA);
This one seems to be the most common answer. But when doing this, everything is drawn invisible; that is, when starting with a black background, everything is just black, always. (I know it's still being drawn due to the memory flushing messages in LogCat.) I've also tried this in combination with all the other methods above. (And this was added in onSurfaceCreated, as well.)
To recap: I'm using OpenGL on Android 4+ and no multisampling methods appear to be working; while most just have no effect, using glBlendFunc seems to break the rendering entirely.
So, I'm quite stumped. I'm open to any suggestions at all... they will surely help more than defenestrating my computer!
Thanks in advance to everyone patient enough to read this.
If you have not requested multisampling in the EGL config, you cannot turn it on with just the GL functions. See here for how to do that:
http://code.google.com/p/gdc2011-android-opengl/source/browse/trunk/src/com/example/gdc11/MultisampleConfigChooser.java
https://stackoverflow.com/a/7388176/675078
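For reference, a bare-bones chooser in the spirit of the linked MultisampleConfigChooser (a sketch with no fallback path; a real implementation should retry without multisampling when nothing matches; glSurfaceView is your GLSurfaceView and the EGL types come from javax.microedition.khronos.egl):
glSurfaceView.setEGLContextClientVersion(2);   // must be set before setRenderer()
glSurfaceView.setEGLConfigChooser(new GLSurfaceView.EGLConfigChooser() {
    @Override
    public EGLConfig chooseConfig(EGL10 egl, EGLDisplay display) {
        int[] attribs = {
                EGL10.EGL_RED_SIZE, 5,
                EGL10.EGL_GREEN_SIZE, 6,
                EGL10.EGL_BLUE_SIZE, 5,
                EGL10.EGL_DEPTH_SIZE, 16,
                EGL10.EGL_RENDERABLE_TYPE, 4 /* EGL_OPENGL_ES2_BIT */,
                EGL10.EGL_SAMPLE_BUFFERS, 1,   // request a multisampled config
                EGL10.EGL_SAMPLES, 2,          // number of samples
                EGL10.EGL_NONE
        };
        EGLConfig[] configs = new EGLConfig[1];
        int[] numConfigs = new int[1];
        egl.eglChooseConfig(display, attribs, configs, 1, numConfigs);
        return numConfigs[0] > 0 ? configs[0] : null;  // null will crash later; fall back instead
    }
});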
You can enable multisampling easily in C++ (Android NDK). (Sorry if you're not programming in C++.)
Install the Android NDK (my version is r8b).
Open android-ndk-r8b/samples/android-native-egl-example/jni/renderer.cpp
Add EGL/egl.h, GLES/gl.h, GLES2/gl2.h and GLES2/gl2ext.h to the includes.
In the bool Renderer::initialize() function, change the config attributes to:
const EGLint attribs[] = {
    EGL_RED_SIZE, 5,
    EGL_GREEN_SIZE, 6,
    EGL_BLUE_SIZE, 5,
    EGL_DEPTH_SIZE, 16,
    // Requires that setEGLContextClientVersion(2) is called on the view.
    EGL_RENDERABLE_TYPE, 4 /* EGL_OPENGL_ES2_BIT */,
    EGL_SAMPLE_BUFFERS, 1 /* true */,
    EGL_SAMPLES, 2,
    EGL_NONE
};
EGL_SAMPLES is the important one; this argument controls the number of samples.
I have played for a while with OpenGL on Android on various devices. And unless I'm wrong, the default rendering is always performed with the RGB565 pixel format.
I would however like to render more accurate colors using RGB888.
The GLSurfaceView documentation mentions two methods which relate to pixel formats:
the setFormat() method exposed by SurfaceHolder, as returned by SurfaceView.getHolder()
the GLSurfaceView.setEGLConfigChooser() family of methods
Unless I'm wrong, I think I only need to use the latter. Or is using SurfaceHolder.setFormat() relevant here?
The documentation of the EGLConfigChooser class mentions EGL10.eglChooseConfig(), which can be used to discover which configurations are available.
In my case it is ok to fallback to RGB565 if RGB888 isn't available, but I would prefer this to be quite rare.
So, is it possible to use RGB888 on most devices?
Are there any compatibility problems or weird bugs with this?
Do you have an example of a correct and reliable way to setup the GLSurfaceView for rendering RGB888?
Most newer devices should support RGBA8888 as a native format. One way to force an RGBA color format is to set the translucency of the surface; you'd still want to pick the EGLConfig that best matches the channel sizes you need, in addition to the depth and stencil buffers.
setEGLConfigChooser(8, 8, 8, 8, 0, 0);
getHolder().setFormat(PixelFormat.RGBA_8888);
However, if I read your question correctly, you're asking for RGB888 support (alpha don't-care), in other words RGBX8888, which might not be supported by all devices (a driver/vendor limitation).
Something to keep in mind about performance: since RGBA8888 is the color format natively supported by most GPU hardware, it's best to avoid any other (non-native) color format, since that usually translates into a color conversion underneath, adding unnecessary workload to the GPU.
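Put together, a minimal setup might look like this (a sketch; MyRenderer stands in for whatever renderer you already have):
public class Rgba8888SurfaceView extends GLSurfaceView {
    public Rgba8888SurfaceView(Context context) {
        super(context);
        // Ask EGL for an 8-bit-per-channel config (R, G, B, A, depth, stencil).
        setEGLConfigChooser(8, 8, 8, 8, 0, 0);
        // Make the window surface itself RGBA_8888 instead of the RGB_565 default.
        getHolder().setFormat(PixelFormat.RGBA_8888);
        setRenderer(new MyRenderer());  // hypothetical renderer
    }
}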
This is how I do it:
{
window = [[UIWindow alloc] initWithFrame:[[UIScreen mainScreen] bounds]];
// cocos2d will inherit these values
[window setUserInteractionEnabled:YES];
[window setMultipleTouchEnabled:NO];
// must be called before any other call to the director
[Director useFastDirector];
[[Director sharedDirector] setDisplayFPS:YES];
// create an openGL view inside a window
[[Director sharedDirector] attachInView:window];
// Default texture format for PNG/BMP/TIFF/JPEG/GIF images
// It can be RGBA8888, RGBA4444, RGB5_A1, RGB565
// You can change it at any time.
[Texture2D setDefaultAlphaPixelFormat:kTexture2DPixelFormat_];
glClearColor(0.7f,0.7f,0.6f,1.0f);
//glClearColor(1.0f,1.0f,1.0f,1.0f);
[window makeKeyAndVisible];
[[Director sharedDirector] runWithScene:[GameLayer node]];
}
I hope that helps!
My scene in OpenGL ES requires several large, high-resolution textures, but they are grayscale, since I am using them just for masks. I need to reduce my memory use.
I have tried loading these textures with Bitmap.Config.ALPHA_8, and as RGB_565. ALPHA_8 seems to actually increase memory use.
Is there some way to get a texture loaded into OpenGL and have it use less than 16bits per pixel?
glCompressedTexImage2D looks like it might be promising, but from what I can tell, different phones offer different texture compression methods. Also, I don't know if the compression actually reduces memory use at runtime. Is the solution to store my textures in both ATITC and PVRTC formats? If so, how do I detect which format is supported by the device?
Thanks!
GPU-native compressed texture formats such as PVRTC, ATITC and S3TC should reduce memory usage and improve rendering performance.
For example (sorry, this is in C; you can implement the same check using GL11.glGetString in Java):
const char *extensions = (const char *) glGetString(GL_EXTENSIONS);
int isPVRTCsupported = strstr(extensions, "GL_IMG_texture_compression_pvrtc") != 0;
int isATITCsupported = strstr(extensions, "GL_ATI_texture_compression_atitc") != 0;
int isS3TCsupported = strstr(extensions, "GL_EXT_texture_compression_s3tc") != 0;

if (isPVRTCsupported) {
    /* load PVRTC texture using glCompressedTexImage2D */
} else if (isATITCsupported) {
    ...
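The Java-side equivalent of that check (gl being the GL10 instance handed to Renderer.onSurfaceCreated; the GL11 interface inherits the same method):
String extensions = gl.glGetString(GL10.GL_EXTENSIONS);
boolean isPVRTCsupported = extensions.contains("GL_IMG_texture_compression_pvrtc");
boolean isATITCsupported = extensions.contains("GL_ATI_texture_compression_atitc");
boolean isS3TCsupported = extensions.contains("GL_EXT_texture_compression_s3tc");

if (isPVRTCsupported) {
    // load PVRTC texture using glCompressedTexImage2D
} else if (isATITCsupported) {
    // ...
}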
Besides, you can declare the texture compression formats your app supports in AndroidManifest.xml, so that incompatible devices are filtered out:
The AndroidManifest.xml File - supports-gl-texture
EDIT:
MOTODEV - Understanding Texture Compression
With Imagination Technologies-based (aka PowerVR) systems, you should be able to use 4bpp PVRTC and (depending on the texture and quality requirements) maybe even the 2bpp PVRTC variant.
Also, though I'm not sure what is exposed on Android systems, PVRTexTool lists I8 (i.e. greyscale, 8bpp) as a target texture format, which would give you a lossless option.
ETC1 texture compression is supported on all Android devices with Android 2.2 and up.
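Since ETC1 is the one compressed format you can count on across devices (it has no alpha channel, which doesn't matter for masks), the framework's ETC1Util helper makes it fairly easy to use. A sketch assuming the mask has been pre-compressed to a .pkm file (e.g. with the SDK's etc1tool) and shipped in assets; the asset name is made up and IOException handling is omitted:
// A texture name must already be generated and bound to GL_TEXTURE_2D on the GL thread.
InputStream in = context.getAssets().open("mask.pkm");  // hypothetical asset
try {
    // ~4 bits per pixel on the GPU; on the rare device without ETC1 support,
    // loadTexture() decompresses into the fallback format/type given here.
    ETC1Util.loadTexture(GLES10.GL_TEXTURE_2D, 0, 0,
            GLES10.GL_RGB, GLES10.GL_UNSIGNED_SHORT_5_6_5, in);
} finally {
    in.close();
}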