I'm working on an Android app that allows the user to tap the screen to draw colors. I've got all of the drawing code working nicely under OpenGL (testing on Android 4.0.4 on a Galaxy Nexus, though I'm trying to make this as backward compatible as possible; my SDK targets API 14 but has a minSDK of 8).
The issue I've run into is with antialiasing; I want all my polygons and lines to be antialiased, but they're coming out jagged. I'm positive the Galaxy Nexus supports antialiasing (I've seen it in other apps), so I'm sure I'm doing something wrong.
I've been up and down Google for over an hour now, and through several StackOverflow Q&As, and I've found a few answers:
gl.glEnable(GL10.GL_BLEND);
gl.glEnable(GL10.GL_ALPHA_BITS);
gl.glEnable(GL10.GL_MULTISAMPLE);
gl.glEnable(GL10.GL_SMOOTH);
gl.glShadeModel(GL10.GL_SMOOTH);
gl.glHint(GL10.GL_POLYGON_SMOOTH_HINT, GL10.GL_NICEST);
gl.glHint(GL10.GL_POINT_SMOOTH_HINT, GL10.GL_NICEST);
I've added some or all of these lines in various orders, to no effect. (These were added in onSurfaceCreated.)
gl.glEnable(GL10.GL_DITHER);
I think this one helped slightly... but that might be my mind playing tricks on me. Even when using it, though, there are still jagged lines to be found. (Also added in onSurfaceCreated.)
gl.glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA);
This one seems to be the most common answer. But when doing this, everything is drawn invisible; that is, when starting with a black background, everything is just black, always. (I know it's still being drawn due to the memory flushing messages in LogCat.) I've also tried this in combination with all the other methods above. (And this was added in onSurfaceCreated, as well.)
To recap: I'm using OpenGL on Android 4+ and no multisampling methods appear to be working; while most just have no effect, using glBlendFunc seems to break the rendering entirely.
So, I'm quite stumped. I'm open to any suggestions at all... they will surely help more than defenestrating my computer!
Thanks in advance to everyone patient enough to read this.
If you have not requested multisampling for the EGL config, you cannot turn it on with just the GL functions. See here for how to do that:
http://code.google.com/p/gdc2011-android-opengl/source/browse/trunk/src/com/example/gdc11/MultisampleConfigChooser.java
https://stackoverflow.com/a/7388176/675078
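For a GLSurfaceView, that means supplying your own EGLConfigChooser before setRenderer() is called. Here is a minimal sketch (Java; the class name and attribute choices are illustrative, and the linked MultisampleConfigChooser is more robust):

import javax.microedition.khronos.egl.EGL10;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.egl.EGLDisplay;
import android.opengl.GLSurfaceView;

class MultisampleChooser implements GLSurfaceView.EGLConfigChooser {
    @Override
    public EGLConfig chooseConfig(EGL10 egl, EGLDisplay display) {
        // Ask for a config with a multisample buffer and 4 samples.
        int[] attribs = {
                EGL10.EGL_RED_SIZE, 5,
                EGL10.EGL_GREEN_SIZE, 6,
                EGL10.EGL_BLUE_SIZE, 5,
                EGL10.EGL_DEPTH_SIZE, 16,
                EGL10.EGL_SAMPLE_BUFFERS, 1,
                EGL10.EGL_SAMPLES, 4,
                EGL10.EGL_NONE
        };
        EGLConfig[] configs = new EGLConfig[1];
        int[] count = new int[1];
        egl.eglChooseConfig(display, attribs, configs, 1, count);
        // Fall back to a non-multisampled config if nothing matched.
        if (count[0] == 0) {
            attribs = new int[] {
                    EGL10.EGL_RED_SIZE, 5,
                    EGL10.EGL_GREEN_SIZE, 6,
                    EGL10.EGL_BLUE_SIZE, 5,
                    EGL10.EGL_DEPTH_SIZE, 16,
                    EGL10.EGL_NONE
            };
            egl.eglChooseConfig(display, attribs, configs, 1, count);
        }
        return count[0] > 0 ? configs[0] : null;
    }
}

Then, in the view's constructor, call setEGLConfigChooser(new MultisampleChooser()); before setRenderer().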
You can enable multisampling easily in C++ (Android NDK).
If you can't program in C++, sorry, this won't help.
Install the Android NDK (my version is r8b).
Open android-ndk-r8b/samples/android-native-egl-example/jni/renderer.cpp
Add the EGL/egl.h, GLES/gl.h, GLES2/gl2.h and GLES2/gl2ext.h headers to the includes.
Then, in the bool Renderer::initialize() function, change const EGLint attribs[] to:
const EGLint attribs[] = {
    EGL_RED_SIZE, 5,
    EGL_GREEN_SIZE, 6,
    EGL_BLUE_SIZE, 5,
    EGL_DEPTH_SIZE, 16,
    // Requires that setEGLContextClientVersion(2) is called on the view.
    EGL_RENDERABLE_TYPE, 4 /* EGL_OPENGL_ES2_BIT */,
    EGL_SAMPLE_BUFFERS, 1 /* true */,
    EGL_SAMPLES, 2,
    EGL_NONE
};
EGL_SAMPLES is the important attribute: it sets the number of samples per pixel.
I am trying to render a smooth gradient from 0% to 10% gray across the screen of an Asus ROG Phone 2, which supposedly has an HDR10 screen. In standard (8-bit?) rendering mode I can clearly see banding between the gradient levels.
I followed the instructions from Android to modify the Vulkan tutorial sample code in the following way:
Added android:colorMode="wideColorGamut" to AndroidManifest.xml.
Changed VkSwapchainCreateInfoKHR.imageFormat, VkAttachmentDescription.format and VkImageViewCreateInfo.format to VK_FORMAT_R16G16B16A16_SFLOAT (everything needed to set up the swapchain).
Changed VkSwapchainCreateInfoKHR.imageColorSpace to VK_COLOR_SPACE_DISPLAY_P3_NONLINEAR_EXT.
I render the gradient dynamically in a custom fragment shader as a function of texturing coordinates mapped to a full-screen quad:
layout (location = 0) in vec2 texcoord;
layout (location = 0) out vec4 uFragColor;
void main() {
    uFragColor = vec4(vec3(texcoord.x * 0.1), 1.0);
}
As a result, I observe absolutely no difference from the original 8-bit(?) mode. I am not quite sure how to debug this or what to try next.
I also tried to implement the same effect in Unity with HDR options enabled. There I see that all the intermediate rendering passes (in Forward mode) render to Float16 but the last extra pass likely converts it into RGB8. The visual result is the same banding again.
This makes me wonder whether this is caused by some missing setting. In another thread I saw discussion of Window.setFormat(PixelFormat) or SurfaceHolder.setFormat(PixelFormat), but I am not sure how this relates to a native Android app. I cannot see a way to call such a function, and I am not even sure it would make sense.
Thank you for any suggestions.
Where are you setting HDR? HDR means the PQ transfer function; neither 10-bit color nor BT.2020 by itself has anything to do with HDR.
https://developer.android.com/training/wide-color-gamut
This talks only about WCG.
This should be used to trigger HDR. https://developer.android.com/ndk/reference/struct/a-hdr-metadata-smpte2086
For Java this should be used to check whether HDR is there.
https://developer.android.com/reference/android/view/Display.HdrCapabilities?hl=en
See this question for further links: What is the difference between Display.HdrCapabilities and configuration.isScreenHdr
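For instance, a minimal Java sketch of the capability check (API 24+; "activity" stands in for your own Activity instance):

import android.view.Display;

// Query the display's reported HDR types.
Display display = activity.getWindowManager().getDefaultDisplay();
Display.HdrCapabilities caps = display.getHdrCapabilities();
boolean hasHdr10 = false;
for (int type : caps.getSupportedHdrTypes()) {
    if (type == Display.HdrCapabilities.HDR_TYPE_HDR10) {
        hasHdr10 = true; // the panel reports HDR10 (PQ) support
    }
}

Only if the display actually reports an HDR type does it make sense to hand the swapchain an HDR color space.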
I am working with Minko and seem to be facing a lighting issue on Android.
I managed to compile modified code (based on the tutorials provided by Minko) for linux64, Android and HTML. I simply load and rotate four .obj files (the pirate one provided, plus three found on TurboSquid, for demo purposes only).
The linux64 and HTML versions render correctly, but the Android one has a reddish light thrown over it, even though the binaries are generated from the same C++ code.
Here are some pics to demonstrate the problem:
linux64 :
http://tinypic.com/r/qzm2s5/8
Android version :
http://tinypic.com/r/23mn0p3/8
(Couldn’t link the html version but it is close to the linux64 one.)
Here is the part of the code related to the light :
// create the spot light node
auto spotLightNode = scene::Node::create("spotLight");
// change the spot light position
//spotLightNode->addComponent(Transform::create(Matrix4x4::create()->lookAt(Vector3::zero(), Vector3::create(0.1f, 2.f, 0.f)))); //ok linux - html
spotLightNode->addComponent(Transform::create(Matrix4x4::create()->lookAt(Vector3::zero(), Vector3::create(0.1f, 8.f, 0.f))));
// create the spot light component
auto spotLight = SpotLight::create(.15f, .4f); //ok linux and html
// update the spot light component attributes
spotLight->diffuse(4.5f); //ori - ok linux - html
// add the component to the spot light node
spotLightNode->addComponent(spotLight);
//sets a red color to our spot light
//spotLightNode->component<SpotLight>()->color()->setTo(2.0f, 1.0f, 1.0f);
// add the node to the root of the scene graph
rootNode->addChild(spotLightNode);
As you can see, the color()->setTo call has been commented out, and with it off everything renders correctly except on Android (after a clean and rebuild). Any idea what the source of the problem might be?
Any pointer would be much appreciated.
Thx.
Can you test it on other Android devices or with a more recent ROM and give us the result? The LG-D855 (LG G3) is powered by an Adreno 330: those GPUs are known to have GLSL compiler defects, especially with loops and/or structs like the ones we use in Phong.fragment.glsl on the master branch.
The Phong.fragment.glsl on the dev branch has been heavily refactored to fix this (for directional lights only for now).
You could try the dev branch with a directional light and see if that fixes the issue. Be careful though: the dev branch introduces beta 3, with some API changes, the biggest being that the math API now uses GLM, and the *.effect file format has changed. The best way forward is simply to update your math code to use the new API; everything else should be straightforward.
Can someone explain why this line:
GLES30.glTexImage2D(GLES30.GL_TEXTURE_2D, 0, GLES30.GL_R16F, width, height, 0, GLES30.GL_RED, GLES30.GL_HALF_FLOAT, myBuffer);
works on a Tegra 4 but doesn't work on an ARM Mali-T628 MP6?
I am not attaching this to a framebuffer, by the way; I am using it as a read-only texture. The error code returned on the Mali is 1280, whereas the Tegra 'doesn't complain' at all.
Also, I know that the Tegra 4 has the extension for half-float textures and that this specific Mali doesn't, but since it's OpenGL ES 3.0, shouldn't it support such textures anyway?
That call looks completely valid to me. Error 1280 is GL_INVALID_ENUM, which suggests that one of the three enum-type arguments is invalid. But each one by itself, as well as the combination of them, is spec-compliant.
The most likely explanation is a driver bug. I found that several ES 3.0 drivers have numerous issues, so it's not a big surprise to discover problems.
The section below was written under the assumption that the texture would be used as a render target (FBO attachment). Please ignore if you are looking for a direct answer to the question.
GL_R16F is not color-renderable in standard ES 3.0.
If you pull up the spec document on www.khronos.org, table 3.13 on pages 130-132 lists all texture formats and their properties. R16F does not have a checkmark in the "Color-renderable" column, which means that it cannot be used as a render target.
Correspondingly, R16F is also listed under "Texture-only color formats" in section "Required Texture Formats" on pages 129-130.
This means that the device needs the EXT_color_buffer_half_float extension to support rendering to R16F. This is still the case in ES 3.1 as well.
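If you ever do want to render to R16F, a quick runtime check for that extension might look like this (Java sketch; assumes a current GL context and the android.opengl.GLES30 binding):

// R16F is only color-renderable when the driver exposes
// EXT_color_buffer_half_float (this still applies to ES 3.1).
String ext = GLES30.glGetString(GLES30.GL_EXTENSIONS);
boolean canRenderToR16F =
        ext != null && ext.contains("GL_EXT_color_buffer_half_float");
if (!canRenderToR16F) {
    // Fall back to a guaranteed color-renderable format, e.g. GL_RGBA8.
}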
I have an Android app that decodes video into yuv420p format and then renders the frames using OpenGL ES.
I use glTexSubImage2D() to upload the Y/U/V buffers to the GPU, then do a YUV-to-RGB conversion in a shader. All EGL/OpenGL setup and rendering code is native.
Now, I am not saying there is no problem with my code, but considering that the same code runs perfectly fine on iOS (iPad/iPhone), the Nexus 7, the Kindle Fire HD 8.9, the Samsung Note 1 and a few other cheap Chinese tablets (A31/Rockchip 3188) running Android 4.0/4.1/4.2, I would say it's unlikely my code is wrong. On those devices, glTexSubImage2D() takes less than 16 ms to upload an SD or 720p HD texture.
However, on the Nexus 10, glTexSubImage2D() takes about 50~90 ms for an SD or 720p HD texture, which is way too slow for 30fps or 60fps video.
I would like to know
1) if I should pick a different texture format (RGBA or BGRA). Is there a way to detect which texture format a GPU handles best?
2) if there is a feature that is 'OFF' on all other SOCs but set to 'ON' on Exynos 5. For example, the automatic MIPMAP generation option. (I have it off, btw)
3) if this is a known issue of Samsung Exynos SOC - I can't find a support forum for Exynos CPU
4) if there is any option I need to set when configuring the EGL surface, like transparency or surface format? (I have no idea what I am talking about.)
5) if the GPU is doing an implicit format conversion. I checked, though, and GL_LUMINANCE is always used. Again, it works on all other platforms.
6) anything else?
My EGL config:
const EGLint attribs[] = {
    EGL_SURFACE_TYPE, EGL_WINDOW_BIT,
    EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
    EGL_BLUE_SIZE, 8,
    EGL_GREEN_SIZE, 8,
    EGL_RED_SIZE, 8,
    EGL_NONE
};
Initial setup:
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, ctx->frameW, ctx->frameH, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, NULL); /* also for U/V */
subsequent partial replacement:
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, ctx->frameW, ctx->frameH, GL_LUMINANCE, GL_UNSIGNED_BYTE, yBuffer); /*also for U/V */
I am trying to render video at ~30FPS or ~60FPS at SD or 720P HD resolution.
This is a known driver issue that we have reported to ARM. A future update should fix it.
EDIT Status update
We've now managed to reproduce slow upload conditions for one path on the public firmware, which you are possibly hitting, and this will be fixed in the next driver release.
If you double-buffer texture IDs (e.g. frame N = ID X, N+1 = ID Y, N+2 = ID X, N+3 = ID Y, etc) for the textures you are uploading to it should help avoid this on the current firmware.
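A rough sketch of that double-buffering pattern (in Java for brevity; the asker's code is native, but the idea is identical; frameW, frameH, yBuffer and frameIndex stand in for your own state):

// Allocate two texture names and alternate between them each frame, so
// the driver never stalls on a texture still read by the previous frame.
int[] texIds = new int[2];
GLES20.glGenTextures(2, texIds, 0);

// Per frame (frameIndex increments every frame):
int tex = texIds[frameIndex % 2];
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex);
GLES20.glTexSubImage2D(GLES20.GL_TEXTURE_2D, 0, 0, 0, frameW, frameH,
        GLES20.GL_LUMINANCE, GLES20.GL_UNSIGNED_BYTE, yBuffer);
// ... then bind 'tex' for drawing as usual.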
Thanks,
Iso
I can confirm this has been fixed in Android 4.3 - I'm seeing a performance increase by a factor of 2-3 with the RGBA format and by a factor of 10-50 with other texture formats over Android 4.2.2. These results apply to both glTexImage2D and glTexSubImage2D. (I can't add comments yet, so I had to put this here.)
EDIT: If you're stuck with 4.2.2, you could try using RGBA texture instead, it should have better performance (3-10x or so with larger power-of-two texture sizes).
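For example, swapping the allocation over to RGBA might look like this (a sketch only: it assumes the Y plane is packed four bytes per RGBA texel, so frameW must be divisible by 4 and the shader has to unpack the channels):

// Allocate the Y-plane texture as RGBA instead of LUMINANCE; on the
// affected 4.2.2 drivers, RGBA uploads hit a much faster path.
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA,
        frameW / 4, frameH, 0,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);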
I have played with OpenGL on Android for a while, on various devices, and unless I'm wrong, the default rendering is always performed with the RGB565 pixel format.
I would however like to render more accurate colors using RGB888.
The GLSurfaceView documentation mentions two methods which relate to pixel formats:
the setFormat() method exposed by SurfaceHolder, as returned by SurfaceView.getHolder()
the GLSurfaceView.setEGLConfigChooser() family of methods
Unless I'm wrong, I think I only need to use the latter. Or is using SurfaceHolder.setFormat() relevant here?
The documentation of the EGLConfigChooser class mentions EGL10.eglChooseConfig(), to discover which configurations are available.
In my case it is ok to fallback to RGB565 if RGB888 isn't available, but I would prefer this to be quite rare.
So, is it possible to use RGB888 on most devices?
Are there any compatibility problems or weird bugs with this?
Do you have an example of a correct and reliable way to setup the GLSurfaceView for rendering RGB888?
Most newer devices should support RGBA8888 as a native format. One way to force an RGBA color format is to set the translucency of the surface; you'd still want to pick the EGLConfig to best match the desired channel sizes, in addition to the depth and stencil buffers.
setEGLConfigChooser(8, 8, 8, 8, 0, 0);
getHolder().setFormat(PixelFormat.RGBA_8888);
However, if I read your question correctly, you're asking for RGB888 support (alpha don't-care), in other words RGBX8888, which might not be supported by all devices (a driver/vendor limitation).
Something to keep in mind about performance: since RGBA8888 is the color format natively supported by most GPU hardware, it's best to avoid any other (non-native) color format, since that usually translates into a color conversion underneath, adding unnecessary workload for the GPU.
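Putting the two calls together, a minimal sketch of a GLSurfaceView subclass (the class name is illustrative, and MyRenderer is whatever GLSurfaceView.Renderer you already have):

import android.content.Context;
import android.graphics.PixelFormat;
import android.opengl.GLSurfaceView;

public class Rgba8888SurfaceView extends GLSurfaceView {
    public Rgba8888SurfaceView(Context context) {
        super(context);
        // Ask EGL for 8 bits per channel, no depth or stencil.
        setEGLConfigChooser(8, 8, 8, 8, 0, 0);
        // Make the window surface itself RGBA_8888 so the chosen EGL
        // config isn't paired with a default RGB565 window.
        getHolder().setFormat(PixelFormat.RGBA_8888);
        setRenderer(new MyRenderer());
    }
}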
This is how I do it:
{
    window = [[UIWindow alloc] initWithFrame:[[UIScreen mainScreen] bounds]];

    // cocos2d will inherit these values
    [window setUserInteractionEnabled:YES];
    [window setMultipleTouchEnabled:NO];

    // must be called before any other call to the director
    [Director useFastDirector];
    [[Director sharedDirector] setDisplayFPS:YES];

    // create an openGL view inside a window
    [[Director sharedDirector] attachInView:window];

    // Default texture format for PNG/BMP/TIFF/JPEG/GIF images
    // It can be RGBA8888, RGBA4444, RGB5_A1, RGB565
    // You can change it at any time.
    [Texture2D setDefaultAlphaPixelFormat:kTexture2DPixelFormat_];

    glClearColor(0.7f, 0.7f, 0.6f, 1.0f);
    //glClearColor(1.0f, 1.0f, 1.0f, 1.0f);

    [window makeKeyAndVisible];
    [[Director sharedDirector] runWithScene:[GameLayer node]];
}
I hope that helps!