I have been playing with OpenGL on Android on various devices for a while now, and unless I'm wrong, the default rendering is always performed with the RGB565 pixel format.
I would however like to render more accurate colors using RGB888.
The GLSurfaceView documentation mentions two methods which relate to pixel formats:
the setFormat() method exposed by SurfaceHolder, as returned by SurfaceView.getHolder()
the GLSurfaceView.setEGLConfigChooser() family of methods
Unless I'm wrong, I think I only need to use the latter. Or is using SurfaceHolder.setFormat() relevant here?
The documentation of the EGLConfigChooser class mentions EGL10.eglChooseConfig() for discovering which configurations are available.
In my case it is OK to fall back to RGB565 if RGB888 isn't available, but I would prefer that to be quite rare.
So, is it possible to use RGB888 on most devices?
Are there any compatibility problems or weird bugs with this?
Do you have an example of a correct and reliable way to setup the GLSurfaceView for rendering RGB888?
Most newer devices should support RGBA8888 as a native format. One way to force an RGBA color format is to set the translucency of the surface; you would still want to pick an EGLConfig that best matches the desired channel sizes in addition to the depth and stencil buffers:
setEGLConfigChooser(8, 8, 8, 8, 0, 0);
getHolder().setFormat(PixelFormat.RGBA_8888);
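Putting the two calls together, here is a minimal sketch of an activity that sets up a GLSurfaceView for RGBA8888 rendering (MyRenderer is an illustrative name for your own Renderer implementation; both configuration calls must happen before setRenderer()):
import android.app.Activity;
import android.graphics.PixelFormat;
import android.opengl.GLSurfaceView;
import android.os.Bundle;

public class MyActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        GLSurfaceView view = new GLSurfaceView(this);
        // R, G, B, A, depth and stencil sizes for the EGLConfig
        view.setEGLConfigChooser(8, 8, 8, 8, 0, 0);
        // make the window surface itself 8888 as well
        view.getHolder().setFormat(PixelFormat.RGBA_8888);
        view.setRenderer(new MyRenderer());
        setContentView(view);
    }
}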
However, if I read your question correctly, you're asking for RGB888 support (alpha is a don't-care), in other words RGBX8888, which might not be supported by all devices (a driver vendor limitation).
Something to keep in mind about performance, though: since RGBA8888 is the color format natively supported by most GPU hardware, it's best to avoid any other (non-natively supported) color format, since that usually translates into a color conversion underneath, adding unnecessary workload to the GPU.
This is how I do it:
{
window = [[UIWindow alloc] initWithFrame:[[UIScreen mainScreen] bounds]];
// cocos2d will inherit these values
[window setUserInteractionEnabled:YES];
[window setMultipleTouchEnabled:NO];
// must be called before any other call to the director
[Director useFastDirector];
[[Director sharedDirector] setDisplayFPS:YES];
// create an openGL view inside a window
[[Director sharedDirector] attachInView:window];
// Default texture format for PNG/BMP/TIFF/JPEG/GIF images
// It can be RGBA8888, RGBA4444, RGB5_A1, RGB565
// You can change it at any time.
[Texture2D setDefaultAlphaPixelFormat:kTexture2DPixelFormat_RGBA8888];
glClearColor(0.7f,0.7f,0.6f,1.0f);
//glClearColor(1.0f,1.0f,1.0f,1.0f);
[window makeKeyAndVisible];
[[Director sharedDirector] runWithScene:[GameLayer node]];
}
I hope that helps!
Related
I am trying to render a smooth gradient from 0% to 10% gray across the screen of an Asus ROG Phone 2, which supposedly has an HDR10 screen. In the standard (8-bit?) rendering mode I can clearly see banding between the gradient levels.
I followed the instructions from Android to modify the Vulkan tutorial sample code in the following way:
Added android:colorMode="wideColorGamut" to AndroidManifest.xml.
Changed VkSwapchainCreateInfoKHR.imageFormat, VkAttachmentDescription.format and VkImageViewCreateInfo.format to VK_FORMAT_R16G16B16A16_SFLOAT (everything needed to set up the swapchain).
Changed VkSwapchainCreateInfoKHR.imageColorSpace to VK_COLOR_SPACE_DISPLAY_P3_NONLINEAR_EXT.
I render the gradient dynamically in a custom fragment shader as a function of the texture coordinates mapped to a full-screen quad:
layout (location = 0) in vec2 texcoord;
layout (location = 0) out vec4 uFragColor;
void main() {
uFragColor = vec4(vec3(texcoord.x*0.1), 1.0);
}
As a result I observe absolutely no difference from the original 8-bit(?) mode. I am not quite sure how to debug this or what to try next.
I also tried to implement the same effect in Unity with HDR options enabled. There I can see that all the intermediate rendering passes (in Forward mode) render to Float16, but the last extra pass likely converts the result to RGB8. The visual result is the same banding again.
This makes me wonder whether this is caused by some missing setting. In another thread I saw a discussion of Window.setFormat(PixelFormat) or SurfaceHolder.setFormat(PixelFormat), but I am not sure how this relates to a native Android app. I cannot see a way to call such a function, and I am not even sure it would make sense.
Thank you for any suggestions.
Where are you setting HDR? HDR means the PQ transfer function; neither 10-bit output nor BT.2020 by itself has anything to do with HDR.
https://developer.android.com/training/wide-color-gamut
This talks only about WCG (wide color gamut).
This should be used to trigger HDR. https://developer.android.com/ndk/reference/struct/a-hdr-metadata-smpte2086
From Java, this can be used to check whether HDR is available:
https://developer.android.com/reference/android/view/Display.HdrCapabilities?hl=en
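A minimal sketch of that check, assuming it runs inside an Activity on API 24 or later (getDefaultDisplay() is the pre-API-30 way to obtain the display; Display is android.view.Display, Log is android.util.Log):
void logHdrSupport() {
    Display display = getWindowManager().getDefaultDisplay();
    Display.HdrCapabilities caps = display.getHdrCapabilities();
    for (int type : caps.getSupportedHdrTypes()) {
        // e.g. HDR_TYPE_HDR10, HDR_TYPE_HLG, HDR_TYPE_DOLBY_VISION
        Log.i("HDR", "Supported HDR type: " + type);
    }
}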
See this question with further links: "What is the difference between Display.HdrCapabilities and configuration.isScreenHdr".
I want to integrate OSG scene into my Qt Quick application.
It seems that the proper way to do it is to use the QQuickFramebufferObject class and call osgViewer::Viewer::frame() inside QQuickFramebufferObject::Renderer::render(). I've tried to use https://bitbucket.org/leon_manukyan/qtquick2osgitem/overview.
However, it seems this approach doesn't work correctly in all cases. For example, on the Android platform this code renders only the first frame.
I think the problem is that QQuickFramebufferObject uses the same OpenGL context both for Qt Quick Scene Graph and code called within QQuickFramebufferObject::Renderer::render().
So I'm wondering: is it possible to integrate OpenSceneGraph into Qt Quick using QQuickFramebufferObject correctly, or is it better to use an implementation based on QQuickItem and a separate OpenGL context, such as https://github.com/podsvirov/osgqtquick?
Is it possible to integrate OpenSceneGraph into Qt Quick using QQuickFramebufferObject correctly, or is it better to use an implementation that uses QQuickItem and a separate OpenGL context?
The easiest way would be using QQuickPaintedItem, which is derived from QQuickItem. While by default it offers raster-image-style drawing, you can switch its render target to an OpenGL framebuffer object:
QPainter paints into a QOpenGLFramebufferObject using the GL paint engine. Painting can be faster as no texture upload is required, but anti-aliasing quality is not as good as if using an image. This render target allows faster rendering in some cases, but you should avoid using it if the item is resized often.
MyQQuickItem::MyQQuickItem(QQuickItem* parent) : QQuickPaintedItem(parent)
{
// unless we set the render target below, painting would use the slow
// raster engine; this switch lets QPainter use the GL paint engine:
this->setRenderTarget(QQuickPaintedItem::FramebufferObject);
}
How do we render with this OpenGL target, then? The answer is still good old QPainter, fed with an image and invoked on update/paint:
void MyQQuickItem::presentImage(const QImage& img)
{
m_image = img;
update();
}
// must implement
// virtual void QQuickPaintedItem::paint(QPainter *painter) = 0
void MyQQuickItem::paint(QPainter* painter)
{
// or we can precalculate the required output rect
painter->drawImage(this->boundingRect(), m_image);
}
While the QOpenGLFramebufferObject used behind the scenes here is not QQuickFramebufferObject, its semantics are pretty much what the question is about, and we confirmed with the question author that a QImage can be used as the source for rendering in OpenGL.
P.S. I have used this technique successfully since Qt 5.7 on desktop PCs and on a single-board touchscreen Linux device. I am just a bit unsure about Android.
Using OpenGL ES 1.1 (I don't have a choice at this time). The target OS is Android.
I'm seeing some inconsistency between rendering to the main framebuffer and rendering to a texture.
When I render to the normal screen, everything is fine. When I render to a texture, I get a dark rim around my graphics wherever alpha is translucent.
Here are my helper functions:
void RenderNormal()
{
if (!gIsRenderToTexture)
{
glBlendFunc(GL_SRC_ALPHA,GL_ONE_MINUS_SRC_ALPHA);
glBlendEquationOES(GL_FUNC_ADD_OES);
}
else
{
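// when rendering into a texture, blend alpha separately:
// dstA = srcA + dstA*(1-srcA), so the texture's alpha channel
// stays correct for compositing the texture to the screen later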
glBlendFuncSeparateOES(GL_SRC_ALPHA,GL_ONE_MINUS_SRC_ALPHA,GL_ONE,GL_ONE_MINUS_SRC_ALPHA);
glBlendEquationSeparateOES(GL_FUNC_ADD_OES,GL_FUNC_ADD_OES);
}
}
void RenderAdditive()
{
glBlendFunc(GL_SRC_ALPHA, GL_ONE);
}
void RenderMultiply()
{
glBlendFunc(GL_ZERO, GL_SRC_COLOR);
}
So, some data:
On newer systems, this works just fine (also on iOS, OSX, and Linux)
On Kindle Fire, I still get the dark rims.
On an older Android device running KitKat, my additive/multiply modes don't turn off (I assume because of the juggling between glBlendFunc and glBlendFuncSeparate; I'm evidently not resetting something, but whatever I try to fix it makes things worse)
I'm looking for a way to reconcile these three functions so that they work both when rendering to a texture and when rendering to the plain old screen. Can you assist?
Okay, after a day of work and research I finally figured out that the OES extensions are not supported on the target device. So, for anyone having this problem: the Kindle Fire and a lot of older tablets just outright don't support glBlendFuncSeparateOES or glBlendEquationOES, and these calls fail SILENTLY.
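A minimal sketch of the detection from Java, assuming you drive GL from a Renderer (the same strstr-style check works from native code); query the extension string once with a current GL context and route around the OES entry points when they are absent:
import android.opengl.GLES10;

// call with a current GL context, e.g. from Renderer.onSurfaceCreated()
static boolean hasSeparateBlend() {
    String ext = GLES10.glGetString(GLES10.GL_EXTENSIONS);
    return ext != null
            && ext.contains("GL_OES_blend_func_separate")
            && ext.contains("GL_OES_blend_equation_separate");
}
RenderNormal() can then take the plain glBlendFunc path whenever hasSeparateBlend() returns false.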
Can someone explain why this line:
GLES30.glTexImage2D(GLES30.GL_TEXTURE_2D, 0, GLES30.GL_R16F, width, height, 0, GLES30.GL_RED, GLES30.GL_HALF_FLOAT, myBuffer);
works on a Tegra 4 but doesn't work on an ARM Mali-T628 MP6?
I am not attaching this to a framebuffer, by the way; I am using it as a read-only texture. The error code returned on the Mali is 1280, whereas the Tegra 'doesn't complain' at all.
Also, I know that the Tegra 4 has an extension for half-float textures and that this specific Mali doesn't, but since it is an OpenGL ES 3.0 GPU, shouldn't it support such textures anyway?
That call looks completely valid to me. Error 1280 is GL_INVALID_ENUM, which suggests that one of the 3 enum type arguments is invalid. But each one by itself, as well as the combination of them, is spec compliant.
The most likely explanation is a driver bug. I found that several ES 3.0 drivers have numerous issues, so it's not a big surprise to discover problems.
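To make such failures visible instead of silent, it helps to read the error back right after the upload; a minimal sketch, with width, height and myBuffer as in the question (Log is android.util.Log):
GLES30.glTexImage2D(GLES30.GL_TEXTURE_2D, 0, GLES30.GL_R16F,
        width, height, 0, GLES30.GL_RED, GLES30.GL_HALF_FLOAT, myBuffer);
int err = GLES30.glGetError();
if (err != GLES30.GL_NO_ERROR) {
    // 1280 == 0x0500 == GL_INVALID_ENUM
    Log.w("TexUpload", "glTexImage2D failed: 0x" + Integer.toHexString(err));
}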
The section below was written under the assumption that the texture would be used as a render target (FBO attachment). Please ignore if you are looking for a direct answer to the question.
GL_R16F is not color-renderable in standard ES 3.0.
If you pull up the spec document, which can be found on www.khronos.org, table 3.13 on pages 130-132 lists all texture formats and their properties. R16F does not have a checkmark in the "Color-renderable" column, which means that it cannot be used as a render target.
Correspondingly, R16F is also listed under "Texture-only color formats" in section "Required Texture Formats" on pages 129-130.
This means that the device needs the EXT_color_buffer_half_float extension to support rendering to R16F. This is still the case in ES 3.1 as well.
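For completeness, a one-line runtime check for that extension from Java, assuming a current ES 3.0 context:
String ext = GLES30.glGetString(GLES30.GL_EXTENSIONS);
boolean canRenderHalfFloat =
        ext != null && ext.contains("GL_EXT_color_buffer_half_float");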
My scene in OpenGL ES requires several large-resolution textures, but they are grayscale, since I am using them just as masks. I need to reduce my memory use.
I have tried loading these textures with Bitmap.Config.ALPHA_8, and as RGB_565. ALPHA_8 seems to actually increase memory use.
Is there some way to get a texture loaded into OpenGL so that it uses less than 16 bits per pixel?
glCompressedTexImage2D looks like it might be promising, but from what I can tell, different phones offer different texture compression methods. Also, I don't know if the compression actually reduces memory use at runtime. Is the solution to store my textures in both ATITC and PVRTC formats? If so, how do I detect which format is supported by the device?
Thanks!
GPU-native compressed texture formats such as PVRTC, ATITC, and S3TC should reduce memory usage and improve rendering performance.
For example (sorry, it's in C; you can implement the same check using GL11.glGetString in Java, as sketched after this snippet):
#include <string.h>  /* strstr */

const char *extensions = (const char *) glGetString(GL_EXTENSIONS);
int isPVRTCsupported = strstr(extensions, "GL_IMG_texture_compression_pvrtc") != 0;
int isATITCsupported = strstr(extensions, "GL_ATI_texture_compression_atitc") != 0;
int isS3TCsupported = strstr(extensions, "GL_EXT_texture_compression_s3tc") != 0;
if (isPVRTCsupported) {
/* load PVRTC texture using glCompressedTexImage2D */
} else if (isATITCsupported) {
...
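And the Java sketch mentioned above, using the static GLES10 bindings (it must run on a thread with a current GL context, e.g. in Renderer.onSurfaceCreated()):
String extensions = GLES10.glGetString(GLES10.GL_EXTENSIONS);
boolean hasPVRTC = extensions != null
        && extensions.contains("GL_IMG_texture_compression_pvrtc");
boolean hasATITC = extensions != null
        && extensions.contains("GL_ATI_texture_compression_atitc");
boolean hasS3TC = extensions != null
        && extensions.contains("GL_EXT_texture_compression_s3tc");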
Besides, you can declare the texture compression formats your app supports in AndroidManifest.xml, so that only compatible devices can install it:
The AndroidManifest.xml File - supports-gl-texture
EDIT:
MOTODEV - Understanding Texture Compression
With Imagination Technologies-based (aka PowerVR) systems, you should be able to use PVRTC 4bpp and (depending on the texture and quality requirements) maybe even the 2bpp PVRTC variant.
Also, though I'm not sure what is exposed on Android systems, PVRTexTool lists I8 (i.e. greyscale 8bpp) as a target texture format, which would give you a lossless option.
ETC1 texture compression is supported on all Android devices with Android 2.2 and up.
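Since ETC1 is the guaranteed baseline, the framework's ETC1Util helper can upload a pre-compressed .pkm file directly; a minimal sketch, with an illustrative asset name (bind your texture first; ETC1 is RGB-only at 4 bits per pixel, which still meets the under-16-bit goal for grayscale masks):
import android.content.Context;
import android.opengl.ETC1Util;
import android.opengl.GLES10;
import java.io.IOException;
import java.io.InputStream;

static void loadMask(Context context) throws IOException {
    InputStream in = context.getAssets().open("mask.pkm"); // illustrative path
    try {
        // the RGB565 arguments describe the uncompressed fallback used
        // on the rare device without ETC1 support
        ETC1Util.loadTexture(GLES10.GL_TEXTURE_2D, 0, 0,
                GLES10.GL_RGB, GLES10.GL_UNSIGNED_SHORT_5_6_5, in);
    } finally {
        in.close();
    }
}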