In a recent build of Samsung's OS (I'm testing on a Galaxy S6), Chrome disables WebGL. The reason for this is documented here. I'm running Chrome without the blacklist using this flag:
--ignore-gpu-blacklist
These are the errors that Three.js spits out at me. I'm curious to know if Three.js can successfully run/render if these OES extensions don't exist:
OES_texture_float
OES_texture_float_linear
OES_texture_half_float
OES_texture_half_float_linear
OES_texture_filter_anisotropic
If so, how would I go about altering Three.js so that this could work? If not, where can I read up further on these extensions so I can get a better understanding of how they work?
All five of these extensions deal with textures in the following ways:
The first four extensions support floating-point textures, that is, textures whose components are floating-point values.
OES_texture_float means 32-bit floating-point textures with nearest-neighbor filtering.
OES_texture_float_linear means 32-bit floating-point textures with linear filtering.
OES_texture_half_float means 16-bit floating-point textures with nearest-neighbor filtering.
OES_texture_half_float_linear means 16-bit floating-point textures with linear filtering.
See texture_float, texture_float_linear, and ARB_texture_float (which OES_texture_float_linear is based on) in the OpenGL ES Registry.
Three.js checks for these extensions (and outputs error messages if necessary) in order to enable their functionality.
The last extension (whose actual name is EXT_texture_filter_anisotropic) provides support for anisotropic filtering, which can improve quality when filtering textures viewed at oblique angles.
Registry: https://www.khronos.org/registry/gles/extensions/EXT/texture_filter_anisotropic.txt
This page includes a visualization of this filtering.
Here too, Three.js checks for this extension to see if it can be used.
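(This is not Three.js code, but it may help to see what these extensions actually gate at the API level. Below is a minimal sketch in plain OpenGL ES 2.0 C; the helper names are mine, and WebGL exposes the same extensions to JavaScript via getExtension().)

/* Sketch: without OES_texture_float a 32-bit float texture cannot be
 * created at all; without OES_texture_float_linear it can only use
 * nearest-neighbor filtering. */
#include <GLES2/gl2.h>
#include <string.h>

static int has_extension(const char *name) {
    const char *exts = (const char *)glGetString(GL_EXTENSIONS);
    return exts && strstr(exts, name) != NULL;
}

GLuint create_float_texture(int width, int height) {
    if (!has_extension("GL_OES_texture_float"))
        return 0;                      /* fall back to 8-bit textures */

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_FLOAT, NULL);

    /* Linear filtering of float textures is a separate extension. */
    GLenum filter = has_extension("GL_OES_texture_float_linear")
                        ? GL_LINEAR : GL_NEAREST;
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, filter);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, filter);
    return tex;
}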
For all these extensions, whether it makes sense to "go about altering Three.js" depends on your application. For instance, does your application require floating-point textures for some of its effects? (You can check by looking for uses of THREE.HalfFloatType or THREE.FloatType.)
Although Three.js checks for these extensions, it doesn't inherently rely on them in order to work; only some of its examples require them. The issue is therefore not so much modifying Three.js as modifying your application. Nonetheless, here, in WebGLExtensions.js, is where the warning is generated.
Related
I want to fetch depth from tile local memory with OpenGL ES.
I need this to implement a soft-particles effect and color-only deferred decals in a video game.
The ARM_shader_framebuffer_fetch_depth_stencil extension works great and gives direct access to the depth value.
Now I want to achieve the same result with ARM_shader_framebuffer_fetch and EXT_shader_framebuffer_fetch.
In the GDC talk Bringing Fortnite to Mobile with Vulkan and OpenGL ES, I see that one possible solution is writing the depth value to alpha.
This approach doesn't work for me because of major precision loss; my alpha is only 8 bits.
I'm considering adding a second attachment with enough precision and writing to it with MRT.
The question is: is MRT the way to go, or am I missing some important trick?
Implementations that support ARM_shader_framebuffer_fetch are not guaranteed to support MRT at all. If they do support it, then only the color of attachment zero can be retrieved. There are also some restrictions around color format; e.g. the extension only supports unorm color formats, so it likely cannot give you enough precision for depth even if you put the depth info in attachment zero.
Using EXT_shader_framebuffer_fetch is more generic and adds full MRT support, but not all tile-based GPUs support it.
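(A minimal sketch of the resulting decision, in C against the OpenGL ES 2.0 headers; the enum and function names are hypothetical. The point is simply that the depth-fetch path and the MRT path have to be selected at runtime from the extension string.)

#include <GLES2/gl2.h>
#include <string.h>

typedef enum {
    DEPTH_VIA_FRAMEBUFFER_FETCH,  /* read gl_LastFragDepthARM in the shader */
    DEPTH_VIA_MRT                 /* write depth into a high-precision color
                                     attachment and sample it later */
} DepthPath;

DepthPath choose_depth_path(void) {
    const char *exts = (const char *)glGetString(GL_EXTENSIONS);
    if (exts && strstr(exts, "GL_ARM_shader_framebuffer_fetch_depth_stencil"))
        return DEPTH_VIA_FRAMEBUFFER_FETCH;
    /* ARM_shader_framebuffer_fetch and EXT_shader_framebuffer_fetch only
     * return color values, so depth has to be written out explicitly. */
    return DEPTH_VIA_MRT;
}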
I have some code that uses glBlendFuncSeparateOES and glBlendEquationSeparateOES in order to render onto framebuffers with alpha.
However, I've found that a couple of my target devices do NOT appear to support these functions. They fail silently, and all that happens is that your render mode doesn't get set. My cheap Kindle Fire tablet and an older Samsung both exhibit this behavior.
Is there a good way, on Android, to query whether they're actually implemented? I have tried eglGetProcAddress, but it returns an address for any string you throw at it!
Currently, I just have the game, on startup, do a quick render on a small FBO to see if the transparency is correct or if it has artifacts. It works, but it's a very kludgy method.
I'd much prefer if there was something like glIsBlendFuncSeparateSupported().
You can get a list of all available extensions using glGetString(GL_EXTENSIONS). This returns a space-separated list of supported extensions. For more details see the Khronos specification.
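(For example, a minimal C sketch of such a check for the separate-blend extensions; the helper name is mine. Matching whole, space-delimited tokens avoids false positives from extensions whose names share a prefix.)

#include <GLES/gl.h>
#include <string.h>

static int gl_has_extension(const char *name) {
    const char *exts = (const char *)glGetString(GL_EXTENSIONS);
    if (!exts) return 0;
    size_t len = strlen(name);
    const char *p = exts;
    while ((p = strstr(p, name)) != NULL) {
        /* accept only whole, space-delimited tokens */
        if ((p == exts || p[-1] == ' ') && (p[len] == ' ' || p[len] == '\0'))
            return 1;
        p += 1;
    }
    return 0;
}

/* usage:
 *   if (gl_has_extension("GL_OES_blend_func_separate") &&
 *       gl_has_extension("GL_OES_blend_equation_separate")) {
 *       // safe to call glBlendFuncSeparateOES / glBlendEquationSeparateOES
 *   } else {
 *       // fall back to plain glBlendFunc / glBlendEquation
 *   }
 */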
I am trying to setup an EGLImage where the source sibling is a GL_RENDERBUFFER (the EGLClientBuffer specified as an argument to eglCreateImageKHR). In another context, I create a GL_TEXTURE_2D and specify it as the target sibling of the EGLImage by using glEGLImageTargetTexture2DOES. Unfortunately, the latter call leads to a GL_INVALID_OPERATION. If both the source and target siblings are GL_TEXTURE_2D's, the setup works like a charm.
From my reading of the specification, this should be a permissible operation. It is also possible that my reduced test case has some other orthogonal issue, though I doubt this since the setup works fine when both the source and target siblings are GL_TEXTURE_2Ds. However, if this kind of usage of EGLImages is permissible, what could be the issue that leads to a GL_INVALID_OPERATION? Or am I just mistaken in my interpretation of the specification?
Referenced Extensions:
http://www.khronos.org/registry/gles/extensions/OES/OES_EGL_image.txt
http://www.khronos.org/registry/egl/extensions/KHR/EGL_KHR_image_base.txt
Clarifications:
I do check for the presence of all extensions in the specification (EGL_KHR_image, EGL_KHR_image_base, EGL_KHR_gl_texture_2D_image, EGL_KHR_gl_renderbuffer_image, etc..).
I also realize that there may be differences in the internal format of the EGLImage when I am using a GL_RENDERBUFFER vs a GL_TEXTURE_2D as the source. So I tried using the OES_EGL_image_external extension first with the texture as the source and then the renderbuffer. The texture works fine as always, the same GL_INVALID_OPERATION for the renderbuffer. Using external images when binding makes no difference to the error generated.
Both GL and EGL errors are checked after each call.
I'm afraid this could be a legitimate failure point. A GL_INVALID_OPERATION error can arise if the driver is unable to create a texture from the EGLImage supplied.
http://www.khronos.org/registry/gles/extensions/OES/OES_EGL_image.txt
If the GL is unable to specify a texture object using the supplied eglImageOES (if, for example, <image> refers to a multisampled eglImageOES), the error INVALID_OPERATION is generated.
Do you call glFramebufferRenderbufferOES with the renderbuffer before passing it to eglCreateImageKHR? If so, I suggest you try tweaking how you create your renderbuffer (e.g. try a different format or size) to pin down what conditions get you this error.
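(For reference, a reduced sketch of the sequence being discussed, in C. In real code the KHR/OES entry points are usually fetched with eglGetProcAddress; the attribute list and the choice of internal format here are illustrative, not something the spec requires.)

#define EGL_EGLEXT_PROTOTYPES
#define GL_GLEXT_PROTOTYPES
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>
#include <stdint.h>

EGLImageKHR image_from_renderbuffer(EGLDisplay dpy, EGLContext ctx,
                                    int width, int height) {
    GLuint rbo;
    glGenRenderbuffers(1, &rbo);
    glBindRenderbuffer(GL_RENDERBUFFER, rbo);
    /* The internal format is the knob worth tweaking when hunting the
     * GL_INVALID_OPERATION described above. */
    glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8_OES, width, height);

    EGLint attribs[] = { EGL_IMAGE_PRESERVED_KHR, EGL_TRUE, EGL_NONE };
    return eglCreateImageKHR(dpy, ctx, EGL_GL_RENDERBUFFER_KHR,
                             (EGLClientBuffer)(intptr_t)rbo, attribs);
}

/* In the other context: this is the call that raises GL_INVALID_OPERATION
 * when the driver cannot create a texture sibling from the image. */
void bind_texture_sibling(EGLImageKHR image, GLuint tex) {
    glBindTexture(GL_TEXTURE_2D, tex);
    glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, (GLeglImageOES)image);
}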
I spent a lot more time on this and EGLImage's in general after running into this issue. alonorbach's hypothesis was correct.
If for any reason the driver cannot create a texture sibling from the supplied EGLImage, then a rather ambiguous GL_INVALID_OPERATION is returned. I was under the impression that if I was able to create a valid EGLImage (i.e., not an EGL_NO_IMAGE_KHR) and the appropriate extensions were supported, I would be able to bind to the same using either a renderbuffer or texture sibling (in GL_OES_EGL_image). This is certainly not the case. It also seems to vary a lot from device to device. I was able to get this working on NVidia Tegra units but not on the Adreno 320.
Does this mean it is impossible to reliably use EGLImages on Android? Not quite. Specifically for the issue I was running into, I was able to bind a texture sibling to an EGLImage created using a renderbuffer source by specifying GL_RGBA8_OES (from extension GL_OES_rgb8_rgba8) as the internal format of the source renderbuffer (argument 2 to glRenderbufferStorage). This is less than ideal, but it proves the point that the internal formats of the source and target siblings must somehow match; in case of a mismatch, the driver is under no obligation to accommodate the variation and is free to give up.
Another source of entropy when trying to use EGLImages successfully (at least generically on Android) is the way in which the EGLImages themselves are created. I found that it was far more reliable to create an EGLImage using the EGL_NATIVE_BUFFER_ANDROID target specified in the EGL_ANDROID_image_native_buffer extension. If such extensions are available for your platform, it is highly advisable to use them in a fallback manner.
In summary, the approach that works reliably for me is to first try to create a valid EGLImage using any and all available extensions in a fallback manner, then, using that EGLImage, try binding to the target sibling kind. If this pair of operations yields no errors, that EGLImage/target-kind pair is supported on that device and can be used for subsequent operations. If either operation fails, the next item in the fallback chain is checked. If all else fails, you will likely need a fallback path that does not use EGLImages at all. I have not encountered such an Android device yet (fingers crossed).
I am still discovering corner cases and optimizations and will keep this answer updated with findings.
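(A sketch of the probe-and-fallback idea described above. The create_image callback stands in for whichever creation path is being tried — EGL_NATIVE_BUFFER_ANDROID, EGL_GL_RENDERBUFFER_KHR, EGL_GL_TEXTURE_2D_KHR, and so on — and any error is taken to mean "this path is unsupported on this device".)

#define EGL_EGLEXT_PROTOTYPES
#define GL_GLEXT_PROTOTYPES
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>

typedef EGLImageKHR (*ImageFactory)(EGLDisplay dpy, EGLContext ctx);

/* Returns 1 if this creation path also yields a usable texture sibling. */
int probe_texture_sibling(EGLDisplay dpy, EGLContext ctx,
                          ImageFactory create_image) {
    EGLImageKHR img = create_image(dpy, ctx);
    if (img == EGL_NO_IMAGE_KHR)
        return 0;

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, (GLeglImageOES)img);
    int ok = (glGetError() == GL_NO_ERROR);

    /* Pure capability probe: clean up either way. */
    glDeleteTextures(1, &tex);
    eglDestroyImageKHR(dpy, img);
    return ok;
}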
From what I've read, it appears that OpenGL ES 2.0 isn't anything like OpenGL 2.1, which is what I had previously assumed.
What I'm curious to know is whether or not OpenGL 3 is comparable to OpenGL ES 2.0. In other words, given that I'm about to make a game engine for both desktop and Android, are there any differences I should be aware of in particular regarding OpenGL 3.x+ and OpenGL ES 2.0?
This can also include OpenGL 4.x versions as well.
For example, if I start reading this book, am I wasting my time if I plan to port the engine to Android (using NDK of course ;) )?
From what I've read, it appears that OpenGL ES 2.0 isn't anything like OpenGL 2.1, which is what I had previously assumed.
Define "isn't anything like" it. Desktop GL 2.1 has a bunch of functions that ES 2.0 doesn't have. But there is a mostly common subset of the two that would work on both (though you'll have to fudge things for texture image loading, because there are some significant differences there).
Desktop GL 3.x provides a lot of functionality that unextended ES 2.0 simply does not. Framebuffer objects are core in 3.x; ES 2.0 has them in core too, but you only get one destination image without another extension. There's transform feedback, integer textures, uniform buffer objects, and geometry shaders. These are all specific hardware features that either aren't available in ES 2.0 or are only available via extensions, some of which may be platform-specific.
But there are also some good API convenience features available on desktop GL 3.x. Explicit attribute locations (layout(location=#)), VAOs, etc.
For example, if I start reading this book, am I wasting my time if I plan to port the engine to Android (using NDK of course ;) )?
It rather depends on how much work you intend to do and what you're prepared to do to make it work. At the very least, you should read up on what OpenGL ES 2.0 does, so that you can know how it differs from desktop GL.
It's easy to avoid the actual hardware features. Rendering to texture (or to multiple textures) is something that is called for by your algorithm. As is transform feedback, geometry shaders, etc. So how much you need it depends on what you're trying to do, and there may be alternatives depending on the algorithm.
The thing you're more likely to get caught on are the convenience features of desktop GL 3.x. For example:
layout(location = 0) in vec4 position;
This is not possible in ES 2.0. A similar definition would be:
attribute vec4 position;
That would work in ES 2.0, but it would not cause the position attribute to be associated with the attribute index 0. That has to be done via code, using glBindAttribLocation before the program is linked. Desktop GL also allows this, but the book you linked to doesn't do it, for obvious reasons (it's a 3.3-based book, not one trying to maintain compatibility with older GL versions).
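(A minimal sketch of that call in C; program and shader creation boilerplate and error checking are omitted.)

#include <GLES2/gl2.h>

GLuint link_program(GLuint vertex_shader, GLuint fragment_shader) {
    GLuint program = glCreateProgram();
    glAttachShader(program, vertex_shader);
    glAttachShader(program, fragment_shader);

    /* Must be called before glLinkProgram to take effect: pins the
     * "position" attribute to index 0, which layout(location = 0) would
     * otherwise have done in the shader itself. */
    glBindAttribLocation(program, 0, "position");

    glLinkProgram(program);
    return program;
}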
Uniform buffers are another. The book makes liberal use of them, particularly for shared perspective matrices. It's a simple and effective technique for that. But ES 2.0 doesn't have that feature; it only has the per-program uniforms.
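(In ES 2.0 the equivalent ends up being per-program uniform uploads, roughly like this; "u_projection" is a made-up uniform name.)

#include <GLES2/gl2.h>

/* With no uniform buffer objects, a shared projection matrix has to be
 * pushed into every program that needs it. */
void set_projection(GLuint program, const GLfloat projection[16]) {
    glUseProgram(program);
    GLint loc = glGetUniformLocation(program, "u_projection");
    if (loc != -1)
        glUniformMatrix4fv(loc, 1, GL_FALSE, projection);  /* transpose must be GL_FALSE in ES 2.0 */
}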
Again, you can code to the common subset if you like. That is, you can deliberately forgo using explicit attribute locations, uniform buffers, vertex array objects and the like. But that book isn't exactly going to help you do it either.
Will it be a waste of your time? Well, that book isn't for teaching you the OpenGL 3.3 API (it does do that, but that's not the point). The book teaches you graphics programming; it just so happens to use the 3.3 API. The skills you learn there (except those that are hardware based) transfer to any API or system you're using that involves shaders.
Put it this way: if you don't know graphics programming very much, it doesn't matter what API you use to learn. Once you've mastered the concepts, you can read the various documentation and understand how to apply those concepts to any new API easily enough.
OpenGL ES 2.0 (and 3.0) is mostly a subset of Desktop OpenGL.
The biggest difference is there is no legacy fixed-function pipeline in ES. What's the fixed-function pipeline? Anything having to do with glVertex, glColor, glNormal, glLight, glPushMatrix, glPopMatrix, glMatrixMode, etc., and, in GLSL, any of the variables that access the fixed-function data, like gl_Vertex, gl_Normal, gl_Color, gl_MultiTexCoord, gl_FogCoord, gl_ModelViewMatrix, and the various other matrices from the fixed-function pipeline.
If you use any of those features, you'll have some work cut out for you. OpenGL ES 2.0 and 3.0 are just plain shaders; no "3D" is provided for you. You're required to write all projection, lighting, texture references, etc. yourself.
If you're already doing that (which most modern games probably are), you might not have too much work. If, on the other hand, you've been using those old deprecated OpenGL features, which in my experience is still very common (most tutorials still use that stuff), then you've got a bit of work cut out for you as you try to reproduce those features on your own.
There is an open source library, regal, which I think was started by NVidia. It's supposed to reproduce that stuff. Be aware that the whole fixed-function system was fairly inefficient, which is one of the reasons it was deprecated, but it might be a way to get things working quickly.
I'm trying to decide on whether to primarily use floats or ints for all 3D-related elements in my app (which is C++ for the most part). I understand that most ARM-based devices have no hardware floating point support, so I figure that any heavy lifting with floats would be noticeably slower.
However, I'm planning to prep all data for the most part (i.e. have vertex buffers where applicable and transform using matrices that don't change a lot), so I'm just stuffing data down OpenGL's throat. Can I assume that this goes more or less straight to the GPU and will as such be reasonably fast? (Btw, the minimum requirement is OpenGL ES 2.0, so that presumably excludes older 1.x-based phones.)
Also, what's the penalty when I mix and match ints and floats? Assuming that all my geometry is just pre-built float buffers, but I use ints for matrices since those do require expensive operations like matrix multiplications, how much wrath will I incur here?
By the way, I know that I should keep my expectations low (sounds like even asking for floats on the CPU is asking for too much), but is there anything remotely like 128-bit VMX registers?
(And I'm secretly hoping that fadden is reading this question and has an awesome answer.)
Older Android devices like the G1 and MyTouch have ARMv6 CPUs without floating point support. Most newer devices, like the Droid, Nexus One, and Incredible, use ARMv7-A CPUs that do have FP hardware. If your game is really 3D-intensive, it might demand more from the 3D implementation than the older devices can provide anyway, so you need to decide what level of hardware you want to support.
If you code exclusively in Java, your app will take advantage of the FP hardware when available. If you write native code with the NDK, and select the armv5te architecture, you won't get hardware FP at all. If you select the armv7-a architecture, you will, but your app won't be available on pre-ARMv7-A devices.
OpenGL from Java should be sitting on top of "direct" byte buffers now, which are currently slow to access from Java but very fast from the native side. (I don't know much about the GL implementation though, so I can't offer much more than that.)
Some devices additionally support the NEON "Advanced SIMD" extension, which provides some fancy features beyond what the basic VFP support has. However, you must test for this at runtime if you want to use it (looks like there's sample code for this now -- see the NDK page for NDK r4b).
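(The check itself is small; a sketch using the cpufeatures helper library that ships with the NDK:)

#include <cpu-features.h>

/* Only take the NEON code path when both the ARM CPU family and the NEON
 * feature bit are reported at runtime. */
int can_use_neon(void) {
    return android_getCpuFamily() == ANDROID_CPU_FAMILY_ARM &&
           (android_getCpuFeatures() & ANDROID_CPU_ARM_FEATURE_NEON) != 0;
}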
An earlier answer has some info about the gcc flags used by the NDK for "hard" fp.
Ultimately, the answer to "fixed or float" comes down to what class of devices you want your app to run on. It's certainly easier to code for armv7-a, but you cut yourself off from a piece of the market.
In my opinion you should stick with fixed-point as much as possible.
It's not only old phones that lack floating-point support; some newer ones, such as the HTC Wildfire, do too.
Also, if you choose to require ARMv7, please note that, for example, the Motorola Milestone (the European Droid) does feature an ARMv7 CPU, but because of the way Android 2.1 was built for this device, it will not use your armeabi-v7a libs (and might hide your app from the Market).
I personally worked around this by detecting ARMv7 support using the new cpufeatures library provided with NDK r4b, and then loading an armeabi-v7a lib on demand with dlopen().
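(A sketch of that workaround; the library and symbol names are made up, but the cpufeatures calls and the dlopen() usage are the real mechanism.)

#include <cpu-features.h>
#include <dlfcn.h>
#include <stdint.h>

/* Load the ARMv7-optimized library only when the CPU actually supports it,
 * otherwise fall back to the generic build shipped as armeabi. */
void *load_optimized_lib(void) {
    if (android_getCpuFamily() == ANDROID_CPU_FAMILY_ARM &&
        (android_getCpuFeatures() & ANDROID_CPU_ARM_FEATURE_ARMv7)) {
        void *lib = dlopen("libgame_v7a.so", RTLD_NOW);
        if (lib)
            return lib;   /* resolve entry points from it with dlsym() */
    }
    return dlopen("libgame_v5.so", RTLD_NOW);
}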