Is an OpenGL function loader needed? - android

On desktop OSes, OpenGL function loaders like GLEW, GLAD, etc. are used to load functions at runtime. But what about on Android? How are functions loaded? I've looked at a few apps and they all seem to depend on EGL and GLES. But AFAIK EGL isn't a loading library; it's an interface. Well, an interface to an interface, since GLES is itself just an interface.
This leads to another question: How come Android uses EGL when it is generally not used on desktops?

Back when I did a bit of Android work, you could link against either the GLES 2.0 library or the GLES 3.0 library, so in a sense they provide the function pointers for you (more or less). If you linked against GLES 3.0 but the phone you ran on only supported 2.0, your app would fail to load. To work around this, I always linked against GLES 2.0 and wrote my own function loader using eglGetProcAddress to pull in the GLES 3.0 API if it was available; a sketch of this appears below. This is pretty much how function loaders on Windows/Linux work (using wglGetProcAddress or glXGetProcAddress).
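For illustration, a minimal sketch of that kind of loader in C, assuming an ES context is already current (the pointer name, the version check, and the chosen entry point are illustrative, not a standard API):
#include <string.h>
#include <EGL/egl.h>
#include <GLES3/gl3.h>  /* ES 3.0 types/enums; we still link only libGLESv2 */

/* One ES 3.0 entry point, fetched at runtime: */
static void (GL_APIENTRY *pglBindVertexArray)(GLuint array);

static int load_gles3(void)
{
    /* Only try if the driver actually reports an ES 3.x context: */
    const char *version = (const char *)glGetString(GL_VERSION);
    if (!version || !strstr(version, "OpenGL ES 3."))
        return 0;  /* device only supports ES 2.0 */
    pglBindVertexArray = (void (GL_APIENTRY *)(GLuint))
        eglGetProcAddress("glBindVertexArray");
    return pglBindVertexArray != NULL;
}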
GLES has always been a stripped-down version of full desktop GL. It targets a smaller subset of the API (removing all of the legacy cruft), which simplifies the OpenGL driver code somewhat, which in turn should reduce memory usage and save a little battery life. Basically, it's better suited to a low-power system where battery life is a concern.

How come Android uses EGL when it is generally not used on desktops?
It was widely used on embedded electronics prior to Android, so I suspect it was the path of least resistance for a new smartphone OS, as all GPU vendors already had an EGL implementation.

Related

How do I use GL_MAP_PERSISTENT_BIT in OpenGL ES 3.1 on Android?

I recently switched from using glBufferData to glMapBufferRange, which gives me direct access to GPU memory rather than copying the data from CPU to GPU every frame.
This works just fine, and in OpenGL ES 3.0 I do the following per frame (sketched below):
1. Get a pointer to my GPU buffer memory via glMapBufferRange.
2. Update my buffer directly through that pointer.
3. Unmap the buffer with glUnmapBuffer so that I can render.
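In code, that per-frame pattern looks roughly like this (a sketch; vbo, size, and vertexData stand in for my actual buffer state):
glBindBuffer(GL_ARRAY_BUFFER, vbo);
void *ptr = glMapBufferRange(GL_ARRAY_BUFFER, 0, size,
                             GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);
memcpy(ptr, vertexData, size);   /* step 2: write this frame's data */
glUnmapBuffer(GL_ARRAY_BUFFER);  /* step 3: unmap before drawing */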
But some Android devices may have at least OpenGL ES 3.1 and, as I understand it, may also have the EXT_buffer_storage extension (please correct me if that's the wrong extension). Using this extension it's possible to set up persistent buffer pointers, via the GL_MAP_PERSISTENT_BIT flag, which do not require mapping/unmapping every frame. But I can't figure out, or find much online about, how to access these features.
How exactly do I invoke glMapBufferRange with GL_MAP_PERSISTENT_BIT set in OpenGL ES 3.1 on Android?
Examining glGetString(GL_EXTENSIONS) does seem to show the extension is present on my device, but I can't find GL_MAP_PERSISTENT_BIT anywhere, e.g. in GLES31 or GLES31Ext, and I'm just not sure how to proceed.
The standard Android Java bindings for OpenGL ES only expose extensions that are guaranteed to be supported by all implementations on Android. If you want to use less universally available vendor extensions, you'll need to roll your own JNI bindings, using eglGetProcAddress() from native code compiled with the NDK to fetch the entry points.
For this one you want the extension entry point glBufferStorageEXT().
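A minimal native-side sketch, assuming an ES 3.1 context is current and the extension string contains GL_EXT_buffer_storage (the typedef name is my own; the enum fallbacks mirror the extension spec, and newer NDK headers declare them in GLES2/gl2ext.h):
#include <EGL/egl.h>
#include <GLES3/gl31.h>

/* Values from the EXT_buffer_storage spec, in case the headers lack them: */
#ifndef GL_MAP_PERSISTENT_BIT_EXT
#define GL_MAP_PERSISTENT_BIT_EXT 0x0040
#define GL_MAP_COHERENT_BIT_EXT   0x0080
#endif

typedef void (GL_APIENTRY *BufferStorageEXTFn)(GLenum target,
    GLsizeiptr size, const void *data, GLbitfield flags);

static void *map_persistent(GLuint vbo, GLsizeiptr size)
{
    BufferStorageEXTFn bufferStorageEXT =
        (BufferStorageEXTFn)eglGetProcAddress("glBufferStorageEXT");
    GLbitfield flags = GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT_EXT |
                       GL_MAP_COHERENT_BIT_EXT;
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    bufferStorageEXT(GL_ARRAY_BUFFER, size, NULL, flags); /* immutable store */
    /* The returned pointer stays valid until the buffer is deleted,
       so there is no per-frame map/unmap: */
    return glMapBufferRange(GL_ARRAY_BUFFER, 0, size, flags);
}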

Is it worth it performance-wise to do OpenGL+OpenCV entirely through the NDK NativeActivity instead of through the Java SDK?

I'm about to start working on an augmented reality project that will involve using the EPSON Moverio AR glasses or similar. Assuming we go with those glasses, they run Android 4.0.4 (API 15). The app will involve near-realtime analysis of video (frames from the glasses' camera) for feature detection/tracking with markers, and overlaying 3D objects in the 'real world' based on the markers.
So far then, on the technical side, it looks like we'll be dealing with:
API 15
OpenCV
OpenGLES2
Considering the above, I'm wondering if it's worth doing it all through the NDK, using a single NativeActivity with the android_native_app_glue code. When I say worth it, I mean performance-wise.
Sure, doing it all on the C/C++ side has, for instance, the advantage that the code could then potentially be ported with minimal modification to run in other environments. But OpenCV does have Java bindings for Android, and GL can also be used to a certain extent from Java. So I'm just wondering whether, performance-wise, it's worth it or it would be about the same as, say, using a GLSurfaceView.
I work in augmented reality. The vast majority of applications I've seen have been native. Google recommends avoiding native applications unless the gains are absolutely necessary, and I think AR is one of the relatively few cases where they are. The benefits I'm aware of are:
Native camera access will allow you to get a higher capture framerate; passing the data up to the Java layer slows this down considerably. Even OpenCV's non-native capture can be slower in Java, because OpenCV primarily maintains its data in native objects. Your capture framerate is a limiting factor on how fast you can update the pose information for your objects. Beware, though: OpenCV's native camera will not work on devices running Qualcomm's MSM-optimized fork of Android, which includes many Snapdragon devices.
Every call to an OpenGL method in Java not only pays the cost of dropping into native code, it also performs quite a few additional checks. Look through GLES20.cpp, which contains the native implementation of the GLES20 class's methods; you'll see that you can bypass quite a lot of logic with native calls. This is fine in most mobile applications, but 3D rendering often benefits significantly from skipping those checks and the JNI overhead. It matters even more in AR, because you will already be swamping the system with CV processing.
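As an illustration, a sketch of that approach in C: one JNI crossing per frame, with all the GL work behind it (the class and method names here are made up):
#include <jni.h>
#include <GLES2/gl2.h>

/* Called once per frame from a thin Java-side renderer stub; every GL
   call below runs natively, with no per-call JNI crossing or Java-side
   argument checks: */
JNIEXPORT void JNICALL
Java_com_example_ar_NativeRenderer_drawFrame(JNIEnv *env, jclass clazz)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    /* ... bind buffers, set uniforms, issue draw calls ... */
}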
You will very likely want your detection-related code in native. OpenCV has samples if you want to see the difference between native and Java detection performance; the former will use fewer resources and be more consistent. A native application also means you can call your native functions without paying the cost of passing large amounts of data from Java to native.
Native sensor access is more efficient, and the delivery rate stays far more consistent in native code thanks to the lack of garbage collection and JNI overhead. This is relevant if you will be using IMU data in interesting ways.
You may be able to build a Java-based application that keeps most of its code in native and still runs well, but it is considerably more difficult.

Is RenderScript the only device-independent way to run GPGPU code on Android?

Is RenderScript the only device-independent way to run GPGPU code on Android?
I don't count Tegra, as only a few phones have it.
RenderScript is the official Android compute platform. As a result it will be on all Android devices. It was designed specifically to address the problem of running one code base across many different devices.
Well, using RenderScript doesn't necessarily mean that your code will run on the GPU. It might also use the CPU, (hopefully) parallelizing tasks across several CPU cores and using CPU vector instructions. But as far as I know, you can never be sure about that, and the decision process is something of a black box.
If you want to make sure that your code runs on the GPU, you can "simulate" some GPGPU functions with OpenGL ES 2.0 shaders. This will run on all devices that support OpenGL ES 2.0. It depends on what you want to do, but many image-processing functions, for example, can be implemented very efficiently this way. There is a library called ogles_gpgpu that provides an architecture for GPGPU on Android and iOS: https://github.com/internaut/ogles_gpgpu
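A sketch of the basic pattern (ES 2.0, C; error checks omitted, and width/height are placeholders): the "kernel" is a fragment shader, the output is a texture attached to a framebuffer object, and each fragment is one work item:
GLuint outTex, fbo;
glGenTextures(1, &outTex);
glBindTexture(GL_TEXTURE_2D, outTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, outTex, 0);

/* Bind the input image as a texture, bind the processing shader, and
   draw a full-screen quad; then glReadPixels the result back, or feed
   outTex into the next pass. */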
OpenGL ES 3.1 also supports compute shaders, but few devices support it yet.

How to show an image without SurfaceFlinger on Android

How do I draw something before SurfaceFlinger starts on Android?
In this situation it's essentially a traditional Linux system with a framebuffer device, so accessing the framebuffer directly should be OK.
What about using HWComposer directly, or using EGL directly?
If SurfaceFlinger isn't running, you can just open the framebuffer device and write to it (assuming you're on a device that has a framebuffer device).
For an example of this, see the implementation of the "recovery" UI. The key file there is minui/graphics.c. The current implementation relies on libpixelflinger.
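A minimal sketch of the direct-framebuffer approach in C (it assumes a device exposing /dev/graphics/fb0 with a 32bpp mode and no row padding; real code, like minui, also checks fb_fix_screeninfo for the line length, and error handling is omitted here):
#include <fcntl.h>
#include <linux/fb.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

int main(void)
{
    int fd = open("/dev/graphics/fb0", O_RDWR);
    struct fb_var_screeninfo vi;
    ioctl(fd, FBIOGET_VSCREENINFO, &vi);  /* query resolution and depth */
    size_t size = (size_t)vi.xres * vi.yres * (vi.bits_per_pixel / 8);
    uint32_t *px = mmap(NULL, size, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);
    for (size_t i = 0; i < (size_t)vi.xres * vi.yres; i++)
        px[i] = 0xFF2266AA;               /* fill with a solid color */
    return 0;
}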
Using OpenGL ES / EGL will be a bit more tricky. Some of the early GLES tests, such as San Angeles, use the FramebufferWindow class, but that uses a fair bit of the framework. (FWIW, an upcoming release is expected to deprecate FramebufferWindow and switch the OpenGL tests that use them to a new library that talks to SurfaceFlinger.)
Update: the upcoming release happened, and you can see the replacement for FramebufferWindow ("WindowSurface") here.
If SurfaceFlinger isn't running you can talk to HardwareComposer directly, using the internal interface. There are some old tests that exercise it, but I don't know if they still work. The code in SurfaceFlinger is probably a better example at this point. Only one process can open HardwareComposer at a time, so SurfaceFlinger must not be running.

OpenGL ES 2.0 vs OpenGL 3 - Similarities and Differences

From what I've read, it appears that OpenGL ES 2.0 isn't anything like OpenGL 2.1, which is what I had assumed before.
What I'm curious to know is whether or not OpenGL 3 is comparable to OpenGL ES 2.0. In other words, given that I'm about to make a game engine for both desktop and Android, are there any differences I should be aware of in particular regarding OpenGL 3.x+ and OpenGL ES 2.0?
This can also include OpenGL 4.x versions as well.
For example, if I start reading this book, am I wasting my time if I plan to port the engine to Android (using NDK of course ;) )?
From what I've read, it appears that OpenGL ES 2.0 isn't anything like OpenGL 2.1, which is what I assumed from before.
Define "isn't anything like" it. Desktop GL 2.1 has a bunch of functions that ES 2.0 doesn't have. But there is a mostly common subset of the two that would work on both (though you'll have to fudge things for texture image loading, because there are some significant differences there).
Desktop GL 3.x provides a lot of functionality that unextended ES 2.0 simply does not. Framebuffer objects are core in 3.x, whereas they're extensions in 2.0 (and even then, you only get one destination image without another extension). There's transform feedback, integer textures, uniform buffer objects, and geometry shaders. These are all specific hardware features that either aren't available in ES 2.0 or are only available via extensions, some of which may be platform-specific.
But there are also some good API convenience features available on desktop GL 3.x. Explicit attribute locations (layout(location=#)), VAOs, etc.
For example, if I start reading this book, am I wasting my time if I plan to port the engine to Android (using NDK of course ;) )?
It rather depends on how much work you intend to do and what you're prepared to do to make it work. At the very least, you should read up on what OpenGL ES 2.0 does, so that you can know how it differs from desktop GL.
It's easy to avoid the actual hardware features. Rendering to texture (or to multiple textures) is something that is called for by your algorithm, as are transform feedback, geometry shaders, etc. So how much you need them depends on what you're trying to do, and there may be alternatives depending on the algorithm.
What you're more likely to get caught on are the convenience features of desktop GL 3.x. For example:
layout(location = 0) in vec4 position;
This is not possible in ES 2.0. A similar definition would be:
attribute vec4 position;
That would work in ES 2.0, but it would not cause the position attribute to be associated with attribute index 0. That has to be done in code, using glBindAttribLocation before the program is linked. Desktop GL also allows this, but the book you linked to doesn't do it, for obvious reasons (it's a 3.3-based book, not one trying to maintain compatibility with older GL versions).
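The ES 2.0-compatible path looks roughly like this (a sketch; the shader objects are assumed to already exist):
GLuint program = glCreateProgram();
glAttachShader(program, vertShader);
glAttachShader(program, fragShader);
/* Must happen before linking; does what layout(location = 0) did: */
glBindAttribLocation(program, 0, "position");
glLinkProgram(program);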
Uniform buffers are another. The book makes liberal use of them, particularly for shared perspective matrices; it's a simple and effective technique for that. But ES 2.0 doesn't have that feature; it only has per-program uniforms.
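Without uniform buffers, the fallback is ordinary per-program uniforms, along these lines (a sketch; the uniform name is illustrative):
GLint loc = glGetUniformLocation(program, "u_perspective");
glUseProgram(program);
/* Every program that needs the shared matrix gets its own copy,
   re-uploaded like this whenever the matrix changes: */
glUniformMatrix4fv(loc, 1, GL_FALSE, perspectiveMatrix);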
Again, you can code to the common subset if you like. That is, you can deliberately forgo using explicit attribute locations, uniform buffers, vertex array objects and the like. But that book isn't exactly going to help you do it either.
Will it be a waste of your time? Well, that book isn't for teaching you the OpenGL 3.3 API (it does do that, but that's not the point). The book teaches you graphics programming; it just so happens to use the 3.3 API. The skills you learn there (except those that are hardware based) transfer to any API or system you're using that involves shaders.
Put it this way: if you don't know graphics programming very much, it doesn't matter what API you use to learn. Once you've mastered the concepts, you can read the various documentation and understand how to apply those concepts to any new API easily enough.
OpenGL ES 2.0 (and 3.0) is mostly a subset of desktop OpenGL.
The biggest difference is that there is no legacy fixed-function pipeline in ES. What's the fixed-function pipeline? Anything having to do with glVertex, glColor, glNormal, glLight, glPushMatrix, glPopMatrix, glMatrixMode, etc., and, in GLSL, any of the variables that access the fixed-function data, like gl_Vertex, gl_Normal, gl_Color, gl_MultiTexCoord, gl_FogCoord, gl_ModelViewMatrix, and the various other matrices from the fixed-function pipeline.
If you use any of those features you'll have some work cut out for you. OpenGL ES 2.0 and 3.0 are just plain shaders; no "3D" is provided for you. You're required to write all the projection, lighting, texture referencing, etc. yourself.
If you're already doing that (which most modern games probably are), you might not have too much work. If, on the other hand, you've been using those old deprecated OpenGL features, which in my experience is still very common (most tutorials still use that stuff), then you've got your work cut out for you as you try to reproduce those features on your own.
There is an open source library, Regal, which I think was started by NVIDIA, that is supposed to reproduce that stuff. Be aware that the whole fixed-function system was fairly inefficient (one of the reasons it was deprecated), but it might be a way to get things working quickly.
