EGL vs GLES 2.0 on Android (e.g. Java)

(Experienced C programmer, pre-shader fixed-function OpenGL; competent Java programmer.)
I have been working with GLES on Android and have gotten the examples to run (both native and Java). In particular, the textured triangle example. What is completely confusing me is the "relationship" of Khronos EGL and the android GLES interfaces.
Are these parallel, independent interfaces (API)?
Is EGL supposed to be a platform independent (neutral) interface?
EGL appears to fully support GLES 1.0 and 1.1 but does not support ES 2.0 (on Android)?
So it appears to me that EGL is supposed to be a platform-neutral, parallel interface, BUT it does not fully support GLES 2.0 (on Android). So if you're writing GLES 2.0 code (on Android), you are better off just using the GLxxx API rather than the EGLxxx API (and having to fall back to the GLxxx API anyway). As far as I can tell, you don't >HAVE< to use EGL for anything, since it only supports a subset of the ES 2.0 API.
(Every example/book/reference either mixes the two, uses the native interface, or uses only EGL 1.1 features. Am I missing something fundamental here?)

EGL is a complement to OpenGL ES. EGL is used for getting surfaces to render to using functions like eglCreateWindowSurface, and you can then draw to that surface with OpenGL ES. Its role is similar to GLX/WGL/CGL.
Whether or not EGL can give you a context that supports OpenGL ES 2.0 may vary by platform, but if the Android device supports ES 2.0 and EGL, you should be able to get such a context from EGL. Take a look at the EGL_RENDERABLE_TYPE attribute and the EGL_OPENGL_ES2_BIT when requesting an EGLConfig.
http://www.khronos.org/files/egl-1-4-quick-reference-card.pdf
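For the Java side, here is a minimal sketch of asking EGL for an ES 2.0-capable config using the android.opengl.EGL14 bindings (API 17+); the buffer sizes are just example values:

import android.opengl.EGL14;
import android.opengl.EGLConfig;
import android.opengl.EGLDisplay;

public final class EglConfigChooser {
    // Returns an EGLConfig that can back an ES 2.0 context, or null if none exists.
    public static EGLConfig chooseEs2Config(EGLDisplay display) {
        int[] attribs = {
                EGL14.EGL_RENDERABLE_TYPE, EGL14.EGL_OPENGL_ES2_BIT, // the key attribute
                EGL14.EGL_RED_SIZE, 8,
                EGL14.EGL_GREEN_SIZE, 8,
                EGL14.EGL_BLUE_SIZE, 8,
                EGL14.EGL_DEPTH_SIZE, 16,
                EGL14.EGL_NONE
        };
        EGLConfig[] configs = new EGLConfig[1];
        int[] numConfigs = new int[1];
        if (!EGL14.eglChooseConfig(display, attribs, 0, configs, 0, 1, numConfigs, 0)
                || numConfigs[0] == 0) {
            return null;
        }
        return configs[0];
    }
}

You then pass EGL_CONTEXT_CLIENT_VERSION 2 in the attribute list of eglCreateContext for that config; GLSurfaceView.setEGLContextClientVersion(2) does the same dance for you.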

Related

Using an OpenCL buffer to draw a texture in OpenGL ES 2.0

I am working with an ARM Mali 72 on my Android smartphone.
I would like to take the output buffer from OpenCL and render it in OpenGL as a texture.
I have no problem with OpenCL alone, nor with OpenGL alone.
I have no clue how to use both in the same application.
The goal is to take my OpenCL output and send it to OpenGL.
Some piece of code, step by step, would be very nice.
I can use OpenCL 2.0 and OpenGL ES 3.0 on my smartphone.
************** ADDED ON 30/09/2020 **************
It looks like I need to give more information on how to manage my problem.
So my configuration is: I have a Java OpenGL ES application already developed. I retrieve the camera frame from Camera.onPreviewFrame, then send it to OpenCL using JNI.
I would like to get the EGL display from the Java OpenGL ES side, send it through JNI, compute my OpenCL kernel, and send the result back to Java OpenGL ES.
I know how to retrieve data from OpenCL, transform it to a bitmap, and use SurfaceTexture and GL_TEXTURE_EXTERNAL_OES to display it in OpenGL ES.
My problem is how to retrieve the EGL display from Java OpenGL ES and how to send it to C++; that part I can manage to figure out using JNI. But I do not know how to implement the C++ part using EGL and OpenCL.
The answer from BenMark is interesting concerning the processing, but I am missing some parts. Is it possible to keep my configuration, using Java OpenGL ES, or do I need to do all the EGL, OpenGL, and OpenCL code in native?
Thanks a lot for helping me understand the problem and trying to find a solution. ;))
I haven't written a code example, but:
Using the EGL API makes interoperability between the GLES and OpenCL APIs easier.
This page provides some tips: https://developer.arm.com/documentation/101574/0400/Using-OpenCL-extensions/Inter-operation-with-EGL/EGL-images
From that page, amongst other things:
You'll want the EGL_KHR_image_base extension to share EGL images.
In OpenCL you'll want cl_khr_egl_image to use the EGL image, and then you must flush in OpenCL with clFinish or clWaitForEvents to be sure that the image is then ready for use by OpenGL ES.
The start and end of the EGL image accesses by OpenCL applications must be signaled by enqueuing clEnqueueAcquireEGLObjectsKHR and clEnqueueReleaseEGLObjectsKHR commands.
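On the Java side of that handoff, a rough sketch of grabbing the current EGL handles to pass through JNI could look like this; EGL14 and getNativeHandle() (API 21+) are real Android APIs, but the native method name is just a placeholder:

import android.opengl.EGL14;
import android.opengl.EGLContext;
import android.opengl.EGLDisplay;

public final class EglHandles {
    // Must run on the thread that owns the GL context,
    // e.g. inside GLSurfaceView.Renderer.onSurfaceCreated or onDrawFrame.
    public static void handOffToNative() {
        EGLDisplay display = EGL14.eglGetCurrentDisplay();
        EGLContext context = EGL14.eglGetCurrentContext();
        // getNativeHandle() exposes the raw EGL handles; on the C/C++ side they
        // can be treated as EGLDisplay/EGLContext and fed to the cl_khr_egl_image path.
        initClFromEgl(display.getNativeHandle(), context.getNativeHandle());
    }

    // Placeholder JNI entry point; the implementation would live in native code.
    private static native void initClFromEgl(long eglDisplay, long eglContext);
}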
I hope that helps get you going.
It took a long time to understand this problem. ;))
So the solutions are:
It is not possible to share a GL context with another thread, so the Java GL side and the OpenCL C++ side cannot share data directly. Depending on the GLES version, there are different possibilities.
GLES 2.0:
You would need to rewrite SurfaceTexture.cpp to access the (EGLImage) surface from C++, and I do not even know if that is possible because of the context/thread restriction, so forget it for the moment. ;))
But you can still use the camera onPreviewFrame callback to get the data, then send it through JNI to C++ OpenCL; that is what I am doing at the moment. Then send the OpenCL output back to the display view and catch it using Canvas and GL_TEXTURE_EXTERNAL_OES. It works, but it is heavy. ;)) And you cannot get anything from a GLSL texture back to C++.
GLES 3.1:
Use a compute shader from Java rather than OpenCL (see the sketch after the link below). ;))
Have a look at What is the difference between OpenCL and OpenGL's compute shader?
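A minimal sketch of that compute-shader route from Java, assuming an ES 3.1 context is current (error checking and the glBindImageTexture call that binds the output texture to image unit 0 are omitted):

import android.opengl.GLES31;

public final class ComputePass {
    private static final String SRC =
            "#version 310 es\n" +
            "layout(local_size_x = 8, local_size_y = 8) in;\n" +
            "layout(rgba8, binding = 0) writeonly uniform highp image2D outImage;\n" +
            "void main() {\n" +
            "    ivec2 p = ivec2(gl_GlobalInvocationID.xy);\n" +
            "    imageStore(outImage, p, vec4(vec2(p) / 256.0, 0.0, 1.0));\n" +
            "}\n";

    public static int buildProgram() {
        int shader = GLES31.glCreateShader(GLES31.GL_COMPUTE_SHADER);
        GLES31.glShaderSource(shader, SRC);
        GLES31.glCompileShader(shader);
        int program = GLES31.glCreateProgram();
        GLES31.glAttachShader(program, shader);
        GLES31.glLinkProgram(program);
        return program;
    }

    public static void dispatch(int program, int width, int height) {
        GLES31.glUseProgram(program);
        GLES31.glDispatchCompute(width / 8, height / 8, 1);
        // Make the image writes visible to the draw calls that sample the texture afterwards.
        GLES31.glMemoryBarrier(GLES31.GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);
    }
}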

Checking for availability of shader compiler before committing to use OpenGL ES 2.0

I am writing an Android app and am trying to maximize device support as much as possible. This means (for me at least) supporting both OpenGL ES 1.0 and OpenGL ES 2.0, depending on which is available.
Shader compiler support in OpenGL ES 2.0 is optional, but required for my app (unless I use OpenGL ES 1.0, of course), and so far I haven't been able to find any information about availability. So, I need to check whether compiler support is available and, if not, fall back to OpenGL ES 1.0.
The problem is that if I call GLES20.glGetIntegerv(GLES20.GL_SHADER_COMPILER, ...) to check for support before my onSurfaceCreated(...) method is called, it returns GL_FALSE (if I call it inside onSurfaceCreated, it returns GL_TRUE).
This means I need to create my GL20 Renderer and commit my GLSurfaceView to use GL20 via setEGLContextClientVersion(2) and setRenderer(myGL20Renderer) before I can figure out whether that's really what I want to do.
Is there any way around this, other than throwing all that setup away and basically starting over, or accepting that my app might crash on systems that claim to support GL ES 2.0 but don't offer shader compilation?
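For reference, this is the kind of check that does return GL_TRUE, because it runs once the context is current (a minimal sketch of what I described above):

import android.opengl.GLES20;
import android.opengl.GLSurfaceView;

import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

class CompilerProbeRenderer implements GLSurfaceView.Renderer {
    @Override
    public void onSurfaceCreated(GL10 unused, EGLConfig config) {
        int[] result = new int[1];
        GLES20.glGetIntegerv(GLES20.GL_SHADER_COMPILER, result, 0);
        boolean hasCompiler = result[0] != 0;   // only meaningful with a current ES 2.0 context
        // ...decide here whether to stay on the ES 2.0 path or fall back
    }

    @Override public void onSurfaceChanged(GL10 unused, int width, int height) {}
    @Override public void onDrawFrame(GL10 unused) {}
}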

Android OpenGL ES shader compiler support

The OpenGL ES 2.0 Specs state that "[s]hader compiler support is optional" (see "Notes" here).
Are there any Android devices that do not support shader compilation? If so, is there some shader compiler that I can include with my app to generate a binary instead? Or is the format of the binary also standardized so that I can precompile my shaders before hand and ship the binary with my app if needed? Or is there a requirement I can put into my app so that it isn't offered to devices without compiler support?
Since Android 4.0 (actually 3.0, but Google never released that code as a distinct product), OpenGL ES 2.0 has been part of the spec required to ship Android Market/Google Play. See the Android 4.0 Compatibility Definition Document and the Android Compatibility Definition Document Archive for the other versions.
Since OpenGL ES 2.0 uses shaders written in the OpenGL ES Shading Language, I believe the 'optional' note for the shader compiler refers to the fact that the driver vendor can provide a different (binary) interface for loading shaders. Given that there is no specified binary format, everyone, as far as I can tell, feeds GLSL text into the graphics driver to build the shaders at runtime. And don't forget there are multiple GPU vendors/chipsets, so shipping specific binaries for each doesn't look very attractive from a developer's point of view, at least in the multi-CPU-architecture (ARM, x86, MIPS), multi-GPU (Qualcomm, PowerVR, NVIDIA) world of Android. Vendors can still interpret the text differently, but at least it is within the prescribed Khronos spec.
Since text is what is sent to the GPU driver, performance could be better: the driver has to do the translation, mapping, scheduling, etc. at runtime, which leads to the recent announcement of Vulkan (see the Android Developers Blog Vulkan announcement). If you look at the specs, they describe an intermediate binary format, but a consumer-available implementation is probably at least a year away.
Unless your intention is to support Gingerbread (2.3) and below, you should be able to rely on OpenGL ES 2.0 availability.
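For completeness, the usual runtime check for ES 2.0 support is ActivityManager.getDeviceConfigurationInfo(); declaring <uses-feature android:glEsVersion="0x00020000" android:required="true"/> in the manifest additionally keeps the app from being offered to devices without it. A small sketch of the runtime check:

import android.app.ActivityManager;
import android.content.Context;
import android.content.pm.ConfigurationInfo;

public final class GlVersionCheck {
    public static boolean supportsEs2(Context context) {
        ActivityManager am =
                (ActivityManager) context.getSystemService(Context.ACTIVITY_SERVICE);
        ConfigurationInfo info = am.getDeviceConfigurationInfo();
        return info.reqGlEsVersion >= 0x20000;   // 0x20000 == OpenGL ES 2.0
    }
}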

Mixing OpenGL ES 2.0 and 3.0

I'm trying to port an iOS project to Android (Java). However, I've encountered a few ES 2.0 extension functions (OES) which do not appear in the Android GLES20 API:
glGenVertexArraysOES
glBindVertexArrayOES
glDeleteVertexArraysOES
It appears I have to call these functions from the NDK, dynamically bind the extensions at runtime, and check for support on devices. Not something I'd love to do.
While googling I found these functions in the GLES30 api. So my question is:
- is it possible to mix GLES20 and GLES30 calls?
- are these functions basically calls to the same api or is this completely different?
- any other suggestions?
Just looking at the API entry points, ES 3.0 is a superset of ES 2.0. So the transition is mostly smooth. You request API version 3 when making the GLSurfaceView.setEGLContextClientVersion() call, and your ES 2.0 code should still work. Then you can start using methods from GLES30 on top of the GLES20 methods.
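As a rough sketch (class and method names here are just illustrative), that request-version-3-then-mix pattern looks like:

import android.opengl.GLES20;
import android.opengl.GLES30;
import android.opengl.GLSurfaceView;

public final class Es3Setup {
    public static void requestEs3(GLSurfaceView view, GLSurfaceView.Renderer renderer) {
        view.setEGLContextClientVersion(3);   // existing ES 2.0 code keeps working
        view.setRenderer(renderer);
    }

    // Called on the GL thread, e.g. from onSurfaceCreated:
    public static int createVao() {
        int[] vao = new int[1];
        GLES30.glGenVertexArrays(1, vao, 0);  // GLES30 entry point
        GLES30.glBindVertexArray(vao[0]);
        GLES20.glEnableVertexAttribArray(0);  // GLES20 calls mix freely with it
        return vao[0];
    }
}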
There are some very subtle differences, e.g. related to slight differences in cube map sampling, but you're unlikely to run into them. If you want details, see appendix F.2 of the spec document. Some features like client side vertex arrays have been declared legacy, but are still supported.
The only thing you're likely to encounter are differences in GLSL. You can still use ES 2.0 shaders as long as you keep the #version 100 in the shader code. But if you want to use the latest GLSL version (#version 300 es), there are incompatible changes. The necessary code changes are simple, it's mostly replacing attribute and varying with in and out, and not using the built-in gl_FragColor anymore. You have to switch over to the new GLSL version if you want to take advantage of certain new ES 3.0 features, like multiple render targets.
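Here is a small side-by-side of those GLSL changes, written as Java string constants the way Android samples usually embed shaders (names are illustrative):

public final class ShaderSources {
    // ES 2.0-style fragment shader, still accepted by an ES 3.0 context:
    static final String FRAGMENT_100 =
            "#version 100\n" +
            "precision mediump float;\n" +
            "varying vec2 vTexCoord;\n" +
            "uniform sampler2D uTexture;\n" +
            "void main() { gl_FragColor = texture2D(uTexture, vTexCoord); }\n";

    // ES 3.0-style: 'varying' becomes 'in', gl_FragColor is replaced by a
    // user-declared 'out' variable, and texture2D() becomes texture():
    static final String FRAGMENT_300 =
            "#version 300 es\n" +
            "precision mediump float;\n" +
            "in vec2 vTexCoord;\n" +
            "uniform sampler2D uTexture;\n" +
            "out vec4 fragColor;\n" +
            "void main() { fragColor = texture(uTexture, vTexCoord); }\n";
}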
The downside of using ES 3.0 is of course that you're much more restricted in the devices your software runs on. While the latest higher-end devices mostly support ES 3.0, there are still plenty of devices out there that only support 2.0, and will stay at that level. According to the latest data from Google (http://developer.android.com/about/dashboards/index.html), 18.2% of all devices support ES 3.0 as of July 7, 2014.
As #vadimvolk explained, you will need to check whether the OpenGL driver supports the OES_vertex_array_object extension. More info here:
http://www.khronos.org/registry/gles/extensions/OES/OES_vertex_array_object.txt
If you just stick to OpenGL ES 3.0, you can use these methods after checking that you've got an OpenGL ES 3.0 context. On Android, you can mix calls to GLES20 and GLES30 because these APIs are backwards-compatible.
All you need to do is create an OpenGL ES 2.0 context and check whether the returned context version is 3.0 by reading the GL_VERSION string. If it is 3.0, you can mix both GLES20 and GLES30 functions. Additional info: https://plus.google.com/u/0/+RomainGuy/posts/iJmTjpUfR5E
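A minimal sketch of that check, run on the GL thread once the ES 2.0 context is current:

import android.opengl.GLES20;

public final class ContextVersion {
    public static boolean isEs3Capable() {
        // Returns something like "OpenGL ES 3.0 ..." on capable devices/drivers.
        String version = GLES20.glGetString(GLES20.GL_VERSION);
        return version != null && version.startsWith("OpenGL ES 3.");
    }
}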
The functions are the same. In GLES20 they exist only on some devices, as non-mandatory extensions.
In GLES30 they are mandatory.
If you use them from GLES30, your application will only work on devices that support GLES30 (roughly, devices made for Android 4.3 or later).

OpenGL ES 2.0 vs OpenGL 3 - Similarities and Differences

From what I've read, it appears that OpenGL ES 2.0 isn't anything like OpenGL 2.1, which is what I assumed from before.
What I'm curious to know is whether or not OpenGL 3 is comparable to OpenGL ES 2.0. In other words, given that I'm about to make a game engine for both desktop and Android, are there any differences I should be aware of in particular regarding OpenGL 3.x+ and OpenGL ES 2.0?
This can also include OpenGL 4.x versions as well.
For example, if I start reading this book, am I wasting my time if I plan to port the engine to Android (using NDK of course ;) )?
From what I've read, it appears that OpenGL ES 2.0 isn't anything like OpenGL 2.1, which is what I assumed from before.
Define "isn't anything like" it. Desktop GL 2.1 has a bunch of functions that ES 2.0 doesn't have. But there is a mostly common subset of the two that would work on both (though you'll have to fudge things for texture image loading, because there are some significant differences there).
Desktop GL 3.x provides a lot of functionality that unextended ES 2.0 simply does not. Framebuffer objects are core in 3.x, whereas they're extensions in 2.0 (and even then, you only get one destination image without another extension). There's transform feedback, integer textures, uniform buffer objects, and geometry shaders. These are all specific hardware features that either aren't available in ES 2.0, or are only available via extensions. Some of which may be platform-specific.
But there are also some good API convenience features available on desktop GL 3.x. Explicit attribute locations (layout(location=#)), VAOs, etc.
For example, if I start reading this book, am I wasting my time if I plan to port the engine to Android (using NDK of course ;) )?
It rather depends on how much work you intend to do and what you're prepared to do to make it work. At the very least, you should read up on what OpenGL ES 2.0 does, so that you can know how it differs from desktop GL.
It's easy to avoid the actual hardware features. Rendering to texture (or to multiple textures) is something that is called for by your algorithm. As is transform feedback, geometry shaders, etc. So how much you need it depends on what you're trying to do, and there may be alternatives depending on the algorithm.
The thing you're more likely to get caught on are the convenience features of desktop GL 3.x. For example:
layout(location = 0) in vec4 position;
This is not possible in ES 2.0. A similar definition would be:
attribute vec4 position;
That would work in ES 2.0, but it would not cause the position attribute to be associated with the attribute index 0. That has to be done via code, using glBindAttribLocation before the program is linked. Desktop GL also allows this, but the book you linked to doesn't do it. For obvious reasons (it's a 3.3-based book, not one trying to maintain compatibility with older GL versions).
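For example, with the Android GLES20 bindings it's a one-liner before linking (shader compilation and attachment omitted):

import android.opengl.GLES20;

public final class AttribBinding {
    public static void bindPositionToSlotZero(int program) {
        // Must happen before glLinkProgram for the binding to take effect.
        GLES20.glBindAttribLocation(program, 0, "position");
        GLES20.glLinkProgram(program);
    }
}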
Uniform buffers are another. The book makes liberal use of them, particularly for shared perspective matrices. It's a simple and effective technique for that. But ES 2.0 doesn't have that feature; it only has per-program uniforms.
Again, you can code to the common subset if you like. That is, you can deliberately forgo using explicit attribute locations, uniform buffers, vertex array objects and the like. But that book isn't exactly going to help you do it either.
Will it be a waste of your time? Well, that book isn't for teaching you the OpenGL 3.3 API (it does do that, but that's not the point). The book teaches you graphics programming; it just so happens to use the 3.3 API. The skills you learn there (except those that are hardware based) transfer to any API or system you're using that involves shaders.
Put it this way: if you don't know graphics programming very much, it doesn't matter what API you use to learn. Once you've mastered the concepts, you can read the various documentation and understand how to apply those concepts to any new API easily enough.
OpenGL ES 2.0 (and 3.0) is mostly a subset of Desktop OpenGL.
The biggest difference is that there is no legacy fixed-function pipeline in ES. What's the fixed-function pipeline? Anything having to do with glVertex, glColor, glNormal, glLight, glPushMatrix, glPopMatrix, glMatrixMode, etc., and, in GLSL, any of the variables that access the fixed-function data, like gl_Vertex, gl_Normal, gl_Color, gl_MultiTexCoord, gl_FogCoord, gl_ModelViewMatrix and the various other matrices from the fixed-function pipeline.
If you use any of those features, you'll have some work cut out for you. OpenGL ES 2.0 and 3.0 are just plain shaders. No "3D" is provided for you. You're required to write all the projection, lighting, texture referencing, etc. yourself.
If you're already doing that (which most modern games probably are), you might not have too much work. If, on the other hand, you've been using those old deprecated OpenGL features, which in my experience is still very, very common (most tutorials still use that stuff), then you've got a bit of work cut out for you as you try to reproduce those features on your own.
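For a concrete idea of what "doing it yourself" looks like on Android, here's a rough sketch that builds the matrices with android.opengl.Matrix and hands them to a shader uniform (uMvpMatrix is just an illustrative name):

import android.opengl.GLES20;
import android.opengl.Matrix;

public final class MvpUpload {
    private final float[] proj = new float[16];
    private final float[] view = new float[16];
    private final float[] mvp = new float[16];

    public void upload(int program, float aspect) {
        Matrix.perspectiveM(proj, 0, 45f, aspect, 0.1f, 100f);          // replaces glMatrixMode/gluPerspective
        Matrix.setLookAtM(view, 0, 0f, 0f, 5f, 0f, 0f, 0f, 0f, 1f, 0f); // replaces gluLookAt
        Matrix.multiplyMM(mvp, 0, proj, 0, view, 0);
        int loc = GLES20.glGetUniformLocation(program, "uMvpMatrix");
        GLES20.glUniformMatrix4fv(loc, 1, false, mvp, 0);
    }
}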
There is an open-source library, Regal, which I think was started by NVIDIA, that is supposed to reproduce that stuff. Be aware that the whole fixed-function system was fairly inefficient, which is one of the reasons it was deprecated, but it might be a way to get things working quickly.
