I am working with an ARM Mali-G72 on my Android smartphone.
I would like to take the output buffer from OpenCL and render it in OpenGL as a texture.
I have no problem with OpenCL alone, nor with OpenGL alone.
I have no clue how to use both in the same application.
The goal is to take my OpenCL output and send it to OpenGL.
Some piece of code, step by step, would be very nice.
I can use OpenCL 2.0 and OpenGL ES 3.0 on my smartphone.
************** EDIT 30/09/2020 **************
It looks like I need to give more information about my problem.
So my configuration is: I already have a Java OpenGL ES application. I retrieve the camera frame from Camera.onPreviewFrame, then I send it to OpenCL using JNI.
I would like to get the EGL display from the Java OpenGL ES side, send it through JNI, run my OpenCL kernel, and then send the result back to Java OpenGL ES.
I know how to retrieve data from OpenCL, convert it to a Bitmap, and use SurfaceTexture with GL_TEXTURE_EXTERNAL_OES to display it in OpenGL ES.
My problem is how to retrieve the EGL display from Java OpenGL ES and how to send it to C++ — that part I can manage to figure out with JNI. But I do not know how to implement the C++ part using EGL and OpenCL.
The answer from BenMark is interesting concerning the processing, but I am missing some parts. Is it possible with my configuration, using Java OpenGL ES, or do I need to do all the EGL, OpenGL, and OpenCL code in native?
Thanks a lot for helping me understand the problem and trying to find a solution. ;))
I haven't written a code example, but -
Using the EGL API makes interoperability between the GLES and OpenCL APIs easier.
This page provides some tips: https://developer.arm.com/documentation/101574/0400/Using-OpenCL-extensions/Inter-operation-with-EGL/EGL-images
From that page, amongst other things:
You'll want the EGL_KHR_image_base extension to share EGL images.
In OpenCL you'll want cl_khr_egl_image to use the EGL image, and then you must flush in OpenCL with clFinish() or clWaitForEvents() to be sure that the image is ready for use by OpenGL ES.
The start and end of the EGL image accesses by OpenCL applications must be signaled by enqueuing clEnqueueAcquireEGLObjectsKHR and clEnqueueReleaseEGLObjectsKHR commands.
I hope that helps get you going.
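To make the flow concrete, here is a rough C++ sketch of the acquire/run/release sequence those extensions describe. This is an assumption-laden outline, not tested code: it assumes cl_khr_egl_image, EGL_KHR_image_base, and EGL_KHR_gl_texture_2D_image are all available (on many platforms eglCreateImageKHR and friends must be fetched with eglGetProcAddress), the image size is a placeholder, and all error checking is omitted.

```cpp
#include <CL/cl.h>
#include <CL/cl_egl.h>
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GLES2/gl2.h>
#include <cstdint>

// Process an existing GLES texture with an OpenCL kernel via an EGLImage.
// eglDisplay/eglContext/texture come from your existing GLES setup.
void processTexture(cl_context clCtx, cl_command_queue queue, cl_kernel kernel,
                    EGLDisplay eglDisplay, EGLContext eglContext, GLuint texture)
{
    // Wrap the GLES texture in an EGLImage (EGL_KHR_gl_texture_2D_image).
    EGLImageKHR eglImage = eglCreateImageKHR(
        eglDisplay, eglContext, EGL_GL_TEXTURE_2D_KHR,
        (EGLClientBuffer)(uintptr_t)texture, nullptr);

    // Import the EGLImage into OpenCL (cl_khr_egl_image).
    cl_int err;
    cl_mem clImage = clCreateFromEGLImageKHR(clCtx, eglDisplay, eglImage,
                                             CL_MEM_READ_WRITE, nullptr, &err);

    // Signal the start of OpenCL access, run the kernel, then release.
    clEnqueueAcquireEGLObjectsKHR(queue, 1, &clImage, 0, nullptr, nullptr);
    clSetKernelArg(kernel, 0, sizeof(clImage), &clImage);
    size_t gws[2] = {640, 480};  // placeholder image size
    clEnqueueNDRangeKernel(queue, kernel, 2, nullptr, gws, nullptr,
                           0, nullptr, nullptr);
    clEnqueueReleaseEGLObjectsKHR(queue, 1, &clImage, 0, nullptr, nullptr);

    // Flush so the result is visible to OpenGL ES before it samples the texture.
    clFinish(queue);

    clReleaseMemObject(clImage);
    eglDestroyImageKHR(eglDisplay, eglImage);
}
```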
It was a long problem to understand. ;))
So the solutions are:
It is not possible to share a GL context with another thread, so the Java GL side and the C++ OpenCL side cannot share data directly. Depending on the GLES version, there are different possibilities.
GLES 2.0:
You would need to rewrite SurfaceTexture.cpp to access the (EGLImage) surface from C++, and I do not even know if that is possible, because the context cannot cross threads. So forget it for the moment. ;))
But you can still use Camera.onPreviewFrame to get the data, then send it through JNI to C++ OpenCL; that is what I am doing at this time. Then send the OpenCL output back to the display view and catch it using Canvas and GL_TEXTURE_EXTERNAL_OES. It works, but it is heavy. ;)) And you cannot get anything from a GLSL texture back to C++.
GLES 3.1:
Use a compute shader from Java rather than OpenCL. ;))
Have a look at "What is the difference between OpenCL and OpenGL's compute shader?"
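A minimal sketch of that compute-shader route using the Java GLES31 bindings. Everything here is illustrative: it assumes a current EGL context of at least version 3.1, the shader just fills the image with green, and the class/texture names are placeholders for your own pipeline.

```java
import android.opengl.GLES31;

// Dispatch a GLES 3.1 compute shader from Java instead of an OpenCL kernel.
public class ComputePass {
    private static final String SRC =
        "#version 310 es\n" +
        "layout(local_size_x = 8, local_size_y = 8) in;\n" +
        "layout(rgba8, binding = 0) writeonly uniform highp image2D outImage;\n" +
        "void main() {\n" +
        "    ivec2 p = ivec2(gl_GlobalInvocationID.xy);\n" +
        "    imageStore(outImage, p, vec4(0.0, 1.0, 0.0, 1.0));\n" +
        "}\n";

    private final int program;

    public ComputePass() {
        int shader = GLES31.glCreateShader(GLES31.GL_COMPUTE_SHADER);
        GLES31.glShaderSource(shader, SRC);
        GLES31.glCompileShader(shader);
        program = GLES31.glCreateProgram();
        GLES31.glAttachShader(program, shader);
        GLES31.glLinkProgram(program);
    }

    public void run(int outTex, int width, int height) {
        GLES31.glUseProgram(program);
        // Bind the output texture as image unit 0 for imageStore().
        GLES31.glBindImageTexture(0, outTex, 0, false, 0,
                GLES31.GL_WRITE_ONLY, GLES31.GL_RGBA8);
        GLES31.glDispatchCompute(width / 8, height / 8, 1);
        // Make the writes visible before the texture is sampled when rendering.
        GLES31.glMemoryBarrier(GLES31.GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);
    }
}
```

Because the compute shader runs in the same GL context as your rendering, there is no cross-API sharing problem at all, which is the whole appeal of this route.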
I recently switched from using glBufferData to glMapBufferRange which gives me direct access to GPU memory rather than copying the data from CPU to GPU every frame.
This works just fine and in OpenGL ES 3.0 I do the following per frame:
Get a pointer to my GPU buffer memory via glMapBufferRange.
Directly update my buffer using this pointer.
Use glUnmapBuffer to unmap the buffer so that I can render.
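The three steps above can be sketched with the Java GLES30 bindings (the buffer handle, size, and data source are placeholders):

```java
import android.opengl.GLES30;
import java.nio.ByteBuffer;

public class BufferUpdater {
    // Per-frame update of an existing GL buffer via glMapBufferRange (GLES 3.0).
    // vbo is an already-created buffer of `size` bytes; newData is this frame's data.
    public static void updateBuffer(int vbo, int size, byte[] newData) {
        GLES30.glBindBuffer(GLES30.GL_ARRAY_BUFFER, vbo);
        // 1. Get a pointer to the GPU buffer memory.
        ByteBuffer ptr = (ByteBuffer) GLES30.glMapBufferRange(
                GLES30.GL_ARRAY_BUFFER, 0, size,
                GLES30.GL_MAP_WRITE_BIT | GLES30.GL_MAP_INVALIDATE_BUFFER_BIT);
        // 2. Directly update the buffer through this pointer.
        ptr.put(newData, 0, size);
        // 3. Unmap the buffer so it can be used for rendering.
        GLES30.glUnmapBuffer(GLES30.GL_ARRAY_BUFFER);
    }
}
```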
But some Android devices may have at least OpenGL ES 3.1 and, as I understand it, may also support the EXT_buffer_storage extension (please correct me if that's the wrong extension). Using this extension it's possible to set up persistent buffer mappings, via the GL_MAP_PERSISTENT_BIT flag, which do not require mapping/unmapping every frame. But I can't figure out, or find much online about, how to access these features.
How exactly do I invoke glMapBufferRange with GL_MAP_PERSISTENT_BIT set in OpenGL ES 3.1 on Android?
Examining glGetString(GL_EXTENSIONS) does seem to show the extension is present on my device, but I can't seem to find GL_MAP_PERSISTENT_BIT anywhere, e.g. in GLES31 or GLES31Ext, and I'm just not sure how to proceed.
The standard Android Java bindings for OpenGL ES only expose extensions that are guaranteed to be supported by all implementations on Android. If you want to expose less universally available vendor extensions you'll need to roll your own JNI bindings, using eglGetProcAddress() from native code compiled with the NDK to fetch the entry points.
For this one you want the extension entry point glBufferStorageEXT().
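A rough native-side sketch of what that looks like. This is untested and makes assumptions: the enum values are copied from the EXT_buffer_storage spec because NDK headers may not define them, and error handling is minimal.

```cpp
#include <EGL/egl.h>
#include <GLES3/gl31.h>

// Values from the EXT_buffer_storage spec, in case the headers lack them.
#ifndef GL_MAP_PERSISTENT_BIT_EXT
#define GL_MAP_PERSISTENT_BIT_EXT 0x0040
#define GL_MAP_COHERENT_BIT_EXT   0x0080
#endif

typedef void (*PFNGLBUFFERSTORAGEEXT)(GLenum target, GLsizeiptr size,
                                      const void* data, GLbitfield flags);

// Create an immutable buffer and map it persistently. Returns the mapped
// pointer (kept for the buffer's lifetime), or nullptr if unsupported.
void* createPersistentBuffer(GLuint* outVbo, GLsizeiptr size)
{
    // Fetch the extension entry point at runtime via EGL.
    PFNGLBUFFERSTORAGEEXT glBufferStorageEXT =
        (PFNGLBUFFERSTORAGEEXT)eglGetProcAddress("glBufferStorageEXT");
    if (!glBufferStorageEXT) return nullptr;  // extension not available

    glGenBuffers(1, outVbo);
    glBindBuffer(GL_ARRAY_BUFFER, *outVbo);

    // Immutable storage that may stay mapped while GL uses it.
    GLbitfield flags = GL_MAP_WRITE_BIT |
                       GL_MAP_PERSISTENT_BIT_EXT |
                       GL_MAP_COHERENT_BIT_EXT;
    glBufferStorageEXT(GL_ARRAY_BUFFER, size, nullptr, flags);

    // Map once; keep writing through this pointer every frame, no unmapping.
    return glMapBufferRange(GL_ARRAY_BUFFER, 0, size, flags);
}
```

You would then expose the mapped pointer (or an update function wrapping it) to Java through your own JNI method.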
As we know, while a program runs it calls functions from the OpenGL ES library (and of course libEGL). I would like to understand this in more detail: which library calls which other library, and so on down to the GPU, and how SurfaceFlinger and Gralloc interact with all of this.
I know there are a lot of pictures depicting the approximate scheme, but there is no clear call tree. I'll be glad of any answer. Maybe there are some useful resources that I could not find.
Your question is too broad. Still, I will try to make some of it clear.
An application either draws on a Canvas or is an OpenGL ES based app. A Canvas-based app may or may not use hardware rendering. With hardware rendering, and in OpenGL apps, the final image is written to a buffer called a "Surface" using the GPU. The same kind of buffer is written by the CPU in the case of Canvas with software rendering.
There can be multiple such buffers, which are sent to SurfaceFlinger for compositing. SurfaceFlinger, in turn, may or may not use OpenGL (or the GPU) for compositing. SurfaceFlinger can also offload this compositing task to the Hardware Composer, depending on various conditions.
Gralloc is used to allocate contiguous chunks of memory for graphics purposes.
The final composited buffer is then sent to the LCD for display.
Edit
How does OpenGL work?
OpenGL is just a specification. GPU vendors provide an implementation of that specification in their GPU drivers. libGLES has all the function declarations, and it is the graphics driver's job to convert those GL calls into GPU instructions.
If you want an in-depth understanding of SurfaceFlinger and the Hardware Composer, read about the Android Graphics Architecture on the Android source code site.
I found some ways to speed up glReadPixels in OpenGL ES 3.0, but I am not sure whether they work:
Specify the fifth argument of glReadPixels() as GL_BGRA to avoid an unnecessary swizzle.
Use a PBO, as mentioned here.
In order to verify this, I updated to the latest Android SDK and ADT, and tried to use OpenGL ES 3.0. However, I can't find the GL_BGRA definition as I expected, and I don't know how to use glMapBuffer(). Am I missing something?
To summarize,
Is there any other, faster way to access the framebuffer than using glReadPixels()?
How to use GL_BGRA and PBO by OpenGL ES 3.0 on Android?
If anyone knows, please point it out. Some code snippets would be better.
Thanks in advance.
Not really. I don't know whether you would get performance gains by using BGRA format, but it's worth a try. In ES 2.0, GL_BGRA is only available through extension: EXT_read_format_bgra, so you would need to use the enum GL_BGRA_EXT, and apparently it's the same in OpenGL ES 3.0 too.
Yes, glMapBufferRange is the way to go on OpenGL ES 3.0. It looks like your call is correct, as per your comment. According to spec and man page, glMapBufferRange should not generate GL_INVALID_ENUM errors, so I think something else is wrong here. Make sure the error is not generated by an earlier call.
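A sketch of the PBO path in C++, assuming an ES 3.0 context and RGBA8 framebuffer; for real gains, call finishReadback() a frame or two after startReadback() so the GPU copy can complete without stalling.

```cpp
#include <GLES3/gl3.h>
#include <cstring>

// Allocate a PBO big enough for one RGBA frame.
GLuint createReadbackPbo(int width, int height) {
    GLuint pbo;
    glGenBuffers(1, &pbo);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
    glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 4, nullptr,
                 GL_STREAM_READ);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
    return pbo;
}

// With a PBO bound to GL_PIXEL_PACK_BUFFER, the last argument of
// glReadPixels is an offset into the PBO and the call returns immediately.
void startReadback(GLuint pbo, int width, int height) {
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
}

// Later (ideally the next frame), map the PBO and copy the pixels out.
void finishReadback(GLuint pbo, int width, int height, void* dst) {
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
    void* src = glMapBufferRange(GL_PIXEL_PACK_BUFFER, 0,
                                 width * height * 4, GL_MAP_READ_BIT);
    if (src) {
        memcpy(dst, src, width * height * 4);
        glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
}
```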
This question is specifically about OpenGL 2.0 ES on Android, but if there are more general answers based on the OpenGL specs, I'd be interested in those too.
Is there a way to pass a message (string) out of a GL 2.0 ES shader to the application code (either Java or native)? E.g.
void main()
{
...
if (somecondition)
{
logMessage("Things are messed up man");
}
}
If not, why would a programming environment be defined (OpenGL ES 2.0 shader language in this case) without this type of facility? I don't know anything about hardware but surely this would not be that hard to implement in the GPU. If there are performance issues, it could always be optionally #ifdef'd out of the shader code...
No, there are no logging facilities like you're asking for in any of the OpenGL shading languages (including OpenGL ES). Debug contexts on desktop OpenGL (and in ES when they get there) may provide some information, but there's nothing like what you're asking for.
The most common technique (and this is usually done in the context of fragment shaders) is to set the output of the shader to an error color, or some other signal that indicates the shader failed some condition you were banking on.
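Sketched as a fragment shader, that technique looks something like this (the invariant being checked and the magenta error color are placeholder choices):

```glsl
precision mediump float;
uniform sampler2D u_texture;
varying vec2 v_uv;

void main()
{
    // "Logging" by color: if the condition you were banking on fails,
    // output a loud error color instead of the normal result, so the
    // failure is visible on screen.
    if (v_uv.x < 0.0 || v_uv.x > 1.0 || v_uv.y < 0.0 || v_uv.y > 1.0)
    {
        gl_FragColor = vec4(1.0, 0.0, 1.0, 1.0);  // magenta = bad UVs
        return;
    }
    gl_FragColor = texture2D(u_texture, v_uv);
}
```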
Given many instances of a particular shader execute simultaneously (think of how many fragment shader threads are executed when filling in a full-screen quad), tracking the state of each thread would require considerable amounts of state to do the job right.
You say:
I don't know anything about hardware but surely this would not be that hard to implement in the GPU.
No; modern GPUs are complex machines with complex designs. While doing this is possible, it's not worth the additional complexity and hardware validation. That's the magic of APIs: something like this looks conceptually simple, but it's not.
radical7's answer is correct.
If you want to debug shader code, GPU vendors provide compilers and simulators. Or you could try the glm library and prototype the shader in C++, with all the tools you need to print messages.
From what I've read, it appears that OpenGL ES 2.0 isn't anything like OpenGL 2.1, which is what I assumed from before.
What I'm curious to know is whether or not OpenGL 3 is comparable to OpenGL ES 2.0. In other words, given that I'm about to make a game engine for both desktop and Android, are there any differences I should be aware of in particular regarding OpenGL 3.x+ and OpenGL ES 2.0?
This can also include OpenGL 4.x versions as well.
For example, if I start reading this book, am I wasting my time if I plan to port the engine to Android (using NDK of course ;) )?
From what I've read, it appears that OpenGL ES 2.0 isn't anything like OpenGL 2.1, which is what I assumed from before.
Define "isn't anything like". Desktop GL 2.1 has a bunch of functions that ES 2.0 doesn't have. But there is a mostly common subset of the two that would work on both (though you'll have to fudge things for texture image loading, because there are some significant differences there).
Desktop GL 3.x provides a lot of functionality that unextended ES 2.0 simply does not. Framebuffer objects are core in 3.x, whereas they're extensions in 2.0 (and even then, you only get one destination image without another extension). There's transform feedback, integer textures, uniform buffer objects, and geometry shaders. These are all specific hardware features that either aren't available in ES 2.0, or are only available via extensions. Some of which may be platform-specific.
But there are also some good API convenience features available on desktop GL 3.x. Explicit attribute locations (layout(location=#)), VAOs, etc.
For example, if I start reading this book, am I wasting my time if I plan to port the engine to Android (using NDK of course ;) )?
It rather depends on how much work you intend to do and what you're prepared to do to make it work. At the very least, you should read up on what OpenGL ES 2.0 does, so that you can know how it differs from desktop GL.
It's easy to avoid the actual hardware features. Rendering to texture (or to multiple textures) is something that is called for by your algorithm. As is transform feedback, geometry shaders, etc. So how much you need it depends on what you're trying to do, and there may be alternatives depending on the algorithm.
The thing you're more likely to get caught on are the convenience features of desktop GL 3.x. For example:
layout(location = 0) in vec4 position;
This is not possible in ES 2.0. A similar definition would be:
attribute vec4 position;
That would work in ES 2.0, but it would not cause the position attribute to be associated with the attribute index 0. That has to be done via code, using glBindAttribLocation before the program is linked. Desktop GL also allows this, but the book you linked to doesn't do it. For obvious reasons (it's a 3.3-based book, not one trying to maintain compatibility with older GL versions).
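That code-side binding looks roughly like this (assuming vertexShader and fragmentShader are already-compiled shader objects, and "position" matches the attribute name in the vertex shader source):

```c
/* The ES 2.0 equivalent of layout(location = 0) in vec4 position; */
GLuint linkProgram(GLuint vertexShader, GLuint fragmentShader)
{
    GLuint program = glCreateProgram();
    glAttachShader(program, vertexShader);
    glAttachShader(program, fragmentShader);

    /* Must be called BEFORE glLinkProgram for it to take effect. */
    glBindAttribLocation(program, 0, "position");

    glLinkProgram(program);
    return program;
}
```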
Uniform buffers are another. The book makes liberal use of them, particularly for shared perspective matrices. It's a simple and effective technique for that. But ES 2.0 doesn't have that feature; it only has per-program uniforms.
Again, you can code to the common subset if you like. That is, you can deliberately forgo using explicit attribute locations, uniform buffers, vertex array objects and the like. But that book isn't exactly going to help you do it either.
Will it be a waste of your time? Well, that book isn't for teaching you the OpenGL 3.3 API (it does do that, but that's not the point). The book teaches you graphics programming; it just so happens to use the 3.3 API. The skills you learn there (except those that are hardware based) transfer to any API or system you're using that involves shaders.
Put it this way: if you don't know graphics programming very much, it doesn't matter what API you use to learn. Once you've mastered the concepts, you can read the various documentation and understand how to apply those concepts to any new API easily enough.
OpenGL ES 2.0 (and 3.0) is mostly a subset of Desktop OpenGL.
The biggest difference is there is no legacy fixed function pipeline in ES. What's the fixed function pipeline? Anything having to do with glVertex, glColor, glNormal, glLight, glPushMatrix, glPopMatrix, glMatrixMode, etc... in GLSL using any of the variables that access the fixed function data like gl_Vertex, gl_Normal, gl_Color, gl_MultiTexCoord, gl_FogCoord, gl_ModelViewMatrix and the various other matrices from the fixed function pipeline.
If you use any of those features, you'll have some work cut out for you. OpenGL ES 2.0 and 3.0 are just plain shaders. No "3D" is provided for you. You're required to write all the projection, lighting, texture referencing, etc. yourself.
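For example, where old desktop GL gave you gl_ModelViewProjectionMatrix and gl_Vertex for free, in ES 2.0 you declare and supply everything yourself (the u_/a_ names here are arbitrary conventions, not built-ins):

```glsl
// ES 2.0 vertex shader: no built-in matrices or attributes exist.
uniform mat4 u_mvpMatrix;   // you compute and upload this matrix yourself
attribute vec4 a_position;  // you bind and fill this attribute yourself

void main()
{
    gl_Position = u_mvpMatrix * a_position;
}
```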
If you're already doing that (which most modern games probably are), you might not have too much work. If, on the other hand, you've been using those old deprecated OpenGL features — which in my experience is still very common, as most tutorials still use that stuff — then you've got a bit of work cut out for you as you try to reproduce those features on your own.
There is an open source library, Regal, which I think was started by NVIDIA. It's supposed to reproduce that stuff. Be aware that the whole fixed-function system was fairly inefficient, which is one of the reasons it was deprecated, but it might be a way to get things working quickly.