OpenGL debugging - Android

When I write OpenGL (1.0) programs for Android, I find it difficult to debug them. OpenGL is essentially a fixed pipeline of several steps that process vertex coordinates. Is there any way to peek inside and see the results of the consecutive steps of the pipeline?
Added: Mad Scientist, thanks for your advice. However, I tried to use the Tracer for OpenGL ES (following the instructions at http://developer.android.com/tools/help/gltracer.html ) and I'm still not sure how to find the information I need. I can see a slider to choose a frame, and then a list of the functions called. When I choose one of the functions, I can see the GL state. But when I look (inside this GL state) at context 0 -> vertex array data -> generic vertex attributes, all the coordinates I can see are zeroes. Is that normal? My main hope is that in those situations where I can see nothing but a black screen, I would be able to see the vertices' coordinates before and after multiplication by the matrices, and that this would help me find out why they are invisible.

Use the OpenGL Tracer for Android; it is part of the Android SDK. Just start the Monitor program and trace your app.
I found the Tracer a bit temperamental and it does not work with my Nexus 10, but if it works it provides a lot of information.
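If the Tracer refuses to run on your device, a complementary low-tech check is to run the same matrix multiplication on the CPU and log the result, which answers the "where did my vertex end up" question directly. A minimal sketch, assuming the MVP matrix is built with android.opengl.Matrix (the class and method names here are made up for illustration):

    import android.opengl.Matrix;
    import android.util.Log;

    final class VertexCheck {
        private static final String TAG = "VertexCheck";

        // Multiplies one vertex by the same MVP matrix the shader uses and logs the
        // clip-space and NDC results, so you can see why it never reaches the screen.
        static void logTransformed(float[] mvp, float x, float y, float z) {
            float[] in = { x, y, z, 1f };
            float[] out = new float[4];
            Matrix.multiplyMV(out, 0, mvp, 0, in, 0);
            // After the perspective divide, the visible range is -1..1 on every axis.
            Log.d(TAG, "clip = (" + out[0] + ", " + out[1] + ", " + out[2] + ", w=" + out[3] + ")"
                    + "  ndc = (" + out[0] / out[3] + ", " + out[1] / out[3] + ", " + out[2] / out[3] + ")");
        }
    }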

Related

Is it possible to debug shaders in Android OpenGL ES 2?

Is there a way to debug the shaders (fragment and vertex) in an Android application using OpenGL ES 2?
Since we only pass a String containing the code and a bunch of variables to be replaced with handles, it is very tedious to figure out the proper changes that need to be made.
Is it possible to write to the Android log, as with Log.d()?
Is it possible to use breakpoints and to inspect the current values in the shader calculations?
I am simply not used to writing code with a pen anymore, and that is what it feels like to write code inside the shader source string.
This is an old question but since it appears first in searches and the old answer can be expanded upon, I'm leaving an alternative answer:
While printing or stepping through the code like we do in Java or Kotlin is not possible, this doesn't mean shaders cannot be debugged at all. There used to be a tool in the now-deprecated Android Monitor that let you see a trace of your GPU execution frame by frame, including inspecting calls and geometry.
Right now the official GPU debugger is the Android GPU Inspector, which has some useful performance metrics and will include debugging frame by frame in a future update.
If the Android GPU Inspector doesn't have what you need, you can go with vendor-specific debuggers depending on your device (Mali Graphics Debugger, Snapdragon Debugger, etc.)
No. Remember that the GPU is going to execute every program millions of times (once per vertex, and once per fragment), often with hundreds of threads running concurrently, so any concept of "connect a debugger" is pretty much impossible.
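What you can do instead is route the value you want to inspect into the output colour and judge it by eye. A minimal sketch of that trick, e.g. as a constant in your renderer class; vNormal is a hypothetical varying, substitute whatever value you are chasing:

    // Debug fragment shader: visualise an intermediate value instead of the real shading.
    private static final String DEBUG_FRAGMENT_SHADER =
            "precision mediump float;\n" +
            "varying vec3 vNormal;\n" +                              // assumed varying from the vertex shader
            "void main() {\n" +
            "    gl_FragColor = vec4(vNormal * 0.5 + 0.5, 1.0);\n" + // map [-1,1] to [0,1] so it is displayable
            "}\n";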

Object selection in OpenGL ES

I'm new to 3D programming and have been playing around with OpenGL ES for Android for a little while now. I've seen some options for this question, namely ray tracing/casting and object picking, and something about using pixels to select 3D objects. I'm trying to make something like a paint program with OpenGL ES for Android, where I can select a line from a cube and delete it, or select objects to delete or modify. Anyway, I'm unsure of where to start learning this; I've tried Google and didn't really find anything helpful. A video tutorial or a website that explains this better, or any help pointing me in the right direction, would be much appreciated. Thank you so much in advance.
Yes, I know this is a possible duplicate question.
I'm an iOS dev myself, but I recently implemented ray casting for my game, so I'll try to answer this in a platform agnostic way.
There are two steps to the ray-casting operation: first, you need to get the ray from the user's tap, and second, you need to test the triangles defining your model for intersections. Note that this requires you to still have the triangles in memory or be able to recover them -- you can't just keep them in a VBO on the graphics card.
First, the conversion to world coordinates. Since you are no doubt using a projection matrix to get a 3D perspective for your models, you need to unproject the point to get it into world coordinates. There are many libraries with this already implemented, such as GLU's gluUnProject, which I believe is available on Android. I believe that mathematically this amounts to taking the inverse of all the transformations which are currently acting on your models. Regardless, there are many implementations publicly available online that you can copy from.
At this point, you are going to need a Z coordinate for the point you are trying to unproject. You actually want to unproject twice, once with a Z coordinate of 0 and once with a Z coordinate of 1. The vector which results from the Z coordinate of 0 is the origin of the ray, and by subtracting this vector from the Z-coordinate-of-1 vector you get the ray's direction. Now you are ready to test your model's polygons for intersections.
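To make the two unprojections concrete, here is a rough Android-flavoured sketch that does the same work gluUnProject does internally, assuming you still have the model-view and projection matrices you uploaded as float[16] arrays (the class and method names are made up for illustration):

    import android.opengl.Matrix;

    final class TouchRay {

        // Returns the ray origin in world space (indices 0..2) and its direction (indices 3..5).
        static float[] rayFromTouch(float touchX, float touchY, int viewW, int viewH,
                                    float[] modelView, float[] projection) {
            float[] near = unproject(touchX, touchY, 0f, viewW, viewH, modelView, projection);
            float[] far  = unproject(touchX, touchY, 1f, viewW, viewH, modelView, projection);
            return new float[] {
                    near[0], near[1], near[2],
                    far[0] - near[0], far[1] - near[1], far[2] - near[2] };
        }

        private static float[] unproject(float winX, float winY, float winZ, int viewW, int viewH,
                                         float[] modelView, float[] projection) {
            float[] mvp = new float[16];
            float[] inv = new float[16];
            Matrix.multiplyMM(mvp, 0, projection, 0, modelView, 0);
            Matrix.invertM(inv, 0, mvp, 0);

            // Window coordinates -> normalised device coordinates; screen Y grows downwards.
            float[] ndc = {
                    2f * winX / viewW - 1f,
                    1f - 2f * winY / viewH,
                    2f * winZ - 1f,
                    1f };
            float[] world = new float[4];
            Matrix.multiplyMV(world, 0, inv, 0, ndc, 0);
            return new float[] { world[0] / world[3], world[1] / world[3], world[2] / world[3] };
        }
    }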
I have had success with the method presented in this paper (http://www.cs.virginia.edu/~gfx/Courses/2003/ImageSynthesis/papers/Acceleration/Fast%20MinimumStorage%20RayTriangle%20Intersection.pdf) for doing the actual intersection test. The algorithm is implemented in C at the end, but you should be able to convert it to Java with little trouble.
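For reference, a straightforward Java port of the paper's algorithm (the Möller–Trumbore test) could look roughly like this; treat it as a sketch and verify it against the paper before relying on it:

    // Möller–Trumbore ray/triangle intersection.
    // o = ray origin, d = ray direction, v0/v1/v2 = triangle vertices (each a float[3]).
    // Returns the distance t along the ray, or Float.NaN if there is no hit.
    static float intersectTriangle(float[] o, float[] d, float[] v0, float[] v1, float[] v2) {
        final float EPSILON = 1e-6f;
        float[] e1 = sub(v1, v0);
        float[] e2 = sub(v2, v0);
        float[] p  = cross(d, e2);
        float det  = dot(e1, p);
        if (det > -EPSILON && det < EPSILON) return Float.NaN;   // ray is parallel to the triangle
        float invDet = 1.0f / det;
        float[] t = sub(o, v0);
        float u = dot(t, p) * invDet;
        if (u < 0f || u > 1f) return Float.NaN;
        float[] q = cross(t, e1);
        float v = dot(d, q) * invDet;
        if (v < 0f || u + v > 1f) return Float.NaN;
        float dist = dot(e2, q) * invDet;
        return dist >= 0f ? dist : Float.NaN;                    // only count hits in front of the origin
    }

    static float[] sub(float[] a, float[] b) { return new float[] { a[0] - b[0], a[1] - b[1], a[2] - b[2] }; }
    static float dot(float[] a, float[] b) { return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]; }
    static float[] cross(float[] a, float[] b) {
        return new float[] { a[1] * b[2] - a[2] * b[1], a[2] * b[0] - a[0] * b[2], a[0] * b[1] - a[1] * b[0] };
    }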
The selection features of OpenGL are not available with OpenGL ES, so you would have to build that yourself.
I recommend starting with OpenGL ES sample programs that are specific to Android. I think this article will help:
http://software.intel.com/en-us/articles/porting-opengl-games-to-android-on-intel-atom-processors-part-1

Is compiling a shader during rendering a good / valid practice in OpenGL ES?

System: Android 4.03, OpenGL ES 2.0
Problem: When glAttachShader is invoked after the first frame has already been rendered with another program / shader, some devices (Galaxy S3) crash with a "GL_INVALID_VALUE" error (no further details are available in the error stack). Other devices (Asus eee TF101) are perfectly fine with that. The error does not always occur, and sometimes it is a "GL_INVALID_ENUM" instead.
If I force all shaders to be compiled right at the first call to onDrawFrame, it works on all (my) devices.
Questions:
Are there states in which the OpenGL (ES) state machine is incapable of compiling a shader?
Is it possible that bound buffers, textures or enabled attribute arrays interfere with attaching a shader to a program?
If so, what is the ideal state one must ensure before attaching shaders and linking the program?
Is it even valid to compile shaders after other objects have already been rendered with other shaders?
Background: I'm developing an Android library that will allow me to use OpenGL graphics in a more object-oriented way (using objects like "scene", "material", "model", etc.), ultimately to write games more easily. The scenes, models, etc. are created in a thread different from the GL thread. Only when onDrawFrame encounters one of these objects does it do the buffer object binding, texture binding and shader compilation, within the right thread.
I would like to avoid compiling all shaders at the beginning of my code. The shader source is assembled depending on the requirements of the material, the model and the scene (e.g. material: include bump mapping; model: include matrix-palette skinning; scene: include fog). When a model is removed from a scene, I'm going to delete the shader again, and if I add another model, the new shader should be compiled ad hoc.
At this point I'm trying to be as concise as possible without posting code - you can imagine that extracting the relevant parts from this library is difficult.
It is perfectly valid to compile during rendering, although it is discouraged because the driver needs to take CPU resources for that. Some driver states may trigger a shader recompile on the driver side, because some of that state is injected into the shader. It is wise to reorganize your draw calls into chunks sharing the same driver state (preferably grouped by shader program, since switching programs is one of the most expensive operations done by the driver).
TIP: Be sure to "use" all variables, uniforms and attributes declared in your shader; otherwise the Mali driver removes them during compilation, and when you try to get a uniform location, an attribute location and so on, the driver returns GL_INVALID_VALUE.
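A small sketch of how you might guard against that (the uniform name and helper class are made up; on most drivers an optimised-out uniform simply comes back as location -1 from glGetUniformLocation, while some additionally flag errors as described above):

    import android.opengl.GLES20;
    import android.util.Log;

    final class UniformCheck {
        private static final String TAG = "UniformCheck";

        // Looks up a uniform and warns if the compiler stripped it because it was never used.
        static int uniform(int program, String name) {
            int location = GLES20.glGetUniformLocation(program, name);
            if (location == -1) {
                Log.w(TAG, "uniform '" + name + "' not found in program " + program
                        + " (typo, or unused and optimised away)");
            }
            return location;
        }
    }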
Hope that helps.
If you copy the BasicGLSurfaceView sample code that comes with the Android development kit to start your project, then the first call to
checkGlError
is after attaching the vertex shader. However, you might have used an invalid value or enum much earlier, or in a different location in your code, and it will only be picked up by this first check, after glAttachShader.
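To narrow such errors down, you can sprinkle a check after every GL call while hunting the bug; a minimal helper in the spirit of the sample's checkGlError (names here are illustrative):

    import android.opengl.GLES20;
    import android.util.Log;

    final class GlErrors {
        private static final String TAG = "GlErrors";

        // Fails fast as soon as a GL error is pending, naming the call that preceded it.
        static void check(String lastCall) {
            int error;
            while ((error = GLES20.glGetError()) != GLES20.GL_NO_ERROR) {
                String msg = lastCall + ": glError 0x" + Integer.toHexString(error);
                Log.e(TAG, msg);
                throw new RuntimeException(msg);
            }
        }
    }

    // Usage while debugging, directly on the GL thread:
    // GLES20.glAttachShader(program, vertexShader);
    // GlErrors.check("glAttachShader");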
In my case, I had deleted a texture which was still attached as the render target of a framebuffer. My older, slower Android device compiled the shader before the deletion; my newer device somehow managed to call
glFramebufferTexture2D
before compiling the shader. The whole thing somehow comes down to queueEvent and my poor understanding of thread safety.
Thanks for your efforts, TraxNet and Prateek Nina.

Workaround to write depth buffer / depth value in OpenGL ES 2.0

I need to write to the depth buffer on an Android device (OpenGL ES 2.0). Since gl_FragDepth is not writable under OpenGL ES 2.0, I have to find a workaround. I actually want to render spheres via raycasting, similar to this: http://www.sunsetlakesoftware.com/2011/05/08/enhancing-molecules-using-opengl-es-20 .
However, the solution explained on this website (offscreen render pass writing the depth using a special glBlendEquation) is only working on Apple devices, not on Android, because GL_MIN_EXT-blending is not supported.
On my Tegra3 tablet I was able to implement this method: Android GLES20.glBlendEquation not working? (btw, I recommend using linearized depth values, they give better results!)
It works quite well, but of course this is only available on Nvidia GPUs.
In theory, there is the extension GL_EXT_frag_depth (see Can an OpenGL ES fragment shader change the depth value of a fragment?), but it is not available on Android devices either.
Finally, you could of course write the depth buffer for just one sphere (in an offscreen render pass), then write the depth buffer for the next sphere in a second render pass and combine the two in a third render pass. In doing so, you would have 2*n+1 render passes for n spheres - which seems to be quite inefficient!
So since I am running out of ideas, my question is: Can you think of another, generic way/workaround to write the depth buffer on an OpenGL ES 2.0 Android device?
Well, you sure are running out of options here. I don't know of any further workaround, since I don't know OpenGL ES that well.
The only thing that comes into my mind would be combining the brute-force multi-pass approach with some preprocessing:
Sort your spheres into groups in which no two spheres overlap each other. It should be possible to sort all the spheres of a protein into fewer than ten groups. Then render all spheres of each group in one pass. The vertex depth is sufficient here, because the spheres within a group do not overlap. Then you can "depth-blend" the results.
This requires some preprocessing which could be a problem.
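For what it is worth, the grouping step itself is cheap; a rough greedy sketch in plain Java (class and method names made up) that puts each sphere into the first group where it overlaps nothing:

    import java.util.ArrayList;
    import java.util.List;

    final class SphereGrouper {

        // Minimal sphere description: centre (x, y, z) and radius r.
        static final class Sphere {
            final float x, y, z, r;
            Sphere(float x, float y, float z, float r) { this.x = x; this.y = y; this.z = z; this.r = r; }
        }

        // Greedily assigns each sphere to the first group in which it overlaps nothing,
        // opening a new group when no existing one fits.
        static List<List<Sphere>> group(List<Sphere> spheres) {
            List<List<Sphere>> groups = new ArrayList<>();
            for (Sphere s : spheres) {
                List<Sphere> target = null;
                for (List<Sphere> g : groups) {
                    if (!overlapsAny(s, g)) { target = g; break; }
                }
                if (target == null) { target = new ArrayList<>(); groups.add(target); }
                target.add(s);
            }
            return groups;
        }

        private static boolean overlapsAny(Sphere s, List<Sphere> group) {
            for (Sphere o : group) {
                float dx = s.x - o.x, dy = s.y - o.y, dz = s.z - o.z;
                float minDist = s.r + o.r;
                if (dx * dx + dy * dy + dz * dz < minDist * minDist) return true;
            }
            return false;
        }
    }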

Reason there is no simple way to determine the 3D coordinate of a screen coordinate in OpenGL ES 2.0 on Android

My question is why OpenGL and/or Android does not have a way to simply grab the current matrix and store it as a float[]. All the research I have found suggests using a class called MatrixGrabber, which it looks like I have to download and put into my project, to be able to grab the current state of the OpenGL matrix.
My overall goal is to easily determine the OpenGL world location corresponding to a touch event on the screen, from which I can retrieve the X and Y screen coordinates.
The best workaround I have found is Android OpenGL 3D picking, but I wonder why there isn't a way to simply retrieve the matrices you want and then just call
GLU.gluUnProject(...);
My question is why OpenGL and/or Android does not have a way to simply grab the current matrix and store it as a float[].
Because OpenGL ES 2.0 (and core desktop GL 3.1 and above) do not necessarily have a "current matrix." All transforms are done via shader logic, so matrices don't even have to be involved. It could be doing anything in there.
There is no current matrix, so there is nothing to get. And nothing to unproject.
In ES 1 you can grab the current matrix and store it as a float[] using glGetFloatv, with the pname GL_MODELVIEW_MATRIX, GL_PROJECTION_MATRIX or GL_TEXTURE_MATRIX as applicable.
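For illustration, such a query could look roughly like this on Android. Note that querying the matrix stacks needs an ES 1.1 context (the lack of this query on ES 1.0 is why helpers like MatrixGrabber track the matrices in software), and the pname constants are written out numerically here in case your GL binding does not expose them:

    import android.opengl.GLES11;

    final class MatrixQuery {
        // pname values from the OpenGL specification; only queryable on ES 1.1 contexts.
        private static final int GL_MODELVIEW_MATRIX  = 0x0BA6;
        private static final int GL_PROJECTION_MATRIX = 0x0BA7;

        static float[] modelView() {
            float[] m = new float[16];
            GLES11.glGetFloatv(GL_MODELVIEW_MATRIX, m, 0);
            return m;
        }

        static float[] projection() {
            float[] m = new float[16];
            GLES11.glGetFloatv(GL_PROJECTION_MATRIX, m, 0);
            return m;
        }
    }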
GLU is not inherently part of OpenGL ES because ES is intended to be a minimal specification and GLU is sort of an optional extra. But you can grab SGI's reference implementation of GLU and use its gluUnProject quite easily.
EDIT: and to round off the thought, as Nicol points out there's no such thing as the current matrix in ES 2. You supply to your shaders arbitrarily many matrices for arbitrarily many purposes, and since you supplied them in the first place you shouldn't need to ask GL to get them back again.
I just took a look at http://developer.android.com/resources/samples/ApiDemos/src/com/example/android/apis/graphics/spritetext/MatrixGrabber.html which looks like the MatrixGrabber you're referring to and it doesn't look especially complicated — in fact, it's overbuilt. You can just use gl2.getMatrix directly and plug that into gluUnProject.
The probable reason for the design of the MatrixGrabber code is that it caches the values for multiple uses — because the GPU runs asynchronously to the CPU, your code may have to wait for the getMatrix response, so it is more efficient to get it as few times as possible and reuse the data.
Another source of complexity in the general problem is that a touch specifies only two dimensions. A single touch does not indicate any depth, so you have to determine that in some application-specific way. The obvious approach is to read the depth buffer (though some OpenGL implementations don't support that), but that doesn't work if you have, e.g., things that should be "transparent" to touches. An alternative is to construct a ray (such as by unprojecting twice with two different depths) and then do raycasting into your scene.
