Android OpenGL ES Bump Mapping

So, I have scoured the net, looking for information on how one would do this. And so far, all I've come up with is... nothing.
Anyone got a better starting point than http://www.paulsprojects.net/tutorials/simplebump/simplebump.html ?

If you are targeting OpenGL ES 2.0, it is in fact quite easy: you just need a normal map and a shader that handles the lighting equation per pixel. I found this for you: http://www.learnopengles.com/android-lesson-four-introducing-basic-texturing/
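For a rough idea of what that shader involves, here is a minimal sketch of a GLSL ES 2.0 program pair written as Android Java string constants. All the names (uNormalMap, uLightDir, aTexCoord, etc.) are made up for illustration, and the light direction is assumed to already be expressed in the same space as the normals stored in the normal map (typically tangent space):

    // Minimal GLSL ES 2.0 shaders for per-pixel diffuse lighting from a normal map.
    // Hypothetical names; uLightDir must be in the same space as the sampled normals.
    private static final String BUMP_VERTEX_SHADER =
            "attribute vec4 aPosition;\n" +
            "attribute vec2 aTexCoord;\n" +
            "uniform mat4 uMvpMatrix;\n" +
            "varying vec2 vTexCoord;\n" +
            "void main() {\n" +
            "    vTexCoord = aTexCoord;\n" +
            "    gl_Position = uMvpMatrix * aPosition;\n" +
            "}\n";

    private static final String BUMP_FRAGMENT_SHADER =
            "precision mediump float;\n" +
            "uniform sampler2D uColorMap;\n" +
            "uniform sampler2D uNormalMap;\n" +
            "uniform vec3 uLightDir;\n" +
            "varying vec2 vTexCoord;\n" +
            "void main() {\n" +
            "    // Unpack the normal from [0,1] to [-1,1] and light it per pixel.\n" +
            "    vec3 n = normalize(texture2D(uNormalMap, vTexCoord).xyz * 2.0 - 1.0);\n" +
            "    float diff = max(dot(n, normalize(uLightDir)), 0.0);\n" +
            "    gl_FragColor = vec4(texture2D(uColorMap, vTexCoord).rgb * diff, 1.0);\n" +
            "}\n";

The linked lesson covers the basic texturing half; the extra work for bump mapping is mostly sampling the second texture and doing the lighting dot product per fragment.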

Related

Object selection in OpenGL ES

I'm new to 3D programming and have been playing around with OpenGL ES for Android for a little while now. I've seen a few options mentioned for this kind of question, namely ray casting/tracing, object picking, and something about using pixels to select 3D objects. I'm trying to make something like a paint program with OpenGL ES for Android, where I can select a line from a cube and delete it, or select objects to be deleted or modified. Anyway, I'm unsure of where to start learning this; I've tried Google and didn't really find anything helpful. A video tutorial, a website that explains this better, or anything that points me in the right direction would be greatly appreciated. Thank you so much in advance.
Yes, I know this is a possible duplicate question.
I'm an iOS dev myself, but I recently implemented ray casting for my game, so I'll try to answer this in a platform agnostic way.
There are two steps to the ray-casting operation: first, you need to get the ray from the user's tap, and second, you need to test the triangles defining your model for intersections. Note that this requires you to still have them in memory or be able to recover them -- you can't just keep them in a VBO on the graphics card.
First, the conversion to world coordinates. Since you are no doubt using a projection matrix to get a 3D perspective for your models, you need to unproject the point to get it in world coordinates. There are many libraries with this already implemented, such as GLU's gluUnProject, which I believe is available on Android. I believe that mathematically this amounts to applying the inverse of all the transformations currently acting on your models. Regardless, there are many implementations publicly available online you can copy from.
At this point, you are going to need a Z coordinate for the point you are trying to unproject. You actually want to unproject twice: once with a Z coordinate of 0 and once with a Z coordinate of 1. The vector that results from the Z coordinate of 0 is the origin of the ray, and by subtracting it from the Z-coordinate-of-1 vector you get the ray's direction. Now you are ready to test your model's polygons for intersections.
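On Android this can be done with android.opengl.GLU.gluUnProject, provided you have kept copies of the model-view and projection matrices you render with. A rough sketch (touchX/touchY, modelView, projection and viewport are names you would define yourself; note that the window Y axis has to be flipped, and that Android's gluUnProject leaves the result in homogeneous coordinates, so you divide by w yourself):

    // Build a picking ray from a touch point. modelView and projection are the
    // float[16] matrices used for rendering; viewport is {0, 0, width, height}.
    float[] near = new float[4];
    float[] far = new float[4];
    float winX = touchX;
    float winY = viewport[3] - touchY;   // flip Y: screen coords vs. GL window coords

    GLU.gluUnProject(winX, winY, 0f, modelView, 0, projection, 0, viewport, 0, near, 0);
    GLU.gluUnProject(winX, winY, 1f, modelView, 0, projection, 0, viewport, 0, far, 0);

    // Android's gluUnProject does not do the perspective divide for you.
    for (int i = 0; i < 3; i++) {
        near[i] /= near[3];
        far[i] /= far[3];
    }

    float[] rayOrigin = { near[0], near[1], near[2] };
    float[] rayDirection = { far[0] - near[0], far[1] - near[1], far[2] - near[2] };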
I have had success with the method presented in this paper (http://www.cs.virginia.edu/~gfx/Courses/2003/ImageSynthesis/papers/Acceleration/Fast%20MinimumStorage%20RayTriangle%20Intersection.pdf) for doing the actual intersection test. The algorithm is implemented in C at the end, but you should be able to convert it to Java with little trouble.
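For reference, a straightforward Java port of that algorithm (the Möller-Trumbore test) looks roughly like this; it returns the distance t along the ray, or a negative value on a miss:

    // Möller-Trumbore ray/triangle intersection.
    // orig + t * dir hits the triangle (v0, v1, v2) if the returned t is >= 0.
    static float intersectTriangle(float[] orig, float[] dir,
                                   float[] v0, float[] v1, float[] v2) {
        final float EPSILON = 1e-6f;
        float[] edge1 = sub(v1, v0);
        float[] edge2 = sub(v2, v0);
        float[] pvec = cross(dir, edge2);
        float det = dot(edge1, pvec);
        if (Math.abs(det) < EPSILON) return -1f;   // ray parallel to the triangle
        float invDet = 1f / det;
        float[] tvec = sub(orig, v0);
        float u = dot(tvec, pvec) * invDet;
        if (u < 0f || u > 1f) return -1f;
        float[] qvec = cross(tvec, edge1);
        float v = dot(dir, qvec) * invDet;
        if (v < 0f || u + v > 1f) return -1f;
        return dot(edge2, qvec) * invDet;          // distance along the ray
    }

    static float[] sub(float[] a, float[] b) {
        return new float[] { a[0] - b[0], a[1] - b[1], a[2] - b[2] };
    }
    static float dot(float[] a, float[] b) {
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    }
    static float[] cross(float[] a, float[] b) {
        return new float[] { a[1] * b[2] - a[2] * b[1],
                             a[2] * b[0] - a[0] * b[2],
                             a[0] * b[1] - a[1] * b[0] };
    }

Run it against every triangle of every pickable object and keep the hit with the smallest positive t.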
The selection features of OpenGL are not available with OpenGL ES, so you would have to build that yourself.
I recommend starting with OpenGL ES sample programs that are specific to Android. I think this article will help:
http://software.intel.com/en-us/articles/porting-opengl-games-to-android-on-intel-atom-processors-part-1

iOS/Android Going back to OpenGL 1.1 instead of using 2.0

So this question has been asked a few times, but the examples I have seen were from 2010/2011.
Two years ago I wrote a nice little OpenGL 1.1 3D framework for iOS, and I got to understand most of what I wrote :) (I did not use GLKit.)
This year I thought I would try to write an OpenGL 2.0 version for Android and iOS (GLKit): simple 3D movement, cubes, object picking, collision. The aim is to build a simple 3D city on both platforms.
OpenGL 2.0 seems like bloody hard work: lighting, shaders, etc. I've got the basic movement down and can make objects, but honestly I don't know how half of it works.
My question is: in 2013, should I keep at OpenGL 2.0, since no one uses 1.x any more? Or should I cut my losses and go back to 1.x, which I understood?
I don't need any advanced effects that I know of.
Take the time to move to 2.0. It's not just the "effects" that are the benefit of 2.0.
You will get so much more out of it than you are currently getting with 1.1. The performance enhancements from shaders alone are worth the jump. Also, consider that the hardware is moving towards 2.0 as well.
In addition, you'll reap the rewards from having a better understanding of lighting, materials, texture mapping, normals, etc.
So, in short, make the jump. It's worth the time and effort.

Workaround to write depth buffer / depth value in OpenGL ES 2.0

I need to write to the depth buffer on an android device (OpenGL ES 2.0). Since gl_FragDepth is not writable under OGL ES 2.0, I have to find a workaround. I actually want to render spheres via raycasting, similar to this: http://www.sunsetlakesoftware.com/2011/05/08/enhancing-molecules-using-opengl-es-20 .
However, the solution explained on this website (an offscreen render pass writing the depth using a special glBlendEquation) only works on Apple devices, not on Android, because GL_MIN_EXT blending is not supported.
On my Tegra 3 tablet I was able to implement this method: Android GLES20.glBlendEquation not working? (By the way, I recommend using linearized depth values; they give better results!)
It works quite well, but of course this is only available on Nvidia GPUs.
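To make the "linearized depth" remark concrete: instead of storing a non-linear, gl_FragCoord.z-style value in the offscreen target, you can output a view-space depth rescaled to [0,1]. A sketch of such a fragment shader, where uNear, uFar and vViewPos are names you would define yourself:

    // Fragment shader (as an Android Java string) writing a linearized depth value
    // into the offscreen colour target that is later combined via min-blending.
    private static final String LINEAR_DEPTH_FRAGMENT_SHADER =
            "precision highp float;\n" +
            "uniform float uNear;\n" +
            "uniform float uFar;\n" +
            "varying vec3 vViewPos;\n" +   // view-space position of the fragment
            "void main() {\n" +
            "    float linearDepth = (-vViewPos.z - uNear) / (uFar - uNear);\n" +
            "    gl_FragColor = vec4(vec3(linearDepth), 1.0);\n" +
            "}\n";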
In theory, there is the GL_EXT_frag_depth extension (see Can an OpenGL ES fragment shader change the depth value of a fragment?), but it is not available on Android devices either.
Finally, you could of course write the depth buffer for just one sphere (in an offscreen render pass), then write the depth buffer for the next sphere in a second render pass and combine the two in a third render pass. In doing so, you would have 2*n+1 render passes for n spheres - which seems to be quite inefficient!
So since I am running out of ideas, my question is: Can you think of another, generic way/workaround to write the depth buffer on an OpenGL ES 2.0 Android device?
Well, you sure are running out of options here. I don't know of any further workaround, because I don't know OpenGL ES all that well.
The only thing that comes to mind would be combining the brute-force multi-pass approach with some preprocessing:
Sort your spheres into groups in which the atoms do not overlap each other. It should be possible to sort all the spheres of a protein into fewer than ten groups. Then render all the spheres of each group in one pass; the vertex depth is sufficient here, because the spheres within a group do not overlap. Then you can "depth-blend" the results.
This requires some preprocessing, which could be a problem.
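For what it's worth, the grouping itself can be done with a simple greedy pass over the spheres. A sketch (SphereGrouper and Sphere are hypothetical names):

    import java.util.ArrayList;
    import java.util.List;

    // Greedy grouping: put each sphere into the first group in which it overlaps
    // nothing; open a new group if no existing group fits.
    public class SphereGrouper {
        public static class Sphere { public float x, y, z, radius; }

        public static List<List<Sphere>> groupNonOverlapping(List<Sphere> spheres) {
            List<List<Sphere>> groups = new ArrayList<List<Sphere>>();
            for (Sphere s : spheres) {
                List<Sphere> target = null;
                for (List<Sphere> group : groups) {
                    boolean overlaps = false;
                    for (Sphere other : group) {
                        float dx = s.x - other.x, dy = s.y - other.y, dz = s.z - other.z;
                        float r = s.radius + other.radius;
                        if (dx * dx + dy * dy + dz * dz < r * r) { overlaps = true; break; }
                    }
                    if (!overlaps) { target = group; break; }
                }
                if (target == null) { target = new ArrayList<Sphere>(); groups.add(target); }
                target.add(s);
            }
            return groups;
        }
    }

Each resulting group can then be rendered in its own pass; how many groups you actually end up with depends entirely on the data.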

OpenGL ES 2.0 vs OpenGL 3 - Similarities and Differences

From what I've read, it appears that OpenGL ES 2.0 isn't anything like OpenGL 2.1, which is what I assumed from before.
What I'm curious to know is whether or not OpenGL 3 is comparable to OpenGL ES 2.0. In other words, given that I'm about to make a game engine for both desktop and Android, are there any differences I should be aware of in particular regarding OpenGL 3.x+ and OpenGL ES 2.0?
This can also include OpenGL 4.x versions as well.
For example, if I start reading this book, am I wasting my time if I plan to port the engine to Android (using NDK of course ;) )?
From what I've read, it appears that OpenGL ES 2.0 isn't anything like OpenGL 2.1, which is what I assumed from before.
Define "isn't anything like" it. Desktop GL 2.1 has a bunch of functions that ES 2.0 doesn't have. But there is a mostly common subset of the two that would work on both (though you'll have to fudge things for texture image loading, because there are some significant differences there).
Desktop GL 3.x provides a lot of functionality that unextended ES 2.0 simply does not. Framebuffer objects are core in 3.x, whereas they're extensions in 2.0 (and even then, you only get one destination image without another extension). There's transform feedback, integer textures, uniform buffer objects, and geometry shaders. These are all specific hardware features that either aren't available in ES 2.0, or are only available via extensions. Some of which may be platform-specific.
But there are also some good API convenience features available on desktop GL 3.x. Explicit attribute locations (layout(location=#)), VAOs, etc.
For example, if I start reading this book, am I wasting my time if I plan to port the engine to Android (using NDK of course ;) )?
It rather depends on how much work you intend to do and what you're prepared to do to make it work. At the very least, you should read up on what OpenGL ES 2.0 does, so that you can know how it differs from desktop GL.
It's easy to avoid the actual hardware features. Rendering to texture (or to multiple textures) is something that is called for by your algorithm, as are transform feedback, geometry shaders, etc. So how much you need them depends on what you're trying to do, and there may be alternatives depending on the algorithm.
The thing you're more likely to get caught on are the convenience features of desktop GL 3.x. For example:
layout(location = 0) in vec4 position;
This is not possible in ES 2.0. A similar definition would be:
attribute vec4 position;
That would work in ES 2.0, but it would not cause the position attribute to be associated with attribute index 0. That has to be done in code, using glBindAttribLocation before the program is linked. Desktop GL also allows this, but the book you linked to doesn't do it, for obvious reasons (it's a 3.3-based book, not one trying to maintain compatibility with older GL versions).
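On Android that binding step looks roughly like this (a sketch; program and the attribute name "position" are whatever your own code uses):

    // Fix "position" to attribute index 0, matching layout(location = 0) in GL 3.3.
    // This must happen before glLinkProgram, otherwise it has no effect.
    GLES20.glBindAttribLocation(program, 0, "position");
    GLES20.glLinkProgram(program);

Alternatively, you can skip the explicit binding and query the index the linker chose with glGetAttribLocation after linking.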
Uniform buffers are another. The book makes liberal use of them, particularly for shared perspective matrices. It's a simple and effective technique for that, but ES 2.0 doesn't have that feature; it only has per-program uniforms.
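Without uniform buffers, a shared perspective matrix simply has to be uploaded to every program that uses it, along these lines (hypothetical names; projectionMatrix is a float[16] kept on the CPU side):

    // ES 2.0 has no uniform buffer objects, so each program gets its own copy.
    int loc = GLES20.glGetUniformLocation(program, "uProjectionMatrix");
    GLES20.glUseProgram(program);
    GLES20.glUniformMatrix4fv(loc, 1, false, projectionMatrix, 0);
    // ...repeated for every other program that needs the same matrix.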
Again, you can code to the common subset if you like. That is, you can deliberately forgo using explicit attribute locations, uniform buffers, vertex array objects and the like. But that book isn't exactly going to help you do it either.
Will it be a waste of your time? Well, that book isn't for teaching you the OpenGL 3.3 API (it does do that, but that's not the point). The book teaches you graphics programming; it just so happens to use the 3.3 API. The skills you learn there (except those that are hardware based) transfer to any API or system you're using that involves shaders.
Put it this way: if you don't know much about graphics programming, it doesn't matter which API you use to learn. Once you've mastered the concepts, you can read the various documentation and understand how to apply those concepts to any new API easily enough.
OpenGL ES 2.0 (and 3.0) is mostly a subset of Desktop OpenGL.
The biggest difference is that there is no legacy fixed-function pipeline in ES. What's the fixed-function pipeline? Anything having to do with glVertex, glColor, glNormal, glLight, glPushMatrix, glPopMatrix, glMatrixMode, etc., and, in GLSL, anything using the variables that access that fixed-function data, like gl_Vertex, gl_Normal, gl_Color, gl_MultiTexCoord, gl_FogCoord, gl_ModelViewMatrix and the various other matrices from the fixed-function pipeline.
If you use any of those features, you'll have some work cut out for you. OpenGL ES 2.0 and 3.0 are just plain shaders; no "3D" is provided for you. You're required to write all the projection, lighting, texture referencing, etc. yourself.
If you're already doing that (which most modern games probably are), you might not have too much work. If, on the other hand, you've been using those old deprecated OpenGL features, which in my experience is still very, very common (most tutorials still use that stuff), then you've got a bit of work cut out for you as you try to reproduce those features on your own.
There is an open-source library, Regal, which I think was started by NVIDIA, that is supposed to reproduce that stuff. Be aware that the whole fixed-function system was fairly inefficient, which is one of the reasons it was deprecated, but it might be a way to get things working quickly.

Reason for no simple way to determine the 3D coordinate of a screen coordinate in OpenGL ES 2.0 on Android

My question is why OpenGL and/or Android does not have a way to simply grab the current matrix and store it as a float[]. All the research I have found suggests using a set of classes called MatrixGrabber, which it looks like I have to download and put into my project, to be able to grab the current state of the OpenGL matrix.
My overall goal is to easily determine the OpenGL world location of a touch event on the screen, from which I can retrieve the X and Y coordinates.
The best workaround I have found is Android OpenGL 3D picking, but I wonder why there isn't a way to simply retrieve the matrices you want and then just call
GLU.gluUnProject(...);
My question is why OpenGL and/or Android does not have a way to simply grab the current matrix and store it as a float[].
Because OpenGL ES 2.0 (and core desktop GL 3.1 and above) do not necessarily have a "current matrix." All transforms are done via shader logic, so matrices don't even have to be involved. It could be doing anything in there.
There is no current matrix, so there is nothing to get. And nothing to unproject.
In ES 1.x you can grab the current matrix and store it as a float[] using glGetFloatv, with the pname GL_MODELVIEW_MATRIX, GL_PROJECTION_MATRIX or GL_TEXTURE_MATRIX as applicable.
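In Android terms (ES 1.x through the GL11 interface) that is roughly:

    // ES 1.x only: read the fixed-function matrices back into float[16] arrays.
    GL11 gl11 = (GL11) gl;   // cast from the GL10 your Renderer receives
    float[] modelView = new float[16];
    float[] projection = new float[16];
    gl11.glGetFloatv(GL11.GL_MODELVIEW_MATRIX, modelView, 0);
    gl11.glGetFloatv(GL11.GL_PROJECTION_MATRIX, projection, 0);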
GLU is not inherently part of OpenGL ES because ES is intended to be a minimal specification and GLU is sort of an optional extra. But you can grab SGI's reference implementation of GLU and use its gluUnProject quite easily.
EDIT: and to round off the thought, as Nicol points out, there's no such thing as the current matrix in ES 2. You supply your shaders with arbitrarily many matrices for arbitrarily many purposes, and since you supplied them in the first place, you shouldn't need to ask GL to get them back again.
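In ES 2.0 practice that just means hanging on to the float[16] arrays you build with android.opengl.Matrix anyway, so they are available to gluUnProject later. A sketch with hypothetical field names:

    // Matrices built per frame with android.opengl.Matrix and kept as fields, so the
    // picking code can reuse them instead of asking GL for a "current matrix".
    Matrix.frustumM(projMatrix, 0, -ratio, ratio, -1f, 1f, 1f, 100f);
    Matrix.setLookAtM(viewMatrix, 0, eyeX, eyeY, eyeZ, 0f, 0f, 0f, 0f, 1f, 0f);
    Matrix.multiplyMM(modelViewMatrix, 0, viewMatrix, 0, modelMatrix, 0);
    // Later, for picking:
    // GLU.gluUnProject(winX, winY, winZ, modelViewMatrix, 0, projMatrix, 0, viewport, 0, out, 0);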
I just took a look at http://developer.android.com/resources/samples/ApiDemos/src/com/example/android/apis/graphics/spritetext/MatrixGrabber.html which looks like the MatrixGrabber you're referring to and it doesn't look especially complicated — in fact, it's overbuilt. You can just use gl2.getMatrix directly and plug that into gluUnProject.
The probable reason for the design of the MatrixGrabber code is that it caches the values for multiple uses — because the GPU runs asynchronously to the CPU, your code may have to wait for the getMatrix response, so it is more efficient to get it as few times as possible and reuse the data.
Another source of complexity in the general problem is that a touch specifies only two dimensions. A single touch does not indicate any depth, so you have to determine that in some application-specific way. The obvious approach is to read the depth buffer (though some OpenGL implementations don't support that), but that doesn't work if you have, for example, things that should be "transparent" to touches. An alternative is to construct a ray (such as by unprojecting twice with two different depths) and then do raycasting into your scene.
