Issue with Android OpenGL ES lighting

Hey all,
I'm writing a little 3D engine for Android to better learn the platform and OpenGL. I hope to eventually use it as a base for little 3D games.
I'm trying to implement lighting right now, generally following the NeHe OpenGL Android port tutorials. I have a simple spinning cube and one light that should not change position. Upon rendering my scene on a device, the light appears to "dim" and re-lighten as the cube rotates.
This is an SWF video of the behavior:
http://drop.io/obzfq4g
The code for my "engine" is located here: http://github.com/mlasky/Smashout-Android-3D-Engine/
The relevant bits are:
http://github.com/mlasky/Smashout-Android-3D-Engine/blob/master/src/com/supernovamobile/smashout/SmashoutGLRenderer.java
This is where I'm initializing OpenGL and setting up my scene.
and
http://github.com/mlasky/Smashout-Android-3D-Engine/blob/master/src/com/supernovamobile/smashout/SMMesh.java
This is the code for actually rotating and drawing the cube mesh.
I wish I could formulate a better question, but I'm very stuck and confused and can't even think of an intelligent question to ask besides "does anyone know what might cause the kind of behavior I'm seeing?"
Thanks for any help you can provide.
Edit: Also, the slowness/choppiness of the animation in the video is a result of the screen-capture software. It's smooth throughout the whole rotation in the emulator.

From the video it looks like your normals are incorrect.
For an axis-aligned cube, the normals in cube.xml seem off: they should be axis-aligned themselves, but they are not. Where did you get the model?
This answer may also be related if mVNormalsBuffer is empty.
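For reference, the face normals of an axis-aligned unit cube point straight along the axes; an illustrative per-face listing (not taken from cube.xml) looks like this, with each vertex of a face repeating that face's normal in the actual vertex arrays:
float[] faceNormals = {
     0f,  0f,  1f,   // front  (+Z)
     0f,  0f, -1f,   // back   (-Z)
     0f,  1f,  0f,   // top    (+Y)
     0f, -1f,  0f,   // bottom (-Y)
     1f,  0f,  0f,   // right  (+X)
    -1f,  0f,  0f,   // left   (-X)
};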

You are using
gl.glNormalPointer(3, GL10.GL_FLOAT, mVNormalsBuffer);
incorrectly. Unlike glVertexPointer, glNormalPointer takes no size argument; its parameters are (type, stride, pointer), so the call should be:
gl.glNormalPointer(GL10.GL_FLOAT, 0, mVNormalsBuffer);
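For context, a minimal sketch of how the normal array is typically set up alongside the vertex array with GL10 (mVNormalsBuffer is from the question; the other buffer names and the index count are placeholders):
// Enable both client-side arrays during setup
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glEnableClientState(GL10.GL_NORMAL_ARRAY);

// glVertexPointer takes a size argument (3 floats per vertex)...
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, mVerticesBuffer);
// ...but glNormalPointer does not: normals are always 3 components
gl.glNormalPointer(GL10.GL_FLOAT, 0, mVNormalsBuffer);

// Draw with the normals in place so lighting stays consistent as the cube rotates
gl.glDrawElements(GL10.GL_TRIANGLES, indexCount, GL10.GL_UNSIGNED_SHORT, mIndicesBuffer);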

Related

Dynamic Environment mapping from camera in Augmented Reality setting

I am trying to implement something like the technique described in this (old) paper to use the phone camera's video frames to create an illusion of environment mapping in an AR app.
I want to take the camera frame, divide it into sub-areas and then use those as faces on the cube map. The division of the camera frame would look something like this:
Now the X area is easy: I can use glCopyTexImage2D to copy that square area to my cubemap texture. But I need help with the trapezoid-shaped areas around X (forget about the triangles for now).
How can I take those trapezoidal areas and distort them into square textures? I think I need the inverse of the perspective projection that is applied later, so that the two cancel each other out in the final render if I render the cubemap as a skybox around my camera (does that explain what I want?).
Before doing this I tried a simpler step of putting the square X area on every side of the cubemap, just to see if glCopyTexImage2D can even be used for this. It can, but the results are not rotated correctly; some faces are "upside down" when I render the cubemap as a skybox. The question is similar: how can I rotate them before using them as textures?
I also thought about solving the problem from the other side and modifying the "texture coordinates" to make the necessary adjustments, but that also does not seem easy since the lookup in the fragment shader with "textureCube" is more complicated than a normal texture lookup.
Any ideas?
I'm trying to do this in my AR app on Android with OpenGL ES 2.0 but I guess more general OpenGL advice might also be useful.
Update
I have come to the conclusion that this is not worth pursuing anymore. The paper makes it look nice, but my experiments with a phone camera have shown a major contradiction. If you want to reflect the environment in an object rendered in AR, the camera view is very limited. When the camera is far away from the tracked object you have enough environment information for a good reflection, but you will barely see it because the camera is far away. But when you bring the camera closer to see the awesome reflection in detail, the tracked object will fill most of the camera's field of view and you barely have any environment to reflect anymore. So in either case you lose and the result is not worth the effort.
It seems that you need to create a mesh with the UV mapping described in the article and render it, textured with the camera frame, into another texture. Then use that texture as the cubemap.
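A minimal sketch of the render-to-texture step in GLES 2.0, assuming cubeMapTexture is an already-allocated GL_TEXTURE_CUBE_MAP texture and leaving out the actual draw of the UV-mapped mesh:
int[] fbo = new int[1];
GLES20.glGenFramebuffers(1, fbo, 0);
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);

// Attach one face of the cube map as the color target
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
        GLES20.GL_TEXTURE_CUBE_MAP_POSITIVE_Z, cubeMapTexture, 0);

if (GLES20.glCheckFramebufferStatus(GLES20.GL_FRAMEBUFFER)
        != GLES20.GL_FRAMEBUFFER_COMPLETE) {
    // handle an incomplete framebuffer here
}

// Draw the UV-mapped mesh here, sampling the camera frame texture;
// the result lands in the chosen cube map face.

GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0); // back to the default framebuffer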

SurfaceView with OpenGL ES

I have been working on a project that uses OpenGL ES 2.0, with a couple of shader components that define what a texture should look like after modifications from a Bitmap. For my project, the SurfaceView will only ever contain a single image.
After trying several different approaches and looking through code over the past 24 hours, I'm just hoping for a quick response or two from the community. I'm not looking for solutions; I'll do that research.
It sounds as though, since we are using shaders, in order to scale and move the texture based on touch events I will have to use the Matrix utilities and OpenGL translations or camera movements to get the same effect as what is currently done within an ImageView. Would this be the appropriate approach? Perhaps I should even modify the shader code so that it takes some additional input variables?
I don't believe I can use anything on the Android side to get the same effect, such as modifying the canvas of the SurfaceView or altering the dimensions of the UI in some other fashion, can I?
Thanks. Again, solutions for zooming and moving around aren't necessary; I'm just trying to get a grasp on intermixing OpenGL and Android appropriately for the task.
Why does it seem that several things are easier in 1.0 than in 2.0? Ease of use should improve between releases.
Yes. You will need to use an ortho projection and adjust the extents to zoom. See this link here. To pan, you can simply use glTranslatef.
If you would like to do this entirely on the texture side, you can use the texture matrix stack with glScalef and glTranslatef (fixed-function ES 1.x calls; in ES 2.0 you would pass an equivalent matrix to your shader).
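Since the question targets ES 2.0 (which has no fixed-function matrix calls), the same idea can be sketched with android.opengl.Matrix and a matrix uniform; zoom, panX, panY and uMvpHandle are placeholders, not names from the project:
float[] proj = new float[16];
float[] view = new float[16];
float[] mvp  = new float[16];

// zoom > 1 narrows the visible extents of the ortho projection, i.e. zooms in
Matrix.orthoM(proj, 0, -1f / zoom, 1f / zoom, -1f / zoom, 1f / zoom, -1f, 1f);

// pan by translating the view with offsets accumulated from touch events
Matrix.setIdentityM(view, 0);
Matrix.translateM(view, 0, panX, panY, 0f);

Matrix.multiplyMM(mvp, 0, proj, 0, view, 0);
GLES20.glUniformMatrix4fv(uMvpHandle, 1, false, mvp, 0);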

Android 2D animation on characters

I have created a character and I am wondering how I should animate it, given that:
It has to move even when in a static position, for example eyes blinking or the mouth closing.
It has to move on the screen in response to player gestures.
It has to be involved in collision detection.
I read about AnimationDrawable, SurfaceView, Canvas... but which way is best?
Thanks very much!
If you're trying to make a game, you should consider a GLSurfaceView with textures.
This is a pretty good tutorial; though it contains a few coding errors, it makes the concept very clear!
The OpenGL ES approach gives much more freedom than animating Views, but it may be harder to master.
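A minimal skeleton of that approach (the class names here are made up, not from any particular tutorial):
import android.content.Context;
import android.opengl.GLSurfaceView;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

public class GameView extends GLSurfaceView {
    public GameView(Context context) {
        super(context);
        setRenderer(new GameRenderer()); // continuous rendering by default
    }
}

class GameRenderer implements GLSurfaceView.Renderer {
    @Override
    public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        // load the character textures here (idle frames, blink frames, etc.)
    }

    @Override
    public void onSurfaceChanged(GL10 gl, int width, int height) {
        gl.glViewport(0, 0, width, height);
    }

    @Override
    public void onDrawFrame(GL10 gl) {
        gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
        // draw the character as textured quads; swap frames over time to animate,
        // and move the quads in response to touch gestures and collisions
    }
}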

How can I create curves with OpenGL ES on Android

I'm currently working through a set of tutorials on Android OpenGL ES (1.1) and feel like I'm starting to get a grasp of how the vertices and textures work, along with some sprite animation. As I understand, the only primitives here are points, straight lines, and triangles.
I'm now trying to create a simple curve and really don't know where to start.
I want the curve to be drawn dynamically to represent something like a beam deflection, like this, where I could input a force and have the curve change.
Is it something I would create with a line loop or triangle fan with a ton of vertices? Or perhaps a texture that I then manipulate?
Any input or a point in the right direction is much appreciated, thanks.
I can recommend this blog post: http://blog.uncle.se/2012/02/opengl-es-tutorial-for-android-part-ii-building-a-polygon/ (sadly the original source returns a 404; hopefully this link provides the same quality of information). Anyhow, it's a good read for OpenGL.
You have the general idea. Whatever you do must be made of points, lines, or triangles. You can generate the numbers for any pseudo-curve however you want, but you're always going to be passing the resulting vertices to OpenGL and then connecting them with lines and triangles.
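To make that concrete, a sketch of sampling a curve into vertices and drawing it as a line strip with GL10, as it might appear inside a draw method (everything here is illustrative, including the deflection formula and the force variable):
// sample the curve at n points along the beam
int n = 64;
float[] verts = new float[n * 2];
for (int i = 0; i < n; i++) {
    float x = (float) i / (n - 1);              // 0..1 along the beam
    float y = -force * x * x * (3f - 2f * x);   // stand-in deflection shape
    verts[i * 2]     = x;
    verts[i * 2 + 1] = y;
}

// pack into a native-order direct java.nio.FloatBuffer, as GL10 pointers require
FloatBuffer buf = ByteBuffer.allocateDirect(verts.length * 4)
        .order(ByteOrder.nativeOrder()).asFloatBuffer();
buf.put(verts).position(0);

gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glVertexPointer(2, GL10.GL_FLOAT, 0, buf);
gl.glDrawArrays(GL10.GL_LINE_STRIP, 0, n);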

Getting part of the (already rendered) screen as a texture

I'm making an Android OpenGL ES 2D app and trying to use part of my rendered screen as a texture for a billboard.
So far I have had partial success with glCopyTexSubImage: it only works on some phones.
Everywhere I read recommends using a framebuffer object (FBO) to render to a texture, but I can't grasp how to use it, so if anyone can help me understand this I would be very grateful.
If I use an FBO that is bound to a texture, is it possible to render just part of the screen? If not, isn't that a bit of overkill? (It's also much more work to texture-map and move the texture, and the texture would have to be big enough for the part I need not to be blurry.)
I need to get a snapshot of something that should be rendered to the screen anyway; does that mean I have to render my scene twice every frame (once for my texture and again for the actual render)? Am I missing something here?
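For reference, the glCopyTexSubImage call mentioned above typically looks something like this sketch (GLES 2.0; texId and the region variables are placeholders, and the texture must already have storage at least as large as the copied region):
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texId);
// Copy a region of the currently bound framebuffer (the screen, if no FBO is bound)
// into the already-allocated texture, starting at offset (0, 0) inside the texture.
GLES20.glCopyTexSubImage2D(GLES20.GL_TEXTURE_2D, 0,
        0, 0,               // x/y offset inside the destination texture
        regionX, regionY,   // lower-left corner of the screen region, in pixels
        regionW, regionH);  // width and height of the region, in pixels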
