glViewport gives different results on Android and iOS

I just got started on a renderer for my cross-platform framework (iOS and Android) using OpenGL ES. When I got to the viewport stuff (which is needed for split-screen), I noticed there is a difference between iOS and Android. Here are two screenshots.
Android: [screenshot] There is actually another glitch: the output seems to wrap.
iOS: [screenshot]
My question: which of the two is correct? I have no transformations applied except one to bring the drawn quad back a bit: glTranslatef(0.0f, 0.0f, -5.f);
Initialisation code:
glEnable(GL_TEXTURE_2D);
glShadeModel(GL_SMOOTH); //Enable Smooth Shading
glClearColor(0.f, 0.f, 0.f, 1.0f); //Black Background
glClearDepthf(1.0f); //Depth Buffer Setup
glEnable(GL_DEPTH_TEST); //Enables Depth Testing
glDepthFunc(GL_LEQUAL); //The Type Of Depth Testing To Do
//Really Nice Perspective Calculations
glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
Viewport and projection code:
glViewport(viewportX, viewportY, viewportW, viewportH);
glEnable(GL_SCISSOR_TEST);
glScissor(viewportX, viewportY, viewportW, viewportH);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
... And finally the frustum is calculated and set with glFrustumf.
I have also used this code:
float widthH = width * .1f;
float heightH = height * .1f;
glOrthof(-widthH, widthH, -heightH, heightH, .1f, 100.f);
glScalef(widthH, heightH, 1.f);
Maybe Android or iOS has something set by default? I am clueless.

Answering my own question for those who have the same issue.
I use GLKView, which apparently calls glViewport on each render call, resetting what I set in the previous frame. So if you use GLKView, make sure to call glViewport each frame! ... or roll your own EAGLView to have some real control, which I think I am about to do.
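For split-screen you end up setting the viewport for each view every frame anyway, so the Android side can follow the same pattern. A minimal sketch (class and field names are mine, not from the question) that re-applies the viewport and scissor on every frame:

import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;
import android.opengl.GLSurfaceView;

// Sketch: re-apply viewport/scissor every frame, so nothing that runs between
// frames (GLKView on iOS, or any other wrapper) can silently undo it.
public class SplitScreenRenderer implements GLSurfaceView.Renderer {
    private int viewportX, viewportY, viewportW, viewportH;

    @Override
    public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        gl.glEnable(GL10.GL_DEPTH_TEST);
    }

    @Override
    public void onSurfaceChanged(GL10 gl, int width, int height) {
        // Example: left half of the screen for player one (hypothetical split).
        viewportX = 0; viewportY = 0; viewportW = width / 2; viewportH = height;
    }

    @Override
    public void onDrawFrame(GL10 gl) {
        // Set viewport and scissor each frame, not only at init time.
        gl.glViewport(viewportX, viewportY, viewportW, viewportH);
        gl.glEnable(GL10.GL_SCISSOR_TEST);
        gl.glScissor(viewportX, viewportY, viewportW, viewportH);
        gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
        // ... draw this view's half of the scene here
    }
}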

This looks like you are not accounting for the scale factor of the iOS device. Bear in mind that the most recent iOS devices have retina displays with an extremely high ppi. You can see this artifact in the bottom left of the iOS screenshot. It is only displaying the bottom 25% of your texture because the entire view has a scale factor of 2.
In short, ensure you account for the scaleFactor on iOS and use this factor in your glScalef call.

Related

Android : OpenGL 2.0 first person camera

So I am trying to learn OpenGL ES 2.0 on Android; I played quite a bit with OpenGL ES 1 on iOS and really enjoyed it.
My simple question is about the camera and making a 3D environment where you can move around (first person).
Should I be using
Matrix.setLookAtM(mViewMatrix, 0, eyeX, eyeY, eyeZ, lookX, lookY, lookZ, upX, upY, upZ);
to control the camera and where I am in the world (updated in onDrawFrame), or setting that in onSurfaceCreated (once) and using
Matrix.setIdentityM(mViewMatrix, 0);
Matrix.translateM(mViewMatrix, 0, mMoveY, 0.0f, mMoveX);
Matrix.rotateM(mViewMatrix, 0, mDeltaY, 1.0f, 0.0f, 0.0f);
Matrix.rotateM(mViewMatrix, 0, mDeltaX, 0.0f, 1.0f, 0.0f);
instead, which feels like I am rotating the world around me.
I have seen examples that do either; in OpenGL 1 I used to use gluLookAt.
Either of the two methods is fine, since you can get the same results. The general difference is in how you want to store your objects' state. For a 3D environment I would always use 3 vectors to describe an object's state (position, forward, up) and use modified versions of lookAt and the model matrix that can place the object with the same parameters as lookAt.

The upside of this approach is that you can set those parameters directly from another object. For instance: a guided missile is following you and is always turned towards you, no matter where you are or how you move. Its forward vector is then simply targetPosition - missilePosition (usually normalized). If you had to compute angles instead, you would have quite some work: asin, acos and a few if statements for each of the two angles. The same goes for simply moving around the room. Going forward with base vectors is just position = position + forward * speedFactor, while with angles you again have to work out which way you are facing and then do the same... (there are quite a few situations where this is useful).
But there are downsides. You need to have your own system to move and rotate those vectors. For instance, if you want to, say, turn 45 degrees to your left, it would look something like this:
forward = (forward+cross(up,forward)*tan(45)).normalized
and this only works for angles in the interval (-90, 90). Turning up works much the same way, but you also need to correct the up vector.
So to wrap it up: if you create all the methods needed to work with base vectors (rotations, look-at, model matrix...), they are a real labor saver. But which approach to use simply depends on the project you are writing.
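As a rough illustration (a sketch with my own class and field names, not code from the answer), here is what such a base-vector camera can look like with Android's Matrix helpers; turn() implements the forward-vector rotation formula given above:

import android.opengl.Matrix;

// Minimal base-vector camera: position, forward and up vectors, with the view
// matrix rebuilt from them every frame via setLookAtM.
public class VectorCamera {
    private final float[] viewMatrix = new float[16];
    private float posX = 0f, posY = 0f, posZ = 0f;   // camera position
    private float fwdX = 0f, fwdY = 0f, fwdZ = -1f;  // normalized forward vector
    private float upX = 0f, upY = 1f, upZ = 0f;      // world up

    // Moving forward: position = position + forward * speedFactor
    public void moveForward(float speed) {
        posX += fwdX * speed;
        posY += fwdY * speed;
        posZ += fwdZ * speed;
    }

    // Turning left/right by angleDeg in (-90, 90):
    // forward = normalize(forward + cross(up, forward) * tan(angle))
    public void turn(float angleDeg) {
        float t = (float) Math.tan(Math.toRadians(angleDeg));
        float cx = upY * fwdZ - upZ * fwdY;   // cross(up, forward)
        float cy = upZ * fwdX - upX * fwdZ;
        float cz = upX * fwdY - upY * fwdX;
        fwdX += cx * t;
        fwdY += cy * t;
        fwdZ += cz * t;
        float len = (float) Math.sqrt(fwdX * fwdX + fwdY * fwdY + fwdZ * fwdZ);
        fwdX /= len; fwdY /= len; fwdZ /= len;
    }

    // Rebuild the view matrix every frame (e.g. in onDrawFrame).
    public float[] updateViewMatrix() {
        Matrix.setLookAtM(viewMatrix, 0,
                posX, posY, posZ,                       // eye
                posX + fwdX, posY + fwdY, posZ + fwdZ,  // look-at point = eye + forward
                upX, upY, upZ);                         // up
        return viewMatrix;
    }
}

In onDrawFrame you would call updateViewMatrix() and multiply the result with your projection matrix, exactly as with a setLookAtM-based camera.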

Projection Matrix or not? (OpenGL ES 2.0)

Is it necessary to use a projection matrix like so:
Matrix.frustumM(mProjMatrix, 0, -ratio, ratio, -1, 1, 3, 7);
and
Matrix.setLookAtM(mVMatrix, 0, 0, 0, 3, 0f, 0f, 0f, 0f, 1.0f, 0.0f);
// Calculate the projection and view transformation and store results in mMVPMatrix
Matrix.multiplyMM(mMVPMatrix, 0, mProjMatrix, 0, mVMatrix, 0);
I'm having no end of trouble doing a simple 2d (sprite) rotation around the z axis.
The most success I've had so far is to manipulate the rotation matrix (rotate and translate) and pass it directly to the vertex shader.
It's not perfect and carries with it some shearing/skewing/distortion, but at least it allows me to move the pivot/centre point of the quad. If I put the above lines in, the whole thing breaks and I get all kinds of odd results.
What is the actual purpose of the lines above (I have read the Android docs but I don't understand them), and are they necessary? Do people write OpenGL apps without them?
Thanks!!
OpenGL is a C API, but many frameworks wrap its functions into other functions to make life easier. For example, in OpenGL ES 2.0 you must create and pass matrices to OpenGL, but OpenGL does not provide you with any tools to actually build and calculate these matrices. This is where many other libraries come in: they do the matrix creation for you, and then you pass these constructed matrices to OpenGL -- or the function may well pass the matrix to OpenGL for you after making the calculation. It just depends on the library.
You can easily not use these frameworks and do it yourself, which is a great way to learn the math in 3D graphics -- and the math is really key to everything in this area.
I'm sure you have direct access to the OpenGL API in Android, but you are choosing to use a library that Android provides natively (similar to how Apple provides GLKit, a recent addition to their frameworks for iOS). That doesn't mean you must use that library, but it might make development faster if you know what the library is doing.
In this case, the three functions above appear to be pretty generic matrix/graphics utilities. You have a frustum function that sets up the projection in 3D space, and you have the lookAt function that determines the camera's view: where it is looking, and where the camera is while it looks there.
And you have a matrix multiplication function, since in the end all matrices must be combined before they are applied to the vertices of your 3D object.
It's important to understand that a typical modelview matrix will include the camera orientation/location but it will also include the rotation and scaling of your object. So just sending a modelview based on the camera (from LookAt) is not enough, unless you want your object to remain at the center of the screen, with no rotation.
If you were to expand all the math that goes into matrix multiplication, it might look like this for a typical setup:
Frustum * Camera * Translation * Rotation * Vertices
Those middle three, Camera, Translation, Rotation, are usually combined together into your modelview, so multiply those together for that particular matrix, then multiply the modelview by your frustum projection matrix, and this whole result can be applied to your vertices.
You must be very careful about the order of the matrix multiplication. Multiplying a frustum by a modelview is not the same as multiplying a modelview by a frustum.
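To make that order concrete, here is a hedged sketch using Android's Matrix helpers (the method, its parameters and the array names are mine, not the asker's):

// Helper you might put in your renderer (android.opengl.Matrix does the math).
// ratio, x, y and angleDeg are assumed inputs; all arrays are plain float[16].
private float[] buildMvp(float ratio, float x, float y, float angleDeg) {
    float[] proj  = new float[16];  // frustum / projection
    float[] view  = new float[16];  // camera (lookAt)
    float[] model = new float[16];  // translation * rotation of the sprite
    float[] mv    = new float[16];
    float[] mvp   = new float[16];

    Matrix.frustumM(proj, 0, -ratio, ratio, -1, 1, 3, 7);
    Matrix.setLookAtM(view, 0, 0f, 0f, 3f, 0f, 0f, 0f, 0f, 1f, 0f);

    Matrix.setIdentityM(model, 0);
    Matrix.translateM(model, 0, x, y, 0f);            // move the pivot/centre
    Matrix.rotateM(model, 0, angleDeg, 0f, 0f, 1f);   // spin around the Z axis

    // Order matters: modelview = view * model, then mvp = projection * modelview.
    Matrix.multiplyMM(mv, 0, view, 0, model, 0);
    Matrix.multiplyMM(mvp, 0, proj, 0, mv, 0);
    return mvp;  // upload this to the vertex shader's matrix uniform
}

Swapping the operands of either multiplyMM call gives a different matrix, which is exactly the ordering pitfall mentioned above.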
Now you mention skewing, distortion, etc. One possible reason for this is your viewport. I'm sure somewhere in your API is an option to set the viewport's height and width, which are usually the height and width of your screen. If they are set differently, you will get an improper aspect ratio and some skewing that you see. Just one possible explanation. Or it could be that your parameters to your frustum aren't quite right, since that will certainly affect things like skew also.
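As a sketch of that viewport check (assuming a GLSurfaceView.Renderer; mProjMatrix is a float[16] field you already have, as in the question's code):

// Inside your GLSurfaceView.Renderer:
@Override
public void onSurfaceChanged(GL10 unused, int width, int height) {
    // Match the viewport to the surface and derive the frustum ratio from the
    // same numbers, so the projection's aspect ratio matches the screen.
    GLES20.glViewport(0, 0, width, height);
    float ratio = (float) width / height;
    Matrix.frustumM(mProjMatrix, 0, -ratio, ratio, -1, 1, 3, 7);
}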

Android and OpenGL - See object through another object

I'm a total noob and I'm trying to display, in OpenGL, a little submarine I built in a 3D modeling program (Blender).
The submarine is built using a long cylinder with a sphere intersecting on the end of it.
The problem that I'm getting is that when I look at the result, I can see the entire sphere through the cylinder. I can also see the end of the cylinder through the sphere. This appears when I have lighting on. I'm using ambient and diffuse lighting. I just want to see half the sphere on the outside of the cylinder and I don't want to see any innards.
I have face culling on and it removes the front faces of both objects, but I clearly see the sphere.
Below I pasted my onSurfaceCreated function where I set up all the OpenGL parameters. Any suggestions are appreciated!
@Override
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
    //
    gl.glEnable(GL10.GL_DEPTH_TEST);
    //gl.glMatrixMode(GL10.GL_PROJECTION);
    gl.glMatrixMode(GL10.GL_MODELVIEW);
    gl.glEnable(GL10.GL_POLYGON_OFFSET_FILL);
    gl.glEnable(GL10.GL_LIGHTING);
    gl.glEnable(GL10.GL_LIGHT0);
    // Define the ambient component of the first light
    float[] light0Ambient = {0.1f, 0.1f, 0.1f, 1.0f};
    gl.glLightfv(gl.GL_LIGHT0, gl.GL_AMBIENT, FloatBufferFromFloatArray(light0Ambient, 4));
    // Define the diffuse component of the first light
    float[] light0Diffuse = {0.7f, 0.7f, 0.7f, 1.0f};
    gl.glLightfv(gl.GL_LIGHT0, gl.GL_DIFFUSE, FloatBufferFromFloatArray(light0Diffuse, 4));
    // Define the specular component and shininess of the first light
    float[] light0Specular = {0.7f, 0.7f, 0.7f, 1.0f};
    float light0Shininess = 0.4f;
    //gl.glLightfv(gl.GL_LIGHT0, gl.GL_SPECULAR, FloatBufferFromFloatArray(light0Specular, 4));
    // Define the position of the first light
    float[] light0Position = {1.0f, 0.0f, 0.0f, 0.0f};
    gl.glLightfv(gl.GL_LIGHT0, gl.GL_POSITION, FloatBufferFromFloatArray(light0Position, 4));
    // Define a direction vector for the light; this one points straight down the Z axis
    float[] light0Direction = {0.0f, 0.0f, -1.0f};
    //gl.glLightfv(gl.GL_LIGHT0, gl.GL_SPOT_DIRECTION, FloatBufferFromFloatArray(light0Direction, 3));
    // Define a cutoff angle. This defines a 90° field of vision, since the cutoff
    // is the number of degrees to each side of an imaginary line drawn from the light's
    // position along the vector supplied in GL_SPOT_DIRECTION above
    //gl.glLightf(gl.GL_LIGHT0, gl.GL_SPOT_CUTOFF, 180.0f);
    gl.glEnable(GL10.GL_CULL_FACE);
    // which is the front? the one which is drawn counter clockwise
    gl.glFrontFace(GL10.GL_CCW);
    // which one should NOT be drawn
    gl.glCullFace(GL10.GL_BACK);
    gl.glClearDepthf(10f);
    gl.glPolygonOffset(1.0f, 2);
    initShape();
    gl.glScalef(0.5f, 0.5f, 0.5f);
}
Have you tried checking whether it's a backface culling issue? Check by changing glEnable(GL_CULL_FACE) to glDisable, just to make sure it's definitely not that. It's possible to have both winding orders in the same mesh, so culling can be right for one part of the model but wrong for another; it's easiest just to disable it to be sure that's not the issue.
It's either that or your surface may have been created without a depth buffer for some reason? Check the EGLConfig parameter to be sure that's not the case. Usually it would be created with a depth buffer by default, but it's possible to override that behaviour.
Also, modelling things with internal faces which will never be seen is not going to do good things for your realtime performance. You should consider using a boolean 'or' operation in Blender, at least to get rid of the internal faces, but ultimately it's best to craft your models with lots of care and attention to their topology, not just for poly count but also for how well they fit together into a triangle strip. (That doesn't excuse the issue you're getting right now; it's just a note for the future.)
I see that you're still thinking in terms of "initializing some scene graph". That's not how OpenGL works (gee, this is the third time in a row I've written this as an answer). All that stuff you're doing in onSurfaceCreated actually belongs in the display routine. OpenGL is not a scene graph: you set all the state you need right before you draw the stuff that requires that state.
I see you have that function "initShape" there. I don't think it does what you intend.
Are you sure you have an EGL context with a depth buffer enabled? If you are using GLSurfaceView, you are probably looking for something like SimpleEGLConfigChooser(false), which should be SimpleEGLConfigChooser(true).
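As a hedged sketch (the method and renderer names are mine), explicitly requesting a depth buffer on a GLSurfaceView looks roughly like this:

// Somewhere in your Activity or View setup (GLSurfaceView is android.opengl.GLSurfaceView):
GLSurfaceView createViewWithDepth(android.content.Context context, GLSurfaceView.Renderer renderer) {
    GLSurfaceView view = new GLSurfaceView(context);
    // RGBA 8888 plus a 16-bit depth buffer, no stencil; must be set before setRenderer().
    view.setEGLConfigChooser(8, 8, 8, 8, 16, 0);
    view.setRenderer(renderer);
    return view;
}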
Let's see your onDrawFrame(). What is your projection? It could be flipped normals AND a messed-up projection making it look funny.
Problems with your depth buffer or depth testing are possible too.
Edit: It looks like the depth test is not your problem, since you can see depth culling where the polygon intersects the sphere. It looks like your geometry is the problem.
I'd make it so your camera can move; then you can look around and figure out what's messed up with it.
If you are using textures, have a look at the blending function you are using. It might be the case that overlapping pixels get multiplied, as in an overlay effect, rather than overwritten, which is what you want, I suppose.
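For reference, a minimal sketch of the usual "overwrite by alpha" blend state in the GL10 API, as opposed to a multiplicative setup (the helper name is mine):

// Standard alpha blending: source pixels overwrite the destination according
// to their alpha instead of being multiplied with it.
void enableAlphaBlending(GL10 gl) {
    gl.glEnable(GL10.GL_BLEND);
    gl.glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA);
}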

Problems rotating a sprite using drawtexture (OpenGl ES Android)

I'm trying to rotate a sprite using drawtexture but nothing happens. I'm using the following code:
gl.glRotatef(90, 0, 0, 1.0f);
gl.glBindTexture(GL10.GL_TEXTURE_2D, TextureID);
((GL11Ext) gl).glDrawTexfOES(x, y, z, width, height);
The texture is drawn to the screen but it is not rotated... Anyone? :)
From the OES_draw_texture extension:
Xs and Ys are given directly in window (viewport) coordinates.
So the passed in coordinates are not transformed by the modelview and projection matrices, which is what glRotatef changes. In short, this extension does not support rotated sprites.
If you want those, the simplest is to draw standard rotated quads instead.
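A rough sketch of that alternative (my own class and buffer names, not from the answer): a unit quad with texture coordinates, positioned and rotated through the modelview matrix and drawn as a triangle strip. It assumes GL_TEXTURE_2D and any blending are already enabled elsewhere.

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import javax.microedition.khronos.opengles.GL10;

public class RotatedSprite {
    private static final float[] VERTICES = {   // x, y of a unit quad centered on the origin
            -0.5f, -0.5f,   0.5f, -0.5f,   -0.5f, 0.5f,   0.5f, 0.5f };
    private static final float[] TEXCOORDS = {  // matching u, v
             0f, 1f,        1f, 1f,         0f, 0f,        1f, 0f };

    private final FloatBuffer vertexBuffer = asFloatBuffer(VERTICES);
    private final FloatBuffer texBuffer = asFloatBuffer(TEXCOORDS);

    public void draw(GL10 gl, int textureId, float x, float y, float w, float h, float angle) {
        gl.glBindTexture(GL10.GL_TEXTURE_2D, textureId);
        gl.glPushMatrix();
        gl.glTranslatef(x, y, 0f);               // position the sprite
        gl.glRotatef(angle, 0f, 0f, 1f);         // rotate around Z (unlike glDrawTexfOES, this works)
        gl.glScalef(w, h, 1f);                   // size the unit quad
        gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
        gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
        gl.glVertexPointer(2, GL10.GL_FLOAT, 0, vertexBuffer);
        gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, texBuffer);
        gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, 4);
        gl.glPopMatrix();
    }

    private static FloatBuffer asFloatBuffer(float[] data) {
        FloatBuffer fb = ByteBuffer.allocateDirect(data.length * 4)
                .order(ByteOrder.nativeOrder()).asFloatBuffer();
        fb.put(data).position(0);
        return fb;
    }
}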
After testing quite a few different ways to do this, I found the answer was right in front of me the whole time... I was using the SpriteMethodTest example as my codebase, but I ignored the VBO extension part there, which basically has all the needed functionality.
SpriteMethodTest: http://code.google.com/p/apps-for-android/source/browse/trunk/#trunk/SpriteMethodTest

How to move a particular texture around screen in android using opengl es?

I am very new to OpenGL ES. I am implementing a demo app that loads multiple textures onto the screen. For demo purposes I have loaded 2 textures at 2 different locations on the screen using glTranslatef() and glBindTexture() twice.
Now I am able to see 2 different images on the screen, and I want to move one particular texture across the screen using the mouse.
I know it may be a silly question, but please help me with this.
Thanks in advance.
As mentioned above, you will need to translate the coordinates of the surface.
If you are using an orthographic (2D) projection, the pixel/coordinate ratio can easily be set to 1:1 by defining the projection to be the same size as the screen. For example:
glOrthof(0.0f, screenWidth, -screenHeight, 0.0f, -1.0f, 1.0f);
should define a projection with (0,0) in the top left and the same size as your screen.
If you are using 3D projection, you may find this link helpful:
http://www.mvps.org/directx/articles/rayproj.htm
You don't actually want to move the texture; instead you either move your scene's point of view (gluOrtho2D / gluLookAt / glTranslatef, or anything else), or you move the vertices of the shape you're applying your texture to.
This is how I'm doing it in my 2D game:
gl.glTranslatef(-cameraPosX % 32, -cameraPosY % 32, 0);
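To tie that to input handling, here is a hedged sketch (class and field names are mine; it assumes the 1:1 orthographic projection described above with (0,0) at the top left, so screen pixels map directly to world units):

import android.view.MotionEvent;
import javax.microedition.khronos.opengles.GL10;

// Sketch: track the drag position on the UI thread, then translate the
// modelview before drawing the selected sprite on the GL thread.
public class DraggableSprite {
    private volatile float dragX, dragY;

    // Call from your View's onTouchEvent / OnTouchListener.
    public boolean onTouch(MotionEvent event) {
        if (event.getAction() == MotionEvent.ACTION_DOWN
                || event.getAction() == MotionEvent.ACTION_MOVE) {
            dragX = event.getX();
            dragY = event.getY();   // note: screen Y grows downward
            return true;
        }
        return false;
    }

    // Call from onDrawFrame, before drawing this sprite's quad.
    // Assumes GL_MODELVIEW is the current matrix mode.
    public void positionForDraw(GL10 gl) {
        gl.glLoadIdentity();
        gl.glTranslatef(dragX, -dragY, 0f);  // flip Y to match the ortho setup above
        // ... then bind the texture and draw the quad for this sprite
    }
}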
