glDrawArrays works in VM, crashes on phone - Android

I am drawing a line in OpenGL ES from the Android NDK. I have been developing on VMs and just recently tried my application on a phone. The application runs fine on the VMs: a line is drawn. However, on a Motorola Droid the application just crashes, and on an HTC Incredible it just shows a black screen. I have verified that the numbers being passed to the function are correct. The application halts on the glDrawArrays(GL_LINES, 0, 2) call. The whole function looks like this:
void drawLine(GLfloat x1, GLfloat y1, GLfloat x2, GLfloat y2, GLfloat * color)
{
    GLfloat vVertices[] =
        {x1, y1,
         x2, y2};
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);
    glColor4f(color[0], color[1], color[2], color[3]);
    glVertexPointer(2, GL_FLOAT, 0, vVertices);
    glDrawArrays(GL_LINES, 0, 2);
    __android_log_write(ANDROID_LOG_ERROR, "to mama", "You drew arrays");
    glDisableClientState(GL_COLOR_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
}
and the call to it looks like this:
drawLine(0.0f,0.0f,1.0f,0.0f,colorx);/*x is green*/
I can try glDrawElements next, but there is no reason glDrawArrays should not work (as far as I know).

You're enabling the color array (glEnableClientState(GL_COLOR_ARRAY)) without actually setting glColorPointer(), so the draw call reads through an unset pointer, which is undefined behavior and a likely crash on real hardware.
Either set the color pointer or don't enable the color array.
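As a minimal sketch of that second option, assuming a GL ES 1.x context like the question's, here is the same routine expressed with Android's GLES10 Java bindings rather than the original NDK C (the calls map one-to-one; the java.nio buffer setup is the only extra step Java needs):
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import static android.opengl.GLES10.*;

// Only the vertex array is enabled; the line color comes from glColor4f,
// so GL_COLOR_ARRAY stays disabled and no unset color pointer is ever read.
void drawLine(float x1, float y1, float x2, float y2, float[] color) {
    FloatBuffer vb = ByteBuffer.allocateDirect(4 * 4) // 4 floats, 4 bytes each
            .order(ByteOrder.nativeOrder())
            .asFloatBuffer();
    vb.put(new float[] {x1, y1, x2, y2}).position(0);

    glEnableClientState(GL_VERTEX_ARRAY);
    glColor4f(color[0], color[1], color[2], color[3]);
    glVertexPointer(2, GL_FLOAT, 0, vb);
    glDrawArrays(GL_LINES, 0, 2);
    glDisableClientState(GL_VERTEX_ARRAY);
}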

Related

Trouble mapping device coordinate system to real-world (rotation vector) coordinate system in Processing Android

I know this question has been asked many many times, but with all the knowledge out there I still can't get it to work for myself in the specific setting I now find myself in: Processing for Android.
The coordinate systems involved are (1) the real-world coordinate system as per Android's view: y is tangential to the ground and pointing north, z goes up into the sky, and x goes to your right, if you're standing on the ground and looking north; and (2) the device coordinate system as per Processing's view: x points to the right of the screen, y down, and z comes out of the screen.
The goal is simply to draw a cube on the screen and have it rotate on device rotation such that it seems that it is stable in actual space. That is: I want a map between the two coordinate systems so that I can draw in terms of the real-world coordinates instead of the screen coordinates.
In the code I'm using the Ketai sensor library and subscribe to the onRotationVectorEvent(float x, float y, float z) event. Also, I have a simple quaternion class lying around that I got from https://github.com/kynd/PQuaternion. So far I have the following code, in which I have two different ways of attempting the mapping; they coincide, but nevertheless don't work as I want them to:
import ketai.sensors.*;

KetaiSensor sensor;
PVector rotationAngle = new PVector(0, 0, 0);
Quaternion rot = new Quaternion();

void setup() {
  fullScreen(P3D);
  sensor = new KetaiSensor(this);
  sensor.start();
}

void draw() {
  background(#333333);
  translate(width/2, height/2);
  lights();

  // method 1: draw lines for real-world axes in terms of processing's coordinates
  PVector rot_x_axis = rot.mult(new PVector(400, 0, 0));
  PVector rot_y_axis = rot.mult(new PVector(0, 0, -400));
  PVector rot_z_axis = rot.mult(new PVector(0, 400, 4));
  stroke(#ffffff);
  strokeWeight(8); line(0, 0, 0, rot_x_axis.x, rot_x_axis.y, rot_x_axis.z);
  strokeWeight(5); line(0, 0, 0, rot_y_axis.x, rot_y_axis.y, rot_y_axis.z);
  strokeWeight(2); line(0, 0, 0, rot_z_axis.x, rot_z_axis.y, rot_z_axis.z);

  // method 2: first rotate appropriately
  fill(#f4f7d2);
  rotate(asin(rotationAngle.mag()) * 2, rotationAngle.x, rotationAngle.y, rotationAngle.z);
  box(200, 200, 200);
}

void onRotationVectorEvent(float x, float y, float z) {
  rotationAngle = new PVector(x, y, z);
  // I believe these two do the same thing.
  rot.set(x, y, z, cos(asin(rotationAngle.mag())));
  //rot.setAngleAxis(asin(rotationAngle.mag())*2, rotationAngle);
}
The above works well enough that the real-world axis lines coincide with the cube drawn, and both rotate in an interesting way. But still, there seems to be some "gimbal stuff" going on, in the sense that, when I rotate my device up and down standing one way, the cube also rotates up and down, but standing another way, the cube rotates sideways --- as if I'm applying the rotations in the wrong order. However, I'm trying to avoid gimbal madness by working with quaternions this way --- how does it still apply?
I've solved it now, just with a simple "click to test next configuration" UI to test all 6 * 8 possible configurations of rotate(asin(rotationAngle.mag()) * 2, <SIGN> * rotationAngle.<DIM>, <SIGN> * rotationAngle.<DIM>, <SIGN> * rotationAngle.<DIM>); -- the solution turned out to be the dimension order 0, -1, 2, i.e.:
rotate(asin(rotationAngle.mag()) * 2, rotationAngle.x, -rotationAngle.y, rotationAngle.z);
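Wrapped up as a small hypothetical helper: Android's rotation vector stores the unit rotation axis scaled by sin(theta/2), so its magnitude recovers the half-angle, and the negated y is exactly the remap found above:
// Hypothetical helper encapsulating the remap above. The rotation vector's
// components are axis * sin(theta/2), so mag() = sin(theta/2) and the full
// angle is 2 * asin(mag()); negating y maps the sensor frame to Processing's.
void applyDeviceRotation(PVector v) {
  float halfSin = v.mag();
  if (halfSin > 0) {
    rotate(2 * asin(halfSin), v.x, -v.y, v.z);
  }
}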

Result difference between Android virtual device and real device

The following code draws two cubes (red/green) in Android Studio via JNI using OpenGL ES. On the virtual device the result is correct. However, on a real device the result looks strange.
The structure (the 3D model and its 2D projection) is correct, but the colors differ from the AVD, and it looks like the depth test is not working. What is the problem?
Simply speaking:
AVD: correct result (red and green cubes, depth test on, with a specific camera pose)
Real device: strange result (same camera pose as the AVD, but the colors differ: there are two green cubes, and the depth test is not working)
float color1[] = {1.0f, 0.0f, 0.0f};
float color2[] = {0.0f, 1.0f, 0.0f};
int mColorHandle1;
int mColorHandle2;
glViewport(0,1280,720,1280);
glEnable(GL_DEPTH_TEST);
glUniform4f(mColorHandle1, color1[0], color1[1], color1[2], 1.0f);
glVertexAttribPointer(gvPositionHandle, 3, GL_FLOAT, GL_FALSE, 0, gTriangleVertices1);
glEnableVertexAttribArray(gvPositionHandle);
glDrawArrays(GL_TRIANGLES, 0, 36);
glUniform4f(mColorHandle2, color2[0], color2[1], color2[2], 1.0f);
glDrawArrays(GL_TRIANGLES, 36, 36);
glDisableVertexAttribArray(gvPositionHandle);
glDisable(GL_DEPTH_TEST);

Libgdx: Framebuffer for "Fog of War"-Effect

I am writing an RTS game for Android and I want the "Fog of War" effect on the player's units. This effect means that only a "circle" around each unit reveals the background map; wherever no player unit is located, the screen should be black. I don't want to use shaders.
I have a first version of it working. What I do is render the map to the default framebuffer; then I have a second framebuffer (similar to light techniques) which is completely black. Where the player's units are, I batch-draw a texture which is completely transparent and has a white circle with blurred edges in its middle.
Finally I draw the second (light) FrameBuffer's colorTexture over the first one using Gdx.gl.glBlendFunc(GL20.GL_DST_COLOR, GL20.GL_ZERO);
The visual effect now is that indeed the whole map is black and a circle around my units is visible, but a lot of white color is added.
The reason is pretty clear, as I drew the light textures for the units like this:
lightBuffer.begin();
Gdx.gl.glBlendFunc(GL20.GL_SRC_ALPHA, GL20.GL_ONE);
Gdx.gl.glEnable(GL20.GL_BLEND);
Gdx.gl.glClearColor(0.1f, 0.1f, 0.1f, 1f);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
batch.setProjectionMatrix(camera.combined);
batch.begin();
batch.setColor(1f, 1f, 1f, 1f);
for (RTSTroopAction i : unitList) {
    batch.draw(lightSprite, i.getX() + (i.getWidth() / 2) - 230, i.getY() + (i.getHeight() / 2) - 230, 460, 460); //, 0, 0, lightSprite.getWidth(), lightSprite.getHeight(), false, true);
}
batch.end();
lightBuffer.end();
However, I don't want the "white stuff" on the original texture; I just want the original background to shine through. How can I achieve that?
I think it's a matter of playing around with the blend funcs, but I have not been able to figure out which values to use yet.
Thanks to Tenfour04 pointing me in the right direction, I was able to find the solution. First of all, the problem is not directly within batch.end();. The problem is that a sprite batch indeed maintains its own blendFunc settings. These get applied when flush() is called (end() also calls it).
However, the batch also calls flush() when it draws a TextureRegion that is bound to a different texture than the one used in the previous draw() call.
So in my original code, whatever blendFunc I had set was always overridden when I called batch.draw(lightBuffer, ...). The solution is to use the sprite batch's blendFunc and not the Gdx.gl one.
The total working code finally looks like this:
lightBuffer.begin();
Gdx.gl.glBlendFunc(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA);
Gdx.gl.glEnable(GL20.GL_BLEND);

// start rendering to the lightBuffer
// set the ambient color values, this is the "global" light of your scene
Gdx.gl.glClearColor(0.1f, 0.1f, 0.1f, 1f);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);

// start rendering the lights
batch.setProjectionMatrix(camera.combined);
batch.begin();

// set the color of your light (red,green,blue,alpha values)
batch.setColor(1f, 1f, 1f, 1f);
for (RTSTroopAction i : unitList) {
    if (i.getOwnerId() == game.getCallback().getPlayerId()) {
        batch.draw(lightSprite, i.getX() + (i.getWidth() / 2) - 230, i.getY() + (i.getHeight() / 2) - 230, 460, 460); //, 0, 0, lightSprite.getWidth(), lightSprite.getHeight(), false, true);
    }
}
batch.end();
lightBuffer.end();

// now we render the lightBuffer to the default "frame buffer"
// with the right blending!
Gdx.gl.glEnable(GL20.GL_BLEND);
Gdx.gl.glBlendFunc(GL20.GL_ZERO, GL20.GL_SRC_COLOR);
batch.setProjectionMatrix(getStage().getCamera().combined);
batch.enableBlending();
batch.setBlendFunction(GL20.GL_ZERO, GL20.GL_SRC_COLOR);
batch.begin();
Gdx.gl.glEnable(GL20.GL_BLEND);
Gdx.gl.glBlendFunc(GL20.GL_ZERO, GL20.GL_SRC_COLOR);
batch.draw(lightBufferRegion, 0, 0, getStage().getWidth(), getStage().getHeight());
batch.end();
batch.setBlendFunction(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA);
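Distilled to the essentials: since, as explained above, the raw Gdx.gl.glBlendFunc calls in the block are overwritten at the batch's next flush(), a minimal sketch of the final composite pass (same libGDX API, same lightBufferRegion) would be:
// multiplicative blend: every destination pixel is scaled by the light
// texture, so black stays black and white lets the map shine through
batch.enableBlending();
batch.setBlendFunction(GL20.GL_ZERO, GL20.GL_SRC_COLOR);
batch.begin();
batch.draw(lightBufferRegion, 0, 0, getStage().getWidth(), getStage().getHeight());
batch.end();
// restore libGDX's default alpha blending for subsequent draws
batch.setBlendFunction(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA);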

Why are textures displayed on Android larger and incomplete compared to PC?

My aim is to write a game in C++ for Linux, Windows and Android. I use SDL and am able to draw basic geometric shapes using OpenGL ES 2.0 and shaders. But when I try to apply textures to these shapes, I notice that they appear larger and incomplete on Android. On PC it works fine. My code does not have to be changed to compile for Android. I use Ubuntu 14.10 and test on it as well as on my Nexus 5 with Android 5.0.1.
I set up an orthographic projection matrix that gives me a "surface" with an aspect ratio of 16:9 in the area x 0.0 to 1.0 and y 0.0 to 0.5625. In this area I draw a rectangle to check that "custom space":
//Clear
glClearColor(0.25, 0.0, 0.0, 1.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
//Configure viewport
glViewport(0, 0, this->width, this->height);
//Update matrices
this->baseShader.bind();
this->baseShader.updateProjectionMatrix(this->matrix);
this->matrix.loadIdentity();
this->baseShader.updateModelViewMatrix(this->matrix);
//Draw rectangle
GLfloat vertices[] = {0.0, 0.0, 0.0, 0.5625, 1.0, 0.0, 1.0, 0.5625};
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, vertices);
glEnableVertexAttribArray(0);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glDisableVertexAttribArray(0);
The results are the following:
TextureComparison1.png - Dropbox
Then I draw a square and map a texture to it. Here is the code:
//Enable blending
bw::Texture::enableAlpha();
//Matrices
this->textureShader.bind();
this->textureShader.updateProjectionMatrix(this->matrix);
this->matrix.loadIdentity();
this->matrix.translate(this->sW/2.0, this->sH/2.0, 0.0);
this->matrix.rotate(rot, 0.0, 0.0, 1.0);
this->matrix.translate(-this->sW/2.0, -this->sH/2.0, 0.0);
this->textureShader.updateModelViewMatrix(this->matrix);
//Coordinates
float x3 = this->sW/2.0-0.15, x4 = this->sW/2.0+0.15;
float y3 = this->sH/2.0-0.15, y4 = this->sH/2.0+0.15;
GLfloat vertices2[] = {x3, y3, x3, y4, x4, y3, x4, y4};
GLfloat texCoords[] = {0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0};
//Send coordinates to GPU
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, vertices2);
glEnableVertexAttribArray(0);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, texCoords);
glEnableVertexAttribArray(1);
//Bind texture
int u1 = glGetUniformLocation(this->textureShader.getId(), "u_Texture");
glActiveTexture(GL_TEXTURE0);
this->spriteTexture.bind();
glUniform1i(u1, 0);
//Draw
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glDisableVertexAttribArray(0);
glDisableVertexAttribArray(1);
But this gives me different results:
TextureComparison2.png - Dropbox
Then I tried to modify and test around with the texture coordinates. If I halve them, changing all 1.0s to 0.5, only the "1" field of my texture should be displayed? On Linux it is that way, but on Android just some random area of the texture is displayed.
So can anyone give me a hint as to what I am doing wrong?
I figured out the problem. I bind my attributes a_Vertex to 0 and a_TexCoordinate to 1. On PC that is the layout I happened to get, but on Android it is reversed. So I took a look in the OpenGL ES 2.0 reference concerning glBindAttribLocation: you have to bind the locations before linking the program. Now it works perfectly the same on Android as on PC.
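A minimal sketch of that fix, shown here with Android's GLES20 Java bindings rather than the original C++/SDL code (the call order is identical either way); vertexShader and fragmentShader are assumed to be already-compiled shader handles:
import static android.opengl.GLES20.*;

int program = glCreateProgram();
glAttachShader(program, vertexShader);
glAttachShader(program, fragmentShader);

// Bind the attribute locations BEFORE linking. Without this the driver may
// assign them in any order, which is why PC and Android disagreed.
glBindAttribLocation(program, 0, "a_Vertex");
glBindAttribLocation(program, 1, "a_TexCoordinate");

glLinkProgram(program);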

Scale, rotate, translate with matrices in OpenGL ES 2.0

I'm working with OpenGL ES 2.0 and trying to build my object class with some methods to rotate/translate/scale objects.
I set up my object at (0,0,0) and move it afterwards to the desired position on the screen. Below are my methods to move it separately. After that I run buildObjectModelMatrix to combine all the matrices into one objectMatrix, so I can take the vertices, multiply them with my modelMatrix/objectMatrix, and render afterwards.
What I think is right: I have to multiply my matrices in this order:
[scale]x[rotation]x[translation]
->
[temp]x[translation]
->
[objectMatrix]
I've found some literature; maybe I'll get it in a few minutes, and if I do, I will update this.
Beginning Android 3D
http://gamedev.stackexchange.com
setIdentityM(scaleMatrix, 0);
setIdentityM(translateMatrix, 0);
setIdentityM(rotateMatrix, 0);

public void translate(float x, float y, float z) {
    translateM(translateMatrix, 0, x, y, z);
    buildObjectModelMatrix();
}

public void rotate(float angle, float x, float y, float z) {
    rotateM(rotateMatrix, 0, angle, x, y, z);
    buildObjectModelMatrix();
}

public void scale(float x, float y, float z) {
    scaleM(scaleMatrix, 0, x, y, z);
    buildObjectModelMatrix();
}

private void buildObjectModelMatrix() {
    multiplyMM(tempM, 0, scaleMatrix, 0, rotateMatrix, 0);
    multiplyMM(objectMatrix, 0, tempM, 0, translateMatrix, 0);
}
SOLVED:
The problem within the whole thing is: if you scale before you translate, you get a difference in the distance you translate! The correct code for multiplying your matrices should be (correct me if I'm wrong):
private void buildObjectModelMatrix() {
    multiplyMM(tempM, 0, translateMatrix, 0, rotateMatrix, 0);
    multiplyMM(objectMatrix, 0, tempM, 0, scaleMatrix, 0);
}
With this you translate and rotate first; afterwards you can scale the object.
Tested with multiple objects... so I hope this helped :)
You know, this is the most common issue people hit when beginning to deal with matrix operations. Matrix multiplication works as if you were looking from the object's first-person view and receiving commands: for instance, if you began at (0,0,0) facing toward the positive X axis, with up being the positive Y axis, then translate (a,0,0) would mean "go forward", translate (0,0,a) would mean "go left", and rotate (a, 0, 1, 0) would mean "turn left"...
So if in your case you scaled by 3 units, rotated by 90 degrees and then translated by (2,0,0) what happens is you first enlarge yourself by scale of 3, then turn 90 degrees so you are now facing positive Z still being quite large. Then you go forward by 2 units measured in your own coordinate system which means you will actually go to (0,0,2*3). So you end up at (0,0,6) looking toward positive Z axis.
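As a quick numeric check of that walkthrough, here is a sketch using the same android.opengl.Matrix utilities as the question (each call post-multiplies, so this builds S * R * T; under OpenGL's right-handed convention the Z sign comes out negative here, but the 2 * 3 = 6 distance is the point):
import static android.opengl.Matrix.*;

float[] m = new float[16];
float[] origin = {0f, 0f, 0f, 1f};
float[] out = new float[4];

setIdentityM(m, 0);
scaleM(m, 0, 3f, 3f, 3f);        // m = S
rotateM(m, 0, 90f, 0f, 1f, 0f);  // m = S * R
translateM(m, 0, 2f, 0f, 0f);    // m = S * R * T
multiplyMV(out, 0, m, 0, origin, 0);
// out is (0, 0, -6, 1): the 2-unit translation happened in the scaled,
// rotated frame, so the object ends up 6 units away instead of 2.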
I believe this is the best way to imagine what goes on when dealing with such operations, and it might save your life when you have a bug in matrix operation order.
You should know that although this kind of matrix operating is normal when beginning with a 3D scene, you should try to move to a better system as soon as possible. What I mostly use is an object structure/class which contains 3 vectors: position, forward and up (this is much like using glLookAt, but not totally the same). When you have these 3 vectors, you can simply set a specific position or rotation using trigonometry or your matrix tools, by multiplying the vectors with matrices instead of matrices with matrices. Or you can work with them internally (first person), where for instance "go forward" would be done as position = position + forward*scale, and "turn left" would be rotating the forward vector around the up vector. Anyway, I hope you can see how to manipulate those 3 vectors to get a desired effect. To reconstruct the matrix from those 3 vectors, you need to generate one more vector, right, as the cross product of up and forward; the model matrix then consists of:
right.x,    right.y,    right.z,    0.0
up.x,       up.y,       up.z,       0.0
forward.x,  forward.y,  forward.z,  0.0
position.x, position.y, position.z, 1.0
Just note the row-column order may change depending on what you are working with.
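As a sketch of that layout in code (a hypothetical helper with the vectors as plain float[3]): OpenGL consumes a flat float[16] column-major, so each of the rows listed above fills four consecutive floats, i.e. one column of the matrix:
// Hypothetical helper: r, u, f are the right/up/forward basis vectors and
// p is the position, each a float[3].
static float[] buildModelMatrix(float[] r, float[] u, float[] f, float[] p) {
    return new float[] {
        r[0], r[1], r[2], 0f,
        u[0], u[1], u[2], 0f,
        f[0], f[1], f[2], 0f,
        p[0], p[1], p[2], 1f
    };
}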
I hope this gives you some better understanding...
