Simple textured quad rotation in OpenGL ES 2.0 - android

Edit 6 - Complete re-write in relation to comments/ongoing research
Edit 7 - Added projection / view matrix
As I'm not getting far with this, I added the view/projection matrix from the Google demo - please see the code below. If anyone can point out where I'm going wrong it really would be appreciated, as I'm still getting a blank screen when I put "gl_Position = a_position * uMVPMatrix;" + into my vertex shader (with "gl_Position = a_position;" + my quad is displayed, at least).
Declared at class level: (Quad class)
private final float[] rotationMat = new float[16];
private FloatBuffer flotRotBuf;
ByteBuffer rotBuf;
private int muRotationHandle = -1; // Handle to the rotation matrix in the vertex shader called "uRotate"
Declared at class level: (Renderer class)
private final float[] mVMatrix = new float[16];
private final float[] mProjMatrix = new float[16];
private final float[] mMVPMatrix = new float[16];
Routine that sets the texture and does (or is supposed to do) the rotation (this is in my Quad class):
public void setTexture(GLSurfaceView view, Bitmap imgTexture, float[] mvpMatrix){
this.imgTexture=imgTexture;
// get handle to shape's transformation matrix
mMVPMatrixHandle = GLES20.glGetUniformLocation(iProgId, "uMVPMatrix");
// Apply the projection and view transformation
GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mvpMatrix, 0);
// Matrix.setRotateM(rotationMat, 0, 45f, 0, 0, 1.0f); //Set rotation matrix with angle and (z) axis
// rotBuf = ByteBuffer.allocateDirect(rotationMat.length * 4);
// use the device hardware's native byte order
// rotBuf.order(ByteOrder.nativeOrder());
// create a floating point buffer from the ByteBuffer
// flotRotBuf = rotBuf.asFloatBuffer();
// add the coordinates to the FloatBuffer
// flotRotBuf.put(rotationMat);
// set the buffer to read the first coordinate
// flotRotBuf.position(0);
// muRotationHandle = GLES20.glGetUniformLocation(iProgId, "uRotation"); // grab the variable from the shader
// GLES20.glUniformMatrix4fv(muRotationHandle, 1, false, flotRotBuf); //Pass FloatBuffer containing rotation matrix info into vertex shader
//GLES20.glUniformMatrix4fv(muRotationHandle, 1, false, rotationMat, 1); //Also tried this, not using the FloatBuffer
//Vertex shader
String strVShader =
// "uniform mat4 uRotation;" +
"uniform mat4 uMVPMatrix;" +
"attribute vec4 a_position;\n"+
"attribute vec2 a_texCoords;" +
"varying vec2 v_texCoords;" +
"void main()\n" +
"{\n" +
"gl_Position = a_Position * uMVPMatrix;"+ //This is where it all goes wrong....
"v_texCoords = a_texCoords;" +
"}";
//Fragment shader
String strFShader =
"precision mediump float;" +
"varying vec2 v_texCoords;" +
"uniform sampler2D u_baseMap;" +
"void main()" +
"{" +
"gl_FragColor = texture2D(u_baseMap, v_texCoords);" +
"}";
iProgId = Utils.LoadProgram(strVShader, strFShader);
iBaseMap = GLES20.glGetUniformLocation(iProgId, "u_baseMap");
iPosition = GLES20.glGetAttribLocation(iProgId, "a_position");
iTexCoords = GLES20.glGetAttribLocation(iProgId, "a_texCoords");
texID = Utils.LoadTexture(view, imgTexture);
}
From my renderer class:
public void onSurfaceChanged(GL10 gl, int width, int height) {
// TODO Auto-generated method stub
//Set viewport size based on screen dimensions
GLES20.glViewport(0, 0, width, height);
float ratio = (float) width / height;
Matrix.frustumM(mProjMatrix, 0, -ratio, ratio, -1, 1, 3, 7);
}
public void onDrawFrame(GL10 gl) {
// TODO Auto-generated method stub
//Paint the screen the colour defined in onSurfaceCreated
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
// Set the camera position (View matrix)
Matrix.setLookAtM(mVMatrix, 0, 0, 0, -3, 0f, 0f, 0f, 0f, 1.0f, 0.0f);
// Calculate the projection and view transformation
Matrix.multiplyMM(mMVPMatrix, 0, mProjMatrix, 0, mVMatrix, 0);
quad1.setTexture(curView, myBitmap, mMVPMatrix); //SetTexture now modified to take a float array (See above) - Note I know it's not a good idea to have this in my onDrawFrame method - will move it once I have it working!
quad1.drawBackground();
}
I've now removed all rotation related stuff and am now just attempting to get a static quad to display after applying the uMVPMatrix in the vertex shader. But still nothing :-(
If I simply change that line back to the 'default' :
"gl_Position = a_position;\n"+
Then I at least get my textured quad displayed (Obviously no rotation and I would expect that).
Also, just to point out that mvpMatrix is definitely arriving intact in the setTexture method (it contains the same data as appears when I log the contents of mvpMatrix from the Google developers' code). I'm not sure how to check whether the shader is receiving it intact, though I have no reason to believe it isn't.
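A quick way to sanity-check the uniform handle and the upload is shown below - a minimal sketch, assuming the iProgId and mvpMatrix from the code above, run on the GL thread with the program bound via glUseProgram:
int handle = GLES20.glGetUniformLocation(iProgId, "uMVPMatrix");
if (handle == -1) {
    // -1 means the uniform is inactive: misspelled, or optimised out because
    // the shader never actually uses it (e.g. if compilation failed earlier).
    Log.e("Quad", "uMVPMatrix not found in program");
}
GLES20.glUniformMatrix4fv(handle, 1, false, mvpMatrix, 0);
int err = GLES20.glGetError();
if (err != GLES20.GL_NO_ERROR) {
    Log.e("Quad", "glUniformMatrix4fv error: 0x" + Integer.toHexString(err));
}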
Really do appreciate any and all help - I must be going very wrong somewhere but I just can't spot it. Thank you!
EDIT 2: Having added a bounty to this question, I would just like to know how to rotate my textured quad sprite (2D), keeping the code I have to render it as a base (i.e., what do I need to add to it in order to rotate, and why). Thanks!
EDIT 3 N/A
EDIT 4 Re-worded / simplified question
EDIT 5 Added error screenshot

Edit: Edited to support Java using Android SDK.
As Tobias indicated, the idiomatic solution to any vertex transformation in OpenGL is accomplished through matrix operations. If you plan to continue developing with OpenGL, it is important that you (eventually) understand the underlying linear algebra involved, but it is often best to utilize a math library that abstracts the linear algebra into a more readable form. On Android, you can manipulate float arrays with the android.opengl.Matrix class to create a rotation matrix like this:
// initialize rotation matrix
float[] rotationMat = new float[16];
Matrix.setIdentityM(rotationMat, 0);
// angle in degrees to rotate
float angle = 90f;
// axis to rotate about (z axis in your case)
float[] axis = {0.0f, 0.0f, 1.0f};
// For your case, rotate angle (in degrees) about the z axis.
Matrix.rotateM(rotationMat, 0, angle, axis[0], axis[1], axis[2]);
Then you can bind the rotation Matrix to a shader program like this:
// assuming shader program is currently bound ...
GLES20.glUniformMatrix4fv(GLES20.glGetUniformLocation(shaderProgramID, "uRotation"), 1, false, rotationMat, 0);
Where your vertex shader (of the program being passed rotationMat) would look something like:
precision mediump float;
uniform mat4 uMVPMatrix;
uniform mat4 uRotation;
attribute vec2 a_texCoords;
attribute vec3 a_position;
varying vec2 v_texCoord;
void main(void)
{
v_texCoord = a_texCoords;
gl_Position = uMVPMatrix * uRotation * vec4(a_position, 1.0);
}
Alternatively, you could premultiply uMVPMatrix * uRotation outside of the shader program and pass the result in as a single uniform, to avoid duplicating the computation for every vertex.
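A minimal sketch of that premultiplication on the Java side (assuming rotationMat from above, an mMVPMatrix like the one in the question, and a combined mat4 uniform hypothetically named uMVPRotation in the shader):
// Combine once per frame on the CPU instead of once per vertex on the GPU.
float[] mvpRotation = new float[16];
Matrix.multiplyMM(mvpRotation, 0, mMVPMatrix, 0, rotationMat, 0);
// Upload the combined matrix; the vertex shader then needs only one mat4.
int loc = GLES20.glGetUniformLocation(shaderProgramID, "uMVPRotation");
GLES20.glUniformMatrix4fv(loc, 1, false, mvpRotation, 0);
In the shader, gl_Position = uMVPRotation * vec4(a_position, 1.0); then replaces the two-multiply version.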
Once you are comfortable using this higher level API for matrix operations you can investigate how the internal operation is performed by reading this fantastic tutorial written by Nicol Bolas.

Rotation matrix for rotation around z:
cos a -sin a 0
sin a cos a 0
0 0 1
How to remember how to construct it:
a is the angle in radians; for a = 0 the matrix yields the identity matrix. cos has to be on the diagonal. There has to be a minus sign in front of one sin; switching the signs reverses the rotation's direction.
Likewise, rotations around x and y can be constructed.
Around x:
1 0 0
0 cos a sin a
0 -sin a cos a
Around y:
cos a 0 sin a
0 1 0
-sin a 0 cos a
If you are not familiar with matrix-arithmetic, here is some code:
for (int i=0; i<4; i++) {
vertices_new[i*5+0] = cos(a) * vertices[i*5+0] - sin(a) * vertices[i*5+1]; // cos(a) * v[i].x - sin(a) * v[i].y + 0 * v[i].z
vertices_new[i*5+1] = sin(a) * vertices[i*5+0] + cos(a) * vertices[i*5+1]; // sin(a) * v[i].x + cos(a) * v[i].y + 0 * v[i].z
vertices_new[i*5+2] = vertices[i*5+2]; // 0 * v[i].x + 0 * v[i].y + 1 * v[i].z
vertices_new[i*5+3] = vertices[i*5+3]; // copy texture u
vertices_new[i*5+4] = vertices[i*5+4]; // copy texture v
}

Related

OpenGL ES: Bad performance when calculating vertex position in vertex shader

I'm a beginner at OpenGL and I'm trying to animate a number of "objects" from one position to another every 5 seconds. If I calculate the position in the vertex shader, the FPS drops drastically; shouldn't these kinds of calculations be done on the GPU?
This is the vertex shader code:
#version 300 es
precision highp float;
precision highp int;
layout(location = 0) in vec3 vertexData;
layout(location = 1) in vec3 colourData;
layout(location = 2) in vec3 normalData;
layout(location = 3) in vec3 personPosition;
layout(location = 4) in vec3 oldPersonPosition;
layout(location = 5) in int start;
layout(location = 6) in int duration;
layout(std140, binding = 0) uniform Matrices
{ //base //offset
mat4 projection; // 64 // 0
mat4 view; // 64 // 0 + 64 = 64
int time; // 4 // 64 + 64 = 128
bool shade; // 4 // 128 + 4 = 132 two empty slots after this
vec3 midPoint; // 16 // 128 + 16 = 144
vec3 cameraPos; // 16 // 144 + 16 = 160
// size = 160+16 = 176. Aligned to 16, becomes 176.
};
out vec3 vertexColour;
out vec3 vertexNormal;
out vec3 fragPos;
void main() {
vec3 scalePos;
scalePos.x = vertexData.x * 3.0;
scalePos.y = vertexData.y * 3.0;
scalePos.z = vertexData.z * 3.0;
vertexColour = colourData;
vertexNormal = normalData;
float startFloat = float(start);
float durationFloat = float(duration);
float timeFloat = float(time);
// Wrap around catch to avoid start being close to 1M but time has wrapped around to 0
if (startFloat > timeFloat) {
startFloat = startFloat - 1000000.0;
}
vec3 movePos;
float elapsedTime = timeFloat - startFloat;
if (elapsedTime > durationFloat) {
movePos = personPosition;
} else {
vec3 moveVector = personPosition - oldPersonPosition;
float moveBy = elapsedTime / durationFloat;
movePos = oldPersonPosition + moveVector * moveBy;
}
fragPos = movePos;
gl_Position = projection * view * vec4(scalePos + movePos, 1.0);
}
Every 5 seconds the buffers are updated:
glBindBuffer(GL_ARRAY_BUFFER, this->personPositionsVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(float) * this->persons.size() * 3, this->positions, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, this->personOldPositionsVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(float) * this->persons.size() * 3, this->oldPositions, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, this->timeStartVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(int) * this->persons.size(), animiationStart, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, this->timeDurationVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(int) * this->persons.size(), animiationDuration, GL_STATIC_DRAW);
I did a test calculating the positions on the CPU and updating the positions buffer every draw call, and that doesn't give me a performance drop, but it feels fundamentally wrong?
void PersonView::animatePositions() {
float duration = 1500;
double currentTime = now_ms();
double elapsedTime = currentTime - animationStartTime;
if (elapsedTime > duration) {
return;
}
for (int i = 0; i < this->persons.size() * 3; i++) {
float moveDistance = this->positions[i] - this->oldPositions[i];
float moveBy = (float)(elapsedTime / duration);
this->moveByPositions[i] = this->oldPositions[i] + moveDistance * moveBy;
}
glBindBuffer(GL_ARRAY_BUFFER, this->personMoveByPositionsVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(float) * this->persons.size() * 3, this->moveByPositions, GL_STATIC_DRAW);
}
On devices with a better SoC (Snapdragon 835, etc.) the frame drop isn't as drastic as on devices with a midrange SoC (Snapdragon 625).
Right off the bat, I can see that you're multiplying the projection and view matrices in the vertex shader, but there are no places where you rely on the view or projection matrix independently.
Multiplying two 4x4 matrices involves a large number of arithmetic operations, and here they are done for every vertex you're drawing. In your case, it seems you can avoid this altogether.
Instead of your current implementation, try multiplying the view and projection matrices outside of the shader, then bind the result as a single viewProjection matrix:
Old:
gl_Position = projection * view * vec4(scalePos + movePos, 1.0);
New:
gl_Position = projectionView * vec4(scalePos + movePos, 1.0);
This way, the proj and view matrix are multiplied once per frame, instead of once per vertex. This change should drastically improve performance - especially if you have a large amount of vertices.
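The question's buffer-upload code is C++, but the idea is the same on any binding. Here is a minimal Android/Java sketch of the once-per-frame multiply (assuming mProjMatrix and mVMatrix are the per-frame matrices, and assuming the shader is changed to take a plain projectionView uniform - with the std140 block from the question you would write the result into the UBO instead):
// Multiply once per frame on the CPU...
float[] projectionView = new float[16];
Matrix.multiplyMM(projectionView, 0, mProjMatrix, 0, mVMatrix, 0);
// ...and upload the single combined matrix.
int loc = GLES30.glGetUniformLocation(program, "projectionView");
GLES30.glUniformMatrix4fv(loc, 1, false, projectionView, 0);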
Generally speaking, the GPU is indeed a lot more efficient than the CPU at arithmetic like this, but you should also consider the number of calculations. The vertex shader is executed per vertex and should only calculate things that actually differ between vertices.
Performing a one-time calculation on the CPU is always better than performing the same calculation on the GPU n times (n = total vertices).

index greater than GL_MAX_VERTEX_ATTRIBS (while it's actually lower than the HW limit)

I'm having a problem running my app on my phone, while it runs fine on the emulator. I'm using OpenGL ES 2.0 for Android and the problem seems to be in OpenGL. The following errors are basically repeated every time I draw a frame:
gles_state_set_error_internal:63: GLES error info:<program> could not be made part of current state. <program> is not linked GLES a_norm-1
gles_state_set_error_internal:62: GLES ctx: 0x7fa2596008, error code:0x502
gles_state_set_error_internal:63: GLES error info:<index> is greater than or equal to GL_MAX_VERTEX_ATTRIBS
My phone is an Allview P5 Energy running Android 5.1, kernel 3.10.65+
The emulator that runs my code well is a Google Nexus 4, 4.2.2. API 17.
As the error code suggests, I may be trying to write too many attributes per vertex, so I checked the maximum number of attributes supported by my (emulated) hardware with the following code snippet:
IntBuffer max = IntBuffer.allocate(1);
GLES20.glGetIntegerv(GLES20.GL_MAX_VERTEX_ATTRIBS,max);
System.err.println("GLES MAX IS"+max.get(0));
For the emulator this gives 29, and for my real phone 16. OK, 16 is less, but as you can see in my shader I only use 3 attributes, and 3 < 16... The shaders are as follows:
public static final String vertexLightTexture = "uniform mat4 u_MVPMatrix;"+ // A constant representing the combined model/view/projection matrix.
"uniform mat4 u_MVMatrix;"+ // A constant representing the combined model/view matrix.
"attribute vec4 a_Position;"+ // Per-vertex position information we will pass in.
"attribute vec3 a_Normal;"+ // Per-vertex normal information we will pass in.
"attribute vec2 a_TexCoordinate;"+ // Per-vertex texture coordinate information we will pass in.
"varying vec3 v_Position;"+ // This will be passed into the fragment shader.
"varying vec3 v_Normal;"+ // This will be passed into the fragment shader.
"varying vec2 v_TexCoordinate;"+ // This will be passed into the fragment shader.
"void main()"+
"{"+
"v_Position = vec3(u_MVMatrix * a_Position);"+ // Transform the vertex into eye space.
"v_TexCoordinate = a_TexCoordinate;"+ // Pass through the texture coordinate.
"v_Normal = vec3(u_MVMatrix * vec4(a_Normal, 0.0));"+ // Transform the normal's orientation into eye space.
"gl_Position = u_MVPMatrix * a_Position;"+ // regular position
"}";
public static final String fragmentLightTexture = "precision mediump float;"+
"uniform vec3 u_LightPos;"+ // The position of the light in eye space.
"uniform sampler2D s_Texture;"+ // The input texture.
"varying vec3 v_Position;"+ // Interpolated position for this fragment
"varying vec3 v_Normal;"+ // Interpolated normal for this fragment
"varying vec2 v_TexCoordinate;"+// Interpolated texture coordinate per fragment.\n" +
"void main()"+
"{"+
"float distance = length(u_LightPos - v_Position);"+ // Will be used for attenuation
"vec3 lightVector = normalize(u_LightPos - v_Position);" +// Get a lighting direction vector from the light to the vertex
"float diffuse = max(dot(v_Normal, lightVector), 0.0);" + // dot product for the illumination angle.
"diffuse = diffuse * (1.5 / (1.0 + (distance/8)));\n" + // Attenuation.
"diffuse = diffuse + 0.2;" + // ambient light
"gl_FragColor = (diffuse * texture2D(s_Texture, v_TexCoordinate));" +// Multiply the color by the diffuse illumination level and texture value to get final output color.
"}";
The code I use to pass data into the shader is the following (first for initialization):
int vertexShader = MyGLRenderer.loadShader(GLES20.GL_VERTEX_SHADER, Shaders.vertexLightTexture);
int fragmentShader = MyGLRenderer.loadShader(GLES20.GL_FRAGMENT_SHADER,Shaders.fragmentLightTexture);
// create empty OpenGL ES Program
mProgram = GLES20.glCreateProgram();
// add the vertex shader to program
GLES20.glAttachShader(mProgram, vertexShader);
// add the fragment shader to program
GLES20.glAttachShader(mProgram, fragmentShader);
// creates OpenGL ES program executables
GLES20.glLinkProgram(mProgram);
In the init method I also upload the vertex, normal and texture coordinates into buffers in GPU memory. Then for each frame I execute the code below, and basically for each data element I try to write I get the aforementioned errors. This is the draw method called for each frame:
public void draw(float[] mvpMatrix, float[] mvMatrix, float[] lightPosition){
// group all wall elements at once
// Add program to OpenGL ES environment
GLES20.glUseProgram(mProgram);
// Get handle to shape's transformation matrix]
int mMVPMatrixHandle = GLES20.glGetUniformLocation(mProgram, "u_MVPMatrix");
System.err.println("GLES MVPMatrix" + mMVPMatrixHandle);
// Apply the projection and view transformation
GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mvpMatrix, 0);
// Get handle to shape's transformation matrix]
int mMVMatrixHandle = GLES20.glGetUniformLocation(mProgram, "u_MVMatrix");
System.err.println("GLES MVMatrix" + mMVMatrixHandle);
// Apply the view transformation
GLES20.glUniformMatrix4fv(mMVMatrixHandle, 1, false, mvMatrix, 0);
// Get handle to textures locations
int mSamplerLoc = GLES20.glGetUniformLocation(mProgram, "s_Texture");
System.err.println("GLES s_tex" + mSamplerLoc);
// Set the active texture unit to texture unit 0.
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
// Bind the texture to this unit.
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureHandle[0]);
// Set the sampler texture unit to 0, where we have saved the texture.
GLES20.glUniform1i(mSamplerLoc, 0);
// get handle to vertex shader's vPosition member
int mPositionHandle = GLES20.glGetAttribLocation(mProgram, "a_Position");
System.err.println("GLES a_pos" + mPositionHandle);
// Use the buffered data from GPU memory
int vertexStride = COORDINATES_PER_VERTEX * 4;
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, positionBufferIndex);
GLES20.glEnableVertexAttribArray(mPositionHandle);
GLES20.glVertexAttribPointer(mPositionHandle, 3, GLES20.GL_FLOAT, false, vertexStride, 0);
// get handle to vertex shader's vPosition member
int mNormalHandle = GLES20.glGetAttribLocation(mProgram, "a_Normal");
System.err.println("GLES a_norm" + mNormalHandle);
// Use the buffered data from GPU memory
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, normalBufferIndex);
GLES20.glEnableVertexAttribArray(mNormalHandle);
GLES20.glVertexAttribPointer(mNormalHandle, 3, GLES20.GL_FLOAT, false, vertexStride, 0);
// Get handle to texture coordinates location
int mTextureCoordinateHandle = GLES20.glGetAttribLocation(mProgram, "a_TexCoordinate" );
System.err.println("GLES a_tex" + mTextureCoordinateHandle);
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, uvBufferIndex);
GLES20.glEnableVertexAttribArray(mTextureCoordinateHandle);
GLES20.glVertexAttribPointer(mTextureCoordinateHandle, 2, GLES20.GL_FLOAT, false, 0, 0);
// Clear the currently bound buffer (so future OpenGL calls do not use this buffer).
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);
// Pass in the light position in eye space.
int mLightPosHandle = GLES20.glGetUniformLocation(mProgram, "u_LightPos");
System.err.println("GLES u_light" + mLightPosHandle);
GLES20.glUniform3f(mLightPosHandle, lightPosition[0], lightPosition[1], lightPosition[2]);
// Draw the Square
GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, indexBufferIndex);
GLES20.glDrawElements(GLES20.GL_TRIANGLES,numberOfIndices,GLES20.GL_UNSIGNED_SHORT,0);
// Clear the currently bound buffer (so future OpenGL calls do not use this buffer).
GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, 0);
// Disable vertex array
GLES20.glDisableVertexAttribArray(mPositionHandle);
GLES20.glDisableVertexAttribArray(mNormalHandle);
GLES20.glDisableVertexAttribArray(mTextureCoordinateHandle);
}
Any suggestions would be really helpful. I've tried a lot already, like using a simple cube model instead of the high-poly figure I'm trying to draw, but nothing has worked so far to get rid of the errors on the phone. On the emulator every model is drawn fine...
The problem was in the fragment shader. The phone's GLSL compiler couldn't handle dividing by 8; it had to be 8.0, so the compiler knows it's a float. GLSL ES does not implicitly convert int to float, so distance/8 is a compile error on a strict compiler; the emulator's compiler was simply more lenient. Because the fragment shader failed to compile, the program never linked, glGetAttribLocation returned -1, and enabling attribute -1 is what produced the misleading GL_MAX_VERTEX_ATTRIBS error.
public static final String fragmentLightTexture = "precision mediump float;"+
"uniform vec3 u_LightPos;"+ // The position of the light in eye space.
"uniform sampler2D s_Texture;"+ // The input texture.
"varying vec3 v_Position;"+ // Interpolated position for this fragment
"varying vec3 v_Normal;"+ // Interpolated normal for this fragment
"varying vec2 v_TexCoordinate;"+// Interpolated texture coordinate per fragment.\n" +
"void main()"+
"{"+
"float distance = length(u_LightPos - v_Position);"+ // Will be used for attenuation
"vec3 lightVector = normalize(u_LightPos - v_Position);" +// Get a lighting direction vector from the light to the vertex
"float diffuse = max(dot(v_Normal, lightVector), 0.0);" + // dot product for the illumination angle.
"diffuse = diffuse * (1.5 / (1.0 + (distance/8.0)));\n" + // Attenuation.
"diffuse = diffuse + 0.2;" + // ambient light
"gl_FragColor = (diffuse * texture2D(s_Texture, v_TexCoordinate));" +// Multiply the color by the diffuse illumination level and texture value to get final output color.
"}";
I have to say the GL_MAX_VERTEX_ATTRIBS error message wasn't helpful at all and really sent me searching in the wrong direction.
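Checking the compile status when loading shaders would have surfaced the real error immediately. A minimal sketch using standard GLES20 calls (the log text itself varies by driver):
int shader = GLES20.glCreateShader(GLES20.GL_FRAGMENT_SHADER);
GLES20.glShaderSource(shader, fragmentLightTexture);
GLES20.glCompileShader(shader);
int[] status = new int[1];
GLES20.glGetShaderiv(shader, GLES20.GL_COMPILE_STATUS, status, 0);
if (status[0] == 0) {
    // On the failing phone this would have printed the driver's actual
    // complaint about the int/float division instead of a generic error.
    Log.e("Shader", GLES20.glGetShaderInfoLog(shader));
}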

Understanding buffer in OpenGLES android

I have this vertex shader that I'm using to rotate or translate the object, depending on state values.
private final String vertexShader="" +
"attribute vec4 vertex;" +
"attribute vec4 colors;" +
"varying vec4 vcolors;" +
"uniform int x;" +
"uniform vec4 translate;"+
"uniform mat4 rotate;"+
"void main(){" +
"vec4 app_verte=vertex;" +
"if(x==1){" +
"app_verte=vertex+translate;" +
"}else if(x==2){" +
"app_verte=rotate*app_verte;" +
"}" +
"vcolors=colors;" +
"gl_Position=app_verte;" +
"}";
For the rotation I use a matrix that is built from a float[16] array, as follows:
|cos(angle),-sin(angle),0,0|
|sin(angle), cos(angle),0,0|
|0 , ,0,0|
Now I have a few questions, because I'm finding this really hard to understand. If I want to change the type of transformation, I have to set the x value. To get a continuous transformation, I assumed the vertex buffer would stay the same and that after a transformation the values in the buffer would be updated. But nothing happens, because it transforms and draws with the same coordinates every time; I only upload the coordinate buffer once at the start. Is there a way to reuse the same buffer that is already in VRAM without uploading it every time? And if not, how can I pull the transformed buffer back into my buffer object after the transformation, without transforming the points myself in the array and re-uploading them?
Sorry for my English, and thanks to all indeed.
The vertex buffers are designed this way so that you send them to the GPU only once, to reduce traffic. You then use matrices (or other systems, such as a translation vector) to apply the transform in the vertex shader, so you only ever send up to a 4x4 float buffer.
If I understand your issue correctly, it lies in using multiple systems at the same time. You have a translation vector and a matrix, but you use either one or the other. So in your case you might be able to simply apply both of them in the shader, as app_verte = rotate*app_verte + translate; or app_verte = rotate*(app_verte + translate); already these two are not the same, and I am guessing that at some point you will need something like app_verte = rotate2*(rotate1*app_verte + translate1) + translate2; which is not sustainable, since the number of operations will increase over time.
So you need to choose a single system, which in your case should be a matrix. Rather than sending the translation as a separate vector, you can apply the translation to the matrix on the CPU side of your application and send only that to the shader. You need tools to multiply matrices and to generate translation and rotation matrices. You can write them yourself, but looking at the one you posted I am pretty sure the second-to-last value should be 1 and not 0 (though it must be a typo, since the last row contains 3 values while the others contain 4).
So have a single matrix which at the beginning is set to identity, which corresponds to x=0. Then for the x=1 situation set that matrix as a translation matrix, myMatrix = generateTranslationMatrix(x, y, z). And for x=2 do the same with a rotation matrix, myMatrix = generateRotationMatrix(x, y, z, angle). When you need to concatenate the two operations you simply multiply them: myMatrix = generateTranslationMatrix(x, y, z)*generateRotationMatrix(x, y, z, angle). But there is no reason to keep the values separate, so in the end you just want some methods to manipulate the state of the orientation:
Matrix myMatrix;
onLoad() {
myMatrix = Matrix.identity();
}
onTurnRight(float angle) {
myMatrix = myMatrix * generateRotationMatrix(0, 1, 0, angle);
}
onMove(float x, float y, float z) {
myMatrix = myMatrix * generateTranslationMatrix(x, y, z);
}
Then you can add other methods to your code as needed. For instance, if you handle touch events so that moving a finger up or down moves forward or backwards, while left and right rotate the object, it will look something like this:
onFingerMoved(float x, float y) {
const float xfactor = 0.01; // Modify to control the speed of rotation
const float yfactor = -0.1; // Modify to control the speed of movement
float dx = previousX - x;
float dy = previousY - y;
onTurnRight(dx * xfactor);
onMove(.0, .0, dy * yfactor); // Assuming the Z coordinate is forward
previousX = x;
previousY = y;
}
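On Android, the hypothetical generateTranslationMatrix/generateRotationMatrix helpers above map directly onto android.opengl.Matrix, which works in place on float[16] arrays. A minimal sketch of the same state-keeping:
private final float[] myMatrix = new float[16];

void onLoad() {
    Matrix.setIdentityM(myMatrix, 0); // x=0: identity, no transform
}

void onTurnRight(float angle) {
    // Concatenate a rotation about the y axis (angle in degrees).
    Matrix.rotateM(myMatrix, 0, angle, 0f, 1f, 0f);
}

void onMove(float x, float y, float z) {
    // Concatenate a translation.
    Matrix.translateM(myMatrix, 0, x, y, z);
}
Each frame you then upload myMatrix once with GLES20.glUniformMatrix4fv, replacing the separate translate and rotate uniforms (and the x switch) in the shader.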

How to use OpenGL to emulate OpenCV's warpPerspective functionality (perspective transform)

I've done image warping using OpenCV in Python and C++; the original post showed the Coca-Cola logo warped in place at the corners I had selected, the source images, and a link to a full album with transition pics and a description.
I need to do exactly this, but in OpenGL. I'll have:
Corners inside which I have to map the warped image
A homography matrix that maps the transformation of the logo image
into the logo image you see inside the final image (using OpenCV's
warpPerspective), something like this:
[[ 2.59952324e+00, 3.33170976e-01, -2.17014066e+02],
[ 8.64133587e-01, 1.82580111e+00, -3.20053715e+02],
[ 2.78910149e-03, 4.47911310e-05, 1.00000000e+00]]
Main image (the running track image here)
Overlay image (the Coca Cola image here)
Is it possible? I've read a lot and have started on OpenGL basics tutorials, but can it be done from just what I have? Would the OpenGL implementation be faster, say, around ~10 ms?
I'm currently playing with this tutorial here:
http://ogldev.atspace.co.uk/www/tutorial12/tutorial12.html
Am I going in the right direction? Total OpenGL newbie here, so please bear with me. Thanks.
After trying a number of solutions proposed here and elsewhere, I ended up solving this by writing a fragment shader that replicates what warpPerspective does.
The fragment shader code looks something like:
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
// NOTE: you will need to pass the INVERSE of the homography matrix, as well as
// the width and height of your image as uniforms!
uniform highp mat3 inverseHomographyMatrix;
uniform highp float width;
uniform highp float height;
void main()
{
// Texture coordinates will run [0,1],[0,1];
// Convert to "real world" coordinates
highp vec3 frameCoordinate = vec3(textureCoordinate.x * width, textureCoordinate.y * height, 1.0);
// Determine what 'z' is
highp vec3 m = inverseHomographyMatrix[2] * frameCoordinate;
highp float zed = 1.0 / (m.x + m.y + m.z);
frameCoordinate = frameCoordinate * zed;
// Determine translated x and y coordinates
highp float xTrans = inverseHomographyMatrix[0][0] * frameCoordinate.x + inverseHomographyMatrix[0][1] * frameCoordinate.y + inverseHomographyMatrix[0][2] * frameCoordinate.z;
highp float yTrans = inverseHomographyMatrix[1][0] * frameCoordinate.x + inverseHomographyMatrix[1][1] * frameCoordinate.y + inverseHomographyMatrix[1][2] * frameCoordinate.z;
// Normalize back to [0,1],[0,1] space
highp vec2 coords = vec2(xTrans / width, yTrans / height);
// Sample the texture if we're mapping within the image, otherwise set color to black
if (coords.x >= 0.0 && coords.x <= 1.0 && coords.y >= 0.0 && coords.y <= 1.0) {
gl_FragColor = texture2D(inputImageTexture, coords);
} else {
gl_FragColor = vec4(0.0,0.0,0.0,0.0);
}
}
Note that the homography matrix we are passing in here is the INVERSE homography matrix! You have to invert the homography matrix that you would pass into warpPerspective; otherwise this code will not work.
The vertex shader does nothing but pass through the coordinates:
// Vertex shader
attribute vec4 position;
attribute vec4 inputTextureCoordinate;
varying vec2 textureCoordinate;
void main() {
// Nothing happens in the vertex shader
textureCoordinate = inputTextureCoordinate.xy;
gl_Position = position;
}
Pass in unaltered texture coordinates and position coordinates (i.e. textureCoordinates = [(0,0),(0,1),(1,0),(1,1)] and positionCoordinates = [(-1,-1),(-1,1),(1,-1),(1,1)], for a triangle strip), and this should work!
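The answer doesn't show the CPU side of the inversion. Below is a minimal Java sketch: an adjugate inverse of the row-major 3x3, uploaded with transpose = false, which matches how this shader indexes the matrix (it reads inverseHomographyMatrix[0] as the first row). The names homographyRow, program, imageWidth and imageHeight are hypothetical.
// Invert a row-major 3x3 homography via the adjugate (assumes det != 0).
static float[] invert3x3(float[] m) {
    float a = m[0], b = m[1], c = m[2];
    float d = m[3], e = m[4], f = m[5];
    float g = m[6], h = m[7], i = m[8];
    float det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g);
    float s = 1.0f / det;
    return new float[] {
        (e * i - f * h) * s, (c * h - b * i) * s, (b * f - c * e) * s,
        (f * g - d * i) * s, (a * i - c * g) * s, (c * d - a * f) * s,
        (d * h - e * g) * s, (b * g - a * h) * s, (a * e - b * d) * s
    };
}

// Upload. Row-major data with transpose=false makes each GLSL column hold a
// row, which is exactly what the inverseHomographyMatrix[0][1]-style
// indexing in the shader above expects.
float[] invH = invert3x3(homographyRow); // the matrix you'd give warpPerspective
GLES20.glUniformMatrix3fv(
        GLES20.glGetUniformLocation(program, "inverseHomographyMatrix"),
        1, false, invH, 0);
GLES20.glUniform1f(GLES20.glGetUniformLocation(program, "width"), imageWidth);
GLES20.glUniform1f(GLES20.glGetUniformLocation(program, "height"), imageHeight);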
You can do perspective warping of the texture using texture2DProj(), or alternatively using texture2D() by dividing the st coordinates of the texture (which is what texture2DProj does).
Have a look here: Perspective correct texturing of trapezoid in OpenGL ES 2.0.
warpPerspective projects the (x,y,1) coordinate with the matrix and then divides (u,v) by w, like texture2DProj(). You'll have to modify the matrix so the resulting coordinates are properly normalised.
In terms of performance, if you want to read the data back to the CPU your bottleneck is glReadPixels. How long it will take depends on your device. If you're just displaying, the OpenGL ES calls will take much less than 10ms, assuming that you have both textures loaded to GPU memory.
[edit] This worked on my Galaxy S9, but on my car's Android unit it had an issue where the whole output texture was white. I've stuck with the original shader and it works :)
You can use mat3*vec3 ops in the fragment shader:
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
uniform highp mat3 inverseHomographyMatrix;
uniform highp float width;
uniform highp float height;
void main()
{
highp vec3 frameCoordinate = vec3(textureCoordinate.x * width, textureCoordinate.y * height, 1.0);
highp vec3 trans = inverseHomographyMatrix * frameCoordinate;
highp vec2 coords = vec2(trans.x / width, trans.y / height) / trans.z;
if (coords.x >= 0.0 && coords.x <= 1.0 && coords.y >= 0.0 && coords.y <= 1.0) {
gl_FragColor = texture2D(inputImageTexture, coords);
} else {
gl_FragColor = vec4(0.0,0.0,0.0,0.0);
}
}
If you want a transparent background, don't forget to add:
GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);
GLES20.glBlendEquation(GLES20.GL_FUNC_ADD);
And set transpose flag (in case you use the above shader):
GLES20.glUniformMatrix3fv(H_P2D, 1, true, homography, 0);

Android OpenGLES 2.0 2D Rendering

After two days of bashing my head and trying to figure this stuff out I have been woefully unsuccessful, hopefully someone can point me in the right direction.
I am trying to make a tile-based game in GLES 2.0, but I can't get anything to show up the way I want. Basically I have an array of vertices that make up pairs of triangles that form a square grid. I want to use GLES20.glDrawArrays() to draw subsections of this grid at a time.
I have figured out how to "view" from a different perspective using a combination of Matrix.orthoM() and Matrix.setLookAtM(), but for the life of me I can't figure out how to make my triangles not fill the entire screen.
I really need some guidance on setting up a projection so that if the triangle is defined as (0,0,0) (0,20,0) (20,0,0) it shows up on the screen as 20 pixels wide and 20 pixels tall, translated by my current view.
Here is what I have currently, but it just fills my entire screen with green. If someone could show me the correct way to manipulate the scene so that it fills the camera, or so the camera only shows like 20 triangles wide by 10 triangles high, that would make my week.
When the surface changes:
GLES20.glViewport(0, 0, ScreenX, ScreenY);
float ratio = ScreenX / ScreenY;
Matrix.orthoM(_ProjMatrix, 0,
-ratio,
ratio,
-1, 1,
3, 7);
Matrix.setLookAtM(_VMatrix, 0,
60, 60, 7,
60, 60, 0,
0, 1, 0);
Beginning drawing:
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
Matrix.multiplyMM(_MVPMatrix, 0, _ProjMatrix, 0, _VMatrix, 0);
if (_activeMap != null)
_activeMap.draw(0, 0, (int)ScreenX, (int)ScreenY, _MVPMatrix);
The draw function:
public void draw(int x, int y, int width, int height, float[] MVPMatrix)
{
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
GLES20.glUseProgram(_pHandle);
GLES20.glUniformMatrix4fv(_uMVPMatrixHandle, 1, false, MVPMatrix, 0);
int minRow, minCol, maxRow, maxCol;
minRow = (int) (y / Engine.TileSize);
minCol = (int) (x / Engine.TileSize);
maxRow = (int) (minRow + (height / Engine.TileSize));
maxCol = (int) (minCol + (width / Engine.TileSize));
minRow = (minRow < 0) ? 0 : minRow;
minCol = (minCol < 0) ? 0 : minCol;
maxRow = (maxRow > _rows) ? (int)_rows : maxRow;
maxCol = (maxCol > _cols) ? (int)_cols : maxCol;
for (int r = minRow; r < maxRow - 1; r++)
for (int d = 0; d < _vBuffers.length; d++)
{
_vBuffers[d].position(0);
GLES20.glVertexAttribPointer(_vAttHandle, 3, GLES20.GL_FLOAT,
false,
0, _vBuffers[d]);
GLES20.glEnableVertexAttribArray(_vAttHandle);
GLES20.glDrawArrays(GLES20.GL_TRIANGLES,
(int) (r * 6 * _cols),
(maxCol - minCol) * 6);
}
}
Shader script:
private static final String _VERT_SHADER =
"uniform mat4 uMVPMatrix; \n"
+ "attribute vec4 vPosition; \n"
+ "void main() \n"
+ "{ \n"
+ " gl_Position = uMVPMatrix * vPosition; \n"
+ "} \n";
private static final String _FRAG_SHADER =
"precision mediump float; \n"
+ "void main() \n"
+ "{ \n"
+ " gl_FragColor = vec4 (0.63671875, 0.76953125, 0.22265625, 1.0); \n"
+ "} \n";
For a tile based game a simple translate would be far more appropriate, setLookAt is just overkill.
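For instance, a pixel-aligned orthographic projection plus a camera translation does the whole job. A minimal sketch (camX/camY are hypothetical camera offsets in pixels; ScreenX/ScreenY as in the question):
// Map GL units 1:1 to pixels: a triangle at (0,0) (0,20) (20,0) is 20px wide.
float[] projMatrix = new float[16];
Matrix.orthoM(projMatrix, 0, 0f, ScreenX, 0f, ScreenY, -1f, 1f);

// The "camera" is just the world translated the opposite way.
float[] viewMatrix = new float[16];
Matrix.setIdentityM(viewMatrix, 0);
Matrix.translateM(viewMatrix, 0, -camX, -camY, 0f);

float[] mvpMatrix = new float[16];
Matrix.multiplyMM(mvpMatrix, 0, projMatrix, 0, viewMatrix, 0);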
I hope this may help you. Check this link for OpenGL programming:
OpenGL Programming Guide
Go to Chapter 3, Viewing; there you can find information about projection.
