After two days of bashing my head against this and trying to figure it out, I have been woefully unsuccessful; hopefully someone can point me in the right direction.
I am trying to make a tile-based game in GLES 2.0, but I can't get anything to show up the way I want. Basically, I have an array of vertices that make up pairs of triangles forming a square grid. I want to use GLES20.glDrawArrays() to draw subsections of this grid at a time.
I have figured out how to "view" from a different perspective using a combination of Matrix.orthoM() and Matrix.setLookAtM(), but for the life of me I can't figure out how to have my triangles not fill the entire screen.
I really need some guidance on setting up a projection so that if the triangle is defined as (0,0,0) (0,20,0) (20,0,0) it shows up on the screen as 20 pixels wide and 20 pixels tall, translated by my current view.
Here is what I have currently, but it just fills my entire screen with green. If someone could show me the correct way to manipulate the scene so that it fills the camera, or so that the camera only shows, say, 20 triangles wide by 10 triangles high, that would make my week.
When the surface changes:
GLES20.glViewport(0, 0, ScreenX, ScreenY);
float ratio = ScreenX / ScreenY;
Matrix.orthoM(_ProjMatrix, 0,
-ratio,
ratio,
-1, 1,
3, 7);
Matrix.setLookAtM(_VMatrix, 0,
60, 60, 7,
60, 60, 0,
0, 1, 0);
Beginning drawing:
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
Matrix.multiplyMM(_MVPMatrix, 0, _ProjMatrix, 0, _VMatrix, 0);
if (_activeMap != null)
_activeMap.draw(0, 0, (int)ScreenX, (int)ScreenY, _MVPMatrix);
The draw function:
public void draw(int x, int y, int width, int height, float[] MVPMatrix)
{
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
GLES20.glUseProgram(_pHandle);
GLES20.glUniformMatrix4fv(_uMVPMatrixHandle, 1, false, MVPMatrix, 0);
int minRow, minCol, maxRow, maxCol;
minRow = (int) (y / Engine.TileSize);
minCol = (int) (x / Engine.TileSize);
maxRow = (int) (minRow + (height / Engine.TileSize));
maxCol = (int) (minCol + (width / Engine.TileSize));
minRow = (minRow < 0) ? 0 : minRow;
minCol = (minCol < 0) ? 0 : minCol;
maxRow = (maxRow > _rows) ? (int)_rows : maxRow;
maxCol = (maxCol > _cols) ? (int)_cols : maxCol;
for (int r = minRow; r < maxRow - 1; r++)
for (int d = 0; d < _vBuffers.length; d++)
{
_vBuffers[d].position(0);
GLES20.glVertexAttribPointer(_vAttHandle, 3, GLES20.GL_FLOAT,
false,
0, _vBuffers[d]);
GLES20.glEnableVertexAttribArray(_vAttHandle);
GLES20.glDrawArrays(GLES20.GL_TRIANGLES,
(int) (r * 6 * _cols),
(maxCol - minCol) * 6);
}
}
Shader script:
private static final String _VERT_SHADER =
"uniform mat4 uMVPMatrix; \n"
+ "attribute vec4 vPosition; \n"
+ "void main() \n"
+ "{ \n"
+ " gl_Position = uMVPMatrix * vPosition; \n"
+ "} \n";
private static final String _FRAG_SHADER =
"precision mediump float; \n"
+ "void main() \n"
+ "{ \n"
+ " gl_FragColor = vec4 (0.63671875, 0.76953125, 0.22265625, 1.0); \n"
+ "} \n";
For a tile-based game, a simple translate would be far more appropriate; setLookAt is just overkill.
I hope this helps. Check this link for OpenGL programming:
OpenGL Programming Guide
Go to Chapter 3, Viewing, where you can find information about projection.
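For example, here is a minimal sketch of a pixel-based setup, assuming you keep the same matrix fields from the code above (camX/camY are hypothetical scroll offsets in pixels): map one world unit to one pixel with orthoM() and replace setLookAtM() with a plain translation.
// When the surface changes: one GL unit == one pixel.
GLES20.glViewport(0, 0, (int) ScreenX, (int) ScreenY);
Matrix.orthoM(_ProjMatrix, 0, 0, ScreenX, 0, ScreenY, -1, 1);
// Per frame: the "camera" is just a translation by the scroll offset.
Matrix.setIdentityM(_VMatrix, 0);
Matrix.translateM(_VMatrix, 0, -camX, -camY, 0);
Matrix.multiplyMM(_MVPMatrix, 0, _ProjMatrix, 0, _VMatrix, 0);
// With this projection, a triangle (0,0,0) (0,20,0) (20,0,0) covers exactly 20x20 pixels.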
I have reports from my users about issues with rendering of half-float data on certain devices with Mali GPUs (Huawei Honor 9 and Samsung Galaxy S10+ with Mali G71 and G76 respectively).
It results in garbled rendering on these devices, while it works correctly on Adreno and PowerVR GPUs.
I've double-checked the code and it seems to be correct:
...
if (model.hasHalfFloats()) {
GLES20.glVertexAttribPointer(shaderOutline.getRm_Vertex(), 3, GLES20.GL_FLOAT, false, 18, 0);
GLES20.glVertexAttribPointer(shaderOutline.getRm_Normal(), 3, getGL_HALF_FLOAT(), false, 18, 12);
} else {
GLES20.glVertexAttribPointer(shaderOutline.getRm_Vertex(), 3, GLES20.GL_FLOAT, false, 24, 0);
GLES20.glVertexAttribPointer(shaderOutline.getRm_Normal(), 3, GLES20.GL_FLOAT, false, 24, 12);
}
...
/**
* Returns either OES extension for GL 16-bit floats (if used in ES 2.0) or ES 3.0 constant.
*/
protected int getGL_HALF_FLOAT() {
if(isES3()) {
return GLES30.GL_HALF_FLOAT;
} else {
return GL_HALF_FLOAT_OES;
}
}
Code seems to correctly detect OpenGL ES 3 and use GLES30.GL_HALF_FLOAT value in getGL_HALF_FLOAT().
Sample shader code:
vertexShaderCode = "attribute vec4 rm_Vertex;\r\n" +
"attribute mediump vec3 rm_Normal;\r\n" +
"uniform mat4 view_proj_matrix;\r\n" +
"uniform float uThickness1;\r\n" +
"void main( void )\r\n" +
"{\r\n" +
" vec4 pos = vec4(rm_Vertex.xyz, 1.0);\r\n" +
" float dist = (view_proj_matrix * pos).w;\r\n" +
" vec4 normal = vec4(rm_Normal, 0.0);\r\n" +
" pos += normal * uThickness1 * dist;\r\n" +
" gl_Position = view_proj_matrix * pos;\r\n" +
"}";
fragmentShaderCode = "precision mediump float;\r\n" +
"uniform vec4 uColor;\r\n" +
"void main( void )\r\n" +
"{\r\n" +
" gl_FragColor = uColor;\r\n" +
"}";
I think you have an alignment problem. From this snippet (and your vertex shader):
GLES20.glVertexAttribPointer(shaderOutline.getRm_Normal(), 3, getGL_HALF_FLOAT(), false, 18, 12);
I can infer that you're attempting a vertex structure like so:
float fPos[3];
half fNormal[3];
You have come up with a vertex stride of 18 which has presumably been arrived at by adding up the individual sizes of the elements (3*sizeof(float))+(3*sizeof(half)) = 12 + 6 = 18.
However, the stride should be 20 because otherwise your vertices are misaligned. The 4-byte floats must start on a 4-byte boundary, but that is not the case.
From the GLES3 spec:
Clients must align data elements consistently with the requirements of the client platform, with an additional base-level requirement that an offset within a buffer to a datum comprising N basic machine units be a multiple of N
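Here is a minimal sketch of one possible fix, assuming the interleaved layout inferred above (the array names are hypothetical, and the normals are assumed to be pre-encoded as 16-bit half-float shorts): pad each vertex with two trailing bytes so the stride becomes 20 and every float attribute starts on a 4-byte boundary.
int stride = 20; // 12 bytes position + 6 bytes half-float normal + 2 bytes padding
ByteBuffer vb = ByteBuffer.allocateDirect(vertexCount * stride)
        .order(ByteOrder.nativeOrder());
for (int i = 0; i < vertexCount; i++) {
    vb.putFloat(posX[i]).putFloat(posY[i]).putFloat(posZ[i]); // floats stay 4-byte aligned
    vb.putShort(halfNormX[i]).putShort(halfNormY[i]).putShort(halfNormZ[i]); // 16-bit halfs
    vb.putShort((short) 0); // padding so the next vertex starts on a 4-byte boundary
}
vb.position(0);
// The attribute pointers then use the padded stride:
GLES20.glVertexAttribPointer(shaderOutline.getRm_Vertex(), 3, GLES20.GL_FLOAT, false, stride, 0);
GLES20.glVertexAttribPointer(shaderOutline.getRm_Normal(), 3, getGL_HALF_FLOAT(), false, stride, 12);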
I'm a beginner at OpenGL and I'm trying to animate a number of "objects" from one position to another every 5 seconds. If I calculate the position in the vertex shader, the FPS drops drastically; shouldn't these types of calculations be done on the GPU?
This is the vertex shader code:
#version 300 es
precision highp float;
precision highp int;
layout(location = 0) in vec3 vertexData;
layout(location = 1) in vec3 colourData;
layout(location = 2) in vec3 normalData;
layout(location = 3) in vec3 personPosition;
layout(location = 4) in vec3 oldPersonPosition;
layout(location = 5) in int start;
layout(location = 6) in int duration;
layout(std140, binding = 0) uniform Matrices
{ //base //offset
mat4 projection; // 64 // 0
mat4 view; // 64 // 0 + 64 = 64
int time; // 4 // 64 + 64 = 128
bool shade; // 4 // 128 + 4 = 132 two empty slots after this
vec3 midPoint; // 16 // 128 + 16 = 144
vec3 cameraPos; // 16 // 144 + 16 = 160
// size = 160+16 = 176. Alligned to 16, becomes 176.
};
out vec3 vertexColour;
out vec3 vertexNormal;
out vec3 fragPos;
void main() {
vec3 scalePos;
scalePos.x = vertexData.x * 3.0;
scalePos.y = vertexData.y * 3.0;
scalePos.z = vertexData.z * 3.0;
vertexColour = colourData;
vertexNormal = normalData;
float startFloat = float(start);
float durationFloat = float(duration);
float timeFloat = float(time);
// Wrap around catch to avoid start being close to 1M but time has wrapped around to 0
if (startFloat > timeFloat) {
startFloat = startFloat - 1000000.0;
}
vec3 movePos;
float elapsedTime = timeFloat - startFloat;
if (elapsedTime > durationFloat) {
movePos = personPosition;
} else {
vec3 moveVector = personPosition - oldPersonPosition;
float moveBy = elapsedTime / durationFloat;
movePos = oldPersonPosition + moveVector * moveBy;
}
fragPos = movePos;
gl_Position = projection * view * vec4(scalePos + movePos, 1.0);
}
Every 5 seconds the buffers are updated:
glBindBuffer(GL_ARRAY_BUFFER, this->personPositionsVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(float) * this->persons.size() * 3, this->positions, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, this->personOldPositionsVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(float) * this->persons.size() * 3, this->oldPositions, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, this->timeStartVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(int) * this->persons.size(), animiationStart, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, this->timeDurationVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(int) * this->persons.size(), animiationDuration, GL_STATIC_DRAW);
I did a test calculating the positions on the CPU and updating the positions buffer every draw call, and that doesn't give me a performance drop, but it feels fundamentally wrong.
void PersonView::animatePositions() {
float duration = 1500;
double currentTime = now_ms();
double elapsedTime = currentTime - animationStartTime;
if (elapsedTime > duration) {
return;
}
for (int i = 0; i < this->persons.size() * 3; i++) {
float moveDistance = this->positions[i] - this->oldPositions[i];
float moveBy = (float)(elapsedTime / duration);
this->moveByPositions[i] = this->oldPositions[i] + moveDistance * moveBy;
}
glBindBuffer(GL_ARRAY_BUFFER, this->personMoveByPositionsVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(float) * this->persons.size() * 3, this->moveByPositions, GL_STATIC_DRAW);
}
On devices with a better SoC (Snapdragon 835 etc.) the frame drop isn't as drastic as on devices with mid-range SoCs (Snapdragon 625).
Right off the bat, I can see that you're multiplying the projection and view matrices in the vertex shader, but there are no places where you rely on the view or projection matrix independently.
Multiplying two 4x4 matrices results in a large number of arithmetic operations, done for every vertex you're drawing. In your case, it seems you can avoid this altogether.
Instead of your current implementation - try multiplying the view and proj matrix outside of the shader, then bind the resulting matrix as a single viewProjection matrix:
Old:
gl_Position = projection * view * vec4(scalePos + movePos, 1.0);
New:
gl_Position = projectionView * vec4(scalePos + movePos, 1.0);
This way, the proj and view matrix are multiplied once per frame, instead of once per vertex. This change should drastically improve performance - especially if you have a large amount of vertices.
Generally speaking, the GPU is indeed a lot more efficient than the CPU at performing arithmetic calculations like this, but you should also consider the number of calculations. The vertex shader is executed per vertex and should only calculate things that differ between vertices.
Performing a one-time calculation on the CPU is always better than performing the same calculation on the GPU n times (n = total vertices).
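For instance, here is a minimal sketch of the CPU side in Android/Java terms (your buffer code looks like C++, so treat the names here, including uProjectionViewHandle, as hypothetical; if you keep the std140 block, you would store the product there instead of the two separate matrices):
// Once per frame, on the CPU:
float[] projectionView = new float[16];
Matrix.multiplyMM(projectionView, 0, projection, 0, view, 0);
// Upload the combined matrix; the shader then declares a single
// "uniform mat4 projectionView;" and does one mat4 * vec4 per vertex.
GLES20.glUniformMatrix4fv(uProjectionViewHandle, 1, false, projectionView, 0);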
I draw things to an FBO in libgdx. Then I just want to draw that FBO WITH a TRANSPARENT background to my screen, but it is always BLACK.
Is it possible to use a transparent background with the SpriteBatch?
I've tried a lot of things, but it looks like I cannot do this simple task.
I created a custom shader and:
vec4 orig = texture2D(u_texture, tc);
if (orig.a==0.0) discard;
the FBO creation code:
fbo = new FrameBuffer(Format.RGB565, width, height, hasDepth);
then:
public void begin() {
fbo.begin();
Gdx.gl.glClearColor(1.0f, 0.0f, 0.0f, 0f);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);
}
public void render() {
renderIt();
}
private void renderIt() {
spriteBatch.begin();
spriteBatch.setShader(shader);
spriteBatch.draw(fboRegion, 0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
spriteBatch.end();
}
public void end() {
Gdx.gl.glDisable(GLES20.GL_BLEND);
fbo.end();
}
It looks like the shader cannot determine the FBO's alpha values. Why? I just need to discard the pixels where alpha is 0.0.
So, in my shader, cutting out the background:
if (orig.r==1.0) discard;
this is WORKING
if (orig.a==0.0) discard;
if (orig.a<0.2) discard;
if (orig.a<0.9) discard;
THIS IS NOT
My shader looks like this:
void main() {
vec4 sum = vec4(0.0);
vec2 tc = v_texCoord0;
//the amount to blur, i.e. how far off center to sample from
//1.0 -> blur by one pixel
//2.0 -> blur by two pixels, etc.
float blur = radius/resolution;
//the direction of our blur
//(1.0, 0.0) -> x-axis blur
//(0.0, 1.0) -> y-axis blur
float hstep = dir.x;
float vstep = dir.y;
sum += texture2D(u_texture, vec2(tc.x - 4.0*blur*hstep, tc.y - 4.0*blur*vstep)) * 0.0162162162;
sum += texture2D(u_texture, vec2(tc.x - 3.0*blur*hstep, tc.y - 3.0*blur*vstep)) * 0.0540540541;
sum += texture2D(u_texture, vec2(tc.x - 2.0*blur*hstep, tc.y - 2.0*blur*vstep)) * 0.1216216216;
sum += texture2D(u_texture, vec2(tc.x - 1.0*blur*hstep, tc.y - 1.0*blur*vstep)) * 0.1945945946;
sum += texture2D(u_texture, vec2(tc.x, tc.y)) * 0.2270270270;
sum += texture2D(u_texture, vec2(tc.x + 1.0*blur*hstep, tc.y + 1.0*blur*vstep)) * 0.1945945946;
sum += texture2D(u_texture, vec2(tc.x + 2.0*blur*hstep, tc.y + 2.0*blur*vstep)) * 0.1216216216;
sum += texture2D(u_texture, vec2(tc.x + 3.0*blur*hstep, tc.y + 3.0*blur*vstep)) * 0.0540540541;
sum += texture2D(u_texture, vec2(tc.x + 4.0*blur*hstep, tc.y + 4.0*blur*vstep)) * 0.0162162162;
vec4 all = v_color * vec4(sum.rgb, 1.0);
if (all.a < 0.9) discard;
else gl_FragColor = all;
}
The RGB565 format does not have an alpha channel. If you need transparency, use a format like RGBA8888.
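In libgdx terms, that means creating the FBO with an alpha-capable format; adapting the creation line above (a sketch using the same width/height/hasDepth variables):
fbo = new FrameBuffer(Format.RGBA8888, width, height, hasDepth);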
LOL, I was being an idiot.
vec4 all = v_color * vec4(sum.rgb, 1.0);
if (all.a < 0.9) discard;
else gl_FragColor = all;
this is the BAD shader (its alpha is always 1.0)
this is the working one:
vec4 all = sum;
if (all.a < 0.5) discard;
else
gl_FragColor = all;
This question refers to this one: How to render Android's YUV-NV21 camera image on the background in libgdx with OpenGLES 2.0 in real-time?
It is well explained in the best answer given by the author, but I have a slightly different issue concerning YV12 instead of NV12.
(Here are some specs: https://wiki.videolan.org/YUV and https://www.fourcc.org/yuv.php )
What about YUV-YV12? The Y buffer is the same, but the UV data is not interleaved, so it looks like two separate buffers for V and U. But then, how do I give them to the shader? Using a Pixmap.Format.Intensity texture, I think, setting GL_LUMINANCE?
I don't understand how the NV12 "UVUV" buffer is converted into RGBA with RGB = V and A = U using GL_LUMINANCE and a Pixmap format with GL_LUMINANCE_ALPHA.
YV12 uses a "VVUU" buffer, so it is easy to split it into V and U buffers, but how do I bind them and get u and v in the shader?
Thanks for any help; this sample is awesome! But I need something a bit different, and for that I need to understand the shader binding behavior in detail.
Thanks!
OK, I got it:
YUV-YV12 is 12 bits per pixel: an 8-bit Y plane followed by 8-bit 2x2 subsampled V and U planes.
Based on this answer (detailing the whole YUV-NV21 to RGB shader display), https://stackoverflow.com/a/22456885/4311503 , let's make some little changes.
So, we can split the buffer into three pieces:
yBuffer = ByteBuffer.allocateDirect(640*480);
uBuffer = ByteBuffer.allocateDirect(640*480/4); //We have (width/2 * height/2) pixels, one byte each
vBuffer = ByteBuffer.allocateDirect(640*480/4); //We have (width/2 * height/2) pixels, one byte each
Then get the data:
yBuffer.put(frame.getData(), 0, size);
yBuffer.position(0);
//YV12: the full-size Y plane (size bytes), then the V plane (size/4 bytes), then the U plane (size/4 bytes)
vBuffer.put(frame.getData(), size, size/4);
vBuffer.position(0);
uBuffer.put(frame.getData(), size * 5 / 4, size/4);
uBuffer.position(0);
Now, prepare the textures:
yTexture = new Texture(640, 480, Pixmap.Format.Intensity); //A 8-bit per pixel format
uTexture = new Texture(640 / 2, 480 / 2, Pixmap.Format.Intensity); //A 8-bit per pixel format
vTexture = new Texture(640 / 2, 480 / 2, Pixmap.Format.Intensity); //A 8-bit per pixel format
And change the binding a bit, because we now use three textures instead of two:
//Set texture slot 0 as active and bind our texture object to it
Gdx.gl.glActiveTexture(GL20.GL_TEXTURE0);
yTexture.bind();
//Y texture is (width*height) in size and each pixel is one byte;
//by setting GL_LUMINANCE, OpenGL puts this byte into R,G and B
//components of the texture
Gdx.gl.glTexImage2D(GL20.GL_TEXTURE_2D, 0, GL20.GL_LUMINANCE,
640, 480, 0, GL20.GL_LUMINANCE, GL20.GL_UNSIGNED_BYTE, yBuffer);
//Use linear interpolation when magnifying/minifying the texture to
//areas larger/smaller than the texture size
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
GL20.GL_TEXTURE_MIN_FILTER, GL20.GL_LINEAR);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
GL20.GL_TEXTURE_MAG_FILTER, GL20.GL_LINEAR);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
GL20.GL_TEXTURE_WRAP_S, GL20.GL_CLAMP_TO_EDGE);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
GL20.GL_TEXTURE_WRAP_T, GL20.GL_CLAMP_TO_EDGE);
/*
* Prepare the U channel texture
*/
//Set texture slot 1 as active and bind our texture object to it
Gdx.gl.glActiveTexture(GL20.GL_TEXTURE1);
uTexture.bind();
Gdx.gl.glTexImage2D(GL20.GL_TEXTURE_2D, 0, GL20.GL_LUMINANCE,
640 / 2, 480 / 2, 0, GL20.GL_LUMINANCE, GL20.GL_UNSIGNED_BYTE,
uBuffer);
//Use linear interpolation when magnifying/minifying the texture to
//areas larger/smaller than the texture size
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
GL20.GL_TEXTURE_MIN_FILTER, GL20.GL_LINEAR);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
GL20.GL_TEXTURE_MAG_FILTER, GL20.GL_LINEAR);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
GL20.GL_TEXTURE_WRAP_S, GL20.GL_CLAMP_TO_EDGE);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
GL20.GL_TEXTURE_WRAP_T, GL20.GL_CLAMP_TO_EDGE);
//Set texture slot 2 as active and bind our texture object to it
Gdx.gl.glActiveTexture(GL20.GL_TEXTURE2);
vTexture.bind();
//V texture is (width/2 * height/2) in size; using GL_LUMINANCE, each byte of the buffer goes into the R, G and B components of a texel
Gdx.gl.glTexImage2D(GL20.GL_TEXTURE_2D, 0, GL20.GL_LUMINANCE,
640 / 2, 480 / 2, 0, GL20.GL_LUMINANCE, GL20.GL_UNSIGNED_BYTE,
vBuffer);
//Use linear interpolation when magnifying/minifying the texture to
//areas larger/smaller than the texture size
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
GL20.GL_TEXTURE_MIN_FILTER, GL20.GL_LINEAR);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
GL20.GL_TEXTURE_MAG_FILTER, GL20.GL_LINEAR);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
GL20.GL_TEXTURE_WRAP_S, GL20.GL_CLAMP_TO_EDGE);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
GL20.GL_TEXTURE_WRAP_T, GL20.GL_CLAMP_TO_EDGE);
shader.begin();
//Set the uniform y_texture object to the texture at slot 0
shader.setUniformi("y_texture", 0);
//Set the uniform u_texture object to the texture at slot 1
shader.setUniformi("u_texture", 1);
//Set the uniform v_texture object to the texture at slot 2
shader.setUniformi("v_texture", 2);
mesh.render(shader, GL20.GL_TRIANGLES);
shader.end();
Last, use the following shader (only the fragment shader's u and v texture sampling changed a bit):
//Our vertex shader code; nothing special
String vertexShader =
"attribute vec4 a_position; \n" +
"attribute vec2 a_texCoord; \n" +
"varying vec2 v_texCoord; \n" +
"void main(){ \n" +
" gl_Position = a_position; \n" +
" v_texCoord = a_texCoord; \n" +
"} \n";
//Our fragment shader code; takes Y,U,V values for each pixel and calculates R,G,B colors,
//Effectively making YUV to RGB conversion
String fragmentShader =
"#ifdef GL_ES \n" +
"precision highp float; \n" +
"#endif \n" +
"varying vec2 v_texCoord; \n" +
"uniform sampler2D y_texture; \n" +
"uniform sampler2D u_texture; \n" +
"uniform sampler2D v_texture; \n" +
"void main (void){ \n" +
" float r, g, b, y, u, v; \n" +
//We had put the Y values of each pixel to the R,G,B components by GL_LUMINANCE,
//that's why we're pulling it from the R component, we could also use G or B
//see https://stackoverflow.com/questions/12130790/yuv-to-rgb-conversion-by-fragment-shader/17615696#17615696
//and https://stackoverflow.com/questions/22456884/how-to-render-androids-yuv-nv21-camera-image-on-the-background-in-libgdx-with-o
" y = texture2D(y_texture, v_texCoord).r; \n" +
//Since we use GL_LUMINANCE, each component is in its own texture map
" u = texture2D(u_texture, v_texCoord).r - 0.5; \n" +
" v = texture2D(v_texture, v_texCoord).r - 0.5; \n" +
//The numbers are just YUV to RGB conversion constants
" r = y + 1.13983*v; \n" +
" g = y - 0.39465*u - 0.58060*v; \n" +
" b = y + 2.03211*u; \n" +
//We finally set the RGB color of our pixel
" gl_FragColor = vec4(r, g, b, 1.0); \n" +
"} \n";
Here it is!
Edit 6 - Complete re-write in relation to comments/ongoing research
Edit 7 - Added projection / view matrix.....
As I'm not getting far with this, I added the view/projection matrix from the Google demo; please see the code below. If anyone can point out where I'm going wrong, it really would be appreciated, as I'm still getting a blank screen when I put "gl_position = a_position * uMVPMatrix;" into my vertex shader (with "gl_position = a_position;" my quad is displayed, at least...).
Declared at class level: (Quad class)
private final float[] rotationMat = new float[16];
private FloatBuffer flotRotBuf;
ByteBuffer rotBuf;
private int muRotationHandle = -1; // Handle to the rotation matrix in the vertex shader called "uRotate"
Declared at class level: (Renderer class)
private final float[] mVMatrix = new float[16];
private final float[] mProjMatrix = new float[16];
private final float[] mMVPMatrix = new float[16];
Routine that sets the texture and does (or is supposed to do) the rotation (this is in my Quad class):
public void setTexture(GLSurfaceView view, Bitmap imgTexture, float[] mvpMatrix){
this.imgTexture=imgTexture;
// get handle to shape's transformation matrix
mMVPMatrixHandle = GLES20.glGetUniformLocation(iProgId, "uMVPMatrix");
// Apply the projection and view transformation
GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mvpMatrix, 0);
// Matrix.setRotateM(rotationMat, 0, 45f, 0, 0, 1.0f); //Set rotation matrix with angle and (z) axis
// rotBuf = ByteBuffer.allocateDirect(rotationMat.length * 4);
// use the device hardware's native byte order
// rotBuf.order(ByteOrder.nativeOrder());
// create a floating point buffer from the ByteBuffer
// flotRotBuf = rotBuf.asFloatBuffer();
// add the coordinates to the FloatBuffer
// flotRotBuf.put(rotationMat);
// set the buffer to read the first coordinate
// flotRotBuf.position(0);
// muRotationHandle = GLES20.glGetUniformLocation(iProgId, "uRotation"); // grab the variable from the shader
// GLES20.glUniformMatrix4fv(muRotationHandle, 1, false, flotRotBuf); //Pass floatbuffer containing rotation matrix info into vertex shader
//GLES20.glUniformMatrix4fv(muRotationHandle, 1, false, rotationMat, 1); //Also tried this ,not use floatbuffer
//Vertex shader
String strVShader =
// "uniform mat4 uRotation;" +
"uniform mat4 uMVPMatrix;" +
"attribute vec4 a_position;\n"+
"attribute vec2 a_texCoords;" +
"varying vec2 v_texCoords;" +
"void main()\n" +
"{\n" +
"gl_Position = a_Position * uMVPMatrix;"+ //This is where it all goes wrong....
"v_texCoords = a_texCoords;" +
"}";
//Fragment shader
String strFShader =
"precision mediump float;" +
"varying vec2 v_texCoords;" +
"uniform sampler2D u_baseMap;" +
"void main()" +
"{" +
"gl_FragColor = texture2D(u_baseMap, v_texCoords);" +
"}";
iProgId = Utils.LoadProgram(strVShader, strFShader);
iBaseMap = GLES20.glGetUniformLocation(iProgId, "u_baseMap");
iPosition = GLES20.glGetAttribLocation(iProgId, "a_position");
iTexCoords = GLES20.glGetAttribLocation(iProgId, "a_texCoords");
texID = Utils.LoadTexture(view, imgTexture);
}
From my renderer class:
public void onSurfaceChanged(GL10 gl, int width, int height) {
// TODO Auto-generated method stub
//Set viewport size based on screen dimensions
GLES20.glViewport(0, 0, width, height);
float ratio = (float) width / height;
Matrix.frustumM(mProjMatrix, 0, -ratio, ratio, -1, 1, 3, 7);
}
public void onDrawFrame(GL10 gl) {
// TODO Auto-generated method stub
//Paint the screen the colour defined in onSurfaceCreated
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
// Set the camera position (View matrix)
Matrix.setLookAtM(mVMatrix, 0, 0, 0, -3, 0f, 0f, 0f, 0f, 1.0f, 0.0f);
// Calculate the projection and view transformation
Matrix.multiplyMM(mMVPMatrix, 0, mProjMatrix, 0, mVMatrix, 0);
quad1.setTexture(curView, myBitmap, mMVPMatrix); //SetTexture now modified to take a float array (See above) - Note I know it's not a good idea to have this in my onDrawFrame method - will move it once I have it working!
quad1.drawBackground();
}
I've now removed all rotation-related stuff and am just attempting to get a static quad to display after applying the uMVPMatrix in the vertex shader. But still nothing :-(
If I simply change that line back to the 'default' :
"gl_Position = a_position;\n"+
Then I at least get my textured quad displayed (Obviously no rotation and I would expect that).
Also, just to point out that the mvpMatrix is definitely being received intact in the setTexture method (it contains the same data as appears when I log the contents of mvpMatrix from the Google developers code). I'm not sure how to check whether the shader is receiving it intact, though I have no reason to believe it isn't.
Really do appreciate any and all help; I must be going very wrong somewhere but I just can't spot it. Thank you!
EDIT 2: Having added a bounty to this question, I would just like to know how to rotate my textured quad sprite (2D), keeping the code I have to render it as a base (i.e., what do I need to add to it in order to rotate, and why). Thanks!
EDIT 3 N/A
EDIT 4 Re-worded / simplified question
EDIT 5 Added error screenshot
Edit: Edited to support Java using Android SDK.
As Tobias indicated, the idiomatic solution to any vertex transformation in OpenGL is accomplished through matrix operations. If you plan to continue developing with OpenGL, it is important that you (eventually) understand the underlying linear algebra involved, but it is often best to use a math library that abstracts the linear algebra into a more readable form. In the Android environment, you should manipulate float arrays with the Matrix class to create a rotation matrix like this:
// initialize rotation matrix
float[] rotationMat = new float[16];
Matrix.setIdentityM(rotationMat, 0);
// angle in degrees to rotate
float angle = 90;
// axis to rotate about (z axis in your case)
float[] axis = { 0.0f, 0.0f, 1.0f };
// For your case, rotate angle (in degrees) about the z axis.
Matrix.rotateM(rotationMat, 0, angle, axis[0], axis[1], axis[2]);
Then you can bind the rotation Matrix to a shader program like this:
// assuming the shader program is currently bound ...
GLES20.glUniformMatrix4fv(GLES20.glGetUniformLocation(shaderProgramID, "uRotation"), 1, false, rotationMat, 0);
Where your vertex shader (of the program being passed rotationMat) would look something like:
precision mediump float;
uniform mat4 uMVPMatrix;
uniform mat4 uRotation;
attribute vec2 a_texCoords;
attribute vec3 a_position;
varying vec2 v_texCoord;
void main(void)
{
v_texCoord = a_texCoords;
gl_Position = uMVPMatrix * uRotation * vec4(a_position, 1.0);
}
Alternatively, you could premultiply uMVPMatrix * uRotation outside of this shader program and pass the result to your shader, avoiding the duplicate computation per vertex.
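For instance, here is a minimal sketch of that premultiplication with android.opengl.Matrix, reusing mMVPMatrix and rotationMat from the code above (treat the uniform handle as an assumption about your setup):
// Combine once on the CPU so the shader does a single mat4 * vec4 per vertex.
float[] mvpRotated = new float[16];
Matrix.multiplyMM(mvpRotated, 0, mMVPMatrix, 0, rotationMat, 0);
GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mvpRotated, 0);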
Once you are comfortable using this higher level API for matrix operations you can investigate how the internal operation is performed by reading this fantastic tutorial written by Nicol Bolas.
Rotation matrix for rotation around z:
cos a -sin a 0
sin a cos a 0
0 0 1
How to remember how to construct it:
a is the angle in radians; for a = 0 the matrix yields the identity matrix. cos has to be on the diagonal. There has to be a minus sign in front of one sin; switching the signs inverts the rotation's direction.
Likewise, rotations around x and y can be constructed.
Around x:
1 0 0
0 cos a -sin a
0 sin a cos a
Around y:
cos a 0 sin a
0 1 0
-sin a 0 cos a
If you are not familiar with matrix-arithmetic, here is some code:
for (int i=0; i<4; i++) {
vertices_new[i*5+0] = cos(a) * vertices[i*5+0] - sin(a) * vertices[i*5+1]; // cos(a) * v[i].x - sin(a) * v[i].y + 0 * v[i].z
vertices_new[i*5+1] = sin(a) * vertices[i*5+0] + cos(a) * vertices[i*5+1]; // sin(a) * v[i].x + cos(a) * v[i].y + 0 * v[i].z
vertices_new[i*5+2] = vertices[i*5+2]; // 0 * v[i].x + 0 * v[i].y + 1 * v[i].z
vertices_new[i*5+3] = vertices[i*5+3]; // copy texture u
vertices_new[i*5+4] = vertices[i*5+4]; // copy texture v
}