Consider a simple game with two classes of objects (ball and wall). Tutorials I've found suggest using a single vertex data array, in a manner like this:
[... initializing ...]
vdata = new float[ENOUGH_FOR_ALL];
vertexData = ByteBuffer.allocateDirect(vdata.length * 4).order(ByteOrder.nativeOrder()).asFloatBuffer();
aPositionLocation = glGetAttribLocation(programId, "a_Position");
glVertexAttribPointer(aPositionLocation, 2, GL_FLOAT, false, 0, vertexData);
glEnableVertexAttribArray(aPositionLocation);
[...drawing...]
vertexData.position(0);
vertexData.put(vdata);
glUniform4f(uColorLocation, 1.0f, 0.0f, 0.0f, 1.0f); // ball is red
glDrawArrays(GL_TRIANGLE_FAN, 0, BALL_VERTICES + 2);
glUniform4f(uColorLocation, 0.0f, 1.0f, 0.0f, 1.0f); // wall is green
glDrawArrays(GL_LINES, BALL_VERTICES + 2, 2); // wall as a single line
and the vertex shader is trivial:
attribute vec4 a_Position;
void main() {
gl_Position = a_Position;
}
This works, but it requires cumbersome calculation of offsets within the single buffer, and the offsets move whenever some object's size changes...
Consider the following vertex shader
attribute vec4 a_Ball;
attribute vec4 a_Wall;
uniform float u_WhatDraw;
void main() {
if (u_WhatDraw == 1.0) {
gl_Position = a_Ball;
}
else if (u_WhatDraw == 2.0) {
gl_Position = a_Wall;
}
}
The data is prepared as:
ball_data = new float[(BALL_VERTICES + 2) * 2]; // 2 floats per vertex
wall_data = new float[4]; // a single line
ballVertexData = ByteBuffer.allocateDirect(ball_data.length * 4).order(ByteOrder.nativeOrder()).asFloatBuffer();
wallVertexData = ByteBuffer.allocateDirect(wall_data.length * 4).order(ByteOrder.nativeOrder()).asFloatBuffer();
aBallLocation = glGetAttribLocation(programId, "a_Ball");
glVertexAttribPointer(aBallLocation, 2, GL_FLOAT, false, 0, ballVertexData);
glEnableVertexAttribArray(aBallLocation);
aWallLocation = glGetAttribLocation(programId, "a_Wall");
glVertexAttribPointer(aWallLocation, 2, GL_FLOAT, false, 0, wallVertexData);
glEnableVertexAttribArray(aWallLocation);
[...generation of static wall_data skipped...]
[...drawing...]
ballVertexData.position(0);
ballVertexData.put(ball_data);
glUniform4f(uColorLocation, 1.0f, 0.0f, 0.0f, 1.0f); // ball is red
glUniform1f(uWhatDrawLocation, 1.0f);
glDrawArrays(GL_TRIANGLE_FAN, 0, BALL_VERTICES + 2);
glUniform4f(uColorLocation, 0.0f, 1.0f, 0.0f, 1.0f); // wall is green
glUniform1f(uWhatDrawLocation, 2.0f);
glDrawArrays(GL_LINES, 0, 2); // wall as a single line
This does not draw the wall at all, even though the ball is drawn.
Please suggest a fix for this two-array approach, or explain the limitation why I should stick with a single array for everything.
This works, but it requires cumbersome calculation of offsets within the single buffer, and the offsets move whenever some object's size changes...
If an object's size changes you need to re-upload new mesh data anyway, so computing offsets seems to be the least of the problems.
or explain the limitation why I should stick with a single array for everything.
What happens when you add a third object, or a fourth, or a fifth?
You're optimizing the wrong problem.
Any time you find yourself writing shaders with if (<insert uniform>) ... you're doing it wrong: never make the GPU make a control-plane decision for every vertex when the application can make it once.
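If you do want to keep a separate buffer per object, a fix that avoids both the offset bookkeeping and the shader branch is to keep the original one-attribute shader and simply re-point a_Position at whichever buffer you are about to draw. A minimal sketch reusing the names from your code (ballVertexData, wallVertexData, aPositionLocation, uColorLocation):
// [...drawing...]
// Refresh the dynamic ball vertices; the static wall data is already in wallVertexData.
ballVertexData.position(0);
ballVertexData.put(ball_data);
ballVertexData.position(0);
// Point a_Position at the ball buffer and draw the ball.
glVertexAttribPointer(aPositionLocation, 2, GL_FLOAT, false, 0, ballVertexData);
glEnableVertexAttribArray(aPositionLocation);
glUniform4f(uColorLocation, 1.0f, 0.0f, 0.0f, 1.0f); // ball is red
glDrawArrays(GL_TRIANGLE_FAN, 0, BALL_VERTICES + 2);
// Re-point the same attribute at the wall buffer and draw the wall.
wallVertexData.position(0);
glVertexAttribPointer(aPositionLocation, 2, GL_FLOAT, false, 0, wallVertexData);
glUniform4f(uColorLocation, 0.0f, 1.0f, 0.0f, 1.0f); // wall is green
glDrawArrays(GL_LINES, 0, 2); // wall as a single line
The CPU makes the "what am I drawing" decision once per draw call, and the vertex shader stays the trivial single-attribute version from the top of the question.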
Related
I'm trying to draw text and a square with OpenGL ES 2.0.
Each one could be in any position.
But so far I can only see one of them, depending on the projection mode I choose:
I only see the square if I use "Matrix.setLookAtM" to make the camera (0, -3, 0) look at (0, 0, 0)
I only see the text if I use "Matrix.orthoM" to have an orthogonal projection
See the code from "Texample2Renderer.java" at the bottom.
I would like to see both; how is that possible? I was thinking about modifying the Square class to make it work with the orthogonal projection mode, but I have no idea how to do this.
For the text, I'm using this code (there's a lot of code, so I prefer to post the repo):
https://github.com/d3alek/Texample2
And for the square, I'm using this code :
public class Square {
// number of coordinates per vertex in this array
static final int COORDS_PER_VERTEX = 3;
static float squareCoords[] = {
-0.5f, 0.5f, 0.0f, // top left
-0.5f, -0.5f, 0.0f, // bottom left
0.5f, -0.5f, 0.0f, // bottom right
0.5f, 0.5f, 0.0f}; // top right
private final String vertexShaderCode =
// This matrix member variable provides a hook to manipulate
// the coordinates of the objects that use this vertex shader
"uniform mat4 uMVPMatrix;" +
"attribute vec4 vPosition;" +
"void main() {" +
// The matrix must be included as a modifier of gl_Position.
// Note that the uMVPMatrix factor *must be first* in order
// for the matrix multiplication product to be correct.
" gl_Position = uMVPMatrix * vPosition;" +
"}";
private final String fragmentShaderCode =
"precision mediump float;" +
"uniform vec4 vColor;" +
"void main() {" +
" gl_FragColor = vColor;" +
"}";
private final FloatBuffer vertexBuffer;
private final ShortBuffer drawListBuffer;
private final int mProgram;
private final short drawOrder[] = {0, 1, 2, 0, 2, 3}; // order to draw vertices
private final int vertexStride = COORDS_PER_VERTEX * 4; // 4 bytes per vertex
float color[] = {0.2f, 0.709803922f, 0.898039216f, 1.0f};
private int mPositionHandle;
private int mColorHandle;
private int mMVPMatrixHandle;
/**
* Sets up the drawing object data for use in an OpenGL ES context.
*/
public Square() {
// BUFFER FOR SQUARE COORDS
// initialize vertex byte buffer for shape coordinates
ByteBuffer bb = ByteBuffer.allocateDirect(
// (# of coordinate values * 4 bytes per float)
squareCoords.length * 4);
bb.order(ByteOrder.nativeOrder());
vertexBuffer = bb.asFloatBuffer();
vertexBuffer.put(squareCoords);
vertexBuffer.position(0);
// BUFFER FOR DRAW ORDER
// initialize byte buffer for the draw list
ByteBuffer dlb = ByteBuffer.allocateDirect(
// (# of coordinate values * 2 bytes per short)
drawOrder.length * 2);
dlb.order(ByteOrder.nativeOrder());
drawListBuffer = dlb.asShortBuffer();
drawListBuffer.put(drawOrder);
drawListBuffer.position(0);
// prepare shaders and OpenGL program
int vertexShader = Utilities.loadShader(
GLES20.GL_VERTEX_SHADER,
vertexShaderCode);
int fragmentShader = Utilities.loadShader(
GLES20.GL_FRAGMENT_SHADER,
fragmentShaderCode);
mProgram = GLES20.glCreateProgram(); // create empty OpenGL Program
GLES20.glAttachShader(mProgram, vertexShader); // add the vertex shader to program
GLES20.glAttachShader(mProgram, fragmentShader); // add the fragment shader to program
GLES20.glLinkProgram(mProgram); // create OpenGL program executables
}
/**
* Encapsulates the OpenGL ES instructions for drawing this shape.
*
* @param mvpMatrix - The Model View Projection matrix with which to draw
* this shape.
*/
public void draw(float[] mvpMatrix) {
// Add program to OpenGL environment
GLES20.glUseProgram(mProgram);
// get handle to vertex shader's vPosition member
mPositionHandle = GLES20.glGetAttribLocation(mProgram, "vPosition");
// Enable a handle to the triangle vertices
GLES20.glEnableVertexAttribArray(mPositionHandle);
// Prepare the triangle coordinate data
GLES20.glVertexAttribPointer(
mPositionHandle, COORDS_PER_VERTEX,
GLES20.GL_FLOAT, false,
vertexStride, vertexBuffer);
// get handle to fragment shader's vColor member
mColorHandle = GLES20.glGetUniformLocation(mProgram, "vColor");
// Set color for drawing the triangle
GLES20.glUniform4fv(mColorHandle, 1, color, 0);
// get handle to shape's transformation matrix
mMVPMatrixHandle = GLES20.glGetUniformLocation(mProgram, "uMVPMatrix");
//Utilities.checkEglErrorEGL14Android("glGetUniformLocation");
// Apply the projection and view transformation
GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mvpMatrix, 0);
//Utilities.checkEglErrorEGL14Android("glUniformMatrix4fv");
// Draw the square
GLES20.glDrawElements(
GLES20.GL_TRIANGLES, drawOrder.length,
GLES20.GL_UNSIGNED_SHORT, drawListBuffer);
// Disable vertex array
GLES20.glDisableVertexAttribArray(mPositionHandle);
}
}
Texample2Renderer.java:
public class Texample2Renderer implements GLSurfaceView.Renderer {
private static final String TAG = "TexampleRenderer";
private Square square;
private GLText glText; // A GLText Instance
private Context context; // Context (from Activity)
private int width = 100; // Updated to the Current Width + Height in onSurfaceChanged()
private int height = 100;
private float[] mProjMatrix = new float[16];
private float[] mVMatrix = new float[16];
private float[] mVPMatrix = new float[16];
private boolean usesOrtho = false;
public Texample2Renderer(Context context) {
super();
this.context = context; // Save Specified Context
}
public void onSurfaceCreated(GL10 unused, EGLConfig config) {
// Set the background frame color
GLES20.glClearColor( 0.4f, 0.3f, 0.6f, 1.0f );
// Create the GLText
glText = new GLText(context.getAssets());
square = new Square();
// Load the font from file (set size + padding), creates the texture
// NOTE: after a successful call to this the font is ready for rendering!
glText.load( "Roboto-Regular.ttf", 20*3, 2, 2 ); // Create Font (Height: 14 Pixels / X+Y Padding 2 Pixels)
// enable texture + alpha blending
GLES20.glEnable(GLES20.GL_BLEND);
GLES20.glBlendFunc(GLES20.GL_ONE, GLES20.GL_ONE_MINUS_SRC_ALPHA);
}
public void onSurfaceChanged(GL10 unused, int width, int height) { // gl.glViewport( 0, 0, width, height );
GLES20.glViewport(0, 0, width, height);
float ratio = (float) width / height;
// Take into account device orientation
if (width > height) {
Matrix.frustumM(mProjMatrix, 0, -ratio, ratio, -1, 1, 1, 10);
}
else {
Matrix.frustumM(mProjMatrix, 0, -1, 1, -1/ratio, 1/ratio, 1, 10);
}
// Save width and height
this.width = width; // Save Current Width
this.height = height; // Save Current Height
if(usesOrtho) {
int useForOrtho = Math.min(width, height);
//TODO: Is this wrong?
Matrix.orthoM(mVMatrix, 0,
-useForOrtho / 2,
useForOrtho / 2,
-useForOrtho / 2,
useForOrtho / 2, 0.1f, 100f);
}
}
public void onDrawFrame(GL10 unused) {
// Redraw background color
int clearMask = GLES20.GL_COLOR_BUFFER_BIT;
GLES20.glClear(clearMask);
if(!usesOrtho)
Matrix.setLookAtM(mVMatrix,
0, // offset
0, 0, -3f, // eye (camera's position)
0f, 0f, 0f, // center (where to look at)
0f, 1.0f, 0.0f); // up
Matrix.multiplyMM(mVPMatrix, 0, mProjMatrix, 0, mVMatrix, 0);
if(square != null)
square.draw(mVPMatrix);
// TEST: render the entire font texture
glText.drawTexture( width/2, height/2, mVPMatrix); // Draw the Entire Texture
// TEST: render some strings with the font
glText.begin( 1.0f, 1.0f, 1.0f, 1.0f, mVPMatrix ); // Begin Text Rendering (Set Color WHITE)
glText.drawC("Test String 3D!", 0f, 0f, 0f, 0, -30, 0);
// glText.drawC( "Test String :)", 0, 0, 0 ); // Draw Test String
glText.draw( "Diagonal 1", 40, 40, 40); // Draw Test String
glText.draw( "Column 1", 100, 100, 90); // Draw Test String
glText.end(); // End Text Rendering
glText.begin( 0.0f, 0.0f, 1.0f, 1.0f, mVPMatrix ); // Begin Text Rendering (Set Color BLUE)
glText.draw( "More Lines...", 50, 200 ); // Draw Test String
glText.draw( "The End.", 50, 200 + glText.getCharHeight(), 180); // Draw Test String
glText.end(); // End Text Rendering
}
}
I've just modified the code in Texample2Renderer.java to draw a square. I also added a boolean to switch between projection modes.
Any help would be much appreciated, thanks a lot in advance!
Try reversing the order of the vertices in your square coords:
static float squareCoords[] = {
0.5f, 0.5f, 0.0f, // top right
0.5f, -0.5f, 0.0f, // bottom right
-0.5f, -0.5f, 0.0f, // bottom left
-0.5f, 0.5f, 0.0f}; // top left
It's probably facing the other direction from your camera.
Actually I just needed to scale up the square, and it worked with the orthogonal projection. Since the square's size was "1", it was almost invisible.
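For anyone who lands here with the same symptom: one way to do that scaling without editing the vertex data is to fold a scale into the matrix passed to Square.draw(). A rough sketch using Android's Matrix class; the scale factor and the scratch arrays (mModelMatrix, mScaledMVPMatrix) are made up for illustration, and mVPMatrix is the view-projection matrix already built in onDrawFrame:
float[] mModelMatrix = new float[16];
float[] mScaledMVPMatrix = new float[16];
Matrix.setIdentityM(mModelMatrix, 0);
Matrix.scaleM(mModelMatrix, 0, 100f, 100f, 1f); // enlarge the unit-sized square; pick a factor that suits your projection
Matrix.multiplyMM(mScaledMVPMatrix, 0, mVPMatrix, 0, mModelMatrix, 0);
square.draw(mScaledMVPMatrix);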
I'm trying to render to a texture that resides in another class, and I can't look at that class's code. I just have access to its texture name (the id created by glGenTextures).
I'm trying to render onto this texture. I'm creating my own shaders and linking them, but they don't affect anything; I just see a white screen on my phone. I've put an OpenGL error check after every statement, and it passes without any error.
I wanted to ask: could there be shaders previously attached to that texture, or something like that, which are interfering with my own shaders? (I don't even know if this question makes sense.)
I'm using the utility class from grafika for program and shaders creation - https://github.com/google/grafika/blob/master/src/com/android/grafika/gles/GlUtil.java . That's why I'm pretty much sure my shaders related code is okay.
This is my main drawing loop -
log("Receiving frame");
GLES20.glUseProgram(programHandle);
GLES20.glActiveTexture(GLES20.GL_TEXTURE2);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mainTexture.glName);
//
// Copy the model / view / projection matrix over.
GLES20.glUniformMatrix4fv(muMVPMatrixLoc, 1, false, GlUtil.IDENTITY_MATRIX, 0);
GlUtil.checkGlError("glUniformMatrix4fv");
// Copy the texture transformation matrix over.
/*
GLES20.glUniformMatrix4fv(muTexMatrixLoc, 1, false, texMatrix, 0);
GlUtil.checkGlError("glUniformMatrix4fv"); */
// Enable the "aPosition" vertex attribute.
GLES20.glEnableVertexAttribArray(maPositionLoc);
GlUtil.checkGlError("glEnableVertexAttribArray");
// Connect vertexBuffer to "aPosition".
GLES20.glVertexAttribPointer(maPositionLoc, 2,
GLES20.GL_FLOAT, false, 2 * GlUtil.SIZEOF_FLOAT, GlUtil.FULL_RECTANGLE_BUF);
GlUtil.checkGlError("glVertexAttribPointer");
// Enable the "aTextureCoord" vertex attribute.
GLES20.glEnableVertexAttribArray(maTextureCoordLoc);
GlUtil.checkGlError("glEnableVertexAttribArray");
// Connect texBuffer to "aTextureCoord".
GLES20.glVertexAttribPointer(maTextureCoordLoc, 2,
GLES20.GL_FLOAT, false, 2 * GlUtil.SIZEOF_FLOAT, GlUtil.FULL_RECTANGLE_TEX_BUF);
GlUtil.checkGlError("glVertexAttribPointer");
GLES20.glViewport(0, 0, parameters.getPreviewSize().width, parameters.getPreviewSize().height);
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
This is my fragment shader code -
public static final String FRAGMENT_SHADER_2D =
"precision mediump float;\n" +
"varying vec2 vTextureCoord;\n" +
"uniform sampler2D sTexture;\n" +
"void main() {\n" +
" gl_FragColor[0] = 1.0;gl_FragColor[1] = 0.0;gl_FragColor[2] = 0.0;gl_FragColor[3] = 1.0;\n" +
"}\n";
These are the coordinate arrays -
private static final float FULL_RECTANGLE_COORDS[] = {
-1.0f, -1.0f, // 0 bottom left
1.0f, -1.0f, // 1 bottom right
-1.0f, 1.0f, // 2 top left
1.0f, 1.0f, // 3 top right
};
private static final float FULL_RECTANGLE_TEX_COORDS[] = {
0.0f, 0.0f, // 0 bottom left
45.0f, 0.0f, // 1 bottom right
0.0f, 1.0f, // 2 top left
56.0f, 344.0f // 3 top right
};
There doesn't seem to be any code for connecting your texture to "sTexture". You are using texture slot 2 for your texture, so you need to tell the shader this:
GLES20.glActiveTexture(GLES20.GL_TEXTURE2); // using texture slot 2
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mainTexture.glName);
int mSamplerLoc = GLES20.glGetUniformLocation(programHandle, "sTexture");
GLES20.glUniform1i(mSamplerLoc, 2); // connect sTexture to texture slot 2
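Also worth double-checking (just an observation from the posted snippets, in case it applies): the fragment shader shown above writes a constant red and never reads sTexture, so even with the sampler wired up correctly the texture contents cannot show. Something along these lines would actually sample it:
public static final String FRAGMENT_SHADER_2D =
"precision mediump float;\n" +
"varying vec2 vTextureCoord;\n" +
"uniform sampler2D sTexture;\n" +
"void main() {\n" +
"    gl_FragColor = texture2D(sTexture, vTextureCoord);\n" +
"}\n";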
I'm just starting to learn OpenGL ES but am having some trouble understanding how vertices and indices work. My current understanding is that a vertex is a point on the shape itself, and that the indices describe the triangles formed from those vertex points. I'm following a tutorial that has me define the vertex and index data as below...
Vertex data
-1.0f, -1.0f
1.0f, -1.0f
-1.0f, 1.0f
1.0f, 1.0f
indices data
0,3,1,
0,2,3
I understand that defining indices should always start at one vertex, but to me these numbers just don't add up. When I draw this on paper, it looks like the image drawn should be two triangles that together create a 'crown' shape. Can someone explain why this actually draws a square instead of the 'crown' I am expecting?
Source code for the Square class:
public class Square {
private FloatBuffer mFVertexBuffer;
private ByteBuffer mColorBuffer;
private ByteBuffer mIndexBuffer;
public Square() {
// 2D Points
float[] square = {
-1.0f, -1.0f,
1.0f, -1.0f,
-1.0f, 1.0f,
1.0f, 1.0f,
};
byte maxColor = (byte) 225;
/**
* Each line below represents RGB + Alpha transparency
*/
byte colors[] = {
0, maxColor, 0, maxColor,
0, maxColor, maxColor, maxColor,
0, 0, 0, maxColor,
maxColor, 0, maxColor, maxColor,
};
//triangles
byte[] indices = {
0,3,1,
0,2,3
};
/**
* Make sure that bytes are in correct order, otherwise they might be
* drawn backwards
*/
ByteBuffer byteBuffer = ByteBuffer.allocateDirect(square.length * 4);
byteBuffer.order(ByteOrder.nativeOrder());
mFVertexBuffer = byteBuffer.order(ByteOrder.nativeOrder())
.asFloatBuffer();
mFVertexBuffer.put(square);
mFVertexBuffer.position(0);
mColorBuffer = ByteBuffer.allocateDirect(colors.length);
mColorBuffer.put(colors);
mColorBuffer.position(0);
mIndexBuffer = ByteBuffer.allocateDirect(indices.length);
mIndexBuffer.put(indices);
mIndexBuffer.position(0);
}
public void draw(GL10 gl) {
/**
* Make open GL only draw the front of the triangle (GL_CW = Graphics
* Library Clockwise)
*
* Back of triangle will not be drawn
*/
gl.glFrontFace(GL11.GL_CW);
/**
* specifies number of elements per vertex
*
* specifies floating point type
*
* Sets stride = 0 bytes (stride allows different types of data to be used
* interchangeably with OpenGL)
*/
gl.glVertexPointer(2, GL11.GL_FLOAT, 0, mFVertexBuffer);
// 4 because we are using 4 components (RGBA) per color in our color buffer array
gl.glColorPointer(4, GL11.GL_UNSIGNED_BYTE, 0, mColorBuffer);
/**
* draws the image
*
* first argument specifies geometry format
*/
gl.glDrawElements(GL11.GL_TRIANGLES, 6, GL11.GL_UNSIGNED_BYTE,
mIndexBuffer);
// Reset to CounterClockwise
gl.glFrontFace(GL11.GL_CCW);
}
}
Let me know if more info is needed...
You defined four vertices:
2 3
0 1
Your indices then defined two triangles, 0-3-1:
.
...
....
.....
and 0-2-3:
.....
....
...
.
put together they form a square.
I don't think your indices are correct; try drawing the bottom line, then moving to the top vertices. If I am picturing your indices correctly, they really are trying to draw a square.
Try:
0, 1, 3
0, 1, 2
Instead
Edit: Even I got the order mixed up; fixed the mistake.
Edit - Code added, please see below
Edit 2 - Screenshots from device included at bottom along with explanation
Edit 3 - New code added
I have 2 classes, a renderer and a custom 'quad' class.
I have these declared at class level in my renderer class:
final float[] mMVPMatrix = new float[16];
final float[] mProjMatrix = new float[16];
final float[] mVMatrix = new float[16];
And in my onSurfaceChanged method I have:
@Override
public void onSurfaceChanged(GL10 gl, int width, int height) {
GLES20.glViewport(0, 0, width, height);
float ratio = (float) width / height;
Matrix.frustumM(mProjMatrix, 0, -ratio, ratio, -1, 1, 3, 7);
}
and....
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
// TODO Auto-generated method stub
myBitmap = BitmapFactory.decodeResource(curView.getResources(), R.drawable.box);
//Create new Dot objects
dot1 = new Quad();
dot1.setTexture(curView, myBitmap);
dot1.setSize(300,187); //These numbers are the size but are redundant/not used at the moment.
myBitmap.recycle();
//Set colour to black
GLES20.glClearColor(0, 0, 0, 1);
}
And finally from this class, onDrawFrame:
@Override
public void onDrawFrame(GL10 gl) {
// TODO Auto-generated method stub
//Paint the screen the colour defined in onSurfaceCreated
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
// Set the camera position (View matrix) so looking from the front
Matrix.setLookAtM(mVMatrix, 0, 0, 0, 3, 0f, 0f, 0f, 0f, 1.0f, 0.0f);
// Combine
Matrix.multiplyMM(mMVPMatrix, 0, mProjMatrix, 0, mVMatrix, 0);
dot1.rotateQuad(0,0,45, mMVPMatrix); //x,y,angle and matrix passed in
}
Then, in my quad class:
This declared at class level:
private float[] mRotationMatrix = new float[16];
private final float[] mMVPMatrix = new float[16];
private final float[] mProjMatrix = new float[16];
private final float[] mVMatrix = new float[16];
private int mMVPMatrixHandle;
private int mPositionHandle;
private int mRotationHandle;
//Create our vertex shader
String strVShader =
"uniform mat4 uMVPMatrix;" +
"uniform mat4 uRotate;" +
"attribute vec4 a_position;\n"+
"attribute vec2 a_texCoords;" +
"varying vec2 v_texCoords;" +
"void main()\n" +
"{\n" +
// "gl_Position = a_position * uRotate;\n"+
// "gl_Position = uRotate * a_position;\n"+
"gl_Position = a_position * uMVPMatrix;\n"+
// "gl_Position = uMVPMatrix * a_position;\n"+
"v_texCoords = a_texCoords;" +
"}";
//Fragment shader
String strFShader =
"precision mediump float;" +
"varying vec2 v_texCoords;" +
"uniform sampler2D u_baseMap;" +
"void main()" +
"{" +
"gl_FragColor = texture2D(u_baseMap, v_texCoords);" +
"}";
Then the method for setting the texture (I don't think this is relevant to this problem, though!):
public void setTexture(GLSurfaceView view, Bitmap imgTexture){
this.imgTexture=imgTexture;
iProgId = Utils.LoadProgram(strVShader, strFShader);
iBaseMap = GLES20.glGetUniformLocation(iProgId, "u_baseMap");
iPosition = GLES20.glGetAttribLocation(iProgId, "a_position");
iTexCoords = GLES20.glGetAttribLocation(iProgId, "a_texCoords");
texID = Utils.LoadTexture(view, imgTexture);
}
And finally, my 'rotateQuad' method (which currently is supposed to draw and rotate the quad).
public void rotateQuad(float x, float y, int angle, float[] mvpMatrix){
Matrix.setRotateM(mRotationMatrix, 0, angle, 0, 0, 0.1f);
// Matrix.translateM(mRotationMatrix, 0, 0, 0, 0); //Removed temporarily
// Combine the rotation matrix with the projection and camera view
Matrix.multiplyMM(mvpMatrix, 0, mRotationMatrix, 0, mvpMatrix, 0);
float[] vertices = {
-.5f,.5f,0, 0,0,
.5f,.5f,0, 1,0,
-.5f,-.5f,0, 0,1,
.5f,-.5f,0, 1,1
};
vertexBuf = ByteBuffer.allocateDirect(vertices.length * 4).order(ByteOrder.nativeOrder()).asFloatBuffer();
vertexBuf.put(vertices).position(0);
//Bind the correct texture
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texID);
//Use program
GLES20.glUseProgram(iProgId);
// get handle to shape's transformation matrix
mMVPMatrixHandle = GLES20.glGetUniformLocation(iProgId, "uMVPMatrix");
// Apply the projection and view transformation
GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mvpMatrix, 0);
// get handle to shape's rotation matrix
mRotationHandle = GLES20.glGetUniformLocation(iProgId, "uRotate");
// Apply the projection and view transformation
GLES20.glUniformMatrix4fv(mRotationHandle, 1, false, mRotationMatrix, 0);
//Set starting position for vertices
vertexBuf.position(0);
//Specify attributes for vertex
GLES20.glVertexAttribPointer(iPosition, 3, GLES20.GL_FLOAT, false, 5 * 4, vertexBuf);
//Enable attribute for position
GLES20.glEnableVertexAttribArray(iPosition);
//Set starting position for texture
vertexBuf.position(3);
//Specify attributes for vertex
GLES20.glVertexAttribPointer(iTexCoords, 2, GLES20.GL_FLOAT, false, 5 * 4, vertexBuf);
//Enable attribute for texture
GLES20.glEnableVertexAttribArray(iTexCoords);
//Draw it
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
}
For Edit 2:
This is my quad as drawn in the center of the screen. No rotation.
This is the same quad rotated by +45 degrees with the line "gl_Position = a_position * uMVPMatrix;" in my vertex shader (it's from a different project now, so the shader variable is a_position and not vPosition); it looks correct!
However, this is the same quad rotated by +45 degrees with the two shader variables switched (so the line reads "gl_Position = uMVPMatrix * a_position;"). As you can see, it's not quite right.
Also, just a side note: you can't see it here as the square is symmetrical, but each method also rotates in the opposite direction to the other...
Any help appreciated.
It's really impossible to tell because we don't know what you are passing to these two variables.
OpenGL uses column-major matrix layout, so if vPosition is in fact a vector and uMVPMatrix is a matrix, then the first option is correct, if this is in your shader.
If this is not in your shader but in your program code, then there is not enough information.
If you are using the first option but getting unexpected results, you are likely not computing your matrix properly or not passing the correct vertices.
Normally in the vertex shader you should multiply the positions by the MVP, that is
gl_Position = uMVPMatrix * vPosition;
When you change the order, this should work...
Thanks to all for the help.
I managed to track down the problem (For the most part). I will show what I did.
It was the following line:
Matrix.multiplyMM(mvpMatrix, 0, mvpMatrix, 0, mRotationMatrix, 0);
As you can see, I was multiplying the matrices and storing the result back into one of the matrices that I was using as an input to the multiplication.
So I created a new matrix called mvpMatrix2 and stored the result in that, then passed that to my vertex shader.
//Multiply matrices
Matrix.multiplyMM(mvpMatrix2, 0, mvpMatrix, 0, mRotationMatrix, 0);
//get handle to shape's transformation matrix
mMVPMatrixHandle = GLES20.glGetUniformLocation(iProgId, "uMVPMatrix");
//Give to vertex shader variable
GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mvpMatrix2, 0);
After applying this, there is no distortion (and also, with regard to my other question here, Using Matrix.Rotate in OpenGL ES 2.0, I am able to translate the centre of the quad). I say 'for the most part' because when I rotate it, it rotates backwards (so if I say rotate +45 degrees, clockwise, it actually rotates the quad by -45 degrees, anti-clockwise).
But hopefully, this will help anyone who has a similar problem in the future.
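To state the general rule behind that fix as a small sketch (the mTempMatrix name is just illustrative): never pass the same array to Matrix.multiplyMM as both an input and the result; write into a scratch matrix instead.
// Scratch array kept at class level so it isn't reallocated every frame.
private final float[] mTempMatrix = new float[16];
// Per frame / per draw:
Matrix.setRotateM(mRotationMatrix, 0, angle, 0, 0, 1.0f);
// Wrong: Matrix.multiplyMM(mvpMatrix, 0, mvpMatrix, 0, mRotationMatrix, 0);
// Right: the result goes into a distinct array.
Matrix.multiplyMM(mTempMatrix, 0, mvpMatrix, 0, mRotationMatrix, 0);
GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mTempMatrix, 0);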
I'm developing a 2D side-scrolling game. I have a tiled bitmap for the background, with size 2400x480. How can I scroll this bitmap?
I know that I can do it with the following code:
for(int i=0;i<100;i++)
draw(bitmap,2400*i,480);
So it will scroll the bitmap across 240,000 pixels.
But I don't want to draw tiles that are outside the screen (which is 800x480).
How can I render a scrolling tile? How can I give it velocity? (For parallax scrolling after the normal scrolling.)
The following code does not use two quads, but one quad, and adapts the texture coordinates accordingly. Please note that I stripped all texture handling code. The projection matrix solely contains glOrthof(0.0f, 1.0f, 0.0f, 1.0f, -1.0f, 1.0f);. Quad2D defines a quad whose tex coords can be animated using updateTexCoords:
static class Quad2D {
public Quad2D() {
/* SNIP create buffers */
float[] coords = { 1.0f, 0.0f, 0.0f, 0.0f, 1.0f, 1.0f, 0.0f, 1.0f, };
mFVertexBuffer.put(coords);
float[] texs = { 1.0f, 1.0f, 0.0f, 1.0f, 1.0f, 0.0f, 0.0f, 0.0f, };
mTexBuffer.put(texs);
/* SNIP reposition buffer indices */
}
public void updateTexCoords(float t) {
t = t % 1.0f;
mTexBuffer.put(0, 0.33f+t);
mTexBuffer.put(2, 0.0f+t);
mTexBuffer.put(4, 0.33f+t);
mTexBuffer.put(6, 0.0f+t);
}
public void draw(GL10 gl) {
glVertexPointer(2, GL_FLOAT, 0, mFVertexBuffer);
glTexCoordPointer(2, GL_FLOAT, 0, mTexBuffer);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}
/* SNIP members */
}
Now how to call updateTexCoords? Inside onDrawFrame, put the following code:
long time = SystemClock.uptimeMillis() % 3000L;
float velocity = 1.0f / 3000.0f;
float offset = time * velocity;
mQuad.updateTexCoords(offset);
Since the texture coordinates go beyond 1.0f, the texture wrap for the S coordinate must be set to GL_REPEAT.
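Setting that wrap mode is a one-off call when the texture is created or bound; a minimal sketch for the GL ES 1.x code above, assuming the texture id is held in mTextureId:
gl.glBindTexture(GL10.GL_TEXTURE_2D, mTextureId);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_REPEAT);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_CLAMP_TO_EDGE);
Keep in mind that on ES 1.x/2.0, GL_REPEAT generally requires a power-of-two texture, so a 2400x480 bitmap would need to be padded or split into power-of-two tiles.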
If you are OK working with shaders, there is a cleaner and maybe more efficient way of doing this.
Create a VBO with a quad covering the whole screen. The width and height should be 800 and 480 respectively.
Then create your vertex shader as:
attribute vec4 vPos;
uniform int shift;
varying mediump vec2 fTexCoord;
void main(void){
gl_Position = vPos;
fTexCoord = vec2((vPos.x - float(shift)) / 800.0, vPos.y / 480.0); // this logic might change slightly based on which quadrant you choose
}
The fragment shader would be something like:
precision mediump float;
uniform sampler2D texture;
varying mediump vec2 fTexCoord;
void main(void){
gl_FragColor = texture2D(texture, fTexCoord);
}
and you are done.
To scroll, just set the uniform value each frame using the glUniform* APIs.
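Per frame that would look roughly like this (handle and variable names are assumed; the shift value is in the same pixel units the shader divides by):
// Once, after linking the program:
int shiftLoc = GLES20.glGetUniformLocation(program, "shift");
// Every frame, before drawing the full-screen quad:
GLES20.glUseProgram(program);
GLES20.glUniform1i(shiftLoc, scrollOffsetPixels); // "shift" is declared as an int uniform in the vertex shader
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);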