In this posting, http://www.badlogicgames.com/wordpress/?p=504, Mr. libgdx Mario writes:
OpenGL from the ground up: an extremely well written tutorial series on OpenGL ES 1.x. Covers all the basics you need to get started with OpenGL. Note that the tutorial is written for the IPhone and uses Objective C/C++. This shouldn’t be a big problem though as the API is the same.
To my shame, I wasn't able to get a libgdx equivalent of the very first example in that tutorial running, which is this:
- (void)drawView:(GLView*)view;
{
    Vertex3D vertex1 = Vertex3DMake(0.0, 1.0, -3.0);
    Vertex3D vertex2 = Vertex3DMake(1.0, 0.0, -3.0);
    Vertex3D vertex3 = Vertex3DMake(-1.0, 0.0, -3.0);
    Triangle3D triangle = Triangle3DMake(vertex1, vertex2, vertex3);
    glLoadIdentity();
    glClearColor(0.7, 0.7, 0.7, 1.0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glEnableClientState(GL_VERTEX_ARRAY);
    glColor4f(1.0, 0.0, 0.0, 1.0);
    glVertexPointer(3, GL_FLOAT, 0, &triangle);
    glDrawArrays(GL_TRIANGLES, 0, 9);
    glDisableClientState(GL_VERTEX_ARRAY);
}
my code...
public void render () {
    Gdx.gl11.glLoadIdentity();
    Gdx.gl11.glRotatef(rotation, 0.0f, 0.0f, 1.0f);
    Gdx.gl11.glClearColor(0.7f, 0.7f, 0.7f, 1.0f);
    Gdx.gl11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);
    Gdx.gl11.glEnableClientState(GL11.GL_VERTEX_ARRAY);
    Gdx.gl11.glColor4f(1.0f, 0.0f, 0.0f, 1.0f);
    Gdx.gl11.glVertexPointer(3, GL11.GL_FLOAT, BYTES_PER_VERTEX, vertices);
    // count is in vertices, not floats: 3, not the 9 from the tutorial
    Gdx.gl11.glDrawArrays(GL11.GL_TRIANGLES, 0, 3);
    Gdx.gl11.glDisableClientState(GL11.GL_VERTEX_ARRAY);
}
The problem here is 'vertices'. I have no idea what that should be. After lots of googling I came up with:
final int BYTES_PER_VERTEX = (3 + 4) * 4; // 3 position + 4 color floats, 4 bytes each

public void create () {
    ByteBuffer buffer = ByteBuffer.allocateDirect(BYTES_PER_VERTEX * 3);
    buffer.order(ByteOrder.nativeOrder()); // GL requires native byte order
    vertices = buffer.asFloatBuffer();
    float[] verts = {
         0.0f, 1.0f, 0.0f, 1, 0, 0, 0,
         1.0f, 0.0f, 0.0f, 0, 1, 0, 0,
        -1.0f, 0.0f, 0.0f, 0, 0, 1, 0};
    vertices.put(verts);
    vertices.flip();
}
...and that seems to display a triangle, but the vertex values are not the same as in the original example (the z value is 0 instead of -3; with -3 I wouldn't see anything at all).
Can anyone shed any light on vertices?
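For what it's worth, a `vertices` like this is just a direct, native-order FloatBuffer. Here is a plain-Java sketch of the same interleaved layout (no GL calls; class and method names are illustrative), which also shows the stride arithmetic:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

public class InterleavedBufferDemo {
    // 3 position floats + 4 color floats, 4 bytes each -- the stride that
    // glVertexPointer/glColorPointer would be given for this layout.
    static final int FLOATS_PER_VERTEX = 3 + 4;
    static final int BYTES_PER_VERTEX = FLOATS_PER_VERTEX * 4;

    static FloatBuffer makeVertices(float[] verts) {
        ByteBuffer bb = ByteBuffer.allocateDirect(verts.length * 4);
        bb.order(ByteOrder.nativeOrder()); // GL requires native byte order
        FloatBuffer fb = bb.asFloatBuffer();
        fb.put(verts);
        fb.flip();                         // rewind so GL reads from index 0
        return fb;
    }

    public static void main(String[] args) {
        float[] verts = {
            //  x      y      z     r  g  b  a
             0.0f,  1.0f, -3.0f,   1, 0, 0, 1,
             1.0f,  0.0f, -3.0f,   0, 1, 0, 1,
            -1.0f,  0.0f, -3.0f,   0, 0, 1, 1,
        };
        FloatBuffer fb = makeVertices(verts);
        System.out.println(fb.remaining());            // 21 floats = 3 vertices
        System.out.println(fb.get(2));                 // z of vertex 0 -> -3.0
        System.out.println(fb.get(FLOATS_PER_VERTEX)); // x of vertex 1 -> 1.0
    }
}
```

Note also that glDrawArrays counts vertices, not floats, so the count is 3 here; and with the default identity projection, only z values in [-1, 1] survive clipping, which is why vertices at z = -3 show nothing until a projection matrix is set up.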
Here's what I have. I create a square instead of a triangle, but you get the gist. A lot of the boilerplate in the iPhone tutorial is taken care of internally. You can dissect the libgdx source if you are curious how it goes about the calls internally (camera management, mesh management, etc.).
in create():
mesh = new Mesh(true, 4, 4,
        new VertexAttribute(Usage.Position, 3, "a_position"),
        new VertexAttribute(Usage.ColorPacked, 4, "a_color"));

mesh.setVertices(new float[] {
        -1.0f, -1.0f, -3.0f, Color.toFloatBits(255, 0, 0, 255),
         1.0f, -1.0f, -3.0f, Color.toFloatBits(255, 0, 0, 255),
        -1.0f,  1.0f, -3.0f, Color.toFloatBits(255, 0, 0, 255),
         1.0f,  1.0f, -3.0f, Color.toFloatBits(255, 0, 0, 255)});
mesh.setIndices(new short[] { 0, 1, 2, 3 });
in resize():
float aspectRatio = (float) width / (float) height;
camera = new PerspectiveCamera(67, 2f * aspectRatio, 2f);
camera.near = 0.1f;
camera.translate(0, 0, 0);
in render():
Gdx.gl11.glClearColor(0.7f, 0.7f, 0.7f, 1.0f);
Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
mesh.render(GL10.GL_TRIANGLE_STRIP, 0, 4);
Hope that helps.
Resources:
http://dpk.net/2011/03/07/libgdx-cubes-handling-inputs-in-applicationlistener-render/
http://www.badlogicgames.com/wordpress/?p=2032
Perhaps these are simple questions; I would appreciate it if someone could answer them.
Both questions concern OpenGL ES in an Android environment.
How can I verify whether MSAA is actually enabled? If I draw some GL_POINTS with a point size of 50, I get small squares. If I enable 4x MSAA, should those squares become round points?
I tried my best to enable MSAA with an FBO and a blit, but it draws nothing, and glBlitFramebuffer() raises GL_INVALID_OPERATION.
Here is the complete project I mentioned above: https://github.com/Enoch-Liu/GL
The key code follows:
void Renderer::MultisampleAntiAliasing() {
    glGenRenderbuffers(1, &m_MSColor);
    glBindRenderbuffer(GL_RENDERBUFFER, m_MSColor);
    glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_RGBA8, m_width, m_height);
    checkGLError("GenMSColorBuffer");

    glGenFramebuffers(1, &m_MSFBO);
    glBindFramebuffer(GL_FRAMEBUFFER, m_MSFBO);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, m_MSColor);
    checkGLError("FboRbo,COLORATTACHMENT");

    glGenRenderbuffers(1, &m_MSDepth);
    glBindRenderbuffer(GL_RENDERBUFFER, m_MSDepth);
    glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_DEPTH_COMPONENT16, m_width, m_height);
    checkGLError("GenDepthBuffer");

    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, m_MSDepth);
    checkGLError("DepthBuffer,Renderbuffer");

    GLenum drawBufs[] = {GL_COLOR_ATTACHMENT0};
    glDrawBuffers(1, drawBufs);
    checkGLError("DrawBuffer");

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
        LOG_ERROR("failed to make complete framebuffer object %x", glCheckFramebufferStatus(GL_FRAMEBUFFER));
    }
}
void Renderer::drawFrame() {
    //LOG_INFO("drawFrame %d x %d", width, height);
    static float r = 0.9f;
    static float g = 0.2f;
    static float b = 0.2f;
    LOG_INFO("xxx %d, %d", m_width, m_height);

    if (OPENMSAA)
    {
        glBindFramebuffer(GL_FRAMEBUFFER, m_MSFBO);
        glBindRenderbuffer(GL_RENDERBUFFER, m_MSColor);
        checkGLError("BindTwoBuffers");
    }

    glViewport(0, 0, m_width, m_height);
    glScissor(0, 0, m_width, m_height);
    glClearColor(r, g, b, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glDisable(GL_DEPTH_TEST);

    const GLfloat landscapeOrientationMatrix[16] = {
        1.0f, 0.0f, 0.0f, 0.0f,
        0.0f, 1.0f, 0.0f, 0.0f,
        0.0f, 0.0f, 0.0f, 0.0f,
        0.0f, 0.0f, 0.0f, 1.0f };

    const GLfloat color[4] = {
        1.0f, 0.0f, 0.0f, 1.0f
    };

    glUseProgram(m_program);
    glUniformMatrix4fv(m_uMvp, 1, GL_FALSE, landscapeOrientationMatrix);
    glUniform4fv(m_uColor, 1, color);

    m_p = glGetAttribLocation(m_program, "vPosition");
    m_p1 = glGetAttribLocation(m_program, "vPosition1");
    glEnableVertexAttribArray(m_p);
    glVertexAttribPointer(m_p, 3, GL_FLOAT, false, 3 * sizeof(float), squareCoords);
    glDrawArrays(GL_POINTS, 0, 4);
    glDisableVertexAttribArray(m_p);
    glFlush();
    checkGLError("Before Blit");

    if (OPENMSAA)
    {
        glBindFramebuffer(GL_READ_FRAMEBUFFER, m_MSFBO);
        checkGLError("BindReadBuffer");
        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
        checkGLError("BindFramebuffer");
        glBlitFramebuffer(0, 0, m_width, m_height, 0, 0, m_width, m_height, GL_COLOR_BUFFER_BIT, GL_NEAREST);
        checkGLError("BlitFramebufferColor");
        glBlitFramebuffer(0, 0, m_width, m_height, 0, 0, m_width, m_height, GL_DEPTH_BUFFER_BIT, GL_NEAREST);
        checkGLError("BlitFramebufferDepth");
        glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
    }
}
The framebuffer is complete.
The internal formats of the depth buffers have to match: https://www.opengl.org/discussion_boards/showthread.php/173275-Alternative-to-glBlitFramebuffer%28%29
Looking at your GitHub project, you are not configuring a depth buffer at all. From your project:
const EGLint attribs[] = {
    // EGL_SURFACE_TYPE, EGL_WINDOW_BIT,
    EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
    EGL_SURFACE_TYPE, EGL_PBUFFER_BIT,
    EGL_BLUE_SIZE, 8,
    EGL_GREEN_SIZE, 8,
    EGL_RED_SIZE, 8,
    EGL_ALPHA_SIZE, 8,
    EGL_SAMPLE_BUFFERS, 1,
    EGL_SAMPLES, 4,
    EGL_NONE
};
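A minimal sketch of the fix, assuming the rest of the EGL setup stays as-is (shown here with Android's `EGL14` constants; the `egl.h` names on the C side are the same without the class prefix): request a depth buffer in the config, since blitting GL_DEPTH_BUFFER_BIT into a default framebuffer that has no depth attachment is an INVALID_OPERATION.

```java
// Same attribute list with a depth buffer requested. EGL_DEPTH_SIZE is the
// only addition; 16 bits lines up with the GL_DEPTH_COMPONENT16 renderbuffer
// used as the blit source (the formats have to be compatible).
final int[] attribs = {
    EGL14.EGL_RENDERABLE_TYPE, EGL14.EGL_OPENGL_ES2_BIT,
    EGL14.EGL_SURFACE_TYPE,    EGL14.EGL_PBUFFER_BIT,
    EGL14.EGL_BLUE_SIZE,  8,
    EGL14.EGL_GREEN_SIZE, 8,
    EGL14.EGL_RED_SIZE,   8,
    EGL14.EGL_ALPHA_SIZE, 8,
    EGL14.EGL_DEPTH_SIZE, 16,   // <-- request a depth buffer
    EGL14.EGL_SAMPLE_BUFFERS, 1,
    EGL14.EGL_SAMPLES, 4,
    EGL14.EGL_NONE
};
```

Note also that glBlitFramebuffer requires the draw framebuffer to be single-sampled; if the MSAA resolve is done through the FBO anyway, it may be cleaner to drop EGL_SAMPLE_BUFFERS/EGL_SAMPLES from the config so the default framebuffer is not itself multisampled.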
I have a vertex (which I will not be showing/rendering in the scene):
float vertex[] = {
1.0f, 1.0f, 1.0f,
};
And I have a mesh, which I have translated and rotated using:
Matrix.translateM(World.mModelMatrix, tmOffset, globalPositionX, globalPositionY, globalPositionZ);
Matrix.rotateM(World.mModelMatrix, rmOffset, globalRotationZ, 0, 0, 1);
Matrix.rotateM(World.mModelMatrix, rmOffset, globalRotationY, 0, 1, 0);
Matrix.rotateM(World.mModelMatrix, rmOffset, globalRotationX, 1, 0, 0);
How can I apply those translations and rotations to the vertex, and get its global position (x, y, z) afterwards?
Use the Matrix.multiplyMV method:
float vertex[] = { 1.0f, 1.0f, 1.0f, 1.0f };
float result[] = { 0.0f, 0.0f, 0.0f, 0.0f };
Matrix.multiplyMV(result, 0, matrix, 0, vertex, 0);
Note that you will have to add a homogeneous coordinate (w = 1) to your vector to make the translation apply.
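To make the convention concrete, here is a plain-Java sketch of what `Matrix.multiplyMV` computes (column-major, the same element layout `android.opengl.Matrix` uses), showing why w = 1 is needed for the translation to take effect:

```java
public class MultiplyMVDemo {
    // Column-major 4x4 * vec4, matching android.opengl.Matrix.multiplyMV:
    // result[i] = sum over j of m[j*4 + i] * v[j]
    static float[] multiplyMV(float[] m, float[] v) {
        float[] r = new float[4];
        for (int i = 0; i < 4; i++) {
            r[i] = m[i] * v[0] + m[4 + i] * v[1] + m[8 + i] * v[2] + m[12 + i] * v[3];
        }
        return r;
    }

    public static void main(String[] args) {
        // Translation by (2, 3, 4) in column-major layout: the translation
        // lives in elements 12..14, exactly where Matrix.translateM puts it.
        float[] t = {
            1, 0, 0, 0,
            0, 1, 0, 0,
            0, 0, 1, 0,
            2, 3, 4, 1,
        };
        float[] vertex = {1f, 1f, 1f, 1f};   // w = 1 -> a point, so it is translated
        float[] p = multiplyMV(t, vertex);
        System.out.println(p[0] + " " + p[1] + " " + p[2]); // 3.0 4.0 5.0

        float[] dir = {1f, 1f, 1f, 0f};      // w = 0 -> a direction, translation ignored
        float[] d = multiplyMV(t, dir);
        System.out.println(d[0] + " " + d[1] + " " + d[2]); // 1.0 1.0 1.0
    }
}
```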
I use OpenGL ES 2.0 to draw Square objects, as in this tutorial:
developer.android.com/training/graphics/opengl/draw.html
and to draw text, as in this tutorial:
fractiousg.blogspot.com/2012/04/rendering-text-in-opengl-on-android.html
d1.draw(mMVPMatrix);
d2.draw(mMVPMatrix);
d3.draw(mMVPMatrix);
d4.draw(mMVPMatrix);
p1.draw(mMVPMatrix);
if(punkty>0)
p2.draw(mMVPMatrix);
koala.draw(mMVPMatrix);
punktyString = String.valueOf(punkty);
glText.begin( 1.0f, 1.0f, 1.0f, 1.0f, mVPMatrix );
glText.drawC(punktyString, width/2, height/2);
glText.end();
To draw the Square objects I use mMVPMatrix. To draw the text I use mVPMatrix.
onSurfaceChanged:
float ratio = (float) width / height;
Matrix.frustumM(mProjMatrix, 0, -1, 1, -1 / ratio, 1 / ratio, 1, 10);
int useForOrtho = Math.min(width, height);
//TODO: Is this wrong?
Matrix.orthoM(mVMatrix, 0,
-useForOrtho / 2,
useForOrtho / 2,
-useForOrtho / 2,
useForOrtho / 2, 0.1f, 100f);
Matrix.frustumM(mProjectionMatrix, 0, 1, -1, -1, 1, 3, 7);
onDrawFrame:
Matrix.setLookAtM(mViewMatrix, 0, 0, 0, -3, 0f, 0f, 0f, 0f, 1.0f, 0.0f);
// Calculate the projection and view transformation
Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mViewMatrix, 0);
Matrix.multiplyMM(mVPMatrix, 0, mProjMatrix, 0, mVMatrix, 0);
On most devices the app works well:
http://prntscr.com/75ucsf
But on some devices it looks like this:
http://prntscr.com/75ucle
Only the text is rendered; the Square objects aren't.
What should I do to render both the Square objects and the text on all devices?
Ask me if you need more information.
I have a problem with the setLookAtM function. My goal is to create a cube within a cube, something like this (yep, it's Paint :P):
So basically everything works... almost. I have the smaller cube and I have the bigger one.
However, there is a problem. I created the bigger one with coords from -1 to 1, and now I want to upscale it. With a scale of 1.0f I have something like this (the inner cube is rotating):
And that's good, but now... when I try to scale the bigger cube (so that it looks like in the Paint drawing), the image goes black or white. I guess it's because the "camera" looks at the white cube, but I still don't know why my inner cube disappears. I don't understand what I'm doing wrong. Here is my code:
public void onDrawFrame(GL10 unused) {
float[] scratch = new float[16];
GLES20.glClear(GLES20.GL_DEPTH_BUFFER_BIT | GLES20.GL_COLOR_BUFFER_BIT);
GLES20.glEnable(GLES20.GL_DEPTH_TEST);
Matrix.setLookAtM(mViewMatrix, 0, 0, 0, -5.0f, 0f, 0f, -1.0f, 0f, 1.0f, 0.0f);
Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mViewMatrix, 0);
mRoom.mScale = 1.0f;
Matrix.setIdentityM(mScaleMatrix, 0);
Matrix.scaleM(mScaleMatrix, 0, mRoom.mScale, mRoom.mScale, mRoom.mScale);
float[] scaleTempMatrix = new float[16];
Matrix.multiplyMM(scaleTempMatrix, 0, mMVPMatrix, 0, mScaleMatrix, 0);
mRoom.draw(scaleTempMatrix);
When I set for example:
mRoom.mScale = 3.0f;
And
Matrix.setLookAtM(mViewMatrix, 0, 0, 0, -2.0f, 0f, 0f, 0.0f, 1.0f, 1.0f, 0.0f);
My camera should be at (0, 0, -2) looking at (0, 0, -1), so it should be inside the white cube (since the scale is 3.0, the coords should run from -3 to 3, right?). But all I get is a white screen, without the smaller cube rotating inside.
If your scale is 3x in this code, then your visible coordinate range is actually going to be [-1/3,1/3].
You are thinking about this backwards; it might help to consider the order in which the scale operation is applied. Right now you are scaling the object-space coordinates, then applying the view matrix, and then the projection. It may not look that way, but that is how matrix multiplication in GL works: GL effectively flips the operands when it multiplies matrices, and matrix multiplication is not commutative.
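The non-commutativity is easy to check in plain Java with a minimal column-major multiply (same element layout as `android.opengl.Matrix`); scaling before vs. after translating a point gives different results:

```java
public class MatrixOrderDemo {
    // Column-major 4x4 multiply, same layout as android.opengl.Matrix.multiplyMM.
    static float[] mul(float[] a, float[] b) {
        float[] r = new float[16];
        for (int c = 0; c < 4; c++)
            for (int i = 0; i < 4; i++)
                for (int j = 0; j < 4; j++)
                    r[c * 4 + i] += a[j * 4 + i] * b[c * 4 + j];
        return r;
    }

    // Column-major 4x4 * vec4.
    static float[] apply(float[] m, float[] v) {
        float[] r = new float[4];
        for (int i = 0; i < 4; i++)
            r[i] = m[i] * v[0] + m[4 + i] * v[1] + m[8 + i] * v[2] + m[12 + i] * v[3];
        return r;
    }

    public static void main(String[] args) {
        float[] scale     = {3,0,0,0, 0,3,0,0, 0,0,3,0, 0,0,0,1};
        float[] translate = {1,0,0,0, 0,1,0,0, 0,0,1,0, 5,0,0,1};
        float[] p = {1, 0, 0, 1};

        // (translate * scale) * p : scale happens first -> x = 3*1 + 5 = 8
        System.out.println(apply(mul(translate, scale), p)[0]); // 8.0
        // (scale * translate) * p : translate happens first -> x = 3*(1+5) = 18
        System.out.println(apply(mul(scale, translate), p)[0]); // 18.0
    }
}
```

The same arithmetic shows why a 3x scale applied after the projection shrinks the visible range to [-1/3, 1/3]: a clip-space x of 0.5 becomes 1.5 and is clipped away.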
I believe this is what you actually want:
public void onDrawFrame(GL10 unused) {
    float[] scratch = new float[16];
    GLES20.glClear(GLES20.GL_DEPTH_BUFFER_BIT | GLES20.GL_COLOR_BUFFER_BIT);
    GLES20.glEnable(GLES20.GL_DEPTH_TEST);
    Matrix.setLookAtM(mViewMatrix, 0, 0, 0, -5.0f, 0f, 0f, -1.0f, 0f, 1.0f, 0.0f);
    mRoom.mScale = 3.0f;
    Matrix.setIdentityM(mScaleMatrix, 0);
    Matrix.scaleM(mScaleMatrix, 0, mRoom.mScale, mRoom.mScale, mRoom.mScale);
    // multiplyMM's result array must not overlap its inputs, so go through scratch
    Matrix.multiplyMM(scratch, 0, mScaleMatrix, 0, mProjectionMatrix, 0);
    Matrix.multiplyMM(mMVPMatrix, 0, scratch, 0, mViewMatrix, 0);
    mRoom.draw(mMVPMatrix);
}
I would like to have a gradient background in OpenGL.
I found these two links, but I cannot reproduce the result:
OpenGL gradient fill on iPhone looks striped
OpenGL gradient banding on Android
I tried the following from the first link:
// Begin Render
//IntBuffer redBits = null, greenBits = null, blueBits = null;
//gl.glGetIntegerv (GL10.GL_RED_BITS, redBits); // ==> 8
//gl.glGetIntegerv (GL10.GL_GREEN_BITS, greenBits); // ==> 8
//gl.glGetIntegerv (GL10.GL_BLUE_BITS, blueBits); // ==> 8
gl.glDisable(GL10.GL_BLEND);
gl.glDisable(GL10.GL_DITHER);
gl.glDisable(GL10.GL_FOG);
gl.glDisable(GL10.GL_LIGHTING);
gl.glDisable(GL10.GL_TEXTURE_2D);
gl.glShadeModel(GL10.GL_SMOOTH);
float[] vertices = {
0, 0,
320, 0,
0, 480,
320, 480,
};
FloatBuffer vertsBuffer = makeFloatBuffer(vertices);
int[] colors = {
255, 255, 255, 255,
255, 255, 255, 255,
200, 200, 200, 255,
200, 200, 200, 255,
};
IntBuffer colorBuffer = makeIntBuffer(colors);
gl.glVertexPointer(2, GL10.GL_FLOAT, 0, vertsBuffer);
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glColorPointer(4, GL10.GL_UNSIGNED_BYTE, 0, colorBuffer);
gl.glEnableClientState(GL10.GL_COLOR_ARRAY);
gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, 4);
// End Render
protected static FloatBuffer makeFloatBuffer(float[] arr) {
    ByteBuffer bb = ByteBuffer.allocateDirect(arr.length * 4);
    bb.order(ByteOrder.nativeOrder());
    FloatBuffer fb = bb.asFloatBuffer();
    fb.put(arr);
    fb.position(0);
    return fb;
}

protected static IntBuffer makeIntBuffer(int[] arr) {
    ByteBuffer bb = ByteBuffer.allocateDirect(arr.length * 4);
    bb.order(ByteOrder.nativeOrder());
    IntBuffer ib = bb.asIntBuffer();
    ib.put(arr);
    ib.position(0);
    return ib;
}
But it just shows a rectangle in the upper-right corner. I don't know whether the commented-out glGetIntegerv calls would make a difference. Any ideas or links on how to make it work?
SOLUTION
// set orthographic perspective
setOrtho2D(activity, gl);
gl.glDisable(GL10.GL_BLEND);
//gl.glDisable(GL10.GL_DITHER);
gl.glDisable(GL10.GL_FOG);
gl.glDisable(GL10.GL_LIGHTING);
gl.glDisable(GL10.GL_TEXTURE_2D);
gl.glShadeModel(GL10.GL_SMOOTH);
float[] vertices = {
0, 0,
_winWidth, 0,
0, _winHeight,
_winWidth, _winHeight
};
FloatBuffer vertsBuffer = makeFloatBuffer(vertices);
float[] colors = {
1.0f, 1.0f, 1.0f, 1.0f,
1.0f, 1.0f, 1.0f, 1.0f,
0.2f, 0.2f, 0.2f, 1.0f,
0.2f, 0.2f, 0.2f, 1.0f
};
FloatBuffer colorBuffer = makeFloatBuffer(colors);
gl.glVertexPointer(2, GL10.GL_FLOAT, 0, vertsBuffer);
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glColorPointer(4, GL10.GL_FLOAT, 0, colorBuffer);
gl.glEnableClientState(GL10.GL_COLOR_ARRAY);
gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, 4);
I had forgotten to comment the perspective-setup line back in. I also changed the vertex order from a "U" shape to a "Z" shape (as Nick suggested in the comments).
Now it looks like how I want it:
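The "U" vs. "Z" ordering matters for GL_TRIANGLE_STRIP because consecutive triangles in a strip must alternate geometric winding (GL flips every odd triangle's winding internally). A small check in plain Java, no GL needed:

```java
public class StripOrderDemo {
    // Twice the signed area of triangle (a, b, c); the sign encodes winding
    // (positive = counter-clockwise with y pointing up).
    static float signedArea2(float[] a, float[] b, float[] c) {
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]);
    }

    // For GL_TRIANGLE_STRIP, consecutive vertex triples must alternate
    // geometric winding, because GL flips the winding of every odd triangle.
    static boolean stripWindingAlternates(float[][] v) {
        float a0 = signedArea2(v[0], v[1], v[2]);
        float a1 = signedArea2(v[1], v[2], v[3]);
        return a0 * a1 < 0;
    }

    public static void main(String[] args) {
        float w = 320, h = 480;
        float[][] zOrder = {{0, 0}, {w, 0}, {0, h}, {w, h}}; // two triangles tiling the quad
        float[][] uOrder = {{0, 0}, {w, 0}, {w, h}, {0, h}}; // perimeter order -> bowtie
        System.out.println(stripWindingAlternates(zOrder)); // true
        System.out.println(stripWindingAlternates(uOrder)); // false
    }
}
```

With the "U" (perimeter) order, the second strip triangle has the same geometric winding as the first, so the quad renders as an overlapping bowtie, which is exactly the "rectangle in the corner" artifact.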
This is a problem:
int[] colors;
....
gl.glColorPointer(4, GL10.GL_UNSIGNED_BYTE, 0, colorBuffer);
You are using signed four-byte integers for your color channels and then telling OpenGL that they are unsigned one-byte integers. You should be using a buffer of unsigned bytes.
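To see what GL actually reads in that case: with GL_UNSIGNED_BYTE and stride 0, the first four bytes of the buffer become the first vertex color. On a little-endian device the int 255 is stored as FF 00 00 00, so "white" (255, 255, 255, 255) comes out as (255, 0, 0, 0), red with alpha 0. A plain-Java demonstration:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class ColorBytesDemo {
    // Reinterpret an int[] color array the way GL_UNSIGNED_BYTE with stride 0
    // would: return the first four bytes as 0..255 channel values.
    static int[] firstColorAsBytes(int[] channels) {
        ByteBuffer bb = ByteBuffer.allocateDirect(channels.length * 4)
                                  .order(ByteOrder.LITTLE_ENDIAN); // typical native order on ARM/x86
        bb.asIntBuffer().put(channels);
        return new int[] {
            bb.get(0) & 0xFF, bb.get(1) & 0xFF, bb.get(2) & 0xFF, bb.get(3) & 0xFF
        };
    }

    public static void main(String[] args) {
        // "White, fully opaque" stored the wrong way: one int per channel.
        int[] white = {255, 255, 255, 255};
        int[] seen = firstColorAsBytes(white);
        // GL reads the four bytes of the first int instead:
        System.out.println(seen[0] + "," + seen[1] + "," + seen[2] + "," + seen[3]); // 255,0,0,0
    }
}
```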
It would be easier, however, to just use floats instead:
float[] colors = {
    1.0f, 1.0f, 1.0f, 1.0f,
    1.0f, 1.0f, 1.0f, 1.0f,
    0.5f, 0.5f, 0.5f, 1.0f,
    0.5f, 0.5f, 0.5f, 1.0f,
};
float[] vertices = {
    0, 0,
    800, 0,
    0, 480,
    800, 480,
};
FloatBuffer colorBuffer = makeFloatBuffer(colors);
gl.glColorPointer(4, GL10.GL_FLOAT, 0, colorBuffer);