I use this code to render a single textured rectangle. I have 2 floats for the vertex position, 2 floats for the texture coordinates and 1 for the texture id. This code works correctly and renders my rectangle (don't mind the weird float data):
public void renderNonVBO(){
float [] data = new float[]{66.61024f, 27.520004f, 0.0f, 0.206f, 0.0f,
77.950134f, 49.80019f, 0.141f, 0.206f, 0.0f,
33.389763f, 72.479996f, 0.0f, 0.206f, 0.0f,
22.049864f, 50.19981f, 0.0f, 0.206f, 0.0f};
FloatBuffer b = ByteBuffer.allocateDirect(20 * FLOAT_SIZE).order(ByteOrder.nativeOrder()).asFloatBuffer();
b.put(data);
b.position(0);
GLES20.glEnableVertexAttribArray(aPosition);
GLES20.glVertexAttribPointer(aPosition, POSITION_SIZE, GLES20.GL_FLOAT, false, TOTAL_SIZE * FLOAT_SIZE, b);
b.position(2);
GLES20.glEnableVertexAttribArray(aTexPos);
GLES20.glVertexAttribPointer(aTexPos, TEXTURE_SIZE, GLES20.GL_FLOAT, false, TOTAL_SIZE * FLOAT_SIZE, b);
b.position(4);
GLES20.glEnableVertexAttribArray(aTexIdPos);
GLES20.glVertexAttribPointer(aTexIdPos, TEXTURE_ID_SIZE, GLES20.GL_FLOAT, false, TOTAL_SIZE * FLOAT_SIZE, b);
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_FAN, 0, 4);
}
But when I try to switch to VBOs using this code, my app crashes:
public void renderVBO(){
float [] data = new float[]{66.61024f, 27.520004f, 0.0f, 0.206f, 0.0f,
77.950134f, 49.80019f, 0.141f, 0.206f, 0.0f,
33.389763f, 72.479996f, 0.0f, 0.206f, 0.0f,
22.049864f, 50.19981f, 0.0f, 0.206f, 0.0f};
FloatBuffer b = ByteBuffer.allocateDirect(20 * FLOAT_SIZE).order(ByteOrder.nativeOrder()).asFloatBuffer();
b.put(data);
b.position(0);
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, buffers[0]);
GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, b.capacity() * FLOAT_SIZE, b, GLES20.GL_STATIC_DRAW);
GLES20.glEnableVertexAttribArray(aPosition);
GLES20.glVertexAttribPointer(aPosition, POSITION_SIZE, GLES20.GL_FLOAT, false, TOTAL_SIZE * FLOAT_SIZE, 0);
GLES20.glEnableVertexAttribArray(aTexPos);
GLES20.glVertexAttribPointer(aTexPos, TEXTURE_SIZE, GLES20.GL_FLOAT, false, TOTAL_SIZE * FLOAT_SIZE, 8);
GLES20.glEnableVertexAttribArray(aTexIdPos);
GLES20.glVertexAttribPointer(aTexIdPos, TEXTURE_ID_SIZE, GLES20.GL_FLOAT, false, TOTAL_SIZE * FLOAT_SIZE, 16);
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_FAN, 0, 4);
}
The only differences are the use of a VBO and the last parameter of glVertexAttribPointer, which is now a numeric offset instead of a FloatBuffer. buffers[0] was returned by glGenBuffers. When I look into logcat I see this message:
D/emuglGLESv2_enc: sendVertexAttributes: bad offset / len!!!!!
This is weird, because my offsets and lengths should be correct here; they are the same for both methods. My constants:
final int FLOAT_SIZE = 4;
final int POSITION_SIZE = 2;
final int TEXTURE_SIZE = 2;
final int TEXTURE_ID_SIZE = 1;
final int TOTAL_SIZE = POSITION_SIZE + TEXTURE_SIZE + TEXTURE_ID_SIZE;
What am I doing wrong?
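For reference, this is the kind of error-checking helper I can sprinkle between the GL calls to see which one the emulator actually rejects (the helper name is mine; the SDK GLES20 samples use the same pattern):
private void checkGlError(String op) {
    int error;
    while ((error = GLES20.glGetError()) != GLES20.GL_NO_ERROR) {
        Log.e("renderVBO", op + ": glError 0x" + Integer.toHexString(error));
    }
}
// Usage inside renderVBO(), after each call:
// GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, buffers[0]);   checkGlError("glBindBuffer");
// GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, b.capacity() * FLOAT_SIZE, b, GLES20.GL_STATIC_DRAW);   checkGlError("glBufferData");
// GLES20.glVertexAttribPointer(aPosition, POSITION_SIZE, GLES20.GL_FLOAT, false, TOTAL_SIZE * FLOAT_SIZE, 0);   checkGlError("glVertexAttribPointer aPosition");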
In OpenGL there is a term called picking, which is used to determine which object on the screen was selected. Can someone explain the difference between using picking and putting a touch-based listener in each and every instance of an object, e.g. a Cube class?
Hypothetically, what I want to do is display multiple cubes on the screen at random positions. I figured that if I give the Cube class a listener, then upon touching a cube the listener should fire accordingly for each cube pressed.
This is the code I would add the listener to.
Would this be possible or is picking necessary?
public class Cube extends Shapes {
private FloatBuffer mVertexBuffer;
private FloatBuffer mColorBuffer;
private ByteBuffer mIndexBuffer;
private Triangle[] normTris = new Triangle[12];
private Triangle[] transTris = new Triangle[12];
// every 3 entries represent the position of one vertex
private float[] vertices =
{
-1.0f, -1.0f, -1.0f,
1.0f, -1.0f, -1.0f,
1.0f, 1.0f, -1.0f,
-1.0f, 1.0f, -1.0f,
-1.0f, -1.0f, 1.0f,
1.0f, -1.0f, 1.0f,
1.0f, 1.0f, 1.0f,
-1.0f, 1.0f, 1.0f
};
// every 4 entries represent the color (r,g,b,a) of the corresponding vertex in vertices
private float[] colors =
{
1.0f, 0.0f, 0.0f, 1.0f,
0.0f, 1.0f, 0.0f, 1.0f,
0.0f, 0.0f, 1.0f, 1.0f,
1.0f, 0.0f, 0.0f, 1.0f,
0.0f, 1.0f, 0.0f, 1.0f,
0.0f, 0.0f, 1.0f, 1.0f,
0.0f, 0.0f, 0.0f, 1.0f,
1.0f, 1.0f, 1.0f, 1.0f
};
// every 3 entries make up a triangle, every 6 entries make up a side
private byte[] indices =
{
0, 4, 5, 0, 5, 1,
1, 5, 6, 1, 6, 2,
2, 6, 7, 2, 7, 3,
3, 7, 4, 3, 4, 0,
4, 7, 6, 4, 6, 5,
3, 0, 1, 3, 1, 2
};
private float[] createVertex(int Index)
{
float[] vertex = new float[3];
int properIndex = Index * 3;
vertex[0] = vertices[properIndex];
vertex[1] = vertices[properIndex + 1];
vertex[2] = vertices[properIndex + 2];
return vertex;
}
public Triangle getTriangle(int index){
Triangle tri = null;
//if(index >= 0 && index < indices.length){
float[] v1 = createVertex(indices[(index * 3) + 0]);
float[] v2 = createVertex(indices[(index * 3) + 1]);
float[] v3 = createVertex(indices[(index * 3) + 2]);
tri = new Triangle(v1, v2, v3);
// }
return tri;
}
public int getNumberOfTriangles(){
return indices.length / 3;
}
public boolean checkCollision(Ray r, OpenGLRenderer renderer){
boolean isCollide = false;
int i = 0;
while(i < getNumberOfTriangles() && !isCollide){
float[] I = new float[3];
if(Shapes.intersectRayAndTriangle(r, transTris[i], I) > 0){
isCollide = true;
}
i++;
}
return isCollide;
}
public void translate(float[] trans){
for(int i = 0; i < getNumberOfTriangles(); i++){
transTris[i].setV1(Vector.addition(transTris[i].getV1(), trans));
transTris[i].setV2(Vector.addition(transTris[i].getV2(), trans));
transTris[i].setV3(Vector.addition(transTris[i].getV3(), trans));
}
}
public void scale(float[] scale){
for(int i = 0; i < getNumberOfTriangles(); i++){
transTris[i].setV1(Vector.scalePoint(transTris[i].getV1(), scale));
transTris[i].setV2(Vector.scalePoint(transTris[i].getV2(), scale));
transTris[i].setV3(Vector.scalePoint(transTris[i].getV3(), scale));
}
}
public void resetTransfomations(){
for(int i = 0; i < getNumberOfTriangles(); i++){
transTris[i].setV1(normTris[i].getV1().clone());
transTris[i].setV2(normTris[i].getV2().clone());
transTris[i].setV3(normTris[i].getV3().clone());
}
}
public Cube()
{
Buffer[] buffers = super.getBuffers(vertices, colors, indices);
mVertexBuffer = (FloatBuffer) buffers[0];
mColorBuffer = (FloatBuffer) buffers[1];
mIndexBuffer = (ByteBuffer) buffers[2];
}
public Cube(float[] vertices, float[] colors, byte[] indices)
{
if(vertices != null) {
this.vertices = vertices;
}
if(colors != null) {
this.colors = colors;
}
if(indices != null) {
this.indices = indices;
}
Buffer[] buffers = getBuffers(this.vertices, this.colors, this.indices);
mVertexBuffer = (FloatBuffer) buffers[0];
mColorBuffer = (FloatBuffer) buffers[1];
mIndexBuffer = (ByteBuffer) buffers[2];
for(int i = 0; i < getNumberOfTriangles(); i++){
normTris[i] = getTriangle(i);
transTris[i] = getTriangle(i);
}
}
public void draw(GL10 gl)
{
gl.glFrontFace(GL10.GL_CW);
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, mVertexBuffer);
gl.glColorPointer(4, GL10.GL_FLOAT, 0, mColorBuffer);
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glEnableClientState(GL10.GL_COLOR_ARRAY);
// draw all 36 triangles
gl.glDrawElements(GL10.GL_TRIANGLES, 36, GL10.GL_UNSIGNED_BYTE, mIndexBuffer);
gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
gl.glDisableClientState(GL10.GL_COLOR_ARRAY);
}
}
Using a Listener does not work in this case.
Take, for example, the OnTouchListener. This is basically an interface providing just a single method, onTouch(). When Android processes touch input and the target view is touched, it knows that your listener can be informed about the touch by calling onTouch() on your listener.
When using OpenGL you have the problem that no one handles the touch input inside your OpenGL surface. You have to do it yourself. So there is no one who will call your listener.
Why? What you render inside your GL surface is up to you. Only you know what the actual geometry is, and therefore you are the only one who can decide which object was selected.
You basically have two options to do the selection:
Ray Shooting - Shoot a ray through the eye of the viewer and the touched point into the scene and check which object was hit.
Color Picking - assign IDs to your objects, encode each ID as a color, and render the scene with these colors. Finally, check the color at the touch position and decode it to get the ID of the object.
For most applications I would prefer the second solution.
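To make the second option concrete, here is a rough sketch of a color-picking pass with GL10. It is only a sketch: it assumes the cubes can be drawn in a flat color (without the per-vertex color array) and that the clear color is black so that "no hit" can be told apart from an object.
// Render once with id-encoded flat colors, read the pixel under the touch,
// decode it back to an id. This frame is never shown to the user.
private int pickObject(GL10 gl, int touchX, int touchY, int surfaceHeight, List<Cube> cubes) {
    gl.glClearColor(0f, 0f, 0f, 1f);                 // black background = "no hit"
    gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
    gl.glDisable(GL10.GL_TEXTURE_2D);                // the picking pass must be flat-colored
    for (int i = 0; i < cubes.size(); i++) {
        int id = i + 1;                              // reserve 0 for the background
        gl.glColor4f(((id >> 16) & 0xFF) / 255f,
                     ((id >> 8) & 0xFF) / 255f,
                     (id & 0xFF) / 255f, 1f);
        cubes.get(i).draw(gl);                       // hypothetical flat-color draw, without glColorPointer
    }
    // glReadPixels uses a bottom-left origin, so flip the touch y coordinate.
    ByteBuffer pixel = ByteBuffer.allocateDirect(4).order(ByteOrder.nativeOrder());
    gl.glReadPixels(touchX, surfaceHeight - touchY, 1, 1,
                    GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, pixel);
    int id = ((pixel.get(0) & 0xFF) << 16) | ((pixel.get(1) & 0xFF) << 8) | (pixel.get(2) & 0xFF);
    return id - 1;                                   // -1 means nothing was hit
}
After reading the pixel you render the normal frame as usual, so the user never sees the id-colored pass.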
For performance reasons I moved from canvas.drawBitmap to OpenGL ES for 2D rendering.
This is a 4x1 sprite sheet:
To make it work I have the following class:
public Vulcan(ScreenObjectsView objectsView, int vulkanSpriteID, Context context) {
this.b = BitmapFactory.decodeResource(context.getResources(), vulkanSpriteID);
// 1x4
height = b.getHeight();
width = b.getWidth()/4;
WindowManager wm = (WindowManager) context.getSystemService(Context.WINDOW_SERVICE);
Display display = wm.getDefaultDisplay();
x = display.getWidth()/2-width/2; // deprecated
y = display.getHeight()-height; // deprecated
}
public void update() {
frameFreq++;
if(frameFreq > 0){
currentFrame = ++currentFrame % 4;
frameFreq = 0;
}
}
@Override
public void draw(Canvas canvas) {
update();
int srcX = currentFrame * width;
Rect src = new Rect(srcX, 0, srcX+width, height);
Rect dst = new Rect(x, y, x+width, y+height);
canvas.drawBitmap(b, src, dst, null);
}
Every so often I take the Rect and shift it from left to right (in a loop):
currentFrame = ++currentFrame % 4;
So far so good.
How can I animate the above-mentioned sprite sheet in OpenGL ES?
Today, I know how to draw and move objects in OpenGL ES (thanks to a good demo), but I don't know how to work with sprites.
Any ideas, links, snippets of code?
[Edit]
It doesn't matter whether I use a sprite sheet or 4 separate images like:
, and so on.
It's strange that I still haven't gotten any answer or direction.
Thank you,
[Edit 2]
Following what Aert says, I implemented the code below and it works.
But it seems messy: too much code for OpenGL ES. For each texture frame (I have 4), I need to create a FloatBuffer.
Maybe someone has a shorter way, or I did something wrong.
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import javax.microedition.khronos.opengles.GL10;
public class DevQuestSpriteBase {
private static final String LOG_TAG = "Fess";//DevQuestSpriteBase.class.getSimpleName();
protected int mFrame = 0;
protected int mSwitcher = 0;
private int textureCount = 1; // frame animation
protected int[] textures = new int[textureCount]; // frame animation
// texture and verts
protected FloatBuffer vertexBuffer,
textureBuffer1,
textureBuffer2,
textureBuffer3,
textureBuffer4;
ByteBuffer bb1;
protected float vertices[] = {
0f,0f,0.0f,
1f,0f,0.0f,
0f,1f,0.0f,
1f,1f,0.0f
};
/** 1 frame */
protected float texture1[] = {
0.0f, 1.0f,
0.0f, 0.0f,
0.25f, 1.0f,
0.25f, 0.0f
};
/** 2 frame */
protected float texture2[] = {
0.25f, 1.0f,
0.25f, 0.0f,
0.5f, 1.0f,
0.5f, 0.0f
};
/** 3 frame */
protected float texture3[] = {
0.5f, 1.0f,
0.5f, 0.0f,
0.75f, 1.0f,
0.75f, 0.0f
};
/** 4 frame */
protected float texture4[] = {
0.75f, 1.0f,
0.75f, 0.0f,
1.0f, 1.0f,
1.0f, 0.0f
};
public DevQuestSpriteBase(){
// vertices buffer
bb1 = ByteBuffer.allocateDirect(vertices.length * 4);
bb1.order(ByteOrder.nativeOrder());
vertexBuffer = bb1.asFloatBuffer();
vertexBuffer.put(vertices);
vertexBuffer.position(0);
// texture buffer
bb1 = ByteBuffer.allocateDirect(texture1.length * 4);
bb1.order(ByteOrder.nativeOrder());
textureBuffer1 = bb1.asFloatBuffer();
textureBuffer1.put(texture1);
textureBuffer1.position(0);
//#########################################################
// texture buffer
bb1 = ByteBuffer.allocateDirect(texture2.length * 4);
bb1.order(ByteOrder.nativeOrder());
textureBuffer2 = bb1.asFloatBuffer();
textureBuffer2.put(texture2);
textureBuffer2.position(0);
//#########################################################
// texture buffer
bb1 = ByteBuffer.allocateDirect(texture3.length * 4);
bb1.order(ByteOrder.nativeOrder());
textureBuffer3 = bb1.asFloatBuffer();
textureBuffer3.put(texture3);
textureBuffer3.position(0);
//#########################################################
// texture buffer
bb1 = ByteBuffer.allocateDirect(texture4.length * 4);
bb1.order(ByteOrder.nativeOrder());
textureBuffer4 = bb1.asFloatBuffer();
textureBuffer4.put(texture4);
textureBuffer4.position(0);
}
private void update() {
if(mSwitcher == 5){
mFrame = ++mFrame % 4;
mSwitcher = 0;
// Log.e(LOG_TAG, "DevQuestSpriteBase :: " + mFrame);
}
else{
mSwitcher++;
}
}
public void draw(GL10 gl){
gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]);
if(mFrame == 0){
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer1);
}
else if(mFrame == 1){
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer2);
}
else if(mFrame == 2){
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer3);
}
else if(mFrame == 3){
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer4);
}
gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, 4);
//Log.e(LOG_TAG, "DevQuestSpriteBase :: draw");
update();
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer);
//gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer1);
gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, vertices.length / 3);
gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
}
public int[] getTextures() {
return textures;
}
}
Without going into a lot of detail, you need to do the following (assuming you are already drawing a sprite using 4 vertices):
Define the texture coordinates corresponding to the vertices of the sprite for each animation frame, e.g.
texCoordsFrame1 = [0.0f, 0.0f, 0.25f, 0.0f, 0.0f, 1.0f, 0.25f, 1.0f];
Upload the spritesheet texture, e.g.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData);
Draw using the texture coordinates corresponding to the frame you want to show when required, e.g.
...
glBindTexture(GL_TEXTURE_2D, texture[0]);
glTexCoordPointer(2, GL_FLOAT, 0, texCoordsFrame1);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
Alternatively, you can upload the separate frames as individual textures, but that is undesirable from a performance point of view.
There are a few gotchas:
When using GLES1, you can only use power-of-two textures. In that case you'll have to scale the texture or increase its size to be power-of-two and adjust the texture coordinates.
The bitmap vs GL y-coordinate direction difference is a bit confusing, and you might end up with a vertically flipped sprite.
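Regarding the "too much code" concern in the question: the four hard-coded coordinate arrays and buffers can be collapsed into a single buffer whose U range is recomputed per frame. A minimal sketch (FRAME_COUNT and setFrame are my names; the vertex order matches the texture1..texture4 arrays in the question's class):
// One texture-coordinate buffer shared by all frames of the 4x1 sheet.
private static final int FRAME_COUNT = 4;
private final FloatBuffer textureBuffer = ByteBuffer
        .allocateDirect(4 * 2 * 4)                 // 4 vertices * 2 floats * 4 bytes
        .order(ByteOrder.nativeOrder())
        .asFloatBuffer();

private void setFrame(int frame) {
    float u0 = frame / (float) FRAME_COUNT;        // left edge of the current frame
    float u1 = (frame + 1) / (float) FRAME_COUNT;  // right edge of the current frame
    float[] coords = {                             // same vertex order as texture1..texture4
        u0, 1.0f,
        u0, 0.0f,
        u1, 1.0f,
        u1, 0.0f
    };
    textureBuffer.clear();
    textureBuffer.put(coords).position(0);
}

// In draw(): setFrame(mFrame); then gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer);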
I'm working through some OpenGL ES tutorials, using the Android emulator. I've gotten up to texture mapping and am having some trouble mapping a texture to a cube. Is it possible to map a texture to all faces of a cube that has 8 vertices and 12 triangles for the 6 faces, as described below?
// Use half as we are going for a 0,0,0 centre.
width /= 2;
height /= 2;
depth /= 2;
float vertices[] = { -width, -height, depth, // 0
width, -height, depth, // 1
width, height, depth, // 2
-width, height, depth, // 3
-width, -height, -depth, // 4
width, -height, -depth, // 5
width, height, -depth, // 6
-width, height, -depth, // 7
};
short indices[] = {
// Front
0,1,2,
0,2,3,
// Back
5,4,7,
5,7,6,
// Left
4,0,3,
4,3,7,
// Right
1,5,6,
1,6,2,
// Top
3,2,6,
3,6,7,
// Bottom
4,5,1,
4,1,0,
};
float texCoords[] = {
1.0f, 1.0f,
0.0f, 1.0f,
0.0f, 0.0f,
1.0f, 0.0f,
1.0f, 1.0f,
0.0f, 1.0f,
0.0f, 0.0f,
1.0f, 0.0f,
1.0f, 1.0f,
0.0f, 1.0f,
0.0f, 0.0f,
1.0f, 0.0f,
1.0f, 1.0f,
0.0f, 1.0f,
0.0f, 0.0f,
1.0f, 0.0f,
1.0f, 1.0f,
0.0f, 1.0f,
0.0f, 0.0f,
1.0f, 0.0f,
1.0f, 1.0f,
0.0f, 1.0f,
0.0f, 0.0f,
1.0f, 0.0f,
};
I have gotten the front and back faces working correctly; however, none of the other faces show the texture.
Drawing code:
public void draw(GL10 gl) {
// Counter-clockwise winding.
gl.glFrontFace(GL10.GL_CCW);
// Enable face culling.
gl.glEnable(GL10.GL_CULL_FACE);
// What faces to remove with the face culling.
gl.glCullFace(GL10.GL_BACK);
// Enabled the vertices buffer for writing and to be used during
// rendering.
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
// Specifies the location and data format of an array of vertex
// coordinates to use when rendering.
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, verticesBuffer);
if (normalsBuffer != null) {
// Enabled the normal buffer for writing and to be used during rendering.
gl.glEnableClientState(GL10.GL_NORMAL_ARRAY);
// Specifies the location and data format of an array of normals to use when rendering.
gl.glNormalPointer(GL10.GL_FLOAT, 0, normalsBuffer);
}
// Set flat color
gl.glColor4f(rgba[0], rgba[1], rgba[2], rgba[3]);
// Smooth color
if ( colorBuffer != null ) {
// Enable the color array buffer to be used during rendering.
gl.glEnableClientState(GL10.GL_COLOR_ARRAY);
// Point out the where the color buffer is.
gl.glColorPointer(4, GL10.GL_FLOAT, 0, colorBuffer);
}
// Use textures?
if ( textureBuffer != null) {
gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
gl.glEnable(GL10.GL_TEXTURE_2D);
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer);
}
// Translation and rotation before drawing
gl.glTranslatef(x, y, z);
gl.glRotatef(rx, 1, 0, 0);
gl.glRotatef(ry, 0, 1, 0);
gl.glRotatef(rz, 0, 0, 1);
gl.glDrawElements(GL10.GL_TRIANGLES, numOfIndices, GL10.GL_UNSIGNED_SHORT, indicesBuffer);
// Disable the vertices buffer.
gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
gl.glDisableClientState(GL10.GL_COLOR_ARRAY);
gl.glDisableClientState(GL10.GL_NORMAL_ARRAY);
gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
// Disable face culling.
gl.glDisable(GL10.GL_CULL_FACE);
}
You have to use 24 vertices. In OpenGL, a vertex is more than just a position; it is the collection of all vertex attributes, and every vertex array is accessed with the same index. Since a cube corner needs different texture coordinates on each of the three faces it belongs to, it has to be duplicated once per face.
The infamous cube example is something almost everyone feels is inefficient when starting to use OpenGL, but in real-world, more complex models, the degree of duplication is quite low.
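As a sketch of what that means for the cube in the question (it reuses the question's 8-entry vertices array; the per-face corner order below is read off the question's index list, and the UV mapping is just one plausible choice):
// Expand the 8 shared positions into 24 vertices (4 per face) so each face
// can carry its own texture coordinates.
int[][] faceCorners = {
    {0, 1, 2, 3}, // front
    {5, 4, 7, 6}, // back
    {4, 0, 3, 7}, // left
    {1, 5, 6, 2}, // right
    {3, 2, 6, 7}, // top
    {4, 5, 1, 0}, // bottom
};
float[] expandedVertices = new float[24 * 3];
float[] expandedTexCoords = new float[24 * 2];
short[] expandedIndices = new short[36];
int v = 0;
for (int face = 0; face < 6; face++) {
    for (int corner = 0; corner < 4; corner++) {
        int src = faceCorners[face][corner] * 3;
        System.arraycopy(vertices, src, expandedVertices, v * 3, 3);
        // Same mapping on every face: (0,1) (1,1) (1,0) (0,0).
        // Flip V if the image appears upside down after upload.
        expandedTexCoords[v * 2]     = (corner == 1 || corner == 2) ? 1f : 0f;
        expandedTexCoords[v * 2 + 1] = (corner == 0 || corner == 1) ? 1f : 0f;
        v++;
    }
    int base = face * 4;
    int[] tri = {0, 1, 2, 0, 2, 3};                  // two triangles per face
    for (int t = 0; t < 6; t++) {
        expandedIndices[face * 6 + t] = (short) (base + tri[t]);
    }
}
The 24-entry vertex and texture-coordinate arrays then go into the buffers exactly as before, and glDrawElements is called with the 36 expanded indices.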
I have "only" modified the GLES20TriangleRenderer.java file into the SDK example BasicGLSurfaceView, compile it and test it on two Android devices, an Android Phone and a Nexus 7 and this work good on the two devices :)
/*
* Copyright (C) 2011 The Android Open Source Project
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
Modified by YLP (06 January 2014) to handle a rotated, texture-mapped cube
*/
package com.example.android.basicglsurfaceview;
import java.io.IOException;
import java.io.InputStream;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;
import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.opengl.GLES20;
import android.opengl.GLSurfaceView;
import android.opengl.GLUtils;
import android.opengl.Matrix;
import android.os.SystemClock;
import android.util.Log;
class GLES20TriangleRenderer implements GLSurfaceView.Renderer {
public GLES20TriangleRenderer(Context context) {
mContext = context;
// mTriangleVertices = ByteBuffer.allocateDirect(mTriangleVerticesData.length * FLOAT_SIZE_BYTES).order(ByteOrder.nativeOrder()).asFloatBuffer();
// mTriangleVertices.put(mTriangleVerticesData).position(0);
mTriangleVertices = ByteBuffer.allocateDirect(cubeVerticesStrip.length * FLOAT_SIZE_BYTES).order(ByteOrder.nativeOrder()).asFloatBuffer();
mTriangleVertices.put(cubeVerticesStrip).position(0);
mTriangleTexcoords = ByteBuffer.allocateDirect(cubeTexCoordsStrip.length * FLOAT_SIZE_BYTES).order(ByteOrder.nativeOrder()).asFloatBuffer();
mTriangleTexcoords.put(cubeTexCoordsStrip).position(0);
}
public void onDrawFrame(GL10 glUnused) {
// Ignore the passed-in GL10 interface, and use the GLES20
// class's static methods instead.
GLES20.glClearColor(0.0f, 0.0f, 1.0f, 1.0f);
GLES20.glClear( GLES20.GL_DEPTH_BUFFER_BIT | GLES20.GL_COLOR_BUFFER_BIT);
GLES20.glEnable( GLES20.GL_DEPTH_TEST );
GLES20.glDepthFunc( GLES20.GL_LEQUAL );
GLES20.glDepthMask( true );
GLES20.glUseProgram(mProgram);
checkGlError("glUseProgram");
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mTextureID);
// mTriangleVertices.position(TRIANGLE_VERTICES_DATA_POS_OFFSET);
// GLES20.glVertexAttribPointer(maPositionHandle, 3, GLES20.GL_FLOAT, false,
// TRIANGLE_VERTICES_DATA_STRIDE_BYTES, mTriangleVertices);
// checkGlError("glVertexAttribPointer maPosition");
// mTriangleVertices.position(TRIANGLE_VERTICES_DATA_UV_OFFSET);
// GLES20.glEnableVertexAttribArray(maPositionHandle);
// checkGlError("glEnableVertexAttribArray maPositionHandle");
// GLES20.glVertexAttribPointer(maTextureHandle, 2, GLES20.GL_FLOAT, false,
// TRIANGLE_VERTICES_DATA_STRIDE_BYTES, mTriangleVertices);
// checkGlError("glVertexAttribPointer maTextureHandle");
// GLES20.glEnableVertexAttribArray(maTextureHandle);
// checkGlError("glEnableVertexAttribArray maTextureHandle");
// From http://www.endodigital.com/opengl-es-2-0-on-the-iphone/part-fourteen-creating-the-cube
// (but slightly modified)
mTriangleVertices.position(0);
// GLES20.glVertexAttribPointer(maPositionHandle, 3, GLES20.GL_FLOAT, false, TRIANGLE_VERTICES_DATA_STRIDE_BYTES, mTriangleVertices);
GLES20.glVertexAttribPointer(maPositionHandle, 3, GLES20.GL_FLOAT, false, 0, mTriangleVertices);
GLES20.glEnableVertexAttribArray(maPositionHandle);
mTriangleTexcoords.position(0);
// GLES20.glVertexAttribPointer(maTextureHandle, 2, GLES20.GL_FLOAT, false,TRIANGLE_VERTICES_DATA_STRIDE_BYTES, mTriangleVertices);
GLES20.glVertexAttribPointer(maTextureHandle, 2, GLES20.GL_FLOAT, false, 0, mTriangleTexcoords);
GLES20.glEnableVertexAttribArray(maTextureHandle);
long time = SystemClock.uptimeMillis() % 4000L;
float angle = 0.090f * ((int) time);
float scale = 0.7f;
Matrix.setRotateM(mMMatrix, 0, angle, 0, 0, 1.0f);
// YLP: add other movement cycles
Matrix.rotateM(mMMatrix, 0, angle, 1.0f, 0.0f, 0.0f );
// Matrix.rotateM(mMMatrix, 0, angle, 0.0f, 1.0f, 0.0f );
// float scale = (float)( Math.abs( Math.sin( ((float)time) * (6.28f/4000.0f) ) ));
Matrix.scaleM(mMMatrix, 0, scale, scale, scale);
Matrix.multiplyMM(mMVPMatrix, 0, mVMatrix, 0, mMMatrix, 0);
Matrix.multiplyMM(mMVPMatrix, 0, mProjMatrix, 0, mMVPMatrix, 0);
GLES20.glUniformMatrix4fv(muMVPMatrixHandle, 1, false, mMVPMatrix, 0);
// Some tests with only some triangles
// GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, 3); // worked initialy but only one triangle
// GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, 6); // worked initialy but only two triangles
// GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4); // GL_QUADS does not exist in GL 2.0 :(
// GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 8); // GL_QUADS does not exist in GL 2.0 :(
// Draw the cube
// TODO: make only one glDrawArrays() call instead of one per face
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 4, 4);
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 8, 4);
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 12, 4);
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 16, 4);
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 20, 4);
checkGlError("glDrawArrays");
}
public void onSurfaceChanged(GL10 glUnused, int width, int height) {
// Ignore the passed-in GL10 interface, and use the GLES20
// class's static methods instead.
GLES20.glViewport(0, 0, width, height);
float ratio = (float) width / height;
Matrix.frustumM(mProjMatrix, 0, -ratio, ratio, -1, 1, 3, 7);
}
public void onSurfaceCreated(GL10 glUnused, EGLConfig config) {
// Ignore the passed-in GL10 interface, and use the GLES20
// class's static methods instead.
mProgram = createProgram(mVertexShader, mFragmentShader);
if (mProgram == 0) {
return;
}
maPositionHandle = GLES20.glGetAttribLocation(mProgram, "aPosition");
checkGlError("glGetAttribLocation aPosition");
if (maPositionHandle == -1) {
throw new RuntimeException("Could not get attrib location for aPosition");
}
maTextureHandle = GLES20.glGetAttribLocation(mProgram, "aTextureCoord");
checkGlError("glGetAttribLocation aTextureCoord");
if (maTextureHandle == -1) {
throw new RuntimeException("Could not get attrib location for aTextureCoord");
}
muMVPMatrixHandle = GLES20.glGetUniformLocation(mProgram, "uMVPMatrix");
checkGlError("glGetUniformLocation uMVPMatrix");
if (muMVPMatrixHandle == -1) {
throw new RuntimeException("Could not get attrib location for uMVPMatrix");
}
/*
* Create our texture. This has to be done each time the
* surface is created.
*/
int[] textures = new int[1];
GLES20.glGenTextures(1, textures, 0);
mTextureID = textures[0];
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mTextureID);
GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER,
GLES20.GL_NEAREST);
GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D,
GLES20.GL_TEXTURE_MAG_FILTER,
GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S,
GLES20.GL_REPEAT);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T,
GLES20.GL_REPEAT);
InputStream is = mContext.getResources()
.openRawResource(R.raw.robot);
Bitmap bitmap;
try {
bitmap = BitmapFactory.decodeStream(is);
} finally {
try {
is.close();
} catch(IOException e) {
// Ignore.
}
}
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
bitmap.recycle();
Matrix.setLookAtM(mVMatrix, 0, 0, 0, -5, 0f, 0f, 0f, 0f, 1.0f, 0.0f);
}
private int loadShader(int shaderType, String source) {
int shader = GLES20.glCreateShader(shaderType);
if (shader != 0) {
GLES20.glShaderSource(shader, source);
GLES20.glCompileShader(shader);
int[] compiled = new int[1];
GLES20.glGetShaderiv(shader, GLES20.GL_COMPILE_STATUS, compiled, 0);
if (compiled[0] == 0) {
Log.e(TAG, "Could not compile shader " + shaderType + ":");
Log.e(TAG, GLES20.glGetShaderInfoLog(shader));
GLES20.glDeleteShader(shader);
shader = 0;
}
}
return shader;
}
private int createProgram(String vertexSource, String fragmentSource) {
int vertexShader = loadShader(GLES20.GL_VERTEX_SHADER, vertexSource);
if (vertexShader == 0) {
return 0;
}
int pixelShader = loadShader(GLES20.GL_FRAGMENT_SHADER, fragmentSource);
if (pixelShader == 0) {
return 0;
}
int program = GLES20.glCreateProgram();
if (program != 0) {
GLES20.glAttachShader(program, vertexShader);
checkGlError("glAttachShader");
GLES20.glAttachShader(program, pixelShader);
checkGlError("glAttachShader");
GLES20.glLinkProgram(program);
int[] linkStatus = new int[1];
GLES20.glGetProgramiv(program, GLES20.GL_LINK_STATUS, linkStatus, 0);
if (linkStatus[0] != GLES20.GL_TRUE) {
Log.e(TAG, "Could not link program: ");
Log.e(TAG, GLES20.glGetProgramInfoLog(program));
GLES20.glDeleteProgram(program);
program = 0;
}
}
return program;
}
private void checkGlError(String op) {
int error;
while ((error = GLES20.glGetError()) != GLES20.GL_NO_ERROR) {
Log.e(TAG, op + ": glError " + error);
throw new RuntimeException(op + ": glError " + error);
}
}
private static final int FLOAT_SIZE_BYTES = 4;
private static final int TRIANGLE_VERTICES_DATA_STRIDE_BYTES = 5 * FLOAT_SIZE_BYTES;
private static final int TRIANGLE_VERTICES_DATA_POS_OFFSET = 0;
private static final int TRIANGLE_VERTICES_DATA_UV_OFFSET = 3;
private static final int TRIANGLE_TEXCOORDS_DATA_STRIDE_BYTES = 2 * FLOAT_SIZE_BYTES;
private final float[] mTriangleVerticesData =
{
// X, Y, Z, U, V
// initial triangle from the source
// -1.0f, -0.5f, 0, -0.5f, 0.0f,
// 1.0f, -0.5f, 0, 1.5f, -0.0f,
// 0.0f, 1.11803399f, 0, 0.5f, 1.61803399f,
// YLP : transform this to two triangles for to have a quad
// -1, -1, 0, 0, 0,
// 1, -1, 0, 1, 0,
// -1, 1, 0, 0, 1,
// 1, 1, 0, 1, 1,
// -1, 1, 0, 0, 1,
// 1, -1, 0, 1, 0
// YLP : use two quads with GL_TRIANGLE_STRIP
// Doesn't work because this makes an accordion effect :(
-1, -1, -1, 0, 0,
1, -1, -1, 1, 0,
-1, 1, -1, 0, 1,
1, 1, -1, 1, 1,
-1, -1, 1, 0, 0,
1, -1, 1, 1, 0,
-1, 1, 1, 0, 1,
1, 1, 1, 1, 1,
};
// From http://www.endodigital.com/opengl-es-2-0-on-the-iphone/part-fourteen-creating-the-cube/
// (only modified "static const GLfloat" to "private final float")
private final float cubeVerticesStrip[] = {
// Front face
-1,-1,1, 1,-1,1, -1,1,1, 1,1,1,
// Right face
1,-1,1, 1,-1,-1, 1,1,1, 1,1,-1,
// Back face
1,-1,-1, -1,-1,-1, 1,1,-1, -1,1,-1,
// Left face
-1,-1,-1, -1,-1,1, -1,1,-1, -1,1,1,
// Bottom face
-1,-1,-1, 1,-1,-1, -1,-1,1, 1,-1,1,
// Top face
-1,1,1, 1,1,1, -1,1,-1, 1,1,-1
};
private final float cubeTexCoordsStrip[] = {
// Front face
0,0, 1,0, 0,1, 1,1,
// Right face
0,0, 1,0, 0,1, 1,1,
// Back face
0,0, 1,0, 0,1, 1,1,
// Left face
0,0, 1,0, 0,1, 1,1,
// Bottom face
0,0, 1,0, 0,1, 1,1,
// Top face
0,0, 1,0, 0,1, 1,1
};
private FloatBuffer mTriangleVertices;
private FloatBuffer mTriangleTexcoords;
private final String mVertexShader =
"uniform mat4 uMVPMatrix;\n" +
"attribute vec4 aPosition;\n" +
"attribute vec2 aTextureCoord;\n" +
"varying vec2 vTextureCoord;\n" +
"void main() {\n" +
" gl_Position = uMVPMatrix * aPosition;\n" +
" vTextureCoord = aTextureCoord;\n" +
"}\n";
private final String mFragmentShader =
"precision mediump float;\n" +
"varying vec2 vTextureCoord;\n" +
"uniform sampler2D sTexture;\n" +
"void main() {\n" +
" gl_FragColor = texture2D(sTexture, vTextureCoord);\n" +
"}\n";
private float[] mMVPMatrix = new float[16];
private float[] mProjMatrix = new float[16];
private float[] mMMatrix = new float[16];
private float[] mVMatrix = new float[16];
private float[] mMMatrix2 = new float[16];
private int mProgram;
private int mTextureID;
private int muMVPMatrixHandle;
private int maPositionHandle;
private int maTextureHandle;
private Context mContext;
private static String TAG = "GLES20TriangleRenderer";
}
=> I managed to implement this in just a few hours and it works :)
==> So even if Android is not the best platform, it still seems a really good and viable platform, and I'm going to play a little more with multimedia development on Android devices :)